Autodesk University 2016 – A Window into the Future of Design, Construction, and Movie-Making


Generative design, machine learning, virtual reality, robotic systems, 3D printing, Internet-of-Things, and integrated project coordination and collaboration were key themes at Autodesk’s annual conference.


Jeff Kowalski, Autodesk’s CTO, opened the Autodesk University conference (AU 2016) by discussing four major technology developments: generative design, machine learning, virtual reality, and robotic systems.

Generative Design

Autodesk has been talking about its generative design solution, called Project Dreamcatcher, for over a year. At last year’s AU (Autodesk University) conference, Airbus engineers showed their ‘bionic partition’ created using generative design tools.1 Instead of precisely specifying the shape of a part, the engineer feeds in a set of design goals and constraints (such as the physical connection points for the part, expected loads, the type of manufacturing technology/machines to be used, and so forth) and the software generates thousands of possible designs meeting those criteria. Using a set of filters, the designer can narrow the choices to optimize the parameters they care most about, such as weight, cost, and manufacturing time. In the case of Airbus’s bionic partition, they achieved a 45% weight reduction compared with the existing partition, which had already been designed for minimum weight by Airbus’s world-class aeronautics design engineers.
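The generate-filter-select loop described above can be sketched in a few lines. This is a toy illustration, not Dreamcatcher’s actual algorithm: the random candidate ‘designs’, the load constraint, and the weight/cost objectives are all invented for the example.

```python
import random

def generate_candidates(n, seed=0):
    """Randomly sample hypothetical part designs; a real tool would
    derive geometry from connection points, loads, and the chosen
    manufacturing process."""
    rng = random.Random(seed)
    return [{"weight": rng.uniform(1.0, 10.0),
             "cost": rng.uniform(50.0, 500.0),
             "max_load": rng.uniform(100.0, 1000.0)} for _ in range(n)]

def meets_constraints(design, required_load=400.0):
    # Hard constraint: the part must carry the expected load (units arbitrary).
    return design["max_load"] >= required_load

def pareto_front(designs):
    """Keep only designs not dominated on the weight/cost objectives,
    i.e. no other design is at least as good on both and better on one."""
    return [d for d in designs
            if not any(o["weight"] <= d["weight"] and o["cost"] <= d["cost"]
                       and (o["weight"] < d["weight"] or o["cost"] < d["cost"])
                       for o in designs)]

candidates = [d for d in generate_candidates(1000) if meets_constraints(d)]
best = pareto_front(candidates)
```

Real generative tools derive their candidates from physics simulation, but the essence is the same: generate broadly, filter by hard constraints, then keep the non-dominated trade-offs for the designer to choose among.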

Source: Image by Autodesk

Autodesk’s team in Toronto is moving into a new building and has decided to use generative design to reimagine the office workspace. They surveyed all employees about their work habits and preferences, and fed that data, along with a set of constraints (such as the size of the building interior and the location of fixed supports and fixtures), into the system. They then allowed the system to generate thousands of different options, optimizing for different objectives, such as maximizing outside views, minimizing distractions, and maximizing human interaction. The ability to trade off competing objectives is one of the big strengths of this approach vs. human-generated designs.

A commercialized version of Dreamcatcher will be rolled out in the first half of 2017. Before that, generative design capabilities will show up in other products. Autodesk is testing these capabilities with ‘a dozen+’ customers to figure out things like how to make it easier to learn and use, how to get the right inputs, what kinds of controls are most useful, and so forth. Some designers are using it to broaden their perspective, even if they go back and use traditional design tools to create the final version.

Changing Role for Designers

Generative design changes the role of the designer from specifying the design to specifying the objectives, requirements, and constraints. Engineers become a bit more like data scientists: gathering all the right inputs, specifying the constraints and objectives, and then analyzing the resulting solution sets to narrow down and pick the winning design. Often aesthetics are important, so artistic skills and aesthetic sensibilities become more important attributes for designers. Some amount of performance may be traded off for a more beautiful design, when appearances are critical. Aesthetics and making tradeoffs are areas where the designer’s judgment is still crucial.

Dreamcatcher is one of the first multi-objective design optimization tools. This means Autodesk is in new territory figuring out how to get the right UI, the right human-system interactions to make the process iterative and flexible. In the process, they have discovered that some designers want visibility inside the ‘black box’ to understand what is happening and how it works. They hear something like “I give the system goals and constraints, then something magically generates all these possibilities. How are my goals and constraints being linked to the outcomes? Can you give me the ability to nudge the outcomes in a slightly different direction?” It has been said that the smartest entity is not the human or the computer — but rather the human and computer working together as one. It appears that is the instinctive impulse of many designers, to want to remain a full and intimate participant throughout the design process.

Machine Learning

Cloud computing provides nearly unlimited scalability, enabling powerful machine learning intelligence. Autodesk talked about how their design platform will be able to pay attention to how and what the user is working on, and then give recommendations. For example, it may see designs that other users have already created that may fit your needs, and recommend you start with one of those. They said the software will evolve to fit the user’s needs as they are using it. They are also working on using machine learning to teach 3D printers and industrial robots to do smarter things more autonomously, so that smaller shops and manufacturers can buy and use them. As well, Autodesk expects to use machine learning in the context of IoT.

Virtual Reality

Virtual reality was a big theme at the conference. It is a natural fit for architects designing buildings, a major part of Autodesk’s business. It allows them to try out different designs and walk through them at human scale, while they are designing them, and then take the customer on a virtual tour. We got to experience this firsthand at AU, donning a VR headset to ‘walk’ through the virtual lobby of a building, into its bar/restaurant, and conference room. We experienced the same VR model of a building that a customer had reviewed this way before it was approved and built. This capability lets the architect’s clients make suggestions at the early concept/design phase, when changes are still relatively easy and feasible. Once construction starts, it is much harder (sometimes impossible) and more expensive to modify the design. Autodesk LIVE2 is a cloud service that takes a Revit model and, with two clicks, gives users a VR experience with HTC Vive or Oculus Rift headsets. That is a big step in simplifying and speeding up the process of creating VR models for architectural walk-throughs.

Source: Image by ChainLink Research

Autodesk is also working on tools to do product design in virtual reality. This lets engineers in different geographies collaborate and adjust full-scale models appearing in front of them in virtual reality. We got to play with an early and super-simplified version of this that let us stand in front of a full-scale virtual car and grab and move around a few different design features, to change the shape of the body of the car. As well, we could change the color, open the engine compartment, and look inside.3 I think it will take a while to understand how engineers will best use these new approaches; refining the user experience, use cases, and sets of controls will require working closely with them. This will be an exciting area to watch as it develops.

Robotic Systems

Autodesk has a deep history with 3D printing. They are working with MX3D (a robotic 3D printing company), which is using software from Autodesk to 3D print an actual bridge across a canal in Amsterdam. The robots look like they are doing arc welding, continually building out the metal structure of the bridge. They are supported by the bridge itself as they progress in building it. The first bridge is modest in scale, going across a small canal. Eventually, they plan to build much larger structures.

In their lab, Autodesk is taking standard industrial robots and attempting to use machine learning to get the robots to learn tasks, without having to do traditional programming. Initially they are working on ‘simple’ tasks (i.e. simple for a person, but not for a robot), such as picking up and connecting objects (e.g. bolting them together) to ultimately assemble and build things. This is an example of combining machine learning with robotics.

Autodesk Fusion Connect (Formerly SeeControl)

As of May this year, SeeControl, the IoT platform that Autodesk acquired last year, was renamed Fusion Connect. Their tagline is “100% no-coding IoT cloud platform.” I got a demo, and from what I saw, it appears to live up to that promise. Cloud applications can be created and deployed straightforwardly using their visual flow-chart programming paradigm, with the ability to drag and drop data sources, formulas, and data sinks from pick lists on the side. The administrative/security structure allows an OEM to create white-label experiences, so each of their customers has their own environment and is not able to see any of the other customers’ data. The OEM can create roles for those customers, such as administrator or site manager, enabling their customers to do a fair amount of self-service for things like adding users and tweaking the UI (such as color scheme, font sizes, gauge styles, placement of tabs, resizing dashboard elements, etc.).

The demo was for a hypothetical IoT cleaning robot, and we saw how easy it was to add alerts: a low-battery alert and a hopper-full alert. They also created a form for adding a new customer, using a visual design approach: dragging and dropping fields and data, including pulling in customer data from an existing CRM system via an API. They showed how to add provisioning functionality to the application, so it could handle the initial connection to a new unit and allow information to be associated with that unit, such as who owns it, who to alert for each event (this could be the customer’s maintenance person, the dealer providing service, etc.), and other provisioning information and steps.
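The two alerts from the demo could be modeled as simple threshold rules over incoming telemetry. This is a hypothetical sketch of what a visual rule builder might produce behind the scenes; the field names and thresholds are invented, and this is not Fusion Connect’s actual API.

```python
# Hypothetical threshold rules mirroring the demo's two alerts; in Fusion
# Connect these would be configured visually rather than written as code.
RULES = [
    {"name": "low_battery", "field": "battery_pct", "op": "lt", "threshold": 20},
    {"name": "hopper_full", "field": "hopper_pct", "op": "gte", "threshold": 95},
]

# Comparison operators the rules may reference.
OPS = {"lt": lambda v, t: v < t, "gte": lambda v, t: v >= t}

def evaluate(reading, rules=RULES):
    """Return the names of all rules this telemetry reading trips."""
    return [r["name"] for r in rules
            if OPS[r["op"]](reading[r["field"]], r["threshold"])]

# A reading that trips both alerts at once.
alerts = evaluate({"battery_pct": 12, "hopper_pct": 97})
```

The same rule table could then drive the routing described above (notify the customer’s maintenance person, the servicing dealer, etc.) without changing the evaluation logic.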

Fusion Connect focuses on giving manufacturers/OEMs the tools to create services that enhance the IoT-enabled, smart connected products they build, or potentially even to create Product-as-a-Service4 business models. These tools give the OEM a much better handle on service and operating costs, and they can add usage-based billing capabilities. They may also gain better visibility into when the machine is being used in unintended ways.

Services that could be added to a smart connected machine include things like predictive maintenance, self-diagnosis of needed repairs, usage-based maintenance or replacement of components, and so forth. The ability to quickly and easily add these kinds of software capabilities and services to a machine drives an evolution in the design of the machine itself: the ‘light bulb’ goes on for the designer at the OEM, who then considers building in more sensors and capabilities for the machine to be self-aware, enabling more preventative and predictive maintenance. It also provides vital feedback into design, such as the actual rate at which parts are wearing out, the different usage patterns in different environments, and so forth.

Fusion Connect has a WYSIWYG5 report builder that can do things like group alerts by type, show average service times for each type of alert, and search and drill down to look at individual incidents. The report/dashboard view is highly graphical, and can include maps, gauges, charts, and many other graphic objects, which are all highly editable (labels, layout, format, colors, etc.). Fusion Connect provides a strong set of tools for rapidly creating new capabilities and services for an OEM who makes IoT-enabled smart connected products.

Integrated Project Coordination and Collaboration Platforms — For Buildings, Products, Movies

The other big theme from the show was the integration of large, complex project teams. All three of Autodesk’s main markets or domains — architecture/construction, design/manufacturing, and movie production — involve large project teams, often with members from many different companies and freelance contractors. These require a lot of coordination, like an orchestra, but with the challenge of members potentially using many different tools, with data spread out across different locations, such as proprietary data on individuals’ hard drives. All that is supposed to come together into one project, but too often is a big mess. Autodesk is aiming to solve that mess by building a ‘Common Data Environment’ for each of its three main domains, to try and put an end to having to import, export, and email files around and just hoping all the changes are captured. The common data environments are the cloud-based, integrated platforms of Fusion 360 (product design), BIM 360 (building design), and Shotgun (movie production).

Coordinating Manufacturing Projects

In dramatic style, a Briggs Automotive Company Mono (a $220K supercar) was driven onto the main stage during the keynote address at AU 2016. It was used to illustrate the complexity of product design and manufacturing throughout the different phases and the various teams that have to work together, from the first conceptual sketch right through to mass-scale manufacturing. Autodesk’s vision is a platform that connects the various teams, so anyone anywhere (if they have been granted the proper access rights, of course) can create, edit, simulate, test, view, mark up, and ultimately physically produce (both prototype and mass production) with integrated CAM6 capability to drive machine tools and 3D printers.

Integrated Electronic Design will be part of Fusion. Fusion also provides support for production process characteristics, enabling more informed decisions early in the process. They announced, for example, the introduction of Sheet Metal Fabrication7 within Fusion and showed how the design tools automatically enforced the setbacks and clearances needed to ensure that the part could actually be manufactured on the types of production machines that had been specified. It automatically produces the laser cutting and bending instructions. The model is associative, so whenever a change is made to the source 3D model, the manufacturing components are also updated.
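The kind of manufacturing-aware arithmetic such a sheet-metal module performs can be illustrated with the standard bend-allowance formula, which computes the extra material a bend consumes so the flat pattern is cut to the right size. The dimensions and K-factor below are illustrative, not values from the Fusion demo.

```python
import math

def bend_allowance(angle_deg, inner_radius, thickness, k_factor=0.44):
    """Arc length of the neutral axis through a bend (standard formula:
    BA = angle_rad * (R + K*T)); the K-factor locates the neutral axis
    and depends on material and tooling."""
    return math.radians(angle_deg) * (inner_radius + k_factor * thickness)

def flat_length(leg_a, leg_b, angle_deg, inner_radius, thickness):
    """Developed (flat-pattern) length of a single-bend part, with legs
    measured to the bend tangent points."""
    return leg_a + leg_b + bend_allowance(angle_deg, inner_radius, thickness)

# Illustrative 90-degree bend in 2 mm steel with a 3 mm inside radius.
ba = bend_allowance(90, 3.0, 2.0)  # roughly 6.1 mm of material in the bend
```

Multiply this by every bend in a part and it becomes clear why the flat pattern, and therefore the cutting and nesting instructions, must be regenerated whenever the 3D model changes.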

They put simulation tools into the hands of designers to optimize at point of design, rather than validating after the design is done. Their Nastran simulator provides general purpose finite element analysis (FEA), including the ability to apply loading conditions over time, simulate buckling loads and see how failures occur. The built-in shape optimization can be used, for example, to get to a minimum weight that still meets the specified safety margin.
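The “minimum weight at a specified safety margin” idea can be shown with a deliberately tiny sizing example: a steel bar in pure axial load, where the smallest acceptable section follows directly from the allowable stress. Real shape optimization works on full FEA models; the numbers and section shape here are invented for illustration.

```python
def min_thickness(load_n, width_mm, yield_mpa, safety_factor=2.0):
    """Smallest rectangular-section thickness (mm) that keeps axial
    stress within yield/safety_factor. Stress in MPa = N / mm^2."""
    allowable = yield_mpa / safety_factor
    return load_n / (width_mm * allowable)

def weight_per_mm(width_mm, thickness_mm, density_g_mm3=7.85e-3):
    """Mass per unit length (g/mm) of a steel bar at the chosen section."""
    return width_mm * thickness_mm * density_g_mm3

# Illustrative case: 10 kN load, 40 mm wide steel bar, 250 MPa yield, SF = 2.
t = min_thickness(10_000, 40, 250)  # thinnest section meeting the margin
```

A shape optimizer does the same thing in reverse at scale: it removes material everywhere the simulated stress stays below the allowable value, converging on the lightest shape that still meets the margin.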

Autodesk’s ultimate goal is to go from a linear timeline and waterfall project model to supporting a much more rapid, agile, iterative approach, and the ability to look at various alternatives in parallel. They created branching and merging logic for their Fusion 360 software, to support these kinds of parallel explorations, while maintaining the integrity of the project without requiring convoluted workflows.


Movie production is perhaps the ultimate collaborative project. Hollywood studios will bring 10 or more companies to work together on the same shots, at the same time. Shotgun (acquired by Autodesk) has been used to make some of the most visually stunning movies and TV shows (e.g. Game of Thrones), connecting the production team together. It includes production management tools to track artists (who are by nature allergic to using any administrative tools — they revolt if required to do so). Shotgun is able to automatically track the artist’s work by integrating with the artist’s creative applications (like Autodesk Maya and Autodesk 3ds Max, and external tools like Photoshop). The creative director (and others, such as the producer) can review, annotate, and give feedback in an iterative process. During this review process, they need to see the context of the shots, knowing what comes before and after the one being reviewed. Editors may be making changes to the shot sequence at the same time the creative director is reviewing, so those changes need to be kept in synch in real time. That used to require getting everyone together in the same room once a day, with hours of prep needed, to get everyone on the same page. Now the system makes it happen automatically in the background, enabling everything to move faster and still stay in synch, even if the project team is working in geographically dispersed locations.8

BIM 360

Source: Image by Photo Mix from Pixabay

Building construction projects, especially large commercial ones, are among the most complex projects in the world. Behind the scenes there is a complicated sequence of design-, labor-, machine-, logistics-, and material-intensive construction processes requiring tremendous precision and a high degree of coordination. This world is still highly dependent on paper drawings, binders, schedules, and so forth. With the thousands of inputs that go into a project, one unaccounted-for change can have a devastating ripple effect on budgets and schedules. This presents enormous opportunities to transform the way the industry works. Autodesk aims to do this with BIM 360 Docs, launched last year at AU, providing a common data environment for all the data for a construction project: the models, documents, drawings, photos, schedules, and so forth. BIM 360 can capture every change and show changes as they occur, getting closer and closer to a single source of truth for a project, with clear accountability and attribution. The other products in BIM 360 — Glue, Field, Plan, and Layout — support preconstruction, planning, operations, and maintenance: the full lifecycle of the building.

In the works for BIM 360 is Project IQ, a virtual assistant for construction. In her keynote address, Autodesk’s Sarah Hodges said one could think of Project IQ as a Siri-like assistant for construction professionals. Project IQ mines project data and uses machine learning and algorithms to analyze potential risks across the project, such as surfacing the 20 highest-risk issues that need to be resolved. Autodesk’s vision is that all of these applications will become workspaces in one holistic experience, with all the data in one place.
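A “top 20 risk issues” view implies scoring and ranking the open issues on a project. Here is a hypothetical sketch of that idea; the scoring formula, field names, and sample issues are all invented, not Project IQ’s actual model, which mines far richer project data.

```python
def rank_risks(issues, top_n=20):
    """Hypothetical scoring: weight each open issue by severity and by
    how long it has been open, then surface the highest-risk items."""
    def score(issue):
        # Issues grow riskier the longer they sit unresolved.
        return issue["severity"] * (1 + issue["days_open"] / 30)
    return sorted(issues, key=score, reverse=True)[:top_n]

# Invented sample issues for illustration.
issues = [
    {"id": "A-101", "severity": 5, "days_open": 60},
    {"id": "B-202", "severity": 3, "days_open": 5},
    {"id": "C-303", "severity": 4, "days_open": 45},
]
top = rank_risks(issues, top_n=2)
```

The machine-learning part of a system like Project IQ would replace the hand-written score with one learned from historical project outcomes.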

Putting it All Together

The extreme complexity of these project team environments — in building construction, movie production, and product design and manufacturing — means that Autodesk will be continually building and refining the ultimate unified single-version-of-the-truth platform for many years to come. So even though it can be viewed as a work in progress, they have shown not just a strong vision, but many highly valuable pieces of the puzzle already in place. AU 2016 was a great showcase for what they’ve done, not only in project integration and collaboration, but also in the other leading-edge technologies of generative design, machine learning, VR, robotics/3D printing, and IoT.


1 Autodesk said that Airbus’s bionic partition will be in production planes flying by the middle of next year (2017). They said the first partition they produced was so sought after, it was sent around the world, delaying the static and dynamic testing required before going into production.

2 LIVE was introduced in July 2016.
3 It was a strange sensation to be able to walk right through this car, which looks solid but of course is nothing but air in front of you.
4 Autodesk refers to this as Machine-as-a-Service, I’m guessing because that acronym (MaaS) is not easily confused with PaaS, which most people assume means Platform-as-a-Service rather than Product-as-a-Service. For more on Product-as-a-Service, see Business Model & Scope Questions.
5 WYSIWYG = What You See Is What You Get.
6 CAM = Computer Aided Manufacturing.
7 Including capabilities such as calculating the nesting of different sheet metal parts on a sheet to optimize material use on the shop floor.
8 Here’s a comparison of different Building Information Modeling (BIM) software.

