Jeff Kowalski, Autodesk’s CTO, opened the Autodesk University 2016 conference (AU 2016) by highlighting four major technology developments: 1) generative design, 2) machine learning, 3) virtual reality, and 4) robotic systems.
Generative Design
Autodesk has been talking about its generative design solution, called Project Dreamcatcher, for over a year. At last year’s AU (Autodesk University) conference, Airbus engineers showed their ‘bionic partition’ created using generative design tools.1 Instead of precisely specifying the shape of a part, the engineer feeds in a set of design goals and constraints (such as the physical connection points for the part, expected loads, which type of manufacturing technology/machines will be used, and so forth) and the software generates thousands of possible designs meeting those criteria. Using a set of filters, the designer can narrow the choices to optimize the parameters they care most about, such as weight, cost, and manufacturing time. In the case of Airbus’s bionic partition, they achieved a 45% weight reduction compared with the existing partition, which had already been designed for minimum weight by Airbus’s world-class aeronautics design engineers.

Figure 1 – Example of Generatively Designing a Chair
Autodesk’s team in Toronto is moving into a new building and has decided to use generative design to reimagine the office workspace. They surveyed all employees about their work habits and preferences, and fed that data into the system along with a set of constraints, such as the size of the building interior, the location of fixed supports and fixtures, etc. They then allowed the system to generate thousands of different options, optimizing for different objectives, such as maximizing outside views, minimizing distractions, maximizing human interaction, and so forth. The ability to trade off competing objectives is one of the big strengths of this approach vs. human-generated designs.
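The generate-then-filter workflow described above can be illustrated with a small sketch. This is purely an illustration, not Dreamcatcher’s actual algorithm or API: random scores stand in for the generative engine, and a simple Pareto filter stands in for the multi-objective narrowing step.

```python
import random

# Hypothetical stand-in for a generative engine: each candidate design
# is scored on two competing objectives, e.g. (weight, cost). Lower is
# better on both.
def generate_candidates(n, seed=0):
    rng = random.Random(seed)
    return [(rng.uniform(1.0, 10.0), rng.uniform(100.0, 1000.0))
            for _ in range(n)]

def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(candidates):
    """Keep only candidates that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(other, c)
                       for other in candidates if other != c)]

designs = generate_candidates(1000)
front = pareto_front(designs)
# The designer then applies judgment (aesthetics, manufacturability)
# to choose among the remaining non-dominated options.
```

The point of the filter is that no single "best" design exists when objectives compete; the tool surfaces the tradeoff frontier and leaves the final choice to human judgment.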
A commercialized version of Dreamcatcher will be rolled out in the first half of 2017. Before that, generative design capabilities will show up in other products. Autodesk is testing these capabilities with ‘a dozen+’ customers to figure out things like how to make it easier to learn and use, how to get the right inputs, what kinds of controls are most useful, and so forth. Some designers are using it to broaden their perspective, even if they go back and use traditional design tools to create the final version.
Changing Role for Designers
Generative design changes the role of the designer from specifying the design to specifying the objectives, requirements, and constraints. Engineers become a bit more like data scientists: gathering all the right inputs, specifying the constraints and objectives, and then analyzing the resulting solution sets to narrow down and pick the winning design. Often aesthetics are important, so artistic skills and aesthetic sensibilities become more important attributes for designers. When appearances are critical, some amount of performance may be traded off for a more beautiful design. Aesthetics and making such tradeoffs are areas where the designer’s judgment is still crucial.
Dreamcatcher is one of the first multi-objective design optimization tools. This means Autodesk is in new territory, figuring out how to get the right UI and the right human-system interactions to make the process iterative and flexible. In the process, they have discovered that some designers want visibility inside the ‘black box’ to understand what is happening and how it works. Autodesk hears feedback like: “I give the system goals and constraints, then something magically generates all these possibilities. How are my goals and constraints being linked to the outcomes? Can you give me the ability to nudge the outcomes in a slightly different direction?” It has been said that the smartest entity is neither the human nor the computer alone, but rather the human and computer working together as one. That appears to be the instinctive impulse of many designers: to remain full and intimate participants throughout the design process.
Machine Learning
Cloud computing provides nearly unlimited scalability, enabling powerful machine learning intelligence. Autodesk talked about how their design platform will be able to pay attention to how and what the user is working on, and then give recommendations. For example, it may see designs that other users have already created that may fit your needs, and recommend you start with one of those. They said the software will evolve to fit the user’s needs as they are using it. They are also working on using machine learning to teach 3D printers and industrial robots to do smarter things more autonomously, so that smaller shops and manufacturers can buy and use them. As well, Autodesk expects to use machine learning in the context of IoT.
Virtual Reality
Virtual reality was a big theme at the conference. It is a natural fit for architects designing buildings, a major part of Autodesk’s business. It allows them to try out different designs and walk through them at human scale while they are designing them, and then take the customer on a virtual tour. We got to experience this firsthand at AU, donning a VR headset to ‘walk’ through the virtual lobby of a building, into its bar/restaurant, and into a conference room. We experienced the same VR design of a building that a customer had reviewed this way, and that was actually approved and built. This capability lets the architect’s clients make suggestions at the early concept/design phase, when changes are still relatively easy and feasible. Once construction starts, it is much harder (sometimes impossible) and more expensive to modify the design. Autodesk LIVE2 is a cloud service that takes a Revit model and, with two clicks, gives users a VR experience with HTC Vive or Oculus Rift headsets. That is a big step in simplifying and speeding up the process of creating VR models for architectural walkthroughs.
Robotic Systems
In their lab, Autodesk is taking standard industrial robots and attempting to use machine learning to get the robots to learn tasks, without having to do traditional programming. Initially they are working on ‘simple’ tasks (i.e. simple for a person, but not for a robot), such as picking up and connecting objects (e.g. bolting them together) to ultimately assemble and build things. This is an example of combining machine learning with robotics.
Autodesk Fusion Connect (Formerly SeeControl)
As of May this year, SeeControl, the IoT platform that Autodesk acquired last year, was renamed Fusion Connect. Their tagline is “100% no-coding IoT cloud platform.” I got a demo, and from what I saw it lives up to that promise. Cloud applications can be created and deployed straightforwardly using their visual flow-chart programming paradigm, with the ability to drag and drop data sources, formulas, and data sinks from pick lists on the side. The administrative/security structure allows an OEM to create white-label experiences, so each of their customers has their own environment and is not able to see any other customer’s data. The OEM can create roles for those customers, such as administrator or site manager, enabling their customers to do a fair amount of self-service for things like adding users and tweaking the UI (such as color scheme, font sizes, gauge styles, placement of tabs, resizing dashboard elements, etc.).
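The source-formula-sink flow-chart paradigm described above can be sketched in code. This is a rough analogy only, not Fusion Connect’s actual API; the node names, units, and threshold are made up for illustration.

```python
# Each function below plays the role of one node in a visual flow chart:
# a data source feeds a formula node, which feeds a sink node.

def temperature_source():
    """Stand-in data source: temperature readings (Celsius) from a
    hypothetical connected device."""
    yield from [21.5, 22.0, 85.0, 23.1]

def celsius_to_fahrenheit(stream):
    """A 'formula' node applied to each reading as it flows through."""
    for c in stream:
        yield c * 9 / 5 + 32

def alert_sink(stream, threshold_f=150.0):
    """A sink node that collects readings crossing an alert threshold."""
    return [f for f in stream if f > threshold_f]

# Wiring the nodes together corresponds to drawing arrows in the
# visual editor.
alerts = alert_sink(celsius_to_fahrenheit(temperature_source()))
```

In the no-code tool, each of these functions would be a box dragged from a pick list, and the composition would be drawn rather than written.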
Fusion Connect focuses on tools for manufacturers/OEMs to create services that enhance the IoT-enabled, smart connected products they build, or potentially even to create Product-as-a-Service4 business models. These tools give the OEM a much better handle on service and operating costs, and they can add usage-based billing capabilities. They may also gain better visibility into when the machine is being used in unintended ways.
Fusion Connect has a WYSIWYG5 report builder that can do things like group alerts by type, show average service times for each type of alert, search and drill down to look at individual incidents, and so forth. The report/dashboard view is highly graphical, and can include maps, gauges, charts, and many other graphic objects, which are all highly editable (labels, layout, format, colors, etc.). Fusion Connect provides a strong set of tools to rapidly create new capabilities and services for an OEM that makes IoT-enabled smart connected products.
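The kind of report the builder produces, such as grouping alerts by type and averaging service times, boils down to a group-and-aggregate operation. Here is a minimal sketch; the field names and records are hypothetical, invented for illustration.

```python
from collections import defaultdict

# Hypothetical alert records of the kind an IoT platform might collect.
alerts = [
    {"type": "overheat",  "service_minutes": 30},
    {"type": "overheat",  "service_minutes": 50},
    {"type": "vibration", "service_minutes": 120},
]

def average_service_time_by_type(records):
    """Group records by alert type and compute the mean service time,
    the aggregation behind the 'average service times per alert type'
    report described above."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec["type"]].append(rec["service_minutes"])
    return {t: sum(v) / len(v) for t, v in groups.items()}

report = average_service_time_by_type(alerts)
# report maps each alert type to its average service time in minutes
```

In the WYSIWYG builder this computation is configured visually; drilling down to individual incidents corresponds to inspecting the underlying record lists.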
Integrated Project Coordination and Collaboration Platforms — For Buildings, Products, Movies
The other big theme from the show was integration of large, complex project teams. All three of Autodesk’s main markets or domains — architecture/construction, design/manufacturing, and movie production — involve large project teams, often with members from many different companies and freelance contractors. These teams require a lot of coordination, like an orchestra, but with the challenge of members potentially using many different tools, with data spread across different locations, such as proprietary data on individuals’ hard drives. All of that is supposed to come together into one project, but too often it is a big mess. Autodesk is aiming to solve that mess by building a ‘Common Data Environment’ for each of their three main domains, to put an end to importing, exporting, and emailing files around and just hoping all the changes are captured. The common data environments are the cloud-based, integrated platforms Fusion 360 (product design), BIM 360 (building design), and Shotgun (movie production).
Coordinating Manufacturing Projects
Integrated Electronic Design will be part of Fusion. Fusion also provides support for production process characteristics, enabling more informed decisions early in the process. They announced, for example, the introduction of Sheet Metal Fabrication7 within Fusion and showed how the design tools automatically enforced the setbacks and clearances needed to ensure that the part could actually be manufactured on the types of production machines that had been specified. It automatically produces the laser cutting and bending instructions. The model is associative, so whenever a change is made to the source 3D model, the manufacturing components are also updated.
They put simulation tools into the hands of designers to optimize at the point of design, rather than validating after the design is done. Their Nastran simulator provides general-purpose finite element analysis (FEA), including the ability to apply loading conditions over time, simulate buckling loads, and see how failures occur. The built-in shape optimization can be used, for example, to get to a minimum weight that still meets the specified safety margin.
Autodesk’s ultimate goal is to go from a linear timeline and waterfall project model to supporting a much more rapid, agile, iterative approach, and the ability to look at various alternatives in parallel. They created branching and merging logic for their Fusion 360 software, to support these kinds of parallel explorations, while maintaining the integrity of the project without requiring convoluted workflows.
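The branching-and-merging idea for parallel design explorations can be sketched with a toy version model. This is a hypothetical illustration, not Fusion 360’s actual data model; the class, parameters, and merge policy are invented.

```python
# Minimal sketch of branching a design so alternatives can be explored
# in parallel and merged back, in the spirit described above.

class DesignVersion:
    def __init__(self, params, parent=None):
        self.params = dict(params)  # design parameters at this version
        self.parent = parent        # link back to the version it came from

    def branch(self):
        """Start a parallel exploration from this version, leaving the
        original untouched."""
        return DesignVersion(self.params, parent=self)

    def merge(self, other):
        """Fold another branch's changes into this one. A naive
        'latest wins' policy; a real tool would reconcile conflicts."""
        merged = dict(self.params)
        merged.update(other.params)
        return DesignVersion(merged, parent=self)

main = DesignVersion({"thickness_mm": 3.0, "material": "aluminum"})
alt = main.branch()
alt.params["thickness_mm"] = 2.5   # explore a lighter variant in parallel
merged = main.merge(alt)           # bring the exploration back into main
```

The value of doing this inside the design platform, rather than by emailing file copies around, is that every branch stays linked to its parent, so the project’s integrity is maintained without convoluted workflows.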
Shotgun
BIM 360
In the works for BIM 360 is Project IQ, a virtual assistant for construction. In her keynote address, Autodesk’s Sarah Hodges said one could think of Project IQ as a Siri-like assistant for construction professionals. Project IQ mines project data and uses machine learning algorithms to analyze potential risks across the project, such as surfacing the 20 highest-risk issues that need to be resolved. Autodesk’s vision is that all of these applications will become workspaces in one holistic experience, with all the data in one place.
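The final ranking step of that risk analysis, surfacing the top 20 issues once each has a risk score, can be sketched simply. The scoring itself is where Project IQ’s machine learning would live; here a made-up deterministic score stands in for it, and all names are hypothetical.

```python
# Hypothetical open issues, each with a risk score that in Project IQ
# would come from a machine learning model mining the project data.
issues = [{"id": i, "risk_score": (i * 37) % 100} for i in range(200)]

def top_risks(issues, n=20):
    """Return the n highest-risk issues, highest score first, i.e. the
    watchlist a construction manager would review each morning."""
    return sorted(issues, key=lambda x: x["risk_score"], reverse=True)[:n]

watchlist = top_risks(issues)
```

The hard part is the scoring model, not the ranking; the sketch just shows how scored issues become an actionable shortlist.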
Putting it All Together
The extreme complexity of these project team environments — in building construction, movie production, and product design and manufacturing — means that Autodesk will be continually building and refining the ultimate unified single-version-of-the-truth platform for many years to come. So even though it can be viewed as a work in progress, they have shown not just a strong vision, but many highly valuable pieces of the puzzle already in place. AU 2016 was a great showcase for what they’ve done not only in project integration and collaboration, but in the other leading-edge technologies of generative design, machine learning, VR, robotics/3D printing, and IoT.
___________________________________________
1 Autodesk said that Airbus’s bionic partition will be in production planes flying by the middle of next year (2017). They said the first partition they produced was so sought after, it was sent around the world, delaying the static and dynamic testing required before going into production.
2 LIVE was introduced in July, 2016.
3 It was a strange sensation to be able to walk right through this car, that looks solid, but of course is nothing but air in front of you.
4 Autodesk refers to this as Machine-as-a-Service, I’m guessing because that acronym (MaaS) would not be easily confused with acronym PaaS, which most people assume means Platform-as-a-Service (PaaS), rather than Product-as-a-Service. For more on Product-as-a-Service, see Business Model & Scope Questions.
5 WYSIWYG = What You See Is What You Get.
6 CAM = Computer Aided Manufacturing.
7 Including capabilities such as calculating the nesting of different sheet metal parts on a sheet to optimize the material use on shop floor.
8 Here’s a comparison of different Building Information Modeling (BIM) Software.