Industrial LLMs: Outside the Shop
Large language models aid complex product and system design, reducing time-to-market and operational costs while identifying and mitigating risks.
Last week’s post on Automation Apprentices covered LLM and generative AI capabilities applied on the shop floor. This week’s focus is the new paradigm for product development and engineering.
A New Paradigm for Complex System Design
Many industrial products (semiconductors, materials/chemicals, engines) and systems (manufacturing process steps, supply chains) result from significant research and development expenditures. Each of these products and systems adheres to Gall’s Law:
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.
Large language models and other generative AI models can recall and reuse the “simple system that worked” from decades of past experiments and intellectual property held by many industrial companies. By augmenting existing processes with these models, companies can significantly reduce time-to-market and operational costs, particularly in sectors like integrated circuit design, where precision and efficiency are paramount. In supply chain management, the ability to quickly identify and mitigate risks translates to operational resilience, crucial in today’s volatile global market. Perhaps most notably, in materials science, the accelerated discovery of novel materials with specific properties is not just a matter of efficiency but a leap toward innovation, opening doors to products and solutions that were previously unattainable. These advancements are not incremental improvements; they represent a new paradigm in how industrials approach research and design, leading to a more innovative, efficient, and responsive industrial landscape.
Duann Scott over at Bits to Atoms wrote “a tale that balances the dazzle of innovation with the sobering real-world challenges of gathering accurate data and the physics of manufacturing” that’s worth a second read:
Here’s the list of companies adding generative AI capabilities to product development, engineering, and operations.
Product Development
PTC
Will ChatGPT AI Revolutionize Engineering and Product Development? Here's What to Know.
Creo Generative Design utilizes the power of AI to revolutionize the design process. While not the same type of AI as ChatGPT, generative design uses AI algorithms to generate and evaluate design options, enabling engineers and designers to create more innovative and efficient products in a fraction of the time it would take using traditional design methods.
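The generate-and-evaluate idea behind generative design can be made concrete with a minimal sketch. This is not Creo’s actual algorithm: it randomly samples candidate beam cross-sections, scores each against a toy stiffness-per-mass objective, and keeps the best. All names and numbers are illustrative.

```python
import random

def generate_candidate(rng):
    """Sample a candidate beam design: width and height in millimeters."""
    return {"width_mm": rng.uniform(10, 50), "height_mm": rng.uniform(10, 100)}

def evaluate(design):
    """Score a design. For a rectangular section, bending stiffness
    scales with w * h**3 and mass scales with w * h, so the score
    (stiffness per unit mass) reduces to h**2."""
    w, h = design["width_mm"], design["height_mm"]
    stiffness = w * h**3
    mass = w * h
    return stiffness / mass

def generative_design(n_candidates=1000, seed=42):
    """Generate many candidates and keep the best-scoring one."""
    rng = random.Random(seed)
    candidates = [generate_candidate(rng) for _ in range(n_candidates)]
    return max(candidates, key=evaluate)

best = generative_design()
print(best)  # tall sections win on stiffness-to-mass
```

Real tools replace the random sampler with learned generative models and the one-line objective with physics simulation, but the loop structure is the same.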
Ansys
Ansys Expands AI Offerings with New Virtual Assistant
Ansys is advancing its customer experience, accelerating democratization of simulation, and powering next-generation innovation with new AI capabilities across its simulation portfolio and technical support services
Ansys’ first AI-powered virtual assistant will provide the added option of 24/7 technical support to Ansys customers throughout the world, reducing first response time to mere seconds
Built using ChatGPT technology, Ansys’ latest AI tool is tailored for Ansys customers and trained with Ansys public data, combining the domain expertise normally distributed across multiple engineers into one virtual knowledge engine
MIT
To excel at engineering design, generative AI must learn to innovate, study finds
In their study, Ahmed and Regenwetter reveal the pitfalls of deep generative models when they are tasked with solving engineering design problems. In a case study of bicycle frame design, the team shows that these models end up generating new frames that mimic previous designs but falter on engineering performance and requirements.
The last model the team tested was one that Regenwetter built to generate new geometric structures. This model was designed with the same priorities as the previous models, with the added ingredient of design constraints, and prioritizing physically viable frames, for instance, with no disconnections or overlapping bars. This last model produced the highest-performing designs, that were also physically feasible.
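A toy-scale version of that “added ingredient of design constraints” is constraint-filtered sampling: generate candidates, discard any that fail a feasibility check, then rank the survivors on performance. The frame model below (three tube lengths, triangle-inequality feasibility, weight-based performance) is entirely illustrative and is not the team’s model.

```python
import random

def sample_frame(rng):
    """Toy 'frame': three member lengths (mm), sorted ascending."""
    return tuple(sorted(rng.uniform(100, 800) for _ in range(3)))

def is_feasible(frame):
    """Feasibility check standing in for 'no disconnections or
    overlapping bars': the members must close into a triangle."""
    a, b, c = frame
    return a + b > c

def performance(frame):
    """Toy performance score: lighter frames (less total tubing) win."""
    return -sum(frame)

def constrained_generate(n=500, seed=0):
    """Sample n frames, keep only feasible ones, return the best."""
    rng = random.Random(seed)
    feasible = [f for f in (sample_frame(rng) for _ in range(n)) if is_feasible(f)]
    return max(feasible, key=performance)

best = constrained_generate()
print(best, is_feasible(best))
```

Filtering after generation is the crudest way to impose constraints; the study’s point is that building them into the model’s objective works better, but the feasibility check itself looks like this.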
Siemens
Siemens and Microsoft drive industrial productivity with generative artificial intelligence
Siemens and Microsoft are harnessing the collaborative power of generative artificial intelligence (AI) to help industrial companies drive innovation and efficiency across the design, engineering, manufacturing and operational lifecycle of products. To enhance cross-functional collaboration, the companies are integrating Siemens’ Teamcenter® software for product lifecycle management (PLM) with Microsoft’s collaboration platform Teams and the language models in Azure OpenAI Service as well as other Azure AI capabilities.
Semiconductor
NVIDIA
Silicon Volley: Designers Tap Generative AI for a Chip Assist (research paper)
ChipNeMo aims to explore the applications of large language models (LLMs) for industrial chip design. Instead of directly deploying off-the-shelf commercial or open-source LLMs, we instead adopt the following domain adaptation techniques: custom tokenizers, domain-adaptive continued pretraining, supervised fine-tuning (SFT) with domain-specific instructions, and domain-adapted retrieval models. We evaluate these methods on three selected LLM applications for chip design: an engineering assistant chatbot, EDA script generation, and bug summarization and analysis. Our results show that these domain adaptation techniques enable significant LLM performance improvements over general-purpose base models across the three evaluated applications, enabling up to 5x model size reduction with similar or better performance on a range of design tasks. Our findings also indicate that there’s still room for improvement between our current results and ideal outcomes. We believe that further investigation of domain-adapted LLM approaches will help close this gap in the future.
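One of ChipNeMo’s ingredients is a domain-adapted retrieval model. The sketch below shows retrieval in its simplest form: a hand-rolled idf-weighted keyword match over an invented three-document “corpus” of EDA snippets. The real system uses trained embedding models; nothing here comes from the paper’s code.

```python
import math

# Invented stand-in corpus of "domain documents".
DOCS = {
    "clock_tree": "the clock tree synthesis step balances skew across flip flops",
    "drc":        "design rule checking flags spacing and width violations in layout",
    "eda_script": "a tcl script drives place and route in the eda tool flow",
}

def tokenize(text):
    return text.lower().split()

def idf(term, docs):
    """Inverse document frequency: rare terms count more."""
    n = sum(1 for d in docs.values() if term in tokenize(d))
    return math.log((1 + len(docs)) / (1 + n)) + 1.0

def score(query, doc, docs):
    """Sum the idf weights of query terms present in the document."""
    doc_terms = set(tokenize(doc))
    return sum(idf(t, docs) for t in tokenize(query) if t in doc_terms)

def retrieve(query, docs=DOCS, k=1):
    """Return the ids of the k best-matching documents."""
    ranked = sorted(docs, key=lambda d: score(query, docs[d], docs), reverse=True)
    return ranked[:k]

print(retrieve("how do I write a tcl script for place and route"))
```

The retrieved passages are then prepended to the chatbot’s prompt; domain adaptation means training the scoring function itself on chip-design text instead of keyword overlap.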
Synopsys
Meet Synopsys.ai Copilot, Industry's First GenAI Capability for Chip Design
With the introduction of Synopsys.ai Copilot, Synopsys is harnessing the power of generative AI (GenAI) to bolster design teams with new levels of productivity. Integrated into the full Synopsys EDA stack, Synopsys.ai Copilot is the world’s first GenAI capability for chip design. Trained on the trusted materials that you rely on today, the technology collaborates with engineers on their everyday workflows. What’s more, as Synopsys.ai Copilot learns from your projects, it will eventually be able to deliver more meaningful guidance based on your organization’s best practices and institutional knowledge.
AI-Powered EDA Suite for Chip Design & AI Applications | Synopsys.ai
Cadence
Cadence Design Is Working With Renesas To Build The World’s First LLM Tool For Up-Front Chip Design
Renesas and Cadence have collaborated to develop a novel approach to the up-front design work by leveraging LLMs, significantly reducing the time and effort from specification to final design. The chip design verification, debugging, and implementation phases remain unchanged for now. They call this an accelerated “Correct by Construction” design methodology.
Additive Manufacturing
Addithive
Addithive - Generating GCODE for 3D Printing with Chat GPT-4
Furthermore, integrating ChatGPT 4 with 3D model repositories could open up new possibilities for 3D printing. For example, users could search for models based on descriptions, keywords, or even images, and ChatGPT 4 could generate the GCODE for printing. This could make 3D printing more accessible to a wider audience, as it would eliminate the need for expertise in 3D modeling and slicing.
AI Build
Fictiv
With advances in material science and manufacturing technologies like 3D printing, it can be overwhelming (not to mention time-consuming) to find the right material for your project needs. That’s why we created Materials.AI: a first-of-its-kind artificial intelligence assistant, powered by ChatGPT and Fictiv’s expansive manufacturing database, to help you navigate the complex landscape of plastic and metal materials.
Research
Benchling
AI Experimentation at Benchling
Scientists spend a significant portion of their time creating routine reports and charts from experiments and studies. This becomes even more time-consuming when data spans teams or large, multi-step experimental processes. Report Generation with LLMs in Benchling brings report creation down to just minutes, freeing scientists to spend less time on administrative work and more time on science.
Facility Design
Institute for Manufacturing Technology and Production Systems
Generative design can be an effective approach to generate optimized factory layouts. One evolving topic in this field is the use of reinforcement learning (RL)-based approaches. Existing research has focused on the utilization of the approach without providing additional insights into the learned metrics and the derived policy. This information, however, is valuable from a layout planning perspective since the planner needs to ensure the trustworthiness and comprehensibility of the results.
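Stripped of the learning machinery, the objective such a layout generator optimizes is easy to state: total material flow times travel distance. A brute-force sketch over a four-station aisle (all flows, distances, and station names invented) shows the metric a planner would want a learned policy explained against.

```python
from itertools import permutations

# Hypothetical flow matrix: units moved per day between workstations.
FLOW = {
    ("saw", "mill"): 30, ("mill", "drill"): 20,
    ("saw", "paint"): 5, ("drill", "paint"): 25,
}
SLOTS = [0.0, 5.0, 10.0, 15.0]  # slot positions along one aisle, in meters

def transport_cost(layout):
    """Total daily transport effort: flow * distance over station pairs."""
    pos = {station: SLOTS[i] for i, station in enumerate(layout)}
    return sum(f * abs(pos[a] - pos[b]) for (a, b), f in FLOW.items())

def best_layout(stations=("saw", "mill", "drill", "paint")):
    """Exhaustive search; fine for 4 stations, RL is for scaling this up."""
    return min(permutations(stations), key=transport_cost)

layout = best_layout()
print(layout, transport_cost(layout))
```

With four stations all 24 layouts can be enumerated; RL earns its keep when the slot grid and station count make enumeration impossible, which is exactly when the interpretability question the paper raises becomes pressing.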
Parsons
Revolutionizing Design: The Power Of Generative AI
One of the key benefits of Generative AI in architectural design is its ability to optimize designs for specific criteria or constraints. For example, an architect could use Gen-AI to explore different options for a building’s energy efficiency or structural stability. By inputting specific parameters such as materials, site conditions, and budget constraints into the algorithm, Gen-AI can generate multiple design options that meet those requirements (e.g. establishing the column numbers in a parking garage structure).
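The parking-garage example can be sketched as a parametric sweep: for each candidate column bay size, count how many columns a rectangular floor plate needs. The spans and bay sizes below are illustrative, not engineering guidance.

```python
import math

def layout_options(span_x_m, span_y_m, bay_sizes_m=(6.0, 7.5, 9.0)):
    """Enumerate column grids for a rectangular floor plate.

    Columns sit at every grid intersection, including the perimeter,
    so a plate needs (ceil(span/bay) + 1) grid lines per direction.
    Larger bays mean fewer columns but heavier beams; a real tool would
    also score structure, cost, and parking efficiency.
    """
    options = []
    for bay in bay_sizes_m:
        nx = math.ceil(span_x_m / bay) + 1  # grid lines along x
        ny = math.ceil(span_y_m / bay) + 1  # grid lines along y
        options.append({"bay_m": bay, "columns": nx * ny})
    return options

for opt in layout_options(60.0, 36.0):
    print(opt)
```

A Gen-AI layer sits on top of a sweep like this: it proposes the parameter ranges from the brief and explains the trade-offs among the resulting options.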
Supply Chain
Cosmo Tech
Driving Success in a Complex World: Leveraging GenAI and Simulation to Optimize Decision-Making Under Uncertainty
Microsoft
Large Language Models for Supply Chain Optimization
Supply chain operations traditionally involve a variety of complex decision making problems. Over the last few decades, supply chains greatly benefited from advances in computation, which allowed the transition from manual processing to automation and cost-effective optimization. Nonetheless, business operators still need to spend substantial efforts in explaining and interpreting the optimization outcomes to stakeholders. Motivated by the recent advances in Large Language Models (LLMs), we study how this disruptive technology can help bridge the gap between supply chain automation and human comprehension and trust thereof. We design OptiGuide -- a framework that accepts as input queries in plain text, and outputs insights about the underlying optimization outcomes. Our framework does not forgo the state-of-the-art combinatorial optimization technology, but rather leverages it to quantitatively answer what-if scenarios (e.g., how would the cost change if we used supplier B instead of supplier A for a given demand?). Importantly, our design does not require sending proprietary data over to LLMs, which can be a privacy concern in some circumstances. We demonstrate the effectiveness of our framework on a real server placement scenario within Microsoft's cloud supply chain. Along the way, we develop a general evaluation benchmark, which can be used to evaluate the accuracy of the LLM output in other scenarios.
GitHub - microsoft/OptiGuide: Large Language Models for Supply Chain Optimization
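The paper’s what-if pattern — translate a plain-text question into a perturbation of the optimization input, re-solve, report the delta — can be sketched with a deliberately tiny model. Here a greedy cheapest-first allocation stands in for the real combinatorial solver, and the supplier data is invented; this is not OptiGuide’s code.

```python
# Hypothetical data: per-unit cost and capacity for two suppliers.
SUPPLIERS = {
    "A": {"cost": 10.0, "capacity": 80},
    "B": {"cost": 12.0, "capacity": 120},
}

def min_cost(demand, suppliers):
    """Greedy cheapest-first allocation (optimal for this single-item,
    capacitated model; stands in for a real solver)."""
    total, remaining = 0.0, demand
    for name in sorted(suppliers, key=lambda s: suppliers[s]["cost"]):
        take = min(remaining, suppliers[name]["capacity"])
        total += take * suppliers[name]["cost"]
        remaining -= take
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return total

def what_if_exclude(demand, suppliers, excluded):
    """Answer 'how would cost change without supplier X?' by re-solving
    the reduced problem and reporting the cost delta."""
    baseline = min_cost(demand, suppliers)
    reduced = {k: v for k, v in suppliers.items() if k != excluded}
    return min_cost(demand, reduced) - baseline

print(what_if_exclude(100, SUPPLIERS, "A"))  # extra cost of dropping A
```

In OptiGuide the LLM’s job is only the translation step, question in, perturbation out, which is why proprietary data never has to leave the solver.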
Protolabs & Augury
Production Pulse Tech Thursday: Augury and AI, Protolabs on Networking
Manufacturing technology leaders discuss the value of generative AI and how distributed supplier networks can support production.
Product Personalization
Databricks
Solution Accelerator: LLMs for Manufacturing
Augmenting customer support agents. Customer support agents want to be able to query what open/unresolved issues exist for the customer in question and provide an AI-guided script to assist the customer.
Capturing and disseminating domain knowledge through interactive training. The industry is dominated by deep know-how that is often described as "tribal" knowledge. With the aging workforce comes the challenge of capturing this domain knowledge permanently. LLMs could act as reservoirs of knowledge that can then be easily disseminated for training.
Augmenting the diagnostics capability of field service engineers. Field service engineers are often challenged with sifting through large volumes of interrelated documents. An LLM that reduces the time taken to diagnose a problem directly increases efficiency.
Introducing Databricks Assistant, a context-aware AI assistant
MAR 18, 2024 Update: Added Cosmo Tech and Siemens