Generative AI

Assembly Line

Using generative AI, C.H. Robinson has achieved automation across the entire lifecycle of a freight shipment

📅 Date:

🔖 Topics: Generative AI

🏢 Organizations: CH Robinson


This new proprietary tech incorporates generative artificial intelligence to overcome the decades-old challenge of automating transactions that shippers still commonly choose to do by email. Shippers directly integrated with C.H. Robinson’s platform have for years been able to get automated service instantly. But the same request sent by email had to wait for a person. Now, more than 10,000 of those routine transactions per day have been automated. Shippers who use email can get the same speed-to-market and cost savings as other customers, and the C.H. Robinson teams that serve them can spend more time on more valuable work.

C.H. Robinson’s new automation tech is being used for:

  • Emailed price requests: This has grown to 2,600 quotes delivered a day, now even faster at an average of 32 seconds each. Having started with truckload quotes, the tech has also been expanded to handle LTL quotes.
  • Emailed load tenders: The tech is turning emails into 5,500 shipment orders a day, achieved in 90 seconds.
  • Emailed appointments: When a customer uses email rather than C.H. Robinson’s touchless appointments, the tech extracts the details needed to lock in a pick-up or delivery time. So far, this is done 3,000 times a day across more than 26,000 locations within 60 seconds.
  • In-transit visibility: For instances when a carrier’s automated status updates aren’t working, C.H. Robinson is piloting the use of generative AI to interact with the carrier, rather than taking up staff time to send an email, text or instant message.

Read more at AJOT

1X’s Generative World Model for Robot Interactions

📅 Date:

✍️ Authors: Jack Monas, Eric Jang

🔖 Topics: Generative AI, Humanoid

🏢 Organizations: 1X


World models solve a very practical and yet often overlooked challenge when building general-purpose robots: evaluation. If you train a robot to perform 1000 unique tasks, it is very hard to know whether a new model has made the robot better at all 1000 tasks, compared to a prior model. Even the same model weights can experience a rapid degradation in performance in a matter of days due to subtle changes in the environment background or ambient lighting.

Physics-based simulators (Bullet, MuJoCo, Isaac Sim, Drake) are a reasonable way to quickly test robot policies. They are resettable and reproducible, allowing researchers to carefully compare different control algorithms. However, these simulators are mostly designed for rigid-body dynamics and require a lot of manual asset authoring.

We’re taking a radically new approach to evaluation of general-purpose robots: learning a simulator directly from raw sensor data and using it to evaluate our policies across millions of scenarios. By learning a simulator directly from real data, you can absorb the full complexity of the real world without manual asset creation.

To help accelerate progress towards solving world models for robotics, we are releasing over 100 hours of vector-quantized video (Apache 2.0), pretrained baseline models, and the 1X World Model Challenge, a three-stage challenge with cash prizes.

Read more at 1X Blog

Paperless Parts Wingman™: Your New AI-Powered Partner in Quoting

📅 Date:

🔖 Topics: Generative AI

🏢 Organizations: Paperless Parts


Wingman is Paperless Parts’ new secure, proprietary, AI-powered automation tool inside of our quoting platform designed to arm you with the right information to make it faster and easier to get to the meat of quoting. Wingman automatically extracts critical details from quote packages (prints, models, emails, etc.), so instead of worrying about making a mistake, you can focus on what really matters: delivering accurate quotes, quickly. Consider it an extension of your team—an intelligent assistant that has your back 24/7, making sure you never miss an important detail.

Wingman recognizes over 10,000 ASTM, AMS, MIL-SPEC, NADCAP, and OEM-specific process/material specifications, highlights them on prints, and presents their definitions right next to the print. Wingman identifies UTS and metric thread specifications, which could indicate taps, inserts, or other hardware requirements. Wingman identifies 29 common process keywords, including anodizing, welding, and heat treating.

Read more at Paperless Parts Blog

Westinghouse Unveils Pioneering Nuclear Generative AI System

📅 Date:

🔖 Topics: Generative AI

🏢 Organizations: Westinghouse


Westinghouse Electric Company launched its Hive™ nuclear-specific Generative Artificial Intelligence (GenAI) System to deliver custom GenAI solutions for its global customer base. The Hive System will be a game-changing capability that will drive improved cost and schedule performance across the entire reactor lifecycle, spanning design, licensing, manufacturing, construction and operations.

With the Hive System, customers gain access to more than 100 years of proprietary industry innovation and knowledge pioneered by Westinghouse, powered by its global team of engineers and data scientists, via a highly secure system infrastructure and software. By integrating the Hive System into its own products, services and processes, Westinghouse engineers drive enhancements to their operations and customer applications. Additionally, the Hive System helps customers optimize maintenance planning, enhance inspections and improve the digital user experience to provide operational teams with the right information at the right moment.

Read more at Westinghouse News

Can AI Deliver Fully Automated Factories?

📅 Date:

✍️ Authors: Daniel Kuepper, Leonid Zhukov, Namrata Rajagopal, Yannick Bastubbe

🔖 Topics: Design for X, Generative AI

🏢 Organizations: Boston Consulting Group


The good news for manufacturers is that, based on our research and on-the-ground experience, we believe a significant shift is underway. The entry barriers for implementation that hindered earlier efforts are going to rapidly fall in the next few years. Robots are becoming more capable, flexible, and cost-effective, with embodied agents bringing the power of generative AI into the factory environment. Manufacturers must prepare for the inevitable disruption — or risk falling behind.

Our client chose to adopt a “redesign for automation” approach for its process, products, and layout. This complete overhaul of factory operations added new process steps to improve automation feasibility while removing human-oriented process inefficiencies. For example, our client no longer had to sacrifice valuable floor space for storing inventory that humans can see and reach. Instead, they built second-story vertical storage areas that robots can easily access and navigate. With the freed-up space, they installed more machines to increase output by more than 30%.

Programming and integration account for 50 to 70% of the cost of a robotic application. Generative AI interfaces are expected to drive this cost down significantly by providing a natural language interface that lets even non-technical workers instruct robots. The transformation would be drastic: instead of one specialized worker for every eight robots, the factory would only require one non-specialized worker for every 25 robots. Industry applications have already emerged. For example, Sereact has already rolled out a voice or text command interface, PickGPT, to interact with robots using simple instructions such as “I need to pack the order.”
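
The sketch below illustrates the general pattern described above (an LLM translating a free-form operator instruction into a structured robot task). It is not Sereact's implementation; the OpenAI client, model name, and task schema are illustrative assumptions.

```python
# Minimal sketch (not Sereact's actual implementation): an LLM maps a free-form
# instruction like "I need to pack the order" to a structured robot task that a
# downstream controller could execute. Client, model, and schema are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

SYSTEM = (
    "Translate the operator's instruction into JSON with fields: "
    '"action" (one of: pick, place, pack, move), "object", and "destination". '
    "Reply with JSON only."
)

def instruction_to_task(instruction: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": instruction}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

task = instruction_to_task("I need to pack the order for customer 4711")
print(task)  # e.g. {"action": "pack", "object": "order 4711", "destination": "shipping box"}
# A robot-side dispatcher would map this dict onto motion primitives.
```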

Read more at Harvard Business Review

Generative AI Is Everywhere—Including At Birchwood Foods

📅 Date:

✍️ Author: Tom Davenport

🔖 Topics: Generative AI, Worker Training

🏢 Organizations: Birchwood Foods, Synthesia


Birchwood has historically used short videos to communicate safety and health-related information to its employees. This is generally an effective medium for learning, but only if the audience understands the language within the video. It was way too labor-intensive (and infeasible for English-only speakers) to record videos in every language that Birchwood employees speak, so Birchwood had previously used subtitles from a language translation company. That made the videos very expensive to produce, and many employees couldn’t read the captions.

In early 2023 Caitlin Hamstra, the Corporate Learning and Development Manager, mentioned to her boss that there might be a new solution to this problem based on recent AI developments. Kim Crawford, Birchwood’s head of HR, Safety, and Learning & Development, encouraged Hamstra to survey new technologies for translations in video-based learning. After experimenting with several different generative AI language models and vendors, Hamstra came across a UK-based startup called Synthesia. It specializes in creating avatar-based videos from text that can speak in multiple languages—131 of them at last count. Birchwood entered into a purchase agreement with Synthesia in October of 2023, and Hamstra and L&D Supervisor James Nolan began to implement their software.

Since that time, Birchwood has been very productive with AI-enabled translation. It has created 120 videos, each 5 or 6 minutes in length. They use avatars to do the speaking parts, and the 131 available languages cover 95% of the languages spoken at the plants. Crawford says that giving employees work instructions in their own language has led to very positive reactions. It helps, she says, with engagement, retention, safety, and buy-in to the job.

Read more at Forbes

Could reading instruction manuals become a thing of the past?

📅 Date:

✍️ Authors: Michael Dempsey, Will Smale

🔖 Topics: Large Language Model, Generative AI

🏢 Organizations: Aveva, Dozuki, SCG Chemicals


Simon Bennett, Aveva’s head of AI innovation, says the AI can locate where there has been, say, a power failure. It then delves into “a monster PDF manual”. From this, the AI - via a computer screen - generates different ideas of what the problem might be. It can also produce a 3D image of the affected machinery, such as a turbine, with Mr Bennett noting that engineers appreciate such visual responses to their questions.

Dozuki’s AI-powered system CreatorPro can automatically create a user guide based on an engineer making a video of him or her talking through and carrying out a process. “The user uploads the video, and a step-by-step instruction guide is automatically created,” says Allen Yeung, Dozuki’s vice president of product. “The AI chooses the text that accompanies each step, and it can automatically translate that into other languages.”

Read more at BBC

How Chevron is using gen AI to strike oil

📅 Date:

✍️ Author: Taryn Plumb

🔖 Topics: Generative AI, Large Language Model

🏢 Organizations: Chevron


Oil and gas operations generate an enormous amount of data — a seismic survey in New Mexico, for instance, can provide a file that is a petabyte all by itself. “To turn that into an image that you can make a decision with is a 100 exaflop operation,” Bill Braun, Chevron CIO, told the audience at this year’s VB Transform. “It’s an incredible amount of compute.”

This can be helpful, for instance, with wells that stretch several miles in length. Other companies might be working in areas around those wells, and gen AI could alert to interference so that human users can proactively reach out to prevent disruption to either party, Braun explained.

Chevron also uses large language models (LLMs) to craft engineering standards, specifications and safety bulletins and other alerts, he said, and AI scientists are constantly fine-tuning models.

Read more at VentureBeat

Using Predictive and Gen AI to Improve Product Categorization at Walmart

📅 Date:

✍️ Author: Adnan Hassan

🔖 Topics: Generative AI

🏢 Organizations: Walmart


To optimally use the limited display space and enable customers to discover the most pertinent and appealing products within each department, we developed Ghotok, a cutting-edge AI technique which can effectively analyze and understand the relationships between different products and categories. The ultimate goal of this project is to save customers’ time by making their online shopping experience more efficient and fun.

Ghotok’s objective is to consider domain-specific contextual information to understand the many-to-many relationships between pairs drawn from two different product hierarchies (that is, Category and Product Type). To achieve this, Ghotok incorporates advances in both Predictive and Generative AI techniques to find the most relevant product types for each category. Instead of choosing one single model for both Predictive and Generative AI, we use an ensemble of models: ensemble learning combines the outputs of diverse ML models to create a more precise prediction. One benefit of this approach is that it dispenses with the requirement for customer engagement data (which is noisy as sometimes customers might click on items by mistake or out of curiosity) by leveraging a limited amount of human-labeled data. This makes the model effective for both frequently and rarely visited parts of our product hierarchies.
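
Walmart has not published Ghotok's code; the toy sketch below only illustrates the ensemble idea from the post, combining a predictive classifier trained on human-labeled pairs with a generative (LLM-based) relevance score. All names and numbers are invented for illustration.

```python
# Illustrative sketch only: combine a Predictive AI score and a Generative AI
# score to decide whether a (category, product-type) pair is related, using a
# small amount of human-labeled data instead of noisy engagement data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Predictive component: a classifier trained on human-labeled pairs,
# using embedding/hand-crafted features for (category, product_type).
clf = LogisticRegression()
X_train = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])  # toy features
y_train = np.array([1, 0, 1])                              # human labels
clf.fit(X_train, y_train)

def generative_score(category: str, product_type: str) -> float:
    """Placeholder for an LLM-based semantic relatedness score in [0, 1]."""
    # In practice this would prompt an LLM or compare LLM embeddings; stubbed here.
    return 0.75

def ensemble_score(features, category, product_type, w=0.5):
    p_pred = clf.predict_proba([features])[0, 1]
    p_gen = generative_score(category, product_type)
    return w * p_pred + (1 - w) * p_gen   # simple weighted ensemble

print(ensemble_score([0.8, 0.2], "Grilling", "Charcoal"))
```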

Read more at Walmart Global Tech

Building a generative AI reservoir simulation assistant with Stone Ridge Technology

📅 Date:

✍️ Authors: Dmitriy Tishechkin, Dan Kahn, Karthik Mukundakrishnan

🔖 Topics: Generative AI, Large Language Model, Retrieval Augmented Generation

🏢 Organizations: AWS, Stone Ridge Technology


In the field of reservoir simulation, accurate modeling is paramount for understanding and predicting the behavior of subsurface flow through geological formations. However, the complexities involved in creating, implementing, and optimizing these models often pose significant challenges, even for experienced professionals. Fortunately, the integration of artificial intelligence (AI) and large language models (LLMs) offers a transformative solution to streamline and enhance the reservoir simulation workflow. This post describes our efforts in developing an intelligent simulation assistant powered by Amazon Bedrock, Anthropic’s Claude, and Amazon Titan LLMs, aiming to revolutionize the way reservoir engineers approach simulation tasks.

Although not covered in this architecture, two key elements enhance this workflow significantly and are the topic of future exploration: 1) simulation execution using natural language by orchestration through a generative AI agent, and 2) multimodal generative AI (vision and text) analysis and interpretation of reservoir simulation results such as well production logs and 3D depth slices for pressure and saturation evolution. As future work, automating aspects of our current architecture is being explored using an agentic workflow framework as described in this AWS HPC post.
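
As a rough illustration of the assistant pattern described above, the sketch below sends a question plus retrieved manual excerpts to a foundation model on Amazon Bedrock. The model ID, prompt layout, and retrieval stub are assumptions, not Stone Ridge Technology's code.

```python
# Minimal sketch of querying a Bedrock-hosted model with RAG-style context,
# in the spirit of the reservoir simulation assistant described above.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

question = "What keywords control the time-stepping scheme in my simulation deck?"
retrieved_docs = "..."  # context retrieved from the simulator's manuals (RAG step)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
    messages=[{
        "role": "user",
        "content": [{"text": f"Context:\n{retrieved_docs}\n\nQuestion: {question}"}],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```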

Read more at AWS for Industries

From Aging to Agent-ing with Cognite’s Atlas AI

📅 Date:

✍️ Author: Jason Schern

🔖 Topics: Generative AI, Retrieval Augmented Generation

🏢 Organizations: Cognite


Cognite’s Atlas AI provides a low-code industrial agent builder that enables organizations to use generative AI to address domain-specific challenges with a deep understanding of industry context, terminology, and workflows. These agents leverage advanced technologies to provide expert guidance, and offer highly relevant insights in support of specific tasks.

Context Augmented Generation (CAG) is the evolution of Retrieval Augmented Generation (RAG). RAG retrieves information from external databases to enhance generative models. CAG does this while integrating content and data from multiple sources, such as real-time data, sensor inputs, user interactions, and historical data. CAG allows AI systems to generate more complex, context-aware responses.
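
The conceptual sketch below (not Cognite's implementation) shows the difference in practice: the prompt is assembled from retrieved documents, as in RAG, plus live sensor readings for the asset before it is sent to an LLM agent. The retrieval and data-fetch functions are stubs.

```python
# Conceptual CAG sketch: RAG retrieval plus real-time context in one prompt.
from datetime import datetime, timezone

def retrieve_documents(query: str) -> list[str]:
    """RAG step: fetch relevant docs from a vector store (stubbed)."""
    return ["Pump P-101 maintenance manual, section 4.2 ..."]

def live_sensor_context(asset_id: str) -> dict:
    """CAG step: pull current process values for the asset (stubbed)."""
    return {"vibration_mm_s": 7.8, "bearing_temp_C": 92,
            "ts": datetime.now(timezone.utc).isoformat()}

def build_prompt(query: str, asset_id: str) -> str:
    docs = "\n".join(retrieve_documents(query))
    sensors = live_sensor_context(asset_id)
    return (f"Documents:\n{docs}\n\n"
            f"Live data for {asset_id}: {sensors}\n\n"
            f"Question: {query}")

print(build_prompt("Why is pump P-101 vibrating?", "P-101"))
# The assembled prompt is then sent to an LLM agent for a context-aware answer.
```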

Read more at Cognite Blog

Synopsys and Microsoft: GenAI in Chip Design

New C.H. Robinson Technology Breaks a Decades-Old Barrier to Automation in the Logistics Industry

📅 Date:

🔖 Topics: Generative AI, Automated Quoting, Large Language Model

🏢 Organizations: CH Robinson


In another industry-leading innovation, C.H. Robinson has automated transactions that many shippers still conduct by email. It breaks a long-standing barrier to automation and gives shippers who use email the same speed-to-market and cost savings as shippers who are more digitally connected.

Using artificial intelligence, C.H. Robinson’s new technology classifies incoming email, reads it and replicates the steps a person would take to fulfill a customer’s request. For example, shippers often still choose to send an email asking for a price quote rather than log into a digital platform. On an average business day, the global logistics company receives over 11,000 emails from customers and carriers requesting pricing on truckload freight. While the technology is replying to 2,000 customer quote requests a day, it opens the door to automating other transactions shippers and carriers choose to do by email. The large language model (LLM) the technology uses can be trained to identify an email about a load tender, a pickup appointment or a shipment tracking update.

“Our customers can get instant price quotes through our Navisphere platform or any of the 35 largest TMS or ERP systems we’re integrated with. But for someone like a busy warehouse manager with unexpected spot freight or freight in a new lane, an email can just feel easier. Email works the same for everybody. It doesn’t ask for your password. There are no fields to fill in,” said Mark Albrecht, Vice President for Artificial Intelligence. “Before generative AI, replying to that email request defied automation. Customers had to wait for a human just to pass along a quote from our Dynamic Pricing Engine. Now, our new technology reads the email and supplies the quote in an average 2 minutes 13 seconds. C.H. Robinson is doing this at scale, leaving our people more time to help those same customers with more complex requests.”
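
A hypothetical sketch of the pattern described above follows: an LLM classifies an incoming freight email and extracts the fields needed to answer it automatically. The model name, schema, and client usage are assumptions, not C.H. Robinson's implementation.

```python
# Hedged sketch: classify a shipper email (quote request, load tender,
# appointment, tracking update) and extract quote fields with an LLM.
import json
from openai import OpenAI

client = OpenAI()

EMAIL = """Subject: Rate needed
Hi, can you quote a dry van load from Chicago, IL to Dallas, TX,
picking up Friday? About 40,000 lbs of packaged food. Thanks!"""

PROMPT = (
    "Classify this freight email as one of: quote_request, load_tender, "
    "appointment, tracking_update. If it is a quote_request, also extract "
    "origin, destination, equipment, pickup_date, and weight_lbs. "
    "Return JSON only.\n\n" + EMAIL
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": PROMPT}],
    response_format={"type": "json_object"},
)
fields = json.loads(resp.choices[0].message.content)
print(fields)
# The extracted fields would then be passed to a pricing engine and the reply
# email generated from the returned quote.
```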

Read more at Business Wire

United Airlines Turns To Generative AI To Help Explain Flight Delays

📅 Date:

✍️ Author: Steven Norton

🔖 Topics: Generative AI

🏢 Organizations: United Airlines


Now, AI can scan flight systems and, after a quick review from a human, send an alert letting you know the backup was caused by runway construction at SFO. The AI-assisted messages mark an evolution for Every Flight Has a Story, an initiative United launched in 2018 to give customers more information when things go wrong. United knows that more transparent messaging about flight issues can build trust with passengers, and providing even small details has had a positive impact on customer satisfaction metrics.

With generative AI, United’s team of roughly 15 storytellers is getting a tool that will help them handle more messages and provide more details to passengers, important ahead of the busy summer travel season. Around 6,000 flights have received at least one GenAI-assisted message since the rollout earlier this year, and there are plans to scale up quickly.

Read more at Forbes

GenAI-powered Battery Developer Chemix Closes Its $20M Series A Led by Ibex Investors

📅 Date:

🔖 Topics: Funding Event, Generative AI

🏢 Organizations: Chemix, Ibex Investors


Chemix, the pioneer in leveraging a vertically integrated and GenAI-powered self-driving lab for the rapid development of next-gen EV batteries, announced the completion of $20 million in Series A funding. The round was led by Ibex Investors, an investment firm with $1 billion in assets under management, with participation from Mayfield Fund, Berkeley SkyDeck, and Urban Innovation Fund. The round also included strategic investors including BNP Paribas Solar Impulse Venture Fund (SIVF), Global Brain’s KDDI Open Innovation Fund III (KDDI CVC), and Porsche Ventures, the venture capital arm of the sports car manufacturer Porsche AG.

To practically and effectively integrate GenAI into the battery development process, Chemix embarked on a journey from day one to build its own battery lab and collect its own data - creating its battery-specific MIX™ R&D platform. In less than three years, the company has tested 10,000 commercial-format pouch, cylindrical, and prismatic cells, screened through 4,000 unique electrolyte formulations and more than 2,000 pairs of unique electrode designs, and collected a total of 4.5 million cycles. The high-quality, specially curated, and AI-trainable data, all gathered through physical experimentation, have played a critical role in training Chemix’s proprietary transformer-based deep neural network, which then autonomously generates better materials and cell designs on a daily basis on behalf of battery engineers and scientists. For Chemix, this marks just the beginning of truly unleashing the full potential of GenAI to transform the decades-old battery industry, one data point at a time – a vision that is shared among both existing and newly joining Chemix investors.

Read more at Web Wire

How United Airlines uses AI to make flying the friendly skies a bit easier

📅 Date:

✍️ Author: Frederic Lardinois

🔖 Topics: Generative AI

🏭 Vertical: Aerospace

🏢 Organizations: United Airlines


When a flight is delayed, a message with an explanation will arrive by text and in the United app. Most of the time, that message is generated by AI. Meanwhile, in offices around the world, dispatchers are looking at this real-time data to ensure that the crew can still legally fly the plane without running afoul of FAA regulations. And only a few weeks ago, United turned on its AI customer service chatbot.

Not that long ago, it was rather typical to get a notification when a flight was delayed, but no further information about it. Maybe the incoming flight was delayed. Maybe there was a maintenance issue. A few years ago, United started using agents to write short notices that explained the delay and sent those out through its app and as text messages. Now, pulling in data from its chat app and other sources, the vast majority of these messages are written by AI.

Similarly, United is looking at also using generative AI to summarize flight information for its operations teams, so they can get a quick overview of what’s happening.

Later this year, United also plans to launch a tool that is currently called “Get Me Close.” Often, when there’s a delay, customers are willing to change their plans to switch to a nearby airport. I once had United switch me to a flight to Amsterdam when my flight to Berlin got canceled (not that close, but close enough to get a train and still moderate a keynote session the next morning).

Read more at TechCrunch

How Generative AI is Improving Information Retrieval for Engineers

📅 Date:

✍️ Author: Kai Finke

🔖 Topics: Generative AI

🏢 Organizations: igus


One immediate use case of advanced AI in the engineering space involves intelligent product selection. Instead of simply returning products that match a specific user search query, AI-driven search relies on image recognition to infer the technical context of an application photo and then recommend suitable engineered products—even if the user did not have enough information to explicitly search for that product. An example of this intelligent approach to engineering product search can be found in the new igusGO platform.

Read more at Machine Design

TSMC and Synopsys Bring Breakthrough NVIDIA Computational Lithography Platform to Production

📅 Date:

🔖 Topics: Lithography, Generative AI

🏭 Vertical: Semiconductor

🏢 Organizations: TSMC, Synopsys, NVIDIA


TSMC, the world’s leading foundry, and Synopsys, the leader in silicon to systems design solutions, have integrated NVIDIA cuLitho with their software, manufacturing processes and systems to speed chip fabrication, and in the future support the latest-generation NVIDIA Blackwell architecture GPUs. NVIDIA also introduced new generative AI algorithms that enhance cuLitho, a library for GPU-accelerated computational lithography, dramatically improving the semiconductor manufacturing process over current CPU-based methods.

Computational lithography is the most compute-intensive workload in the semiconductor manufacturing process, consuming tens of billions of hours per year on CPUs. A typical mask set for a chip — a key step in its production — could take 30 million or more hours of CPU compute time, necessitating large data centers within semiconductor foundries. With accelerated computing, 350 NVIDIA H100 systems can now replace 40,000 CPU systems, accelerating production time, while reducing costs, space and power.

Read more at NVIDIA News

Honeywell exec reveals plan to deliver $100 million in value with generative AI: “Just getting started”

📅 Date:

✍️ Author: Matt Marshall

🔖 Topics: Generative AI

🏢 Organizations: Honeywell


Jordan did not divulge which specific gen AI application is yielding the most value, but said the company has a range of generative projects that fall into five buckets, and that value is being realized across all of them. She said GitHub and large language models (LLMs) for operations, in particular, have shown early promise.

Honeywell has created a Generative AI Council, made up of representatives of the company’s functional and business departments. Each function and department has a plan for generative AI, and that translates into 24 active programs that have either been deployed or will be deployed over the next couple of months, Jordan said. Jordan is personally tracking the P&L of the projects, and how they are controlled, and works closely with a colleague who is running the Council. Moreover, the CEO holds an all-day staff meeting monthly, and generative AI is an agenda topic every month.

Read more at VentureBeat

2024 Cosmo Tech GenAI & Simulation Product Demonstration

Saudi Aramco unveils industry’s first generative AI model

📅 Date:

🔖 Topics: Generative AI, Foundation Model

🏢 Organizations: Aramco


Aramco’s AI model is a pioneering technology in the industrial sector. It has 250 billion parameters that are adjustable during training to generate outputs or make predictions. The AI was trained using seven trillion data points spanning more than 90 years of company history.

Amin H Nasser, CEO of Saudi Aramco, said the AI model would analyse drilling plans, geological data, historic drilling time and costs as well as recommend the most ideal well options. He added that for the company’s downstream business, “metabrain will have the capability to provide precise forecasts for refined products, including pricing trends, market dynamics and geopolitical insights”.

Aramco plans to develop a version with 1 trillion parameters by the end of this year.

Read more at Offshore Technology

Accelerate Semiconductor machine learning initiatives with Amazon Bedrock

📅 Date:

✍️ Author: Michael Wallner

🔖 Topics: Generative AI, Machine Learning

🏭 Vertical: Semiconductor

🏢 Organizations: AWS


Manufacturing processes generate large amounts of sensor data that can be used for analytics and machine learning models. However, this data may contain sensitive or proprietary information that cannot be shared openly. Synthetic data allows the distribution of realistic example datasets that preserve the statistical properties and relationships in the real data, without exposing confidential information. This enables more open research and benchmarking on representative data. Additionally, synthetic data can augment real datasets to provide more training examples for machine learning algorithms to generalize better. Data augmentation with synthetic manufacturing data can help improve model accuracy and robustness. Overall, synthetic data enables sharing, enhanced research abilities, and expanded applications of AI in manufacturing while protecting data privacy and security.
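
The minimal example below only illustrates the synthetic-data idea from the post: fit the joint statistics of real (confidential) sensor data and sample synthetic rows that preserve them. Production approaches typically use richer generative models (GANs, diffusion models, copulas); the column names and values here are invented.

```python
# Sketch: synthetic fab sensor data that preserves means and covariances.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for real fab sensor data: columns = [chamber_temp, pressure, rf_power]
real = rng.normal(loc=[450.0, 2.5, 300.0], scale=[5.0, 0.1, 10.0], size=(1000, 3))

mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Draw synthetic samples with the same first- and second-order statistics.
synthetic = rng.multivariate_normal(mu, cov, size=1000)

print("real mean:     ", np.round(mu, 2))
print("synthetic mean:", np.round(synthetic.mean(axis=0), 2))
# The synthetic set can be shared or used to augment training data without
# exposing the original measurements.
```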

Read more at AWS for Industry

How Audi improved their chat experience with Generative AI on Amazon SageMaker

📅 Date:

✍️ Authors: Fabrizio Siciliano, Bruno Pistone, Domenico Capano

🔖 Topics: Generative AI, Retrieval Augmented Generation

🏢 Organizations: AWS, Audi, Reply


Audi and Reply worked with Amazon Web Services (AWS) on a project to help improve their enterprise search experience through a Generative AI chatbot. The solution is based on a technique named Retrieval Augmented Generation (RAG), which uses AWS services such as Amazon SageMaker and Amazon OpenSearch Service. Ancillary capabilities are offered by other AWS services, such as Amazon Simple Storage Service (Amazon S3), AWS Lambda, Amazon CloudFront, Amazon API Gateway, and Amazon Cognito.

In this post, we discuss how Audi improved their chat experience by using a Generative AI solution on Amazon SageMaker, and dive deeper into the essential components of their chatbot by showcasing how to deploy and consume two state-of-the-art Large Language Models (LLMs): Falcon 7B-Instruct, designed for Natural Language Processing (NLP) tasks in specific domains where the model follows user instructions and produces the desired output, and Llama-2 13B-Chat, designed for conversational contexts where the model responds to users’ messages in a natural and engaging way.
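
A simplified sketch of the RAG flow described above is shown below: retrieve passages from an index (OpenSearch Service in the post), then send them plus the question to an LLM hosted on a SageMaker endpoint. The endpoint name, payload format, and retrieval stub are assumptions, not Audi's production code.

```python
# Sketch: retrieve-then-generate against a SageMaker-hosted LLM endpoint.
import json
import boto3

smr = boto3.client("sagemaker-runtime", region_name="eu-central-1")

def retrieve(query: str) -> list[str]:
    """Stub for an Amazon OpenSearch Service retrieval call (k-NN or BM25)."""
    return ["Company travel policy: rail is preferred for trips under 400 km ..."]

question = "When should I book rail instead of a flight?"
context = "\n".join(retrieve(question))
payload = {"inputs": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}

resp = smr.invoke_endpoint(
    EndpointName="audi-chat-llm",          # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(resp["Body"].read()))
```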

Read more at AWS Blog

Customize large language models with oil and gas terminology using Amazon Bedrock

📅 Date:

✍️ Authors: Walt Mayfield, Felipe Lopez

🔖 Topics: Generative AI, Large Language Model

🏭 Vertical: Petroleum and Coal

🏢 Organizations: AWS, Equinor


The Norwegian multinational energy company Equinor has made the Volve dataset, a set of drilling reports, available for research, study, and development purposes. (When using external data, be sure to abide by the license the data is offered under.) The dataset contains 1,759 daily drilling reports—each containing both hourly comments and a daily summary—from the Volve field in the North Sea. Drilling rig supervisors tend to use domain-specific terminology and grammar when describing operations in both the hourly comments and the daily summary. This terminology is standard in the industry, which is why fine-tuning a foundation model using these reports is likely to improve summarization accuracy by enhancing the LLM’s ability to understand jargon and speak like a drilling engineer.

Generative AI has the potential to improve efficiency by automating time-consuming tasks even in domains that require deep knowledge of industry-specific nomenclature and acronyms. Having a custom model that provides drilling engineers with a draft of daily activities has the potential to save hours of work every week. Model customization can also help energy and utilities customers in other applications that involve the generation of highly technical content, as is the case of geological analyses, maintenance reports, and shift handover reports.
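
As a rough illustration of the customization workflow described above, the sketch below pairs each day's hourly comments (prompt) with the engineer-written daily summary (completion) in the JSONL prompt/completion format used by Amazon Bedrock model-customization jobs. The field names of the source reports are assumptions made for illustration.

```python
# Sketch: prepare fine-tuning records from daily drilling reports.
import json

reports = [
    {
        "hourly_comments": "06:00 Drilled 8 1/2 in section to 3,450 m. 09:00 Circulated bottoms up...",
        "daily_summary": "Drilled 8 1/2 in section from 3,390 m to 3,450 m, circulated hole clean...",
    },
    # ... 1,759 reports in the full dataset
]

with open("volve_finetune.jsonl", "w") as f:
    for r in reports:
        record = {
            "prompt": "Summarize the following drilling operations:\n" + r["hourly_comments"],
            "completion": r["daily_summary"],
        }
        f.write(json.dumps(record) + "\n")

# The resulting file is uploaded to S3 and referenced when creating a Bedrock
# model-customization (fine-tuning) job for the chosen foundation model.
```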

Read more at AWS Blog

Integrating LLMs for Explainable Fault Diagnosis in Complex Systems

📅 Date:

✍️ Authors: Akshay J. Dave, Tat Nghia Nguyen, Richard B. Vilim

🔖 Topics: Generative AI, Large Language Model

🏢 Organizations: Argonne National Laboratory


This paper introduces an integrated system designed to enhance the explainability of fault diagnostics in complex systems, such as nuclear power plants, where operator understanding is critical for informed decision-making. By combining a physics-based diagnostic tool with a Large Language Model, we offer a novel solution that not only identifies faults but also provides clear, understandable explanations of their causes and implications. The system’s efficacy is demonstrated through application to a molten salt facility, showcasing its ability to elucidate the connections between diagnosed faults and sensor data, answer operator queries, and evaluate historical sensor anomalies. Our approach underscores the importance of merging model-based diagnostics with advanced AI to improve the reliability and transparency of autonomous systems.

Read more at arXiv

Cognite Announces Beta Launch of Generative AI-Powered Remote Operations Control Room for Celanese Clear Lake Facility

📅 Date:

🔖 Topics: Generative AI

🏭 Vertical: Chemical

🏢 Organizations: Cognite, Celanese


Cognite, a globally recognized leader in industrial software, today announced the beta launch of a generative AI-powered Remote Operations Control Room (ROCR) at the Celanese facility in Clear Lake, Texas. Celanese, a global chemical and specialty materials company, plans to use the ROCR to deliver full visibility into the real-time operation of its sites worldwide, thereby expediting workflows and gaining operational insights orders of magnitude more efficiently.

“By integrating generative AI into a Remote Operations Control Room, Cognite will increase visibility to our site leaders and their teams and enable a multitude of possibilities – from monitoring equipment performance to enhancing root cause analysis to streamlining and enhancing our processes,” said Brenda Stout, vice president of Acetyls Manufacturing at Celanese.

Read more at Cognite Press

Siemens and AWS join forces to democratize generative AI in software development

📅 Date:

🔖 Topics: Generative AI

🏢 Organizations: Siemens, AWS


Siemens and Amazon Web Services (AWS) are strengthening their partnership and making it easier for businesses of all sizes and industries to build and scale generative artificial intelligence (AI) applications. Domain experts in fields such as engineering and manufacturing, as well as logistics, insurance or banking will be able to create new and upgrade existing applications with the most advanced generative AI technology. To make this possible, Siemens is integrating Amazon Bedrock - a service that offers a choice of high-performing foundation models from leading AI companies via a single API, along with security, privacy, and responsible AI capabilities - with Mendix, the leading low-code platform that is part of the Siemens Xcelerator portfolio.

Read more at PR Newswire

Intel GenAI For Yield

📅 Date:

✍️ Author: Dylan Patel

🔖 Topics: Simulation, Generative AI, Diffusion Network, Generative Adversarial Network

🏭 Vertical: Semiconductor

🏢 Organizations: Intel


Diffusion networks are much better suited to the task. Real samples with added noise are used to train the model, which learns to denoise them. Crucially, diffusion networks in this application were able to replicate the long tails of the sample data distribution, thus providing accurate predictions of process yield.

In Intel’s research, SPICE parameters, used in the design phase as part of device simulation, are used as input for the deep learning model. Its output is the predicted electrical characteristics of the device as manufactured, or ETEST metrics. And the results show the model is capable of correctly predicting the distribution of ETEST metrics. Circuit yield is defined by the tails of this distribution. So, by correctly predicting the distribution of ETEST metrics, the model is correctly predicting yield.
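
The toy sketch below is not Intel's model; it only illustrates the mechanics named above: a conditional denoising-diffusion model trained on real samples with added noise, learning the distribution of ETEST metrics given SPICE parameters. Dimensions, schedule, and network are invented for illustration.

```python
# Toy conditional DDPM training step for tabular (SPICE -> ETEST) data.
import torch
import torch.nn as nn

T = 100                                        # diffusion steps
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(                      # predicts the added noise
    nn.Linear(4 + 8 + 1, 128), nn.ReLU(),      # [noisy ETEST(4) | SPICE(8) | t]
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 4),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

def train_step(etest: torch.Tensor, spice: torch.Tensor) -> float:
    """One DDPM training step on a batch of (ETEST, SPICE) pairs."""
    b = etest.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(etest)
    a_bar = alphas_bar[t].unsqueeze(1)
    noisy = a_bar.sqrt() * etest + (1 - a_bar).sqrt() * noise   # forward process
    pred = denoiser(torch.cat([noisy, spice, t.unsqueeze(1) / T], dim=1))
    loss = ((pred - noise) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Calling train_step over the fab dataset trains the denoiser; ancestral
# sampling from it then yields synthetic ETEST distributions whose tails
# define predicted yield.
```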

The potential here is clear: better optimization of chip yields at the design stage means lower costs. Fewer mask respins, shorter development times, and ultimately higher yield would all be strong differentiators for foundries and design teams that can implement models into their PDK/design flows.

Read more at Semi Analysis

Revolutionizing Design: The Power Of Generative AI

📅 Date:

🔖 Topics: Facility Design, Generative AI

🏢 Organizations: Parsons, LookX AI


One of the key benefits of Generative AI in architectural design is its ability to optimize designs for specific criteria or constraints. For example, an architect could use Gen-AI to explore different options for a building’s energy efficiency or structural stability. By inputting specific parameters such as materials, site conditions, and budget constraints into the algorithm, Gen-AI can generate multiple design options that meet those requirements (e.g. establishing the column numbers in a parking garage structure).

With text-to-image generation tools becoming more accessible and user-friendly, artists without extensive technical skills can create higher-quality digital art with ease. This breakthrough has the potential to streamline the design process for clients and architects, allowing both to bring concepts to life faster than ever before. The LookX.AI text-to-image application represents a significant step forward for visual content creation by enabling users to create high-quality imagery quickly and efficiently while also pushing boundaries beyond what was previously possible using traditional methods.

Read more at Parsons Blog

Explainable generative design in manufacturing for reinforcement learning based factory layout planning

📅 Date:

✍️ Authors: Matthias Klar, Patrick Ruediger, Maik Schuermann, Goren Tobias Gören, Moritz Glatt, Bahram Ravani, Jan C. Aurich

🔖 Topics: Generative AI, Generative Design, Facility Design, Reinforcement Learning

🏢 Organizations: RPTU Kaiserslautern


Generative design can be an effective approach to generate optimized factory layouts. One evolving topic in this field is the use of reinforcement learning (RL)-based approaches. Existing research has focused on the utilization of the approach without providing additional insights into the learned metrics and the derived policy. This information, however, is valuable from a layout planning perspective since the planner needs to ensure the trustworthiness and comprehensibility of the results. Furthermore, a deeper understanding of the learned policy and its influencing factors can help improve the manual planning process that follows as well as the acceptance of the results. These gaps in the existing approaches can be addressed by methods categorized as explainable artificial intelligence methods which have to be aligned with the properties of the problem and its audience. Consequently, this paper presents a method that will increase the trust in layouts generated by the RL approach. The method uses policy summarization and perturbation together with the state value evaluation. The method also uses explainable generative design for analyzing interrelationships between state values and actions at a feature level. The result is that the method identifies whether the RL approach learns the problem characteristics or if the solution is a result of random behavior. Furthermore, the method can be used to ensure that the reward function is aligned with the overall optimization goal and supports the planner in further detailed planning tasks by providing insights about the problem-defining interdependencies. The applicability of the proposed method is validated based on an industrial application scenario considering a layout planning case of 43 functional units. The results show that the method allows evaluation of the trustworthiness of the generated results by preventing randomly generated solutions from being considered in a detailed manual planning step. The paper concludes with a discussion of the results and a presentation of future research directions which also includes the transfer potentials of the proposed method to other application fields in RL-based generative design.

Read more at Journal of Manufacturing Systems

LLM-based Control Code Generation using Image Recognition

📅 Date:

✍️ Authors: Heiko Koziolek, Anne Koziolek

🔖 Topics: Generative AI, Large Language Model, ChatGPT, Programmable Logic Controller

🏢 Organizations: ABB, Karlsruhe Institute of Technology, Eastman Chemical


LLM-based code generation could save significant manual effort in industrial automation, where control engineers manually produce control logic for sophisticated production processes. Previous attempts at control logic code generation lacked methods to interpret schematic drawings from process engineers. Recent LLMs now combine image recognition, trained domain knowledge, and coding skills. We propose a novel LLM-based code generation method that generates IEC 61131-3 Structured Text control logic source code from Piping-and-Instrumentation Diagrams (P&IDs) using image recognition. We have evaluated the method in three case studies with industrial P&IDs and provide first evidence on the feasibility of such code generation, along with experiences on image recognition glitches.
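
A hedged sketch of the approach described in the abstract follows: send a P&ID image to a vision-capable LLM and ask for IEC 61131-3 Structured Text. The model name, prompt, and client usage are illustrative assumptions, not the authors' published pipeline.

```python
# Sketch: draft Structured Text control logic from a P&ID image via a
# multimodal LLM. Output is a starting point for engineer review, not final code.
import base64
from openai import OpenAI

client = OpenAI()

with open("pid_diagram.png", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

PROMPT = (
    "You are a control engineer. From this P&ID, generate IEC 61131-3 "
    "Structured Text control logic: declare variables for each instrument tag "
    "and implement the basic interlocks and level/flow control loops you can infer."
)

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": PROMPT},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{img_b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)   # draft ST code for engineer review
```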

Read more at arXiv

TwinCAT Chat integrates LLMs into the automation environment

Generative AI for Process Systems Engineering

Unleashing the Potential of Large Language Models in Robotics: RoboDK’s Virtual Assistant

📅 Date:

🔖 Topics: Generative AI, Large Language Model, Virtual Assistant

🏢 Organizations: RoboDK


The RoboDK Virtual Assistant is the first step towards a comprehensive generalized assistant for RoboDK. At its core is OpenAI’s GPT-3.5-turbo-0613 model. The model is provided with additional context about RoboDK in the form of an indexed database containing the RoboDK website, documentation, forum threads, blog posts, and more. The indexing process is done with LlamaIndex, a specialized data framework designed for this purpose. Thanks to this integration, the Virtual Assistant can swiftly provide valuable technical support to over 75% of user queries on the RoboDK forum, reducing the time spent searching through the website and documentation via manual methods. Users can expect to have an answer to their question in 5 seconds or less.
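
The minimal LlamaIndex sketch below shows the general pattern described above: index a local copy of documentation and answer questions over it. The path and default models are assumptions; RoboDK's production pipeline (website, forum, and blog ingestion) is more involved.

```python
# Sketch: index documentation with LlamaIndex and query it.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

docs = SimpleDirectoryReader("./robodk_docs").load_data()   # local doc dump
index = VectorStoreIndex.from_documents(docs)                # builds embeddings

query_engine = index.as_query_engine()
answer = query_engine.query("How do I calibrate a UR10 robot in RoboDK?")
print(answer)
```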

Read more at RoboDK Blog

Silicon Volley: Designers Tap Generative AI for a Chip Assist

📅 Date:

✍️ Author: Rick Merritt

🔖 Topics: Generative AI, Large Language Model, Computer-aided Design, Chip Design, Virtual Assistant

🏭 Vertical: Semiconductor

🏢 Organizations: NVIDIA


The work demonstrates how companies in highly specialized fields can train large language models (LLMs) on their internal data to build assistants that increase productivity.

The paper details how NVIDIA engineers created a custom LLM for internal use, called ChipNeMo, trained on the company’s internal data to generate and optimize software and assist human designers. Long term, engineers hope to apply generative AI to each stage of chip design, potentially reaping significant gains in overall productivity, said Ren, whose career spans more than 20 years in EDA. After surveying NVIDIA engineers for possible use cases, the research team chose three to start: a chatbot, a code generator and an analysis tool.

On chip-design tasks, custom ChipNeMo models with as few as 13 billion parameters match or exceed performance of even much larger general-purpose LLMs like LLaMA2 with 70 billion parameters. In some use cases, ChipNeMo models were dramatically better.

Read more at NVIDIA Blog

Celanese's Vision for an Autonomous, Self-Optimizing Plant Powered by Generative AI

📅 Date:

✍️ Author: Jonathan Katz

🔖 Topics: Generative AI

🏭 Vertical: Chemical

🏢 Organizations: Celanese, Cognite


One of the key things we’ve been planning to do in 2023 is scaling the (Cognite) platform, bringing all the data together, putting the right context, the right meaning to it, getting it contextualized and modeling it. As part of that investment, we’re using artificial intelligence and generative AI capabilities. But our artificial intelligence journey or generative artificial intelligence is only as good as our underlying data. So, the biggest effort for us has been to standardize the data on common data models, bring it all together, contextualize it and then start leveraging AI capabilities on top of that.

You have to make sure that whatever you’re architecting actually is intuitive and works and addresses the needs of the people. For example, you have this phone, right? I don’t need a user manual or training for this. It just works, and I am married to it. I can’t live without it. So we have to find the balance of making the right solutions for the people and keeping that in mind. Also, we have developed what we call a Digital Manufacturing Academy that is now available globally for all our users. And that academy is really around giving people the ability to upskill, have more data literacy, more digital literacy skills, and even give people the opportunity to start learning how to code, if they need to.

Read more at Chemical Processing

Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning

📅 Date:

✍️ Author: Angie Lee

🔖 Topics: Generative AI, Large Language Model, Industrial Robot, Reinforcement Learning

🏢 Organizations: NVIDIA


A new AI agent developed by NVIDIA Research that can teach robots complex skills has trained a robotic hand to perform rapid pen-spinning tricks — for the first time as well as a human can. The Eureka research, published today, includes a paper and the project’s AI algorithms, which developers can experiment with using NVIDIA Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on NVIDIA Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework. Eureka itself is powered by the GPT-4 large language model.
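
The heavily simplified sketch below shows the outer loop of this kind of approach: an LLM proposes a reward function as Python source, the function is used to train an RL policy in simulation, and the resulting score is fed back so the LLM can propose a better reward. The LLM call and RL training are stubbed; this is not NVIDIA's released Eureka code.

```python
# Conceptual Eureka-style loop: LLM-proposed reward functions refined by feedback.
def llm_propose_reward(task: str, feedback: str) -> str:
    """Stub: GPT-4 would return Python source for reward(state, action)."""
    return "def reward(state, action):\n    return -abs(state['pen_angle_error'])"

def train_policy_and_score(reward_fn) -> float:
    """Stub: train an RL policy in simulation (e.g., Isaac Gym) and return a task score."""
    return 0.42

best_score, best_reward_source, feedback = float("-inf"), "", ""
for iteration in range(5):
    source = llm_propose_reward("spin a pen between the fingers", feedback)
    namespace: dict = {}
    exec(source, namespace)                    # compile the proposed reward function
    score = train_policy_and_score(namespace["reward"])
    feedback = f"iteration {iteration}: task score {score:.2f}"
    if score > best_score:
        best_score, best_reward_source = score, source

print("best score:", best_score)
```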

Read more at NVIDIA Blog

To excel at engineering design, generative AI must learn to innovate, study finds

📅 Date:

✍️ Author: Jennifer Chu

🔖 Topics: Generative AI, Generative Design

🏢 Organizations: MIT


“Deep generative models (DGMs) are very promising, but also inherently flawed,” says study author Lyle Regenwetter, a mechanical engineering graduate student at MIT. “The objective of these models is to mimic a dataset. But as engineers and designers, we often don’t want to create a design that’s already out there.” He and his colleagues make the case that if mechanical engineers want help from AI to generate novel ideas and designs, they will have to first refocus those models beyond “statistical similarity.”

For instance, if DGMs can be built with other priorities, such as performance, design constraints, and novelty, Ahmed foresees “numerous engineering fields, such as molecular design and civil infrastructure, would greatly benefit. By shedding light on the potential pitfalls of relying solely on statistical similarity, we hope to inspire new pathways and strategies in generative AI applications outside multimedia.”

Read more at MIT News

Making Conversation: Using AI to Extract Intel from Industrial Machinery and Equipment

📅 Date:

✍️ Author: Rehana Begg

🔖 Topics: Large Language Model, Generative AI

🏭 Vertical: Automotive

🏢 Organizations: iNAGO


What if your machine could talk? This is the question Ron Di Carlantonio has grappled with since he founded iNAGO in 1998. iNAGO was onboard when the Government of Canada supported a lighthouse project led by the Automotive Parts Manufacturers’ Association (APMA) to design, engineer and build a connected and autonomous zero-emissions vehicle (ZEV) concept car and its digital twin that would validate and integrate autonomous technologies. The electric SUV is equipped with a dual-motor powertrain with total output of 550 hp and 472 lb-ft of torque.

The general use of AI-based solutions in the automotive industry stretches across the lifecycle of a vehicle, from design and manufacturing to sales and aftermarket care. AI-powered chatbots, in particular, deliver instant, personalized virtual driver assistance, are on call 24/7 and can evolve with the preferences of tech-savvy drivers. Di Carlantonio now sees an opportunity to extend the use of the intelligent assistant platform to the smart factory by making industrial equipment—CNC machines, presses, conveyors, industrial robots—talk.

Read more at Machine Design

Zebra Technologies Demonstrates Generative AI on Devices Powered by Qualcomm

📅 Date:

🔖 Topics: Partnership, Generative AI

🏢 Organizations: Zebra Technologies, Qualcomm


Zebra Technologies Corporation (NASDAQ: ZBRA), a leading digital solution provider enabling businesses to intelligently connect data, assets, and people, has successfully demonstrated a Generative Artificial Intelligence (GenAI) large language model (LLM) running on Zebra handheld mobile computers and tablets without needing connectivity to the cloud.

This breakthrough empowers Zebra partners and customers to unlock exciting productivity gains that will shape the future of work across industries from retail to warehouse and logistics to hospitality and healthcare. On-device execution of GenAI LLMs has the potential to empower front-line workers with new capabilities so they can deliver new outcomes for their end customers.

On-device AI can offer additional personalisation as well as enhanced privacy and security as data remains on the device. It also drives faster performance and lower costs as GenAI searches on the cloud can be expensive. A whitepaper published by Qualcomm Technologies, Inc. suggests that GenAI-based search cost per query is estimated to increase by ten times compared to traditional search methods. By removing the need to utilise the cloud, costs can be reduced.

Zebra’s TC53/TC58 and TC73/TC78 mobile computers and ET6x Series tablets powered by Qualcomm Technologies – together with Zebra’s asset visibility and intelligent automation solutions – deliver elevated data insight, analysis and recommendations, problem solving, planning and creativity. Front-line workers can utilise a smaller on-device model, even in rural, built-up and underground working environments where connection to the cloud may not be possible. Alternatively, users may switch to a cloud-based app or web browser GenAI tool via Zebra’s Wi-Fi 6/6E and 5G enabled devices.

Read more at Zebra News

Toyota Research Institute Unveils Breakthrough in Teaching Robots New Behaviors

📅 Date:

🔖 Topics: Industrial Robot, Generative AI, Diffusion Policy, Large Behavior Model

🏢 Organizations: Toyota


The Toyota Research Institute (TRI) announced a breakthrough generative AI approach based on Diffusion Policy to quickly and confidently teach robots new, dexterous skills. This advancement significantly improves robot utility and is a step towards building “Large Behavior Models (LBMs)” for robots, analogous to the Large Language Models (LLMs) that have recently revolutionized conversational AI.

TRI has already taught robots more than 60 difficult, dexterous skills using the new approach, including pouring liquids, using tools, and manipulating deformable objects. These achievements were realized without writing a single line of new code; the only change was supplying the robot with new data. Building on this success, TRI has set an ambitious target of teaching hundreds of new skills by the end of the year and 1,000 by the end of 2024.

Read more at Toyota Press

Solution Accelerator: LLMs for Manufacturing

📅 Date:

✍️ Authors: Will Block, Ramdas Murali, Nicole Lu, Bala Amavasai

🔖 Topics: Generative AI, Large Language Model

🏢 Organizations: Databricks


In this solution accelerator, we focus on item (3) above, which is the use case of augmenting field service engineers with a knowledge base in the form of an interactive, context-aware Q&A session. The challenge that manufacturers face is how to build and incorporate data from proprietary documents into LLMs. Training LLMs from scratch is a very costly exercise, costing hundreds of thousands if not millions of dollars.

Instead, enterprises can tap into pre-trained foundational LLM models (like MPT-7B and MPT-30B from MosaicML) and augment and fine-tune these models with their proprietary data. This brings down the costs to tens, if not hundreds of dollars, effectively a 10000x cost saving.
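
The sketch below illustrates the cost-saving pattern described above: start from an open pre-trained model and fine-tune it with parameter-efficient adapters (LoRA) on proprietary text instead of training from scratch. The model name, target modules, and hyperparameters are illustrative assumptions, not the accelerator's exact recipe.

```python
# Sketch: parameter-efficient fine-tuning of an open foundation model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mosaicml/mpt-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)

lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["Wqkv"],       # MPT's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)   # only adapter weights are trained

model.print_trainable_parameters()    # a small fraction of the 7B parameters
# Training then proceeds with a standard Trainer over tokenized maintenance
# manuals / field-service documents; the adapter is merged or served alongside
# the base model.
```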

Read more at Databricks Blog

The treacherous path to trustworthy Generative AI for Industry

📅 Date:

✍️ Author: Geir Engdahl

🔖 Topics: Generative AI, Large Language Model

🏢 Organizations: Cognite


Despite the awesome first impact ChatGPT made and the already significant efficiency gains programming copilots are delivering to developers, making LLMs serve non-developers – the vast majority of the workforce – by translating natural language prompts into API or database queries and returning readily usable analytics outputs is not quite so straightforward. Three primary challenges are:

  • Inconsistency of prompts to completions (no deterministic reproducibility between LLM inputs and outputs)
  • Nearly impossible to audit or explain LLM answers (once trained, LLMs are black boxes)
  • Coverage gap on niche domain areas that typically matter most to enterprise users (LLMs are trained on large corpora of internet data, heavily biased towards more generalist topics)

Read more at Cognite Blog

Frontline Copilot | The greatest advancement of the year?? | Digital Factory 2023

Chevron Phillips Chemical tackles generative AI with Databricks

Lumafield Introduces Atlas, an AI Co-Pilot for Engineers

📅 Date:

🔖 Topics: Generative AI, Large Language Model

🏢 Organizations: Lumafield


Lumafield today unveiled Atlas, a groundbreaking AI co-pilot that helps engineers work faster by answering questions and solving complex engineering and manufacturing challenges using plain language. Atlas is a new tool in Voyager, Lumafield’s cloud-based software for analyzing 3D scan and industrial CT scan data. Along with Atlas, Lumafield announced a major expansion of Voyager’s capabilities, including the ability to upload, analyze, and share data from any 3D scanner.

Read more at Lumafield Articles

🧠 AI PCB Design: How Generative AI Takes Us From Constraints To Possibilities

📅 Date:

✍️ Author: Michael Jackson

🔖 Topics: Generative AI, Generative Design

🏭 Vertical: Computer and Electronic

🏢 Organizations: Cadence


Cadence customers are already reaping the benefits of generative AI within our Joint Enterprise Data and AI (JedAI) Platform. Chip designers are using Cadence Cerebrus AI to design chips that are faster, cheaper, and more energy efficient. Now, we’re bringing this generative AI approach to an area of EDA that has traditionally been highly manual—PCB placement and routing.

Allegro X AI flips the PCB design process on its head. Rather than present the operator with a blank canvas, it will take a list of components and constraints that need to be satisfied in the end result and sift through a plethora of design possibilities, encompassing varied placement and routing options. This is hugely powerful for hardware engineers focused on design space exploration (DSE). This has long been par for the course in IC design yet it has more recently become critical to PCB due to the fact that today’s IC complexity doesn’t reduce when it gets onto the PCB—it increases.

However, it’s important to understand that this isn’t Cadence replacing traditional compute algorithms and automation approaches with AI. We remain as committed to accuracy and “correct by construction” as we’ve ever been, and while Allegro X AI is trained on extensive real-world datasets of successful and failed designs, we don’t use that data to determine correctness.

Read more at Semiconductor Engineering

🧠 Toyota and Generative AI: It’s Here, and This is How We’re Using It

📅 Date:

🔖 Topics: Generative AI, Predictive Maintenance

🏢 Organizations: Toyota


Toyota’s initial goal in 2016 was to engineer a resilient cloud safety system, and that led to the development of Safety Connect, a service powered by Drivelink from software company Toyota Connected North America (TCNA). The Safety Connect service is designed to leverage key data points from the vehicle to identify when a collision has occurred and send an automatic notification to call center agents. Should the driver become unconscious, telematics information can provide a more complete picture of the situation, enabling agents to contact authorities faster when it’s needed most.

Vehicle maintenance has also been a focus of AI-driven enhancements. Connected vehicles have hundreds of sensors, and we have been using data from these vehicles to build machine learning models for the most common maintenance items, including batteries, brakes, tires, and oil, and are currently investigating dozens of other components, using daily streaming data from millions of connected and consented vehicles. This suite of predictive maintenance models will help make customers aware of potential maintenance needs prior to component failures, so they can enjoy more reliable mobility experiences.

Read more at Toyota Pressroom

AI and AM: A Powerful Synergy

📅 Date:

✍️ Author: Robin Tuluie

🔖 Topics: Additive Manufacturing, Computer-aided Engineering, Generative AI


There’s an urgent opportunity, right now, to fully exploit the tools of computer-aided engineering (CFD, FEA, electromagnetic simulation and more) using the capabilities of AI. Yes, we’re talking about design optimization—but it’s optimization like never before, automated with machine learning, at a speed and level of precision far beyond what can be accomplished by most manufacturers today.

AI accomplishes this feat by solving the CFD or FEA equations in a non-traditional way: machine learning examines, and then emulates, the overall physical behavior of a design, not every single math problem that underlies that behavior.
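The “emulates the overall physical behavior” idea is the classic surrogate-model pattern: run a limited number of expensive CFD or FEA solves, fit a fast statistical emulator to the results, and let the optimizer query the emulator instead of the solver. A minimal sketch, assuming a hypothetical one-variable stress response and a Gaussian-process surrogate:

```python
# Surrogate-model sketch (illustrative). A Gaussian process is fitted to a
# handful of "expensive" solver evaluations of a hypothetical stress response,
# and the cheap surrogate is then queried densely to locate the minimum.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_fea(thickness_mm):
    """Stand-in for a full FEA solve: peak stress vs. wall thickness."""
    return 1.0 / thickness_mm + 0.05 * thickness_mm ** 2

# A few "expensive" training runs.
X_train = np.linspace(1.0, 8.0, 8).reshape(-1, 1)
y_train = expensive_fea(X_train).ravel()

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=2.0),
                                     normalize_y=True).fit(X_train, y_train)

# Query the surrogate densely -- thousands of evaluations cost almost nothing.
X_query = np.linspace(1.0, 8.0, 2_000).reshape(-1, 1)
y_pred = surrogate.predict(X_query)
best = X_query[np.argmin(y_pred), 0]
print(f"surrogate-predicted optimal thickness: {best:.2f} mm")
```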

Read more at Design and Development Today

🧑‍🏭🧠 Hitachi to use generative AI to pass expert skills to next generation

📅 Date:

✍️ Authors: Yoichiro Hiroi, Tsuyoshi Tamesue

🔖 Topics: Generative AI, Worker Training

🏢 Organizations: Hitachi


Japan’s Hitachi will utilize generative artificial intelligence to pass on expert skills in maintenance and manufacturing to newer workers, aiming to blunt the impact of mass retirements of experienced employees. The company will use the technology to generate videos depicting difficulties or accidents at railways, power stations and manufacturing plants and use them in virtual training for employees.

Hitachi has already developed an AI system that creates images based on 3D data of plants and infrastructure. It projects possible malfunctions – smoke, a cave-in, a rail buckling – onto an image of an actual rail track. This can also be done on images of manufacturing sites, including metal processing and assembly lines. Hitachi will merge this technology into a program for virtual drills that is now under development.

Read more at Nikkei Asia

⛓️🧠 Multinationals turn to generative AI to manage supply chains

📅 Date:

✍️ Author: Oliver Telling

🔖 Topics: Generative AI, Supply Chain Control Tower

🏢 Organizations: Unilever, Siemens, Maersk, Pactum, Walmart, Scoutbee, Altana Technologies


Navneet Kapoor, chief technology officer at Maersk, said “things have changed dramatically over the past year with the advent of generative AI”, which can be used to build chatbots and other software that generates responses to human prompts.

New supply chain laws in countries such as Germany, which require companies to monitor environmental and human rights issues in their supply chains, have driven interest and investment in the area.

Read more at Financial Times

U. S. Steel Aims to Improve Operational Efficiencies and Employee Experiences with Google Cloud’s Generative AI

📅 Date:

🔖 Topics: Generative AI

🏢 Organizations: US Steel, Google


United States Steel Corporation (NYSE: X) (“U. S. Steel”) and Google Cloud today announced a new collaboration to build applications using Google Cloud’s generative artificial intelligence (“gen AI”) technology to drive efficiencies and improve employee experiences in the largest iron ore mine in North America. As a leading manufacturer engaging in gen AI with Google Cloud, U. S. Steel continues to advance its more than 100-year legacy of innovation.

The first gen AI-driven application that U. S. Steel will launch is called MineMind™, which aims to simplify equipment maintenance by providing optimal solutions for mechanical problems, saving time and money, and ultimately improving productivity. Underpinned by Google Cloud AI technologies such as Document AI and Vertex AI, MineMind™ is expected not only to improve the maintenance team’s experience by putting the information they need at their fingertips, but also to save costs through more efficient use of technicians’ time and better-maintained trucks. The initial phase of the launch will begin in September and will cover more than 60 haul trucks at U. S. Steel’s Minnesota Ore Operations facilities, Minntac and Keetac.

Read more at Business Wire

Utility AI Beta

Ansys Accelerates Innovation by Expanding AI Offerings with New Virtual Assistant

📅 Date:

🔖 Topics: Generative AI

🏢 Organizations: Ansys, Microsoft


Expanding artificial intelligence (AI) integration across its simulation portfolio and customer community, Ansys (NASDAQ: ANSS) announced the limited beta release of AnsysGPT, a multilingual, conversational AI virtual assistant set to revolutionize the way Ansys customers receive support. Developed using state-of-the-art ChatGPT technology available via the Microsoft Azure OpenAI Service, AnsysGPT uses well-sourced Ansys public data to answer technical questions concerning Ansys products, relevant physics, and engineering topics within one comprehensive tool.

Expected in early 2024, AnsysGPT will optimize technical support for customers, delivering information and solutions more efficiently and furthering the democratization of simulation. While currently in beta testing with select customers and channel partners, upon its full release next year AnsysGPT will provide easily accessible 24/7 technical support through the Ansys website. Unlike general AI virtual assistants that draw on unvetted information, AnsysGPT is trained on Ansys data to generate tailored, applicable responses drawn from reliable Ansys resources including, but not limited to, Ansys Innovation Courses, technical documentation, blog articles, and how-to videos. Strong controls were put in place to ensure that no proprietary information of any kind was used during the training process, and that customer inputs are not stored or used to train the system in any way.

Read more at Ansys News

Sight Machine Factory CoPilot Democratizes Industrial Data With Generative AI

📅 Date:

🔖 Topics: Generative AI

🏢 Organizations: Sight Machine, Microsoft


Sight Machine Inc. today announced the release of Factory CoPilot, democratizing industrial data through the power of generative artificial intelligence. By integrating Sight Machine’s Manufacturing Data Platform with Microsoft Azure OpenAI Service, Factory CoPilot brings unprecedented ease of access to manufacturing problem solving, analysis and reporting.

Using a natural language user interface similar to ChatGPT, Factory CoPilot offers an intuitive, “ask the expert” experience for all manufacturing stakeholders, regardless of data proficiency. In response to a single question, Factory CoPilot can automatically summarize all relevant production data in real time (e.g., for daily meetings) and generate user-friendly reports, emails, charts, and other content (in any language) about the performance of any machine, line, or plant across the manufacturing enterprise, based on contextualized data in the Sight Machine platform.

Read more at Sight Machine Press

Retentive Network: A Successor to Transformer for Large Language Models

📅 Date:

✍️ Authors: Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei

🔖 Topics: Retentive Network, Transformer, Large Language Model, Generative AI


In this work, we propose Retentive Network (RetNet) as a foundation architecture for large language models, simultaneously achieving training parallelism, low-cost inference, and good performance. We theoretically derive the connection between recurrence and attention. We then propose the retention mechanism for sequence modeling, which supports three computation paradigms: parallel, recurrent, and chunkwise recurrent. Specifically, the parallel representation allows for training parallelism. The recurrent representation enables low-cost O(1) inference, improving decoding throughput, latency, and GPU memory usage without sacrificing performance. The chunkwise recurrent representation facilitates efficient long-sequence modeling with linear complexity, where each chunk is encoded in parallel while the chunks are summarized recurrently. Experimental results on language modeling show that RetNet achieves favorable scaling results, parallel training, low-cost deployment, and efficient inference. These intriguing properties make RetNet a strong successor to Transformer for large language models.
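The paper’s central equivalence is straightforward to sketch. Omitting RetNet’s rotary phase terms, multi-scale decay, and normalization, the simplified retention output can be computed either in parallel over the whole sequence (used for training) or recurrently one token at a time with an O(1) state (used for inference), and the two forms agree:

```python
# Simplified retention (illustrative; ignores RetNet's rotary phase terms,
# multi-scale heads, and normalization). The parallel and recurrent forms
# produce identical outputs, which is the property that gives training
# parallelism and O(1)-state inference.
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4            # sequence length, head dimension
gamma = 0.9            # decay factor
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))

# Parallel form: (Q K^T * D) V, where D[n, m] = gamma**(n-m) for m <= n, else 0.
n_idx, m_idx = np.arange(T)[:, None], np.arange(T)[None, :]
D = np.where(n_idx >= m_idx, gamma ** (n_idx - m_idx), 0.0)
out_parallel = (Q @ K.T * D) @ V

# Recurrent form: S_n = gamma * S_{n-1} + k_n^T v_n,  o_n = q_n S_n.
S = np.zeros((d, d))
out_recurrent = np.zeros((T, d))
for n in range(T):
    S = gamma * S + np.outer(K[n], V[n])
    out_recurrent[n] = Q[n] @ S

print(np.allclose(out_parallel, out_recurrent))   # True
```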

Read more at arXiv

LongNet: Scaling Transformers to 1,000,000,000 Tokens

📅 Date:

✍️ Authors: Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei

🔖 Topics: Transformer, Large Language Model, Generative AI


Scaling sequence length has become a critical demand in the era of large language models. However, existing methods struggle with either computational complexity or model expressivity, leaving the maximum sequence length restricted. To address this issue, we introduce LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens without sacrificing performance on shorter sequences. Specifically, we propose dilated attention, which expands the attentive field exponentially as the distance grows. LongNet has significant advantages: 1) it has linear computational complexity and a logarithmic dependency between any two tokens in a sequence; 2) it can serve as a distributed trainer for extremely long sequences; 3) its dilated attention is a drop-in replacement for standard attention and can be seamlessly integrated with existing Transformer-based optimizations. Experimental results demonstrate that LongNet yields strong performance on both long-sequence modeling and general language tasks. Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence.
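A rough sketch of the dilated-attention sparsity pattern described above: the sequence is split into segments of length w, and only every r-th position within a segment participates, so per-segment attention cost shrinks by a factor of r² while the number of segments grows only linearly with sequence length. (This toy omits LongNet’s mixing of multiple (w, r) configurations and head-dependent offsets.)

```python
# Toy illustration of a single dilated-attention configuration (segment
# length w, dilation r). Attention is only computed among the selected
# positions inside each segment, giving overall cost linear in N.
import numpy as np

def dilated_segments(N, w, r, offset=0):
    """Return the index set attended within each segment."""
    segments = []
    for start in range(0, N, w):
        idx = np.arange(start + offset, min(start + w, N), r)
        segments.append(idx)
    return segments

N, w, r = 32, 8, 2
for seg in dilated_segments(N, w, r):
    print(seg)
# Dense attention would touch N*N = 1024 query-key pairs; this pattern
# touches (N // w) * (w // r) ** 2 = 64 pairs for the same sequence.
print("sparse pairs:", (N // w) * (w // r) ** 2)
```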

Read more at arXiv

🧠 Toyota Research Institute Unveils New Generative AI Technique for Vehicle Design

📅 Date:

🔖 Topics: Generative AI

🏭 Vertical: Automotive

🏢 Organizations: Toyota


Toyota Research Institute (TRI) today unveiled a generative artificial intelligence (AI) technique to amplify vehicle designers. Currently, designers can leverage publicly available text-to-image generative AI tools as an early step in their creative process. With TRI’s new technique, designers can add initial design sketches and engineering constraints into this process, cutting down the iterations needed to reconcile design and engineering considerations.

TRI researchers released two papers describing how the technique incorporates precise engineering constraints into the design process. Constraints like drag (which affects fuel efficiency) and chassis dimensions like ride height and cabin dimensions (which affect handling, ergonomics, and safety) can now be implicitly incorporated into the generative AI process. The team tied principles from optimization theory, used extensively for computer-aided engineering, to text-to-image-based generative AI. The resulting algorithm allows the designer to optimize engineering constraints while maintaining their text-based stylistic prompts to the generative AI process.
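The papers describe the method precisely; as a loose, purely illustrative analogy for constraint-aware generation, the toy below nudges a design vector toward a higher “style” score while a penalty term keeps a hypothetical drag coefficient under a target value. Every function and number here is made up for illustration and is not TRI’s algorithm.

```python
# Toy constraint-penalized refinement (illustrative only; not TRI's method).
# A design vector is nudged toward a higher "style" score while a penalty
# keeps a hypothetical drag coefficient below a target value.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)                 # hypothetical design latent

def style_score(x):                        # stand-in for prompt adherence
    return -np.sum((x - 1.0) ** 2)

def drag_coefficient(x):                   # stand-in for an aero estimate
    return 0.25 + 0.02 * np.sum(x ** 2)

TARGET_CD, LAMBDA, LR = 0.30, 50.0, 0.05

def objective_grad(x, eps=1e-4):
    """Numerical gradient of style score minus constraint penalty."""
    def obj(v):
        penalty = max(0.0, drag_coefficient(v) - TARGET_CD) ** 2
        return style_score(v) - LAMBDA * penalty
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (obj(x + d) - obj(x - d)) / (2 * eps)
    return g

for _ in range(500):                       # simple gradient ascent
    x = x + LR * objective_grad(x)

print(f"style={style_score(x):.3f}  Cd={drag_coefficient(x):.3f}")
```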

Read more at Toyota Newsroom

3DGPT - your 3D printing friend & collaborator!

Demo: Cognite Data Fusion's Generative AI Copilot

Chemix Brings Transformative AI Technologies to EV Battery Industry, Launching the First AI-Designed Battery in 2023

📅 Date:

🔖 Topics: Generative AI, Generative Design, Materials Science

🏭 Vertical: Electrical Equipment

🏢 Organizations: Chemix


Chemix, the startup leveraging AI to rapidly build high-performance and environmentally sustainable EV batteries, unveiled MIX™, its AI-powered design platform specifically developed to accelerate the commercialization of next-generation EV batteries. By leveraging MIX, Chemix is poised to revolutionize the decades-old EV battery industry, similar to how AI has transformed drug discovery by accelerating pharmaceutical research and development. This will enable the pace of battery innovation to finally catch up with the ever-growing demand for better-performing, safer, and more sustainable EV batteries.

A battery is to an electric vehicle what a processor is to a computer – a critical technical component determining the entire system’s performance. Despite this central importance, the approach researchers have used to develop new battery materials and systems has remained largely unchanged for decades – until now. As opposed to the conventional method that relies on time- and labor-intensive processes for battery development and testing, Chemix adopts a revolutionary AI-based approach. This accelerates the discovery of the best battery materials, formulations, and recipes by leveraging large proprietary experimental datasets and cutting-edge proprietary algorithms. As a result, the company has created a vertically-integrated end-to-end battery development approach from scratch.

Chemix’s innovation combines battery-specific AI algorithms with its vast store of collected data to accelerate and optimize battery design, all seamlessly integrated with the MIX platform. Chemix has used MIX to experimentally test over 2,000 unique battery material designs across more than 40 variations of commercially relevant battery formats, accumulating nearly three million test cycles.

Read more at Web Wire

🧠 What is Visual Prompting?

📅 Date:

✍️ Author: Mark Sabini

🔖 Topics: Generative AI

🏢 Organizations: Landing AI


Landing AI’s Visual Prompting capability is an innovative approach that takes text prompting, used in applications such as ChatGPT, to computer vision. The impressive part? With only a few clicks, you can transform an unlabeled dataset into a deployed model in mere minutes. This results in a significantly simplified, faster, and more user-friendly workflow for applying computer vision.

In a quest to make Visual Prompting more practical for customers, we studied 40 projects across the manufacturing, agriculture, medical, pharmaceutical, life sciences, and satellite imagery verticals. Our analysis revealed that Visual Prompting alone could solve just 10% of the cases, but the addition of simple post-processing logic increases this to 68%.
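The article does not spell out what that post-processing looks like, but a representative (hypothetical) example is cleaning up a predicted defect mask and applying a minimum-blob-size rule before calling pass or fail:

```python
# Hypothetical post-processing on a Visual Prompting segmentation mask
# (illustrative; not Landing AI's implementation). Small speckles are
# removed, and the part is rejected only if a defect blob exceeds a
# minimum area in pixels.
import numpy as np
from scipy import ndimage

MIN_DEFECT_AREA = 20   # pixels; tuned per application (assumed value)

def inspect(mask: np.ndarray) -> bool:
    """Return True if the part passes inspection."""
    cleaned = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    labels, n_blobs = ndimage.label(cleaned)
    if n_blobs == 0:
        return True
    areas = ndimage.sum(cleaned, labels, index=range(1, n_blobs + 1))
    return not np.any(np.asarray(areas) >= MIN_DEFECT_AREA)

# Toy mask: a 5x5 "defect" plus one speckle pixel that the opening removes.
mask = np.zeros((64, 64), dtype=bool)
mask[10:15, 10:15] = True
mask[40, 40] = True
print("pass" if inspect(mask) else "fail")   # fails on the 25-pixel blob
```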

Read more at Landing AI Blog

Retrocausal Revolutionizes Manufacturing Process Management with Industry-First Generative AI LeanGPT™ offering

📅 Date:

🔖 Topics: Generative AI, ChatGPT

🏢 Organizations: Retrocausal


Retrocausal, a leading manufacturing process management platform provider, today announced the release of LeanGPT™, its proprietary foundation models specialized for the manufacturing domain. The company also launched Kaizen Copilot™, Retrocausal’s first LeanGPT application, which assists industrial engineers in designing and continuously improving manufacturing assembly processes and integrates the Lean Six Sigma and Toyota Production System (TPS) principles favored by industrial engineers (IEs). The industry-first solution gathers intelligence from Retrocausal’s computer vision and IoT-based floor analytics platform, Pathfinder. In addition, it can be connected securely to an organization’s knowledge bases, including Continuous Improvement (CI) systems, Quality Management Systems (QMS), and Manufacturing Execution Systems (MES).

Read more at Globe Newswire

What does it take to talk to your Industrial Data in the same way we talk to ChatGPT?

📅 Date:

✍️ Author: Jason Schern

🔖 Topics: Generative AI, Large Language Model

🏢 Organizations: Cognite


The vast data set used to train LLMs is curated in various ways to provide clean, contextualized data. Contextualized data includes explicit semantic relationships within the data that can greatly affect the quality of the model’s output. Contextualizing the data we provide as input to an LLM ensures that the text consumed is relevant to the task at hand. For example, when prompting an LLM to provide information about operating industrial assets, the data provided to the LLM should include not only the data and documents related to those assets but also the explicit and implicit semantic relationships across different data types and sources.

An LLM is trained by parceling text data into smaller collections, or chunks, that can be converted into embeddings. An embedding is simply a sophisticated numerical representation of the ‘chunk’ of text that takes into consideration the context of surrounding or related information. This makes it possible to perform mathematical calculations to compare similarities, differences, and patterns between different ‘chunks’ to infer relationships and meaning. These mechanisms enable an LLM to learn a language and understand new data that it has not seen previously.
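As a toy illustration of chunking, embeddings, and similarity search, the sketch below splits a document into overlapping chunks, embeds each with a deliberately crude hashed character-trigram function (standing in for a real embedding model), and retrieves the chunk most similar to a question by cosine similarity. The corpus, question, and embedding function are all hypothetical.

```python
# Toy chunk-and-embed retrieval (illustrative). A hashed n-gram "embedding"
# stands in for a real embedding model; the pattern -- chunk, embed,
# compare by cosine similarity -- is the same one used to feed an LLM
# only the text relevant to a question.
import numpy as np

def chunk(text: str, size: int = 80, overlap: int = 20) -> list[str]:
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Hashed character-trigram embedding (stand-in for a real model)."""
    v = np.zeros(dim)
    t = text.lower()
    for i in range(len(t) - 2):
        v[hash(t[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

corpus = (
    "Pump P-101 shows elevated bearing vibration after the last oil change. "
    "Compressor C-201 discharge temperature is trending normally. "
    "Work order WO-5521 replaced the P-101 coupling in March."
)
chunks = chunk(corpus)
chunk_vecs = np.stack([embed(c) for c in chunks])

question = "What maintenance has been done on pump P-101?"
scores = chunk_vecs @ embed(question)      # cosine similarity (unit vectors)
print(chunks[int(np.argmax(scores))])      # most relevant chunk
```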

Read more at Cognite Blog

Will Generative AI finally turn data swamps into contextualized operations insight machines?

📅 Date:

🔖 Topics: Large Language Model, Generative AI

🏢 Organizations: Cognite


Generative AI, such as ChatGPT/GPT-4, has the potential to put industrial digital transformation into hyperdrive. Whereas a process engineer might spend several hours performing “human contextualization” (at an hourly rate of $140 or more) manually – again and again – contextualized industrial knowledge graphs provide the trusted data relationships that enable Generative AI to accurately navigate and interpret data for Operators without requiring data engineering or coding competencies.
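Conceptually, a contextualized knowledge graph is a set of explicit links between assets, documents, and time series that can be traversed at query time to assemble grounded context for an LLM prompt. A minimal sketch with entirely hypothetical asset data (not Cognite’s data model):

```python
# Minimal knowledge-graph-to-prompt sketch (illustrative; not Cognite's
# data model). Relationships between an asset, its documents, and its time
# series are traversed to assemble grounded context for an LLM prompt.
graph = {
    "pump:P-101": {
        "type": "asset",
        "documents": ["doc:P-101-datasheet", "doc:WO-5521"],
        "timeseries": ["ts:P-101-vibration", "ts:P-101-flow"],
    },
    "doc:P-101-datasheet": {"title": "P-101 centrifugal pump datasheet"},
    "doc:WO-5521": {"title": "Work order: coupling replacement on P-101"},
    "ts:P-101-vibration": {"unit": "mm/s", "latest": 7.2},
    "ts:P-101-flow": {"unit": "m3/h", "latest": 118.0},
}

def context_for(asset_id: str) -> str:
    """Follow the graph edges from an asset and format grounded context."""
    node = graph[asset_id]
    lines = [f"Asset: {asset_id}"]
    lines += [f"Document: {graph[d]['title']}" for d in node["documents"]]
    lines += [f"Timeseries {t}: {graph[t]['latest']} {graph[t]['unit']}"
              for t in node["timeseries"]]
    return "\n".join(lines)

prompt = (
    "Answer using only the context below.\n\n"
    + context_for("pump:P-101")
    + "\n\nQuestion: Is P-101 vibration above its 6 mm/s alarm limit?"
)
print(prompt)
```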

Read more at Cognite Blog