Retrieval Augmented Generation
Assembly Line
Powering Intelligent Factory Operations with Cognizant’s APEx Factory Whisperer and AWS
In this blog post, we expand on a prior post about Cognizant’s APEx solution and demonstrate how AWS IoT SiteWise, AWS IoT TwinMaker, and Amazon Bedrock are used to provide expert guidance for mitigating critical issues in the manufacturing plant. The solution parses data from the manufacturing environment, including operational data, alarms, historical trends, troubleshooting results, workshop manuals, standard operating procedures (SOPs), and Piping and Instrumentation Diagrams (P&IDs) of a factory and its associated assets. We describe how customers use “Factory Whisperer”, Cognizant’s APEx generative AI assistant solution, to increase uptime, improve quality, and reduce operating costs for manufacturing organizations.
Factory Whisperer uses several different AWS services to provide expert-level guidance to plant managers and maintenance teams. When a user asks Factory Whisperer about a problem, the solution uses natural language processing to understand the question. It then retrieves relevant information from a corporate knowledge base containing manuals, procedures, and past troubleshooting data, as well as real-time and historical sensor recordings. The solution then applies a Retrieval-Augmented Generation (RAG) technique: it passes this contextual information to a large language model, which produces a tailored, expert-like response suggesting steps to diagnose and fix the issue.
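The retrieve-then-generate flow described above can be sketched in a few lines. This is a minimal illustration of the RAG pattern only: the keyword-overlap retriever, the sample documents, and the generate() stub are hypothetical stand-ins, not Factory Whisperer’s actual implementation (which would use a vector store and an Amazon Bedrock model call).

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(prompt: str) -> str:
    """Stand-in for an LLM call (in practice, e.g. Amazon Bedrock)."""
    return f"Suggested diagnostic steps based on:\n{prompt}"

def answer(question: str, knowledge_base: list[str]) -> str:
    # Combine retrieved context with the question into one augmented prompt.
    context = "\n".join(retrieve(question, knowledge_base))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

knowledge_base = [
    "SOP-12: If pump vibration exceeds 5 mm/s, inspect bearing alignment.",
    "Manual: Conveyor belt tension should be checked weekly.",
    "Troubleshooting log: pump P-101 vibration traced to a worn coupling.",
]
print(answer("Why is pump p-101 vibrating?", knowledge_base))
```

The key design point is that the model never answers from its parametric memory alone; every response is grounded in the documents the retriever surfaces for that specific question.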
Building a generative AI reservoir simulation assistant with Stone Ridge Technology
In the field of reservoir simulation, accurate modeling is paramount for understanding and predicting the behavior of subsurface flow through geological formations. However, the complexities involved in creating, implementing, and optimizing these models often pose significant challenges, even for experienced professionals. Fortunately, the integration of artificial intelligence (AI) and large language models (LLMs) offers a transformative solution to streamline and enhance the reservoir simulation workflow. This post describes our efforts in developing an intelligent simulation assistant powered by Amazon Bedrock, Anthropic’s Claude, and Amazon Titan LLMs, aiming to revolutionize the way reservoir engineers approach simulation tasks.
Although not covered in this architecture, two key elements enhance this workflow significantly and are the topic of future exploration: 1) simulation execution using natural language by orchestration through a generative AI agent, and 2) multimodal generative AI (vision and text) analysis and interpretation of reservoir simulation results such as well production logs and 3D depth slices for pressure and saturation evolution. As future work, automating aspects of our current architecture is being explored using an agentic workflow framework as described in this AWS HPC post.
From Aging to Agent-ing with Cognite’s Atlas AI
Cognite’s Atlas AI provides a low-code industrial agent builder that enables organizations to use generative AI to address domain-specific challenges with a deep understanding of industry context, terminology, and workflows. These agents leverage advanced technologies to provide expert guidance, and offer highly relevant insights in support of specific tasks.
Context Augmented Generation (CAG) is an evolution of Retrieval Augmented Generation (RAG). Where RAG retrieves information from external databases to enhance generative models, CAG additionally integrates content and data from multiple sources, such as real-time data, sensor inputs, user interactions, and historical data. This allows AI systems to generate more complex, context-aware responses.
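The difference from plain RAG is in how the prompt context is assembled before generation. The sketch below illustrates that idea only; the source names and the build_context() helper are hypothetical, not Cognite’s Atlas AI API.

```python
def build_context(sensor_readings: dict[str, float],
                  history: list[str],
                  docs: list[str]) -> str:
    """Merge live sensor data, recent events, and reference documents
    into a single context block for the generative model."""
    sensor_block = "\n".join(f"{tag}: {value}" for tag, value in sensor_readings.items())
    return (
        "Live sensor data:\n" + sensor_block
        + "\n\nRecent history:\n" + "\n".join(history)
        + "\n\nReference documents:\n" + "\n".join(docs)
    )

context = build_context(
    sensor_readings={"compressor_temp_C": 96.4, "discharge_pressure_bar": 7.1},
    history=["09:12 high-temperature alarm acknowledged by operator"],
    docs=["Manual 4.2: shut down if compressor temperature exceeds 100 C."],
)
prompt = f"{context}\n\nTask: assess compressor health and recommend actions."
print(prompt)
```

In a real deployment each section would be populated dynamically, so the model always reasons over the current plant state alongside the retrieved documentation.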
How Audi improved their chat experience with Generative AI on Amazon SageMaker
Audi and Reply worked with Amazon Web Services (AWS) on a project to help improve their enterprise search experience through a generative AI chatbot. The solution is based on a technique named Retrieval Augmented Generation (RAG) and uses AWS services such as Amazon SageMaker and Amazon OpenSearch Service. Ancillary capabilities are provided by other AWS services, such as Amazon Simple Storage Service (Amazon S3), AWS Lambda, Amazon CloudFront, Amazon API Gateway, and Amazon Cognito.
In this post, we discuss how Audi improved their chat experience by using a generative AI solution on Amazon SageMaker, and we dive deeper into the essential components of their chatbot by showcasing how to deploy and consume two state-of-the-art large language models (LLMs): Falcon 7B-Instruct, designed for natural language processing (NLP) tasks in specific domains where the model follows user instructions and produces the desired output, and Llama-2 13B-Chat, designed for conversational contexts where the model responds to users’ messages in a natural, engaging way.
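The two models expect differently shaped prompts, which matters when consuming them behind a shared chat endpoint. The sketch below shows the published Llama-2 chat template ([INST]/<<SYS>> markers) next to a plain instruction-style prompt; the helper function names are our own, not part of the Audi solution.

```python
def falcon_instruct_prompt(instruction: str) -> str:
    """Instruct-style models take a plain instruction followed by an answer cue."""
    return f"{instruction}\nAnswer:"

def llama2_chat_prompt(system: str, user: str) -> str:
    """Llama-2 chat models expect the [INST] wrapper with an
    optional <<SYS>> block holding the system message."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

print(falcon_instruct_prompt("Summarize the vacation policy in two sentences."))
print(llama2_chat_prompt(
    "You are a helpful assistant for enterprise search.",
    "Where can I find the expense policy?",
))
```

Sending a chat-formatted prompt to an instruct model (or vice versa) typically degrades output quality, so the prompt template is usually selected per model at the serving layer.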