Anomaly Detection
Assembly Line
MIT researchers use large language models to flag problems in complex systems
In a new study, MIT researchers found that large language models (LLMs) hold the potential to be more efficient anomaly detectors for time-series data. Importantly, these pretrained models can be deployed right out of the box.
The researchers developed a framework, called SigLLM, which includes a component that converts time-series data into text-based inputs an LLM can process. A user can feed these prepared data to the model and ask it to start identifying anomalies. The LLM can also be used to forecast future time-series data points as part of an anomaly detection pipeline.
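A minimal sketch of the kind of preprocessing such a framework needs, assuming a simple scale-and-round encoding (SigLLM's actual conversion may differ): each numeric value is rescaled and serialized as fixed-precision text that a language model can consume as tokens.

```python
import numpy as np

def series_to_text(values, decimals=2):
    # Rescale into [0, 1] so every value serializes to the same width
    # (the exact encoding SigLLM uses may differ; this is illustrative).
    vmin, vmax = float(np.min(values)), float(np.max(values))
    scaled = (np.asarray(values) - vmin) / (vmax - vmin + 1e-9)
    # Serialize as comma-separated fixed-precision numbers for the LLM.
    return ",".join(f"{v:.{decimals}f}" for v in scaled)

# Example: a noisy sine wave becomes a prompt-ready string.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 6.28, 50)) + rng.normal(0, 0.05, 50)
prompt = series_to_text(series)
print(prompt[:60], "...")
```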
While LLMs could not beat state-of-the-art deep learning models at anomaly detection, they did perform as well as some other AI approaches. If researchers can improve the performance of LLMs, this framework could help technicians flag potential problems in equipment like heavy machinery or satellites before they occur, without the need to train an expensive deep-learning model.
In the future, an LLM may also be able to provide plain-language explanations alongside its predictions, so an operator can better understand why the model flagged a certain data point as anomalous.
Building an Automated Manufacturing Inspection System with FOMO-AD
Just about anything can potentially go wrong in producing a product, so naive approaches that look for specific defects quickly become impractical. For this reason, Bandini decided to put Edge Impulse's new FOMO-AD algorithm to work. FOMO-AD utilizes a Gaussian mixture model to detect anomalies in conjunction with the powerful and highly efficient FOMO object detection algorithm. This approach allows one to train the model on only normal instances of an object, after which it will be able to recognize any deviations from that normal state. Furthermore, FOMO-AD can pinpoint the locations in an image where anomalies exist, making the inspection process as painless as possible.
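Edge Impulse has not published FOMO-AD's internals beyond this description, but the core idea of Gaussian-mixture anomaly scoring can be sketched with scikit-learn: fit a mixture on feature embeddings from normal images only, then flag test embeddings with low likelihood. The feature vectors below are synthetic stand-ins for whatever a convolutional backbone would produce.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical features: one embedding vector per image patch, as a
# small convolutional backbone (not shown here) might produce.
rng = np.random.default_rng(0)
normal_feats = rng.normal(0.0, 1.0, size=(500, 32))      # training: normal only
test_feats = np.vstack([rng.normal(0.0, 1.0, (5, 32)),
                        rng.normal(4.0, 1.0, (5, 32))])   # last 5 are anomalous

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(normal_feats)

# Low log-likelihood under the normal-only mixture flags an anomaly.
threshold = np.percentile(gmm.score_samples(normal_feats), 1)
scores = gmm.score_samples(test_feats)
print((scores < threshold).astype(int))  # 1 = anomalous patch
```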
Computer vision algorithms tend to be computationally expensive, but thanks to the efficiency of the FOMO-AD model, Bandini was able to easily run it on edge computing hardware to keep costs and latency down. In this case, he selected the Texas Instruments SK-TDA4VM development kit. The onboard TDA4VM processor offers eight trillion operations per second of hardware-accelerated AI processing power, well beyond what the project requires. Yet the SK-TDA4VM is also inexpensive and requires little power to operate, making it suitable for large-scale deployments. He then paired the kit with a USB webcam so it could capture images of components for anomaly detection.
Harness the Power of AI and Stay Ahead of Unplanned Downtime
Inside Siemens' Bad Neustadt factory: A firsthand look at IT/OT convergence in action
The Bad Neustadt factory produces multi-axis electrical motors for Motion Control Drive systems, a high-variance, high-volume production achieved through a Make-to-Order approach. This highly individualized method yields more than 500,000 configurable variants for customers per year. In fact, they estimate that the factory's product setup changes every seven minutes on average, and it is approaching being a lot-size-of-one factory where everything is made to order. To keep pace with this incredible demand, factory management at Bad Neustadt must continuously refine their production processes to meet client expectations and maintain competitiveness.
One major use case is streamlining anomaly detection for end-of-line testing. Referred to as "reduced test effort", it entails using historic data to derive rules for determining whether a specific motor needs to be tested. The goal is to minimize testing without sacrificing the quality of the motors shipped. By applying AI algorithms, they can calculate the number of tests needed for a specific motor configuration. Combining historical data with real-time information from the factory floor, the intelligent algorithm dynamically defines the number of tests, reducing the time and money spent on testing. In fact, the EWN achieved 16% fewer end-of-line tests for motors.
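The article does not publish the rules themselves, so the following is only a toy illustration of the idea: derive a per-configuration failure rate from historical test logs and scale the test count with it. The names, numbers, and scaling rule are all hypothetical.

```python
import pandas as pd

# Hypothetical historical test log: one row per motor tested.
history = pd.DataFrame({
    "config": ["A", "A", "A", "B", "B", "C"] * 50,
    "failed": [0, 0, 1, 0, 0, 0] * 50,
})

# Per-configuration failure rate drives how much testing is still needed.
rates = history.groupby("config")["failed"].mean()

def tests_required(config, full_suite=10, min_tests=1):
    # Scale the test count with the observed failure rate (a sketch; the
    # Siemens rules are derived from far richer historical data).
    rate = rates.get(config, 1.0)  # unseen configs get the full suite
    return max(min_tests, round(full_suite * min(1.0, rate * 10)))

print({c: tests_required(c) for c in ["A", "B", "C", "D"]})
```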
Robust unsupervised-learning based crack detection for stamped metal products
Crack detection plays an important role in the industrial inspection of stamped metal products. While supervised learning methods are commonly used in the quality assessment process, they often require a substantial amount of labeled data, which can be challenging to obtain in a well-tuned production line. Unsupervised learning has demonstrated exceptional performance in anomaly detection. This study proposes an unsupervised algorithm for crack detection on stamped metal surfaces, capable of classification and segmentation without the need for crack images during training. The approach leverages a Vector Quantized Variational Autoencoder 2 (VQ-VAE2)-based model to reconstruct input images while retaining crack details. Additionally, latent features at different scales are quantized into discrete representations using a codebook. To learn the distribution of these discrete representations from non-crack samples, the study utilizes PixelSNAIL, an autoregressive model used for sequential modeling. In the testing stage, the model assigns low probabilities to discrete features that deviate from the non-crack distribution. These potential crack candidate features are resampled using vectors in the codebook that exhibit the highest dissimilarity. The edited representations are then fed into the decoder to generate resampled images that have the most significant differences in the crack area from the original reconstruction. Crack patterns are extracted at the pixel level by subtracting resampled images from the reconstruction. Prior knowledge that crack patterns often appear darker is leveraged to enhance the crack features. A robust classification criterion is introduced based on the probability given by the autoregressive model. Extensive experiments were conducted using images captured from stamped metal panels. The results demonstrate that the proposed technique exhibits robust performance and high accuracy.
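The resampling step is the least familiar part of this pipeline, so here is a hedged sketch of its logic with made-up tensor shapes: positions whose discrete codes receive low probability from the autoregressive prior are replaced with the most dissimilar codebook vector, so the decoded image diverges from the reconstruction exactly where the crack is.

```python
import numpy as np

# Hypothetical shapes: an 8x8 grid of discrete latent codes and a
# 64-entry codebook of 16-dim vectors (stand-ins for VQ-VAE2 outputs).
rng = np.random.default_rng(1)
codebook = rng.normal(size=(64, 16))
codes = rng.integers(0, 64, size=(8, 8))         # encoder output
log_probs = rng.uniform(-8, -0.1, size=(8, 8))   # from the autoregressive prior

# Positions the prior finds unlikely are crack candidates.
candidates = log_probs < np.log(0.01)

# Resample each candidate with the codebook vector most dissimilar to
# the one the encoder picked, exaggerating the defect region.
resampled = codes.copy()
for i, j in zip(*np.nonzero(candidates)):
    dists = np.linalg.norm(codebook - codebook[codes[i, j]], axis=1)
    resampled[i, j] = int(np.argmax(dists))

# Decoding `codes` and `resampled` and subtracting the two images
# (decoder not shown) would then localize the crack at pixel level.
print(f"{candidates.sum()} candidate positions resampled")
```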
Harnessing Machine Learning for Anomaly Detection in the Building Products Industry with Databricks
One of the biggest data-driven use cases at LP was monitoring process anomalies with time-series data from thousands of sensors. With Apache Spark on Databricks, large amounts of data can be ingested and prepared at scale to assist mill decision-makers in improving quality and process metrics. To prepare these data for mill analytics, data science, and advanced predictive analytics, companies like LP need to process sensor information faster and more reliably than on-premises data warehousing solutions alone allow.
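A representative PySpark preparation step of the kind described, assuming a hypothetical Parquet landing zone and table names (the actual LP pipeline is not public): cast timestamps, drop null readings, and compute windowed aggregates per sensor.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sensor-prep").getOrCreate()

# Hypothetical layout: one row per sensor reading from the mill historian.
readings = spark.read.parquet("/mnt/landing/sensor_readings")  # illustrative path

# Typical preparation: parse timestamps, drop bad readings, and compute
# 10-minute aggregates per sensor for downstream analytics.
prepared = (
    readings
    .withColumn("ts", F.to_timestamp("ts"))
    .filter(F.col("value").isNotNull())
    .groupBy("sensor_id", F.window("ts", "10 minutes"))
    .agg(F.avg("value").alias("mean_value"),
         F.stddev("value").alias("std_value"))
)

prepared.write.mode("overwrite").saveAsTable("mill_analytics.sensor_10min")
```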
Multi-layer parallel transformer model for detecting product quality issues and locating anomalies based on multiple time-series process data in Industry 4.0
Smart manufacturing systems typically consist of multiple machines with different processing durations. The continuous monitoring of these machines produces multiple time-series process data (MTPD), which have four characteristics: low data value density, diverse data dimensions, transmissible processing states, and complex coupling relationships. Using MTPD for product quality issue detection and rapid anomaly location can help dynamically adjust the control of smart manufacturing systems and improve manufacturing yield. This study proposes a multi-layer parallel transformer (MLPT) model for product quality issue detection and rapid anomaly location in Industry 4.0, based on proper modeling of the MTPD of smart manufacturing systems. The MLPT consists of multiple customized encoder models corresponding to the machines, each using a customized partition strategy to determine the token size. All encoders are integrated in parallel and output to a global multi-layer perceptron layer, which improves the accuracy of product quality issue detection and simultaneously locates anomalies (including key time steps and key sensor parameters) in smart manufacturing systems. An empirical study was conducted on a fan-out, panel-level package (FOPLP) production line. The experimental results show that the MLPT model detects product quality issues more accurately than other methods and can also rapidly locate anomalies in smart manufacturing systems.
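The paper's exact architecture and partition strategy are specific to its FOPLP line, but the parallel-encoder idea can be sketched in PyTorch with invented dimensions: one transformer encoder per machine, pooled outputs concatenated into a shared MLP head.

```python
import torch
import torch.nn as nn

class ParallelEncoders(nn.Module):
    """Sketch of the MLPT idea: one transformer encoder per machine,
    outputs concatenated into a shared MLP head (dimensions invented)."""
    def __init__(self, machine_dims, d_model=64, n_classes=2):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, d_model) for d in machine_dims)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        # TransformerEncoder deep-copies the layer, so encoders are independent.
        self.encoders = nn.ModuleList(
            nn.TransformerEncoder(layer, num_layers=2) for _ in machine_dims)
        self.head = nn.Sequential(
            nn.Linear(d_model * len(machine_dims), 128),
            nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, xs):  # xs: one (batch, time, sensors) tensor per machine
        pooled = [enc(p(x)).mean(dim=1)  # mean-pool over time steps
                  for enc, p, x in zip(self.encoders, self.proj, xs)]
        return self.head(torch.cat(pooled, dim=-1))

model = ParallelEncoders(machine_dims=[8, 12, 6])
xs = [torch.randn(4, 20, d) for d in [8, 12, 6]]
print(model(xs).shape)  # torch.Size([4, 2])
```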
Visual quality control in additive manufacturing: Building a complete pipeline
In this article, we share a reference implementation of a VQC pipeline for additive manufacturing that detects defects and anomalies on the surface of printed objects using depth-sensing cameras. We show how we developed an innovative solution to synthetically generate point clouds representing variations on 3D objects, and propose multiple machine learning models for detecting defects of different sizes. We also provide a comprehensive comparison of different architectures and experimental setups. The complete reference implementation is available in our git repository.
The main objective of this solution is to develop an architecture that can effectively learn from a sparse dataset and detect defects on a printed object by inspecting its surface each time a new layer is added. To address the challenge of acquiring a sufficient quantity of defect data for accurate ML model training, the proposed approach leverages synthetic data generation. The controlled nature of the additive manufacturing process reduces the presence of unaccounted exogenous variables, making synthetic data a valuable resource for initial model training. In addition, we hypothesize that by deliberately inducing overfitting of the model on good examples, the model will become more accurate in predicting the presence of anomalies/defects. To achieve this, we generate a number of normal examples with introduced noise in a ratio that matches the defect occurrence expected during the manufacturing process. For instance, if the fault ratio is 10 to 1, we generate 10 similar normal examples for every defect example. Hence, the pipeline for initial training consists of two modules: the synthetic generation module and the module for training anomaly detection models.
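A toy version of the generation module under these assumptions (the reference implementation in the authors' repository models real scanner noise and 3D geometry): perturb a defect-free point cloud to get the normal examples and inject a small dent for the defect, at the stated 10-to-1 ratio.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_copies(cloud, n, sigma=0.002):
    # Perturbed copies of a defect-free point cloud (a sketch; the
    # article's generator models scanner noise and geometry variation).
    return [cloud + rng.normal(0.0, sigma, cloud.shape) for _ in range(n)]

base = rng.uniform(0.0, 1.0, size=(2048, 3))  # stand-in for one scanned layer
fault_ratio = 10                              # 10 normal examples per defect

# One synthetic defect: a small dent pushed into the surface.
defect = base.copy()
dent = np.linalg.norm(defect[:, :2] - 0.5, axis=1) < 0.05
defect[dent, 2] -= 0.02

dataset = noisy_copies(base, fault_ratio) + [defect]
labels = [0] * fault_ratio + [1]
print(len(dataset), labels)
```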
U.S. Navy Takes Falkonry AI to the High Seas for Increased Equipment Reliability and Performance
Falkonry today announced a big leap for Falkonry AI with the Office of Naval Research deploying its AI applications to advance equipment reliability on the high seas. This AI deployment is carried out with a Falkonry-designed reference architecture using NVIDIA accelerated computing and Oracle Cloud Infrastructure's (OCI's) distributed cloud. It enables better performance and reliability awareness using electrical and mechanical time series data from thousands of sensors at ultra-high speed.
Falkonry has designed its automated anomaly detection application, Falkonry Insight, to take advantage of edge computing capabilities that are now available for high security and edge-to-cloud connectivity. Falkonry Insight includes a patent-pending, high-throughput time series AI engine that inspects every sensor data point to identify reliability and performance anomalies along with their contributing factors. Falkonry Insight organizes the information needed by operations teams to determine root causes and automatically informs operations teams to take rapid action. By inserting an edge device into the US Navy's operational environment that can process data continuously, increasingly sophisticated naval platforms can maintain high reliability and performance out at sea.
Detecting anomalies in high-dimensional IoT data using hierarchical decomposition and one-class learning
Automated health monitoring, including anomaly/fault detection, is an essential attribute of any modern industrial system. Problems of this sort are usually solved through algorithmic processing of data from a great number of physical sensors installed throughout the equipment. A broad range of ML-based and statistical techniques are used here. An important common parameter that defines the practical complexity and tractability of the problem is the dimensionality of the feature vector generated from the sensors' signals.
While there is a great variety of methods and techniques described in ML and statistical literature, it is easy to go in the wrong direction when trying to solve problems for industrial systems with a large number of IoT sensors. The seemingly โobviousโ and stereotypical solutions often lead to dead-ends or unnecessary complications when applied to such systems. Here we generalize our experience and delineate some potential pitfalls of the stereotypical approaches. We also outline quite a general methodology that helps to avoid such traps when dealing with IoT data of high dimension. The methodology rests on two major pillars: hierarchical decomposition and one-class learning. This means that we try to start health monitoring from the most elementary parts of the whole system, and we learn mainly from the healthy state of the system.
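A minimal sketch of the two pillars together, with invented subsystem boundaries and an Isolation Forest standing in for whatever one-class learner a real deployment would choose: each subsystem gets its own model trained on healthy data, so a fault surfaces in the subsystem where it lives instead of being diluted in one high-dimensional feature vector.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Invented decomposition of a 60-sensor system into three subsystems,
# each monitored by its own one-class model trained on healthy data only.
subsystems = {"pump": slice(0, 20), "motor": slice(20, 45), "gearbox": slice(45, 60)}
healthy = rng.normal(size=(1000, 60))

models = {name: IsolationForest(random_state=0).fit(healthy[:, cols])
          for name, cols in subsystems.items()}

def diagnose(sample):
    # Score each subsystem separately; lower scores mean less healthy.
    return {name: float(models[name].decision_function(
                sample[cols].reshape(1, -1))[0])
            for name, cols in subsystems.items()}

faulty = rng.normal(size=60)
faulty[25:30] += 6.0     # inject a fault into motor sensors
print(diagnose(faulty))  # the "motor" score drops furthest
```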
Anomaly detection in industrial IoT data using Google Vertex AI: A reference notebook
Modern manufacturing, transportation, and energy companies routinely operate thousands of machines and perform hundreds of quality checks at different stages of their production and distribution processes. Industrial sensors and IoT devices enable these companies to collect comprehensive real-time metrics across equipment, vehicles, and produced parts, but the analysis of such data streams is a challenging task.
We start with a discussion of how the health monitoring problem can be converted into standard machine learning tasks and what pitfalls one should be aware of, and then implement a reference Vertex AI pipeline for anomaly detection. This pipeline can be viewed as a starter kit for quick prototyping of IoT anomaly detection solutions that can be further customized and extended to create production-grade platforms.
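A skeleton of such a pipeline using the Kubeflow Pipelines SDK that Vertex AI Pipelines executes; the component body, package list, and file paths are illustrative, not the reference notebook's actual code.

```python
from kfp import dsl, compiler

@dsl.component(base_image="python:3.10",
               packages_to_install=["scikit-learn", "pandas"])
def train_detector(train_csv: str) -> float:
    # Fit a one-class model on healthy sensor data and return the
    # lowest healthy score as an alarm threshold (illustrative only).
    import pandas as pd
    from sklearn.ensemble import IsolationForest
    X = pd.read_csv(train_csv).values
    model = IsolationForest(random_state=0).fit(X)
    return float(model.score_samples(X).min())

@dsl.pipeline(name="iot-anomaly-detection")
def pipeline(train_csv: str):
    train_detector(train_csv=train_csv)

# Compile to a spec that can be submitted as a Vertex AI PipelineJob.
compiler.Compiler().compile(pipeline, "pipeline.json")
```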
Build an Anomaly Detection Model using SME expertise
Achieving World-Class Predictive Maintenance with Normal Behavior Modeling
Central to the normal behavior modeling (NBM) concept is an algorithm known as an autoencoder, shown in Figure 1. Over time, the autoencoder's input layer ingests a continuous stream of quantitative data from equipment sensors (temperature, pressure, etc.). This data is then fed through one or more hidden layers, where it is compressed. Numerical weights are applied at each node, and the network is trained so that the output layer eventually reproduces the input values.
The principal purpose of NBM is to define the normal state of a complex system and then proactively identify instances where the system is operating outside of normal, with enough advance warning for maintenance or repair actions that avert the revenue loss, repair costs, and safety compromises that typically accompany such failures.
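A compact PyTorch sketch of this training-and-thresholding loop, with invented sensor counts and layer sizes: train only on healthy data, then alarm when reconstruction error exceeds a high percentile of the healthy errors.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
healthy = torch.randn(5000, 12)          # 12 sensor channels, healthy only

# A small autoencoder trained only on normal operation (a sketch; real
# NBM deployments size the network and sensor set per asset).
ae = nn.Sequential(
    nn.Linear(12, 8), nn.ReLU(),
    nn.Linear(8, 4), nn.ReLU(),          # compressed bottleneck
    nn.Linear(4, 8), nn.ReLU(),
    nn.Linear(8, 12),                    # reconstruct the inputs
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                     # a few quick full-batch steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(healthy), healthy)
    loss.backward()
    opt.step()

# Reconstruction error on healthy data sets the alarm threshold.
with torch.no_grad():
    errors = ((ae(healthy) - healthy) ** 2).mean(dim=1)
threshold = errors.quantile(0.99)
print("alarm if per-sample error exceeds", float(threshold))
```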
Predicting Defrost in Refrigeration Cases at Walmart using Fourier Transform
As the largest grocer in the United States, Walmart has a massive assembly of supermarket refrigeration systems in its stores across the country. Food quality is an essential part of our customer experience and Walmart spends a considerable amount annually on maintenance of its vast portfolio of refrigeration systems. In an effort to improve the overall maintenance practices, we use preventative and proactive maintenance strategies. We at Walmart Global Tech use IoT data and build algorithms to study and proactively detect anomalous events in refrigeration systems at Walmart.
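The essence of the Fourier approach can be sketched with NumPy on synthetic data: a healthy case shows a strong spectral peak at the defrost cycle period, and a missing or shifted peak is a symptom worth investigating. The trace and numbers here are illustrative, not Walmart's production logic.

```python
import numpy as np

# Hypothetical temperature trace sampled every 10 minutes for a week,
# with a defrost-driven oscillation every 6 hours on top of noise.
rng = np.random.default_rng(5)
samples_per_day = 144                        # 24h * 6 samples per hour
t = np.arange(7 * samples_per_day)
temp = 2.0 * np.sin(2 * np.pi * t / 36) + rng.normal(0, 0.3, t.size)  # 36 = 6h

# The FFT reveals the dominant cycle; mean removal suppresses the DC term.
spectrum = np.abs(np.fft.rfft(temp - temp.mean()))
freqs = np.fft.rfftfreq(t.size, d=10)        # d = minutes per sample
peak = freqs[np.argmax(spectrum)]
print(f"dominant period: {1 / peak:.0f} minutes")  # ~360 (6 hours)
```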
Condition monitoring in steel mills: 3 fault detections
Forecast Anomalies in Refrigeration with PySpark & Sensor-data
A refrigeration system has four important components: the compressor, condenser fan, evaporator fan, and expansion valve. Loosely speaking, together they keep the pressure at a level that maintains the internal temperature (remember, PV = nRT). At Walmart, we collect sensor data for all of these components (e.g., pressure, fan speed, temperature) at 10-minute intervals, along with status flags such as whether the system is in defrost or whether the compressor is locked out. We also capture outside air temperature, as it affects the condenser fan speed and, in turn, the temperature.
The objective is to minimize the number of malfunctions and suggest probable resolutions to save time. So we leveraged this telemetry to forecast anomalies in temperature, which helps prioritize issues and be proactive rather than reactive.
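A hedged sketch of the feature-preparation side in PySpark, with a hypothetical telemetry table and column names: rolling per-case statistics exclude defrost periods, and readings that sit far outside the recent band are surfaced for technicians.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("fridge-anomaly").getOrCreate()

# Hypothetical telemetry: one row per case per 10-minute reading.
telemetry = spark.table("refrigeration.telemetry")  # name is illustrative

# Rolling statistics per case feed the downstream temperature forecaster;
# readings far outside the recent band get flagged for technicians.
w = Window.partitionBy("case_id").orderBy("ts").rowsBetween(-143, 0)  # ~24h
features = (
    telemetry
    .filter(~F.col("in_defrost"))            # defrost readings skew the stats
    .withColumn("temp_mean_24h", F.avg("temperature").over(w))
    .withColumn("temp_std_24h", F.stddev("temperature").over(w))
    .withColumn("z_score",
                (F.col("temperature") - F.col("temp_mean_24h"))
                / F.col("temp_std_24h"))
)
features.filter(F.abs(F.col("z_score")) > 3).show()
```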
Intelligent edge management: why AI and ML are key players
What will the future of network edge management look like? We explain how artificial intelligence and machine learning technologies are crucial for intelligent edge computing and the management of future-proof networks. What's required, and what are the building blocks needed to make it happen?
Using Machine Learning to identify operational modes in rotating equipment
Vibration monitoring is key to performing condition monitoring-based maintenance in rotating equipment such as engines, compressors, turbines, pumps, generators, blowers, and gearboxes. However, periodic route-based vibration monitoring programs are not enough to prevent breakdowns, as they normally offer a narrower view of the machines' conditions.
Adding Machine Learning algorithms to this process makes it scalable, as it allows the analysis of historic data from equipment. One of the benefits is being able to identify operational modes and help maintenance teams to understand if the machine is operating in normal or abnormal conditions.
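A small scikit-learn sketch of mode identification on invented vibration features (stand-ins for RMS amplitude, kurtosis, and peak frequency): clustering historical samples recovers the operational modes, and new samples far from every cluster centre suggest an abnormal condition.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)

# Two synthetic operational modes: idle and full load.
idle = rng.normal([0.2, 3.0, 25.0], 0.05, size=(300, 3))
full_load = rng.normal([1.5, 3.2, 50.0], 0.10, size=(300, 3))

scaler = StandardScaler().fit(np.vstack([idle, full_load]))
X = scaler.transform(np.vstack([idle, full_load]))

# Clusters recover the modes; distance to the nearest centre becomes
# an abnormality score for new vibration samples.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
limit = np.percentile(km.transform(X).min(axis=1), 99)

new_sample = scaler.transform([[5.0, 8.0, 120.0]])  # unlike either mode
print(km.transform(new_sample).min() > limit)       # True -> abnormal
```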
Application of AI to Oil Refineries and Petrochemical Plants
Artificial intelligence (AI), machine learning, data science, and other advanced technologies have progressed remarkably, enabling computers to handle labor-intensive and time-consuming tasks that used to be done manually. As big data has become available, AI is expected to automatically identify and solve problems in the manufacturing industry. This paper describes how AI can be used in oil refineries and petrochemical plants to solve issues regarding assets and quality.
A Case of Applying AI to an Ethylene Plant
Unexpected equipment failures or maintenance may result in unscheduled plant shutdowns in continuously operating petrochemical plants such as ethylene plants. To avoid this, the operation status must be continuously monitored. However, since problems in plants have various causes, it is difficult for human workers to precisely grasp the plant status and notice the signs of unexpected failures or needed maintenance. To solve this problem, we worked with a customer at an ethylene plant and developed a solution based on AI analysis. Using AI analysis informed by customer feedback, we identified several key factors from numerous sensor parameters and created an AI model that can grasp the plant status and detect signs of abnormality. This paper introduces a case study of AI analysis carried out in an ethylene plant and the new value that AI technology can offer to customers, and then describes how to extend the solution business with AI analysis.