Convolutional Neural Network (CNN)

Assembly Line

🧠 Monitoring the misalignment of machine tools with autoencoders after they are trained with transfer learning data

📅 Date:

✍️ Authors: Mustafa Demetgul, Qi Zheng, Ibrahim Nur Tansel, Jürgen Fleischer

🔖 Topics: Convolutional Neural Network, LSTM, Autoencoder, Machine Tool

🏢 Organizations: Karlsruhe Institute of Technology


CNC machines have revolutionized manufacturing by enabling high-quality, high-productivity production. Monitoring the condition of these machines during production reduces maintenance costs and avoids manufacturing defective parts. Misalignment of the linear tables in CNCs directly affects the quality of the manufactured parts, and the components of the linear tables wear out over time under heavy and fluctuating loads. To address these challenges, an intelligent monitoring system was developed to identify normal operation and misalignments. Since damaging a CNC machine for data collection is too expensive, transfer learning was used in two steps. First, a specially designed experimental feed axis test platform (FATP) was used to sample the current signal under normal conditions and five levels of left-side misalignment ranging from 0.05 to 0.25 mm. Four algorithm combinations were trained to detect misalignments: a 1D convolutional neural network (CNN) and autoencoder (AE) combination, a temporal convolutional network (TCN) and AE combination, a long short-term memory network (LSTM) and AE combination, and a CNN, LSTM, and AE combination. In the second step, a Wasserstein deep convolutional generative adversarial network (W-DCGAN) was used to generate data by integrating the characteristics observed on the FATP at different misalignment levels with the limited data collected from actual CNC machines. The t-distributed stochastic neighbor embedding (t-SNE) method was used to evaluate the similarity and the limited diversity of the generated and real signals. The hyperparameters of the model were optimized by random and grid search. The CNN, LSTM, and AE combination demonstrated the best performance, providing a practical way to detect misalignments without stopping production or cluttering the work area with sensors. The proposed intelligent monitoring system can detect misalignments of the linear tables of CNCs, thus enhancing the quality of manufactured parts and reducing production costs.
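
The authors' implementation is not included in the abstract; purely as an illustration of the kind of CNN + LSTM + autoencoder combination described above, here is a minimal PyTorch sketch for windowed 1D current signals. The window length, layer sizes, and the six-class output (normal plus five misalignment levels) are assumptions, not the paper's architecture.

```python
# Hypothetical sketch of a 1D CNN + LSTM encoder with an autoencoder (reconstruction) head
# and a classifier head for six alignment classes. Shapes and hyperparameters are guesses.
import torch
import torch.nn as nn

class CnnLstmAE(nn.Module):
    def __init__(self, window=1024, n_classes=6, latent=64):
        super().__init__()
        self.encoder_cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=latent, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(latent, window), nn.Tanh())  # signals assumed scaled to [-1, 1]
        self.classifier = nn.Linear(latent, n_classes)

    def forward(self, x):                 # x: (batch, 1, window) current-signal windows
        z = self.encoder_cnn(x)           # (batch, 32, window / 4)
        z = z.permute(0, 2, 1)            # (batch, time, features) for the LSTM
        _, (h, _) = self.lstm(z)
        h = h.squeeze(0)                  # (batch, latent)
        return self.decoder(h), self.classifier(h)

model = CnnLstmAE()
x = torch.randn(8, 1, 1024)
recon, logits = model(x)
loss = nn.MSELoss()(recon, x.squeeze(1)) + nn.CrossEntropyLoss()(logits, torch.randint(0, 6, (8,)))
```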

Read more at The International Journal of Advanced Manufacturing Technology

Predicting congestion in fleets of robots

📅 Date:

🔖 Topics: Autonomous Mobile Robot, Convolutional Neural Network, LSTM

🏢 Organizations: Amazon


Many Amazon fulfillment centers use mobile robots to move shelves, retrieve products, and deliver them to workers for sorting, reducing the need for employees to walk long distances. For simplicity and scalability, the path-planning algorithm those robots currently use focuses on individual agents and ignores interactions between multiple agents.

In a paper we presented at this year’s International Conference on Robotics and Automation (ICRA), we propose a deep-learning model that can predict congestion on the floor in real time. We tested the model’s predictions in simulations of two downstream applications: dynamic path planning in sortation centers, where our model improved throughput by 4.4%, and travel time estimation, where it improved the mean absolute percentage error by 30% to 40% relative to current production methods.

Read more at Amazon Science

🦾 Inside sewts’ textile-handling robots

📅 Date:

✍️ Author: Brianna Wessling

🔖 Topics: Industrial Robot, Convolutional Neural Network

🏭 Vertical: Textiles

🏢 Organizations: sewts, IDS Imaging


Traditionally, clothing has been a challenge for robots to handle because of its malleability. Currently available software systems and conventional image processing typically reach their limits with easily deformable materials, which restricts the abilities of commercially available robots and gripping systems.

VELUM, sewts’ robotic system, is able to analyze dimensionally unstable materials like textiles and handle them. This means VELUM can feed towels and similar linen made of terry cloth easily and without creases into existing folding machines.

sewts developed AI software to process the data supplied by the cameras. This software uses features like the course of the seam and the relative position of seams to analyze the topology of the textiles. The program classifies these features according to textile type and class, and then translates these findings into robot commands. The company uses Convolutional Neural Networks (CNNs) and classical image processing to process the data, including IDS peak, a software development kit from IDS.

Read more at The Robot Report

A new intelligent fault diagnosis framework for rotating machinery based on deep transfer reinforcement learning

📅 Date:

✍️ Authors: Daoguang Yang, Hamid Reza Karimi, Marek Pawelczyk

🔖 Topics: Bearing, Reinforcement Learning, Machine Health, Convolutional Neural Network

🏢 Organizations: Politecnico di Milano, Silesian University of Technology


The advancement of artificial intelligence algorithms has drawn growing interest in identifying fault types in rotary machines, but such modules are highly efficient without being human-like. Hence, in order to build a human-like fault identification module that can learn knowledge from its environment, this paper proposes a deep reinforcement learning framework that provides an end-to-end training mode and a human-like learning process based on an improved Double Deep Q Network. In addition, to improve the convergence properties of the deep reinforcement learning algorithm, the parameters of the early layers of the convolutional neural networks are transferred from a convolutional autoencoder trained in an unsupervised learning process. The experimental results show that the proposed framework can efficiently extract fault features from raw time-domain data, achieves higher accuracy than other deep learning models on balanced samples, and performs better on imbalanced samples.
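
The paper's network is not reproduced here; the sketch below only illustrates the weight-transfer idea it describes, in which a convolutional autoencoder is pretrained without labels and its encoder parameters are copied into the Q-network's convolutional layers before reinforcement learning begins. All layer shapes are invented.

```python
# Illustration of transferring pretrained conv-autoencoder weights into a Q-network.
# Architectures and shapes are placeholders, not those of the paper.
import torch
import torch.nn as nn

def conv_stack():
    return nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=16, stride=4, padding=6), nn.ReLU(),
        nn.Conv1d(16, 32, kernel_size=8, stride=2, padding=3), nn.ReLU(),
    )

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = conv_stack()
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=8, stride=2, padding=3), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=16, stride=4, padding=6),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

class QNetwork(nn.Module):
    def __init__(self, n_actions=10):
        super().__init__()
        self.features = conv_stack()          # same layout as the autoencoder's encoder
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(128), nn.ReLU(),
                                  nn.Linear(128, n_actions))
    def forward(self, x):
        return self.head(self.features(x))

ae = ConvAutoencoder()
# ... unsupervised pretraining of `ae` on raw time-domain windows would happen here ...
q_net = QNetwork()
q_net.features.load_state_dict(ae.encoder.state_dict())   # transfer the learned filters
```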

Read more at Control Engineering Practice

🚙 Application of optimized convolutional neural network to fixture layout in automotive parts

📅 Date:

✍️ Authors: Javier Villena Toro, Anton Wiberg, Mehdi Tarkian

🔖 Topics: Convolutional Neural Network, Computer-aided Design

🏢 Organizations: Linköping University


Fixture layout is a complex task that significantly impacts manufacturing costs and requires the expertise of well-trained engineers. While most research approaches to automating the fixture layout process use optimization or rule-based frameworks, this paper presents a novel approach using supervised learning. The proposed framework replicates the 3-2-1 locating principle to lay out fixtures for sheet metal designs. This principle ensures that an object is fixed correctly by restricting its degrees of freedom. One main novelty of the proposed framework is the use of topographic maps generated from sheet metal design data as input for a convolutional neural network (CNN). These maps are created by projecting the geometry onto a plane and converting the Z coordinate into gray-scale pixel values. The framework is also novel in its ability to reuse knowledge about fixturing to lay out new workpieces and in its integration with a CAD environment as an add-in. The results of the hyperparameter-tuned CNN for regression show high accuracy and fast convergence, demonstrating the usability of the model for industrial applications. The framework was first tested on automotive b-pillar designs and achieved high accuracy (≈100%) in classifying them. The proposed framework offers a promising approach for automating the complex task of fixture layout in sheet metal design.
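
The published pipeline is not shown in this summary; the snippet below merely sketches the stated idea of rasterizing the sheet-metal geometry into a gray-scale height map (Z projected onto a plane) and feeding it to a small CNN regressor. The point sampling, resolution, and number of regression outputs are assumptions.

```python
# Sketch: project point Z-heights onto an XY grid as gray-scale pixels, then regress
# fixture locations with a small CNN. Everything here is illustrative, not the paper's code.
import numpy as np
import torch
import torch.nn as nn

def topographic_map(points, res=128):
    """points: (N, 3) array of XYZ samples from the sheet-metal surface."""
    xy = points[:, :2]
    xy = (xy - xy.min(0)) / (np.ptp(xy, axis=0) + 1e-9)    # normalize XY to [0, 1]
    img = np.zeros((res, res), dtype=np.float32)
    idx = np.minimum((xy * (res - 1)).astype(int), res - 1)
    z = points[:, 2]
    z = (z - z.min()) / (np.ptp(z) + 1e-9)                  # Z becomes pixel intensity
    img[idx[:, 1], idx[:, 0]] = z
    return img

class FixtureRegressor(nn.Module):
    def __init__(self, n_outputs=6):                        # e.g. XY of three locators (3-2-1)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_outputs),
        )
    def forward(self, x):
        return self.net(x)

pts = np.random.rand(5000, 3)                               # stand-in for sampled CAD geometry
img = torch.from_numpy(topographic_map(pts)).unsqueeze(0).unsqueeze(0)
pred = FixtureRegressor()(img)                              # predicted fixture coordinates
```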

Read more at The International Journal of Advanced Manufacturing Technology

CircularNet: Reducing waste with Machine Learning

📅 Date:

✍️ Authors: Robert Little, Umair Sabir

🔖 Topics: Sustainability, Machine Learning, Convolutional Neural Network

🏢 Organizations: Google


The facilities where our waste and recyclables are processed are called “Material Recovery Facilities” (MRFs). Each MRF processes tens of thousands of pounds of our societal “waste” every day, separating valuable recyclable materials like metals and plastics from non-recyclable materials. A key inefficiency within the current waste capture and sorting process is the inability to identify and segregate waste into high quality material streams. The accuracy of the sorting directly determines the quality of the recycled material; for high-quality, commercially viable recycling, the contamination levels need to be low. Even though the MRFs use various technologies alongside manual labor to separate materials into distinct and clean streams, the exceptionally cluttered and contaminated nature of the waste stream makes automated waste detection challenging to achieve, and the recycling rates and the profit margins stay at undesirably low levels.

Enter what we call “CircularNet”, a set of models that lowers barriers to AI/ML tech for waste identification and all the benefits this new level of transparency can offer. Our goal with CircularNet is to develop a robust and data-efficient model for waste/recyclables detection, which can support the way we identify, sort, manage, and recycle materials across the waste management ecosystem.

Read more at TensorFlow Blog

Improving Yield With Machine Learning

📅 Date:

✍️ Author: Laura Peters

🔖 Topics: Machine Learning, Convolutional Neural Network, ResNet

🏭 Vertical: Semiconductor

🏢 Organizations: KLA, Synopsys, CyberOptics, Macronix


Machine learning is becoming increasingly valuable in semiconductor manufacturing, where it is being used to improve yield and throughput.

Synopsys engineers recently found that a decision tree deep learning method can classify 98% of defects and features at 60X faster retraining time than traditional CNNs. The decision tree utilizes 8 CNNs and ResNet to automatically classify 12 defect types with images from SEM and optical tools.
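
Synopsys has not published that model; purely to illustrate the general pattern of a decision tree whose nodes are CNN classifiers, here is a toy routing scheme in PyTorch. The two-stage hierarchy, class counts, and the ResNet-18 stand-in are invented.

```python
# Toy illustration of a "decision tree of CNNs": a coarse classifier routes each SEM image
# to a specialist classifier. The hierarchy, classes, and models are invented, not Synopsys'.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def small_cnn(n_classes):
    m = resnet18(num_classes=n_classes)                # untrained backbone as a stand-in
    m.conv1 = nn.Conv2d(1, 64, 7, 2, 3, bias=False)    # accept gray-scale SEM input
    return m

coarse = small_cnn(3)                                  # e.g. particle / pattern / scratch families
specialists = {0: small_cnn(4), 1: small_cnn(5), 2: small_cnn(3)}

def classify(image):                                   # image: (1, 1, H, W) tensor
    family = coarse(image).argmax(1).item()            # first split of the "tree"
    subtype = specialists[family](image).argmax(1).item()
    return family, subtype

print(classify(torch.randn(1, 1, 224, 224)))
```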

Macronix engineers showed how machine learning can expedite new etch process development in 3D NAND devices. Two parameters are particularly important in optimizing the deep trench slit etch: bottom CD and depth of polysilicon etch recess, also known as the etch stop.

KLA engineers, led by Cheng Hung Wu, optimized the use of a high landing energy e-beam inspection tool to capture defects buried as deep as 6 µm in a 96-layer ONON stacked structure following deep trench etch. The e-beam tool can detect defects that optical inspectors cannot, but only if operated with high landing energy to penetrate deep structures. With this process, KLA was looking to develop an automated detection and classification system for deep trench defects.

Read more at Semiconductor Engineering

Application of deep learning methods for more efficient water demand forecasting

📅 Date:

✍️ Author: Anjana G. Rajakumar

🔖 Topics: Convolutional Neural Network

🏢 Organizations: Hitachi


In recent years, such predictions have also found wide application in near-optimal control of water networks. Water demand prediction is an active field where a range of techniques has been applied, from conventional statistical methods to machine learning. Due to advancements in sensing and IoT, an increasing amount of data is becoming available for water distribution systems, including water demand data. Deep learning methods are therefore being used more often to build water demand forecasting models, as they can handle seasonality as well as random patterns in the data and provide more accurate results than traditional methods.

We observed that the frequency, amount, and quality of data have an impact on deep learning model accuracy. In CNN-LSTM, the CNN effectively extracts the inherent characteristics of historical water consumption data, such as seasonality, and the LSTM captures the long-term historical process and future trend. Hence, water demand forecasts produced with CNN-LSTM were better than those of single models such as GRU, MLP, CNN, and LSTM.
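
Hitachi's model details are not given in the excerpt; the following is a minimal PyTorch sketch of the CNN-LSTM pattern described, where a 1D convolution extracts local patterns from a demand window and an LSTM carries the longer-term trend. Window length, forecast horizon, and layer sizes are arbitrary illustrative choices.

```python
# Minimal CNN-LSTM forecaster sketch: a 1D convolution picks up local patterns (e.g. daily
# seasonality) in a demand window and an LSTM models the longer trend. Sizes are illustrative.
import torch
import torch.nn as nn

class CnnLstmForecaster(nn.Module):
    def __init__(self, horizon=24):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.out = nn.Linear(32, horizon)

    def forward(self, x):                  # x: (batch, 1, past_steps)
        f = self.conv(x).permute(0, 2, 1)  # (batch, time, features)
        _, (h, _) = self.lstm(f)
        return self.out(h.squeeze(0))      # next `horizon` demand values

model = CnnLstmForecaster()
past_week = torch.randn(4, 1, 168)         # four windows of hourly demand (7 days)
forecast = model(past_week)                # shape (4, 24): next-day hourly forecast
```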

Read more at Industrial AI Blog

Deep Learning For Industrial Inspection

Operation planning method using convolutional neural network for combined heat and power system

📅 Date:

✍️ Author: Tetsushi Ono

🔖 Topics: Convolutional Neural Network

🏢 Organizations: Hitachi


The energy efficiency of a combined heat and power (CHP) system can reach about 85%, whereas conventional thermal power plants operate at only 45% efficiency or lower. CHPs perform better mainly because the heat from the generators can be used as an energy source to meet heat demands or to drive refrigeration that produces cold water; in other words, the “waste” heat is used rather than wasted. A growing number of factories and commercial buildings are therefore installing CHP systems that include various energy storage devices. To reduce the energy cost of CHPs, optimal operation plans that satisfy time-varying energy demands at minimum energy cost are required. However, conventional operation planning methods based on optimization calculations suffer from long computing times. These days in particular, operation plans need to be generated within a few minutes or even seconds to compensate for the fluctuating output of renewable energy sources.
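
The post does not include code; the sketch below only illustrates the general surrogate idea of mapping a day-ahead demand profile directly to an equipment schedule with a CNN, so that a plan comes out of a single forward pass instead of a long optimization run. The inputs, outputs, and dimensions are invented.

```python
# Illustrative surrogate planner: a 1D CNN maps forecast power/heat demand profiles to
# per-step set-point levels for CHP units. Dimensions and semantics are placeholders.
import torch
import torch.nn as nn

class PlanNet(nn.Module):
    def __init__(self, steps=96, units=3):                  # 15-min steps over a day, 3 CHP units
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(2, 32, 5, padding=2), nn.ReLU(),      # channel 0: power demand, 1: heat demand
            nn.Conv1d(32, 32, 5, padding=2), nn.ReLU(),
        )
        self.head = nn.Conv1d(32, units, 1)                 # per-step output level for each unit

    def forward(self, demand):                              # demand: (batch, 2, steps)
        return torch.sigmoid(self.head(self.backbone(demand)))  # normalized set-points

plan = PlanNet()(torch.rand(1, 2, 96))                      # (1, 3, 96) schedule in one forward pass
```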

Read more at Hitachi Industrial AI Blog

Deep learning-based automatic optical inspection system empowered by online multivariate autocorrelated process control

📅 Date:

🔖 Topics: Automated Optical Inspection, Convolutional Neural Network

🏢 Organizations: National Taiwan University of Science and Technology


Defect identification for tiny electronics components at high throughput remains an open issue in quality inspection technology. Convolutional neural networks (CNNs) deployed in automatic optical inspection (AOI) systems are powerful for detecting defects, but they focus on individual samples and provide poor process control, offering no online monitoring of the status of the production process. Integrating CNN and statistical process control models empowers high-speed production lines to achieve proactive quality inspection. Judged by average run length over a range of shifts, the proposed control chart has high detection performance for small mean shifts in quality. The proposed control chart is successfully applied to an electronic conductor manufacturing process, and the model enables systematic quality inspection of tiny electronics components on a high-speed production line. The CNN-based AOI model empowered by the proposed control chart supports quality checking at the individual product level and process monitoring at the system level simultaneously. The contribution of the study lies in a process control framework integrated with the CNN-based AOI model, in which a residual-based mixed multivariate cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) control chart monitors online multivariate autocorrelated processes to detect defects efficiently.
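
The paper specifies a residual-based mixed multivariate CUSUM/EWMA chart; the toy snippet below shows only the univariate EWMA half, to make concrete how a control limit flags a small, sustained mean shift in a stream of inspection residuals. The smoothing weight, limit width, and data are illustrative.

```python
# Toy EWMA control chart on a stream of inspection residuals. A small, sustained mean shift
# is flagged when the EWMA statistic crosses the control limit. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
residuals = np.concatenate([rng.normal(0, 1, 200), rng.normal(0.7, 1, 100)])  # shift at t = 200

lam, L = 0.2, 3.0                            # smoothing weight and limit width
sigma = residuals[:200].std(ddof=1)          # in-control estimate of the residual spread
limit = L * sigma * np.sqrt(lam / (2 - lam)) # asymptotic EWMA control limit

ewma, z = [], 0.0
for r in residuals:
    z = lam * r + (1 - lam) * z
    ewma.append(z)

alarms = np.flatnonzero(np.abs(ewma) > limit)
print("first alarm at sample", alarms[0] if alarms.size else None)
```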

Read more at The International Journal of Advanced Manufacturing Technology

An implementation of YOLO-family algorithms in classifying the product quality for the acrylonitrile butadiene styrene metallization

📅 Date:

✍️ Authors: Yuh Wen Chen, Jing Mau Shiu

🔖 Topics: Convolutional Neural Network, Visual Inspection

🏢 Organizations: Da-Yeh University


In the traditional electroplating industry for Acrylonitrile Butadiene Styrene (ABS), quality control inspection of the product surface is usually performed with the naked eye. However, defects on the surface of electroplated products are small and easily missed under reflective conditions, and when the number of defects and samples is large, manual inspection becomes challenging and time-consuming. We applied additive manufacturing (AM) to design and assemble an automatic optical inspection (AOI) system that incorporates recent advances in artificial intelligence. The system can identify defects on the reflective surface of the plated product. Building on the You Only Look Once (YOLO) deep learning framework, we ran the neural network models on a graphics processing unit (GPU) using the YOLO family of algorithms, from v2 to v5. Our system achieved an average accuracy above 70% when detecting defects in real-time video from production lines, and we compare the classification performance of the various YOLO algorithms. This visual inspection effort significantly reduces the labor cost of visual inspection in the electroplating industry and demonstrates its potential for smart manufacturing.
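
The authors' trained weights and dataset are not available here; as a generic starting point, a YOLOv5 model can be loaded through torch.hub and run on frames from a line camera. The image path and the custom weight file name below are placeholders.

```python
# Generic YOLOv5 inference sketch (not the authors' model). Requires the ultralytics/yolov5
# hub dependencies; 'plating_defects.pt' and 'plated_part.jpg' are placeholders.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
# model = torch.hub.load("ultralytics/yolov5", "custom", path="plating_defects.pt")

results = model("plated_part.jpg")       # path, URL, or numpy frame from the line camera
results.print()                          # per-class counts and confidences
detections = results.pandas().xyxy[0]    # bounding boxes as a DataFrame
```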

Read more at The International Journal of Advanced Manufacturing Technology

YOLO V3 + VGG16-based automatic operations monitoring and analysis in a manufacturing workshop under Industry 4.0

📅 Date:

✍️ Authors: Jihong Yan, Zipeng Wang

🔖 Topics: Convolutional Neural Network

🏢 Organizations: Harbin Institute of Technology


Against the background of Industry 4.0 and smart manufacturing, operators are still the core of production, and the standardization of their actions greatly affects production efficiency and quality, yet it has not received enough attention. For the monitoring and analysis of operators’ actions on the shop floor, this paper proposes a YOLO V3 + VGG16 transfer learning network. First, the regions containing key operators are detected with YOLO V3 and an action dataset is constructed. Second, transfer learning is used to achieve automatic recognition, monitoring, and analysis from small-sample data; the recognition accuracy of the proposed method exceeds 96%, and the average deviation of the measured action execution time is less than 1 s. This research is expected to provide guidance for increasing the degree of workshop automation, improving the standardization of operators’ actions, optimizing action processes, and ensuring product quality.
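
The paper couples YOLO V3 region detection with a VGG16 classifier; the fragment below shows only the common transfer-learning step of freezing VGG16's convolutional base and training a new head on the cropped operator regions. The number of action classes is an assumption.

```python
# Common VGG16 transfer-learning pattern (illustrative, not the paper's exact network):
# freeze the convolutional base and train a small head on operator regions cropped by a detector.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

n_actions = 8                                        # assumed number of action classes
model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():                # keep the pretrained filters fixed
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, n_actions)     # replace the final ImageNet layer

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
crops = torch.randn(4, 3, 224, 224)                  # operator regions cropped by the detector
logits = model(crops)                                # (4, n_actions) action scores
```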

Read more at ScienceDirect

How pioneering deep learning is reducing Amazon’s packaging waste

📅 Date:

✍️ Author: Sean O'Neill

🔖 Topics: Machine Learning, Computer Vision, Convolutional Neural Network, Sustainability, E-commerce

🏢 Organizations: Amazon


Fortunately, machine learning approaches, particularly deep learning, thrive on big data and massive scale, and a pioneering combination of natural language processing and computer vision is enabling Amazon to home in on using the right amount of packaging. These tools have helped Amazon drive change over the past six years, reducing per-shipment packaging weight by 36% and eliminating more than a million tons of packaging, equivalent to more than 2 billion shipping boxes.

“When the model is certain of the best package type for a given product, we allow it to auto-certify it for that pack type,” says Bales. “When the model is less certain, it flags a product and its packaging for testing by a human.” The technology is currently being applied to product lines across North America and Europe, automatically reducing waste at a growing scale.

Read more at Amazon Science

A deep transfer learning method for monitoring the wear of abrasive belts with a small sample dataset

📅 Date:

✍️ Authors: Zhihang Li, Qian Tang, Sibao Wang, Penghui Zhang

🔖 Topics: Convolutional Neural Network, Predictive Maintenance

🏢 Organizations: Chongqing University


According to the analysis of displacement data, a new method for the prediction of abrasive belt wear states using a multiscale convolutional neural network based on transfer learning is proposed. Initially, first-order difference preprocessing is ingeniously performed on displacement data. Then, the network parameters of the model are obtained by pretraining the fault dataset and are directly transferred or fine-tuned according to the preprocessed displacement data. Finally, the preprocessed displacement data corresponding to different abrasive belt wear states are accurately classified. This method verifies the application of transfer learning between cross-domain data in industry and resolves the contradiction between the large sample size required for deep learning and the difficulty of obtaining a large amount of sample data in actual production. The experimental results show that this method can accurately predict the wear status of abrasive belts, with an average prediction accuracy of 93.1%. This method has the advantages of low cost and easy operation, and can be applied to guide the replacement time of abrasive belts in production.
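
The multiscale network itself is not reproduced here; the fragment below sketches the two stated ingredients, first-order differencing of the displacement signal and fine-tuning a 1D CNN whose feature layers were pretrained on a different dataset. The shapes, the three wear states, and the weight file name are placeholders.

```python
# Sketch: first-order difference preprocessing plus fine-tuning a pretrained 1D CNN.
# 'pretrained_fault_cnn.pt' and all shapes are placeholders, not the paper's artifacts.
import torch
import torch.nn as nn

def preprocess(displacement):                   # displacement: (batch, window) tensor
    return torch.diff(displacement, dim=1).unsqueeze(1)   # first-order difference, add channel dim

class WearCNN(nn.Module):
    def __init__(self, n_states=3):             # e.g. mild / moderate / severe wear (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, 15, stride=2, padding=7), nn.ReLU(),
            nn.Conv1d(16, 32, 15, stride=2, padding=7), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, n_states)
    def forward(self, x):
        return self.classifier(self.features(x))

model = WearCNN()
# model.features.load_state_dict(torch.load("pretrained_fault_cnn.pt"))  # transferred weights
for p in model.features.parameters():
    p.requires_grad = False                      # freeze, or leave trainable to fine-tune
logits = model(preprocess(torch.randn(8, 2048)))
```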

Read more at ScienceDirect

Visual search: how to find manufacturing parts in a cinch

📅 Date:

✍️ Authors: Artem Ivashchenko, Sergey Parakhin, Aleksey Romanov

🔖 Topics: Convolutional Neural Network, Computer Vision, Optical Character Recognition, Visual Search

🏢 Organizations: Grid Dynamics


The process of engineering a robust mechanical product, whether it’s an escalator or a car engine, requires many small parts. We accept that these parts wear out over time and require replacement to avoid breakdowns and to keep the mechanics of the product running smoothly.

During our analysis of the data that the client shared with us, we found a mix of photos of the parts themselves, photos of packages, and photos showing only product labels. Serial numbers or easily distinguishable characters were clearly visible in some photographs, but not in all of them. One of the primary challenges we faced, therefore, was dealing with the differences between the photos the engineers were submitting and the images in the search catalog: visually indistinguishable images where only the model number differentiated the part, photos of a sticker with a serial number instead of the object itself, rulers alongside objects in photos to indicate scale, and drawings of the part in the catalog instead of photos.

For this use case we implemented a CNN model based on the ResNeXt architecture (ResNeXt-50 (32×4d)) pre-trained on the ImageNet dataset. However, the manufacturing parts we were dealing with were not adequately represented in the pre-training dataset, so we had to enhance the training dataset with about 10,000 independently sourced manufacturing part images along with the client-supplied labeled dataset.
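
The fine-tuning itself is out of scope here; the snippet below only shows the generic pattern of turning torchvision's ResNeXt-50 (32×4d) into an embedding extractor whose outputs can be indexed for nearest-neighbor part search.

```python
# Generic embedding extractor built on torchvision's ResNeXt-50 (32x4d); illustrative only,
# not Grid Dynamics' fine-tuned model.
import torch
import torch.nn as nn
from torchvision.models import resnext50_32x4d, ResNeXt50_32X4D_Weights

weights = ResNeXt50_32X4D_Weights.IMAGENET1K_V2
backbone = resnext50_32x4d(weights=weights)
backbone.fc = nn.Identity()                        # drop the ImageNet classification head
backbone.eval()

preprocess = weights.transforms()                  # matching resize / crop / normalize pipeline
image = torch.rand(3, 400, 600)                    # stand-in for a decoded RGB photo of a part
with torch.no_grad():
    embedding = backbone(preprocess(image).unsqueeze(0))   # 2048-d descriptor

# Catalog images are embedded the same way and indexed (e.g. with an approximate
# nearest-neighbor library); a query photo is answered by its closest catalog embeddings.
```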

Read more at Grid Dynamics Blog

Hybrid machine learning-enabled adaptive welding speed control

πŸ“… Date:

✍️ Authors: Joseph Kershaw, Rui Yu, YuMing Zhang, Peng Wang

πŸ”– Topics: machine learning, robot welding, convolutional neural network

🏒 Organizations: University of Kentucky


This research presents a preliminary study on developing appropriate Machine Learning (ML) techniques for real-time welding quality prediction and adaptive welding speed adjustment for GTAW welding at a constant current. In order to collect the data needed to train the hybrid ML models, two cameras are applied to monitor the welding process, with one camera (available in practical robotic welding) recording the top-side weld pool dynamics and a second camera (unavailable in practical robotic welding, but applicable for training purpose) recording the back-side bead formation. Given these two data sets, correlations can be discovered through a convolutional neural network (CNN) that is good at image characterization. With the CNN, top-side weld pool images can be analyzed to predict the back-side bead width during active welding control.
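
The study's hybrid models are not listed in the excerpt; the sketch below only illustrates the core mapping described, a CNN that regresses back-side bead width from a top-side weld pool image, plus a toy proportional speed correction. The architecture, image size, and target width are invented.

```python
# Illustrative CNN regressor from top-side weld pool images to back-side bead width.
# Architecture, image size, and the control rule are placeholders, not the paper's model.
import torch
import torch.nn as nn

class BeadWidthCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),                          # predicted bead width
        )
    def forward(self, x):
        return self.net(x)

frames = torch.randn(16, 1, 128, 128)                  # top-side camera frames
width_pred = BeadWidthCNN()(frames)

target_width = 4.0                                     # assumed desired back-side width (mm)
speed_correction = 0.1 * (width_pred.mean().item() - target_width)  # toy proportional adjustment
```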

Read more at ScienceDirect

Applying deep learning to sensor data to support workers in manufacturing

📅 Date:

✍️ Author: Yuichi Sakurai

🔖 Topics: Cyber-Physical Systems, Convolutional Neural Network

🏢 Organizations: Hitachi


To achieve next-generation production systems and Multiverse Mediation with CPSs, 4M (huMan, Machine, Material, and Method) work transitions need to be clarified and used more accurately. However, traditional systems cannot detect deviations in manual procedures. To resolve these issues, we are developing a highly accurate detection technology for “human work”. Figure 2 shows the assembly cells considered in this study.

Compared to conventional approaches, we achieved a 15% reduction in product assembly time and almost zero missed deviation detections (more than 95% work identification accuracy). These results demonstrate the potential of our system to support manufacturing workers efficiently and effectively and to contribute to greater efficiency and quality management in the assembly of complex equipment.

Read more at Hitachi Industrial AI Blog

Fabs Drive Deeper Into Machine Learning

📅 Date:

✍️ Author: Anne Meixner

🔖 Topics: Machine Learning, Machine Vision, Defect Detection, Convolutional Neural Network

🏭 Vertical: Semiconductor

🏢 Organizations: GlobalFoundries, KLA, SkyWater Technology, Onto Innovation, CyberOptics, Hitachi, Synopsys


For the past couple decades, semiconductor manufacturers have relied on computer vision, which is one of the earliest applications of machine learning in semiconductor manufacturing. Referred to as Automated Optical Inspection (AOI), these systems use signal processing algorithms to identify macro and micro physical deformations.

Defect detection provides a feedback loop for fab processing steps. Wafer test results produce bin maps (good or bad die), which also can be analyzed as images. Their data granularity is significantly larger than the pixelated data from an optical inspection tool. Yet test results from wafer maps can match the splatters generated during lithography and scratches produced from handling that AOI systems can miss. Thus, wafer test maps give useful feedback to the fab.

Read more at Semiconductor Engineering

Efficient federated convolutional neural network with information fusion for rolling bearing fault diagnosis

📅 Date:

✍️ Authors: Zehui Zhang, Xiaobin Xu, Wenfeng Gong, Yuwang Chen, Haibo Gao

🔖 Topics: Bearing, Federated Learning, Convolutional Neural Network, Machine Health

🏢 Organizations: Wuhan University of Technology, Hangzhou Dianzi University, The University of Manchester


In the past years, various deep learning-based fault diagnosis methods have been designed to guarantee the stable, safe, and efficient operation of electromechanical systems. To achieve excellent diagnostic performance, the conventional centralized learning (CL) approach collects as much data as possible from multiple industrial participants for deep model training. Due to privacy concerns and potential conflicts, however, industrial participants are unwilling to share their data resources. To solve these issues, this study proposes a fault diagnosis method based on federated learning (FL) and a convolutional neural network (CNN), which allows different industrial participants to collaboratively train a global fault diagnosis model without sharing their local data. Model training is executed locally within each industrial participant, and the cloud server updates the global model by aggregating the participants' local models. Specifically, an adaptive method is designed to adjust the model aggregation interval according to feedback from the industrial participants, in order to reduce the communication cost while ensuring model accuracy. In addition, momentum gradient descent (MGD) and dropout layers are used to accelerate the convergence rate and avoid model overfitting, respectively. The effectiveness of the proposed method is verified on a non-independent and identically distributed (non-iid) rolling bearing fault dataset. The experimental results indicate that the proposed method has higher accuracy than traditional fault diagnosis methods. Moreover, this study provides a promising collaborative training approach for the fault diagnosis field.
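
The adaptive aggregation interval and MGD details are in the paper; the loop below sketches only the basic federated-averaging pattern it builds on, where each participant trains the shared CNN locally and the server averages the weights. The toy data, model, and fixed schedule are assumptions.

```python
# Minimal federated-averaging sketch with toy data and a fixed aggregation interval; the paper
# adds an adaptive interval, momentum gradient descent, and dropout on top of this pattern.
import copy
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Conv1d(1, 8, 15, stride=2, padding=7), nn.ReLU(),
                         nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 4))

def local_train(model, data, labels, epochs=1):
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(data), labels).backward()
        opt.step()
    return model.state_dict()

global_model = make_model()
clients = [(torch.randn(32, 1, 512), torch.randint(0, 4, (32,))) for _ in range(3)]  # local datasets

for round_ in range(5):                                   # communication rounds
    states = [local_train(copy.deepcopy(global_model), x, y) for x, y in clients]
    averaged = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(averaged)                # server-side aggregation
```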

Read more at Control Engineering Practice

3D Vision Technology Advances to Keep Pace With Bin Picking Challenges

📅 Date:

✍️ Author: Jimmy Carroll

🔖 Topics: Machine Vision, Convolutional Neural Network

🏢 Organizations: Zivid, CapSen Robotics, IDS Imaging Development Systems, Photoneo, Universal Robots, Allied Moulded


When a bin has one type of object with a fixed shape, bin picking is straightforward, as CAD models can easily recognize and localize individual items. But randomly positioned objects can overlap or become entangled, presenting one of the greatest challenges in bin picking. Identifying objects with varying shapes, sizes, colors, and materials poses an even larger challenge, but by deploying deep learning algorithms, it is possible to find and match objects that do not conform to one single geometrical description but belong to a general class defined by examples, according to Andrea Pufflerova, Public Relations Specialist at Photoneo.

“A well-trained convolutional neural network (CNN) can recognize and classify mixed and new types of objects that it has never come across before,” says Pufflerova.

Read more at A3

Real-World ML with Coral: Manufacturing

📅 Date:

✍️ Author: Michael Brooks

🔖 Topics: Edge Computing, AI, Machine Learning, Computer Vision, Convolutional Neural Network, TensorFlow, Worker Safety

🏢 Organizations: Coral


For over 3 years, Coral has been focused on enabling privacy-preserving Edge ML with low-power, high performance products. We’ve released many examples and projects designed to help you quickly accelerate ML for your specific needs. One of the most common requests we get after exploring the Coral models and projects is: How do we move to production?

  • Worker Safety - Performs generic person detection (powered by COCO-trained SSDLite MobileDet) and then runs a simple algorithm to detect bounding box collisions to see if a person is in an unsafe region; a minimal sketch of this collision check appears after the list.
  • Visual Inspection - Performs apple detection (using the same COCO-trained SSDLite MobileDet from Worker Safety) and then crops the frame to the detected apple and runs a retrained MobileNetV2 that classifies fresh vs rotten apples.
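
Coral's example code lives in its own repositories; as a standalone illustration of the bounding-box collision step mentioned in the Worker Safety demo, here is a tiny intersection check between a detected person box and a configured keep-out zone. The zone coordinates and box format are made up.

```python
# Toy version of the keep-out check used after person detection: flag a frame whenever a
# detected person's bounding box overlaps a configured unsafe region. Coordinates are made up.
from typing import NamedTuple

class Box(NamedTuple):
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def overlaps(a: Box, b: Box) -> bool:
    # True when the two axis-aligned boxes intersect
    return a.xmin < b.xmax and b.xmin < a.xmax and a.ymin < b.ymax and b.ymin < a.ymax

UNSAFE_ZONE = Box(0.60, 0.20, 0.95, 0.90)          # normalized image coordinates

def check_frame(person_boxes):
    return any(overlaps(p, UNSAFE_ZONE) for p in person_boxes)

print(check_frame([Box(0.55, 0.30, 0.70, 0.85)]))  # True: person has entered the zone
```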

Read more at TensorFlow Blog