Industrial Robot
Assembly Line
Vibration Compensation Improves Robot Performance
Any machine that moves deals with vibration, whether it’s a 3D printer, machine tool or robot. Vibration is typically managed well enough that many users never realize it is a problem, but with the right strategies for handling it, machines can run faster and more efficiently. Ulendo is an Ann Arbor, Michigan-based startup that produces software solutions for manufacturing automation. The company launched with a product that’s now called Ulendo VC (for “vibration compensation”), which applies an algorithm to fused filament fabrication (FFF) 3D printers, counteracting the machine’s vibration patterns and enabling it to run up to five times faster while maintaining part quality.
Since the launch of Ulendo VC, the company has expanded into laser powder bed fusion (LPBF) additive manufacturing with Ulendo HC (heat compensation), which optimizes the path of an LPBF machine’s laser to reduce heat-induced deformation and stress. It’s also offering Ulendo Calibration-as-a-Service, which applies its vibration compensation to end-users’ extrusion 3D printers (as opposed to working with machine suppliers, as is the case with Ulendo VC). The company is also broadening its reach to include machines outside the 3D printing space with its fourth product: vibration compensation for robots.
Ulendo’s solution actually compensates for a machine’s vibrations instead of working around them. It does this by “tricking the machine,” as Ulendo founder Chinedum Okwudire describes it. As with its earlier products, Ulendo measures the machine’s vibration patterns and creates a “calibration map” that predicts how the machine will vibrate and calculates how to offset that motion. So, if a machine veers to the left instead of going straight, Ulendo VC-R (vibration compensation for robots) adjusts the commanded path to the right, which cancels out the leftward motion and keeps the machine going straight.
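Ulendo’s production algorithm is proprietary, but the general idea can be sketched under simplifying assumptions: if an axis behaves like a lightly damped second-order system, the commanded path can be pre-distorted so the predicted deviation cancels out. All parameters below are invented for illustration.

```python
import numpy as np

def simulate_axis(command, wn=40.0, zeta=0.05, dt=0.001):
    """Crude second-order model of a vibrating axis (hypothetical parameters)."""
    x, v = 0.0, 0.0
    response = np.zeros_like(command)
    for i, target in enumerate(command):
        a = wn**2 * (target - x) - 2 * zeta * wn * v   # spring-damper pulled toward the command
        v += a * dt
        x += v * dt
        response[i] = x
    return response

def precompensate(path, wn=40.0, zeta=0.05, dt=0.001):
    """Invert the model: pre-distort the command so the axis lands on the desired path."""
    vel = np.gradient(path, dt)
    acc = np.gradient(vel, dt)
    return path + (2 * zeta / wn) * vel + acc / wn**2

dt = 0.001
t = np.arange(0, 1, dt)
desired = 0.5 - 0.5 * np.cos(np.pi * np.minimum(t / 0.3, 1.0))   # fast, smooth move
naive = simulate_axis(desired, dt=dt)
compensated = simulate_axis(precompensate(desired, dt=dt), dt=dt)
print(np.abs(naive - desired).max(), np.abs(compensated - desired).max())
```

Running the sketch shows the pre-compensated command tracking the desired path far more closely than the naive one, which is the effect the calibration map is meant to achieve.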
“When we kicked off @MachinaLabs_ to build a robotic craftsman, we knew it wasn’t just about building a robot; it was about building a mind of the craftsman. Two major challenges stood in our way: 1) Dexterity: a robot needs to handle diverse manufacturing operations like a human…” — Edward Mehr (@EdwardMehr), December 19, 2024
Chinese Robots Hit the Factory Floor
China’s factories have been at the heart of the country’s economic rise, helping it grow to account for nearly a third of global manufacturing and providing millions of jobs in the process. Increasingly, though, robots are taking over. By the end of last year, Chinese manufacturers had deployed 470 robots per 10,000 workers, roughly three times the global average.
ARMADA: Augmented Reality for Robot Manipulation and Robot-Free Data Acquisition
Teleoperation for robot imitation learning is bottlenecked by hardware availability. Can high-quality robot data be collected without a physical robot? We present a system for augmenting Apple Vision Pro with real-time virtual robot feedback. By providing users with an intuitive understanding of how their actions translate to robot motions, we enable the collection of natural barehanded human data that is compatible with the limitations of physical robot hardware. We conducted a user study with 15 participants demonstrating 3 different tasks each under 3 different feedback conditions and directly replayed the collected trajectories on physical robot hardware. Results suggest that live robot feedback dramatically improves the quality of the collected data, pointing to a new avenue for scalable human data collection without access to robot hardware.
Robotics engineers are in high demand — but what is the job really like?
Top Reasons Robotic Companies Fail
Mastering Robotic Arm Design: Efficient Leak Detection & More
ROKAE Robotics: Empowering Foton Cummins Lights-Out Factory
Building Custom Robot Simulations with Wandelbots NOVA and NVIDIA Isaac Sim
NVIDIA Inception partner and deep tech startup Wandelbots is making it easier for any roboticist to simulate robots in physics-based digital training environments, delivered through intuitive human-machine interfaces (HMI). Developers, system integrators, and automation engineers use Wandelbots to build their own application interface, connecting end users to the simulated environment. Factory planners can then interact with a robot cell on the shop floor and use the digital world to train the robot.
Using the “ghost teaching” method (Video 1), developers use a visual tool to manipulate a robot’s end effector or workpieces in simulation, teaching it how to move and pick up objects. This intuitive approach simplifies the programming of robot cells by providing a visual interface for positioning and movement, making it accessible even for those with minimal programming experience. With pose data from the simulation automatically transferred to the robot’s program, the need for complex coding is eliminated.
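Wandelbots’ own interfaces handle this transfer, but the idea can be sketched with hypothetical data structures (this is not the NOVA API): capture the end-effector poses taught in simulation and emit them as generic move instructions a controller-side program could consume.

```python
# Illustrative only: convert poses captured during "ghost teaching" into a simple
# waypoint program. Data structures and field names are assumptions, not Wandelbots'.
from dataclasses import dataclass
from typing import List

@dataclass
class Pose:
    x: float
    y: float
    z: float      # position in mm
    rx: float
    ry: float
    rz: float     # orientation as Euler angles in degrees

def poses_to_program(poses: List[Pose], speed_mm_s: float = 250.0) -> List[dict]:
    """Turn taught poses into generic linear-move instructions."""
    return [
        {
            "op": "move_linear",
            "target": [p.x, p.y, p.z, p.rx, p.ry, p.rz],
            "speed": speed_mm_s,
            "name": f"taught_waypoint_{i}",
        }
        for i, p in enumerate(poses)
    ]

taught = [Pose(400, 0, 300, 180, 0, 0), Pose(400, 150, 120, 180, 0, 0)]
for step in poses_to_program(taught):
    print(step)
```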
PickNik Robotics’ MoveIt Pro 6 Platform Brings Flexibility to Robotics
PickNik Robotics (PR) unveiled its MoveIt Pro Release 6, the latest version of its flexible, open development platform for robotic applications spanning multiple industries. The hardware-agnostic, AI-driven system now has a true-to-life simulation engine for digital twins or virtual representations of physical assets and processes.
The digital twin technology in the MoveIt Pro platform runs robot runtime algorithms in a true-to-life physics simulator. Users benefit from many reference applications, including bin picking, welding/cutting, door opening (for facility access), preassembly and assembly, mobile manipulation, and more. Customers can also integrate the MoveIt Pro platform into fleet management platforms to provide a holistic view of operations.
Inovance Robotics: Enhancing Packaging Industry Automation | Smart Manufacturing | Inovance
In America’s Factories, Even the Robots Are Getting Less Work
Manufacturers are cutting back on purchases of automation equipment, executives said, as business slows on production lines and shop floors. More human workers are lining up for work again, too. Some companies that bought robots during the pandemic-driven labor crunch underestimated the maintenance and programming skill needed to deploy them to more complicated tasks.
About 21% of manufacturers cited labor shortfalls as an impediment to full production during the second quarter, down from 45% in the same quarter in 2022, according to Census Bureau survey data compiled by Michigan State University supply management professor Jason Miller. Material shortages were cited by less than 12% of manufacturers, compared with 39% two years earlier.
Vention launches MachineMotion AI, an AI-enabled automation controller for robotics and industrial applications
Vention, the company behind the cloud-based Manufacturing Automation Platform (MAP), is launching an AI-enabled motion controller, MachineMotion AI. This 3rd-generation controller, built on NVIDIA accelerated computing, is designed to significantly simplify the development and deployment of robotics applications for manufacturers of all sizes.
This announcement marks a major step into the post-PLC (programmable logic controller) era. Automated equipment—including robots, conveyors, and computer vision systems—can now be orchestrated by a single controller powering the entire machine, making it truly plug-and-play. By eliminating the traditional divide between robot and PLC programming, this architecture simplifies programming and speeds up the deployment cycle, leading to improved ROI for manufacturers.
MachineMotion AI is compatible with leading robot brands, including Universal Robots, FANUC, and ABB. It delivers up to 3,000W of power and drives up to 30 servo motors via EtherCAT. Powered by the NVIDIA Jetson platform for edge AI and robotics, MachineMotion AI advances AI-enabled robots with NVIDIA GPU-accelerated path planning and the ability to run 2D/3D perception models trained in synthetic and physical environments.
Fast and Accurate Relative Motion Tracking for Dual Industrial Robots
Industrial robotic applications such as spraying, welding, and additive manufacturing frequently require fast, accurate, and uniform motion along a 3D spatial curve. To increase process throughput, some manufacturers propose a dual-robot setup to overcome the speed limitation of a single robot. Industrial robot motion is programmed through waypoints connected by motion primitives (Cartesian linear and circular paths and linear joint paths at constant Cartesian speed). The actual robot motion is affected by the blending between these motion primitives and the pose of the robot (an outstretched/near-singularity pose tends to have larger path tracking errors). Choosing the waypoints and the speed along each motion segment to achieve the performance requirement is challenging. At present, there is no automated solution, and laborious manual tuning by robot experts is needed to approach the desired performance. In this paper, we present a systematic three-step approach to designing and programming a dual robot system to optimize system performance. The first step is to select the relative placement between the two robots based on the specified relative motion path. The second step is to select the relative waypoints and the motion primitives. The final step is to update the waypoints iteratively based on the actual measured relative motion. Waypoint iteration is first executed in simulation and then completed using the actual robots. For performance assessment, we use the mean path speed subject to the relative position and orientation constraints and the path speed uniformity constraint. We have demonstrated the effectiveness of this method on two systems, a physical testbed of two ABB robots and a simulation testbed of two FANUC robots, for two challenging test curves.
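The paper’s full procedure covers robot placement and primitive selection as well; its final step, iterating the waypoints against the measured relative motion, can be sketched as a simple learning-style update. The blending model and gain below are stand-ins, not the authors’ formulation.

```python
# Hedged sketch of waypoint iteration: nudge each programmed waypoint opposite to
# the path error measured when the dual-robot system executes the relative motion.
import numpy as np

def iterate_waypoints(waypoints, execute_and_measure, desired, gain=0.5, iters=5):
    """waypoints, desired: (N, 3) relative positions along the curve.
    execute_and_measure: runs the robots (or a simulation) and returns the
    measured relative positions at the same N parameter values."""
    wp = waypoints.copy()
    for _ in range(iters):
        error = execute_and_measure(wp) - desired   # where the blended motion deviates
        wp -= gain * error                          # shift waypoints to cancel the deviation
    return wp

# Toy stand-in for the real system: blending drags the path toward a smoothed curve.
def fake_execution(wp):
    smoothed = wp.copy()
    smoothed[1:-1] = 0.25 * wp[:-2] + 0.5 * wp[1:-1] + 0.25 * wp[2:]
    return smoothed

s = np.linspace(0, np.pi, 50)
desired = np.stack([np.linspace(0, 1, 50), np.sin(s), np.zeros(50)], axis=1)
tuned = iterate_waypoints(desired.copy(), fake_execution, desired)
print(np.linalg.norm(fake_execution(tuned) - desired, axis=1).max())
```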
Open-TeleVision: Why human intelligence could be the key to next-gen robotic automation
MIT and UCSD unveiled a new immersive remote control experience for robots. This innovative system, dubbed “Open-TeleVision,” enables operators to actively perceive the robot’s surroundings while mirroring their hand and arm movements. As the researchers describe it, the system “creates an immersive experience as if the operator’s mind is transmitted to a robot embodiment.”
The Open-TeleVision system takes a different approach to robotics. Instead of trying to replicate human intelligence in a machine, it creates a seamless interface between human operators and robotic bodies. The researchers explain that their system “allows operators to actively perceive the robot’s surroundings in a stereoscopic manner. Additionally, the system mirrors the operator’s arm and hand movements on the robot.”
Mech-GPT: Enabling Robots to Understand Linguistic Instructions and Handle Complex Tasks
OpenVLA: An Open-Source Vision-Language-Action Model
Large policies pretrained on a combination of Internet-scale vision-language data and diverse robot demonstrations have the potential to change how we teach robots new skills: rather than training new behaviors from scratch, we can fine-tune such vision-language-action (VLA) models to obtain robust, generalizable policies for visuomotor control. Yet, widespread adoption of VLAs for robotics has been challenging as 1) existing VLAs are largely closed and inaccessible to the public, and 2) prior work fails to explore methods for efficiently fine-tuning VLAs for new tasks, a key component for adoption. Addressing these challenges, we introduce OpenVLA, a 7B-parameter open-source VLA trained on a diverse collection of 970k real-world robot demonstrations. OpenVLA builds on a Llama 2 language model combined with a visual encoder that fuses pretrained features from DINOv2 and SigLIP. As a product of the added data diversity and new model components, OpenVLA demonstrates strong results for generalist manipulation, outperforming closed models such as RT-2-X (55B) by 16.5% in absolute task success rate across 29 tasks and multiple robot embodiments, with 7x fewer parameters. We further show that we can effectively fine-tune OpenVLA for new settings, with especially strong generalization results in multi-task environments involving multiple objects and strong language grounding abilities, and outperform expressive from-scratch imitation learning methods such as Diffusion Policy by 20.4%. We also explore compute efficiency; as a separate contribution, we show that OpenVLA can be fine-tuned on consumer GPUs via modern low-rank adaptation methods and served efficiently via quantization without a hit to downstream success rate. Finally, we release model checkpoints, fine-tuning notebooks, and our PyTorch codebase with built-in support for training VLAs at scale on Open X-Embodiment datasets.
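For readers curious what the low-rank-adaptation route looks like in practice, here is a hedged sketch using Hugging Face transformers and peft. The model id and LoRA settings are assumptions on my part; the authors’ released fine-tuning notebooks are the authoritative recipe.

```python
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import LoraConfig, get_peft_model

model_id = "openvla/openvla-7b"   # assumed Hugging Face id for the released checkpoint
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)  # images + instructions
model = AutoModelForVision2Seq.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)

lora_cfg = LoraConfig(
    r=32, lora_alpha=16, lora_dropout=0.05,
    target_modules="all-linear",   # adapt all linear layers; a common default, not the paper's spec
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()   # only a small fraction of the 7B weights will train
```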
Robotic palm mimics human touch
MIT CSAIL researchers enhance robotic precision with sophisticated tactile sensors in the palm and agile fingers, setting the stage for improvements in human-robot interaction and prosthetic technology. For anyone who has kept up with this fast-moving field, getting robots to grip and grasp more like humans has been an ongoing Herculean effort. Now, a new robotic hand design developed in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has rethought the oft-overlooked palm. The new design uses advanced sensors for a highly sensitive touch, helping the “extremity” handle objects with more detailed and delicate precision.
GelPalm has a gel-based, flexible sensor embedded in the palm, drawing inspiration from the soft, deformable nature of human hands. The sensor relies on a color-illumination technique in which red, green, and blue LEDs light the object while a camera captures the reflections. The combination generates detailed 3D surface models for precise robotic interactions.
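A photometric-stereo-style reconstruction gives a feel for how such RGB-illuminated gel sensors can recover geometry; this is my simplification, and GelPalm’s actual pipeline may differ. Each color channel acts as a light from a known direction, so per-pixel RGB intensities yield surface normals, which can then be integrated into a rough height map.

```python
import numpy as np

# Assumed directions of the red, green, and blue illumination (unit-ish vectors).
LIGHTS = np.array([[ 0.8,  0.0, 0.6],
                   [-0.4,  0.7, 0.6],
                   [-0.4, -0.7, 0.6]])

def normals_from_rgb(image):
    """image: (H, W, 3) float reflected intensities -> (H, W, 3) unit normals."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).T                 # (3, H*W) stack of RGB readings
    n = np.linalg.solve(LIGHTS, pixels).T           # invert I = LIGHTS @ n per pixel
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-8
    return n.reshape(h, w, 3)

def height_from_normals(normals, dx=1.0):
    """Very rough height map by integrating slopes row-wise and column-wise."""
    zx = -normals[..., 0] / (normals[..., 2] + 1e-8)
    zy = -normals[..., 1] / (normals[..., 2] + 1e-8)
    return 0.5 * (np.cumsum(zx, axis=1) + np.cumsum(zy, axis=0)) * dx
```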
Unlocking new value in industrial automation with AI
Working with the robotics team at NVIDIA, we have successfully tested NVIDIA robotics platform technologies, including NVIDIA Isaac Manipulator foundation models for a robot grasping skill, with the Intrinsic platform. This prototype features an industrial application specified by one of our partners and customers, Trumpf Machine Tools. This grasping skill, trained with 100% synthetic data generated by NVIDIA Isaac Sim, can be used to build sophisticated solutions that perform adaptive and versatile object-grasping tasks in simulation and in the real world. Instead of hard-coding specific grippers to grasp specific objects in a certain way, efficient code for a particular gripper and object is auto-generated to complete the task using the foundation model and synthetic training data.
Together with Google DeepMind, we’ve demonstrated some novel and high-value methods for robotic programming and orchestration — many of which have practical applications today:
- Multi-robot motion planning with machine learning
- Learning from demonstration, applied to two-handed dexterous manipulation
- Foundation model for perception: enabling a robotic system to understand the next task and the physical objects involved requires a real-time, accurate, and semantic understanding of the environment.
Vibration Suppression Methods for Industrial Robot Time-Lag Filtering
This paper analyzes traditional vibration suppression methods in order to solve the vibration problem caused by the limited stiffness of flexible industrial robots. The principle of closed-loop control with dynamic feedforward vibration suppression is described as the main method for robot vibration suppression. The paper proposes a time-lag filtering method based on T-trajectory interpolation, which combines the T-planning curve with time-lag filtering. The method’s basic principle is to dynamically adjust the trajectory output through the algorithm, which effectively suppresses the amplitude of harmonic components in a specific frequency band to improve the vibration response of industrial robot systems. The experiment compared traditional vibration suppression methods with the time-lag filtering method based on T-trajectory interpolation, and a straight-line method was proposed to measure the degree of vibration. The results demonstrate that the time-lag filtering method based on T-trajectory interpolation is highly effective in reducing the vibration of industrial robots. This makes it an excellent option for scenarios that demand real-time response and high-precision control, ultimately enhancing the efficiency and stability of robots in performing their tasks.
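The paper’s exact T-trajectory time-lag filter is not reproduced here; a classical zero-vibration (ZV) input shaper illustrates the same underlying idea of delaying and superposing trajectory components so that a known resonance cancels out.

```python
import numpy as np

def zv_shaper(wn, zeta, dt):
    """Two-impulse shaper for natural frequency wn [rad/s] and damping ratio zeta."""
    wd = wn * np.sqrt(1 - zeta**2)                      # damped frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)               # impulse amplitudes sum to 1
    delay = int(round((np.pi / wd) / dt))               # half the damped period, in samples
    kernel = np.zeros(delay + 1)
    kernel[0], kernel[-1] = amps
    return kernel

def shape_trajectory(traj, wn, zeta, dt):
    """Convolve the planned trajectory with the shaper (adds a small time lag)."""
    return np.convolve(traj, zv_shaper(wn, zeta, dt))[: len(traj)]

dt = 0.001
t = np.arange(0, 2, dt)
trapezoid = np.clip(t / 0.5, 0, 1)                      # simple T-curve-like ramp
shaped = shape_trajectory(trapezoid, wn=2 * np.pi * 8, zeta=0.03, dt=dt)
```

The shaped command reaches the same endpoint slightly later but excites far less residual vibration at the targeted resonance, which is the trade-off the paper’s method also manages.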
Covariant Announces a Universal AI Platform for Robots
Covariant is announcing RFM-1, which the company describes as a robotics foundation model that gives robots the “human-like ability to reason.” “Foundation model” means that RFM-1 can be trained on more data to do more things—at the moment, it’s all about warehouse manipulation because that’s what it’s been trained on, but its capabilities can be expanded by feeding it more data. “Our existing system is already good enough to do very fast, very variable pick and place,” says Covariant co-founder Pieter Abbeel. “But we’re now taking it quite a bit further. Any task, any embodiment—that’s the long-term vision. Robotics foundation models powering billions of robots across the world.” From the sound of things, Covariant’s business of deploying a large fleet of warehouse automation robots was the fastest way for them to collect the tens of millions of trajectories (how a robot moves during a task) that they needed to train the 8 billion parameter RFM-1 model.
Get it Done with Automated Tote Assembly
Augmented reality makes new robots easier to start up
New KUKA.MixedReality software visualizes the environment of robot cells live on your smartphone to support fast, safe and intuitive robot start-up. The mobile app displays tools and interference geometries to enable early detection of potential hazards so users can eliminate them before a robot starts work.
Augmented Reality (AR) enables intuitive robot startup. It connects the real and virtual worlds to enrich the environment of the robotic cell with clear, uncomplicated digital information. Users can detect and correct errors quickly, which accelerates installation and increases safety. For example, the software can simulate robot motion with a virtual gripper. To prevent damage to the robot or gripper, any potential collisions that show up in the AR environment can be prevented early in the real environment.
KUKA.MixedReality consists of the KUKA.MixedReality Assistant app and the additional KUKA.MixedReality Safe technology package, which is installed on the robot controller.
Robots Are Looking Better to Detroit as Labor Costs Rise
Tesla has been a leader in factory robotization, putting pressure on competitors to follow suit. Last year, executives at the world’s most valuable automaker said introducing more automated equipment was a crucial tool in its goal to cut the cost of making future models by 50%.
Dozens of new battery factories and electric-vehicle plants in the works will also open the door to broader use of high-tech systems, analysts say. It is easier and less costly to install robots in a new facility versus retrofitting an existing one. Plus, it is more streamlined to have updated systems that “speak” to each other smoothly, as opposed to popping in a new machine among older ones.
What Progress Has There Been In Industrial Robots?
Robots, separated from the software and sensors that give them perception and intelligence, aren’t terribly complex (conceptually at least). They’re a collection of control systems, motors, and gearboxes mounted to lever arms that make it possible to move things around in particular ways in 3D space. They can have different end effectors mounted to them for tasks like grasping, welding, or painting. One possible axis of improvement for robotic hardware is cost: if robots get cheaper over time, it becomes possible to cost-effectively use them in more locations.
So, overall, robots have fallen substantially in cost since the 1980s, while simultaneously getting more precise. A robot with a 1-meter reach and 10 kg payload capacity would cost anywhere from 2 to 3 times as much in the 1980s as it does today, even before considering the rise of cheap Chinese robot arms, which seem poised to make prices fall even more. And the modern industrial robot is roughly 100 times more accurate, and far more capable at smooth, continuous motion, than robots in the past.
The Future of Robotics: Robotics Foundation Models and the Role of Data
One key factor that has enabled the success of generative AI in the digital world is a foundation model trained on a tremendous amount of internet-scale data. However, a comparable dataset did not previously exist in the physical world to train a robotics foundation model. That dataset had to be built from the ground up — composed of vast amounts of “warehouse-scale” real-world data and synthetic data.
Covariant’s robotics foundation model relies on this mix of data. General image data, paired with text data, is used to help the model learn a foundational semantic understanding of the visual world. Real-world warehouse production data, augmented with simulation data, is used to refine its understanding of specific tasks needed in warehouse operations, such as object identification, 3D understanding, grasp prediction, and place prediction.
Robots Assemble Tools…for Robots
When assessing the company’s production line, OnRobot engineers recognized that numerous assembly processes involved the repetitive use of screws. One process, in particular, stood out: the mounting of Quick Changers to OnRobot’s various end-of-arm tooling products. The process is common across many OnRobot products. Regardless of the base product, the process requires the same four screws.
At OnRobot’s factory in Denmark, the initial automated screwdriving application focused on mounting the Quick Changer onto the company’s 2FG7, 3FG15 and VGC10 grippers. Soon, however, the application scope expanded to include the mounting of printed circuit board assemblies on the VGC10 gripper. In the near future, the company’s RG2 and RG6 grippers will be integrated into the automated Quick Changer assembly process, too. Additional subassembly tasks are on the horizon.
Creative Robot Tool Use with Large Language Models
We introduce RoboTool, which enables robots to use tools creatively with large language models, solving long-horizon hybrid discrete-continuous planning problems under environment- and embodiment-related constraints.
In this work, we are interested in solving language-instructed long-horizon robotics tasks with implicitly activated physical constraints. By providing LLMs with adequate numerical semantic information in natural language, we observe that LLMs can identify the activated constraints induced by the spatial layout of objects in the scene and the robot’s embodiment limits, suggesting that LLMs may maintain knowledge and reasoning capability about the 3D physical world. Furthermore, our comprehensive tests reveal that LLMs are not only adept at employing tools to transform otherwise unfeasible tasks into feasible ones but also display creativity in using tools beyond their conventional functions, based on their material, shape, and geometric features.
Mech-Mind's Industrial 3D Camera Mech-Eye: Empowering Robotic Integrators
Introducing TwinBox: RoboDK’s Compact Solution for Production Robot Integration
RoboDK TwinBox represents the latest step in production robot programming for automation engineers. This compact system, launched in November 2023, integrates pre-installed RoboDK software into industrial PCs (IPCs) and small single-board computers. RoboDK TwinBox can manage multiple devices and robots from various manufacturers simultaneously in a production environment. TwinBox can be controlled through a web browser, allowing you to trigger actions remotely and view your cell in 3D.
Automated Battery Assembly Line
Explore Mech-Mind's innovative application in EV battery production
Amazon Introducing Warehouse Overhaul With Robotics to Speed Deliveries
Amazon is introducing an array of new artificial intelligence and robotics capabilities into its warehouse operations that will reduce delivery times and help identify inventory more quickly. The revamp will change the way Amazon moves products through its fulfillment centers with new AI-equipped sortation machines and robotic arms. It is also set to alter how many of the company’s vast army of workers do their jobs. Amazon says its new robotics system, named Sequoia after the giant trees native to California’s Sierra Nevada region, is designed for both speed and safety. Humans are meant to work alongside new machines in a way that should reduce injuries, the company says.
Amazon said it would also start to test a bipedal robot named Digit in its operations. Digit, which is designed by Agility Robotics, can move, grasp and handle items, and will initially be used by the company to pick up and move empty tote containers.
Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning
A new AI agent developed by NVIDIA Research that can teach robots complex skills has trained a robotic hand to perform rapid pen-spinning tricks — for the first time as well as a human can. The Eureka research, published today, includes a paper and the project’s AI algorithms, which developers can experiment with using NVIDIA Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on NVIDIA Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework. Eureka itself is powered by the GPT-4 large language model.
Increase manufacturing process efficiency by 25% with AI, Opcenter and Retrocausal, a Siemens partner
New Foundations: Controlling robots with natural language
The integration of Large Language Models (LLMs) in robotics is a rapidly evolving field, with numerous projects pushing the boundaries of what’s possible. These projects are not just isolated experiments, but pieces of a larger puzzle that collectively paint a picture of a future where robots are more intelligent, adaptable and interactive.
SayCan and Code as Policies are two early papers that show how an LLM can understand a task in natural language and create actions from it. Code as Policies leverages the ability of LLMs to output code, demonstrating how the language model can produce the actual code to perform a robotic action.
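A toy sketch of that pattern (my illustration, not the paper’s actual prompts or API): the LLM is shown a small robot-control API and asked to write the policy code for a task, which is then executed against real or simulated primitives.

```python
ROBOT_API_DOC = """You control a robot through these Python functions:
  detect(name) -> (x, y)   # position of a named object
  pick(x, y)               # grasp at a position
  place(x, y)              # release at a position
Respond with Python code only."""

def detect(name):
    # Stub perception for the example; a real system would call a vision model.
    return {"red block": (0.2, 0.4), "blue bin": (0.7, 0.1)}.get(name, (0.0, 0.0))

def pick(x, y):
    print(f"pick at ({x}, {y})")

def place(x, y):
    print(f"place at ({x}, {y})")

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns the kind of code an LLM might write.
    return ('x, y = detect("red block")\n'
            'pick(x, y)\n'
            'tx, ty = detect("blue bin")\n'
            'place(tx, ty)')

def run_code_as_policy(task: str) -> None:
    code = call_llm(ROBOT_API_DOC + "\nTask: " + task)
    exec(code, {"detect": detect, "pick": pick, "place": place})  # restricted namespace

run_code_as_policy("put the red block in the blue bin")
```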
Instruct2Act connects this sense-making ability with vision capabilities. This way the robotic application (in this case a simulation) can identify, localize and segment (define object outlines for the best grabbing position) known or unknown objects according to the task. Similarly, NLMap connects the SayCan approach with a mapping step, in which the robot scans a room for objects before it can output tasks. The TidyBot research project focuses on a real-world application for LLMs and robotics: a team at Princeton University developed a robot that can tidy up a room. It adapts to personal preferences (“socks in the 3rd drawer on the right”) and benefits from general language understanding. For example, it knows that trash should go into the trash bin because it was trained on internet-scale language data.
Interactive Language achieves robotic actions from spoken commands by training a neural network on demonstrated moves connected with language and vision data.
While much of the work related to this technology is still in its early stages and limited to lab research, some applications, such as PickGPT from the logistics robotics company Sereact, are starting to show the vast commercial potential.
Digital Robotics Twin in Omniverse
Optimizing Multi-Robot Workcell Performance
What distinguishes the many possible designs? In a good design, every robot spends approximately the same amount of time per cycle; we want to avoid having some robots idle while others are still working. Similarly, in a good design, every robot is able to move without spending much time either waiting for other robots to move out of its way or taking a non-optimal path so as to avoid other robots. The goal of a design is to enable high-performance choreography of the robots.
Our team has developed Optimization-as-a-Service (OaaS), which uses a proprietary algorithm to find high-performing design options that would otherwise take a team of engineers months to discover. We have obtained speedups of 5 to 20 percent using our OaaS (discussed later), compared to designs that were laboriously developed by experienced engineers. This result highlights that design is both very difficult and very important.
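The OaaS algorithm itself is proprietary; as a toy illustration of the two qualities described above (balanced workloads, minimal idle time), one can score candidate task-to-robot assignments like this. Task times and weights are invented.

```python
from itertools import product

task_times = {"load": 4.0, "weld_a": 6.5, "weld_b": 6.0, "inspect": 3.0}

def score(assignment, n_robots=2):
    busy = [0.0] * n_robots
    for task, robot in assignment.items():
        busy[robot] += task_times[task]
    cycle_time = max(busy)                 # the slowest robot sets the pace
    imbalance = max(busy) - min(busy)      # idle time of the fastest robot
    return cycle_time + 0.5 * imbalance    # weighted objective (weights are arbitrary)

best = min(
    (dict(zip(task_times, combo)) for combo in product(range(2), repeat=len(task_times))),
    key=score,
)
print(best, score(best))
```

A real workcell optimizer must also account for reach, collisions and path timing, which is why the search is far harder than this enumeration suggests.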
Improving Quality and Consistency with Robotic Sanding
Toyota Research Institute Unveils Breakthrough in Teaching Robots New Behaviors
The Toyota Research Institute (TRI) announced a breakthrough generative AI approach based on Diffusion Policy to quickly and confidently teach robots new, dexterous skills. This advancement significantly improves robot utility and is a step towards building “Large Behavior Models (LBMs)” for robots, analogous to the Large Language Models (LLMs) that have recently revolutionized conversational AI.
TRI has already taught robots more than 60 difficult, dexterous skills using the new approach, including pouring liquids, using tools, and manipulating deformable objects. These achievements were realized without writing a single line of new code; the only change was supplying the robot with new data. Building on this success, TRI has set an ambitious target of teaching hundreds of new skills by the end of the year and 1,000 by the end of 2024.
Automated packaging tech helps streamline snack and bakery operations
Kellogg’s uses packaging automation to create efficient production lines with the capacity for high throughput volume. This entails using automation in areas where manual labor may struggle to keep up with rapid rates of product output. “Our automation systems handle tasks such as placing products into containers and transporting them to the end of the line, as well as filling unit loads and loading them onto trucks via forklifts. The product mostly remains untouched by human hands until it reaches the retailer,” says David Sosnoski, director of packaging engineering for salty snacks, Kellogg Co.
Robotics plays a significant role within Kellogg’s packaging automation initiatives, offering a diverse range of options, from basic and simple robots to advanced collaborative robots (cobots). “Cobots are designed to replace repetitive human tasks that do not require a larger, fully automated solution,” Sosnoski says. “They have proven valuable in filling the middle ground between tasks that are too complex for traditional automation but exceed human capabilities in certain areas.”
How to Train a MIRAI Positioning Skill
🔋🦾 Battery Manufacturer Boosts Output With Automation
ESS Inc. of Wilsonville, OR, has developed iron-flow battery systems, the Energy Warehouse and the Energy Center, designed to meet the energy needs of customers ranging from small industrial facilities to large, utility-scale projects. The Energy Warehouse is built inside a standard 40-foot shipping container for easy transport and commissioning. It provides six to 12 hours of storage, with a nominal power of 75 kilowatts (kW), a peak energy capacity of 500 kilowatt-hours (kWh) and a rated energy capacity of 400 kWh. ESS was selected by Consumers Energy, Michigan’s largest energy provider, to provide an Energy Warehouse for a solar energy system at a gas compression facility. Consumers Energy provides natural gas and electricity to two thirds of Michigan’s 10 million residents.
Initially, ESS assembled products manually. As volumes grew, it implemented a semiautomatic assembly line. Workers manually assembled the stacks of materials that made up the positive and negative electrodes of the battery. Those stacks were then handed off to robots to ensure that the seal between all the layers was made with great precision. Even on the semiautomatic line, control software is used to reduce idle time and improve utilization of key equipment.
Design for Robotic Assembly
In reality, equating the abilities of robots and human assemblers is risky. What’s easy for a human assembler can be difficult or impossible for a robot, and vice versa. To ensure success with robotic assembly, engineers must adapt their parts, products and processes to the unique requirements of the robot.
Reorienting an assembly adds cycle time without adding value. It also increases the cost of the fixtures. And, instead of a SCARA or Cartesian robot, assemblers may need a more expensive six-axis robot.
Robotic grippers are not as nimble as human hands, and some parts are easier for robots to grip than others. A part with two parallel surfaces can be handled by a two-fingered gripper. A circular part can be handled by its outside edges, or, if it has a hole in the middle, its inside edges. Adding a small lip to a part can help a gripper reliably manipulate the part and increase the efficiency of the system. If the robot will handle more than one type of part, the parts should be designed so they can all be manipulated with the same gripper. A servo-driven gripper could also help in that situation, since engineers can program stroke length and gripping force.
Robotic Cell Installation | ABAGY ROBOTIC WELDING
Automation that displays flashes of creativity
More complex icing and decorating systems are replacing traditional waterfall icing operations. For example, Inline Filling Systems recently developed a swirled icing decoration on top of a fully baked cake. The system relies on servo pump fillers feeding a pair of eight-across nozzle manifolds attached to two ABB robots applying a distinctive, programmable decoration pattern over an Auto-Bake Serpentine production line.
The Deco-Bot is a self-contained robotic decorating, glazing, depositing and spraying system with a built-in conveyor and quick hookup for heated and non-heated pumps. “Cobots are not the speed demon robotics you see on a caged line. They serve a different purpose,” he explained. “They help manufacturers with the labor crisis with a small footprint while being easy to clean and set up, and they are very safe.” It’s not uncommon to see 10 to 12 cobots decorating cakes night and day on the same production line.
Mitsubishi Electric & SICK Sensor Intelligence: Conveyor Tracking with Zoned Safety
Robots Automate Assembly of Auto Parts
AMG is Husco’s in-house factory automation arm. It designs and builds most of the manufacturing lines for Husco, and it recently began offering its services to outside clients as well.
While many manufacturers, including Husco, have been devoting more and more of their efforts to EVs, increasing the efficiency of internal combustion engines remains important. One crucial development has been the use of variable-force solenoids in car and truck engines. These small devices optimize the opening of the valves that let fuel and air into the cylinders at the heart of each engine, helping to increase both fuel efficiency and horsepower.

To reach its goal, the plant would have to produce a fully assembled and tested solenoid every 6.1 seconds. To make that possible, the AMG team developed a modular automated assembly system consisting of a pallet-transfer conveyor and 10 Epson SCARA robots for most of the material handling. They settled on one Epson G6, two G3, and seven T-Series systems.
Husco and AMG most often use Epson T-Series robots for pick and place operations, but upgrade to the G-Series when they need higher speed and accuracy.
🦾 Kicking RaaS with Robotics as a Service
One major difference between robotics as a service and more traditional models of automation is financing. Typically, users purchase automation equipment, such as robots or cobots, and pay integrators to set them up if necessary. The idea of leasing or renting equipment can be a big adjustment for manufacturers who are used to buying equipment and amortizing it. And while she says that Rapid isn’t necessarily opposed to eventually selling the system to its customers, she also points out that it might not make sense. “If you’re running the system for three shifts for five years, it’s kind of coming to the end of its guaranteed life at that point. Do you really want to own it, or do you just want it to be our problem if it breaks, or something happens to it or it needs to be replaced?”
Behrens also appreciates the service aspect of Rapid’s business model, particularly the 24/7 monitoring. “When we have a problem, sometimes Rapid is calling us to say, hey, we see one of your robots is down. We’re going to do this to fix it,” Bellingham says. A lot of service can be done remotely, which helps maintain uptime. And when remote service isn’t an option, Rapid can diagnose the problem remotely, set up a replacement system in its office and bring it to the shop to swap out. Rapid takes the faulty system back to its office to fix, minimizing downtime for manufacturers.
Lift: Advanced, automated metal forming, controls, training, optimization
Advanced manufacturing techniques are on display at Lift in Detroit. Lift is operated by the American Lightweight Materials Manufacturing Innovation Institute (ALMMII). Lift is a public-private partnership among the U.S. Department of Defense, industry and academia, and it is part of the national network of manufacturing innovation institutes. Major participants in the Lift facility include Hexagon, Kearney, Siemens, the U.S. Department of Defense and the Department of Commerce.
Others, such as ABB, Fanuc and Festo, have supplied equipment to the advanced manufacturing demonstration and training facility. Siemens experts Tom Hoffman, Drew Whitney, Ed Chenhalls, Isaac Sislo, Matt Sislo and Alec Hopkins provided the Lift tour and information on April 13, during the Manufacturing in America event, in Detroit. The Siemens area at Lift headquarters can hold about 50 people for workshops on topics such as digital threads, digital twins, simulation, automation, controls, design, maintenance and industrial machinery.
How SCARA, Six-Axis, and Cartesian Pick-And-Place Robotics Optimize and Streamline Electronics Manufacturing Processes
Hastening the adoption of robotics in semiconductor manufacture are burgeoning classes of six-axis robots, selective compliance assembly robot arms (SCARAs), cartesian machinery, and collaborative robots featuring reconfigurable or modular hardware as well as unifying software to greatly simplify implementation. These robots and their supplemental equipment must be designed, rated, and installed for cleanroom settings or else risk contaminating delicate wafers with impurities. Requirements are defined by ISO 14644-1:2015, which classifies cleanroom air cleanliness by particle concentration.
Advanced cleanroom-rated robotic end-of-arm tooling (EOAT, or end effectors) such as grippers are core to semiconductor production. Here, EOATs must have high dynamics and the ability to execute tracing, placing, and assembling with exacting precision. In some cases, EOAT force feedback or machine vision boosts parts-handling accuracy by imparting adaptive capabilities — so pick-and-place routines are quickly executed even if there’s some variability in workpiece positions, for example. Such sensor and feedback advancements can sometimes render the complicated electronics-handling fixtures of legacy solutions unnecessary.
🦾 Inside sewts’ textile-handling robots
Traditionally, clothing has been a challenge for robots to handle because of its malleability. Currently available software systems and conventional image processing typically hit their limits when it comes to easily deformable material, limiting the abilities of commercially available robots and gripping systems.
VELUM, sewts’ robotic system, is able to analyze dimensionally unstable materials like textiles and handle them. This means VELUM can feed towels and similar linen made of terry cloth easily and without creases into existing folding machines.
sewts developed AI software to process the data supplied by the cameras. This software uses features like the course of the seam and the relative position of seams to analyze the topology of the textiles. The program classifies these features according to textile type and class, and then translates these findings into robot commands. The company uses Convolutional Neural Networks (CNNs) and classical image processing to process the data, including IDS peak, a software development kit from IDS.
🦾🧑🏭 The Robotization of High-mix, Low-volume Production Gains Momentum
One company, ABAGY, overcomes the limitations of traditional robotics. With this software, manufacturers can use robots for custom projects or even one-of-a-kind parts. No robot programming is required. The software automatically generates a robot program to produce a specific product, which only takes minutes. Using machine vision, the system scans the parts and adjusts the robot’s path depending on the actual position and deviations of the product.
A manufacturer in Sabetha, Kan., already had a robotic cell but wanted to increase its utilization. The robotic cell was used for a limited number of parts because the programming was tedious; it used to take 90 to 120 minutes to program the robot to produce one rotor. After implementing a new system with AI and machine vision, setup time dropped dramatically, to only 10 to 15 minutes for a new product, and the robot can now be used for many more products. That’s a big win for a manufacturer with high-mix production. In the first month of the robotic cell’s operation, the company created 50 different technical charts, and it plans further robotization of production.
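ABAGY’s implementation is not public; a simplified sketch of the vision-adjustment step is that the scanner reports how the real part deviates from its nominal pose, and the nominal weld path is transformed by that deviation before execution.

```python
import numpy as np

def pose_matrix(tx, ty, yaw_deg):
    """2D rigid transform for a part shifted by (tx, ty) mm and rotated by yaw."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0,  1]])

def adjust_path(nominal_path, measured_deviation):
    """nominal_path: (N, 2) waypoints programmed for the ideal part position."""
    pts = np.hstack([nominal_path, np.ones((len(nominal_path), 1))])
    return (measured_deviation @ pts.T).T[:, :2]

nominal = np.array([[0, 0], [100, 0], [100, 50]], dtype=float)   # mm
deviation = pose_matrix(tx=3.2, ty=-1.5, yaw_deg=0.8)            # as reported by the scan
print(adjust_path(nominal, deviation))
```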
🦾 Patent filing strategy for emerging robotics companies
A robust patent portfolio is key to an emerging robotics company’s growth and success, but building such a patent portfolio takes strategy, time, and money. Thorough examination and understanding of a company’s individual business objectives, the landscape within the robotics industry, and country-specific patent rules and regulations are all important factors to ensure that resources are allocated appropriately and that a patent portfolio filing strategy meets the company’s needs.
Utility patent applications cover how a technology works and, in the United States, generally fall into one of two categories – provisional applications and non-provisional applications. Noting the distinction between the two can be crucial for developing a filing strategy. Design applications can also be filed to cover the ornamental design of a product.
OnRobot D:PLOY now compatible with Yaskawa
Comau uses Intrinsic Flowstate to automate the assembly of rigid components
Isolation in Industrial Robot Systems
🦾 Transferring Industrial Robot Assembly Tasks from Simulation to Reality
To use RL for challenging assembly tasks and address the reality gap, we developed IndustReal. IndustReal is a set of algorithms, systems, and tools for robots to solve assembly tasks in simulation and transfer these capabilities to the real world.
We introduce the simulation-aware policy update (SAPU) that provides the simulated robot with knowledge of when simulation predictions are reliable or unreliable. Specifically, in SAPU, we implement a GPU-based module in NVIDIA Warp that checks for interpenetrations as the robot is learning how to assemble parts using RL.
We introduce a signed distance field (SDF) reward to measure how closely simulated parts are aligned during the assembly process. An SDF is a mathematical function that can take points on one object and compute the shortest distances to the surface of another object. It provides a natural and general way to describe alignment between parts, even when they are highly symmetric or asymmetric.
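A minimal sketch of an SDF-based alignment reward in this spirit (IndustReal’s exact formulation may differ): sample points on one part, query the mating part’s signed distance field, and reward small distances.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere; a stand-in for the mating part's SDF."""
    return np.linalg.norm(points - center, axis=1) - radius

def sdf_alignment_reward(part_points, target_sdf, scale=0.01):
    """Larger when the sampled points lie close to the target surface."""
    d = np.abs(target_sdf(part_points))
    return float(np.exp(-d.mean() / scale))

rng = np.random.default_rng(0)
plug_points = rng.normal([0.0, 0.0, 0.05], 0.005, size=(256, 3))   # points on the held part
reward = sdf_alignment_reward(
    plug_points, lambda p: sphere_sdf(p, np.array([0.0, 0.0, 0.05]), 0.01)
)
print(reward)
```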
We also propose a policy-level action integrator (PLAI), a simple algorithm that reduces steady-state (that is, long-term) errors when deploying a learned skill on a real-world robot. We apply the incremental adjustments to the previous instantaneous target pose to produce the new instantaneous target pose. Mathematically (akin to the integral term of a classical PID controller), this strategy generates an instantaneous target pose that is the sum of the initial pose and the actions generated by the robot over time. This technique can minimize errors between the robot’s final pose and its final target pose, even in the presence of physical complexities.
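The PLAI idea as described reduces to a few lines: each new instantaneous target is the previous target plus the policy’s incremental action, so small actions accumulate like the integral term of a PID controller rather than being applied to the noisy measured pose.

```python
import numpy as np

def plai_targets(initial_pose, actions):
    """initial_pose: (3,) position; actions: (T, 3) incremental position actions."""
    targets = [np.asarray(initial_pose, dtype=float)]
    for a in actions:
        targets.append(targets[-1] + a)   # integrate actions on the target,
    return np.stack(targets[1:])          # not on the measured robot pose

actions = np.full((100, 3), [0.0, 0.0, -0.0005])   # e.g. press down 0.5 mm per step
targets = plai_targets([0.40, 0.00, 0.10], actions)
print(targets[-1])   # the final target has drifted steadily toward the goal
```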
🦾 How Robots & Artificial Intelligence are Transforming Unilever's Material Innovation Factory
Each machine at the MIF is designed to crunch colossal amounts of data and maintain consistency across samples and testing. Take Ariana, for example. This robot prepares multiple consistent hair fiber samples in seconds. These perfectly prepped strands are then used for research and testing as part of scientists’ work to create haircare products for Unilever brands. Dove’s Intensive Repair line, now on sale in the UK and the U.S., was developed with Ariana’s assistance, resulting in Unilever’s patented Fibre Repair Actives technology that helps to reconstruct inner hair fibers, reducing breakage and repairing from within.
Artificial intelligence is one advance that’s helping Unilever make progress at pace, allowing scientists to explore vast quantities of data in record time and translate discoveries into new formulas. The vibrant yet fully vegan Hourglass Confession Red Zero lipstick is one such example. Red lipstick is usually formulated using carmine – a pigment requiring over 1,000 crushed beetles per product. But using AI, Unilever’s experts were able to analyze color combinations and possibilities that would have taken millions of physical experiments to replicate. The lipstick launched in 2021.
More than 200 patents were filed between 2020 and 2022 based on data generated at MIF, and Unilever has invested more than €100 million ($123.4 million) in the innovation hub in the past three years.
☀️ Utility-scale solar installation goes automated
Terabase announced it is launching an automated utility-scale solar installation system, dubbed Terafab. The company describes the service as an automated “field factory” that can double installation productivity. The installation system makes use of digital twins, logistics software, an on-site digital command center, a field-deployed automated assembly line, and installation rovers that can operate 24/7.
“We successfully field-tested Terafab last year, building 10 MW of a 400 MW site in Texas. Today’s launch is the next step forward to rapid commercial scale-up,” said Matt Campbell, chief executive officer and co-founder of Terabase. Terabase has partnered with developer Intersect Power, engineering, procurement and construction firm Signal Energy, tracking hardware provider NEXtracker, and solar panel manufacturer First Solar to develop the Terafab facility. Terafab is pegged for commercial deployment starting in Q3 2023.
🦾 Renault Retrofits Robots at Refactory
The robots retired from Renault’s plants in Sandouville, Maubeuge and Douai, France, are sent to the retrofit unit, which is run by Nathalie Francesetti, head of the tooling department at the plant. In the past, each plant retrofitted its own machines. Now, the Refactory revamps them all so the automaker can reap the benefit of a specialist team pooling its expertise in a dedicated workshop. By 2023, the team will double in size, to eight technicians and a scheduler.
By retrofitting robots, Renault has reduced investments in new projects and repair costs. This operation has also shortened supply chains, which are getting longer and longer for new robots. Ultimately, Renault’s goal is to retrofit more than 170 robots per year to support the company’s shift to producing electric vehicles. The operation will save the automaker some 3 million euros per year.
🚙🏭 Tesla Rethinks the Assembly Line
Engineers at Tesla Inc. have developed a new process that they claim will reduce EV production costs by 50 percent, while reducing factory space by 40 percent. The “unboxed” system was outlined during the automaker’s recent Investor Day event at its new factory in Austin, TX. Tesla believes that its more efficient production method will lead to a paradigm shift in the way that vehicles are mass-produced. It focuses on eliminating linear assembly lines and producing more subassemblies out of large castings.
“The traditional way of making a vehicle is to stamp it, build a body-in-white, paint it and do final assembly,” says Lars Moravy, vice president of vehicle engineering at Tesla. “These individual shops are dictated by the boundaries that exist in auto factories. If something goes wrong in final assembly, you block the whole line and you end up with buffering in between.”
“We simplified Model Y assembly with a structural battery, where the battery is [also] the floor,” says Moravy. “We put the front seats and the interior module on top of the battery pack, and we bring it up through a big open hole [in the bottom of the body]. This allows us to do things in parallel and reduce the final assembly line by about 10 percent.”
“Unboxed assembly is also known as ‘delayed 3D,’” adds Mwangi. “In other words, you stay in 2D as much as possible and go to 3D as late as possible in the vehicle production process. That means you have open access to the majority of your work areas, which gives you an opportunity to simplify operations. It also lends itself to simpler automation, because robots don’t need to work around a shell.”
Cycle Time with Robots. Faster or Slower?
Our universal AI, the Covariant Brain, powering ABB and FANUC robots simultaneously
BMW Paint Shop with Artificial Intelligence: Automated Rework
Underfluid Hydromount Cell Courtesy of Arnold Machine
How Delta Robotics Optimize and Streamline Electronics Manufacturing Processes
Delta robots are relatively small robots employed in handling food items for packaging, pharmaceuticals for casing, and electronics for assembly. The robots’ precision and high speed make them ideally suited to these applications. Their parallel kinematics enables this fast and accurate motion while giving them a spiderlike appearance that’s quite different from that of articulated-arm robots. Delta robots are usually (though not always) ceiling mounted to tend moving assembly and packaging lines from above. They have a much smaller working volume than an articulated arm, and very limited ability to access confined spaces. That said, their stiffness and repeatability are assets in high-precision processing of delicate workpieces — including semiconductors being assembled.
Delta robots provide affordable and flexible automation for electronics manufacturing. They often provide higher speed and more flexibility than other robotics and automated pick-and-place machines.
🔏🚗 In-Depth Analysis of Cyber Threats to Automotive Factories
We found that Ransomware-as-a-Service (RaaS) operations, such as Conti and LockBit, are active in the automotive industry. These are characterized by stealing confidential data from within the target organization before encrypting its systems, forcing automakers to face threats of halted factory operations and public exposure of intellectual property (IP). For example, Continental (a major automotive parts manufacturer) was attacked in August, with some IT systems accessed. The company immediately took response measures, restoring normal operations and cooperating with external cybersecurity experts to investigate the incident. However, in November, LockBit took to its data leak website and claimed to have 40TB of Continental’s data, offering to return it for a ransom of $40 million.
Previous studies on automotive factories mainly focus on the general issues in the OT/ICS environment, such as difficulty in executing security updates, knowledge gaps among OT personnel regarding security, and weak vulnerability management. In light of this, TXOne Networks has conducted a detailed analysis of common automotive factory digital transformation applications to explain how attackers can gain initial access and link different threats together into a multi-pronged attack to cause significant damage to automotive factories.
In the study of industrial robots, controllers sometimes enable universal remote-connection services (such as FTP or web interfaces) or manufacturer-defined APIs to give operators convenient robot control through the control station. However, we found that most robot controllers do not enable any authentication mechanism by default, and some cannot use one at all. This allows attackers lurking in the factory to execute arbitrary operations on robots directly through tools released by the robot manufacturers. In the case of digital twin applications, attackers lurking in the factory can also exploit vulnerabilities in simulation devices to execute malicious code attacks on their models. When a digital twin’s model is attacked, the generated simulation environment can no longer stay congruent with the physical environment. Worse, a tampered model may not exhibit any obviously malicious behavior, so the damage can go unchecked and unfixed for a long time. Engineers may keep using the damaged digital twin without realizing it, leading to inaccurate research and development or incorrect factory decisions based on false information, which can result in greater financial losses than ransomware attacks.
How ChatGPT Programmed an Industrial Robot
Our initial challenge for ChatGPT involved programming the Yaskawa robot to perform a wire cut. This is a very simple task. However, ChatGPT isn’t intrinsically familiar with the INFORM programming language, which is integral to Yaskawa robots. As such, our first step was to delineate the fundamental commands of this language.
Furthermore, ChatGPT had no understanding of the physical robot, its movements, or the typical process of wire-cutting. To address this, we established several coordinates using the robot’s teach pendant and outlined the basic principles of operation.
With these prerequisites met, we put forward our request for ChatGPT to create the required program. The AI successfully rose to the challenge, generating a program that we then transferred to the robot for a test run. The outcome was encouraging, with the robot effectively performing the wire-cutting task as directed.
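The article does not publish its exact prompts; the sketch below reconstructs the workflow under assumptions of my own, using the OpenAI client for illustration (any chat-capable LLM endpoint would work), a simplified INFORM primer, and placeholder taught positions.

```python
from openai import OpenAI

INFORM_PRIMER = """You write programs in Yaskawa's INFORM language.
Key commands (simplified): MOVJ = joint move to a taught position,
MOVL = linear move, DOUT = set a digital output (e.g. close the cutter).
Programs start with NOP and finish with END."""

TAUGHT_POINTS = """P001 = approach point above the wire
P002 = cutting position
P003 = safe retract position"""

client = OpenAI()                       # assumes OPENAI_API_KEY is set
response = client.chat.completions.create(
    model="gpt-4o",                     # any capable chat model
    messages=[
        {"role": "system", "content": INFORM_PRIMER},
        {"role": "user", "content": f"Taught positions:\n{TAUGHT_POINTS}\n"
                                    "Write an INFORM program that cuts the wire."},
    ],
)
print(response.choices[0].message.content)   # review before loading onto the controller
```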
Turning a broken 2 ton robot into a CNC-machine | ABB IRB6400
🦾 Protolabs presents 2023 Robotics Manufacturing Status Report
Industrial robots are also perceived as one of the most powerful ways to automate and build flexible production lines that enable customised production models like made-to-order and engineering-to-order. The latter model leads to mass customization, i.e., the ability to produce highly customised products with only a marginal increase in production cost. Industrial robots can automatically reconfigure production lines to produce alternative product variants with limited, or even zero, human intervention. Nevertheless, this flexible manufacturing approach is gradually reaching its limits, as radically differentiated products require changes not only in the configuration of the production line but also in the machinery used, especially when there is a need to manufacture a new product. Designing the production system for a new product requires effort that is orders of magnitude greater than producing a variant of an existing product.
A key enabler of the advancement of robotics innovation is the leveraging of digital models to enable the manufacturing as a service (MaaS) paradigm. MaaS or digital manufacturing platforms offer access to various manufacturing processes, such as 3D printing, CNC machining, and injection moulding, and provide an easy transactional experience by allowing customers to upload their part designs to quickly get quotes for manufacturing costs and lead times.
Calculating the ROI of a Robotic Cell: A Comprehensive Guide
The integration of robotic cells in manufacturing processes has proven to be a game-changer for businesses. It has helped businesses achieve high efficiency, accuracy, and productivity while reducing labor costs and non-quality costs. That being said, the initial cost of automation is usually a cause for concern, and businesses need to assess the return on investment (ROI) before implementing a robotic cell.
More recently, mounting pressure to reduce manufacturing costs has forced industrial companies to aim for a faster ROI. In some industries, the target for ROI has been reduced from three years to less than one year.
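As a back-of-the-envelope illustration of the payback arithmetic behind such ROI targets, here is a small sketch; every figure in it is an assumed example, not data from the guide.

```python
# Back-of-the-envelope payback calculation for a robotic cell. All figures are
# illustrative assumptions, not numbers from the article.
def payback_years(investment: float, annual_labor_savings: float,
                  annual_quality_savings: float, annual_operating_cost: float) -> float:
    """Years until cumulative net savings cover the initial investment."""
    net_annual_savings = annual_labor_savings + annual_quality_savings - annual_operating_cost
    if net_annual_savings <= 0:
        raise ValueError("Cell never pays back with these assumptions")
    return investment / net_annual_savings

# Example: $250k cell, $180k/yr labor savings, $30k/yr scrap reduction, $40k/yr upkeep
print(f"Payback: {payback_years(250_000, 180_000, 30_000, 40_000):.2f} years")  # ~1.47
```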
Can Large Language Models Enhance Efficiency In Industrial Robotics?
One of the factors that slow down the penetration of industrial robots into manufacturing is the complexity of human-to-machine interfaces. This is where large language models, such as ChatGPT developed by OpenAI, come in. Large language models are a cutting-edge artificial intelligence technology that can understand and respond to human language, at times almost indistinguishably from human conversation. Their versatility has been proven in applications ranging from chatbots to language translation and even creative writing.
It turns out that large language models are quite effective at generating teach pendant programs for a variety of industrial robots, such as KUKA, FANUC, Yaskawa, ABB and others.
READY Robotics and NVIDIA Isaac Sim Accelerate Manufacturing With No-Code Tools
🦾 How to Make a Cost-effective Flexible Robotic Solution for Low-volume Production
Despite its process flexibility, the unified cell is still a production system that requires a level of investment that may not be best suited to short production runs of products that don’t share common applications (e.g., joining equipment) and therefore require the cell’s applications to be reconfigured for every production run. To address this problem, we need a production system that can be reconfigured with multiple process capabilities while reusing as much equipment as possible, allowing good utilisation of investment-heavy resources while still meeting machine safety directives and technical requirements.
By lessening the complexity of the hardware architecture, we can significantly increase the capabilities and ways of using the equipment, which makes it financially efficient even for low-volume tasks. Moreover, further development of the solution can live mostly in software, which is easier, faster and cheaper than hardware R&D. Having a “chipset” architecture also allows us to start using AI algorithms, which is a huge opportunity.
🦾 From Simulation to Reality: A Tale of Robotic Heartache
It’s hard to imagine building any physical object without first fully designing, assembling, and even testing it on your computer. CAD/CAM tools and multi-physics simulations are readily available, allowing you to iterate quickly in the digital world rather than building N-prototypes of your widget and spending a lot of money trying to get it right. Validating a process in simulation is critical because there is very little margin for error on a real robot. Robots can be big, powerful beasts that will obediently destroy thousands of dollars of material just because you were off by 1 millimeter in your calculations. Not to mention the potential danger to personnel who happen to be near the robot during testing. So, the more we can simulate, the safer and more productive we will all be.
🦾 How to Address Tradeoffs in Robot Performance
Innovations in robotic automation have allowed manufacturers in countless industries to achieve higher throughput, improved quality, and safer working environments. But choosing a robot for an automation task often involves balancing tradeoffs between three key performance criteria: speed, payload, and precision. In other words, to achieve high precision, a user may have to sacrifice somewhat on the application’s speed and payload. Alternatively, if the robot’s payload is increased, the operating speed may need to be reduced. The underlying cause of these performance tradeoffs is vibration of the robot arm.
🔏🦾 Anatomy of Robots: Cybersecurity in the Modern Factory
In highly networked modern factories, where robots operate in increasingly complex modes, attackers can use more diverse methods to carry out cyberattacks on robots, particularly when manufacturers do not take product cybersecurity seriously. This complacency makes it easy for attackers who break into a factory to compromise these devices. When robots are successfully attacked, in addition to directly halting the factory’s manufacture of products, the tampering also threatens people’s safety, given the close cooperation between cobots and humans. With this in mind, using past and current robotics cybersecurity literature and research as reference, we will analyze the following potential attack scenarios for robots.
ChatGPT for Robotics: Design Principles and Model Abilities
ChatGPT unlocks a new robotics paradigm, allowing a (potentially non-technical) user to sit on the loop, providing high-level feedback to the large language model (LLM) while monitoring the robot’s performance. By following our set of design principles, ChatGPT can generate code for robotics scenarios. Without any fine-tuning, we leverage the LLM’s knowledge to control different robot form factors for a variety of tasks. In our work we show multiple examples of ChatGPT solving robotics puzzles, along with complex robot deployments in the manipulation, aerial, and navigation domains.
Fast Recognition of Snap-Fit for Industrial Robot Using a Recurrent Neural Network
Snap-fit recognition is an essential capability for industrial robots in manufacturing. The goal is to protect fragile parts by quickly detecting snap-fit signals during assembly. In this letter, we propose a fast snap-fit recognition method for industrial robots. Because data collection is complicated, we present a snap-fit dataset generation strategy that acquires labels automatically. A multilayer recurrent neural network (RNN) is designed for snap-fit recognition. An extensive evaluation based on two different datasets shows that the proposed method makes reliable and fast recognitions. Real-time experiments on an industrial robot also demonstrate the effectiveness of the proposed method.
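As a rough illustration of the kind of model the letter describes, here is a minimal sketch of a multilayer RNN classifier over force/torque windows; the channel count, layer sizes, and window length are assumptions, not the authors' architecture.

```python
# Minimal sketch of a multilayer RNN classifier for snap-fit detection over
# force/torque time series. The 6-channel input and layer sizes are assumptions.
import torch
import torch.nn as nn

class SnapFitRNN(nn.Module):
    def __init__(self, n_channels: int = 6, hidden: int = 64, layers: int = 2, n_classes: int = 2):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) of force/torque samples
        out, _ = self.rnn(x)
        return self.head(out[:, -1])  # classify from the last time step

model = SnapFitRNN()
window = torch.randn(8, 200, 6)   # 8 windows of 200 samples, 6 F/T channels
logits = model(window)            # (8, 2): snap-fit vs. no snap-fit
print(logits.argmax(dim=1))
```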
Unlocking the industrial potential of robotics and automation
Some aspects of productive activity are more amenable to automation than others, with routine tasks at the head of the line. Activities such as picking, packing, sorting, movement from point to point, and quality assurance are already automated to some extent, and these will continue to see heavy investment over the coming years. Conversely, activities such as assembly, stamping, surface treatment, and welding, all of which require high levels of human input, are less likely to be automated in the short to medium term.
A standout message from the survey is that automation is not easy. Participants report that the primary challenges to adoption include the capital cost of robots and a company’s general lack of experience with automation, cited by 71 percent and 61 percent of respondents, respectively. Some say that business confidence in technology is low, leading to challenges around conviction and funding. Moreover, respondents’ expectations of production and reliability gains through automation are offset by the belief that such gains will eliminate jobs and may affect existing contracts. In fact, that is not the case since automation typically leads to changes in workplace roles rather than the creation of redundancies.
Fast, Easy Six-Axis Robot Integration Created by a Molder for Molders
For Scott and his staff, few tools are more critical to profitability and efficiency than automation, which is why Noble Plastics has Fanuc six-axis robots on all its injection machines. The integration was performed in-house with the philosophy that, as Rogers puts it, “The robot should be a partner for the operator, not a hindrance.” After 20-plus years of robot integration experience and eight years as an authorized integrator for Fanuc robots, Noble Plastics is now launching a turnkey package of a robot, basic and intuitive user interface, end-of-arm tooling (EOAT)—if desired, integration with the injection machine controls, job-specific programming and operator training. “We can do all this faster and at lower cost than your average integrator,” Rogers says, “and the end result is easier for the operator to use.”
Systems can be delivered in as little as 2 to 4 weeks and commissioned in 1 to 2 days, vs. up to 4 to 6 months. All this adds up to what Rogers thinks is a unique set of capabilities to serve injection molding customers in need of highly flexible automation. Is six-axis an expensive solution? Not if you make good use of its capabilities, says Rogers. “Depending on how many shifts you run, it could be $2 to $5/hr. And there are some things you can do with a six-axis that you can’t do with human operators or any other kind of robot.”
World's Largest Pasta Production Plant a Showcase for Integrated Robotics and Sustainable Distribution
Barilla’s flagship pasta manufacturing plant in Parma, Italy boasts a 430,000 square foot distribution facility – fully automated, lights-out, 24/7/365 operation – equipped with 120 laser-guided vehicles and 37 robotic systems including high-density AS/RS, palletizers, labelers and stretch wrappers, handling 320,000 tons of pasta annually. Designed, manufactured and installed by E80 Group, this distribution facility is not only an example of excellence in integrated robotics systems, but also a showpiece for energy and environmental efficiency.
To realize such an integrated-system and energy-efficient strategy at Barilla’s Parma distribution facility, E80 Group (E80) was selected to design, manufacture and install a solution. E80 is an Italian-based multinational leader specializing in creating automated solutions for companies that produce fast-moving consumer goods, particularly in the food, beverage and tissue sectors. It has been a leading manufacturer of integrated-robotics systems for distribution facilities for almost three decades, specifically laser-guided vehicles (LGVs), robotic palletizers and other end-of-line robotic systems. The company’s latest technology advances have made LGVs particularly attractive for sustainability and reduced energy usage.
RT-1: Robotics Transformer for Real-World Control at Scale
Major recent advances in multiple subfields of machine learning (ML) research, such as computer vision and natural language processing, have been enabled by a shared common approach that leverages large, diverse datasets and expressive models that can absorb all of the data effectively. Although there have been various attempts to apply this approach to robotics, robots have not yet leveraged highly-capable models as well as other subfields.
Several factors contribute to this challenge. First, there’s the lack of large-scale and diverse robotic data, which limits a model’s ability to absorb a broad set of robotic experiences. Data collection is particularly expensive and challenging for robotics because dataset curation requires engineering-heavy autonomous operation, or demonstrations collected using human teleoperations. A second factor is the lack of expressive, scalable, and fast-enough-for-real-time-inference models that can learn from such datasets and generalize effectively.
To address these challenges, we propose the Robotics Transformer 1 (RT-1), a multi-task model that tokenizes robot inputs and output actions (e.g., camera images, task instructions, and motor commands) to enable efficient inference at runtime, which makes real-time control feasible. This model is trained on a large-scale, real-world robotics dataset of 130k episodes that cover 700+ tasks, collected using a fleet of 13 robots from Everyday Robots (EDR) over 17 months. We demonstrate that RT-1 can exhibit significantly improved zero-shot generalization to new tasks, environments and objects compared to prior techniques. Moreover, we carefully evaluate and ablate many of the design choices in the model and training set, analyzing the effects of tokenization, action representation, and dataset composition. Finally, we’re open-sourcing the RT-1 code, and hope it will provide a valuable resource for future research on scaling up robot learning.
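In the spirit of RT-1’s output tokenization, the sketch below discretizes each continuous action dimension into a fixed number of bins; the bin count and action bounds are illustrative assumptions rather than the paper’s exact scheme.

```python
# Sketch of per-dimension action discretization: each continuous action
# dimension is mapped to one of a fixed number of integer bins.
import numpy as np

N_BINS = 256

def tokenize_action(action: np.ndarray, low: np.ndarray, high: np.ndarray) -> np.ndarray:
    """Map each continuous action dimension to an integer token in [0, N_BINS)."""
    normalized = (action - low) / (high - low)            # -> [0, 1]
    return np.clip((normalized * N_BINS).astype(int), 0, N_BINS - 1)

def detokenize_action(tokens: np.ndarray, low: np.ndarray, high: np.ndarray) -> np.ndarray:
    """Recover a continuous action from bin centers."""
    return low + (tokens + 0.5) / N_BINS * (high - low)

low, high = np.array([-0.1, -0.1, -0.1]), np.array([0.1, 0.1, 0.1])  # e.g. end-effector deltas (m)
a = np.array([0.02, -0.05, 0.0])
tokens = tokenize_action(a, low, high)
print(tokens, detokenize_action(tokens, low, high))
```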
Covariant Robotic Depalletization | Mixed-SKU Pallets
Bridgestone Unveils Soft-Robot Hand for Package Handling
How Does Advance Concrete Use a Robot for Welding
ROBOT PROGRAMMING: the big problem and how to solve it
In this note I share the arguments presented in those conversations, as well as in similar ones with other important clients, about why programming is frequently one of the big problems in the integration and use of robots, along with maintenance, though the latter may be a topic for another note.
As we have previously discussed, basic programming training can take 80 hours, and the cost per seat is around $5,300 USD; if you also need training courses for equipment and application software, for example vision cameras or welding, you could invest several thousand dollars more. To reduce the cost and complexity of robot programming somewhat, some IDE and simulation software has been created for universal use, meaning it can be used with all brands of robots. These tools are generally based on the CAM interfaces most common in CNC machines and work relatively well for modifications, updates, and the development of simple programs that do not require decision subroutines or more complex mathematical functions. Gaining the skills and experience that give confidence in the code is very important, especially when the code is the only thing preventing a $90k robot from hitting a $450k machine tool.
Keep It Simple and Shined with Automated Surface Finishing
Walgreens Turns to Prescription-Filling Robots to Free Up Pharmacists
Walgreens Boots Alliance Inc. is turning to robots to ease workloads at drugstores as it grapples with a nationwide shortage of pharmacists and pharmacist technicians.
The nation’s second-largest pharmacy chain is setting up a network of automated, centralized drug-filling centers that could fill a city block. Rows of yellow robotic arms bend and rotate as they sort and bottle multicolored pills, sending them down conveyor belts. The company says the setup cuts pharmacist workloads by at least 25% and will save Walgreens more than $1 billion a year.
The ultimate goal: give pharmacists more time to provide medical services such as vaccinations, patient outreach and prescribing of some medications. Those services are a relatively new and growing revenue stream for drugstores, which are increasingly able to bill insurers for some clinical services.
Robots Are Taking Over Chinese Factories
NVIDIA Robotics Software Jumps to the Cloud, Enabling Collaborative, Accelerated Development of Robots
Robotics development often spans global teams testing navigation across environments, underscoring the importance of easy access to simulation software for quick input and iteration. Using Isaac Sim in the cloud, roboticists will be able to generate large datasets from physically accurate sensor simulations to train the AI-based perception models on their robots. The synthetic data generated in these simulations improves model performance and provides training data that often can’t be collected in the real world.
Developers will have three options to access it. It will soon be available on the new NVIDIA Omniverse Cloud platform, a suite of services that enables developers to design and use metaverse applications from anywhere. It’s available now on AWS RoboMaker, a cloud-based simulation service for robotics development and testing. And, developers can download it from NVIDIA NGC and deploy it to any public cloud.
Predictive-Maintenance Tech Is Taking Off as Manufacturers Seek More Efficiency
Anna Farberov, general manager of PepsiCo Labs, the technology venture arm of PepsiCo Inc., said that over the past year so-called predictive-maintenance systems at four Frito-Lay plants reduced unexpected breakdowns, interruptions and incremental costs for replacement parts, among other benefits.
Developed by New York-based startup Augury Inc., the technology has helped Frito-Lay add some 4,000 hours a year of manufacturing capacity—the equivalent of several million pounds of snacks coming off the production line and shipped to store shelves, Ms. Farberov said.
Amazon’s Janus framework lifts continual learning to the next level
“The problem with machine learning is that models must adapt to continually changing data conditions,” says Cassie Meeker, an Amazon Robotics applied scientist who is an expert user of Amazon’s continuous learning system. “Amazon is a global company — the types of packages we ship and the distribution of these packages changes frequently. Our models need to adapt to these changes while maintaining high performance. To do this, we require continual learning.” To get there, Meeker’s team created a new learning system—a framework powerful and smart enough to adapt to distribution shifts in real time.
Adaptive Computing in Robotics: Making the Intelligent Factory Possible
Demand for robotics is accelerating rapidly. According to the research firm Statista, the global market for industrial robots, as an example, will more than double from US$81 billion in 2021 to over US$165 billion in 2028 (1). Today, you can find the technologies you need to build a robot that is safe and secure and can operate alongside humans. But getting these technologies working together can be a huge undertaking. Complicating matters is the addition of artificial intelligence, which is making it more difficult to keep up with computational demands. To meet today’s rapid pace of innovation, roboticists are turning toward adaptive computing platforms. These offer lower latency and deterministic, multi-axis control with built-in safety and security on a modular platform that is scalable for the future.
Robot integration ease of use a priority
Leading robot manufacturers – ABB, Comau, Epson, Fanuc, Jaka, Kawasaki, Kuka, Nachi, Panasonic, Stäubli, TM Robot, Yamaha, Yaskawa – joined forces at the initiative of Siemens to develop a solution. Around 70 percent of the world’s robot manufacturers were on board. Now, the joint work has paid off. A uniform data interface between the PLC and the robot controllers has been defined to make robot programming uniform – and thus more efficient – for PLC programmers and PLC suppliers. Via this data interface, robot programs can be written completely in the PLC by calling the robot functions and reporting the required robot state information back to the PLC.
Towards Helpful Robots: Grounding Language in Robotic Affordances
In “Do As I Can, Not As I Say: Grounding Language in Robotic Affordances”, we present a novel approach, developed in partnership with Everyday Robots, that leverages advanced language model knowledge to enable a physical agent, such as a robot, to follow high-level textual instructions for physically-grounded tasks, while grounding the language model in tasks that are feasible within a specific real-world context. We evaluate our method, which we call PaLM-SayCan, by placing robots in a real kitchen setting and giving them tasks expressed in natural language. We observe highly interpretable results for temporally-extended complex and abstract tasks, like “I just worked out, please bring me a snack and a drink to recover.” Specifically, we demonstrate that grounding the language model in the real world nearly halves errors over non-grounded baselines. We are also excited to release a robot simulation setup where the research community can test this approach.
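The core selection rule, combining a language-model relevance score with a learned affordance estimate, can be sketched as below; both scoring functions are deliberately simplistic stand-ins for the paper’s LLM and value function.

```python
# Toy sketch of SayCan-style skill selection: rank candidate skills by the
# product of a language-model relevance score and an affordance (value) score.
# Both scoring functions below are stand-ins, not the paper's models.
def llm_score(instruction: str, skill: str) -> float:
    """Stand-in for the LLM's estimate that `skill` is a useful next step."""
    return 0.8 if "snack" in instruction and "pick up chips" in skill else 0.1

def affordance_score(skill: str, scene: set) -> float:
    """Stand-in for a value function: can the robot actually do this here?"""
    return 1.0 if skill.split()[-1] in scene else 0.05

def choose_skill(instruction: str, skills: list, scene: set) -> str:
    return max(skills, key=lambda s: llm_score(instruction, s) * affordance_score(s, scene))

skills = ["pick up chips", "pick up apple", "go to drawer"]
scene = {"chips", "drawer"}
print(choose_skill("I just worked out, please bring me a snack", skills, scene))  # pick up chips
```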
Sim2Real AI Helps Robots Think Outside The Box
At Ambi Robotics, our robotic systems learn how to handle diverse items using data generated by advanced simulation. We fine-tune our simulations to the performance of our sensors, our robots, and variations on the items our robots will handle. Our simulations run extremely fast, hundreds of times faster than robots training in the physical world, so we can train our robots overnight. This is what enables our solutions to work reliably from day one.
ROS: How Well Does it Address Manufacturers’ Needs?
Using ROS, developers can build the three main components of a robot: the actuators, sensors, and control systems. These components are then unified with ROS tools, namely topics and messages. The messages are used to plan the robot’s movement and, using a digital twin, developers can ensure that their code works without having to actually test it on a real robot.
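As a minimal illustration of the topic/message pattern, here is a rospy sketch of a node publishing a command message; the topic name and message content are placeholders rather than anything prescribed by ROS or the article.

```python
# Minimal rospy sketch of the topic/message pattern described above: one node
# publishes a command that any subscriber in the system can react to.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher("gripper_command", String, queue_size=10)
    rospy.init_node("cell_controller")
    rate = rospy.Rate(10)  # 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="close"))
        rate.sleep()

if __name__ == "__main__":
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```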
Omnirobotic’s AutonomyOS™ is a middleware meant to simplify and widen how robots are used. While the two aim to achieve similar results, AutonomyOS™ flips the script by removing the need to code – something that still drives ROS. AutonomyOS™ is primarily aimed at High-Mix manufacturers for a variety of applications like paint spraying, welding, and sanding. What is “High-Mix” manufacturing? It is generally defined as any manufacturer or production line that processes more than 100 different SKUs in batches of fewer than 1,000 each year – basically, a lot more variation than mass manufacturing.
Multi-granularity service composition in industrial cloud robotics
Industrial cloud robotics employs cloud computing technology to provide various operational services, such as robotic control modules that enable customized screwing and welding. Service composition technology enables the flexible implementation of complex industrial robotic applications based on the collaboration of multiple industrial cloud robotic services. Most studies considered cloud robotic services with a single robotic manipulator with a fixed function. To utilize the advantages of coarse-grained services encapsulated by multi-functional robots, manipulators, and control applications, a multi-granularity service composition method is introduced considering the multi-functional resources and capabilities of the cloud robotic services. Then a quality-of-service-aware multi-granularity robotic service composition model is built to evaluate the composition solution. Furthermore, a multi-granularity robotic service matching strategy is proposed according to the matching constraints of coarse-grained services. Six representative multi-objective evolutionary algorithms are adopted to optimize five quality-of-service attributes of the composite service simultaneously. Experiments demonstrate that the proposed multi-granularity robotic service composition method can remarkably improve the quality of robotic composite services for complex manufacturing tasks by utilizing coarse-grained services in addition to fine-grained services. The performances of six multi-objective evolutionary algorithms are compared to determine the most suitable algorithm for the multi-granularity robotic service composition problem.
Veo Robotics’ 2022 Manufacturing Automation Outlook
To optimize their investments, manufacturers must create an environment conducive to this new hybrid workplace. To do this, manufacturers must create environments where robots and humans can work nearby as collaborators to optimize space and increase productivity. Ironically, however, as manufacturers are increasingly seeking ways to automate their processes, antiquated safeguarding measures are, in many cases, hindering their ability to embrace it fully.
In fact, manufacturers surveyed by Veo cited many areas that they would like to automate but cannot because of safeguarding complexities or obstructions. Areas mentioned include operator load stations (24%), welding & soldering (18%), primary packaging (17%) and palletizing/depalletizing (13%).
How Realtime’s Technology Simplifies Motion Planning
Can Realtime Robotics’ RapidPlan Software Break Through Industrial Automation’s Slow Progress?
Despite continuous advances in the field of robotics, complex motion through space still presents a stumbling block for bots. In oftentimes hectic industrial settings, complex motion that entails a robot getting from point A to point B to perform a task is a feat that generally involves weeks to months of programming time, resulting in movement that’s relatively slow and collision-prone. It’s a challenge that George Konidaris, cofounder and chief roboticist of Realtime Robotics, says has been a persistent hurdle for robotics researchers since 1979—and he thinks RapidPlan represents a significant leap of progress.
Part of the software’s promise is enabling robots to more quickly determine the best path of movement in dynamic environments like factories, which Konidaris says so far hasn’t been accomplished. Software users start with RapidPlan Create, which guides them through the robotic programming phase. Then RapidPlan Control runs the robots’ operations.
Data from Realtime indicates that the software can reduce programming time by 70-80 percent, increase throughput rate by 10-30 percent and decrease a bot’s life cycle cost by up to 50 percent.
Meet the Robotiq Screwdriving Solution
OSARO™ Robotic Induction System
Engine block assembly line for Scania's trucks of tomorrow
Rent-a-Robot and Our Tight Labor Market
Today, firms like Formic Technologies, Stout Industrial Technologies, and Rios Intelligent Machines, Inc. are helping move robotics into new sectors of the economy and into small- and medium-sized firms through a robots-as-a-service model. The robotic systems they offer are not only more nimble, smarter, and more efficient than their predecessors of a quarter-century ago but are also cost-effective in helping mid-sized and small firms overcome constraints posed by capital and technological know-how. Companies paying $15 an hour or more for labor—if they are able to find workers at all—can rent a robotic solution for around $8 per hour per robot while avoiding capital outlays of as much as $125,000 per unit. Avoiding big-ticket investments combined with a 40 to 50 percent reduction in labor costs is the kind of thing that gets business-owner attention.
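The arithmetic behind that claim is easy to check; the sketch below uses the hourly figures quoted above and an assumed two-shift schedule.

```python
# Quick comparison of renting a robot vs. hourly labor, using the figures
# quoted above ($15/hr labor, roughly $8/hr robot rental); shift assumptions
# are illustrative.
HOURS_PER_YEAR = 2 * 8 * 250   # two 8-hour shifts, 250 working days (assumption)

labor_cost = 15 * HOURS_PER_YEAR
rental_cost = 8 * HOURS_PER_YEAR

print(f"Annual labor cost:  ${labor_cost:,}")    # $60,000
print(f"Annual rental cost: ${rental_cost:,}")   # $32,000
print(f"Savings: ${labor_cost - rental_cost:,} "
      f"({(labor_cost - rental_cost) / labor_cost:.0%})")  # ~47%, with no capital outlay
```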
Expanding the robotics toolbox: Physics changes in Unity 2022.1
Simulate sophisticated, environment-aware robots with the new inverse dynamics force sensor tools. Explore dynamics with the completely revamped Physics Debugger. Take advantage of the performance improvements in interpolation, batch queries, and more.
Robots and CNC Machines - An Assembly Configuration Made in Heaven
The real challenge with manufacturing comes from making all of your equipment work together. Electrical and automation controls have been around for decades, and the technology has advanced at nearly overwhelming rates. It’s almost a guarantee that any new machine will need some sort of custom interface board or network protocol to communicate and work in harmony with the rest of the process or overall system.
Two types of machines have a tendency to be a bit more difficult for novice users to integrate into a larger system scope - robots and CNC machines.
At Amazon Robotics, simulation gains traction
“To develop complex robotic manipulation systems, we need both visual realism and accurate physics,” says Marchese. “There aren’t many simulators that can do both. Moreover, where we can, we need to preserve and exploit structure in the governing equations — this helps us analyze and control the robotic systems we build.”
Drake, an open-source toolbox for modeling and optimizing robots and their control systems, brings together several desirable elements for online simulation. The first is a robust multibody dynamics engine optimized for simulating robotic devices. The second is a systems framework that lets Amazon scientists write custom models and compose them into complex systems that represent actual robots. The third is what he calls a “buffet of well-tested solvers” that resolve the numerical optimizations at the core of Amazon’s models, sometimes as often as every time step of the simulation. The last is its robust contact solver, which calculates the forces that occur when rigid-body items interact with one another in a simulation.
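For readers unfamiliar with Drake, a minimal pydrake sketch of that workflow looks roughly like this; the URDF path is a placeholder and API names shift slightly between Drake releases.

```python
# Minimal Drake (pydrake) sketch: build a MultibodyPlant from a robot model and
# step the simulation. The URDF path is a placeholder.
from pydrake.systems.framework import DiagramBuilder
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.multibody.parsing import Parser
from pydrake.systems.analysis import Simulator

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-3)
Parser(plant).AddModels("models/picker_arm.urdf")  # hypothetical robot description
plant.Finalize()

diagram = builder.Build()
simulator = Simulator(diagram)
simulator.AdvanceTo(1.0)  # simulate one second; contact forces are resolved each step
```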
Cloud-edge-device collaboration mechanisms of deep learning models for smart robots in mass personalization
Personalized products have gradually become the main business model and core competencies of many enterprises. Large differences in components and short delivery cycles of such products, however, require industrial robots in cloud manufacturing (CMfg) to be smarter, more responsive and more flexible. This means that the deep learning models (DLMs) for smart robots should have the performance of real-time response, optimization, adaptability, dynamism, and multimodal data fusion. To satisfy these typical demands, a cloud-edge-device collaboration framework of CMfg is first proposed to support smart collaborative decision-making for smart robots. Meanwhile, in this context, different deployment and update mechanisms of DLMs for smart robots are analyzed in detail, aiming to support rapid response and high-performance decision-making by considering the factors of data sources, data processing location, offline/online learning, data sharing and the life cycle of DLMs. In addition, related key technologies are presented to provide references for technical research directions in this field.
RoboDK Pro Training - Module 06 - 08 - Part Feeder - Create Mechanism - Robot Simulation - Tutorial
Automated Assembly for Waterproof Electrical Connectors, Courtesy of Noble Plastics
Robots Become More Useful In Factories
“The main focus of manufacturing is to increase productivity measured in throughput over a time period, with minimum downtime,” said Sathishkumar Balasubramanian, head of product management and marketing for IC verification at Siemens EDA. “But an assembly line is a dynamic environment, and automation is only part of the solution. On the outside, it seems to be important to have constant flow. However, variability in manufacturing flow is inevitable, and how the manufacturing process adapts to variation is highly critical to keeping downtime to a minimum. For example, in bottling manufacturing, how the work moves from station 1 to station 4, and a change in bottle orientation, can be addressed by an adaptive production line to meet peak demand with minimum disruption. That is very important. The ability to sense the status of the manufacturing line at the edge is key to the robotic manufacturing process.”
Industry 5.0: Adding the Human Edge to Industry 4.0
The arms of pick and place robots are equipped with end effectors similar to human hands that are specifically designed for picking various types of objects. These may include components that are further used in the manufacturing processes of products.
Pick and place robots have a wide range of capabilities. Depending on specific application requirements, they can be equipped with several types of end effectors. The most common ones include vacuum grippers with suction cups, fingered grippers, clawed grippers, magnetic grippers, or custom grippers. To achieve a high level of flexibility, pick and place robots are often equipped with multiple arms and heads. This helps them approach objects from several angles at any given time.
Mitsubishi Electric Develops Teaching-less Robot System Technology
Mitsubishi Electric Corporation announced it has developed a teaching-less robot system technology to enable robots to perform tasks, such as sorting and arrangement as fast as humans without having to be taught by specialists. The system incorporates Mitsubishi Electric’s Maisart AI technologies including high-precision speech recognition, which allows operators to issue voice instructions to initiate work tasks and then fine-tune robot movements as required. The technology is expected to be applied in facilities such as food-processing factories where items change frequently, which has made it difficult until now to introduce robots. Mitsubishi Electric aims to commercialize the technology in or after 2023 following further performance enhancements and extensive verifications.
Realtime Robotics enhances responsive workcell monitoring by reading CAD files with CAD Exchanger
It was Realtime Robotics who managed to fuse the latest technological achievements and academic research and develop a specialized toolkit for on-the-fly motion planning. “Realtime Robotics has created technology that solves a 30-year-old challenge in the robotics industry. Our motion planning solution allows industrial automation to move collision-free and respond to obstacles in real time,” says Will Floyd-Jones, Co-Founder & Robotics Engineer at Realtime Robotics.
The initial plan to cobble together various libraries was no longer relevant once Realtime Robotics stumbled upon CAD Exchanger. “We would have to hack a bunch of things together and try to figure out how to get them to import data in one common way. And then we discovered that CAD Exchanger has already solved the problem and would just do that for us,” explains Floyd-Jones.
MIRAI Case: Micropsi Industries Automates Leak Testing with Intelligent Complete solution
Marlan Lets Cobot Perform Heavy Repetitive Sanding Work
One of the most common operations when processing solid-surface products is sanding. This is heavy, repetitive work that requires skilled personnel, and such personnel are becoming increasingly difficult to find. In addition, it is important that quality is guaranteed. It became increasingly difficult for Marlan to organize this task properly, so the company went in search of a way to automate the process as much as possible.
After delivery, the cobot was deployed within two weeks. Heerema was able to program it within half an hour, without any programming experience. After the implementation and installation, two employees were trained and the cobot was fine-tuned to determine the correct pressure when sanding. This step was also the start of further optimizing other parts of the production. According to Heerema, some employees were immediately enthusiastic, but others were afraid of losing their jobs. “But that’s not what we’re about at all. We want to make it easier for employees and give them the opportunity to increase their output.” The employees are now fully accustomed to the cobot and see it as a kind of colleague.
The cobot at Marlan is currently used for sanding bathtubs. This is a large object that is difficult to sand manually. A major problem here is monitoring consistent quality. Heerema: “But with a cobot you can guarantee an even pressure which also ensures constant product quality.”
MiR+UR autonomous picking and transport
Can Boston Dynamics’ Robots Spot And Stretch Make It Profitable?
Fleet of MiR robots at Mirgor in Argentina
Machine Shop Creates Robot Machining Cell Before There was Work for It
This machine shop’s self-integrated robot was purchased without a project in mind. However, when a particular part order came in, the robot paired with the proper machine tool was an optimal fit for the job, offering consistency and an increase in throughput.
The M-10 is a six-axis robot designed specifically for small work cells that can lift up to 12 kg. Young purchased the robot with a force sensor, which he highly recommends. Force sensors enable robots to detect the force and torque applied to the end effector, giving them an almost human sense of touch. To the surprise of Young and his team, the force sensor was not difficult to set up and use.
After the robot purchase and the order came in, it was time to search for the right machine tool for the job. The Hardinge Bridgeport V480 APC VMC was attractive to Young because of its pallet changing system that maximizes spindle uptime.
Custom Tool’s automated data collection and reporting system, developed by company president Gillen Young, uses a web-based Industrial Internet of Things (IIoT) platform to pull data from machines that have agents for the open-source MTConnect communication protocol, as well as from the company’s JobBoss enterprise resource planning (ERP) software. The platform is Devicewise for Factory from Telit, a company that offers IIoT modules, software and connectivity services.
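As an illustration of what pulling data from an MTConnect agent can look like, here is a small sketch that polls an agent’s /current endpoint and flattens the observations; the agent URL is hypothetical, and real deployments would use the vendor platform described above.

```python
# Sketch of pulling machine state from an MTConnect agent's /current endpoint.
# The agent URL is a placeholder; MTConnect responses are XML documents whose
# observation elements carry a dataItemId attribute.
import requests
import xml.etree.ElementTree as ET

AGENT_URL = "http://mtconnect-agent.local:5000/current"  # hypothetical agent

def read_current(url: str = AGENT_URL) -> dict:
    """Return a flat {dataItemId: value} snapshot from the agent."""
    response = requests.get(url, timeout=5)
    response.raise_for_status()
    root = ET.fromstring(response.text)
    snapshot = {}
    for element in root.iter():
        item_id = element.attrib.get("dataItemId")
        if item_id is not None:
            snapshot[item_id] = element.text
    return snapshot

if __name__ == "__main__":
    for item, value in read_current().items():
        print(f"{item}: {value}")
```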
This Robotic Avatar Welds, Cuts, Lifts While Controlled By A VR Operator Over 5G
Guardian XT is the latest “highly dextrous mobile industrial robot” from Sarcos. Think of it as the top half of your body with super-strong arms, configurable attachments for different tasks, a built-in battery pack, cameras and sensors for eyes, and a 5G connection for taking orders from a remote operator who sees what the robot sees via a VR headset and wears a motion capture suit so the robot does what he or she does.
With different attachments on its arms, Guardian XT can weld, sand, grind, cut, inspect, and more. Over time the company will be developing more quick-swap attachments for more capabilities, just like an excavating company might purchase different buckets or attachments for its machinery as different jobs have varying requirements. Plus, there’s a three-fingered robotic hand coming that can hold and use many of the tools a human uses today.