Imitation Learning

Assembly Line

RealMan Showcases Ultra-Lightweight Humanoid Robotic Arms at Advanced Manufacturing Madrid 2024

📅 Date:

🔖 Topics: Imitation Learning

🏒 Organizations: RealMan Robotics


RealMan Robotics, a global leader in the development, production, and sale of ultra-lightweight humanoid robotic arms, is unveiling its latest robotic arms, joint modules, and embodied intelligence platforms at Advanced Manufacturing Madrid 2024 in Spain.

RealMan’s dual-arm embodied intelligence development platform is a state-of-the-art data collection device designed for large embodied models. Using imitation learning algorithms trained on 50 task demonstrations combined with static data, it can achieve a task success rate of up to 90%. The standard configuration includes dual main arms for operation, a passive secondary arm, a global camera for object recognition, and a localized camera mounted on the secondary arm.
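
As a rough illustration of the demonstration-driven imitation learning recipe described above (not RealMan's actual software), the sketch below trains a behavioral-cloning policy that regresses recorded arm commands from camera images; the network shape, action dimension, and data loader are assumptions.

```python
# Minimal behavioral-cloning sketch: supervised regression from demonstration
# observations to recorded actions. Architecture and dimensions are illustrative.
import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    def __init__(self, action_dim=14):  # e.g. 7 joints per arm for a dual-arm platform
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, action_dim))

    def forward(self, image):
        return self.head(self.encoder(image))

def train_bc(policy, demo_loader, epochs=50, lr=1e-4):
    """demo_loader yields (image, action) pairs collected by teleoperation."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        for images, actions in demo_loader:
            loss = nn.functional.mse_loss(policy(images), actions)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```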

Read more at RealMan Robotics Blog

Precision Home Robotics w/Real-to-Sim-to-Real

Micropsi raises $30M to retrain industrial robots using human demonstrations

📅 Date:

✍️ Author: Kyle Wiggers

🔖 Topics: funding event, Imitation Learning

🏒 Organizations: Micropsi Industries


Micropsi claims that, using AI, MIRAI can generate robot movements in real time, handling variations in position, color, and lighting conditions. It can also be trained and retrained for various process steps, the company says, including detecting leaks in machinery, driving screws, inserting cables into products, and sorting objects on the assembly line.

“Data is generated by giving demonstrations, the rest is done by MIRAI in the background – it takes [about] 30 minutes of demonstrations and between one and three hours of number-crunching time in the cloud to create a new skill from scratch,” Vuine explained. “Users learn how to create good datasets by iterating their skills: Record some data, see the robot perform, add some more data to address weaknesses, and after three or four iterations, a very robust skill has been created. No one at Micropsi Industries needs to understand the use case, and no one on the customer side needs to understand the machine learning.”
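
The record-train-evaluate-iterate workflow Vuine describes can be summarized as a short loop. The sketch below is an illustration of that process, not Micropsi's API: collect_demonstrations, train_skill, and evaluate_on_robot are hypothetical stand-ins for a teleoperation session, a cloud training job, and robot trials.

```python
# Hedged sketch of the iterative skill-building loop described in the quote above.
def build_skill(collect_demonstrations, train_skill, evaluate_on_robot,
                target_success=0.9, max_iterations=4):
    dataset = collect_demonstrations(minutes=30)   # initial demonstrations
    skill = train_skill(dataset)                   # roughly 1-3 h of training per the article
    for _ in range(max_iterations):
        success_rate, failure_cases = evaluate_on_robot(skill)
        if success_rate >= target_success:
            break
        # Record extra demonstrations focused on the observed weaknesses,
        # then retrain on the enlarged dataset.
        dataset += collect_demonstrations(minutes=10, focus=failure_cases)
        skill = train_skill(dataset)
    return skill
```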

Read more at VentureBeat

Can Robots Follow Instructions for New Tasks?

📅 Date:

✍️ Authors: Chelsea Finn, Eric Jang

🔖 Topics: robotics, natural language processing, imitation learning

🏒 Organizations: Google, Everyday Robots


The results of this research show that simple imitation learning approaches can be scaled in a way that enables zero-shot generalization to new tasks. That is, it shows one of the first indications of robots being able to successfully carry out behaviors that were not in the training data. Interestingly, language embeddings pre-trained on ungrounded language corpora make for excellent task conditioners. We demonstrated that natural language models can not only provide a flexible input interface to robots, but that pretrained language representations actually confer new generalization capabilities to the downstream policy, such as composing unseen object pairs together.
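
The core mechanism, conditioning an imitation policy on a frozen, pretrained embedding of the instruction, can be sketched roughly as below. This is an assumed, simplified architecture rather than the BC-Z model itself; the sentence-transformers encoder, layer sizes, and action dimension are stand-ins.

```python
# Illustrative language-conditioned imitation policy: a pretrained text encoder
# embeds the instruction, and the policy conditions on that embedding plus the image.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

text_encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the paper's encoder

class LanguageConditionedPolicy(nn.Module):
    def __init__(self, lang_dim=384, action_dim=7):
        super().__init__()
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 + lang_dim, 256), nn.ReLU(), nn.Linear(256, action_dim)
        )

    def forward(self, image, instruction_embedding):
        return self.head(torch.cat([self.vision(image), instruction_embedding], dim=-1))

# Zero-shot style usage: embed an instruction that never appeared in training.
emb = torch.tensor(text_encoder.encode(["place the bottle in the ceramic bowl"]))
policy = LanguageConditionedPolicy()
action = policy(torch.zeros(1, 3, 128, 128), emb)
```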

In the course of building this system, we confirmed that periodic human interventions are a simple but important technique for achieving good performance. While there is a substantial amount of work to be done in the future, we believe that the zero-shot generalization capabilities of BC-Z are an important advancement towards increasing the generality of robotic learning systems and allowing people to command robots. We have released the teleoperated demonstrations used to train the policy in this paper, which we hope will provide researchers with a valuable resource for future multi-task robotic learning research.
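
The "periodic human interventions" mentioned above are in the spirit of DAgger-style data collection: an operator takes over when the policy struggles, and the corrective actions are added to the training set. A minimal sketch, assuming hypothetical env, operator, and policy interfaces:

```python
# Illustrative intervention-based data collection; `env`, `operator`, and
# `policy` are hypothetical placeholders for a real teleoperation setup.
def collect_with_interventions(policy, env, operator, dataset, num_episodes=10):
    for _ in range(num_episodes):
        obs, done = env.reset(), False
        while not done:
            if operator.wants_control(obs):
                # The human takes over; the corrective action is executed and
                # also logged so the next training round can learn from it.
                action = operator.get_action(obs)
                dataset.append((obs, action))
            else:
                action = policy(obs)
            obs, done = env.step(action)
    return dataset
```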

Read more at Google AI Blog

Toward Generalized Sim-to-Real Transfer for Robot Learning

📅 Date:

✍️ Authors: Daniel Ho, Kanishka Rao

🔖 Topics: reinforcement learning, AI, robotics, imitation learning, generative adversarial networks

🏒 Organizations: Google


A limitation for their use in sim-to-real transfer, however, is that because GANs translate images at the pixel level, multi-pixel features or structures that are necessary for robot task learning may be arbitrarily modified or even removed.

To address the above limitation, and in collaboration with the Everyday Robot Project at X, we introduce two works, RL-CycleGAN and RetinaGAN, that train GANs with robot-specific consistencies (so that they do not arbitrarily modify visual features that are necessary for robot task learning) and thus bridge the visual discrepancy between sim and real.
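
The essence of a robot-specific consistency loss can be written compactly: alongside the usual adversarial term, the generator is penalized if a task model (an object detector in the spirit of RetinaGAN, or a Q-network in the spirit of RL-CycleGAN) changes its predictions on the translated image. The sketch below is an assumed simplification of that idea, not the papers' exact objectives; generator, discriminator, and task_model are placeholders.

```python
# Hedged sketch of a GAN generator loss with a task-consistency term; this shows
# the idea, not the exact RL-CycleGAN / RetinaGAN objectives. `generator`,
# `discriminator`, and `task_model` are assumed to be defined elsewhere.
import torch
import torch.nn.functional as F

def generator_loss(generator, discriminator, task_model, sim_image, consistency_weight=1.0):
    fake_real = generator(sim_image)   # translate the sim image toward the real domain
    logits = discriminator(fake_real)
    adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # Consistency term: the task model should make the same predictions before and
    # after translation, so features the robot relies on are not arbitrarily altered.
    consistency = F.mse_loss(task_model(fake_real), task_model(sim_image).detach())
    return adv_loss + consistency_weight * consistency
```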

Read more at Google AI Blog