Generative Adversarial Networks (GANs)
Assembly Line
Digital Transformation in Medical Device Manufacturing
The technique gained notoriety as a tool for creating “deepfake” videos on the internet, but it can also be adapted to work with 3D data to customize production of physical products, a concept that GAN inventor Ian Goodfellow has dubbed “GANufacturing.” Glidewell is the first company to use GANs to make better teeth. Dentists often spend considerable time and effort creating custom dental prostheses: a new prosthesis must not only conform to a 3D shape that fits the patient’s other teeth, but also work with the overall pattern of the person’s bite. As a result, a prosthesis typically needs to be tested on a dental model and ground to fit. Through GANufacturing, Glidewell can generate a near-perfect, realistic, and functional tooth that needs little or no post-processing.
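The adversarial training loop underlying these applications can be sketched on a toy one-dimensional problem. Everything below is illustrative: a linear generator, a logistic-regression discriminator, and a Gaussian target distribution, with nothing taken from Glidewell's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def sample_real(n):
    # "Real" data: samples from N(4, 1)
    return rng.normal(4.0, 1.0, n)

# Generator g(z) = a*z + b with noise z ~ N(0, 1)
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c)
w, c = 0.1, 0.0

lr = 0.01
for step in range(2000):
    z = rng.normal(0.0, 1.0, 64)
    x_real = sample_real(64)
    x_fake = a * z + b

    # Discriminator ascent step: maximize E[log D(real)] + E[log(1 - D(fake))]
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent step: maximize E[log D(fake)] (non-saturating loss)
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, generated samples should cluster near the real mean of 4
samples = a * rng.normal(0.0, 1.0, 1000) + b
```

The generator is never shown real data directly; it improves only by following the discriminator's gradient, which is what lets the same recipe scale from this 1D toy to images or 3D dental geometry.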
Toward Generalized Sim-to-Real Transfer for Robot Learning
A limitation of GANs for sim-to-real transfer, however, is that they translate images at the pixel level: multi-pixel features or structures that are necessary for robot task learning may be arbitrarily modified or even removed.
To address this limitation, and in collaboration with the Everyday Robot Project at X, we introduce two works, RL-CycleGAN and RetinaGAN, which train GANs with robot-specific consistency losses so that they do not arbitrarily modify visual features necessary for robot task learning, and thus bridge the visual discrepancy between sim and real.
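The consistency idea can be sketched as an extra penalty added to the generator's objective: run a frozen task model on the image before and after translation, and penalize any change in its output. The function names, the linear stand-in task model, and the weight `lam` below are illustrative assumptions, not the papers' actual losses or architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

def gan_loss(d_fake):
    """Non-saturating generator loss: -E[log D(G(x))]."""
    return float(-np.log(d_fake + 1e-8).mean())

def task_consistency_loss(task_model, x_sim, x_translated):
    """Penalize the translator for changing features the task model relies on.

    In RL-CycleGAN the task model is the RL policy's Q-network; in RetinaGAN
    it is an object detector whose predictions must stay consistent.
    """
    return float(np.mean((task_model(x_sim) - task_model(x_translated)) ** 2))

def generator_objective(d_fake, task_model, x_sim, x_translated, lam=10.0):
    """Plain GAN loss plus the robot-specific consistency term."""
    return gan_loss(d_fake) + lam * task_consistency_loss(task_model, x_sim, x_translated)

# Demo with a stand-in "task model": a fixed random linear map over pixels.
W = rng.normal(size=(64,))
task_model = lambda imgs: imgs.reshape(len(imgs), -1) @ W

x_sim = rng.normal(size=(4, 8, 8))          # a batch of fake "sim" images
identity_translation = x_sim.copy()          # preserves task-relevant features
corrupting_translation = x_sim + rng.normal(size=x_sim.shape)  # alters them

d_fake = np.full(4, 0.5)
loss_id = generator_objective(d_fake, task_model, x_sim, identity_translation)
loss_bad = generator_objective(d_fake, task_model, x_sim, corrupting_translation)
```

A translation that preserves task-relevant structure pays no consistency penalty, while one that corrupts it does, so gradient descent on this combined objective steers the GAN toward realism without destroying what the robot needs to see.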
Influence estimation for generative adversarial networks
The expanding range of applications [1, 2] of generative adversarial networks (GANs) makes improving their generative performance increasingly crucial. An effective way to improve machine learning models is to identify training instances that “harm” the model’s performance. Recent studies [3, 4] replaced traditional manual screening of a dataset with “influence estimation,” which evaluates the harmfulness of a training instance by how the model’s performance is expected to change when that instance is removed from the dataset. A typical harmful instance is a wrongly labeled one (e.g., a “dog” image labeled as a “cat”). Influence estimation judges this cat-labeled dog image as harmful when its removal is predicted to improve performance (Figure 1).
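The leave-one-out quantity behind influence estimation can be illustrated by actually retraining a tiny model with each training point removed (the cited studies estimate this change without retraining, which is the whole point of the method). The linear-regression setup and the corrupted index below are fabricated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: y = 2x + small noise, with one corrupted ("wrongly labeled") point.
X = rng.uniform(-1, 1, (40, 1))
y = 2.0 * X[:, 0] + 0.05 * rng.normal(size=40)
y[7] = -5.0  # the harmful instance

# Clean held-out set used to measure performance.
X_val = rng.uniform(-1, 1, (200, 1))
y_val = 2.0 * X_val[:, 0]

def fit(Xt, yt):
    # Least-squares linear fit with an intercept term.
    A = np.hstack([Xt, np.ones((len(Xt), 1))])
    coef, *_ = np.linalg.lstsq(A, yt, rcond=None)
    return coef

def val_loss(coef):
    A = np.hstack([X_val, np.ones((len(X_val), 1))])
    return np.mean((A @ coef - y_val) ** 2)

base = val_loss(fit(X, y))
# Leave-one-out influence: positive means removing the point is predicted
# to improve validation performance, i.e. the point is harmful.
influence = np.array([
    base - val_loss(fit(np.delete(X, i, axis=0), np.delete(y, i)))
    for i in range(len(X))
])
most_harmful = int(influence.argmax())
```

Retraining once per instance is quadratic in dataset size, which is why influence-estimation methods approximate this ranking from a single trained model instead.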