Massachusetts Institute of Technology (MIT)
Canvas Category Consultancy : Research : Academic
Committed to doing groundbreaking work in computing, CSAIL has played key roles in developing innovations like the World Wide Web, RSA encryption, Ethernet, parallel computing, and much of the technology underlying the ARPANET and the Internet.
Assembly Line
Combining next-token prediction and video diffusion in computer vision and robotics
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have proposed a simple change to the diffusion training scheme that makes sequence denoising considerably more flexible.
When applied to fields like computer vision and robotics, the next-token and full-sequence diffusion models have capability trade-offs. Next-token models can spit out sequences that vary in length. However, they make these generations while remaining unaware of desirable states in the far future — such as steering the sequence toward a certain goal 10 tokens away — and thus require additional mechanisms for long-horizon (long-term) planning. Diffusion models can perform such future-conditioned sampling, but lack the ability of next-token models to generate variable-length sequences.
Researchers from CSAIL want to combine the strengths of both models, so they created a sequence model training technique called “Diffusion Forcing.” The name comes from “Teacher Forcing,” the conventional training scheme that breaks down full sequence generation into the smaller, easier steps of next-token generation (much like a good teacher simplifying a complex concept).
Diffusion Forcing found common ground between diffusion models and teacher forcing: they both use training schemes that involve predicting masked (noisy) tokens from unmasked ones. In the case of diffusion models, they gradually add noise to data, which can be viewed as fractional masking. The MIT researchers’ Diffusion Forcing method trains neural networks to cleanse a collection of tokens, removing different amounts of noise within each one while simultaneously predicting the next few tokens. The result: a flexible, reliable sequence model that produced higher-quality synthetic videos and more precise decision-making for robots and AI agents.
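In sketch form, a training step might look like the following, where each token in a sequence receives its own independently sampled noise level and the network is trained to predict that noise. The model, noise schedule, and dimensions here are illustrative stand-ins, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

T, B, L, D = 1000, 8, 16, 32                   # diffusion steps, batch, sequence length, token dim
denoiser = nn.GRU(D + 1, D, batch_first=True)  # causal stand-in for the sequence model

def diffusion_forcing_step(x0):
    """x0: (B, L, D) clean token sequence."""
    k = torch.randint(0, T, (B, L))                              # independent noise level per token
    a = (torch.cos(0.5 * torch.pi * k / T) ** 2).unsqueeze(-1)   # cosine schedule: fractional "mask"
    noise = torch.randn_like(x0)
    xk = a.sqrt() * x0 + (1 - a).sqrt() * noise                  # noise each token by its own amount
    inp = torch.cat([xk, (k.float() / T).unsqueeze(-1)], dim=-1) # condition on each noise level
    pred, _ = denoiser(inp)                                      # causally denoise the whole sequence
    return nn.functional.mse_loss(pred, noise)                   # standard noise-prediction loss

loss = diffusion_forcing_step(torch.randn(B, L, D))
```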
Blue Energy Secures $45M to Make Clean, Reliable Nuclear Power Commercially Viable
Blue Energy, a nuclear power plant company, emerged from stealth with a $45 million Series A fundraise co-led by Engine Ventures and At One Ventures, with investment from Angular Ventures, Tamarack Global, Propeller Ventures, Starlight Ventures, and Nucleation Capital. Blue Energy also introduced its modular nuclear power plant that can be centrally manufactured in existing shipyards. Shipyard manufacturing reduces the cost and build time of deploying nuclear power safely, making nuclear power economically competitive with fossil fuels and renewables. The funding will be used to advance Blue Energy’s core engineering work and site development, and secure additional partners.
The AI datacenter and manufacturing boom has amplified the demand for reliable, clean electricity in the U.S. and around the world. To decarbonize and grow electricity production while maintaining energy affordability and security, the global energy generation mix must include more nuclear power. However, new nuclear plant construction projects face multi-year delays and exorbitant costs. While there have been exciting advancements in nuclear reactor technology, reactors make up less than 10% of the cost of nuclear power plants; over 90% of the cost comes from construction and regulatory challenges in the rest of the plant. Blue Energy’s innovation is a modular, reactor-agnostic power plant architecture to house the next generation of nuclear reactors. Blue Energy’s power plants use centralized shipyard manufacturing to dramatically reduce capital costs from $10K/kW to $2K/kW and shrink build times from 10 years to 2 years.
Semiconductor-free, monolithically 3D-printed logic gates and resettable fuses
Additive manufacturing has the potential to enable the inexpensive, single-step fabrication of fully functional electromechanical devices. However, while the 3D printing of mechanical parts and passive electrical components is well developed, the fabrication of fully 3D-printed active electronics, which are the cornerstone of intelligent devices, remains a challenge. Existing examples of 3D-printed active electronics show potential but lack integrability and accessibility. This work reports the first active electronics fully 3D-printed via material extrusion, i.e., one of the most accessible and versatile additive manufacturing processes. The technology is demonstrated as a proof of concept through the implementation of the first fully 3D-printed, semiconductor-free, solid-state logic gates and the first fully 3D-printed resettable fuses. The devices take advantage of a positive temperature coefficient phenomenon found to affect narrow traces of 3D-printed copper-reinforced polylactic acid. Although the reported devices do not perform competitively against semiconductor-enabled integrated circuits, the customisability and accessibility intrinsic to material extrusion additive manufacturing make this technology promisingly disruptive. This work serves as a stepping stone for the semiconductor-free democratisation of electronic device fabrication and is of immediate relevance for the manufacture of custom, intelligent devices far from traditional manufacturing centres.
MIT researchers use large language models to flag problems in complex systems
In a new study, MIT researchers found that large language models (LLMs) hold the potential to be more efficient anomaly detectors for time-series data. Importantly, these pretrained models can be deployed right out of the box.
The researchers developed a framework, called SigLLM, which includes a component that converts time-series data into text-based inputs an LLM can process. A user can feed these prepared data to the model and ask it to start identifying anomalies. The LLM can also be used to forecast future time-series data points as part of an anomaly detection pipeline.
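The conversion step is conceptually simple. A minimal sketch, assuming a scale-and-round scheme similar in spirit to (but not identical with) SigLLM's preprocessing:

```python
def series_to_text(values, scale=100):
    """Scale and round a numeric series, then serialize it as a compact
    comma-separated string that an LLM tokenizer can handle."""
    return ",".join(str(int(round(v * scale))) for v in values)

# the outlier 0.95 stands out in the serialized sequence: "12,15,11,95,13"
prompt = series_to_text([0.12, 0.15, 0.11, 0.95, 0.13])
```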
While LLMs could not beat state-of-the-art deep learning models at anomaly detection, they did perform as well as some other AI approaches. If researchers can improve the performance of LLMs, this framework could help technicians flag potential problems in equipment like heavy machinery or satellites before they occur, without the need to train an expensive deep-learning model.
In the future, an LLM may also be able to provide plain language explanations with its predictions, so an operator could be better able to understand why an LLM identified a certain data point as anomalous.
AeroShield Materials Announces Opening of New Manufacturing Facility, $5M Additional Funding to Produce Cutting-Edge, Energy Efficient Technology for Built Environment
AeroShield Materials, an MIT spin out developing cutting-edge technology for energy efficiency applications in the built environment, announced today the opening of their new manufacturing facility in Waltham, Massachusetts and the close of $5M in additional funding with participation from existing investors Massachusetts Clean Energy Center and MassVentures as well as new investor MassMutual Ventures. The funding will support AeroShield in scaling its business by enabling the company to expand its team to more than 30 employees in the 12,000 square-foot facility where it plans to manufacture full-sized products, including double pane windows.
AeroShield’s breakthrough transparent silica aerogel sheets are some of the most insulating materials ever created and can be added inside double-pane windows and doors to improve energy performance by up to 65%. The facility in Waltham will significantly expand the company’s capabilities with cutting-edge aerogel R&D labs, manufacturing equipment, assembly lines, and testing equipment.
Precision Home Robotics w/Real-to-Sim-to-Real
Open-TeleVision: Why human intelligence could be the key to next-gen robotic automation
MIT and UCSD unveiled a new immersive remote control experience for robots. This innovative system, dubbed “Open-TeleVision,” enables operators to actively perceive the robot’s surroundings while mirroring their hand and arm movements. As the researchers describe it, the system “creates an immersive experience as if the operator’s mind is transmitted to a robot embodiment.”
The Open-TeleVision system takes a different approach to robotics. Instead of trying to replicate human intelligence in a machine, it creates a seamless interface between human operators and robotic bodies. The researchers explain that their system “allows operators to actively perceive the robot’s surroundings in a stereoscopic manner. Additionally, the system mirrors the operator’s arm and hand movements on the robot.”
SiTration Raises $11.8 Million for Critical Metals Recovery
SiTration, a materials recovery company serving the mining and metals industries, announced it has raised $11.8 million in seed capital. The financing round was led by 2150 with participation from BHP Ventures, Extantia, and Orion Industrial Ventures. Previous investors Azolla Ventures and MIT-affiliated E14 Fund also participated in the oversubscribed round. The funding will be used to scale the company’s novel solution for the recovery of critical metals and minerals and to deploy pilot systems with commercial partners.
Founded as a spinoff from research conducted at MIT, SiTration is working to address the demand for critical materials needed to manufacture technologies that are key to the clean energy transition, including electric motors, wind turbines, and batteries. The company’s innovative solution lowers both the cost and the resource intensity of extracting and recycling materials, contributing to the overall push towards a circular economy.
Making steel with electricity
Boston Metal is seeking to clean up the steelmaking industry using an electrochemical process called molten oxide electrolysis (MOE), which eliminates many steps in steelmaking and releases oxygen as its sole byproduct.
Boston Metal’s molten oxide electrolysis process takes place in modular MOE cells, each the size of a school bus. Iron ore rock is fed into the cell, which contains the cathode (the negative terminal of the MOE cell) and an anode immersed in a liquid electrolyte. The anode is inert, meaning it doesn’t dissolve in the electrolyte or take part in the reaction other than serving as the positive terminal. When electricity runs between the anode and cathode and the cell reaches around 1,600 degrees Celsius, the iron oxide bonds in the ore are split, producing pure liquid metal at the bottom that can be tapped. The byproduct of the reaction is oxygen, and the process doesn’t require water, hazardous chemicals, or precious-metal catalysts.
Each cell’s production rate depends on the size of its current. Lambotte says that with about 600,000 amps, each cell could produce up to 10 tons of metal every day. Steelmakers would license Boston Metal’s technology and deploy as many cells as needed to reach their production targets.
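That 10-ton figure squares with a back-of-the-envelope Faraday's-law calculation, assuming the cell reduces Fe3+ (three electrons per iron atom) at 100 percent current efficiency:

```python
I = 600_000           # current, amperes
t = 24 * 3600         # seconds in a day
F = 96_485            # Faraday constant, coulombs per mole of electrons
M_Fe = 55.85          # molar mass of iron, grams per mole

# moles of electrons = I*t/F; divide by 3 electrons per Fe atom, convert to mass
mass_per_day = I * t / F / 3 * M_Fe   # grams
print(f"{mass_per_day / 1e6:.1f} tonnes of iron per day")  # ~10.0
```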
Robotic palm mimics human touch
MIT CSAIL researchers enhance robotic precision with sophisticated tactile sensors in the palm and agile fingers, setting the stage for improvements in human-robot interaction and prosthetic technology. For anyone who has kept up with this protean field, getting robots to grip and grasp more like humans has been an ongoing Herculean effort. Now, a new robotic hand design developed in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has rethought the oft-overlooked palm. The new design uses advanced sensors for a highly sensitive touch, helping the “extremity” handle objects with more detailed and delicate precision.
GelPalm has a gel-based, flexible sensor embedded in the palm, drawing inspiration from the soft, deformable nature of human hands. The sensor uses a color illumination technique in which red, green, and blue LEDs light an object and a camera captures the reflections. This mixture generates detailed 3D surface models for precise robotic interactions.
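The underlying principle is photometric stereo: with three known lighting directions and a roughly Lambertian surface, per-pixel surface normals follow from a single 3x3 solve. A minimal sketch, with LED directions invented for illustration:

```python
import numpy as np

# assumed (made-up) lighting directions for the red, green, and blue LEDs
L = np.array([[ 1.0,  0.0,  0.7],
              [-0.5,  0.87, 0.7],
              [-0.5, -0.87, 0.7]])

def normals_from_rgb(img):
    """img: (H, W, 3) intensities, one channel per LED.
    Lambertian model: I = L @ n, so n ~ inv(L) @ I per pixel."""
    n = np.einsum("ij,hwj->hwi", np.linalg.inv(L), img)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8)
```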
Active Surfaces Secures $5.6M in Oversubscribed Pre-Seed Funding to Revolutionize Solar Technology
Active Surfaces, an innovative flexible solar panel startup spun out from MIT, announced it has raised $5.6 million in an oversubscribed pre-seed funding round. The round was led by Safar Partners, a prominent deep tech venture capital fund. Additional participants—including QVT, Lendlease, Type One Ventures, Umami Capital, Sabanci Climate Ventures, New Climate Ventures, SeaX Ventures, and others—reflect a diverse support base ranging from institutional VCs to corporate backers.
Active Surfaces is pioneering the next generation of solar technology with its lightweight, flexible solar panels that can be integrated into virtually any surface. Unlike bulky traditional solar panels, Active Surfaces’ technologies will seamlessly blend into everyday environments, from small consumer products that can go anywhere to large commercial, office, and industrial buildings.
MIT spin-off Rapid Liquid Print raises $7M for 3D printing
MIT spin-off Rapid Liquid Print has raised $7 million in funding for its novel liquid-based 3D printing technology. Boston-based Rapid Liquid Print was founded as an additive manufacturing startup in 2015 as a spin-off from the Massachusetts Institute of Technology (MIT). Germany’s HZG Group led the investment round, joined by BMW i Ventures and MassMutual through MM Catalyst Fund (MMCF).
The name of the company says it all: Rapid Liquid Print is a new 3D printing process developed at MIT’s Self-Assembly Lab. In this innovative process, an object is “drawn” in three dimensions within a gel suspension: a gantry system injects a liquid material mixture through a nozzle into a container filled with a specifically engineered gel. The gel holds the object in suspension – as if in zero gravity – while the object cures during printing.
The entire printing process takes minutes and requires no additional support structures to be printed. The printed objects can be used immediately without post-processing.
Quaise Energy Raises $21 Million to Accelerate Terawatt-Scale Deep Geothermal Energy
Quaise Energy, the company unlocking terawatt-scale geothermal, announced the closing of a $21 Million Series A1 financing round led by Prelude Ventures and Safar Partners. Mitsubishi Corporation and Standard Investments are among several new investors participating in the round. This latest funding will enhance Quaise field operations and strengthen the company’s supply chain position, while ongoing product development will continue with pre-existing capital.
Quaise is uniquely positioned to harness deep geothermal energy worldwide at 3-20 km below the Earth’s surface. To achieve such a feat, the company has advanced a novel technique to vaporize rock using high-power microwaves in the millimeter range, based on more than a decade of research at MIT and recent testing at Oak Ridge National Laboratory. The original MIT experiments have now been scaled up 100x, with field demonstrations commencing this year.
Method rapidly verifies that a robot will avoid collisions
MIT researchers have developed a safety check technique which can prove with 100 percent accuracy that a robot’s trajectory will remain collision-free (assuming the model of the robot and environment is itself accurate). Their method, which is so precise it can discriminate between trajectories that differ by only millimeters, provides proof in only a few seconds.
The researchers accomplished this using a special algorithmic technique, called sum-of-squares programming, and adapted it to effectively solve the safety check problem. Using sum-of-squares programming enables their method to generalize to a wide range of complex motions.
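A toy example conveys the flavor of a sum-of-squares certificate; the actual method applies the same machinery to polynomial safety margins along an entire trajectory. Proving that p(x) = x^4 - 2x^2 + 2 is nonnegative everywhere reduces to finding a positive semidefinite Gram matrix, which is a semidefinite program:

```python
import cvxpy as cp

# find a PSD matrix Q with z^T Q z = p(x), where z = [1, x, x^2];
# feasibility certifies p(x) >= 0 for all x
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                       # Q positive semidefinite => p is a sum of squares
    Q[0, 0] == 2,                 # constant term
    2 * Q[0, 1] == 0,             # x coefficient
    2 * Q[0, 2] + Q[1, 1] == -2,  # x^2 coefficient
    2 * Q[1, 2] == 0,             # x^3 coefficient
    Q[2, 2] == 1,                 # x^4 coefficient
]
cp.Problem(cp.Minimize(0), constraints).solve()
print("certificate found:", Q.value is not None)
```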
New AI model could streamline operations in a robotic warehouse
Getting 800 robots to and from their destinations efficiently while keeping them from crashing into each other is no easy task. It is such a complex problem that even the best path-finding algorithms struggle to keep up with the breakneck pace of e-commerce or manufacturing.
The researchers built a deep-learning model that encodes important information about the warehouse, including the robots, planned paths, tasks, and obstacles, and uses it to predict the best areas of the warehouse to decongest to improve overall efficiency. Their technique divides the warehouse robots into groups, so these smaller groups of robots can be decongested faster with traditional algorithms used to coordinate robots. In the end, their method decongests the robots nearly four times faster than a strong random search method.
The technique also streamlines computation by encoding constraints only once, rather than repeating the process for each subproblem. For instance, in a warehouse with 800 robots, decongesting a group of 40 robots requires holding the other 760 robots as constraints. Other approaches require reasoning about all 800 robots once per group in each iteration.
Using AI to discover stiff and tough microstructures
Innovative AI system from MIT CSAIL melds simulations and physical testing to forge materials with newfound durability and flexibility for diverse engineering uses.
A team of researchers moved beyond traditional trial-and-error methods to create materials with extraordinary performance through computational design. Their new system integrates physical experiments, physics-based simulations, and neural networks to navigate the discrepancies often found between theoretical models and practical results. One of the most striking outcomes: the discovery of microstructured composites — used in everything from cars to airplanes — that are much tougher and more durable, with an optimal balance of stiffness and toughness.
Rounding out the system was their “Neural-Network Accelerated Multi-Objective Optimization” (NMO) algorithm, which navigates the complex design landscape of microstructures, unveiling configurations that exhibit near-optimal mechanical attributes. The workflow operates like a self-correcting mechanism, continually refining predictions to align more closely with reality.
DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models
Nature evolves creatures with a high complexity of morphological and behavioral intelligence, while computational methods lag in approaching that diversity and efficacy. Co-optimization of artificial creatures’ morphology and control in silico shows promise for applications in physical soft robotics and virtual character creation; such approaches, however, require developing new learning algorithms that can reason about function atop pure structure. In this paper, we present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks. DiffuseBot bridges the gap between virtually generated content and physical utility by (i) augmenting the diffusion process with a physical dynamical simulation which provides a certificate of performance, and (ii) introducing a co-design procedure that jointly optimizes physical design and control by leveraging information about physical sensitivities from differentiable simulation. We showcase a range of simulated and fabricated robots along with their capabilities.
Closing the design-to-manufacturing gap for optical devices
We introduce neural lithography to address the ‘design-to-manufacturing’ gap in computational optics. Computational optics with large design degrees of freedom enable advanced functionalities and performance beyond traditional optics. However, the existing design approaches often overlook the numerical modeling of the manufacturing process, which can result in significant performance deviation between the design and the fabricated optics. To bridge this gap, we, for the first time, propose a fully differentiable design framework that integrates a pre-trained photolithography simulator into the model-based optical design loop. Leveraging a blend of physics-informed modeling and data-driven training using experimentally collected datasets, our photolithography simulator serves as a regularizer on fabrication feasibility during design, compensating for structure discrepancies introduced in the lithography process. We demonstrate the effectiveness of our approach through two typical tasks in computational optics, where we design and fabricate a holographic optical element (HOE) and a multi-level diffractive lens (MDL) using a two-photon lithography system, showcasing improved optical performance on the task-specific metrics.
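The core loop can be sketched in a few lines of differentiable code. Both models below are crude stand-ins for the paper's pretrained photolithography simulator and wave-optics forward model, and the target pattern is invented:

```python
import torch
import torch.nn as nn

# frozen proxy for the pretrained, differentiable lithography simulator
litho_simulator = nn.Sequential(nn.Conv2d(1, 1, 5, padding=2), nn.Sigmoid())
for p in litho_simulator.parameters():
    p.requires_grad_(False)

def optics_forward(height_map):
    return torch.fft.fft2(height_map).abs()    # toy far-field intensity model

target = torch.zeros(256, 256)
target[100:156, 100:156] = 1.0                 # desired intensity pattern

design = torch.randn(1, 1, 256, 256, requires_grad=True)
opt = torch.optim.Adam([design], lr=1e-2)
for step in range(200):
    printed = litho_simulator(design)          # what would actually come off the printer
    image = optics_forward(printed[0, 0])      # optical response of the printed structure
    loss = ((image - target) ** 2).mean()      # task metric now sees fabrication effects
    opt.zero_grad(); loss.backward(); opt.step()
```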
Inside the lab: MIT CSAIL
This 3D printer can watch itself fabricate objects
Researchers from MIT, the MIT spinout Inkbit, and ETH Zurich have developed a new 3D inkjet printing system that works with a much wider range of materials. Their printer uses computer vision to automatically scan the 3D printing surface and adjust the amount of resin each nozzle deposits in real time, ensuring no areas receive too much or too little material.
Since it does not require mechanical parts to smooth the resin, this contactless system works with materials that cure more slowly than the acrylates which are traditionally used in 3D printing. Some slower-curing material chemistries can offer improved performance over acrylates, such as greater elasticity, durability, or longevity.
In addition, the automatic system makes adjustments without stopping or slowing the printing process, making this production-grade printer about 660 times faster than a comparable 3D inkjet printing system.
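In cartoon form, the per-nozzle feedback can be pictured as proportional control on a height map. The gain and clipping below are invented for illustration, not Inkbit's actual control law:

```python
import numpy as np

def next_deposit(target_height, scanned_height, nominal, gain=0.8):
    """Modulate each nozzle's next deposit based on the vision scan."""
    error = target_height - scanned_height             # positive where material is missing
    return np.clip(nominal + gain * error, 0.0, None)  # deposit more where low, less where high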
To excel at engineering design, generative AI must learn to innovate, study finds
“Deep generative models (DGMs) are very promising, but also inherently flawed,” says study author Lyle Regenwetter, a mechanical engineering graduate student at MIT. “The objective of these models is to mimic a dataset. But as engineers and designers, we often don’t want to create a design that’s already out there.” He and his colleagues make the case that if mechanical engineers want help from AI to generate novel ideas and designs, they will have to first refocus those models beyond “statistical similarity.”
For instance, if DGMs can be built with other priorities, such as performance, design constraints, and novelty, Ahmed foresees “numerous engineering fields, such as molecular design and civil infrastructure, would greatly benefit. By shedding light on the potential pitfalls of relying solely on statistical similarity, we hope to inspire new pathways and strategies in generative AI applications outside multimedia.”
🧠🚚 Venti Aims To Bring Autonomous Vehicles To Ports, Factories And Airports
Venti Technologies, based in both Singapore and Boston, Massachusetts, is helping to advance the category with its autonomous logistics for industrial and global supply chain hubs. The technology and algorithms behind Venti were invented by Dr. Daniela Rus, who is also the director of CSAIL—the MIT Computer Science and Artificial Intelligence Laboratory, which in the past has been the source of many start-ups representing some $2 trillion in revenue.
Preparing a container port of this size for the use of autonomous vehicles is no small technological feat. The container port is six kilometres on a side, and Venti had to GPS-map the facility down to a metre. Then, using mathematical modelling, deep learning, and theoretically grounded algorithms, Venti deploys its proprietary platform of autonomy technologies, including a suite of powerful logistics algorithms, to automate the interactions between vehicles at one of the largest and most technologically sophisticated ports in the world.
🧠 How MIT’s Liquid Neural Networks can solve AI problems from robotics to self-driving cars
Liquid neural networks, a novel type of deep learning architecture developed by researchers at the Computer Science and Artificial Intelligence Laboratory at MIT (CSAIL), offer a compact, adaptable and efficient solution to certain AI problems. These networks are designed to address some of the inherent challenges of traditional deep learning models.
Liquid neural networks represent a significant departure from traditional deep learning models. They use a mathematical formulation that is less computationally expensive and stabilizes neurons during training. The key to LNNs’ efficiency lies in their use of dynamically adjustable differential equations, which allows them to adapt to new situations after training. This is a capability not found in typical neural networks.
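The flavor of such a model can be sketched as an ODE cell whose effective time constant depends on the current input, integrated with a simple Euler step. This is a simplified illustration, not the authors' exact formulation:

```python
import torch
import torch.nn as nn

class LTCCell(nn.Module):
    """Minimal liquid time-constant cell (a sketch)."""
    def __init__(self, n_inputs, n_hidden):
        super().__init__()
        self.gate = nn.Linear(n_inputs + n_hidden, n_hidden)  # f(x, I)
        self.tau = nn.Parameter(torch.ones(n_hidden))         # base time constants
        self.A = nn.Parameter(torch.zeros(n_hidden))          # target state

    def forward(self, x, I, dt=0.1):
        f = torch.sigmoid(self.gate(torch.cat([x, I], dim=-1)))
        # the "liquid" part: the effective time constant 1/tau + f varies with the input
        dx = -(1.0 / self.tau + f) * x + f * self.A
        return x + dt * dx                                    # one explicit Euler step

cell = LTCCell(n_inputs=4, n_hidden=8)
x = torch.zeros(1, 8)
for t in range(100):                    # unroll over a streaming input
    x = cell(x, torch.randn(1, 4))
```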
The networks’ significantly reduced size has several important consequences, Rus said. First, it enables the model to run on the small computers found in robots and other edge devices. And second, with fewer neurons, the network becomes much more interpretable. Interpretability is a significant challenge in the field of AI. With traditional deep learning models, it can be difficult to understand how the model arrived at a particular decision.
A simpler method for learning to control a robot
Researchers from MIT and Stanford University have devised a new machine-learning approach that could be used to control a robot, such as a drone or autonomous vehicle, more effectively and efficiently in dynamic environments where conditions can change rapidly.
The researchers’ approach incorporates certain structure from control theory into the process for learning a model in such a way that leads to an effective method of controlling complex dynamics, such as those caused by impacts of wind on the trajectory of a flying vehicle. With this structure, they can extract a controller directly from the dynamics model, rather than using data to learn an entirely separate model for the controller.
The researchers also found that their method was data-efficient, meaning it achieved high performance even with little data. For instance, it could effectively model a highly dynamic rotor-driven vehicle using only 100 data points. Methods that used multiple learned components saw their performance drop much faster with smaller datasets.
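A sketch of why the imposed structure pays off: if the learned dynamics are control-affine, i.e. x_dot = f(x) + g(x) u, then a tracking controller drops out as a least-squares solve rather than a second learned model. The toy dynamics below are invented for illustration:

```python
import numpy as np

def extract_control(x, x_dot_des, f, g):
    """Solve g(x) @ u ~= x_dot_des - f(x) for the control input u."""
    u, *_ = np.linalg.lstsq(g(x), x_dot_des - f(x), rcond=None)
    return u

# toy example: damped, fully actuated 2D system
f = lambda x: -0.5 * x          # learned drift term
g = lambda x: np.eye(2)         # learned input matrix
u = extract_control(np.array([1.0, 0.0]), np.zeros(2), f, g)
```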
📦 The chore of packing just got faster and easier
A team of researchers from MIT and Inkbit (an MIT spinout company based in Medford, Massachusetts), headed by Wojciech Matusik, an MIT professor and Inkbit co-founder, is presenting this technique, which they call “dense, interlocking-free and Scalable Spectral Packing,” or SSP.
The first step in SSP is to work out an ordering of solid 3D objects for filling a fixed container. One possible approach, for example, is to start with the largest objects and end with the smallest. The next step is to place each object into the container. To facilitate this process, the container is “voxelized,” meaning that it is represented by a 3D grid composed of tiny cubes or voxels, each of which may be just a cubic millimeter in size. The grid shows which parts of the container — or which voxels — are already filled and which are vacant.
Figuring out the best placements for each and every object as the container fills up obviously requires a lot of calculations. But the team employed a mathematical technique, the fast Fourier transform (FFT), which had never been applied to the packing problem before. By using FFT, the problems of minimizing voxel overlap and minimizing gaps for all voxels in the container can be solved through a relatively limited set of calculations, such as simple multiplications, as opposed to the impractical alternative of testing out all possible locations for the objects to be positioned inside. And that makes packing faster by several orders of magnitude.
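To see why the FFT helps: the voxel overlap count for every candidate translation of an object is exactly a cross-correlation between the object's voxel grid and the container's occupancy grid, and the FFT computes all of those counts at once. A minimal sketch:

```python
import numpy as np

def overlap_map(container_occ, obj_occ):
    """container_occ, obj_occ: 3D 0/1 occupancy arrays. Returns, for every
    translation, the number of voxels where the object would collide with
    already-occupied space (correlation via the convolution theorem)."""
    shape = [c + o - 1 for c, o in zip(container_occ.shape, obj_occ.shape)]
    F = np.fft.rfftn(container_occ, shape)
    G = np.fft.rfftn(obj_occ[::-1, ::-1, ::-1], shape)  # flipped kernel => correlation
    return np.round(np.fft.irfftn(F * G, shape)).astype(int)

# translations whose overlap count is zero (and that keep the object inside
# the container bounds) are the collision-free placement candidates
```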
Why 3D printing is vital to success of US manufacturing
Closed-loop fully-automated frameworks for accelerating materials discovery
Our work shows that a fully-automated closed-loop framework driven by sequential learning can accelerate the discovery of materials by up to 10-25x (or a reduction in design time by 90-95%) when compared to traditional approaches. We show that such closed-loop frameworks can lead to enormous improvement in researcher productivity in addition to reducing overall project costs. Overall, these findings present a clear value proposition for investing in closed-loop frameworks and sequential learning in materials discovery and design enterprises.
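A minimal sequential-learning loop looks like the following. The surrogate model and purely greedy acquisition rule are illustrative choices; real campaigns typically use uncertainty-aware acquisition:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def run_campaign(candidates, measure, n_init=10, n_rounds=40):
    """candidates: (N, d) feature matrix; measure: callable that runs one experiment."""
    rng = np.random.default_rng(0)
    tested = list(rng.choice(len(candidates), n_init, replace=False))
    y = [measure(candidates[i]) for i in tested]
    for _ in range(n_rounds):
        model = RandomForestRegressor().fit(candidates[tested], y)   # refit surrogate
        untested = [i for i in range(len(candidates)) if i not in tested]
        preds = model.predict(candidates[untested])
        nxt = untested[int(np.argmax(preds))]      # run the most promising experiment next
        tested.append(nxt)
        y.append(measure(candidates[nxt]))
    return tested, y
```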
MIT Professor Neil Gershenfeld on How to Make Anything (Almost)
Computing With Chemicals Makes Faster, Leaner AI
A device that draws inspiration from batteries now appears surprisingly well suited to run artificial neural networks. Called electrochemical RAM (ECRAM), it is giving traditional transistor-based AI an unexpected run for its money—and is quickly moving toward the head of the pack in the race to develop the perfect artificial synapse. Researchers recently reported a string of advances at this week’s IEEE International Electron Device Meeting (IEDM 2022) and elsewhere, including ECRAM devices that use less energy, hold memory longer, and take up less space.
A commercial ECRAM chip that accelerates AI training is still some distance away. The devices can now be made of foundry-friendly materials, but that’s only part of the story, says John Rozen, program director at the IBM Research AI Hardware Center. “A critical focus of the community should be to address integration issues to enable ECRAM devices to be coupled with front-end transistor logic monolithically on the same wafer, so that we can build demonstrators at scale and establish if it is indeed a viable technology.”
Machine learning facilitates “turbulence tracking” in fusion reactors
A multidisciplinary team of researchers is now bringing tools and insights from machine learning to aid this effort. Scientists from MIT and elsewhere have used computer-vision models to identify and track turbulent structures that appear under the conditions needed to facilitate fusion reactions.
Monitoring the formation and movements of these structures, called filaments or “blobs,” is important for understanding the heat and particle flows exiting from the reacting fuel, which ultimately determines the engineering requirements for the reactor walls to meet those flows. However, scientists typically study blobs using averaging techniques, which trade details of individual structures for aggregate statistics. Tracking individual blobs requires marking them manually in video data.
The researchers built a synthetic video dataset of plasma turbulence to make this process more effective and efficient. They used it to train four computer vision models, each of which identifies and tracks blobs. They trained the models to pinpoint blobs in the same ways that humans would.
When the researchers tested the trained models using real video clips, the models could identify blobs with high accuracy — more than 80 percent in some cases. The models were also able to effectively estimate the size of blobs and the speeds at which they moved.
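The task the models perform can be pictured with a classical stand-in: threshold a frame, find connected components, and report each component's position and size. The paper's trained computer-vision models replace this fragile pipeline, but the sketch shows what "identify blobs and estimate their size" means operationally:

```python
import numpy as np
from scipy import ndimage

frame = np.random.rand(64, 64)                    # stand-in for one turbulence frame
mask = frame > 0.95                               # crude brightness threshold
labels, n = ndimage.label(mask)                   # connected components = candidate blobs
centers = ndimage.center_of_mass(frame, labels, range(1, n + 1))
sizes = ndimage.sum(mask, labels, range(1, n + 1))  # pixel count per blob
```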
Using artificial intelligence to control digital manufacturing
MIT researchers have now used artificial intelligence to streamline this procedure. They developed a machine-learning system that uses computer vision to watch the manufacturing process and then correct errors in how it handles the material in real-time. They used simulations to teach a neural network how to adjust printing parameters to minimize error, and then applied that controller to a real 3D printer. Their system printed objects more accurately than all the other 3D printing controllers they compared it to.
The work avoids the prohibitively expensive process of printing thousands or millions of real objects to train the neural network. And it could enable engineers to more easily incorporate novel materials into their prints, which could help them develop objects with special electrical or chemical properties. It could also help technicians make adjustments to the printing process on-the-fly if material or environmental conditions change unexpectedly.
Robots learn how to shape Play-Doh
Simple, Cheap, and Portable: A Filter-Free Desalination System for a Thirsty World
A group of scientists from MIT has developed just such a portable desalination unit; it’s the size of a medium suitcase and weighs less than 10 kilograms. The unit’s one-button operation requires no technical knowledge. What’s more, it has a completely filter-free design. Unlike existing portable desalination systems based on reverse osmosis, the MIT team’s prototype does not need any high-pressure pumping or maintenance by technicians.
At Amazon Robotics, simulation gains traction
“To develop complex robotic manipulation systems, we need both visual realism and accurate physics,” says Marchese. “There aren’t many simulators that can do both. Moreover, where we can, we need to preserve and exploit structure in the governing equations — this helps us analyze and control the robotic systems we build.”
Drake, an open-source toolbox for modeling and optimizing robots and their control system, brings together several desirable elements for online simulation. The first is a robust multibody dynamics engine optimized for simulating robotic devices. The second is a systems framework that lets Amazon scientists write custom models and compose these into complex systems that represent actual robots. The third is what he calls a “buffet of well-tested solvers” that resolve numerical optimizations at the core of Amazon’s models, sometimes as often as every time step of the simulation. The last is its robust contact solver, which calculates the forces that occur when rigid-body items interact with one another in a simulation.
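A minimal pydrake skeleton shows the systems framework in action; a production-scale model would load robot descriptions through Drake's Parser and compose many custom systems into the diagram:

```python
from pydrake.all import AddMultibodyPlantSceneGraph, DiagramBuilder, Simulator

builder = DiagramBuilder()
# multibody dynamics engine plus geometry engine, wired together as systems
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-3)
plant.Finalize()              # in real use, URDF/SDF models are added before this
diagram = builder.Build()     # compose all systems into one simulatable diagram

simulator = Simulator(diagram)
simulator.AdvanceTo(1.0)      # advance simulated time by one second
```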
Neuro-symbolic AI could provide machines with common sense
Among the solutions being explored to overcome the barriers of AI is the idea of neuro-symbolic systems that bring together the best of different branches of computer science. In a talk at the IBM Neuro-Symbolic AI Workshop, Joshua Tenenbaum, professor of computational cognitive science at the Massachusetts Institute of Technology, explained how neuro-symbolic systems can help to address some of the key problems of current AI systems.
“We’re trying to bring together the power of symbolic languages for knowledge representation and reasoning as well as neural networks and the things that they’re good at, but also with the idea of probabilistic inference, especially Bayesian inference or inverse inference in a causal model for reasoning backwards from the things we can observe to the things we want to infer, like the underlying physics of the world, or the mental states of agents,” Tenenbaum says.
There have been several attempts to use pure deep learning for object position and pose detection, but their accuracy is low. In a joint project, MIT and IBM created “3D Scene Perception via Probabilistic Programming” (3DP3), a system that resolves many of the errors that pure deep learning systems fall into.
Real-world robotic-manipulation system
So the next phase of the project was to teach the robot to use video feedback to adjust trajectories on the fly. Until now, Tedrake’s team had been using machine learning only for the robot’s perceptual system; they’d designed the control algorithms using traditional control-theoretical optimization. But now they switched to machine learning for controller design, too.
To train the controller model, Tedrake’s group used data from demonstrations in which one of the lab members teleoperated the robotic arm while other members knocked the target object around, so that its position and orientation changed. During training, the model took as input sensor data from the demonstrations and tried to predict the teleoperator’s control signals.
This requires a combination of machine learning and the more traditional, control-theoretical analysis that Tedrake’s group has specialized in. From data, the machine learning model learns vector representations of both the input and the control signal, but hand-tooled algorithms constrain the representation space to optimize the control signal selection. “It’s basically turning it back into a planning and control problem, but in the feature space that was learned,” Tedrake explains.
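In skeletal form, the training described above resembles behavior cloning on teleoperation logs. Dimensions and data here are placeholders, and the actual system learns and constrains vector representations rather than regressing on raw states:

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 64, 7          # e.g. sensor features in, 7-DoF arm command out (illustrative)
policy = nn.Sequential(nn.Linear(OBS_DIM, 256), nn.ReLU(), nn.Linear(256, ACT_DIM))
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

# stand-in for logged demonstrations: (sensor observation, teleoperator command) pairs
obs = torch.randn(1024, OBS_DIM)
act = torch.randn(1024, ACT_DIM)

for epoch in range(10):
    pred = policy(obs)                             # predict the operator's control signal
    loss = nn.functional.mse_loss(pred, act)       # imitate the demonstrations
    opt.zero_grad(); loss.backward(); opt.step()
```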
Toward smart production: Machine intelligence in business operations
Our research looked at five different ways that companies are using data and analytics to improve the speed, agility, and performance of operational decision making. This evolution of digital maturity begins with simple tools, such as dashboards to aid human decision making, and ends with true MI, machines that can adjust their own performance autonomously based on historical and real-time data.
Machine Learning Improves Fusion Modeling
If researchers hope to control fusion for energy production, they need a better understanding of the turbulent motion of ions and electrons in plasmas moving through fusion reactors. The magnetic field lines of the toroidal devices known as tokamaks guide the plasma particles; the intent is to confine them long enough to produce significant net energy gains, a challenge given the extraordinarily high temperatures and small spaces involved.
In a couple of recent publications, MIT researchers have begun to directly test the accuracy of this reduced model by combining physics with machine learning. According to MIT’s researchers, the model examines the dynamic relationship between physical variables such as density, electric potential, and temperature, and quantities such as the turbulent electric field and electron pressure. The researchers discovered that the turbulent electric fields associated with pressure fluctuations predicted by the reduced fluid model are compatible with high-fidelity gyrokinetic predictions in plasmas relevant to existing fusion devices.
Tiny machine learning design alleviates a bottleneck in memory usage on internet-of-things devices
Researchers are working to reduce the size and complexity of the devices that these algorithms can run on, all the way down to a microcontroller unit (MCU) that’s found in billions of internet-of-things (IoT) devices. An MCU is a memory-limited minicomputer housed in a compact integrated circuit that lacks an operating system and runs simple commands. These relatively cheap edge devices require low power, computing, and bandwidth, and offer many opportunities to inject AI technology to expand their utility, increase privacy, and democratize their use — a field called TinyML.
Teaching Robots Dexterous Hand Manipulation
Roboat III: A Robotic Boat Transportation System
Machine-learning system accelerates discovery of new materials for 3D printing
The growing popularity of 3D printing for manufacturing all sorts of items, from customized medical devices to affordable homes, has created more demand for new 3D printing materials designed for very specific uses.
A material developer selects a few ingredients, inputs details on their chemical compositions into the algorithm, and defines the mechanical properties the new material should have. Then the algorithm increases and decreases the amounts of those components (like turning knobs on an amplifier) and checks how each formula affects the material’s properties, before arriving at the ideal combination.
The researchers have created a free, open-source materials optimization platform called AutoOED that incorporates the same optimization algorithm. AutoOED is a full software package that also allows researchers to conduct their own optimization.
Using blockchain technology to protect robots
The use of blockchain technology as a communication tool for a team of robots could provide security and safeguard against deception, according to a study by researchers at MIT and Polytechnic University of Madrid. The research may also have applications in cities where multi-robot systems of self-driving cars are delivering goods and moving people across town.
A blockchain offers a tamper-proof record of all transactions — in this case, the messages issued by robot team leaders — so follower robots can eventually identify inconsistencies in the information trail. Leaders use tokens to signal movements and add transactions to the chain, and forfeit their tokens when they are caught in a lie, so this transaction-based communications system limits the number of lies a hacked robot could spread, according to Eduardo Castelló, a Marie Curie Fellow in the MIT Media Lab and lead author of the paper.
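A minimal tamper-evident chain conveys the core mechanism, though the paper's protocol, with its tokens and leader messages, is richer. Everything below is an illustrative sketch:

```python
import hashlib
import json

def add_block(chain, message):
    """Append a leader message; each block commits to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"msg": message, "prev": prev_hash}
    body = json.dumps({"msg": block["msg"], "prev": block["prev"]}, sort_keys=True)
    block["hash"] = hashlib.sha256(body.encode()).hexdigest()
    chain.append(block)

def verify(chain):
    """Follower robots can detect any retroactive tampering with the record."""
    for i, b in enumerate(chain):
        body = json.dumps({"msg": b["msg"], "prev": b["prev"]}, sort_keys=True)
        if b["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, "leader A: move to waypoint 3")
add_block(chain, "leader A: move to waypoint 7")
chain[0]["msg"] = "leader A: move to waypoint 9"   # a lie rewritten after the fact
print(verify(chain))                               # False: tampering is detected
```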
Giving robots better moves
At the core of the RightHand Robotics solution is the idea of using machine vision and intelligent grippers to make piece-picking robots more adaptable. The combination also limits the amount of training needed to run the robots, equipping each machine with what the company equates to hand-eye coordination.
RightHand Robotics also utilizes an end-of-arm tool that combines suction with novel underactuated fingers, which Odhner says gives the robots more flexibility than robots relying solely on suction cups or simple pinching grippers. “Sometimes it actually helps you to have passive degrees of freedom in your hand, passive motions that it can make and can’t actively control,” Odhner says of the robots. “Very often those simplify the control task. They take problems from being heavily over-constrained and make them tractable to run through a motion planning algorithm.”
The data the robots collect are also used to improve reliability over time and shed light on warehouse operations for customers.
Classify This Robot-Woven Sneaker With 3D-Printed Soles as 'Footware'
For athletes trying to run fast, the proper shoe can be essential to achieving peak performance. For athletes trying to run as fast as humanly possible, a runner’s shoe can also become a work of individually customized engineering.
This is why Adidas has married 3D printing with robotic automation in a mass-market footwear project it calls Futurecraft.Strung, expected to be available for purchase as soon as later this year. Using a customized, 3D-printed sole, a Futurecraft.Strung manufacturing robot can place some 2,000 threads from up to 10 different sneaker yarns in one upper section of the shoe.
Using tactile-based reinforcement learning for insertion tasks
MERL and MIT researchers submitted a paper entitled “Tactile-RL for Insertion: Generalization to Objects of Unknown Geometry” to the IEEE International Conference on Robotics and Automation (ICRA). In it, reinforcement learning enables a robot arm, equipped with a parallel-jaw gripper that has tactile sensing arrays on both fingers, to insert differently shaped novel objects into a corresponding hole with an overall average success rate of 85% within three to four tries.