Google

Canvas Category Software : Hyperscaler : General

Website | Blog | LinkedIn | Video

Primary Location Mountain View, California, United States

Financial Status NASDAQ: GOOG

Our mission is to organize the world’s information and make it universally accessible and useful.

Assembly Line

Toyota shifts into overdrive: Developing an AI platform for enhanced manufacturing efficiency

📅 Date:

🔖 Topics: Partnership, IT OT Convergence

🏢 Organizations: Toyota, Google


Our goal was to build an AI platform that enabled factory-floor employees, regardless of their AI expertise, to create machine learning models with ease. This would facilitate the automation of manual, labor-intensive tasks, freeing up human workers to focus on higher-value activities such as process optimization, AI implementation in other production areas, and data-driven decision-making. When we completed the implementation of the AI platform earlier this year, we found it could save us as many as 10,000 hours of mundane work annually through manufacturing efficiency and process optimization.

The speed of build and processing was also a big appeal for us. In particular, Google Kubernetes Engine (GKE) with its Autopilot mode and Image Streaming provides flexibility and speed, allowing us to improve cost-effectiveness and reduce operational burden. We measured the communication speed of containerization during the system evaluation process and found that Google Cloud scaled from zero four times faster than other existing services. The speed of communication and processing is extremely important, as we use up to 10,000 images when creating a learning model. When we first started developing AI technology in-house, we struggled with flexible system scaling and operations. In this regard, too, Google Cloud was the ideal choice.

Currently, the development team compiles work logs and feedback from the manufacturing floor, and we believe the day will soon come when we start utilizing generative AI. For example, the team is considering using AI to create images for testing machine learning models during the production preparation stage, which has been challenging due to a lack of data. In addition, we are considering using Gemini Code Assist to improve the developer experience, and using Gemini with retrieval-augmented generation (RAG) over past knowledge to implement a recommendation feature.

Read more at Google Cloud Blog

Meet Willow, our state-of-the-art quantum chip

📅 Date:

✍️ Author: Hartmut Neven

🔖 Topics: Quantum Computing

🏭 Vertical: Semiconductor

🏢 Organizations: Google


Published in Nature, our results show that the more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes. We tested ever-larger arrays of physical qubits, scaling up from a grid of 3x3 encoded qubits, to a grid of 5x5, to a grid of 7x7 — and each time, using our latest advances in quantum error correction, we were able to cut the error rate in half. In other words, we achieved an exponential reduction in the error rate. This historic accomplishment is known in the field as “below threshold” — being able to drive errors down while scaling up the number of qubits. You must demonstrate being below threshold to show real progress on error correction, and this has been an outstanding challenge since quantum error correction was introduced by Peter Shor in 1995.

Read more at Google Blog

Renault Group Acquires a Supply Chain Control Tower to Manage Transportation in Europe and Beyond

📅 Date:

🔖 Topics: Partnership, Supply Chain Control Tower

🏢 Organizations: Renault, Shippeo, Google


In 2020, Renault teamed up with Shippeo, a provider of software for multimodal transportation visibility, drawing on Google’s artificial intelligence models, to create the first control tower for the automotive industry in Europe. The application employs AI modules fed by real-time traceability data from logistics providers and production stock, along with other data sources. The resulting support tool gives operational teams end-to-end coverage of operations on a global basis.

By the first half of 2021, 80 of Renault’s carriers had connected to Shippeo’s transportation visibility platform. The first version of the control tower for inbound deliveries went live in October of 2022, covering parts transportation by truck throughout Europe, the Middle East and North Africa. Outbound carriers were added shortly thereafter. As of January 2023, Renault had also integrated the tracking of 100% of its maritime transport services.

The control tower allows for a real-time link between parts transportation and vehicle production. Drawing on AI and machine learning from Google, the system calculates accurate estimated times of arrival, accounting for variables such as weather, strikes, border-crossing delays and traffic levels. The analysis helps it to detect potential disruptions early on, sharpening Renault’s crisis-management strategies and making possible mitigating efforts such as truck rerouting and acceleration, and transshipment of critical parts.
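The ETA logic described above can be caricatured as a baseline transit time plus expected delay terms, with an alert when the slip is large enough to justify mitigation. This is only an illustrative sketch: the factor names, magnitudes, and threshold below are assumptions, not Shippeo's actual model, which applies machine learning to live data.

```python
# Toy ETA adjustment: baseline transit time plus expected delays from
# live signals such as weather, border crossings, and traffic.
# All names, values, and the mitigation threshold are illustrative.

def estimated_arrival_hours(base_transit_hours, delays):
    """Sum the baseline transit time and expected delays (all in hours)."""
    return base_transit_hours + sum(delays.values())

delays = {
    "weather": 1.5,
    "border_crossing": 2.0,
    "traffic": 0.5,
}
eta = estimated_arrival_hours(30.0, delays)
print(f"ETA: {eta:.1f} h")

# Flag the shipment for rerouting or transshipment review when the
# slip against the baseline exceeds a tolerance (assumed 3 hours).
needs_mitigation = eta - 30.0 > 3.0
print("trigger mitigation review:", needs_mitigation)
```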

According to the automaker, the Shippeo application has led to “remarkable enhancements” in service levels for the transportation of parts from suppliers to the plants, while also improving safety and fostering global collaboration with supply chain partners. Tracking accuracy is based on a detailed analysis of GPS data and weekly coordination with the automaker’s 10 major carriers in the European region. At the factory, the tool has cut the average truck waiting time from 90 minutes to 75 minutes, Renault added.

Read more at Supply Chain Brain

The Inside Story of Google’s Quiet Nuclear Quest

📅 Date:

✍️ Author: Ross Koningstein

🔖 Topics: Machine Learning

🏢 Organizations: Google, TAE Technologies, DeepMind, Commonwealth Fusion Systems


The first research effort came from a proposal by my colleague Ted Baltz, a senior Google engineer, who wanted to bring the company’s computer-science expertise to fusion experiments at TAE Technologies in Foothill Ranch, Calif. He believed machine learning could improve plasma performance for fusion.

In 2014, TAE was experimenting with a warehouse-size plasma machine called C-2U. This machine heated hydrogen gas to over a million degrees Celsius and created two rings of plasma, which were slammed together at a speed of more than 960,000 kilometers per hour. Powerful magnets compressed the combined plasma rings, with the goal of fusing the hydrogen and producing energy. The challenge for TAE, as for all other companies trying to build commercial fusion reactors, was how to heat, contain, and control the plasma long enough to achieve real energy output, without damaging its machine.

A nice side benefit from our multiyear collaboration with TAE was that people within the company—engineers and executives—became knowledgeable about fusion. And that resulted in Alphabet investing in two fusion companies in 2021, TAE and Commonwealth Fusion Systems. By then, my colleagues at Google DeepMind were also using deep reinforcement learning for plasma control within tokamak fusion reactors.

Read more at IEEE Spectrum

Introducing Waymo's Research on an End-to-End Multimodal Model for Autonomous Driving

📅 Date:

🔖 Topics: Foundation Model, Autonomous Driving

🏢 Organizations: Waymo, Google


At Waymo, we have been at the forefront of AI and ML in autonomous driving for over 15 years, and are continuously contributing to advancing research in the field. Now, we are sharing our latest research paper on an End-to-End Multimodal Model for Autonomous Driving (EMMA).

Powered by Gemini, a multimodal large language model developed by Google, EMMA employs a unified, end-to-end trained model to generate future trajectories for autonomous vehicles directly from sensor data. Trained and fine-tuned specifically for autonomous driving, EMMA leverages Gemini’s extensive world knowledge to better understand complex scenarios on the road.

Our research demonstrates how multimodal models, such as Gemini, can be applied to autonomous driving and explores pros and cons of the pure end-to-end approach. It highlights the benefit of incorporating multimodal world knowledge, even when the model is fine-tuned for autonomous driving tasks that require good spatial understanding and reasoning skills. Notably, EMMA demonstrates positive task transfer across several key autonomous driving tasks: training it jointly on planner trajectory prediction, object detection, and road graph understanding leads to improved performance compared to training individual models for each task. This suggests a promising avenue of future research, where even more core autonomous driving tasks could be combined in a similar, scaled-up setup.
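The positive task transfer described above comes from optimizing a single objective across several driving tasks at once. The following is only a minimal sketch of that idea: the task names are taken from the paragraph, but the weights, loss values, and helper function are purely illustrative assumptions; the real EMMA fine-tunes Gemini end to end.

```python
# Toy illustration of joint multi-task training: per-task losses are
# combined into one weighted objective, so a single optimization step
# updates the shared model for every task at once.

def joint_loss(task_losses, weights=None):
    """Combine per-task losses into one scalar training objective."""
    if weights is None:
        weights = {task: 1.0 for task in task_losses}
    return sum(weights[task] * loss for task, loss in task_losses.items())

# Hypothetical per-task losses for one training batch.
batch_losses = {
    "trajectory_prediction": 0.42,
    "object_detection": 0.31,
    "road_graph_understanding": 0.18,
}
total = joint_loss(batch_losses)
print(f"joint objective: {total:.2f}")
```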

Read more at Waymo Blog

Honeywell and Google Cloud to Accelerate Autonomous Operations with AI Agents for the Industrial Sector

📅 Date:

🔖 Topics: Partnership

🏢 Organizations: Honeywell, Google


Honeywell (NASDAQ: HON) and Google Cloud announced a unique collaboration connecting artificial intelligence (AI) agents with assets, people and processes to accelerate safer, autonomous operations for the industrial sector.

This partnership will bring together the multimodality and natural language capabilities of Gemini on Vertex AI – Google Cloud’s AI platform – and the massive data set on Honeywell Forge, a leading Internet of Things (IoT) platform for industrials. This will unleash easy-to-understand, enterprise-wide insights across a multitude of use cases. Honeywell’s customers across the industrial sector will benefit from opportunities to reduce maintenance costs, increase operational productivity and upskill employees. The first solutions built with Google Cloud AI will be available to Honeywell’s customers in 2025.

Read more at PR Newswire

Supply Chain Automation Startup BackOps.ai Secures $2M in Pre-Seed Funding led by Gradient

📅 Date:

🔖 Topics: Funding Event

🏢 Organizations: BackOps AI, Google


BackOps.ai, an AI-driven platform that automates supply chain logistics and acts as a central nervous system for operations, announced a $2 million pre-seed round led by Gradient, Google’s early-stage AI fund, with participation from 10VC.

BackOps.ai provides supply chain customers with a frictionless way to adopt AI into existing workflows—providing automation learned from their data in a way that previously required a backend engineer. Delivering operational efficiency and savings at scale, BackOps.ai’s simple interface and plain-text commands allow companies to harness the full power of existing global supply chain tech stacks.

Read more at PR Newswire-PRWeb

Quantum error correction below the surface code threshold

📅 Date:

🔖 Topics: Quantum Computing

🏢 Organizations: Google


Quantum error correction provides a path to reach practical quantum computing by combining multiple physical qubits into a logical qubit, where the logical error rate is suppressed exponentially as more qubits are added. However, this exponential suppression only occurs if the physical error rate is below a critical threshold. In this work, we present two surface code memories operating below this threshold: a distance-7 code and a distance-5 code integrated with a real-time decoder. The logical error rate of our larger quantum memory is suppressed by a factor of Λ = 2.14 ± 0.02 when increasing the code distance by two, culminating in a 101-qubit distance-7 code with 0.143% ± 0.003% error per cycle of error correction. This logical memory is also beyond break-even, exceeding its best physical qubit’s lifetime by a factor of 2.4 ± 0.3. We maintain below-threshold performance when decoding in real time, achieving an average decoder latency of 63 μs at distance-5 up to a million cycles, with a cycle time of 1.1 μs. To probe the limits of our error-correction performance, we run repetition codes up to distance-29 and find that logical performance is limited by rare correlated error events occurring approximately once every hour, or 3 × 10⁹ cycles. Our results present device performance that, if scaled, could realize the operational requirements of large-scale fault-tolerant quantum algorithms.
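The abstract's suppression factor turns into a quick back-of-the-envelope projection: every distance-2 increase divides the logical error rate per cycle by Λ. The sketch below uses the reported numbers (Λ = 2.14, 0.143% per cycle at distance 7); the extrapolation to higher distances is our own arithmetic, not a result from the paper.

```python
# Exponential error suppression implied by the paper's abstract:
# increasing the code distance d by two divides the logical error
# rate per cycle by Lambda = 2.14.
LAMBDA = 2.14        # suppression factor per distance-2 step
EPS_D7 = 0.00143     # measured: 0.143% error per cycle at distance 7

def logical_error_per_cycle(d, eps_ref=EPS_D7, d_ref=7, lam=LAMBDA):
    """Project the logical error rate per cycle at odd code distance d."""
    steps = (d - d_ref) / 2   # number of distance-2 increments
    return eps_ref / lam ** steps

print(f"d=5:  {logical_error_per_cycle(5):.4%}")   # Lambda times worse than d=7
print(f"d=7:  {logical_error_per_cycle(7):.4%}")   # the measured reference point
print(f"d=11: {logical_error_per_cycle(11):.5%}")  # naive extrapolation
```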

Read more at arXiv

Alphabet to invest $5B in Waymo, its self-driving vehicle unit

📅 Date:

🔖 Topics: Funding Event

🏢 Organizations: Waymo, Google


Google parent company Alphabet plans to invest $5 billion in Waymo, its unit for autonomous vehicles, over the next few years. Alphabet CFO Ruth Porat announced the news during the company’s quarterly financial results call.

Prior to this, Waymo raised $2.25 billion in its first external funding round in 2020. The company raised another $2.5 billion in 2021 in a round that included funding from Andreessen Horowitz, AutoNation, Canada Pension Plan Investment Board, Fidelity Management & Research Company and more.

The new funding round will enable Waymo to continue to build the world’s leading autonomous driving company, she said.

Read more at TFN

Again raises $43M from Google Ventures and others to turn CO2 into green chemicals

📅 Date:

🔖 Topics: Funding Event

🏢 Organizations: Again, Google, HV Capital


Again, a Danish climate tech startup that turns carbon dioxide into valuable chemicals, has raised $43 million in Series A funding. The round was co-led by Google Ventures (which has invested in ClimateX and StatusPRO) and HV Capital. Kompas VC, EIFO – Denmark’s Export and Investment Fund, ACME Capital, and Atlantic Labs also participated. With this round, Again’s total funding comes to $100 million, including a $47 million Horizon Europe grant for the PyroCO2 project.

The new funding will be used to build additional facilities to combat the climate crisis at scale. It will be used to build additional production capacity to deliver green chemicals to customers, and R&D to expand Again’s product portfolio and bring more molecules to market.

Read more at TFN

How AlloyDB transformed Bayer’s data operations

📅 Date:

🔖 Topics: Data architecture

🏢 Organizations: Bayer, AlloyDB, Google


Migrating to AlloyDB has been transformative for our business. In our previous PostgreSQL setup, the primary writer was responsible for both write operations and replicating those changes to reader nodes. The anticipated increase in write traffic and reader count would have overwhelmed this node, leading to potential bottlenecks and increased replication lag. AlloyDB’s architecture, which utilizes a single source of truth for all nodes, significantly reduced the impact of scaling read traffic. After migrating, we saw a dramatic improvement in performance, ensuring our ability to meet growing demands and maintain consistently low replication delay. In parallel load tests, a smaller AlloyDB instance reduced response times by over 50% on average and increased throughput by 5x compared to our previous PostgreSQL solution.
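Comparisons like the "parallel load tests" above come from harnesses that fire concurrent requests and record latency and throughput. The sketch below shows the general shape of such a harness; the `run_query` stub, request counts, and concurrency level are illustrative assumptions, not Bayer's actual test setup.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Generic parallel load-test harness. `run_query` stands in for a real
# database call via a client driver; here it only sleeps to simulate
# network plus execution time.

def run_query():
    time.sleep(0.001)
    return 1

def load_test(num_requests=200, concurrency=20):
    """Return (average latency in seconds, throughput in requests/sec)."""
    latencies = []

    def timed():
        t0 = time.perf_counter()
        run_query()
        latencies.append(time.perf_counter() - t0)

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(num_requests):
            pool.submit(timed)
    elapsed = time.perf_counter() - start   # pool waits for all tasks on exit
    return sum(latencies) / len(latencies), num_requests / elapsed

avg_latency, throughput = load_test()
print(f"avg latency: {avg_latency * 1000:.1f} ms, throughput: {throughput:.0f} req/s")
```

Running the same harness against two database endpoints under identical load is what makes claims like "50% lower response times, 5x throughput" comparable.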

By migrating to AlloyDB, we’ve ensured that our business growth won’t be hindered by database limitations, allowing us to focus on innovation. The true test of our migration came during our first peak harvest season, a time where performance is critical for product decision timelines. Due to agriculture’s seasonal nature, a delay of just a few days can postpone a product launch by an entire year. Our customers were understandably nervous, but thanks to Google Cloud and AlloyDB, the harvest season went as smoothly as we could have hoped for.

To support our data strategy, we have adopted a consistent architecture across our Google Cloud projects. For a typical project, the stack consists of Google Kubernetes Engine (GKE) hosted pods and pipelines for publishing events and analytics data. While Bayer uses Apache Kafka across teams and cloud providers for data streaming, individual teams regularly use Pub/Sub internally for messaging and event-driven architectures. Data for analytics and reporting is generally stored in BigQuery, with custom processes for materialization once it lands. By using cross-project BigQuery datasets, we are able to work with a larger, real-time user group and enhance our operational capabilities.

Read more at Google Cloud Blog

Heuristics on the high seas: Mathematical optimization for cargo ships

📅 Date:

✍️ Authors: Virgile Galle, Tom Tangl

🔖 Topics: Route Optimization

🏢 Organizations: Google


Google’s Operations Research team is proud to announce the Shipping Network Design API, which implements a new solution for finding the most efficient shipping routes. Our approach scales better, enabling solutions to world-scale supply chain problems, while being faster than any known previous attempt. It is able to double the profit of a container shipper, deliver 13% more containers, and do so with 15% fewer vessels. Read on to see how we did it.

There are three components to the Liner Shipping Network Design and Scheduling Problem (LSNDSP). Network design determines the order in which vessels visit ports, network scheduling determines the times they arrive and leave, and container routing chooses the journey that containers take from origin to destination. Every container shipping company needs to solve all three challenges, but they are typically solved sequentially. Solving them all simultaneously is more difficult but is also more likely to discover better solutions.

Solutions to network design create service lines that a small set of vessels follow: for instance, sailing between eastern Asia, through the Suez canal, and to southern Europe. These service lines are published with dates, so that shippers can know when and where to have their containers ready at port.
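To make the container-routing component concrete, here is a toy greedy sketch: given a service line with a shared leg capacity, accept the most profitable container demands first. Every port name, volume, and profit figure is an illustrative assumption; real LSNDSP solvers optimize design, scheduling, and routing jointly over far larger networks.

```python
# Toy version of the container-routing subproblem: route demands onto
# a service line with limited capacity, most profitable first.

def route_containers(demands, capacity):
    """Greedily accept the most profitable demands up to leg capacity.

    demands: list of (origin, destination, containers, profit_per_container)
    capacity: number of containers the shared leg can carry
    """
    shipped, profit = 0, 0.0
    for origin, dest, count, unit_profit in sorted(
            demands, key=lambda d: d[3], reverse=True):
        take = min(count, capacity - shipped)
        shipped += take
        profit += take * unit_profit
        if shipped == capacity:
            break
    return shipped, profit

demands = [
    ("Shanghai", "Rotterdam", 900, 520.0),
    ("Busan", "Genoa", 600, 610.0),
    ("Singapore", "Valencia", 700, 480.0),
]
print(route_containers(demands, capacity=1500))
```

The greedy rule leaves the Singapore demand unserved once capacity is exhausted; joint optimization over design, schedule, and routing is exactly what lets the real solver find better trade-offs than this.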

Read more at Google Research Blog

Verse™ Secures $20.5 Million in Series A Funding led by GV to Help Organizations Reduce Electricity Costs & Emissions

📅 Date:

🔖 Topics: Funding Event

🏢 Organizations: Verse, Google


Verse, whose software enables organizations to understand, plan, and manage clean energy, has raised a $20.5M Series A funding round. The investment, led by GV (Google Ventures) with participation from Coatue, CIV, and MCJ Collective, will support Verse as it scales commercial operations and develops new product capabilities to help organizations reduce emissions and lower electricity costs.

Read more at PR Newswire

Unlocking new value in industrial automation with AI

📅 Date:

✍️ Author: Wendy Tan White

🔖 Topics: Foundation Model, Industrial Robot

🏢 Organizations: Intrinsic, NVIDIA, Trumpf, Google


Working with the robotics team at NVIDIA, we have successfully tested NVIDIA robotics platform technologies, including the NVIDIA Isaac Manipulator foundation models for a robot grasping skill, with the Intrinsic platform. This prototype features an industrial application specified by one of our partners and customers, Trumpf Machine Tools. The grasping skill, trained with 100% synthetic data generated by NVIDIA Isaac Sim, can be used to build sophisticated solutions that perform adaptive and versatile object-grasping tasks in both simulation and the real world. Instead of hard-coding specific grippers to grasp specific objects in a certain way, efficient code for a particular gripper and object is auto-generated from the foundation model and synthetic training data to complete the task.

Together with Google DeepMind, we’ve demonstrated some novel and high value methods for robotic programming and orchestration — many of which have practical applications today:

  • Multi-robot motion planning with machine learning
  • Learning from demonstration, applied to two-handed dexterous manipulation
  • Foundation models for perception: enabling a robotic system to understand the next task and the physical objects involved requires a real-time, accurate, and semantic understanding of the environment

Read more at Intrinsic Blog

Google, Microsoft, and Nucor announce a new initiative to aggregate demand to scale the adoption of advanced clean electricity technologies

📅 Date:

🔖 Topics: Partnership

🏢 Organizations: Google, Microsoft, Nucor


Google LLC, Microsoft Corporation, and Nucor Corporation announced they will work together across the electricity ecosystem to develop new business models and aggregate their demand for advanced clean electricity technologies. These models will be designed to accelerate the development of first-of-a-kind (FOAK) and early commercial projects, including advanced nuclear, next-generation geothermal, clean hydrogen, long-duration energy storage (LDES) and others.

The companies will initially focus on proving out the demand aggregation and procurement model through advanced technology pilot projects in the United States. The companies will pilot a project delivery framework focused on three enabling levers for early commercial projects: signing offtake agreements for technologies that are still early on the cost curve, bringing a clear customer voice to policymakers and other stakeholders on broader long-term ecosystem improvements, and developing new enabling tariff structures in partnership with energy providers and utilities.

Read more at PR Newswire

HD Hyundai, Google Cloud team up to accelerate generative AI innovation

📅 Date:

🔖 Topics: Partnership

🏢 Organizations: Hyundai, Google


HD Hyundai and Google Cloud have formed a strategic partnership to use the US firm’s multimodal AI model Gemini, unveiled earlier this month, across the Korean company’s core businesses, including shipbuilding, heavy machinery and energy. Under the partnership, Google Cloud will provide HD Hyundai with enterprise tools such as the Vertex AI platform to develop industry-specific AI applications. Starting in January 2024, HD Hyundai and Google Cloud will develop various AI solutions tailored to industry-specific needs and cultivate AI experts at the Korean conglomerate.

Read more at PR Newswire

Lightmatter Accelerates Growth and Expands Photonic Chip Deployments With $155M in New Funding; Now Valued at $1.2B

📅 Date:

🔖 Topics: Funding Event

🏢 Organizations: Lightmatter, Google


Lightmatter, the leader in photonics, announced today it has raised a $155M Series C-2 led by GV (Google Ventures) and Viking Global Investors, with participation from others. With this round, Lightmatter has raised over $420 million to date and is now valued at over $1.2B. This new financing allows the company to expedite growth to meet the increasing demand for high-performance computing (HPC) from AI innovators. Lightmatter plans to expand its world-class team and office footprint, while accelerating its ability to provide customers increased performance on the most advanced AI workloads.

Lightmatter is developing photonic technologies that reconstruct how chips calculate and communicate, which can be leveraged by the biggest cloud providers, semiconductor companies, and enterprises for their computing needs. The company provides a full stack of photonics-enabled hardware and software solutions that simultaneously reduce power consumption and increase performance. This is essential for highly compute-intensive workloads such as AI, which have grown rapidly to affect every critical industry.

Read more at Business Wire

Xometry Leverages Google Cloud To Accelerate The Digitization Of Manufacturing Globally

📅 Date:

🔖 Topics: Partnership

🏢 Organizations: Xometry, Google


Xometry, the global AI-powered marketplace connecting enterprise buyers with suppliers of manufacturing services, today announced a partnership with Google Cloud that leverages Vertex AI to accelerate the deployment of new auto-quote methods and models within Xometry’s AI-powered Instant Quoting Engine. Vertex AI will help Xometry extend its instant-quoting and fulfillment capabilities to the broadest and most comprehensive set of manufacturing technologies, expanding the markets it serves for custom manufacturing and further advancing the digitization of manufacturing globally.

Read more at Business Wire

Automate plant maintenance using MDE with ABAP SDK for Google Cloud

📅 Date:

✍️ Authors: Manas Srivastava, Devesh Singh

🔖 Topics: Manufacturing Analytics, Cloud Computing, Data Architecture

🏢 Organizations: Google, SAP, Litmus


Analyzing production data at scale for huge datasets is always a challenge, especially when there’s data from multiple production facilities involved with thousands of assets in production pipelines. To help solve this challenge, our Manufacturing Data Engine is designed to help manufacturers manage end-to-end shop floor business processes.

Manufacturing Data Engine (MDE) is a scalable solution that accelerates, simplifies, and enhances the ingestion, processing, contextualization, storage, and usage of manufacturing data for monitoring, analytical, and machine learning use cases. This suite of components can help manufacturers accelerate their transformation with Google Cloud’s analytics and AI capabilities.

Read more at Google Cloud Blog

Broadcom’s transformation journey with Google Cloud

📅 Date:

🔖 Topics: Partnership

🏢 Organizations: Broadcom, Google


Since the migration to Google Cloud, we’ve eliminated 165 software test labs and saved 50% on costs by hosting most of our work on the cloud instead of relying on dedicated hardware running in Broadcom data centers.

Our collaboration with Google Cloud has also enabled Broadcom to deliver new product features faster while keeping products up to date and free of technical debt. This is a major competitive boon. By adopting Google Cloud as a scalable platform for product development, we now deliver rapid elasticity, handling spikes in demand that can reach up to a million requests per second. Equally important, it has helped us keep the platform secure to protect our customers’ workloads and data.

Prior to migrating, Broadcom operated 50-plus data centers globally. The plan was to replace all 50-plus data centers in six months—which we did. It was crucial that we got this right because the workloads were time-sensitive and customer-sensitive, and any glitch could have a huge impact on customers.

Read more at Broadcom Blog

🧠🦾 Google’s Robotic Transformer 2: More Than Meets the Eye

📅 Date:

✍️ Author: Michael Levanduski

🔖 Topics: Transformer Net, Machine Vision, Vision-language-action Model

🏢 Organizations: Google


Google DeepMind’s Robotic Transformer 2 (RT-2) is an evolution of vision language model (VLM) software. Trained on images from the web, RT-2 employs robotics datasets to manage low-level robotics control. Traditionally, VLMs have been used to combine inputs from both visual and natural-language text datasets to accomplish more complex tasks; ChatGPT, of course, is at the forefront of this trend.

Google researchers identified a gap in how current VLMs were being applied in the robotic space. They note that current methods and approaches tend to focus on high-level robotic theory such as strategic state machine models. This leaves a void in the lower-level execution of robotic action, where the majority of control engineers execute work. Thus, Google is attempting to bring the power and benefits of VLMs down into the control engineers’ domain of programming robotics.

Read more at Control Automation

U. S. Steel Aims to Improve Operational Efficiencies and Employee Experiences with Google Cloud’s Generative AI

📅 Date:

🔖 Topics: Generative AI

🏢 Organizations: US Steel, Google


United States Steel Corporation (NYSE: X) (“U. S. Steel”) and Google Cloud today announced a new collaboration to build applications using Google Cloud’s generative artificial intelligence (“gen AI”) technology to drive efficiencies and improve employee experiences in the largest iron ore mine in North America. As a leading manufacturer engaging in gen AI with Google Cloud, U. S. Steel continues to advance its more than 100-year legacy of innovation.

The first gen AI-driven application that U. S. Steel will launch is called MineMind™, which aims to simplify equipment maintenance by providing optimal solutions for mechanical problems, saving time and money, and ultimately improving productivity. Underpinned by Google Cloud AI technology such as Document AI and Vertex AI, MineMind™ is expected not only to improve the maintenance team’s experience by more easily bringing the information they need to their fingertips, but also to save costs through more efficient use of technicians’ time and better-maintained trucks. The initial phase of the launch will begin in September and will impact more than 60 haul trucks at U. S. Steel’s Minnesota Ore Operations facilities, Minntac and Keetac.

Read more at Business Wire

How AI is helping airlines mitigate the climate impact of contrails

🧠🦾 RT-2: New model translates vision and language into action

📅 Date:

🔖 Topics: Robot Arm, Transformer Net, Machine Vision, Vision-language-action Model

🏢 Organizations: Google


High-capacity vision-language models (VLMs) are trained on web-scale datasets, making these systems remarkably good at recognising visual or language patterns and operating across different languages. But for robots to achieve a similar level of competency, they would need to collect robot data, first-hand, across every object, environment, task, and situation.

In our paper, we introduce Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for robotic control, while retaining web-scale capabilities.
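A core trick behind VLA models like RT-2 is representing continuous robot actions as strings of discrete tokens, so a language model can emit them like text. The sketch below shows such an encoding; the 256-bin quantization matches the RT-2 paper, while the action dimensions, ranges, and function names are simplified assumptions for illustration.

```python
# Minimal action tokenization: each continuous action dimension is
# quantized into one of 256 integer bins, which a language model can
# output as tokens; decoding maps bins back to bin-center values.

BINS = 256

def encode_action(values, low=-1.0, high=1.0):
    """Quantize continuous action values into integer token bins."""
    span = high - low
    return [min(BINS - 1, max(0, int((v - low) / span * BINS))) for v in values]

def decode_action(tokens, low=-1.0, high=1.0):
    """Map token bins back to approximate continuous values (bin centers)."""
    span = high - low
    return [low + (t + 0.5) / BINS * span for t in tokens]

action = [0.25, -0.5, 0.0]            # e.g. dx, dy, gripper (illustrative)
tokens = encode_action(action)
print(tokens, "->", [round(v, 3) for v in decode_action(tokens)])
```

The round-trip error is bounded by half a bin width, which is why a few hundred bins per dimension suffice for smooth control.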

Read more at Deepmind Blog

🧠🦾 RoboCat: A self-improving robotic agent

📅 Date:

🔖 Topics: Robot Arm, Transformer Net

🏢 Organizations: Google


RoboCat learns much faster than other state-of-the-art models. It can pick up a new task with as few as 100 demonstrations because it draws from a large and diverse dataset. This capability will help accelerate robotics research, as it reduces the need for human-supervised training, and is an important step towards creating a general-purpose robot.

RoboCat is based on our multimodal model Gato (Spanish for “cat”), which can process language, images, and actions in both simulated and physical environments. We combined Gato’s architecture with a large training dataset of sequences of images and actions of various robot arms solving hundreds of different tasks.

The combination of all this training means the latest RoboCat is based on a dataset of millions of trajectories, from both real and simulated robotic arms, including self-generated data. We used four different types of robots and many robotic arms to collect vision-based data representing the tasks RoboCat would be trained to perform.

Read more at Deepmind Blog

SAP and Google Cloud Expand Partnership to Build the Future of Open Data and AI for Enterprises

📅 Date:

🔖 Topics: Partnership

🏢 Organizations: SAP, Google


SAP SE (NYSE: SAP) and Google Cloud announced an extensive expansion of their partnership, introducing a comprehensive open data offering designed to simplify data landscapes and unleash the power of business data.

The offering enables customers to build an end-to-end data cloud that brings data from across the enterprise landscape using the SAP Datasphere solution together with Google’s data cloud, so businesses can view their entire data estates in real time and maximize value from their Google Cloud and SAP software investments.

Read more at SAP News

FogLAMP on Google Cloud

🦾♻️ Robotic deep RL at scale: Sorting waste and recyclables with a fleet of robots

📅 Date:

✍️ Authors: Sergey Levine, Alexander Herzog

🔖 Topics: Recycling, Robot Picking, Reinforcement Learning

🏢 Organizations: Google


In “Deep RL at Scale: Sorting Waste in Office Buildings with a Fleet of Mobile Manipulators”, we discuss how we studied this problem through a recent large-scale experiment, where we deployed a fleet of 23 RL-enabled robots over two years in Google office buildings to sort waste and recycling. Our robotic system combines scalable deep RL from real-world data with bootstrapping from training in simulation and auxiliary object perception inputs to boost generalization, while retaining the benefits of end-to-end training, which we validate with 4,800 evaluation trials across 240 waste station configurations.

Read more at Google AI Blog

How BigQuery helps Leverege deliver business-critical enterprise IoT solutions at scale

📅 Date:

🔖 Topics: IIoT, Cloud Computing

🏢 Organizations: Google, Leverege


Leverege IoT Stack is deployed with Google Kubernetes Engine (GKE), a fully managed Kubernetes service for managing collections of microservices. Leverege uses Google Cloud Pub/Sub, a fully managed service, as the primary means of message routing for data ingestion, and Google Firebase for real-time data and user interface hosting. For long-term data storage, historical querying and analysis, and real-time insights, Leverege relies on BigQuery.

BigQuery allows Leverege to record the full volume of historical data at a low storage cost, while only paying to access small segments of data on-demand using table partitioning. For each of these examples, historical analysis using BigQuery can help identify pain points and improve operational efficiencies. They can also do so with both public datasets and private datasets. This means an auto wholesaler can expose data for specific vehicles, but not the entire dataset (i.e., no API queries). Likewise, a boat engine manufacturer can make subsets of data available to different end users.
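The partition-based access pattern described above can be sketched as a simple query builder: restricting a query to a date range of partitions means BigQuery bills for only that slice of the historical table. The table and column names below are hypothetical, not Leverege's actual schema.

```python
from datetime import date

def partitioned_history_query(table: str, start: date, end: date) -> str:
    """Build a query that scans only the date partitions in [start, end],
    so BigQuery bills for a small segment of the stored history rather
    than the full volume of telemetry."""
    return (
        f"SELECT device_id, AVG(engine_temp_c) AS avg_temp\n"
        f"FROM `{table}`\n"
        f"WHERE _PARTITIONDATE BETWEEN '{start.isoformat()}' AND '{end.isoformat()}'\n"
        f"GROUP BY device_id"
    )

sql = partitioned_history_query("fleet.telemetry", date(2023, 1, 1), date(2023, 1, 7))
```

`_PARTITIONDATE` is BigQuery's pseudo-column for ingestion-time partitioned tables; tables partitioned on an explicit column would filter on that column instead.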

Read more at Google Cloud Blog

Building a Visual Quality Control solution in Google Cloud using Vertex AI

📅 Date:

✍️ Authors: Oleg Smirnov, Marko Nikolic, Ilya Katsov

🔖 Topics: Computer Vision, Quality Assurance

🏢 Organizations: Grid Dynamics, Google


In this blog post, we consider the problem of defect detection in packages on assembly and sorting lines. More specifically, we present a real-time visual quality control solution that is capable of tracking multiple objects (packages) on a line, analyzing each object, and evaluating the probability of a defect or damaged parcel. The solution was implemented using the Google Cloud Platform (GCP) Vertex AI platform and GCP AutoML services, and we have made the reference implementation available in our Git repository. This implementation can be used as a starting point for developing custom visual quality control pipelines.

Read more at Grid Dynamics Blog

⭐ Hunting For Hardware-Related Errors In Data Centers

📅 Date:

✍️ Author: Anne Meixner

🔖 Topics: Manufacturing Analytics

🏢 Organizations: Google, Meta, Synopsys


The data center computational errors that Google and Meta engineers reported in 2021 have raised concerns regarding an unexpected cause: manufacturing defect levels on the order of 1,000 DPPM. Specific to a single core in a multi-core SoC, these hardware defects are difficult to isolate during data center operations and manufacturing test processes. In fact, these silent data errors (SDEs) can go undetected for months because the precise inputs and local environmental conditions (temperature, noise, voltage, clock frequency) that trigger them have not yet been applied.

For instance, Google engineers noted ‘an innocuous change to a low-level library’ started to give wrong answers for a massive-scale data analysis pipeline. They went on to write, “Deeper investigation revealed that these instructions malfunctioned due to manufacturing defects, in a way that could only be detected by checking the results of these instructions against the expected results; these are ‘silent’ corrupt execution errors, or CEEs.”

Engineers at Google further confirmed their need for internal data, “Our understanding of CEE impacts is primarily empirical. We have observations of the form, ‘This code has miscomputed (or crashed) on that core.’ We can control what code runs on what cores, and we partially control operating conditions (frequency, voltage, temperature). From this, we can identify some mercurial cores. But because we have limited knowledge of the detailed underlying hardware, and no access to the hardware-supported test structures available to chip makers, we cannot infer much about root causes.”
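The result-checking approach the Google engineers describe (run known inputs on each core and compare against expected results) can be sketched in a few lines. The `run_on_core` stand-in below is purely illustrative; real screening runs carefully chosen instruction sequences under controlled frequency, voltage, and temperature.

```python
def find_mercurial_cores(run_on_core, cores, test_inputs, expected):
    """Flag cores whose results for known inputs disagree with expected
    values -- the only way to surface silent corrupt execution errors
    (CEEs), since the hardware raises no fault."""
    suspects = []
    for core in cores:
        if any(run_on_core(core, x) != want for x, want in zip(test_inputs, expected)):
            suspects.append(core)
    return suspects

# Toy stand-in: core 2 silently miscomputes one specific input.
def run_on_core(core, x):
    return x * 3 + (1 if core == 2 and x == 7 else 0)

bad = find_mercurial_cores(run_on_core, range(4), [1, 7, 9], [3, 21, 27])
```

Note that core 2 passes on most inputs; only the specific value 7 exposes the defect, which is why such errors evade conventional manufacturing tests.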

Read more at Semiconductor Engineering

RT-1: Robotics Transformer for Real-World Control at Scale

📅 Date:

✍️ Authors: Keerthana Gopalakrishnan, Kanishka Rao

🔖 Topics: Industrial Robot, Transformer Net, Open Source

🏢 Organizations: Google


Major recent advances in multiple subfields of machine learning (ML) research, such as computer vision and natural language processing, have been enabled by a shared common approach that leverages large, diverse datasets and expressive models that can absorb all of the data effectively. Although there have been various attempts to apply this approach to robotics, robots have not yet leveraged highly-capable models as well as other subfields.

Several factors contribute to this challenge. First, there's the lack of large-scale and diverse robotic data, which limits a model's ability to absorb a broad set of robotic experiences. Data collection is particularly expensive and challenging for robotics because dataset curation requires engineering-heavy autonomous operation, or demonstrations collected using human teleoperation. A second factor is the lack of expressive, scalable models that are fast enough for real-time inference and that can learn from such datasets and generalize effectively.

To address these challenges, we propose the Robotics Transformer 1 (RT-1), a multi-task model that tokenizes robot inputs and output actions (e.g., camera images, task instructions, and motor commands) to enable efficient inference at runtime, which makes real-time control feasible. This model is trained on a large-scale, real-world robotics dataset of 130k episodes that cover 700+ tasks, collected using a fleet of 13 robots from Everyday Robots (EDR) over 17 months. We demonstrate that RT-1 can exhibit significantly improved zero-shot generalization to new tasks, environments and objects compared to prior techniques. Moreover, we carefully evaluate and ablate many of the design choices in the model and training set, analyzing the effects of tokenization, action representation, and dataset composition. Finally, we're open-sourcing the RT-1 code, and hope it will provide a valuable resource for future research on scaling up robot learning.
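A minimal sketch of the action-tokenization step: RT-1 discretizes each continuous action dimension into 256 bins so a Transformer can emit actions as tokens. The binning scheme and function names below are our illustration, not the released RT-1 code.

```python
def tokenize(value, low, high, bins=256):
    """Map a continuous action dimension (e.g. a joint angle or gripper
    position) in [low, high] to one of `bins` discrete tokens."""
    value = min(max(value, low), high)          # clamp to the valid range
    return round((value - low) / (high - low) * (bins - 1))

def detokenize(token, low, high, bins=256):
    """Recover the (quantized) continuous value from a token."""
    return low + token / (bins - 1) * (high - low)
```

With 256 bins over a range of width 2.0, the quantization error is at most about 0.004 per dimension, small enough for motor commands while keeping the output vocabulary tiny.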

Read more at Google AI Blog

Using AI to increase asset utilization and production uptime for manufacturers

📅 Date:

🔖 Topics: Manufacturing Analytics

🏢 Organizations: Google, Litmus


Google Cloud created purpose-built tools and solutions to organize manufacturing data, make it accessible and useful, and help manufacturers to quickly take significant steps on this journey by reducing the time to value. In this post, we will explore a practical example of how manufacturers can use Google Cloud manufacturing solutions to train, deploy and extract value from ML-enabled capabilities to predict asset utilization and maintenance needs. The first step to a successful machine learning project is to unify necessary data in a common repository. For this, we will use Manufacturing Connect, the factory edge platform co-developed with Litmus, to connect to manufacturing assets and stream asset telemetry to Pub/Sub.

The following scenario is based on a hypothetical company, Cymbal Materials. This company is a fictitious discrete manufacturer that runs 50+ factories in 10+ countries. 90% of Cymbal Materials' manufacturing processes involve milling, which is accomplished using industrial computer numerical control (CNC) milling machines. Although their factories implement routine maintenance checklists, unplanned and unknown failures still happen occasionally. Moreover, many Cymbal Materials factory workers lack the experience to identify and troubleshoot failures due to a labor shortage and high turnover rate in their factories. Hence, Cymbal Materials is working with Google Cloud to build a machine learning model that can identify and analyze failures on top of Manufacturing Connect, Manufacturing Data Engine, and Vertex AI.

Read more at Google Cloud Blog

Intro to deep learning to track deforestation in supply chains

📅 Date:

✍️ Author: Alexandrina Garcia-Verdin

🏢 Organizations: Google


In my experience, I have observed that it’s common in machine learning to surrender to the process of experimenting with many different algorithms in a trial and error fashion, until you get the desired result. My peers and I at Google have a People and Planet AI YouTube series where we talk about how to train and host a model for environmental purposes using Google Cloud and Google Earth Engine. Our focus is inspiring people to use deep learning, and if we could rename the series, we would call it AI for Minimalists since we would recommend artificial neural networks for most of our use cases. And so in this episode we give an overview of what deep learning is and how you can use it for tracking deforestation in supply chains.

Read more at Google Cloud Blog

The art of effective factory data visualization

Anomaly detection in industrial IoT data using Google Vertex AI: A reference notebook

📅 Date:

✍️ Authors: Volodymyr Koliadin, Ilya Katsov

🔖 Topics: Anomaly Detection, IIoT

🏢 Organizations: Grid Dynamics, Google


Modern manufacturing, transportation, and energy companies routinely operate thousands of machines and perform hundreds of quality checks at different stages of their production and distribution processes. Industrial sensors and IoT devices enable these companies to collect comprehensive real-time metrics across equipment, vehicles, and produced parts, but the analysis of such data streams is a challenging task.

We start with a discussion of how the health monitoring problem can be converted into standard machine learning tasks and what pitfalls one should be aware of, and then implement a reference Vertex AI pipeline for anomaly detection. This pipeline can be viewed as a starter kit for quick prototyping of IoT anomaly detection solutions that can be further customized and extended to create production-grade platforms.
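One of the standard ML framings the post alludes to is outlier detection on a sensor stream. A z-score detector is about the simplest possible instance; this is a minimal illustration of the framing, not the Vertex AI pipeline from the notebook, and a production system would fit the baseline on a separate training window rather than on the stream it scores.

```python
from statistics import mean, stdev

def flag_anomalies(readings, threshold=3.0):
    """Return indices of readings whose deviation from the mean exceeds
    `threshold` standard deviations -- a naive health-monitoring check
    for a single sensor channel."""
    mu, sigma = mean(readings), stdev(readings)
    return [i for i, r in enumerate(readings) if abs(r - mu) > threshold * sigma]
```

For example, twenty readings near 10.0 followed by a spike to 50.0 flags only the spike, while normal jitter stays below the threshold.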

Read more at Grid Dynamics Blog

Table Tennis: A Research Platform for Agile Robotics

📅 Date:

✍️ Authors: Avi Singh, Laura Graesser

🔖 Topics: Reinforcement Learning

🏢 Organizations: Google


Robot learning has been applied to a wide range of challenging real world tasks, including dexterous manipulation, legged locomotion, and grasping. It is less common to see robot learning applied to dynamic, high-acceleration tasks requiring tight-loop human-robot interactions, such as table tennis. There are two complementary properties of the table tennis task that make it interesting for robotic learning research. First, the task requires both speed and precision, which puts significant demands on a learning algorithm. At the same time, the problem is highly-structured (with a fixed, predictable environment) and naturally multi-agent (the robot can play with humans or another robot), making it a desirable testbed to investigate questions about human-robot interaction and reinforcement learning. These properties have led to several research groups developing table tennis research platforms.

Read more at Google AI Research

How Boeing overcame their on-premises implementation challenges with data & AI

CircularNet: Reducing waste with Machine Learning

📅 Date:

✍️ Authors: Robert Little, Umair Sabir

🔖 Topics: Sustainability, Machine Learning, Convolutional Neural Network

🏢 Organizations: Google


The facilities where our waste and recyclables are processed are called “Material Recovery Facilities” (MRFs). Each MRF processes tens of thousands of pounds of our societal “waste” every day, separating valuable recyclable materials like metals and plastics from non-recyclable materials. A key inefficiency within the current waste capture and sorting process is the inability to identify and segregate waste into high quality material streams. The accuracy of the sorting directly determines the quality of the recycled material; for high-quality, commercially viable recycling, the contamination levels need to be low. Even though the MRFs use various technologies alongside manual labor to separate materials into distinct and clean streams, the exceptionally cluttered and contaminated nature of the waste stream makes automated waste detection challenging to achieve, and the recycling rates and the profit margins stay at undesirably low levels.

Enter what we call “CircularNet”, a set of models that lowers barriers to AI/ML tech for waste identification and all the benefits this new level of transparency can offer. Our goal with CircularNet is to develop a robust and data-efficient model for waste/recyclables detection, which can support the way we identify, sort, manage, and recycle materials across the waste management ecosystem.

Read more at Tensorflow Blog

Synopsys helps semiconductor designers accelerate chip design and development on Google Cloud

📅 Date:

🔖 Topics: Electronic Design Automation

🏢 Organizations: Synopsys, Google


EDA software is a large consumer of high performance computing capacity in the cloud. With the release of Synopsys Cloud bring-your-own-cloud (BYOC) solution on Google Cloud, chip designers can now scale their Google Cloud infrastructure with Synopsys’s leading EDA tools under the flexible FlexEDA pay-per-use model and access unlimited EDA software license availability on-demand by the hour or minute.

Read more at Google Cloud Blog

Lufthansa increases on-time flights by wind forecasting with Google Cloud ML

📅 Date:

✍️ Author: Anant Nawalgaria

🔖 Topics: Machine Learning, Forecasting

🏢 Organizations: Lufthansa, Google


The magnitude and direction of wind significantly impacts airport operations, and Lufthansa Group Airlines are no exception. A particularly troublesome kind is called BISE: it is a cold, dry wind that blows from the northeast to southwest in Switzerland, through the Swiss Plateau. Its effects on flight schedules can be severe, such as forcing planes to change runways, which can create a chain reaction of flight delays and possible cancellations. In Zurich Airport, in particular, BISE can potentially reduce capacity by up to 30%, leading to further flight delays and cancellations, and to millions in lost revenue for Lufthansa (as well as dissatisfaction among their passengers).

Machine learning (ML) can help airports and airlines to better anticipate and manage these types of disruptive weather events. In this blog post, we’ll explore an experiment Lufthansa did together with Google Cloud and its Vertex AI Forecast service, accurately predicting BISE hours in advance, with more than 40% relative improvement in accuracy over internal heuristics, all within days instead of the months it often takes to do ML projects of this magnitude and performance.

Read more at Google Cloud Blog

How Volkswagen and Google Cloud are using machine learning to design more energy-efficient cars

📅 Date:

🔖 Topics: Generative Design, Sustainability

🏭 Vertical: Automotive

🏢 Organizations: Volkswagen, Google


Volkswagen strives to design beautiful, performant, and energy efficient vehicles. This entails an iterative process where designers go through many design drafts, evaluating each, integrating the feedback, and refining. For example, a vehicle’s drag coefficient—its resistance to air—is one of the most important factors of energy efficiency. Thus, getting estimates of the drag coefficient for several designs helps the designers experiment and converge toward more energy-efficient solutions. The cheaper and faster this feedback loop is, the more it enables the designers.

This joint research effort between Volkswagen and Google has produced promising results with the help of the Vertex AI platform. In this first milestone, the team was able to successfully bring recent AI research results a step closer to practical application for car design. This first iteration of the algorithm can produce a drag coefficient estimate with an average error of just 4%, within a second. An average error of 4%, while not quite as accurate as a physical wind tunnel test, can be used to narrow a large selection of design candidates to a small shortlist. And given how quickly the estimates appear, we have made a substantial improvement on the existing methods that take days or weeks. With the algorithm that we have developed, designers can run more efficiency tests, submit more candidates, and iterate towards richer, more effective designs in just a small fraction of the time previously required.
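The reported 4% figure is an average relative error between the surrogate model's drag estimates and ground-truth values (e.g. wind-tunnel or CFD results). A minimal sketch of that metric, using made-up drag coefficients for three hypothetical design drafts:

```python
def mean_relative_error(predicted, measured):
    """Average of |prediction - truth| / truth across design candidates,
    the metric behind the ~4% average error reported above."""
    return sum(abs(p - m) / m for p, m in zip(predicted, measured)) / len(measured)

# Hypothetical drag coefficients: surrogate estimates vs. measured values.
err = mean_relative_error([0.26, 0.31, 0.29], [0.25, 0.30, 0.30])
```

An error at this level is enough to rank candidates and shortlist designs, even though final validation still needs a physical wind tunnel.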

Read more at Google Cloud Blog

Towards Helpful Robots: Grounding Language in Robotic Affordances

📅 Date:

✍️ Authors: Brian Ichter, Karol Hausman

🔖 Topics: Industrial Robot, Natural Language Processing

🏢 Organizations: Google, Everyday Robots


In “Do As I Can, Not As I Say: Grounding Language in Robotic Affordances”, we present a novel approach, developed in partnership with Everyday Robots, that leverages advanced language model knowledge to enable a physical agent, such as a robot, to follow high-level textual instructions for physically-grounded tasks, while grounding the language model in tasks that are feasible within a specific real-world context. We evaluate our method, which we call PaLM-SayCan, by placing robots in a real kitchen setting and giving them tasks expressed in natural language. We observe highly interpretable results for temporally-extended complex and abstract tasks, like “I just worked out, please bring me a snack and a drink to recover.” Specifically, we demonstrate that grounding the language model in the real world nearly halves errors over non-grounded baselines. We are also excited to release a robot simulation setup where the research community can test this approach.

Read more at Google AI Blog

Aramco and Cognite join forces in new data venture

📅 Date:

🏢 Organizations: Aramco, Cognite, Google


Aramco and Cognite, a global leader in industrial software, have launched CNTXT, a joint venture based in the Kingdom of Saudi Arabia. Headquartered in Riyadh, CNTXT aims to support the Kingdom’s industrial digitalization, and the wider MENA region.

CNTXT will provide digital transformation services enabled by advanced cloud solutions and leading industrial software. These solutions and services aim to help public and private sector companies to future-proof their data infrastructure, increase revenue, cut costs and reduce risks while enhancing operational sustainability and security. CNTXT is Google Cloud’s reseller for cloud solutions in the Kingdom and the exclusive reseller of Cognite Data Fusion in MENA region. Additionally, Google Cloud is expected to launch a “Center of Excellence” later this year to provide training to developers and business leaders in how to use cloud technologies.

Read more at Aramco Press Release

Maersk Mobile: All the Way with Flutter

📅 Date:

🔖 Topics: App Architecture

🏢 Organizations: Maersk, Google


The Maersk App helps our customers to follow the progress of their shipments in real-time. In late 2017, the team built the app on native platforms (Android and iOS) with a very small group of engineers compared to the size of the web teams. Keeping up with requirements to solve the business needs of our customers was challenging and time-consuming, as all development had to be done twice. Over time, the tech debt of maintaining two codebases kept growing as the underlying platforms changed and as new features and services were added for a rapidly growing user base.

One additional underrated benefit is its seamless integration with Firebase, Google's Backend-as-a-Service (BaaS) platform. Engineers can benefit from Firebase services such as analytics, performance monitoring, crash reporting, and app distribution to QA, which are available out of the box with minimal code/configuration changes.

We incorporated the BLoC architecture to manage business logic and the UI (view) separately. The BLoC architecture helped us manage state more effectively for the app, as it was easy to maintain a common state throughout the app for a persistent user experience, with improved security around user access.

Read more at Maersk News

TELUS: Solving for workers’ safety with edge computing and 5G

📅 Date:

🔖 Topics: 5G, Edge Computing, Worker Safety

🏢 Organizations: Google, TELUS


Together with Google Cloud, we have been leveraging solutions with the power of MEC and 5G to develop a workers’ safety application in our Edmonton Data Center that enables on-premise video analytics cameras to screen manufacturing facilities and ensure compliance with safety requirements to operate heavy-duty machinery. The CCTV (closed-circuit television) cameras we used are cost-effective and easier to deploy than RTLS (real time location services) solutions that detect worker proximity and avoid collisions. This is a positive, proactive step to steadily improve workplace safety. For example, if a worker’s hand is close to a drill, that drill press will not bore holes in any surface until the video analytics camera detects that the worker’s hand has been removed from the safety zone area.

Read more at Google Cloud Blog

Introducing new Google Cloud manufacturing solutions: smart factories, smarter workers

📅 Date:

🔖 Topics: Cloud Computing, Machine Health

🏢 Organizations: Google, Litmus


The new manufacturing solutions from Google Cloud give manufacturing engineers and plant managers access to unified and contextualized data from across their disparate assets and processes.

Manufacturing Data Engine is the foundational cloud solution to process, contextualize and store factory data. The cloud platform can acquire data from any type of machine, supporting a wide range of data, from telemetry to image data, via a private, secure, and low cost connection between edge and cloud. With built-in data normalization and context-enrichment capabilities, it provides a common data model, with a factory-optimized data lakehouse for storage.

Manufacturing Connect is the factory edge platform co-developed with Litmus that quickly connects with nearly any manufacturing asset via an extensive library of 250-plus machine protocols. It translates machine data into a digestible dataset and sends it to the Manufacturing Data Engine for processing, contextualization and storage. By supporting containerized workloads, it allows manufacturers to run low-latency data visualization, analytics and ML capabilities directly on the edge.
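The translation step (vendor-specific machine payloads mapped onto a common data model) can be sketched as a small normalization function. The field names and tag map below are illustrative assumptions, not Manufacturing Data Engine's actual schema.

```python
def normalize(raw: dict, tag_map: dict) -> dict:
    """Translate a vendor-specific machine payload into a common data
    model by renaming tags, so downstream storage and analytics see a
    uniform schema regardless of the source protocol."""
    return {
        "machine_id": raw["id"],
        "timestamp": raw["ts"],
        "metrics": {tag_map.get(k, k): v for k, v in raw["values"].items()},
    }

record = normalize(
    {"id": "cnc-07", "ts": "2023-01-01T00:00:00Z", "values": {"spdl_rpm": 1200}},
    {"spdl_rpm": "spindle_speed_rpm"},
)
```

Unmapped tags pass through unchanged, so partially onboarded machines still produce usable records.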

Read more at Google Cloud Blog

Price optimization notebook for apparel retail using Google Vertex AI

📅 Date:

✍️ Authors: Volodymyr Koliadin, Ilya Katsov

🔖 Topics: Demand Planning

🏭 Vertical: Apparel

🏢 Organizations: Google


One of the key requirements of a price optimization system is an accurate forecasting model to quickly simulate demand response to price changes. Historically, developing a Machine Learning forecast model required a long timeline with heavy involvement from skilled specialists in data engineering, data science, and MLOps. The teams needed to perform a variety of tasks in feature engineering, model architecture selection, hyperparameter optimization, and then manage and monitor deployed models.

Vertex AI Forecast provides an advanced AutoML workflow for time series forecasting, which helps dramatically reduce the engineering and research effort required to develop accurate forecasting models. The service easily scales up to large datasets with over 100 million rows and 1,000 columns, covering years of data for thousands of products with hundreds of possible demand drivers. Most importantly, it produces highly accurate forecasts. The model scored in the top 2.5% of submissions in M5, the most recent global forecasting competition, which used data from Walmart.

Read more at Google Cloud Blog

Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance

📅 Date:

🔖 Topics: Large Language Model, Transformer

🏢 Organizations: Google


Last year Google Research announced our vision for Pathways, a single model that could generalize across domains and tasks while being highly efficient. An important milestone toward realizing this vision was to develop the new Pathways system to orchestrate distributed computation for accelerators. In “PaLM: Scaling Language Modeling with Pathways”, we introduce the Pathways Language Model (PaLM), a 540-billion parameter, dense decoder-only Transformer model trained with the Pathways system, which enabled us to efficiently train a single model across multiple TPU v4 Pods. We evaluated PaLM on hundreds of language understanding and generation tasks, and found that it achieves state-of-the-art few-shot performance across most tasks, by significant margins in many cases.

Read more at Google AI Blog

UPS Expands Deal With Google Cloud to Prepare for Surge in Data

📅 Date:

🏢 Organizations: Google, UPS


Logistics company to gain network, storage and compute capacity to help it analyze new data coming from initiatives such as RFID chips on packages

Read more at Wall Street Journal (Paid)

Robust Routing Using Electrical Flows

📅 Date:

✍️ Authors: Ali Kemal Sinop, Kostas Kollias

🏢 Organizations: Google


We view the road network as a graph, where intersections are nodes and roads are edges. Our method then models the graph as an electrical circuit by replacing the edges with resistors, whose resistances equal the road traversal time, and then connecting a battery to the origin and destination, which results in electrical current between those two points. In this analogy, the resistance models how time-consuming it is to traverse a segment. In this sense, long and congested segments have high resistances.
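The circuit analogy can be made concrete on a tiny three-node network: build the weighted graph Laplacian from edge conductances (1/resistance), solve for node potentials with a unit current injected at the origin and extracted at the destination, and read off the current on each edge. This is a generic electrical-flow computation under the stated analogy, not the paper's implementation.

```python
import numpy as np

# Tiny road network as a circuit: (u, v, resistance), where resistance
# equals the road traversal time.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.0)]
n = 3

L = np.zeros((n, n))                       # weighted graph Laplacian
for u, v, r in edges:
    c = 1.0 / r                            # conductance of the edge
    L[u, u] += c; L[v, v] += c
    L[u, v] -= c; L[v, u] -= c

b = np.zeros(n)
b[0], b[2] = 1.0, -1.0                     # battery: inject 1A at node 0, extract at node 2

phi = np.linalg.pinv(L) @ b                # node potentials (pinv handles the singular Laplacian)
flows = {(u, v): (phi[u] - phi[v]) / r for u, v, r in edges}
```

Here the two-hop path 0→1→2 and the direct edge 0→2 both have total resistance 2, so the current splits evenly: each route carries 0.5A, mirroring how traffic would spread across equally costly alternatives.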

Read more at Google AI Blog

Improving PPA In Complex Designs With AI

📅 Date:

✍️ Author: John Koon

🔖 Topics: Reinforcement Learning, Generative Design

🏭 Vertical: Semiconductor

🏢 Organizations: Google, Cadence, Synopsys


The goal of chip design has always been to optimize power, performance, and area (PPA), but results can vary greatly even with the best tools and highly experienced engineering teams. AI works best in design when the problem is clearly defined in a way that AI can understand. So an IC designer must first see whether there is a problem that can be tied to a system's ability to adapt, learn, and generalize knowledge/rules, and then apply these rules to an unfamiliar scenario.

Read more at Semiconductor Engineering

Can Robots Follow Instructions for New Tasks?

📅 Date:

✍️ Authors: Chelsea Finn, Eric Jang

🔖 Topics: robotics, natural language processing, imitation learning

🏢 Organizations: Google, Everyday Robots


The results of this research show that simple imitation learning approaches can be scaled in a way that enables zero-shot generalization to new tasks. That is, it shows one of the first indications of robots being able to successfully carry out behaviors that were not in the training data. Interestingly, language embeddings pre-trained on ungrounded language corpora make for excellent task conditioners. We demonstrated that natural language models can not only provide a flexible input interface to robots, but that pretrained language representations actually confer new generalization capabilities to the downstream policy, such as composing unseen object pairs together.

In the course of building this system, we confirmed that periodic human interventions are a simple but important technique for achieving good performance. While there is a substantial amount of work to be done in the future, we believe that the zero-shot generalization capabilities of BC-Z are an important advancement towards increasing the generality of robotic learning systems and allowing people to command robots. We have released the teleoperated demonstrations used to train the policy in this paper, which we hope will provide researchers with a valuable resource for future multi-task robotic learning research.

Read more at Google AI Blog

Inside X’s Mission to Make Robots Boring

📅 Date:

✍️ Author: Steven Levy

🔖 Topics: robotics

🏢 Organizations: Google


It’s research by Everyday Robots, a project of X, Alphabet’s self-styled “moonshot factory.” The cafe testing ground is one of dozens on the Google campus in Mountain View, California, where a small percentage of the company’s massive workforce has now returned to work. The project hopes to make robots useful, operating in the wild instead of controlled environments like factories. After years of development, Everyday Robots is finally sending its robots into the world—or at least out of the X headquarters building—to do actual work.

Read more at WIRED

Chip floorplanning with deep reinforcement learning

AWS, Google, Microsoft apply expertise in data, software to manufacturing

📅 Date:

✍️ Author: Ilene Wolff

🔖 Topics: digital transformation

🏢 Organizations: Ford, AWS, Google, Microsoft


As manufacturing becomes digitized, Google’s methodologies that were developed for the consumer market are becoming relevant for industry, said Wee, who previously worked in the semiconductor industry as an industrial engineer. “We believe we’re at a point in time where these technologies—primarily the analytics and AI area—that have been very difficult to use for the typical industrial engineer are becoming so easy to use on the shop floor,” he said. “That’s where we believe our competitive differentiation lies.”

Meanwhile, Ford is also selectively favoring human brain power over software to analyze data and turning more and more to in-house coders than applications vendors. “The solution will be dependent upon the application,” Mikula said. “Sometimes it will be software, and sometimes it’ll be a data analyst who crunches the data sources. We would like to move to solutions that are more autonomous and driven by machine learning and artificial intelligence. The goal is to be less reliant on purchased SaaS.”

Read more at SME Media

Altana AI Raises $15M Series A Investment to Build the Single Source of Truth On the Global Supply Chain

📅 Date:

🔖 Topics: Funding Event

🏢 Organizations: Altana AI, Google


Altana AI has secured $15 million in Series A funding, led by GV (formerly Google Ventures). Floating Point, Ridgeline Partners, and existing investors Amadeus Capital Partners and Schematic Ventures joined the round, which closed in May 2021.

The company’s AI platform — the Altana Atlas — connects and learns from billions of data points to create a living, intelligent map of global commerce. Multinational enterprises like Boston Scientific are connecting to the Altana Atlas to map their supply chains beyond their immediate suppliers, build more resilient supplier networks, and manage risk across their global footprint. Government agencies and global logistics providers in the US and abroad are using the Altana Atlas to surface illicit activity and security threats hiding in opaque supply chain networks. To enable compliant trade at the speed of e-commerce, the world’s largest logistics providers and customs agencies are using the Altana Atlas to expedite lawful shipments across borders while filtering out illicit shipments.

Altana is pioneering a unique federated machine learning approach that enables shared global intelligence without data sharing, unlocking information that was never before available to power artificial intelligence. Karim Faris, General Partner at GV said, “Altana has cracked the code on creating intelligence from data that cannot be brought together directly because of privacy, sovereignty, and intellectual property concerns. In just two-and-a-half years since its founding, Altana is already working with a number of the world’s most important government agencies, logistics providers, and enterprises to transform how they manage global supply chains.”

Read more at PR Newswire

Introducing Intrinsic

📅 Date:

✍️ Author: Wendy Tan-White

🔖 Topics: robotics

🏢 Organizations: Google, Alphabet, Intrinsic


Intrinsic is working to unlock the creative and economic potential of industrial robotics for millions more businesses, entrepreneurs, and developers. We’re developing software tools designed to make industrial robots (which are used to make everything from solar panels to cars) easier to use, less costly and more flexible, so that more people can use them to make new products, businesses and services.

Read more at Medium

Visual Inspection AI: a purpose-built solution for faster, more accurate quality control

📅 Date:

✍️ Authors: Mandeep Wariach, Thomas Reinbacher

🔖 Topics: cloud computing, computer vision, machine learning, quality assurance

🏢 Organizations: Google


The Google Cloud Visual Inspection AI solution automates visual inspection tasks using a set of AI and computer vision technologies that enable manufacturers to transform quality control processes by automatically detecting product defects.

We built Visual Inspection AI to meet the needs of quality, test, manufacturing, and process engineers who are experts in their domain, but not in AI. By combining ease of use with a focus on priority use cases, customers are realizing significant benefits compared to general purpose machine learning (ML) approaches.
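Visual Inspection AI's learned models are far more capable than this, but the basic shape of automated defect detection can be illustrated with a golden-sample comparison: flag a unit whose image deviates too much from a known-good reference. The images and threshold below are hypothetical stand-ins, not the product's API.

```python
import numpy as np

def detect_defect(image, golden, threshold=0.05):
    """Flag a defect when `image` deviates from a known-good `golden` reference
    by more than `threshold` mean absolute pixel difference.
    (Production systems learn defect features; this is the simplest baseline.)"""
    score = float(np.mean(np.abs(image.astype(float) - golden.astype(float))))
    return score > threshold, score

golden = np.zeros((16, 16))                # known-good unit
good = golden + 0.01                       # slight lighting variation
bad = golden.copy()
bad[4:8, 4:8] = 1.0                        # scratch-like blemish

print(detect_defect(good, golden)[0])      # False
print(detect_defect(bad, golden)[0])       # True
```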

Read more at Google Cloud Blog

Toward Generalized Sim-to-Real Transfer for Robot Learning

📅 Date:

✍️ Authors: Daniel Ho, Kanishka Rao

🔖 Topics: reinforcement learning, AI, robotics, imitation learning, generative adversarial networks

🏢 Organizations: Google


A limitation for their use in sim-to-real transfer, however, is that because GANs translate images at the pixel level, multi-pixel features or structures that are necessary for robot task learning may be arbitrarily modified or even removed.

To address the above limitation, and in collaboration with the Everyday Robot Project at X, we introduce two works, RL-CycleGAN and RetinaGAN, that train GANs with robot-specific consistencies — so that they do not arbitrarily modify visual features that are specifically necessary for robot task learning — and thus bridge the visual discrepancy between sim and real.
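The precise consistency losses are defined in the RL-CycleGAN and RetinaGAN papers; the snippet below only sketches the general pattern of augmenting a pixel-level translation objective with a task-consistency term that penalizes the translator when it changes a downstream predictor's output. All function names and stand-in models here are illustrative.

```python
import numpy as np

def l1(a, b):
    return float(np.mean(np.abs(a - b)))

def gan_consistency_loss(x, translate, inverse, task_head, lam=10.0, mu=1.0):
    """Translation objective with two consistency terms:
    - cycle consistency: sim -> real -> sim should reconstruct the input
    - task consistency: the task predictor should agree before and after
      translation, so features the robot relies on are not arbitrarily modified.
    """
    x_real = translate(x)               # sim -> "real"
    x_back = inverse(x_real)            # "real" -> sim
    cycle = l1(x, x_back)
    task = l1(task_head(x), task_head(x_real))
    return lam * cycle + mu * task

# Illustrative stand-ins: a brightness-shift "translator" and a
# mean-intensity-per-channel "task head".
translate = lambda img: img + 0.1
inverse = lambda img: img - 0.1
task_head = lambda img: img.mean(axis=(0, 1))

img = np.zeros((8, 8, 3))
loss = gan_consistency_loss(img, translate, inverse, task_head)
print(round(loss, 3))
```

A plain pixel-level GAN minimizes only the first kind of term; adding the task-consistency penalty is what keeps task-relevant structure intact across the sim-to-real gap.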

Read more at Google AI Blog

With new geothermal project, it’s full steam ahead for 24/7 carbon-free energy

📅 Date:

🔖 Topics: Partnership

🏢 Organizations: Fervo Energy, Google


When Google announced our plan to go beyond purchasing renewable power for 100% of our energy usage and operate on 24/7 carbon-free energy by 2030, we noted that achieving this goal will require new transaction structures, advancements in clean energy policy, and innovative new technologies. Today, we’re pleased to announce that one of these new technologies—a first-of-its-kind, next-generation geothermal project—will soon begin adding carbon-free energy to the electric grid that serves our data centers and infrastructure throughout Nevada, including our Cloud region in Las Vegas.

Google and clean-energy startup Fervo have just signed the world’s first corporate agreement to develop a next-generation geothermal power project, which will provide an “always-on” carbon-free resource that can reduce our hourly reliance on fossil fuels. In 2022, Fervo will begin adding “firm” geothermal energy to the state’s electric grid system, where Google’s commitments already include one of the world’s largest corporate solar-plus-storage power purchase agreements.

Importantly, this collaboration also sets the stage for next-generation geothermal to play a role as a firm and flexible carbon-free energy source that can increasingly replace carbon-emitting fossil fuels—especially when aided by policies that expand and improve electricity markets; incentivize deployment of innovative technologies; and increase investments in clean energy research, development, and demonstration (RD&D).

Read more at Google Cloud Blog

Learning to Manipulate Deformable Objects

📅 Date:

✍️ Authors: Daniel Seita, Andy Zeng

🔖 Topics: robotics

🏢 Organizations: Google


In “Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks,” to appear at ICRA 2021, we release an open-source simulated benchmark, called DeformableRavens, with the goal of accelerating research into deformable object manipulation. DeformableRavens features 12 tasks that involve manipulating cables, fabrics, and bags and includes a set of model architectures for manipulating deformable objects towards desired goal configurations, specified with images. These architectures enable a robot to rearrange cables to match a target shape, to smooth a fabric to a target zone, and to insert an item in a bag. To our knowledge, this is the first simulator that includes a task in which a robot must use a bag to contain other items, which presents key challenges in enabling a robot to learn more complex relative spatial relations.

Read more at Google AI Blog

Google Cloud and Seagate: Transforming hard-disk drive maintenance with predictive ML

📅 Date:

✍️ Authors: Nitin Aggarwal, Rostam Dinyari

🔖 Topics: machine learning, predictive maintenance

🏭 Vertical: Computer and Electronic

🏢 Organizations: Google, Seagate


At Google Cloud, we know first-hand how critical it is to manage HDDs in operations and preemptively identify potential failures. We are responsible for running some of the largest data centers in the world—any misses in identifying these failures at the right time can potentially cause serious outages across our many products and services. In the past, when a disk was flagged for a problem, the main option was to repair the problem on site using software. But this procedure was expensive and time-consuming. It required draining the data from the drive, isolating the drive, running diagnostics, and then re-introducing it to traffic.

That’s why we teamed up with Seagate, our HDD original equipment manufacturer (OEM) partner for Google’s data centers, to find a way to predict frequent HDD problems. Together, we developed a machine learning (ML) system, built on top of Google Cloud, to forecast the probability of a recurring failing disk—a disk that fails or has experienced three or more problems in 30 days.
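The prediction target described above, a disk that fails outright or experiences three or more problems in 30 days, can be expressed as a labeling function over a disk's event log, which is the first step in building such a supervised ML system. The event-log format here is a hypothetical simplification.

```python
from datetime import date, timedelta

def is_recurring_failure(events, window_days=30, problem_threshold=3):
    """Label a disk per the stated target: it failed outright, or it logged
    at least `problem_threshold` problems inside any `window_days` span.
    `events` is a list of (date, kind) tuples, kind in {"failure", "problem"}.
    """
    if any(kind == "failure" for _, kind in events):
        return True
    problems = sorted(d for d, kind in events if kind == "problem")
    window = timedelta(days=window_days)
    for i, start in enumerate(problems):
        # count problems falling within [start, start + window]
        count = sum(1 for d in problems[i:] if d - start <= window)
        if count >= problem_threshold:
            return True
    return False

log = [(date(2021, 1, 1), "problem"),
       (date(2021, 1, 10), "problem"),
       (date(2021, 1, 25), "problem")]
print(is_recurring_failure(log))        # three problems within 30 days
print(is_recurring_failure(log[:2]))    # only two problems, not yet positive
```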

Read more at Google Cloud Blog

Multi-Task Robotic Reinforcement Learning at Scale

📅 Date:

✍️ Authors: Karol Hausman, Yevgen Chebotar

🔖 Topics: reinforcement learning, robotics, AI, machine learning

🏢 Organizations: Google


For general-purpose robots to be most useful, they would need to be able to perform a range of tasks, such as cleaning, maintenance and delivery. But training even a single task (e.g., grasping) using offline reinforcement learning (RL), a trial-and-error learning method where the agent trains on previously collected data, can take thousands of robot-hours, in addition to the significant engineering needed to enable autonomous operation of a large-scale robotic system. Thus, the computational costs of building general-purpose everyday robots using current robot learning methods become prohibitive as the number of tasks grows.
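The defining constraint of offline RL, learning purely from a fixed log of transitions with no further environment interaction, can be sketched with tabular Q-learning over a logged replay buffer. The toy two-state MDP below is an illustrative assumption, not Google's setup.

```python
import numpy as np

def offline_q_learning(transitions, n_states, n_actions,
                       gamma=0.9, lr=0.5, epochs=200):
    """Fit Q-values from a fixed log of (state, action, reward, next_state)
    tuples; no new environment interaction occurs, as in offline RL."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(epochs):
        for s, a, r, s2 in transitions:
            target = r + gamma * Q[s2].max()   # bootstrapped Bellman target
            Q[s, a] += lr * (target - Q[s, a])
    return Q

# Toy chain: in state 0, action 1 moves to state 1 and earns reward 1.
log = [(0, 0, 0.0, 0), (0, 1, 1.0, 1), (1, 0, 0.0, 1)]
Q = offline_q_learning(log, n_states=2, n_actions=2)
print(int(Q[0].argmax()))   # greedy action recovered from the fixed log
```

At robot scale the dataset is millions of image-based transitions rather than three tuples, which is where the thousands of robot-hours and the engineering burden come from.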

Read more at Google AI Blog

Way beyond AlphaZero: Berkeley and Google work shows robotics may be the deepest machine learning of all

📅 Date:

✍️ Author: @TiernanRayTech

🔖 Topics: AI, machine learning, robotics, reinforcement learning

🏢 Organizations: Google


With no well-specified rewards and state transitions that take place in a myriad of ways, training a robot via reinforcement learning represents perhaps the most complex arena for machine learning.

Read more at ZDNet

Rearranging the Visual World

📅 Date:

✍️ Authors: Andy Zeng, Pete Florence

🔖 Topics: AI, machine learning, robotics

🏢 Organizations: Google


Transporter Nets use a novel approach to 3D spatial understanding that avoids reliance on object-centric representations, making them general for vision-based manipulation but far more sample efficient than benchmarked end-to-end alternatives. As a consequence, they are fast and practical to train on real robots. We are also releasing an accompanying open-source implementation of Transporter Nets together with Ravens, our new simulated benchmark suite of ten vision-based manipulation tasks.

Read more at Google AI Blog

Edge-Inference Architectures Proliferate

📅 Date:

✍️ Author: Bryon Moyer

🔖 Topics: AI, machine learning, edge computing

🏭 Vertical: Semiconductor

🏢 Organizations: Cadence, Hailo, Google, Flex Logix, BrainChip, Synopsys, GrAI Matter, Deep Vision, Maxim Integrated


What makes one AI system better than another depends on a lot of different factors, including some that aren’t entirely clear.

The new offerings exhibit a wide range of structure, technology, and optimization goals. All must be gentle on power, but some target wired devices while others target battery-powered devices, giving different power/performance targets. While no single architecture is expected to solve every problem, the industry is in a phase of proliferation, not consolidation. It will be a while before the dust settles on the preferred architectures.

Read more at Semiconductor Engineering

RightHand Robotics raises $23 million from Menlo Ventures, Google

📅 Date:

🔖 Topics: funding event

🏢 Organizations: RightHand Robotics, Menlo Ventures, Google


With its reinforced bank account, Somerville, Mass.-based RightHand plans to expand its business and technical teams and broaden its suite of product applications, the firm said. “The funds will be used to support our growth and in hiring people as fast as we effectively can,” Martinelli said. “We’re getting follow-on orders and we need to support those orders and extend the product line, both for projects in the U.S. and in Europe and Japan.”

Read more at DC Velocity

Google Glass Didn't Disappear. You Can Find It On The Factory Floor

📅 Date:

🔖 Topics: augmented reality, quality assurance

🏢 Organizations: Google, AGCO


With Google Glass, she scans the serial number on the part she’s working on. This brings up manuals, photos or videos she may need. She can tap the side of the headset or say “OK Glass” and use voice commands to leave notes for the next shift worker.

Peggy Gullick, business process improvement director with AGCO, says the addition of Google Glass has been “a total game changer.” Quality checks are now 20 percent faster, she says, and it’s also helpful for on-the-job training of new employees. Before this, workers used tablets.

Read more at NPR

Augmented Reality Is Already Improving Worker Performance

📅 Date:

✍️ Authors: Magid Abraham, Marco Annunziata

🔖 Topics: augmented reality, quality assurance

🏢 Organizations: Google, General Electric


The video below, for example, shows a side-by-side time-lapse comparison of a GE technician wiring a wind turbine’s control box using the company’s current process, and then doing the same task while guided by line-of-sight instructions overlaid on the job by an AR headset. The device improved the worker’s performance by 34% on first use.

There’s been concern about machines replacing human workers, and certainly this is happening for some jobs. But the experience at General Electric and other industrial firms shows that, for many jobs, combinations of humans and machines outperform either working alone. Wearable augmented reality devices are especially powerful, as they deliver the right information at the right moment and in the ideal format, directly in workers’ line of sight, while leaving workers’ hands free so they can work without interruption. This dramatically reduces the time needed to complete a job because workers needn’t stop what they’re doing to flip through a paper manual or engage with a device or workstation. It also reduces errors because the AR display provides explicit guidance overlaid on the work being done, delivered on demand. Workers need only follow the detailed instructions directly in front of them in order to move through a sequence of steps to completion. If they encounter problems, they can launch training videos or connect by video with remote experts to share what they see through their smart glasses and get real-time assistance.

Read more at Harvard Business Review