Stanford University

Canvas Category Consultancy : Research : Academic


Primary Location Stanford, California, USA

Stanford was founded almost 150 years ago on a bedrock of societal purpose. Our mission is to contribute to the world by educating students for lives of leadership and contribution with integrity; advancing fundamental knowledge and cultivating creativity; leading in pioneering research for effective clinical therapies; and accelerating solutions and amplifying their impact.

Assembly Line

OpenVLA: An Open-Source Vision-Language-Action Model

📅 Date:

✍️ Authors: Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti

🔖 Topics: Vision-Language-Action Model, Large Language Model, Industrial Robot

🏢 Organizations: Stanford University


Large policies pretrained on a combination of Internet-scale vision-language data and diverse robot demonstrations have the potential to change how we teach robots new skills: rather than training new behaviors from scratch, we can fine-tune such vision-language-action (VLA) models to obtain robust, generalizable policies for visuomotor control. Yet, widespread adoption of VLAs for robotics has been challenging as 1) existing VLAs are largely closed and inaccessible to the public, and 2) prior work fails to explore methods for efficiently fine-tuning VLAs for new tasks, a key component for adoption. Addressing these challenges, we introduce OpenVLA, a 7B-parameter open-source VLA trained on a diverse collection of 970k real-world robot demonstrations. OpenVLA builds on a Llama 2 language model combined with a visual encoder that fuses pretrained features from DINOv2 and SigLIP. As a product of the added data diversity and new model components, OpenVLA demonstrates strong results for generalist manipulation, outperforming closed models such as RT-2-X (55B) by 16.5% in absolute task success rate across 29 tasks and multiple robot embodiments, with 7x fewer parameters. We further show that we can effectively fine-tune OpenVLA for new settings, with especially strong generalization results in multi-task environments involving multiple objects and strong language grounding abilities, and outperform expressive from-scratch imitation learning methods such as Diffusion Policy by 20.4%. We also explore compute efficiency; as a separate contribution, we show that OpenVLA can be fine-tuned on consumer GPUs via modern low-rank adaptation methods and served efficiently via quantization without a hit to downstream success rate. Finally, we release model checkpoints, fine-tuning notebooks, and our PyTorch codebase with built-in support for training VLAs at scale on Open X-Embodiment datasets.
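
The efficiency results above map onto standard open-source tooling. The sketch below shows, in hedged form, what quantized serving plus LoRA-based fine-tuning of the released checkpoint could look like. The openvla/openvla-7b model ID and the predict_action helper come from the project's public release; the LoRA hyperparameters, the prompt, the unnorm_key, and the image file are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch (not the authors' exact pipeline): serve OpenVLA 4-bit
# quantized, then attach LoRA adapters for parameter-efficient fine-tuning.
# Assumes recent transformers, peft, and bitsandbytes; APIs vary by version.
import torch
from PIL import Image
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForVision2Seq, AutoProcessor, BitsAndBytesConfig

MODEL_ID = "openvla/openvla-7b"  # released checkpoint on the HuggingFace Hub

# 4-bit weight quantization lets the 7B policy fit on a consumer GPU.
quant_cfg = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    quantization_config=quant_cfg,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Inference: map a camera frame + language instruction to a robot action.
# predict_action ships with the checkpoint's remote code; "frame.png" and
# the unnorm_key stand in for a live camera feed and the target robot setup.
image = Image.open("frame.png")
prompt = "In: What action should the robot take to pick up the wrench?\nOut:"
inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)

# Fine-tuning: freeze the quantized base weights and train only small
# low-rank adapters on the new task's demonstrations (training loop omitted).
lora_cfg = LoraConfig(r=32, lora_alpha=16, lora_dropout=0.0, target_modules="all-linear")
vla = get_peft_model(vla, lora_cfg)
vla.print_trainable_parameters()  # typically well under 1% of the 7B weights
```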

Read more at arXiv

Bedrock Materials Secures $9 Million Seed Funding

📅 Date:

🔖 Topics: Funding Event

🏢 Organizations: Bedrock Materials, Trucks Venture Capital, Refactor Capital, Version One Ventures, Stanford University


Bedrock Materials, a battery technology startup spun out of Stanford University in 2023, announced the close of a $9 million seed funding round alongside the opening of its new research and development (R&D) headquarters in Chicago, Illinois. The round was led by Trucks Venture Capital, Refactor Capital, and Version One Ventures, with additional investment from Hanover Technology Investment Management, SpaceCadet Ventures, Brainstorm Capital, Evergreen Climate Innovations, Expansion VC, Climate Capital, Quest Venture Partners, Meliorate Partners, Valia Ventures, Ritual Capital, and several individual angel investors with strong ties to the electric vehicle and battery industries.

Bedrock Materials specializes in producing essential materials for low-cost, eco-friendly sodium-ion batteries, which are seen as a next-generation alternative to lithium-ion batteries because they use affordable, widely available raw materials. Last month, the company began R&D-scale production of battery precursor materials at its facility on Chicago's Near West Side, and it plans to open a larger, permanent site later this year.

Read more at PR Newswire

This AI Hunts for Hidden Hoards of Battery Metals

📅 Date:

✍️ Author: Josh Goldman

🔖 Topics: Machine Learning

🏭 Vertical: Mining

🏢 Organizations: KoBold Metals, Stanford University


The mining industry’s rate of successful exploration—meaning the number of big deposit discoveries per dollar invested—has been declining for decades. At KoBold, we sometimes talk about “Eroom’s law of mining.” As its reversed name suggests, it’s the opposite of Moore’s law. In accordance with Eroom’s law of mining, the number of ore deposits discovered per dollar of capital invested has decreased by a factor of 8 over the last 30 years. (The original Eroom’s law refers to a similar trend in the cost of new pharmaceutical discoveries.)
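
As a quick back-of-the-envelope check (ours, not the article's), an 8x decline over 30 years implies discoveries per dollar have been halving roughly every decade:

```python
import math

# An 8x drop in discoveries per dollar over 30 years, compounded annually.
decline_factor = 8
years = 30

annual_decline = 1 - (1 / decline_factor) ** (1 / years)
halving_time = years / math.log2(decline_factor)

print(f"implied annual decline: {annual_decline:.1%}")    # ~6.7% per year
print(f"implied halving time: {halving_time:.0f} years")  # 10 years
```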

Our exploration program in northern Quebec provides a good case study. We began by using machine learning to predict where we were most likely to find nickel in concentrations significant enough to be worth mining. We train our models using any available data on a region’s underlying physics and geology, and supplement the results with expert insights from our geologists. In Quebec, the models pointed us to land less than 20 km from currently operating mines.
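
As an illustration of that workflow, a prospectivity model of this kind can be framed as learning a per-cell probability of mineralization from stacked geophysical and geological layers, then ranking cells for expert review. The sketch below is a toy example on synthetic data; KoBold's actual features, labels, and models are proprietary and unpublished.

```python
# Illustrative sketch only: learn P(mineralization) per map cell from
# geophysical features, then rank cells to propose exploration targets.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical feature table: one row per map cell, with layers such as
# magnetic anomaly, gravity anomaly, EM conductivity, and host-rock type.
n_cells = 5_000
X = np.column_stack([
    rng.normal(size=n_cells),          # magnetic anomaly (normalized)
    rng.normal(size=n_cells),          # gravity anomaly (normalized)
    rng.exponential(size=n_cells),     # EM conductivity proxy
    rng.integers(0, 5, size=n_cells),  # lithology class code
])
# Labels from historical drilling / known deposits (sparse positives here,
# as in practice; this synthetic rule just creates a learnable signal).
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n_cells) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank held-out cells by predicted probability of mineralization; geologists
# would then vet the top-ranked targets before any drilling decision.
scores = model.predict_proba(X_test)[:, 1]
top_targets = np.argsort(scores)[::-1][:10]
print("ten most prospective cells:", top_targets)
```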

Over the course of the summer in Quebec, we drilled 10 exploration holes, each more than a kilometer away from the last. Each drilling location was determined by combining the results from our predictive models with the expert judgment of our geologists. In each instance, the collected data indicated we’d find conductive bodies in the right geologic setting—possible minable ore deposits, in other words—below the surface. Ultimately, we hit nickel-sulfide mineralization in 8 of the 10 drill holes, a hit rate easily 10 times better than the industry average for similarly isolated drill holes.

Read more at IEEE Spectrum