Introducing Gemini Robotics-ER 1.6: A Leap Forward in Embodied Thinking
For robots to be truly helpful in daily life and industry, they must do more than follow instructions; they must reason about the physical world. From navigating a complex facility to interpreting the needle on a pressure gauge, this "embodied thinking" lets a robot bridge the gap between digital intelligence and physical action.
Gemini Robotics-ER 1.6: A Significant Upgrade
Today, we are excited to introduce Gemini Robotics-ER 1.6, a significant upgrade to our reasoning-first model. This update enables robots to understand their environment with greater precision. By improving spatial reasoning and understanding across multiple viewpoints, we are equipping the next generation of physical agents with a new level of autonomy.
Enhanced Thinking Skills for Advanced Robotics
This model specializes in thinking skills critical to robotics, including visual and spatial understanding, task planning, and success recognition. It acts as a high-level reasoning model for a robot, orchestrating tasks by natively calling tools such as Google Search to find information, vision-language-action (VLA) models to execute actions, or other custom third-party functions. These capabilities let robots act as proactive agents rather than purely reactive ones.
Notable Improvements Over Previous Models
Gemini Robotics-ER 1.6 shows significant improvements over Gemini Robotics-ER 1.5 and Gemini 3.0 Flash, particularly in spatial and physical reasoning skills such as pointing, counting, and success recognition. We are also unlocking a new capability, instrument reading, which allows robots to read complex gauges and sight glasses. This capability was developed in close collaboration with Boston Dynamics.
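Earlier Robotics-ER releases returned spatial outputs such as points as JSON, with coordinates normalized to a [0, 1000] range. As a minimal sketch, assuming that same response schema (the exact format for 1.6 is not confirmed by this post), converting a pointing response into pixel coordinates might look like:

```python
import json


def parse_points(response_text: str, width: int, height: int) -> list[dict]:
    """Convert normalized [y, x] points (0-1000 range) to pixel coordinates."""
    points = json.loads(response_text)
    results = []
    for item in points:
        y, x = item["point"]
        results.append({
            "label": item.get("label", ""),
            "x": round(x / 1000 * width),
            "y": round(y / 1000 * height),
        })
    return results


# Example model output for a pointing query on a 640x480 image:
raw = '[{"point": [500, 250], "label": "pressure gauge"}]'
print(parse_points(raw, width=640, height=480))
# → [{'label': 'pressure gauge', 'x': 160, 'y': 240}]
```

Keeping coordinates normalized on the model side means the same response works for any image resolution; the scaling to pixels happens entirely in client code.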
Availability and Developer Support
Starting today, Gemini Robotics-ER 1.6 is available to developers via the Gemini API and Google AI Studio. A developer colab walks through configuring the model and performing embodied reasoning tasks, so you can integrate these capabilities into your projects quickly.
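As a starting point, a pointing request through the Gemini API (via the `google-genai` Python SDK) might look like the sketch below. The model id and the prompt format are assumptions modeled on earlier Robotics-ER releases, not taken from this post; check the developer colab for the exact values:

```python
def build_pointing_prompt(query: str) -> str:
    """Prompt asking the model to answer with normalized-coordinate JSON points."""
    return (
        f"Point to the {query}. Answer as JSON: "
        '[{"point": [y, x], "label": "<name>"}] with coordinates in [0, 1000].'
    )


def ask_for_points(image_bytes: bytes, query: str, api_key: str) -> str:
    """Send an image plus a pointing query to the model; returns raw JSON text."""
    # Requires `pip install google-genai`; imported lazily so the prompt
    # helper above stays usable without the SDK installed.
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=api_key)
    response = client.models.generate_content(
        # Assumed model id -- confirm the exact name in Google AI Studio.
        model="gemini-robotics-er-1.6",
        contents=[
            types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
            build_pointing_prompt(query),
        ],
    )
    return response.text
```

Splitting the prompt construction into its own function keeps the query format testable offline, without an API key or a network call.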
For more detailed information and to access the resources, visit our official blog.