Renowned robotics experts delve into humanoids, generative AI and other advancements in robotics technology

Last month, a comprehensive report was released covering the latest in robotics and the path it’s blazing for future technologies. The reporter reached out to some of the biggest names in the industry, from CMU, UC Berkeley, Meta, NVIDIA, Boston Dynamics and the Toyota Research Institute, asking each the same six questions about robotics.

Matthew Johnson-Roberson from CMU believes that generative AI will significantly bolster the capabilities of robots. According to him, the technology will enable robots to generalize across a wider range of tasks, adapt more readily to new environments, and improve their ability to learn and evolve autonomously.

Dhruv Batra from Meta sees generative AI playing two distinct roles in embodied AI and robotics research. He argues that training and testing in simulation cannot scale without generative AI, and that architectures for self-supervised learning will play a crucial role in predicting the sensory observations robots will encounter.

Aaron Saunders from Boston Dynamics feels that generative AI offers opportunities to create conversational interfaces to robots, improve the quality of computer vision functions, and potentially enable customer-facing capabilities such as visual question answering. He thinks that these capabilities are likely to extend past language and vision into robotic planning and control.

Russ Tedrake from TRI believes that generative AI has the potential to bring revolutionary new capabilities to robotics. In his view, the ability to communicate in natural language and to draw on internet-scale language and image data gives robots a far more robust understanding of, and reasoning about, the world.

Ken Goldberg from UC Berkeley says that in 2023, generative AI transformed robotics. Large language models allow robots and humans to communicate in natural language, while large Vision-Language-Action models can facilitate robot perception and control the motions of robot arms and legs.

Deepu Talla from NVIDIA is already seeing productivity improvements with generative AI across industries. He believes that generative AI’s impact will be transformative across robotics, from simulation to design.

When asked for their thoughts on the humanoid form factor, Ken Goldberg admitted to being skeptical in the past but is reconsidering his position after seeing the latest humanoids and quadrupeds. He believes that humanoids have many advantages over wheels in homes and factories and that bimanual (two-armed) robots are essential for many tasks.

Deepu Talla from NVIDIA feels that autonomous humanoids are even harder to design than other robot form factors, and that these robots will need multimodal AI to understand the environment around them. However, he also sees breakthroughs in generative AI making the skills humanoids need more generalizable.

Matthew Johnson-Roberson considers the humanoid form factor to be a complex engineering and design challenge but believes it has the potential to be extremely versatile and intuitively usable in a variety of social and practical contexts.

Max Bajracharya from TRI stated that places where robots might assist people tend to be designed for people, so these robots will likely need to fit and work in those environments.

When asked about the next major category for robotics, Max Bajracharya sees significant potential and need in agriculture. Its outdoor, unstructured nature, he believes, makes it an area with many opportunities.

All in all, the report provides a comprehensive breakdown of robotics in 2023 and where the field is headed. With insights from these industry experts, it’s clear that generative AI is set to play a crucial role in the future of robotics, and that the humanoid form factor and agriculture are key areas for the industry’s development in the coming years.