The first news from this year's Automate conference comes via Alphabet X spinout Intrinsic. The company announced at the Chicago event on Monday that it will incorporate a number of Nvidia offerings into its Flowstate robotics application platform.
That includes Isaac Manipulator, a collection of foundation models designed to create workflows for robotic arms. The offering launched at GTC in March, with some of the biggest names in industrial automation already on board, including Yaskawa, Solomon, PickNik Robotics, Ready Robotics, Franka Robotics and Universal Robots.
The collaboration focuses specifically on grasping (picking up and holding objects), one of the key modalities for manufacturing and fulfillment automation. Systems are trained on large data sets, with the goal of executing tasks that work across hardware (i.e., hardware agnosticism) and across different objects.
That is, grasping skills can be transferred to new environments, rather than each system having to be trained for each scenario. As humans, once we figure out how to pick something up, we can adapt that action to different objects in different environments. For the most part, robots can't do that, at least not yet.
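The transferable-skill idea above can be sketched in code. This is a purely illustrative toy, not an Intrinsic or Nvidia API: one grasp-planning function exposes the same interface for any gripper/object pair, so the "skill" carries across hardware instead of being rewritten per scenario. In the real system, a learned foundation model would sit behind this interface rather than the simple geometric check used here.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: one grasp skill that plans for any gripper/object
# pair, instead of one hand-coded routine per scenario. All names here
# are illustrative, not Intrinsic or Nvidia APIs.

@dataclass
class Gripper:
    name: str
    max_opening_mm: float  # widest the jaws can open

@dataclass
class ObjectModel:
    name: str
    width_mm: float  # graspable dimension of the object

def plan_grasp(gripper: Gripper, obj: ObjectModel) -> Optional[dict]:
    """Return a grasp plan if the object fits the gripper, else None.

    A learned model would replace this geometric check, but the
    interface stays the same across hardware and objects.
    """
    if obj.width_mm >= gripper.max_opening_mm:
        return None  # object too wide for this gripper
    return {
        "gripper": gripper.name,
        "object": obj.name,
        # open slightly wider than the object, capped at the jaw limit
        "opening_mm": min(obj.width_mm * 1.2, gripper.max_opening_mm),
    }

# The same skill transfers across grippers and objects:
plans = [
    plan_grasp(g, o)
    for g in (Gripper("parallel-jaw", 85.0), Gripper("wide-jaw", 140.0))
    for o in (ObjectModel("bracket", 40.0), ObjectModel("housing", 120.0))
]
```

The point of the sketch is the shape of the interface: callers never branch on which gripper or object is present, which is what lets a single skill be reused across environments.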
“In the future, developers will be able to use pre-built universal grasping skills like these to greatly accelerate their programming process,” Intrinsic founder and CEO Wendy Tan White said in a post. “For the industry as a whole, this development shows how foundation models could have a profound impact, including making today's robot-programming challenges easier to manage at scale, creating applications that were previously infeasible, reducing development costs and increasing flexibility for end users.”
Flowstate's first tests were conducted in Isaac Sim, Nvidia's robotics simulation platform. Intrinsic customer Trumpf Machine Tools has been working with a prototype of the system.
“This universal grasping skill, trained with 100% synthetic data in Isaac Sim, can be used to build sophisticated solutions that perform adaptive, versatile object-grasping tasks in simulation and in the real world,” Tan White said of Trumpf's work with the platform. “Instead of hard-coding specific grippers to grasp specific objects in a certain way, efficient code for a particular gripper and object is automatically generated to complete the task using the foundation model.”
Intrinsic is also working with Alphabet-owned DeepMind on pose estimation and path planning, two other key aspects of automation. For the former, the system was trained on more than 130,000 objects. The company says the system can determine the orientation of objects in “a few seconds,” an important part of being able to pick them up.
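To make the pose-estimation claim concrete, here is a toy sketch of one ingredient of the problem: recovering an object's planar orientation from observed surface points via principal component analysis. This is not DeepMind's or Intrinsic's method, which handles full 6-DoF pose with learned models; it only illustrates what "determining the orientation of an object" means at the simplest level.

```python
import numpy as np

def planar_orientation_deg(points: np.ndarray) -> float:
    """Angle (degrees, in [0, 180)) of the dominant axis of a 2D point set.

    Uses PCA: the principal axis is the eigenvector of the covariance
    matrix with the largest eigenvalue.
    """
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0

# Simulate points sampled from an elongated object lying at 30 degrees:
rng = np.random.default_rng(0)
t = rng.uniform(-1.0, 1.0, size=(200, 1))
theta = np.radians(30.0)
axis = np.array([[np.cos(theta), np.sin(theta)]])
pts = t * axis + rng.normal(scale=0.01, size=(200, 2))  # small sensor noise
angle = planar_orientation_deg(pts)
```

Even this noisy toy recovers the 30-degree orientation closely; the hard part in practice is doing the equivalent in 3D, from partial views, for objects the system has never seen.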
Another key piece of Intrinsic's work with DeepMind is the ability to operate multiple robots together. “Our teams have tested this 100% ML-generated solution to seamlessly orchestrate four separate robots working on a scaled-down automotive welding application simulation,” says Tan White. “Each robot's motion plans and trajectories are generated automatically, are collision-free, and are surprisingly efficient—performing about 25 percent better than some traditional methods we've tested.”
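One way to see what "collision-free" means for multiple coordinated robots is to check that their time-synchronized trajectories never bring any two robots closer than a safety margin. The sketch below is an illustrative verification step, not Intrinsic's ML planner: it assumes each robot's plan is a sequence of 3D tool positions sampled at the same timesteps.

```python
import numpy as np

def min_pairwise_separation(trajs: np.ndarray) -> float:
    """trajs: array of shape (n_robots, n_steps, 3), time-synchronized
    tool positions. Returns the smallest distance between any two robots
    at any shared timestep.
    """
    n = trajs.shape[0]
    best = np.inf
    for i in range(n):
        for j in range(i + 1, n):
            # distance between robots i and j at every timestep
            d = np.linalg.norm(trajs[i] - trajs[j], axis=1).min()
            best = min(best, d)
    return best

# Four robots moving along parallel lines spaced 0.5 m apart:
steps = np.linspace(0.0, 1.0, 50)
trajs = np.stack([
    np.column_stack([steps, np.full(50, 0.5 * k), np.zeros(50)])
    for k in range(4)
])
clearance = min_pairwise_separation(trajs)  # worst-case robot-to-robot gap
```

A planner would reject any set of trajectories whose clearance falls below the safety margin; generating plans that pass this check automatically, while staying efficient, is the hard part the quoted results refer to.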
The team is also working on systems that use two arms at once, a configuration more in line with the emerging world of humanoid robots. It's something we'll see a lot more of in the coming years, humanoid or not. Going from one arm to two opens up a whole world of additional applications for these systems.