In the Lab

At r4robot, we're always exploring new tools to train robots.

Here's a peek at some of our research and development.

In these research demos, an embodied AI (in our case, a robot arm) watches someone perform a task and then repeats the task on its own. It can also learn to recognize a specific object by physically playing with it when the object is first introduced.
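To make the watch-and-repeat idea a little more concrete, here is a minimal sketch of one way a demo like this could be framed: record (observation, arm pose) pairs while a person demonstrates, then replay the pose associated with the most similar observation at run time. The class, feature shapes, and nearest-neighbour lookup are illustrative assumptions for this post, not a description of our actual pipeline.

```python
# Minimal sketch: "watch, then repeat" framed as nearest-neighbour lookup.
# All names and feature shapes here are illustrative assumptions.
import numpy as np

class WatchAndRepeat:
    def __init__(self):
        self.observations = []  # image features seen during the demonstration
        self.actions = []       # arm poses recorded at the same moments

    def watch(self, obs_feature, arm_pose):
        """Store one (observation, action) pair while a person demonstrates."""
        self.observations.append(np.asarray(obs_feature, dtype=np.float32))
        self.actions.append(np.asarray(arm_pose, dtype=np.float32))

    def repeat(self, obs_feature):
        """Return the arm pose recorded for the most similar past observation."""
        obs = np.stack(self.observations)                # (N, D)
        query = np.asarray(obs_feature, dtype=np.float32)  # (D,)
        distances = np.linalg.norm(obs - query, axis=1)
        return self.actions[int(np.argmin(distances))]

# Usage: feed in features and poses during the demo, then query at run time.
policy = WatchAndRepeat()
policy.watch(obs_feature=[0.1, 0.4, 0.9], arm_pose=[0.0, 0.5, 0.2])
policy.watch(obs_feature=[0.2, 0.5, 0.8], arm_pose=[0.1, 0.6, 0.3])
print(policy.repeat(obs_feature=[0.15, 0.45, 0.85]))
```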

This is all done with simple tools: two webcams and a low-cost computer. The interaction itself is basic, but it's built around a few core ideas: accessibility, efficient computing, and physical interaction.
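As a sense of how little plumbing the hardware side needs, something like the following sketch can grab a pair of frames from two webcams using OpenCV, a common choice on low-cost computers. The device indices and the OpenCV approach are assumptions for illustration, not a description of our setup.

```python
# Sketch: grabbing one frame from each of two webcams with OpenCV.
# The device indices (0 and 1) are assumptions and may differ per machine.
import cv2

def read_frame_pair(cam_a, cam_b):
    """Read one frame from each camera; return None if either read fails."""
    ok_a, frame_a = cam_a.read()
    ok_b, frame_b = cam_b.read()
    if not (ok_a and ok_b):
        return None
    return frame_a, frame_b

cam_a = cv2.VideoCapture(0)  # first webcam
cam_b = cv2.VideoCapture(1)  # second webcam
try:
    pair = read_frame_pair(cam_a, cam_b)
    if pair is not None:
        frame_a, frame_b = pair
        print("Got frames:", frame_a.shape, frame_b.shape)
finally:
    cam_a.release()
    cam_b.release()
```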

One thing we find inspiring is that interactions with robots start to move away from a screen-based approach. While screens can offer loads of relevant information and rich multi-touch input, they constrain the interaction to a flat, two-dimensional space. We have an opportunity here to design interfaces that are physically engaging and that make fuller use of all of our senses in the real, physical world.

As we continue to develop intelligent machines that exist in the physical world, we aim to take advantage of the full breadth of human sensory experience.
