Google's Robotics Researchers Are Developing New Robots Capable of Writing Their Own Programs

Robotics still has a long way to go before reaching generalized real-world applications, but researchers at Google are building new robots capable of writing their own programs.

Humans may find writing code to make robots perform a task challenging. That task can now be handed to the robots themselves. Why not?

A Code for Every Instruction

Programming is not as easy as some of us may like to think. The truth is that even relatively simple languages such as HTML require a coder to know the syntax and the tools needed to deploy it.

Current robotics requires an even more involved process: writing code for every instruction a robot receives. Even moving one of a robot's limbs needs specific code, and for the robot to know when its task has been completed, there is yet another set of code.
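To make this concrete, here is a minimal sketch of the traditional approach, where every behavior and every completion check is hand-written. The robot API below (`move_arm`, `close_gripper`) is hypothetical, standing in for a real robot SDK.

```python
def move_arm(x, y, z):
    """Pretend to move the end effector to (x, y, z); return the new position."""
    return (x, y, z)

def close_gripper():
    """Pretend to close the gripper."""
    return "gripper closed"

def pick_up_block(block_position):
    """Each instruction needs its own explicit program, including the
    check that tells the robot its task is complete."""
    pos = move_arm(*block_position)
    close_gripper()
    lifted = move_arm(pos[0], pos[1], pos[2] + 0.2)  # lift by 20 cm
    done = lifted[2] > block_position[2]             # hand-coded completion check
    return done

print(pick_up_block((0.5, 0.1, 0.0)))
```

Every new task (picking a different object, placing instead of picking) would need another function like this, written and tested by an engineer.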

What Robotics Researchers Figured Out

The good thing is that robotics researchers at Google have already figured out how a robot can perform actions tied to one specific task. Getting a robot to pick up a bottle instead of a glass, say, is next to impossible if you don't know the code the robot is operating on.

Lately, however, Google's robotics experts have developed a robot capable of writing its own code.

With natural-language instructions, there is no longer any need to dig into configuration files to change block_target_color from #FF0000 to #FFFF00. Type "pick up the yellow block," and the robot will intelligently do the rest.
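A toy sketch of what that language layer replaces: instead of a human editing a config value by hand, the color word in the instruction is mapped to the hex code the controller expects. The lookup table and function names here are hypothetical illustrations, not Google's implementation.

```python
# Hypothetical mapping from color words to the hex codes a controller expects.
COLOR_HEX = {"red": "#FF0000", "yellow": "#FFFF00", "blue": "#0000FF"}

def target_color_from_instruction(instruction):
    """Pick the target color out of a natural-language instruction."""
    for word, hex_code in COLOR_HEX.items():
        if word in instruction.lower():
            return hex_code
    raise ValueError("no known color in instruction")

print(target_color_from_instruction("pick up the yellow block"))
```

A real system generalizes far beyond a lookup table, but the principle is the same: the instruction, not a config file, drives the behavior.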


Pathways 

Using its Pathways Language Model (PaLM), Google has developed a code-specific system called Code as Policies (CaP).

Present artificial intelligence (AI) systems are limited in that they must be trained from nothing for each new task. Every time a model learns a new task, it forgets everything it learned before.

Basically, each new task means starting from scratch.

PaLM has changed all this. It can train a model to interpret language instructions and turn them into a program the robot can execute.

Autonomous

The model resembles an autonomous entity running on its own, something that remains a tall order for current AI systems.

At Google, however, that transformation is taking place. The robotics researchers trained a model on a set of instructions paired with the corresponding code.

In a blog post this week, the researchers said that when the model received a new set of instructions, it generated new code that re-composed API calls, combined new tasks, and "expresses feedback loops to assemble new behaviors at runtime."

The researchers said that upon receiving a comment-like prompt, the model could come up with its own code.

New Code, New Instructions 

For CaP to write new task-specific code, Google researchers gave it "hints," such as which APIs or tools are available, plus a few example pairs of instructions and code.
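The "hints plus examples" setup can be sketched as prompt construction. The API names and the example pair below are hypothetical; a real system would send a prompt like this to a language model and execute the code it returns.

```python
# Hypothetical hint listing the robot APIs the model may call.
API_HINTS = "# available: detect_object(name), pick(obj), place(obj, pos)"

# Hypothetical few-shot example pairs of instruction -> code.
EXAMPLES = [
    ("stack the red block on the green block",
     "red = detect_object('red block')\n"
     "green = detect_object('green block')\n"
     "pick(red)\n"
     "place(red, green)"),
]

def build_prompt(instruction):
    """Assemble API hints, worked examples, and the new instruction."""
    parts = [API_HINTS]
    for example_instruction, example_code in EXAMPLES:
        parts.append(f"# instruction: {example_instruction}\n{example_code}")
    parts.append(f"# instruction: {instruction}")
    return "\n\n".join(parts)

print(build_prompt("put the blue block in the bowl"))
```

The model's job is then to continue the prompt with code in the same style as the examples, using only the APIs named in the hints.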

Then CaP writes new code for new instructions using "hierarchical code generation," which prompts it to repeatedly define new functions, build up its library over time, and construct a dynamic codebase by itself.

With only a set of instructions, the model can write its own code and reuse it when it receives similar instructions later.
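One way to picture that reuse is a code library keyed by instruction: generate once, then replay. The `generate_code` stub below is a hypothetical stand-in for the model call, not the real system.

```python
code_library = {}  # instruction -> generated source code

def generate_code(instruction):
    """Placeholder for a model call; returns executable Python source."""
    return f"result = 'executed: {instruction}'"

def run_instruction(instruction):
    """Generate code only for unseen instructions; reuse stored code otherwise."""
    if instruction not in code_library:
        code_library[instruction] = generate_code(instruction)
    scope = {}
    exec(code_library[instruction], scope)  # run the stored program
    return scope["result"]

run_instruction("pick up the yellow block")
run_instruction("pick up the yellow block")  # second call reuses cached code
print(len(code_library))
```

The second call never touches the (expensive) generation step; the library simply grows as new instructions arrive.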

Google engineers are testing the model, and positive results could be the springboard for the next generation of robots.


© 2024 iTech Post All rights reserved. Do not reproduce without permission.
