
THE NEWS COMMENTER


Inside Facebook's New Robotics Lab, Where AI and Machines Friend One Another


Added 05-20-19 07:06:02am EST - “The social network has a plan to merge the worlds of artificial intelligence and real-world machines, so that both may grow more powerful.” - Wired.com


Posted By TheNewsCommenter: From Wired.com: “Inside Facebook's New Robotics Lab, Where AI and Machines Friend One Another”. Below is an excerpt from the article.

At first glance, Facebook’s nascent robotic platform looks a bit … chaotic. In a new lab in its palatial Silicon Valley HQ, a red-and-black Sawyer robot arm (from the recently defunct company Rethink Robotics) is waving all over the place with a mechanical whine. It’s supposed to casually move its hand to a spot in space to its right, but it goes up, up, up and way off course, then resets to its starting position. Then the arm goes right, and gets pretty close to its destination. But then, agh!, it resets again before—maddeningly for those of us rooting for it—veering wildly off course again.

But, like a hare zigzagging back and forth to avoid a falcon, this robot’s seeming madness is in fact a special brand of cleverness, one that Facebook thinks holds the key not only to better robots, but also to better artificial intelligence. This robot, you see, is teaching itself to explore the world. And that could one day, Facebook says, lead to intelligent machines like telepresence robots.

At the moment robots are very dumb—generally you have to spell everything out in code for them: This is how you roll forward, this is how you move your arm. We humans are much smarter in how we learn. Even babies understand that an object that moves out of view hasn’t vanished from the physical universe. They learn they can roll a ball, but not a couch. It’s fine to fall off a couch, but not a cliff.

All of that experimentation builds a model of the world in your brain, which is why later on you can learn to drive a car without crashing it immediately. “We know in advance that if we're driving near a cliff and we turn the wheel to the right, the car is going to run off a cliff and nothing good is going to happen,” says Yann LeCun, chief AI scientist at Facebook. We have a self-learned model in our head that keeps us from doing dumb things. Facebook is trying to give that kind of model to the machines, too. Learning “models of the world is in my opinion the next challenge to really make significant progress in AI,” LeCun adds.
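To make that idea a little more concrete, here is a minimal sketch of what a learned "model of the world" can look like in code: a small neural network that predicts the robot's next state from its current state and the action it takes, trained on experience the robot gathers by moving on its own. Everything here is an illustrative assumption (the PyTorch framework, the `ForwardModel` name, the 7-dimensional state and action, the `train_step` helper), not details from the article or from Facebook's actual system.

```python
# Sketch of a learned forward model: predict the next state of a robot arm
# given its current state and an action. Names and sizes are assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 7, 7  # e.g. joint angles and joint commands (assumed)

class ForwardModel(nn.Module):
    """Predicts next_state from (state, action)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, STATE_DIM),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

model = ForwardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(replay_batch):
    # replay_batch: (state, action, next_state) tensors collected by the robot
    state, action, next_state = replay_batch
    predicted = model(state, action)
    # The loss is simply how wrong the prediction of the next state was.
    loss = nn.functional.mse_loss(predicted, next_state)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once such a model is accurate, it can be used the way LeCun describes the mental model of a driver: before acting, the machine can "imagine" what an action would do and avoid the ones that lead somewhere bad.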

Now, the group at Facebook isn’t the first to try to get a robot to teach itself to move. Over at UC Berkeley, a team of researchers used a technique called reinforcement learning to teach a two-armed robot named Brett to shove a square peg in a square hole. Simply put, the robot tries lots and lots of random movements. If one gets it closer to the goal, the system gives it a digital “reward.” If it screws up, it gets a digital “demerit,” which the robot keeps a tally of. Over many iterations, the reward-seeking robot gets its hand closer and closer to the square hole, and eventually drops the peg in.
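For readers curious what that reward-and-demerit loop looks like in practice, below is a toy sketch in Python: the "hand" is just a 2-D point, the goal stands in for the square hole, and random movements that shrink the distance earn a reward while the rest earn a demerit. It illustrates the idea only; the Berkeley team's actual system used deep reinforcement learning, and every number and name here is made up.

```python
# Toy illustration of reward-driven trial and error, not the real Brett setup.
import random

goal = (5.0, 3.0)   # where the "square hole" is (made-up coordinates)
hand = [0.0, 0.0]   # starting hand position
score = 0           # running tally of rewards and demerits

def distance_to_goal(pos):
    return ((pos[0] - goal[0]) ** 2 + (pos[1] - goal[1]) ** 2) ** 0.5

for step in range(10_000):
    # Try a small random movement.
    move = (random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1))
    candidate = [hand[0] + move[0], hand[1] + move[1]]

    if distance_to_goal(candidate) < distance_to_goal(hand):
        score += 1          # reward: the movement helped, so keep it
        hand = candidate
    else:
        score -= 1          # demerit: the movement hurt, so discard it

    if distance_to_goal(hand) < 0.05:
        print(f"Reached the hole after {step + 1} movements, score {score}")
        break
```

The real systems replace this blind keep-or-discard rule with a neural network policy that learns which movements tend to earn rewards, but the underlying loop of try, score, and adjust is the same.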

Read more...
