
THE NEWS COMMENTER


Your future neural network: A big black box full of light and mirrors


Added 05-20-19 12:10:02pm EST - “Optical neural network scales, saves energy, as long as it isn't all counted.” - Arstechnica.com


Posted By TheNewsCommenter: From Arstechnica.com: “Your future neural network: A big black box full of light and mirrors”. Below is an excerpt from the article.

Artificial intelligence (AI) has experienced a revival of pretty large proportions in the last decade. We have gone from AI being mostly useless to letting it ruin our lives in obscure and opaque ways. We’ve even given AI the task of crashing our cars for us.

AI experts will tell us that we just need bigger neural networks and the cars will probably stop crashing. You can get there by adding more graphics cards to an AI, but the power consumption becomes excessive. The ideal solution would be a neural network that can process and shovel data around at near-zero energy cost, which may be where we are headed with optical neural networks.

To give you an idea of the scale of energy we are talking about here, a good GPU uses 20 picojoules (1 pJ is 10⁻¹² J) for each multiply-and-accumulate operation. A purpose-built integrated circuit can reduce that to about 1 pJ. But, if a team of researchers is correct, an optical neural network might reduce that to an incredible 50 zeptojoules (1 zJ is 10⁻²¹ J). So, how did the researchers come to that conclusion?
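To put those per-operation figures side by side, here is a rough back-of-the-envelope sketch in Python. Only the per-MAC energies come from the numbers quoted above; the network shape (and therefore the operation count) is an arbitrary example chosen for illustration.

```python
# Rough energy comparison for one forward pass of a hypothetical network.
# Per-MAC energies are the figures quoted in the article; the layer sizes
# are made up for illustration.

LAYER_SIZES = [1024, 512, 512, 10]  # hypothetical fully connected network

# One multiply-accumulate (MAC) per weight, so the MAC count is the sum of
# products of consecutive layer widths.
macs = sum(a * b for a, b in zip(LAYER_SIZES, LAYER_SIZES[1:]))

ENERGY_PER_MAC = {
    "GPU": 20e-12,                 # 20 pJ per MAC
    "custom ASIC": 1e-12,          # 1 pJ per MAC
    "optical (claimed)": 50e-21,   # 50 zJ per MAC
}

print(f"MAC operations per forward pass: {macs:,}")
for name, joules_per_mac in ENERGY_PER_MAC.items():
    print(f"{name:>18}: {macs * joules_per_mac:.3e} J per pass")
```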

Let’s start with an example of how a neural network works. A set of inputs is spread across a set of neurons. Each input to a neuron is weighted and added, and then the output of each neuron is given a boost. Stronger signals are amplified more than weak signals, making the differences larger. That combination of multiplication, addition, and boost occurs in a single neuron, and neurons are arranged in layers, with the output from one layer becoming the input of the next. As signals propagate through the layers, this structure amplifies some of them and suppresses others.
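As a concrete illustration of that multiply-add-boost structure, here is a minimal forward pass in Python with NumPy. The layer sizes, the random weights, and the choice of a ReLU as the "boost" (nonlinear) function are assumptions made for the sake of the example, not details from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, biases):
    """One layer: weight and add each input, then apply the nonlinear 'boost'.

    Strong positive sums pass through; weak or negative ones are suppressed
    (here via ReLU, one common choice of nonlinearity).
    """
    z = weights @ x + biases      # multiply and accumulate
    return np.maximum(z, 0.0)     # nonlinear boost

# Hypothetical three-layer network: 4 inputs -> 8 hidden neurons -> 3 outputs.
w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
w2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

x = rng.normal(size=4)            # a made-up input signal
hidden = layer(x, w1, b1)         # output of one layer...
output = layer(hidden, w2, b2)    # ...becomes the input of the next
print(output)
```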

For all of this to do useful calculations, we need to preset the weighting of all inputs in all layers, as well as the boost function (more accurately, the nonlinear function) parameters. These weights are usually set by giving the neural network a training dataset to work on. During training, the weighting and function parameters evolve to good values through repeated failure and occasional success.
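The training process the article alludes to can be sketched as a loop: run examples through the network, measure the error, and nudge the weights in the direction that reduces it. The tiny single-neuron model, learning rate, and dataset below are invented purely to show the idea.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up training data: learn y = 2*x1 - 3*x2 with a single linear neuron.
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -3.0])

w = np.zeros(2)    # weights start at arbitrary values
lr = 0.1           # learning rate (an assumed hyperparameter)

for step in range(100):
    pred = X @ w                  # forward pass
    error = pred - y              # how wrong the current weights are
    grad = X.T @ error / len(X)   # gradient of the mean squared error
    w -= lr * grad                # nudge the weights toward lower error

print("learned weights:", w)      # should approach [2, -3]
```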

Read more...


