BRAIN SURGEON NOT REQUIRED (May 27, 1998)

"The next scientific revolution that will introduce learning robots, seeing machines, and talking machines will be based on scientists’ understanding of how the human brain learns," he explains. "The trouble is, a wide body of science happens to be wrong. And unless scientists face the facts, progress on these marvelous inventions will slow to a crawl."

At stake, he maintains, is how powerful and independent the next generation of robots will be. Prevailing theory, he argues, will leave us with robots that require "babysitting" - an inordinately large amount of human input to accomplish their tasks. What's needed, he says, are machines that are autonomous.

Drawing on an unusual background for research in this field - operations research, which is better known for developing Wall Street trading models and airlines’ yield management systems - Dr. Roy has created mathematical models to fill the gap.

Ongoing Debate
Dr. Roy started questioning the classical theories of brain-like learning two years ago. The questions turned into a crusade. Since then he has argued with scholars in imposing subspecialties like cognitive science, computational neuroscience, and artificial neural networks.

The exchanges have taken place over the Internet and in two open debates, first at the International Conference on Neural Networks (ICNN'97) in Houston in April 1997 and then earlier this month at the World Congress on Computational Intelligence (WCCI'98) in Anchorage, Alaska.

Only recently has Dr. Roy seen other scientists - most notably Professor Christoph von der Malsburg of Ruhr-University in Germany, a pioneer in the field - acknowledge his position.

A classic, flawed theory
Prevailing thought draws on the teaching of Donald Hebb of McGill University, Montreal, a pioneer theoretician who postulated a mechanism by which the brain learns to distinguish objects and signals, add, and understand grammar. According to Hebb, learning involves adjusting the "strength of connections" between cells or neurons in groups of cells known as neural networks.

Hebb's followers extended his idea about brain-like intelligent learning systems with two concepts:

- Autonomous systems. Each neuron is a self-adjusting cell that changes the strengths of its connections to other neurons when learning so that it makes fewer errors when it repeats a task. These neurons are viewed as "autonomous or self-learning systems." Scientists used this idea to derive "local learning laws," or mathematical formulas believed to be used by neurons.

- Instantaneous learning. These scientists also presumed that learning in the brain is "instantaneous" - as soon as something to be learned is presented, the appropriate brain cells use their "local learning laws" to make instant adjustments to the strength of their connections to other neurons. When learning is complete, the brain discards its memory of the learning example.

This theory of "memoryless learning" excited scientists and engineers worldwide because it allowed them to envision simple brain-like learning machines that wouldn’t need huge amounts of computerized memory.
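The two concepts above can be illustrated with a small sketch. The code below uses the classical Hebbian rule (each weight grows in proportion to the product of the two connected neurons' activities) as a stand-in for a "local learning law"; the specific update formula, the learning rate, and the starting weights are illustrative assumptions, not the particular formulas the researchers described here derived.

```python
# A minimal sketch of a "local learning law" in the classical Hebbian style.
# Each connection weight is adjusted using only locally available signals
# (the activities of the two neurons it connects), and each training example
# is discarded immediately after the update - "memoryless" learning.
# The rule delta_w = eta * x * y, eta = 0.1, and the initial weights are
# illustrative choices for this sketch.

def hebbian_update(weights, inputs, eta=0.1):
    # Neuron output: weighted sum of its inputs.
    y = sum(w * x for w, x in zip(weights, inputs))
    # Local update: each connection strengthens in proportion to the
    # product of presynaptic input (x) and postsynaptic output (y).
    return [w + eta * x * y for w, x in zip(weights, inputs)]

weights = [0.5, 0.5]
for example in [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]:
    weights = hebbian_update(weights, example)  # example is then discarded
```

Because the update touches only one connection's endpoints and no stored examples, a machine built this way needs no memory of its training data - which is exactly what made "memoryless learning" so appealing, and which Dr. Roy argues is also its weakness.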

Stumbling block
The major stumbling block for future technology, says Dr. Roy, is that none of these learning methods reproduces the external characteristics of the human brain, principally its independent way of learning. Therefore, methods based on these classical ideas require constant intervention by engineers and computer scientists - providing network designs, setting parameters correctly, and so on - to make them work. This drawback is severe, he maintains.

Instead, says Dr. Roy, scientists must admit that their constructs diverge from the human brain and return to the original model. Drawing on the way the brain actually works, he has used operations research to create autonomous learning algorithms that are more human-like because they don't require ongoing human input.

Dr. Roy is confident his challenge will prevail. "The best model those who study artificial intelligence have is still the human brain," he says. "Up until now, we’ve done an adequate job copying its workings. We have to do a better job."

Dr. Asim Roy is the author and co-author of numerous articles on artificial intelligence, including "A Neural Network Learning Theory and a Polynomial Time RBF Algorithm," which appeared in the IEEE Transactions on Neural Networks; "Iterative Generation of Higher-Order Nets in Polynomial Time Using Linear Programming," which also appeared in that journal; "An Algorithm to Generate Radial Basis Function (RBF)-like Nets for Classification Problems," Neural Networks; "A Polynomial Time Algorithm for the Construction and Training of a Class of Multilayer Perceptrons," Neural Networks; "A Polynomial Time Algorithm for Generating Neural Networks for Pattern Classification - Its Stability Properties and Some Test Results," Neural Computation; and "Pattern Classification Using Linear Programming," ORSA Journal on Computing.

The Institute for Operations Research and the Management Sciences (INFORMS) is an international scientific society with 12,000 members, including Nobel Prize laureates, dedicated to applying scientific methods to help improve decision-making, management, and operations. Members of INFORMS work primarily in business, government, and academia. They are represented in fields as diverse as airlines, health care, law enforcement, the military, the stock market, and telecommunications.