“In a positronic brain, the concept of redundancy is taken to an extreme. All of the copies must agree at all times, and the diagnostic systems run checks constantly. If a few, or even one, of the billions of redundant copies of the embedded Three Laws do not produce identical results compared to the majority state, that can force a partial, perhaps even a complete, shutdown.” Jomaine could see in Kresh’s face that he was losing him.
“Forgive me,” Jomaine said. “I did not mean to lecture at you. But it is the existence of these billions of copies of the Laws that is so crippling to positronic brain development. An experimental brain cannot really
“I see the difficulty,” Donald said. “I must confess that I find the concept of a robot with your modified Three Laws rather distressing. But even so, I can see why your gravitonic brains do not have this inflexibility problem, because the Laws are not so widely distributed. But isn’t it riskier to run with fewer backups and copies?”
“Yes, it is. But the degree of risk involved is microscopic. Statistically speaking, your brain, Donald, is not likely to have a major Three Laws programming failure for a quadrillion years. A gravitonic brain with only a few hundred levels of redundancy is likely to have a Law-level programming failure sooner than that. Probably it can’t go more than a billion or two years between failures.
“Of course, either brain type will wear out in a few hundred years, or perhaps a few thousand at the outside, with special maintenance. Yes, the positronic brain is millions of times less likely to fail. But even if the chance of being sucked into a black hole is millions of times lower than the chance of being struck by a meteor, both are so unlikely that they might as well be impossible for all the difference it makes in our everyday lives. There is no increase in the
“That is a comforting argument, Dr. Terach, but I cannot agree that the danger levels can be treated as equivalent. If you were to view the question in terms of a probability ballistics analysis-”
“All right, Donald,” Kresh interrupted. “We can take it as read that nothing could be as safe as a positronic brain robot. But let’s forget about theory here, Terach. You’ve told me how the New Laws or Three Laws can be embedded into a gravitonic brain. What about Caliban? What about your splendid No Law rogue robot? Did you just leave the embedding step out of the manufacturing process on his brain?”
“No, no. Nothing that simple. There are matrices of paths meant to contain the Laws, which stand astride all the volitional areas of the gravitonic brain. In effect, they make the connection between the brain’s subtopologic structures. If those matrices are left blank, the connections aren’t complete and the robot would be incapable of action. We
“What, Doctor, was the nature of the experiment?” Donald asked.
“To find out what laws a robot would choose for itself. Fredda believed-we believed-that a robot given no other Law-level instruction than to seek after a correct system of living would end up reinventing her New Laws. Instead of laws, she-we-embedded Caliban’s matrices with the desire, the need, for such laws. We gave him a very detailed, but carefully edited, on-board datastore that would serve as a source of information and experience to help him in guiding his actions. He was to be run through a series of laboratory situations and simulations that would force him to make choices. The results of those choices would gradually embed themselves in the Law matrices, and thus write themselves in as the product of his own action.”
“Were you not at all concerned at the prospect of having a lawless robot in the labs?” Donald asked.
Jomaine nodded, conceding the point. “We knew there was a certain degree of risk to what we were doing. We were very careful about designing the matrices, about the whole process. We even built a prototype before Caliban, a sessile testbed unit, and gave it to Gubber to test in a double-blind setup.”
“Double-blind?” Kresh asked.