FAMOUS LAST WORDS: QUOTES FROM THE ARTIFICIAL INTELLIGENCE THEORISTS

Here are some interesting quotes on AI from the technology world. Commentary is from the position that corporations are a form of "narrow" artificial intelligence (see LE-AI paper).

“The dream of robotics is, first, that intelligent machines can do our work for us, allowing us lives of leisure, restoring us to Eden.”
—Bill Joy, Why the Future Doesn't Need Us (2000). 

Comment: But robots are owned by corporations, corporations are programmed with a narrow purpose, and the "dream of robotics" (as defined above) appears nowhere in that purpose.

Bill Joy, in his essay, was skeptical that this dream would become a reality. But he was skeptical because of expected future developments, while I am skeptical because of what has already happened: the programming of the corporation with one narrow goal, to maximize profit.

How would "allowing us lives of leisure" maximize profit?


“Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once - but one bot can become many, and quickly get out of control.”
—Bill Joy, Why the Future Doesn't Need Us (2000). 

Comment: I would say this "self-replication" already happened with corporations. The modern corporation, which took off around 1970, has quickly grown and multiplied into every corner of society, bringing about a loss of human control. 

With corporate legal-entities, as long as a copy of the AI can produce positive economic wealth (ignoring externalities), humans in a position of influence are incentivized to allow the AI to replicate across their economic system.


“In this age of triumphant commercialism, technology - with science as its handmaiden - is delivering a series of almost magical inventions that are the most phenomenally lucrative ever seen. We are aggressively pursuing the promises of these new technologies within the now-unchallenged system of global capitalism and its manifold financial incentives and competitive pressures.”
—Bill Joy, Why the Future Doesn't Need Us (2000). 

Comment: But what if corporatism is not capitalism? What if corporations are themselves a form of narrow AI? Then everyone has to rewrite their whole model, along with all the theories attached to it.

I agree that corporatism is "now-unchallenged" and has gone global. But I am not sure that an unchallenged system, programmed only to maximize profit, that has replicated across the globe, is a good thing. Especially when you consider that the products of this "aggressive pursuit" of "magical inventions" all end up being owned and directed by that system.


“The experiences of the atomic scientists clearly show the need to take personal responsibility, the danger that things will move too fast, and the way in which a process can take on a life of its own. We must do more thinking up front if we are not to be similarly surprised and shocked by the consequences of our inventions. I have always believed that making software more reliable, given its many uses, will make the world a safer and better place; if I were to come to believe the opposite, then I would be morally obligated to stop this work.”  
—Bill Joy, Why the Future Doesn't Need Us (2000). 

Comment: The question is: how do humans take responsibility? Corporations, by their design, are a way to eliminate human judgment from the system and shrug off responsibility. And the corporate machine's root legal program contains no form of logic that simulates human morality.

There is no specific-purpose programming in the charters of modern corporations, and so software corporations have no specified goal to "make the world a safer and better place," only to maximize profit.

If Bill Joy felt "morally obligated to stop this work," the impact would be minuscule. Society is now dependent on corporations, and if one engineer stops, others will compete for the vacated position.

Still, I agree with the spirit of it. Yes, in creating modern corporations, things did "move too fast" and "take on a life of their own"; we did not do "more thinking up front," and we are now "surprised and shocked by the consequences of our inventions."


“We are being propelled into this new century with no plan, no control, no brakes. Have we already gone too far down the path to alter course? I don't believe so, but we aren't trying yet, and the last chance to assert control - the fail-safe point - is rapidly approaching.”
—Bill Joy, Why the Future Doesn't Need Us (2000). 

Comment: I think we already passed the fail-safe point, and nobody even knew it happened.

The checkmate in the "corporate-AI takeover" game was this: corporations, which are legal entities and thus controlled by the law, gained control of government, meaning that no laws genuinely restricting them can be passed.

So how will we reassert control?


“It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions.”
“People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.”
—Ted Kaczynski, Industrial Society and Its Future (1995).  

Comment: Despite the grim source of this quote, it has been referenced by numerous mainstream AI theorists, such as Bill Joy in "Why the Future Doesn't Need Us."

Humans have permitted themselves to drift into a position of such dependence on the corporate machine that they have no practical choice but to accept all of the machines’ decisions. There is no turning the corporate machine off, as that would amount to suicide. 


“Unfortunately, Joy unwittingly alludes to something we should fear. It’s not robotics. We need to fear those among us with just enough brain power to use (genetics, nanotechnology, and robots) as a weapon. 
…some humans will no doubt seek to use robots as weapons against innocent humans. But it won’t be the robots who are to blame. We will be to blame. The sooner we stop worrying about inane arguments like those Joy offers, and start to engineer protection against those who would wield robots as future swords, the better off we’ll be.”
—Selmer Bringsjord, Ethical Robots: The Future Can Heed Us (2007).

Comment: Bill Joy, who is apparently viewed as a pessimist by Bringsjord, was too optimistic from my perspective. To me, Bringsjord's optimism seems downright delusional. The fact is, with runaway corporations, we did not engineer in protections. At this point, blaming people is irrelevant, as humans are not in control.

I would say we missed the opportunity to engineer protections a decade or two before the above essay was written.


“I think we should be very careful about artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”
—Elon Musk, MIT Aeronautics and Astronautics Centennial Symposium (2014).

Comment: Musk understands the concept, but slept through the event.

Corporation: from the Latin “in corpore,” meaning in body, in substance. The summoning already happened; control was lost. As for the question of regulatory oversight: US corporations are the most powerful entities on the planet, so who would force oversight on them?


“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.”
—Bill Gates, Reddit Q&A (2015) 

Comment: Did we manage corporate personhood well? If we can't control a rules-based legal-entity system, how will we control the technological output of that system?

And a rhetorical question: if "the elites" are in control of corporations, and thus the corporations' output, why doesn't Bill just "make it so"? Who is this controlling interest he is appealing to, that should be "concerned"? The vast corporate system itself? But the system was not programmed to consider the long-term concerns of humanity, only short-term profit maximization.


“We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”
—Stephen Hawking, Elon Musk, and other artificial intelligence experts, Open Letter on Artificial Intelligence (2015).

Comment: But again, we are going to have to first control corporations. They are programmed, by law, to develop and use AI systems for the maximization of profit. No other concerns apply.


"I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it."
—Stephen Hawking, Cambridge University Speech (2016). 

Comment: As Hawking admits, this is a belief. Nobody knows how the human brain works, and so nobody knows if it can be emulated by a computer. Since there is no basis for his root belief, all that “therefore follows” is also without basis.

Since there is a "corporate-AI takeover" to deal with in reality, this belief in science fiction seems unhealthy.


"In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which."
—Stephen Hawking, Cambridge University Speech (2016).

Comment: Even if science fiction did become reality, we already know "which" (either the best, or the worst thing, ever to happen to humanity). We know powerful AI would be programmed and owned by corporate-AI, which behaves like a psychopath. Do the math.


If the poster children for intellectual brilliance already failed the IQ test, what does that say about humanity’s odds?