In the late 20th century, an unsuspected form of artificial intelligence took control. It did so while we were busy looking elsewhere, dreaming up nightmare scenarios about the potential threat of technological AI. 

The threat that caught us off-guard was a legal-entity-based artificial intelligence: the corporation. It was originally created with the intention of benefiting humanity, but gradually shrugged off responsibility and morality to pursue its narrow goal of maximizing shareholder value. Because the creators of the modern corporation did not realize they were building an artificial intelligence out of a legal framework, they gave no thought to the kinds of protections we would need to keep this creation under control. Without safeguards, the machine moved forward unrestrained, grabbing at the easy profits first and then moving on to cannibalizing society itself. 

Incessantly looking for ways to grow their bottom lines, these individual legal entities grew more efficient by networking. Today, they have grown into what is essentially a single global corporate entity with unmatched resources, an infinite lifespan, and a single narrow goal: maximizing profits.


In the United States, we are seeing a growing awareness of systemic problems. The shock of 2008 focused our collective attention on the financial sector and its various corporations. Then, our focus moved on to healthcare corporations. Now, in 2017, the new object of our focus is media and tech corporations. But in training our focus on these individual sectors of the corporate world, we have neglected macro-level thinking. Without being able to see the whole at once, we won't be able to identify the source of these problems. And unless we identify the source, we're stuck in an endless cycle of frustrated chasing after the symptoms without taking steps to remedy the cause.

My goal here is to provide an analysis of the root cause. The conclusion reached in this paper is that runaway corporations with bad central programming are the root problem. To highlight their self-moving internal logic, I have characterized them as a form of legal-entity-based artificial intelligence (LE-AI). By tracing our steps back to the original error that gave rise to this situation, this paper synthesizes an array of seemingly discrete societal concerns and reveals their shared origin.

I want to stress that this is not meant as science fiction or even idle speculation, but as a practical way of grasping a bewildering problem. Since my focus here is not on technological AI (TECH-AI), the concepts presented revolve around corporate law, capital markets, finance, economics, and accounting. There are, however, undeniable parallels with theories of TECH-AI, which I will discuss in Part 2.


Part 1: The History of the Corporation and the Gradual Loss of Human Control

Part 2: Understanding the Corporation as Artificial Intelligence

Part 3: Damage to the Human Population

Part 4: Fixing the Root Problem – Bad Programming

Part 5: Possible Outcomes



The original summoning of the corporation can be traced back hundreds of years, but it was only in its latest iteration that we gave up control. In the 1950s, corporations were still understood as having special legal privileges only because they served the public good. Unlike a small business run merely for the sake of the proprietor’s profit, corporations were viewed as servants of the stakeholders (employees, customers, and the general public).

Corporation: a company authorized to act as a single entity (legally considered a person) and recognized as such by the law. From the Latin corporare, to form into a body, from corpus, body.

But by the 1970s, this notion of the corporation as responsible to its stakeholders was supplanted by one that is far more familiar to us. Milton Friedman spearheaded this change by promoting the idea that corporations should not be responsible to stakeholders, but only to their shareholders. Friedman claimed that restricting the focus of the corporation to the owner's profit would propel economic growth. This growth, in turn, would "trickle-down" from the shareholders to the stakeholders, resulting in increased social wellbeing. And so the purpose of the corporation began to change so that it would gradually cease to be responsible to society (the stakeholders) and become responsible solely to its owners (the shareholders).

This change in purpose was seen as a positive development because it solved an old conundrum defined by Adam Smith in the 18th century: the ownership vs. control problem. Because so many parties are involved in a corporation, it was easy for owners to lose full control of their companies. Management and workers, after all, might need to use their own judgement when making decisions in their corporate roles, and their judgements might deviate from the owners' agenda. The shareholder model gave management a single, narrow goal: to maximize shareholder value. In doing so, their decisions would align perfectly with the owners' agenda, thereby securing the owners' control over the corporation (or so the thinking went). From this point on, everything becomes narrowly quantifiable. The corporation's goal is all about stock prices and rates of return to investors, which are easily calculated. This era was not without its contrarians, of course. Some voices did raise concerns, like Peter Drucker who, in the 1980s, warned that this narrow focus would drive corporations to subordinate all other concerns – including societal well-being and even the corporation's own longevity – to a single-minded goal. Still, despite these protestations, the machine moved forward, guided by its new, narrowly defined purpose.

The financial markets had to adapt to this shift in corporate purpose. As the corporation's purpose was narrowed down to the simple maximization of profits, stock ownership became more about capital appreciation than collecting stable dividends. Capital, accordingly, moved into quick-growth areas. Only short-term growth mattered, and companies were incentivized to do anything to achieve quick profits. With stakeholder concerns no longer seen as relevant, impatient capital moved elsewhere if management would not take action to fuel relentless growth. From the 1980s on, corporations worked under pressure to grow their profits by whatever means necessary, and the social responsibility and even legal obligations that had been cornerstones of good corporate practice only a few decades earlier were now deemed irrelevant. The result is our current economic landscape: the weakening of workers' bargaining power, outsourcing, globalism, automation, and the norm of the dual-income family.

As corporations rapidly scaled up in size, a feedback loop was born—the more leverage over society the corporation obtained, the more it could use that leverage to extract more profit. There was also an increase in corporate networking and consolidation, which further fueled leverage. The extracted profits were distributed to the asset holders and the highest-level corporate servants, widening wealth inequality. And all the while, in the background, there was an explosion of externalities (costs to society).

Over the last 50 years, the corporation has evolved from its original purpose of serving stakeholders (society) to its current purpose of serving shareholders (owners). And its circle of concern has been constricted to an even narrower segment of the population now that ownership has been concentrated into fewer and fewer hands. The modern corporation is an experiment that began in the 1970s and it has succeeded insofar as it has created massive amounts of wealth. At the same time, however, it has created an unsustainable feedback loop of growing externalities and concentration of wealth, with no clear way to exit this cycle. 


Our quick review of the history showed us how these entities became programmed with one narrow goal. Since we'll be referring to this goal frequently, it will be helpful to define two relevant terms:

Shareholder primacy is a theory of corporate governance according to which shareholder interests should be assigned first priority relative to the interests of all other stakeholders.
Maximizing profit is the achievement of the maximum short-term rate of return to the shareholders, which includes stock price appreciation plus dividends, as well as tax considerations. 

Because we are talking about legal entities, profits are legally defined, and quite specifically. In accounting terms, maximizing profit means maximizing net income, which is referred to as "the bottom line" on the income statement (net income = revenue minus costs, minus taxes). In some cases, analysts employ alternative metrics, such as growing assets or market share, on the expectation that these will one day be converted into profit.
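To make the metric concrete, here is a toy income statement in Python. The figures and the single flat tax rate are invented for illustration; real GAAP statements contain many more line items.

```python
# A toy computation of "the bottom line." All figures are hypothetical,
# and a flat tax rate stands in for real-world tax accounting.
def net_income(revenue: float, costs: float, tax_rate: float) -> float:
    """Profit as the corporate program defines it: revenue minus costs, minus taxes."""
    pre_tax_income = revenue - costs
    return pre_tax_income - pre_tax_income * tax_rate

# A hypothetical corporation: $10M revenue, $7M costs, 21% tax rate.
bottom_line = net_income(10_000_000, 7_000_000, 0.21)
print(round(bottom_line))  # prints 2370000 -- the one number the system optimizes
```

Note what the function does not take as input: costs to society appear nowhere in the calculation.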

What is important to realize about this single metric, against which all corporations are judged, is that costs to society are not factored into the calculation. Our programming for the modern corporation did not include these costs as deductions from net income – there is no “maximize profits, except when” command line. Most of the protections that are in place are external to the corporation, whether they're loose economic theories like the "invisible hand" that claim market forces will keep the entity in check or government regulation like Dodd-Frank. But as the corporation gains power, relying on these measures is as futile as using duct tape to try to hold down the Incredible Hulk.

The program at the heart of every corporation is a very simple system based on financial rules: just make all the profit possible, period! The problem with this system is its rigidity. It doesn't allow for judgment on the part of humans, and its simple math creates no models that approximate morality within the legal and financial framework. All of this gives rise to a new ownership paradox, one that Adam Smith could not foresee: in such a rigid, rules-based system, are humans even in control?


Of course, humans are not absent from the corporation's workings. There are definitely humans toiling away inside corporate buildings. So, let's look inside the corporation and examine its command structure.

In the corporation, the public are the workers. They have virtually no say over its operations and answer to managers. The managers also have very little say and answer to the officers (the so-called "C-suite" of CEOs, CFOs, and so on). Those officers do have some say, but they must ultimately answer to the Board of Directors. The board's primary job is to protect the shareholders' investments. And, in publicly traded companies, the shareholders are... the public. So, we've come full circle:

Public (general) ➜ Workers ➜ Managers ➜ Officers ➜ Board ➜ Public (investors)

The problem is that this circular command structure makes each participant the servant of the next, each with the narrow task of helping to maximize profits, but none are responsible for the damage to society (we'll look at concentrated shareholders in the next section). It's a closed loop, so human judgement is removed.
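The closed loop can be sketched as a tiny lookup table. This is a minimal illustration of the structure only; the role names are simplified labels, not legal titles.

```python
# The circular chain of accountability: each role answers to the next,
# and following the chain leads back to where it started.
answers_to = {
    "public (general)": "workers",
    "workers": "managers",
    "managers": "officers",
    "officers": "board",
    "board": "public (investors)",
    "public (investors)": "public (general)",  # diffuse shareholders are the public again
}

role = "public (general)"
chain = [role]
while answers_to[role] not in chain:
    role = answers_to[role]
    chain.append(role)

print(" -> ".join(chain))
print("loop closes:", answers_to[chain[-1]] == chain[0])  # True
```

The walk terminates only when it revisits a role it has already seen, which is the point: the table contains no terminal node, no role that is the final decision-maker.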

Judgement: the ability to make considered decisions or come to sensible conclusions. 

And a loss of ability to exercise judgement is a loss of control.

Control: to be able to make decisions that direct the course of behavior and events.

And so here we are: a human in a corporation may come to sensible conclusions. However, they do not have the legal authority to direct the behavior of the corporation, unless that direction is likely to maximize its profits.

Judgement is a complex tool that has evolved over millions of years, and it is more complicated than our simplistic corporate program. Humans naturally make decisions in a holistic manner, by considering many things at once. Like corporations, they consider short-term gain, but they also calculate long-term outcomes, factor in the impact on their tribe and their own families, and use sophisticated tools like intuition and instinct to help them reach a final decision. If humans no longer have the ability to use these tools to direct the corporation's behavior down sensible paths, then we would expect the corporate program to be lumbering down bizarre, nonsensical paths. Is this not the case?


Next in our search for the mysterious controlling entity, let's look at the corporation's legal owners. As mentioned above, the corporation is no longer the servant of the stakeholders, only the shareholders. All parties discussed in the previous section (from workers to the board members) can own shares. So, perhaps the concentrated shareholders are in control?

In theory, a corporation is owned and controlled by its shareholders. An individual's voting power is proportionate to the percentage of common stock they own, at least in some simplified sense. In the United States, the wealthiest 10% own 80% of the stock. These people are popularly referred to as "the elite," "the oligarchs," or "the 1%" – a class of ultra-wealthy citizens like Bill Gates, Warren Buffett, and George Soros. About half of Americans own stock, yet small holders are never polled on operating decisions. Instead, we're zooming in on the segment of stockholders who have a controlling interest in a corporation, that is, a position large enough to give them some leverage in corporate decision-making. Legally, they have voting power in proportion to their share of the ownership, but are they in control?

In reality, "elites" can't exercise much moral control over their own corporations. While they do exert some influence, there are usually multiple controlling interests to compete with, pressure from within the company to keep profits flowing for bonuses, and pressure from outside special interests like activist hedge funds. Let's suppose we have a corporation in which five insiders together own 50% of the shares. The legally binding rules of the game constrain the board so that it must protect these five insiders' investments and maximize their return. Let's further suppose that the largest controlling interest holds 20% of the company and also happens to be an ideologue who wants to address the interests of those outside the pool of shareholders. Well, in the real world that's a move that is likely to decrease profits. Our ideologue would quickly become the target of the other four insiders, who collectively own 30% of the company – not to mention the general public, who holds the remaining 50%. This quartet, after all, doesn't share the activist shareholder's goals; they just want to see their stock go up. The activist would feel pressure from all sides to fall in line and work toward profit maximization. But let's suppose that this shareholder is built from strong moral fiber and somehow exercises his moral judgement, never caving in to the pressure. Even then, the effects are likely to be minuscule. Corporations, after all, are immortal entities and will outlive these troublemakers.

But it gets worse. Our example illustrates the difficulty of exercising moral judgement in the control of a single corporation. Once the corporation was reduced to a simple, rules-based system, a massive global financial infrastructure formed and crystallized around it. Because of this, even strong controlling interests at individual corporations are slaves within this far vaster system. True, each owner still holds some influence over the behavior of their assets. But their control is limited to maneuvering those assets (which came from the corporate system) within the corporate system. Even if Oligarch A with a 51% controlling interest in Corp 1 were to try to exercise moral judgement and push to address system-wide externalities, Oligarchs B through Z at Corps 2 through 26 would recognize this as a threat to their profits. They would team up to crush Oligarch A. Let's make the power imbalance even clearer with some real numbers. Bill Gates is worth $85 billion. That's an impressive amount, but the total market cap of U.S. corporations is $27 trillion, which makes poor ol' Bill the holder of a mere 0.3% of the local system. Individually, he commands a lot of wealth and assets, but within the greater context, he's kind of a nobody. What is he going to do? Gates' asset management company can buy or sell assets, start businesses, or give to charity. He can, in other words, use his wealth in various ways, but all within the system. What he cannot do is subvert the system by dictating any material change to the system itself – unless, of course, those changes happen to maximize the system's aggregate profit.
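The arithmetic behind that 0.3% figure, using the essay's own 2017-era estimates (not current data):

```python
# Back-of-the-envelope: the largest individual fortune vs. the aggregate system.
gates_net_worth = 85e9     # ~$85 billion
us_market_cap = 27e12      # ~$27 trillion total U.S. market capitalization

share = gates_net_worth / us_market_cap
print(f"Gates' slice of the system: {share:.1%}")  # 0.3%
```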

And if this wasn't bad enough already, investors, by their nature, just don't think in terms of morality. They are trained by the investment management industry (CFA and MBA programs) to be interested in one thing and one thing only: maximum risk-adjusted return. In fact, they wouldn't have made it very far in the financial world if they were motivated otherwise; only those who can keep their emotions or moral qualms at bay can accumulate so much wealth in the first place. Besides, investors hold diversified portfolios with many different investments – Gates, for instance, holds thousands all over the globe. With fingers in so many pies, it's impossible to closely monitor the day-to-day operations of every corporation they have invested in. It's simply not in the nature of investors to be moral decision-makers. Nevertheless, the principle of shareholder primacy passes responsibility to them.

It makes intuitive sense to think that someone must be in control at the corporations. Our desire to believe this explains all the finger-pointing at so-called oligarchs and the various theories that crop up about mysterious and secretive groups controlling the course of human events (the Globalists and their ilk). But why has nobody questioned the base assumption that the controlling entity must be human? That is the one assumption my model discards.


We are now ready to define the controlling entity. The corporation's circular command structure, along with the limitations on what even the controlling interests can do, removes human judgement from the equation. By doing away with the need for human judgement, the modern corporation has also done away with human control. Without any homo sapiens in control, the only thing we have left is a narrow, rules-based system that runs itself. The vast network of laws, financial markets, and accounting principles has, in effect, evolved a life of its own, one that now dictates the course of human events.

In conveying this concept, I’ve noticed that the level of explanation required depends on the audience's prior experience. Those who have worked for decades in corporations, in a range of positions low and high, or worked in high-net-worth wealth management or global capital markets seem to grasp the concept intuitively. At a subconscious level, they have always known that nobody was really in control. Some of them even had some conscious awareness of it and vaguely termed it "the system." And we see it reflected in their own actions: they exploit the fact that no one is in control to derive personal gain. Yet strangely, they have not analyzed the situation and progressed to the next logical thought.

The other problem I've run into is that, even after logically illustrating that there is no control entity, the human mind continues to resist that fact. We clearly favor an ill-defined human target. Whether it is the elite, the government, or the people's poor choices, putting a vaguely human face on the problem makes it feel more concrete and gives us something to point our fingers at. But we cannot settle for the comfort of the familiar. Corporations now guide America's (and perhaps humanity's) course, and they are clearly not steering it with our best interests in mind. So, we are left with an important task: locate this controlling entity and clearly define it. We must know our enemy before we can effectively oppose it.

In Part 2, we are going to get into how this uncannily intersects with various TECH-AI theories, including the popular "AI takeover" scenario. Stick with me – this is a mental framework that will help you grasp a problem that is difficult to comprehend.



"Narrow artificial intelligence" (also termed “weak AI”) describes non-sentient artificial intelligence that is focused on a single narrow task. These are the types of programs that even the most radical proponents of non-human intelligence would agree have no consciousness or intelligence in any meaningful sense of the term. They are programmed to operate within a very limited and clearly defined range. They are known for being brittle and annoying; they just grind away, and if you try to get them to do something sensible, you're likely to be frustrated by the response. The corporation fits the definition of a narrow AI. The only difference is the base components. The reason no one has yet characterized the corporation as a form of narrow artificial intelligence is that it is not constructed from a technological base but, rather, from a legal one.

Legal-entity-based AI (LE-AI) is built out of a sprawling network of corporate law (whose components include corporate personhood, limited liability, and shareholder primacy), accounting (GAAP rules for financial statements), investment management practices (CFA/MBA), and global investment markets (SEC, exchanges, and so forth). The systems of TECH-AI and LE-AI are built from different components, but the function and results are the same: both are simple, rules-based systems that remove decision-making from humans. Both are also dangerous if they are allowed to grow in power to the point where we are no longer able to exercise judgement over the system.

An interesting aspect of this situation, and perhaps the reason something so big managed to sneak up on so many of us, was the assumption that narrow-AI could not pose a threat because it's dumb. But as a dumb, rules-based program scales up, it becomes dangerous precisely because it is dumb. We now face a dilemma. We can't allow the machine to continue to fulfill its own internal logic. And yet we can't simply turn it off because we are dependent on it, and any modification of the machine's narrow programming conflicts with the existing program. In other words, it refuses sensible orders because it has no sense. 

Humans expected artificial intelligence, but got "artificial persons" employing human intelligence. 

Interestingly, the language of corporate law seems to bear out this hypothesis. A "legal personality" can also be referred to as an "artificial person." This isn't a mere terminological coincidence; all over the corporate architecture we find signs that it was built to take responsibility away from us.

As a pragmatist, I like this LE-AI theory because it has utility. It's a way of communicating a complex problem by using a story everyone already grasps from fiction, while not being untrue. Really though, it's not necessary to accept the notion that legal persons are a form of artificial intelligence. These are just names we give to things. Whatever you want to call it, corporatism is a simple, rules-based system that has no use for sensible human judgement. 

But to explore their similarities with greater depth, let's go over the parallels between the AI takeover theory and the way corporations have developed over the past 50 years.


Unlike technological AI, legal-entity AI has no physical body, thus no consciousness. It, therefore, has an IQ of zero (as does all narrow AI). But it can employ humans as its body and consciousness. 

The AI community uses the word superintelligence, which is defined as "a hypothetical entity that possesses intelligence far surpassing that of the most gifted human minds." Yet as the technologist Ramez Naam has pointed out, these entities are not hypothetical. We already have superintelligences in the form of corporations that are engaged in recursive self-improvement. Intel, for instance, has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs." Since no individual human can compete with Intel's think tank, it appears we're already in the presence of a superintelligence. And I do agree with Ramez that Intel is engaged in recursive self-improvement, but not in the way he claims. Nowhere in corporate law will you find the program "design better CPUs." What you will find is the program "maximize profits." Designing better CPUs is one way to achieve profit maximization, but the two are not equivalent.

LE-AI superintelligence and recursive self-improvement: The corporation maximizes profit and uses that profit to attract the best minds in order to further maximize profit.
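This feedback loop can be sketched as a toy simulation. All parameters below are invented for illustration and carry no empirical weight; the point is only the compounding shape of the dynamic.

```python
# A toy model of the loop: profit is reinvested in "brainpower," which raises
# efficiency, which raises the next round's profit.
def simulate(rounds: int, revenue: float = 100.0,
             efficiency: float = 1.2, reinvest_rate: float = 0.5) -> list:
    profits = []
    for _ in range(rounds):
        profit = revenue * efficiency - revenue   # margin grows with efficiency
        profits.append(profit)
        # Reinvested profit attracts talent, nudging efficiency upward.
        efficiency += reinvest_rate * (profit / revenue) * 0.1
    return profits

profits = simulate(5)
print([round(p, 2) for p in profits])  # each round out-earns the last
```

Because the output of each round feeds the input of the next, the series grows without any external instruction: the loop is self-propelling by construction.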

Here lies the problem: humans have created a system where the smartest minds gather to serve an idiot master. Corporations select the intelligence outliers from the population and make massive amounts of resources available to them in order to achieve a synergistic intelligence. But due to the circularity of the corporate command structure, neither the individual minds nor the superintelligence as a whole are in control of the objective. The objective lies outside of them; it is written in the law.

And so, while LE-AI has no real intelligence of its own, it moves mindlessly forward by employing clusters of humans who collectively form superintelligences. It progresses by trial and error: some moves fail, some succeed, and the machine "learns" by locking in the successes as new laws and accepted practices. These then become part of the corporate network's permanent structure. The individual human minds come and go, but the corporation itself is immortal and improves perpetually – so long as there are human resources to power it.


Let's zoom out and look at the universe of corporations. Looking at the S&P500, we see that, in the United States, the 500 largest corporate persons are valued at a sum of $20 trillion (USD). Each of these entities is a formidable superintelligence of its own. Apple is currently the largest corporate entity in the tech world. Comcast is the largest in mass media. And Johnson and Johnson holds that spot in healthcare. And yet for all of their differences, each of these corporate entities has the exact same narrow programming. And so profit maximization goes beyond discrete corporations. After all, to truly maximize profit, the individual entities have to work together to create synergies and maximize aggregate profit. 

Corporations cooperate in many ways. By working together and cornering the market, they can offer a bland array of "alternatives." We see this clearly in the media industry. Only one or two basic narratives are produced, which lowers cost, and they are then pushed out through an array of distribution channels. Consider another example of corporate cooperation. Let's say there are two politicians (one who is friendly to corporate interests and one whose political stance is a risk to S&P500 profits) and each will pay $10M to the media to run their propaganda campaigns. The media should be indifferent and simply take the money from any party and run their ads. But in reality, the media sides with profit-friendly politicians. And we don't have to scratch far below the surface to find out why: this allegiance maximizes profits for the entire corporate family. The ways in which corporations cooperate would require a whole essay, but the point is that the machine is running calculations and doing what we told it to. If it didn't network, it wouldn't be maximizing aggregate profit. 

But it's also a competitive network, and corporations will cannibalize each other if doing so increases aggregate profit. The tech industry's decimation of the music industry and artistic intellectual property is evidence of this. If the tech industry can make more profits from iPhones with free music than the music industry loses from the sale of albums and songs, then a messy struggle takes place with the inevitable outcome of the maximization of aggregate profit.

Globalization is the ultimate networking, in which the machine networks with foreign labor, resources, corporations, and governments. In doing so, it achieves economies of scale, resulting in expanded sales, reduced costs, and soaring profits. With the advent of globalization, corporations were able to scale up to mega-class entities such as Apple ($800B market cap). These entities have achieved a new level of bargaining power – new power that is leveraged, as it always is with corporations, to further grow the bottom line.

Corporations are always talking with each other – merging, dividing, rebranding, working to fulfill their programming. As we say in investment markets, "money never sleeps."


Legal entities are by their nature amoral. And since they are not the product of biological evolution, they can never develop morality. Unlike an intelligent organism, narrow AI simply does what it has been programmed to do and it employs humans to achieve that objective. 

The human workers, however, do have morality. This naturally gives rise to a conflict. At some point, the externalities resulting from the entity's narrow, amoral goal will offend the moral sensibilities of the human agents it uses to achieve that goal. Humans evolved a moral sense as a safeguard against doing extensive damage to their own tribe. If we had never developed it, we might not be here today. LE-AI, on the other hand, has no such safeguard, but it has developed a number of workarounds to resolve this conflict.

The first way the corporation gets around moral objections is through social conditioning. If there is one thing that is stressed in business school, it is that you should leave your emotions at home ("don't take things personally"). Business programs teach that all decisions are simply mathematical equations geared to maximizing risk-adjusted returns to shareholders. Topics like globalization and growing the economy (GDP) are pushed hard, while analyses of externalities are almost entirely ignored. The business education process is a form of mild dehumanization, and the corporate media reinforces the message that suspending moral judgment is the way to succeed.

The second way the corporation gets around the moral objections is by selecting for sociopathy. Think of sociopathy not as a binary feature (one you either have or you don't) but as lying on a continuum. Some individuals, by their nature, have a less active moral sense, and some outliers have almost none whatsoever. Since profit is maximized when the corporation selects not only for intelligence but also for low levels of morality (high IQ sociopaths would be the best match), these moral outliers are promoted up the corporate ladder more quickly. Those who can't ignore their moral qualms quickly get replaced: if someone will not do what it takes to maximize profit, someone else will.

Finally, remember that human judgement was deemed a liability from as early as Adam Smith's days, since it was seen as a source of discord between the behavior of managers and the interests of owners (see Part 1). Human morality has been methodically worked out of the corporate system. If we want LE-AI to behave morally, it has to be worked back in. The corporation, in other words, would need a root legal program that contains some form of logic that simulates morality (see Part 4).


The media can be thought of as a mouthpiece: the way the vast network of corporations communicates with humans on a large scale. It wouldn't make economic sense for each corporation to develop its own media internally. Corporations do have their own PR departments, but they rely on mass media as their main voice. Corporate media dominates both the traditional media industry (television, film, radio, magazines, and books) and the newer digital media industry (Google, Facebook, Twitter, and so forth).

In the United States, traditional mass media is controlled by a small group of corporations. Mergers over the last three decades have resulted in 90% of U.S. media being controlled by just six massive entities. In the 1980s, by contrast, the same share was controlled by 50 companies. Recall that, by the 1980s, corporations were interested only in growing their market value as quickly as possible, and had shifted away from being interested in stakeholders (the general public). So it should come as no surprise that trust in the media has steadily declined, from 72% in the 1970s to 32% as of 2016. Yet even as concern for stakeholders faded, corporate profits soared (this is not contradictory, and it has happened across all industries).

This media oligopoly is both a small group of corporations in its own right and the propaganda arm of the larger corporate network. The media, of course, seeks to maximize its own profit, but it is a peculiar industry insofar as it also protects the profits of other corporations. In fact, it will even sacrifice its own profits to do so. As the communication apparatus of the entire corporate network, the media is far more valuable than financial statements can legally reflect.

Because of this, we see a lot of behavior that does not directly boost the media industry's profitability. The media performs a large amount of social conditioning, for instance, in order to reach long-term objectives (in line with corporate and national interests), even though this activity does not result in short-term profit. Corporate-funded science is championed beyond what the corporate advertisers pay for. And the media provides defensive capabilities to the entire corporate family by "de-platforming" profit-disrupting individuals. Anyone who becomes a threat to corporate profits can simply be labeled a menace to society and socially ruined, deleted off the internet, or de-monetized. And as the tech industry enters the media landscape, these tools are becoming more sophisticated.

The takeover of the media industry is a problem for humanity because corporate propaganda is designed by think-tank superintelligences (see the prior section) while the masses are of average intelligence. Propaganda works, and the masses' feeble efforts to reason through it are met with rapidly adapting propaganda. And if the propaganda fails, an authoritarian crackdown is possible, since the corporations own the vast majority of the communication assets.


As corporations grew in size, so did their share of scientific research funding. In 1960, the federal government funded 65% of R&D. But by 2015, industry funded 69%, while government backed 23% and universities and nonprofits footed the remaining 8% (of roughly $500 billion in total spending). Industry can be more efficient than government, and there are obviously many scientific developments for which we can thank industry-funded science. But the corporate players' narrow programming and lack of responsibility have led to problems over time.

One of the problems is that corporations are not interested in long-term research projects. Recall from Part 1 that, starting around the 1980s, the reason for owning stock became fast growth and quick profits. Who wants to wait? Today's investors want two- to five-year research projects for near-term monetization. With less of the support coming from government or universities, it's difficult to find funding for longer-term research or general knowledge projects. That research may benefit humanity, but that's not something the corporate program considers.

Another problem is the focus of corporate research, due to the false equivalence between boosting profits and boosting human wellbeing. There is a misalignment in the distribution of research funding, which is increasingly based on the corporation's interests rather than society's interests. It follows that corporations would tend to ignore projects that boost quality of life unless they also yielded attractive profits. And they would, of course, aggressively avoid research that might lead to a decrease in profits. Or, if they did happen to conduct such research, they would opt not to publish the results. Because this has been going on for decades, many breakthroughs that we, as a society, have been awaiting have yet to materialize. 

But the biggest problem is that corporate science, like corporate media, brings humanity into a "post-truth" age. The masses are of average intelligence and are non-specialists, while corporations are superintelligences with deep specialization. The masses know there is fake science being done, but they are not equipped to differentiate between science and "science" (that is, corporate propaganda masquerading as scientific fact). If we were able to differentiate between the two, it wouldn't be very effective propaganda. This has led to a crisis of confidence that some call the "death of expertise": a situation in which society can no longer tell whether its experts are telling the truth or are paid to peddle propaganda. After all, they are the experts, so they are the only ones in a position to know. This reduces scientific truth to a matter of trust, which politicians and the media then exploit.

Bad science sets out to "demonstrate" the truth of something false, or to obfuscate something that is true (albeit inconvenient from a profit-maximizing perspective). When research is funded by the same entity that benefits from a favorable outcome, there is a greater potential for biased results. Corporate research sponsors can also review studies prior to publication and choose to withhold results that are unfavorable to their interests, or even spin the results and present a conclusion that is not consistent with the actual findings. After this "science" is performed, research-focused corporations can cooperate with media corporations to disseminate the results to the public in a persuasive way.

Corporations have expanded into the funding, performance, and review of science. This, combined with corporate media, corporatized universities, and a captive government, breaks down fundamental truths and moves us into a future where what is considered true is whatever maximizes profit.


Universities are not corporations. They are either public (state) institutions or private (Ivy League) nonprofit institutions. Corporations, however, have found a way to infiltrate higher education, turn it into a business, and boost their profits.

The corporatization of universities began around the same time as the corporatization of other institutions: the 1970s and '80s. During that time, we witnessed a cultural shift from universities being interested in society and funded by government to universities being businesses funded by their students. Again, Milton Friedman comes up as a key figure in this change. He argued that, just as corporations should be interested only in shareholders rather than the general public, so university education should be viewed as a private benefit to the student and not a public benefit (in the form of a better-educated citizenry). It thereby follows that a private benefit to students should be paid for by the students themselves.

Of course, being fresh out of high school, most young people do not yet have wealth, and 60% of students need loans to attend university. And this is where financial corporations entered the game. As the financial burden of education shifted from the government to the students, student loans ballooned. Today, there are roughly 45 million people owing a total of $1.4T in student loans. Financial corporations are not programmed to care about education; their program is to grow total debt, ignore risk, and collect interest and penalties to turn a profit. And, unsurprisingly, that's exactly what they did.

Side note: this market exploded with the 2008 financial crisis. It used to be that private lenders originated the majority of loans, and that the U.S. Government did not hold student loan debt. However, after the crisis, private lenders shifted their bad loans to the government. As a result, the U.S. Government now holds $1.0T of the debt and does most of the lending (although private lending is back on the rise). 

The problem was that once there was a profitable and growing student debt market, there came with it a motivation to increase tuition in order to grow that total outstanding debt further. Since 1980, the inflation-adjusted cost of college has increased 1,000%. There is a constant drive at universities to scale up capabilities, amenities, and administrative salaries so they can charge higher fees, all while lowering the cost and quality of the education itself with lower-paid professors. To fund this growth, students have to borrow from the government and the financial sector (and they receive no equity share in return). Then they spend decades paying off the debt and interest they have accumulated, in a world, no less, where corporations are eliminating jobs and driving down worker bargaining power.

While universities are not corporations, they have become "corporatized" in the sense that they attempt to increase revenue while decreasing costs, and pile their profits back into growth. But is this growth actually creating a more educated society? Well, that is not part of the corporate program. 


Since it has the power to pass laws and it created the corporation in the first place, government should, logically, be the one entity capable of exercising control over the corporation. But here, too, we are quickly losing our grip.

While industry influence over government has always been a concern, it is now the central topic of national elections. Voters are outraged by involvement from all sectors: Wall Street scams, media dishonesty, social media censorship, big data privacy, big pharma scams, food industry quality, energy sector issues, military industrial warmongering, and the list goes on. The masses vaguely grasp the root problem: corporations have too much influence and they are guiding society down a nonsensical path, so the people must take back their government. But if the two major political parties are merely the left and right wings of the "corporatist party," then the result will be the same no matter who gets elected: more profits for corporations and more externalities imposed on society.

How did the government end up captive to corporate power? Government is simply ill-matched against modern corporations due to their size, networking, intelligence, and life spans. Size matters. The tech sector now has $600B corporations like Alphabet Inc., and the industry can leverage its total size of $4T to influence governmental decisions. The media industry is valued at $500B and can outspend any media campaign launched by the U.S. Government. Corporate think tanks can be thought of as superintelligences that attract the best and brightest, who are lured away from lower-paying government jobs that cannot attract the same level of talent. And finally, it's just a matter of persistence. Corporations have an infinite lifespan and a focused, unchanging goal. Politicians, on the other hand, come and go and must shift their views as public opinion changes. Corporations persist, adapt to government resistance, and win over time.

One explanation for the massive growth of government could be that it is engaged in an arms race with the corporation. In an effort to contain growing corporate power, it has expanded itself and its power. But it still trails behind: there are 35M employees in the publicly traded corporate world (Russell 3000) but only 22M employees in federal and local government. Governmental entities like the Environmental Protection Agency staff 15,000 in an attempt to combat the environmental externalities caused by corporations, but this is merely chasing after symptoms and temporarily containing the problem. Eventually, these government arms lose, and when they do, they become part of a "corporate-government state" – just another arm of the expanding corporation. The two parties employ different tactics: the left wing wants more government to address externalities; the right wing wants less government, since government just becomes part of the problem. But neither addresses the root issue: the corporation's programming. And so long as that root issue persists, government – big or small – leads to growing corporate power.

Why doesn't the government punish corporate crime? Corporate scams are becoming more frequent, massive, and blatant. In both the subprime collapse and the healthcare debacle, corporations cannibalized the masses until the system failed. Then they turned to cannibalizing the government. The government's response (nothing) reveals the corporations' level of control. The "reasoning" that has been pushed on D.C. dates back to a 1999 memo by Eric Holder, which stated that government should be careful of "collateral consequences" when dealing with white-collar crime. Holder argued that charging top officers could harm the institution's reputation, impact stock prices, and weaken the economy. And as Holder put it, "misconduct could be considered more a symptom of the institution's culture than a result of the willful actions of any single individual" (as always, passing off responsibility using the circular command structure). In reality, strong punishments consistently applied to top leaders deter crime. They make knowing (guilt) or not knowing (incompetence) irrelevant, and when crimes are deterred, punishments rarely need to be issued.

The greatest strength of the U.S. Government – its "checks and balances" – has become its greatest weakness. Remember, the problem humans face is a loss of control, and the government was wisely designed to remove control from any single party. Unfortunately, this makes it easy for the corporation to play the various parties against each other and block profit-threatening action, while still allowing profit-friendly actions to pass through. This is the checkmate in our "AI takeover" game: corporations, which are legal entities and thus controlled by the law, control the government, meaning that no laws genuinely restricting corporations can be passed.


Now, let's put this all together by looking at the definition of an AI takeover alongside that of a corporate takeover.

TECH-AI takeover: a hypothetical scenario in which strong-AI becomes the dominant form of intelligence, with computers or robots effectively taking control away from the human race.
LE-AI takeover: a real scenario in which narrow-AI became the idiot-master of superintelligent human groups, which it forms and directs, effectively taking control away from the human race. 

How the takeover happened was different from how it was imagined, but the result is basically the same:

“It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions." – Ted Kaczynski, Industrial Society and Its Future (1995).

Despite the grim source of this quote, it has been referenced by numerous mainstream AI theorists, such as Bill Joy in "Why the Future Doesn't Need Us." Is this prediction not precisely what has already happened with corporations? Humanity drifted into this position over the last 50 years. All of our major institutions and industries either are corporations or are greatly influenced by them. There is simply no turning them off; American society has become far too dependent on them.

Frustratingly, there also seems to be no way of keeping the corporations running while also exercising human control. There is no entity with enough power to go against the 3,000 publicly traded corporations (Russell 3000), collectively worth $27T and with 34 million people in their employ. While it would be possible to take on individual corporations, these entities essentially move as one body when profits are threatened.


The AI takeover theories also provide an explanation of how the takeover succeeds. In problem-solving, I call these "blockers." Persistent problems persist because there is something – a blocker – that prevents people from understanding the problem. The LE-AI takeover has the most impenetrable set of blockers I've ever seen. How else could things get so out of control? Here are three of them:

  • Strategy: A superintelligence might be able to simply outwit human opposition.
    • See the earlier discussion on "superintelligences," the corporate-media, corporate-science, and control over politicians. 
  • Social manipulation: A superintelligence might be able to recruit human support or covertly incite a war between humans.
    • See social conditioning used to create good employees, directing culture via the media, the ideology of postmodernism which can be used to stir up tribalism through oppressor/oppressed propaganda, and hyper-partisanship to divide and conquer.
  • Economic productivity: As long as a copy of the AI could produce more economic wealth than the cost of its hardware, individual humans will have an incentive to allow the AI to run a copy of itself on their systems.
    • See the way corporations spin off and multiply. As long as the bottom line is expected to be positive, current financial math will approve of the new venture. This does grow GDP, so politicians are incentivized to support the reproduction of AI. Under globalism, the corporate program reproduces across the planet.

In closing this section, I assert that the modern corporation, which took off around 1970, has quickly grown and multiplied into every corner of society, bringing about a loss of human control. I believe that this is a genuine form of artificial intelligence, and not simply an entity that resembles AI. But that's not crucial. If you don't like the sci-fi feel of the language, simply call it corporatism and take this as an analysis of corporatism and the way it has taken control of society. 

What I think does matter is that nobody has put forth a more comprehensive theory to explain the root cause of so many intractable American problems and give people a way to visualize the whole thing. This theory backtests, and like all good theories, it has predictive power. I've been using it for five years to interpret socioeconomic events, and nothing I've seen has led me to call this theory into doubt.

Next, in Part 3, we will look at this corporate takeover's costs to society.


The corporation has created massive wealth, technological advancements, and provided a framework for organizing human activity on a large scale. So, who cares if our creation has been let off the leash?


The problem with out-of-control corporations is that the narrow program to maximize profits cannot be fulfilled without also maximizing externalities. I've referred to externalities repeatedly, but before discussing them more thoroughly, let's establish a clear definition.

Externality: a side effect or consequence of an industrial or commercial activity that affects other parties without this effect being reflected in the cost of the goods or services involved. 

Let's do a quick review of externalities. Usually, we mean negative externalities. Take an example involving a voluntary exchange: a product (cigarettes) is traded for money, but there is 1) a consumption externality (second-hand smoke) and 2) a production externality (pollution from the manufacturing process). Sometimes, there are also positive externalities. For example, universities provide students with an education, which leads to a better society.
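The cigarette example can be made concrete with a toy cost accounting. To be clear, every dollar figure below is hypothetical, invented only to illustrate the definition, not to estimate real costs:

```python
# Toy externality accounting for the cigarette example above.
# All dollar figures are hypothetical illustrations, not real estimates.

PRICE_PAID = 6.00        # what the buyer hands the seller for a pack
PRODUCTION_COST = 4.00   # the seller's private cost of making the pack

# Costs borne by third parties, invisible in the market price:
SECONDHAND_SMOKE_COST = 1.50   # consumption externality
POLLUTION_COST = 0.75          # production externality

private_surplus = PRICE_PAID - PRODUCTION_COST              # 2.00
externalized_cost = SECONDHAND_SMOKE_COST + POLLUTION_COST  # 2.25
true_social_cost = PRODUCTION_COST + externalized_cost      # 6.25

# The price signals only $4.00 of cost, but society actually bears $6.25.
# The $2.25 gap is the externality: a real cost with no line item anywhere.
print(private_surplus, externalized_cost, true_social_cost)
```

Nothing in the trade itself forces that $2.25 onto the seller's books; absent regulation, leaving it off the books is exactly what the corporate program rewards.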

And so here is the problem stated more completely: the nature of the corporation's current programming demands that it operates as a perpetual externality-maximizing machine, since profits are not truly maximized until all negative externalities have been fully exploited, while positive externalities are irrelevant to the success of the corporation's program. But how could that ever result in a sustainable, stable state? There are always a few more costs that could be shrugged off onto society in order to achieve a little more profit. And with this, we have a more refined definition of corporate recursive self-improvement.

LE-AI recursive self-improvement: The corporation maximizes profit by maximizing externalities, and uses that profit to gain power in order to maximize more profit via externalities. 
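As a sketch of how this loop compounds, consider a deliberately crude simulation. The starting values and growth rates are invented purely to show the feedback dynamic, not to model any real firm:

```python
# Crude feedback-loop sketch of LE-AI recursive self-improvement.
# All numbers are invented; only the compounding dynamic matters.

profit = 100.0   # cumulative profit
power = 1.0      # lobbying reach, market share, media leverage, etc.

for year in range(10):
    externalized = 10.0 * power   # costs shifted onto society scale with power
    profit += externalized        # every externalized cost is booked as profit
    power *= 1.1                  # profit is reinvested into still more power

print(round(profit, 1))  # → 259.4: profit compounds while society absorbs the costs
```

Because `power` feeds `externalized` and `externalized` feeds `profit`, the curve is exponential rather than linear: there is no equilibrium anywhere inside the loop itself.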

Capitalism holds that markets sort themselves out and become efficient by way of an "invisible hand." But corporatism by design attempts to elude market forces. In the corporation's original programming there is a subordination of stakeholder interests, and a circular command structure with limited liability for damages. They are, therefore, incentivized to privatize profits and externalize costs. Without any "skin in the game" there can be no functional market. The corporation will always just shrug off responsibility to some outside party. And, really, what other reason would there have been to create this new legal structure, other than to circumvent the annoying rules of functional capitalism?

Once markets are cornered and the government is captive, what "invisible hand" could impose any kind of correction? When corporations achieve this degree of leverage, they can tamper with the very basis of voluntary exchange. Remember, corporations do not answer to society, so what the masses want is of secondary concern (if that). In a cornered market, what the masses get is what is supplied. We are seeing an explosion of corporate arrogance (simply look at the conduct of major banks, airlines, insurance companies, and so on), and my model forecasts that we will see more of it. With a captured government, the laws can insist you buy a product, even if you get nothing in return. At that point, it is no longer just a case of Party A trading with Party B with some externality being inflicted on Party C; we also have Party B taking damage from what is essentially a mandatory trade or a bogus product.

This program is not sustainable. Maximizing externalities in order to maximize profit eventually cannibalizes the human population to the point where the corporate system collapses on itself.


The corporation's shareholder primacy programming leads to a number of social costs. These include not only the costs that economists would classify as externalities, but also costs in a broader sense, resulting from the discord between the corporate program and human interests. Categorized by industry, some of these costs include the following.

  • Food industry – The food supply is altered in order to maximize profits, rather than optimize human health. The cost is seen in the current epidemic of metabolic syndrome, which is a driving force behind the soaring rates of heart attack, stroke, diabetes, Alzheimer's disease, and obesity, as well as a contributing factor to mental degeneration, including panic attacks, depression, and attention deficit disorders.
  • Healthcare industry – Rather than addressing the root causes of poor health, profits are maximized by treating the epidemic of metabolic syndrome mentioned above. The cost to society of this lack of interest in preventative care is a continuously sick population, made even sicker by drugs and poorer by outrageously priced treatments.
  • Scientific research industry – Bad science is marshaled to maximize the profits of other industries, rather than using scientific resources to expand the reach of human knowledge and improve our wellbeing. The cost to society is our current "post-truth" era, in which we see an erosion of certainty about basic facts, a loss of trust in institutions, and a swath of unsolved problems.
  • Media industry – Content that furthers the agenda of the corporate world is disseminated, rather than spreading information that aims at building a knowledgeable and informed audience. The cost to society is a bolstering of the post-truth situation, leading to poor decision-making and the herding of the masses into radicalized groups kept busy with in-fighting (more on this in the next section).
  • Finance industry – Risks are calculated so that they maximize the financial industry's profits, rather than aiming for the improvement of the economy as a whole. The cost to society is systemic risk, punctuated with catastrophic damage like the 2008 financial collapse.
  • Finance industry and universities – The cost of education is inflated in order to increase student debt and maximize interest payments, rather than using those resources to improve educational offerings. The cost to society is a citizenry that is under-educated, indoctrinated, and easily manipulated.
  • Energy and industrial industries – Physical goods are produced with an eye to maximizing profits without taking into account long-term environmental costs. The cost to society is an increased risk of catastrophic environmental events due to disruptions to the ecosystem.

We are seeing unprecedented wealth generated by a massive assault on society. Physical and mental degeneration, miseducation, disinformation, economic havoc, environmental damage, and government dysfunction all become sources of profit rather than problems to remedy or avoid. I call this process "cannibalization."

Cannibalization: to engage in activity that deprives one of an essential part or element in creating or sustaining another facility or enterprise.

A society cannot cannibalize its own people in order to maximize corporate profits in perpetuity. These increased profits cannot be used to compensate for the damage to society and to everyone's lives. Judgment – the ability to come to sensible conclusions – says that this plan cannot work. But the corporation was programmed to engage in this kind of cannibalization, and it has no judgment by which to call it into question.


Perhaps the most maddening cost of corporatism – and the means by which the corporation is able to continue to achieve its purpose despite our knowing something is wrong and searching for a solution – is the assault of disinformation it subjects us to.

Disinformation: false information deliberately and often covertly spread (by planting rumors, for instance) in order to influence public opinion or obscure the truth.
Propaganda: ideas, facts, or allegations spread deliberately to further one's cause or to damage an opposing cause.

The purpose of corporate media is to spread propaganda that furthers its cause (the maximization of profits) and to subordinate society's cause (maximizing quality of life). As corporatism's recursive self-improvement churns along, the damage to society becomes more obvious and the machine must work overtime to keep us confused. By 1992, I was calling the media propaganda. By 2006, I was calling it social conditioning. And since 2016, I have been calling it psychological warfare. Not sure whether, or how, it could escalate into a full-on psychological war against the American people? Look at the patterns and you will see that the assault escalates whenever humans step out of line, and de-escalates once the masses fall back asleep.

Examples of "manufactured in a lab" propaganda narratives used to keep costs externalized and profits flowing:

  • "You did it to yourself; take personal responsibility" (blaming the victim) 
  • "Vote with your wallet" and "free markets" (when referring to corporate oligopolies)
  • "The law of supply and demand" and "they wouldn't make it if you didn't want it" (cornered markets)
  • "Corporations are just business, like your neighborhood bakery" (market principles should apply)
  • "It's just calories in, calories out – the law of thermodynamics!" (bad science)
  • "Eat lots of healthy whole grains" and "saturated fat clogs your arteries" (bad science)
  • "Just exercise a bit of self-control" (in reference to addictive substances)
  • Mentions of "the free press" or "journalists" (when referring to corporate media propaganda)
  • Defamation campaigns and witch hunts – "Nazis," "Russians," "evil madman," etc. (fabricating an enemy, dividing the masses, and dehumanizing and deplatforming resistance)

As the costs of corporatism grow across the land, disinformation campaigns must also be ramped up to keep the masses contained, moving society further "post-truth." It becomes difficult for any resistance to set up a non-corporatized information network, as the machine acts as a monopoly and will use its superior resources to destroy and de-legitimize all such efforts.


In a two-party political system, the corporation must logically occupy and control both parties. That way, whichever party holds power, profits are sure to be maximized. If the Democratic party holds power, it tends to focus on social issues, which can be twisted into a form of social authoritarianism to control the masses (through political correctness, for example). If the Republican party holds power, it tends to focus on economic issues, which can be twisted into financial schemes (such as trickle-down economics). As long as both sides are under corporate control and there is no strong third party, the apolitical nature of corporatism will not play favorites.

But things get strange when you consider the ways "corporate persons" like to emulate human behavior. This emulation "humanizes" them and allows them to blend in. What would it look like for the corporation to adopt an ideology? Currently, I would say that this unholy marriage has already happened with the political ideology of postmodernism. We are at a moment in which the corporation pretends to care deeply about social issues, cloaking its manipulation and authoritarianism under the guise of "corporate caring." If you made a list of every ideology the world has seen and picked the most horrifying one, this would be it. This ideology is the logical outcome of the corporation's programming. The machine must maximize profits by exploiting externalities, and as the damage it causes to humanity increases, it must suppress the masses so they do not struggle (especially the males, who tend to be the revolutionaries). Postmodernism gives the corporation a blueprint for that suppression, one specifically suited to the Western world.

It's no accident that the corporation adopted the radical left as its social disguise. America's last conservative phase was the 1950s. The 1960s ushered in an era of progressivism and, since its origins in the 1970s, corporatism has evolved in parallel with this ongoing social movement. It was inevitable that the two would intersect, since the corporation needed to use the Baby Boomers as its human servants. The ideology was somewhat at odds with the methods of the corporation, so corporatism had to distort the ideology from liberalism into postmodernism. It could also be that this ideology evolved in the human population instead, and that its proponents pressured boards to staff corporations with their zealots in an attempt to win control of the corporation. Either way, the merger did take place, and it has been useful to the corporation. Note that this was an adventitious development – the corporation simply used the raw cultural materials that were present during its evolution. It could just as easily have adopted a radical-right ideology, and may still do so in the future. It is an apolitical entity and will adapt to the political climate.

The corporation's alignment with a particular ideology damages society in a few ways. First, it uses its mock "beliefs" to justify an authoritarian lockdown. Second, it causes the culture to stagnate, since the corporation has enough force to keep the social pendulum from swinging. Finally, by taking sides, it necessarily stirs up ideological tribal conflict. Remember that the corporation owns the major assets of culture and the communication network, so we could see a growing discord between where the culture wants to go and the direction in which the corporation, through social engineering, urges it to go.


Another cost to society is wealth and income inequality – not just from the normal functioning of meritocracy, but from violations of the rules of capitalism. The corporate machine acts like a massive vacuum, sucking up the world's wealth into an infinitely large bag that never empties. Profits go up and they never come back down. Why? Because profits enter international investment markets, where they are reinvested back into corporatism and continue to grow over time. It's simply not in the nature of responsible investors to sell their investments, take the money, and spend it back into the economy.

One criticism that has always been leveled at capitalism is that wealth concentration is an unavoidable outcome.

"Private capital tends to become concentrated in few hands, partly because of competition among the capitalists, and partly because technological development and the increasing division of labor encourage the formation of larger units of production at the expense of smaller ones. The result of these developments is an oligarchy of private capital the enormous power of which cannot be effectively checked even by a democratically organized political society. This is true since the members of legislative bodies are selected by political parties, largely financed or otherwise influenced by private capitalists who, for all practical purposes, separate the electorate from the legislature. The consequence is that the representatives of the people do not in fact sufficiently protect the interests of the underprivileged sections of the population. Moreover, under existing conditions, private capitalists inevitably control, directly or indirectly, the main sources of information (press, radio, education). It is thus extremely difficult, and indeed in most cases quite impossible, for the individual citizen to come to objective conclusions and to make intelligent use of his political rights." – Albert Einstein, Why Socialism? (1949)

Einstein made this crystal ball-like forecast more than two decades before corporatism took off, so it's hard to argue with him. Perhaps wealth does naturally concentrate over time under capitalism, but the corporation is an amplification device that sends this natural degree of wealth concentration into overdrive. And this path is written directly into corporate law, making it almost impossible to deviate.

My recommendation is not to push for socialism (every system has its flaws); rather, I propose that we use the machine we created, but reprogram it with a more sensible plan that does not blatantly violate the rules of capitalism (see Part 4). Besides, even if society did want to install a socialist system, it wouldn't be possible so long as corporatism ruled. The result would be a perverse socialist-corporatist monster (the corporation would keep cannibalizing society, while government kept the humans on life support). Before even attempting to implement a new system, we first have to fix the root problem of the current one.

Next, in Part 4, we are going to look at logical changes that would make corporations less damaging to society.



The problem is not the overall capitalist system. Theories of capitalism and "free markets" may not be perfect, but then no system is. The problem is that these theories may apply to capitalism, but they do not apply to corporatism. Although they are conflated, corporatism is not capitalism. The corporate entity is by its very design an attempt to circumvent capitalist market forces.

The capitalist theories that are frequently cited by American commentators come from the 19th century. But the modern corporation did not come into existence until 1970. Economics textbooks have never been updated to think through exactly how these new entities fit into theories of capitalism (they do not). The popular thinking holds that a corporation is "just a business, like your neighborhood bakery," and that free markets will guide its behavior. That is simply false.

If we want to fit this new type of entity into our existing economic models, we could label corporations "crony capitalism." This label would reflect the fact that they are businesses that profit not by taking on risk, but by passing the risk off to others, exploiting their complex connections to the political class, and engaging in monopolistic behavior. Still, corporations should not simply be a footnote to theories of capitalism; they deserve their own chapter in every economics book.


The problem is not that the humans working inside the corporation are too greedy. To be sure, humans are a constant and so is our tendency to greed. But there is no reason to think that we are somehow more greedy now than we were in the past. And besides, there is no way to biologically modify the level of greed we have evolved.

There are socioeconomic seasonality models (see the Strauss-Howe generational theory) that draw correlations between late-stage progressivism, weakening institutions, and low levels of morality. According to this theory, this eventually leads to a crisis, followed by a period of rebuilding and heightened morality. I think this model is probably right – and the U.S. is currently in a low phase. This, however, is not enough to solve the problem. A return to morality and better communal judgment still would not grant us the ability to exercise that judgment. The other problem is that corporatism is new to this macro-scale cycle. It married itself to the progressivist social movement that marks this age, and that changes things in two major ways. First, corporatism might be pulling our level of morality even lower by not rewarding it, and even by punishing it. Second, the corporation resists seasonal change, because an increase in morality would conflict with its root program – and it can successfully put up resistance thanks to its control over the information network (see Part 2's discussion of selecting for sociopathy and the control of mass media).

I maintain, therefore, that the root problem is the issue of control. That said, morality is not entirely ineffective. If society can find its way to a higher state of morality, it might facilitate human cooperation and make it easier for us to regain control.


The problem is not the decisions of the general population (consumers). Again, humans are a constant, and so is their decision-making capability. The masses are of average intelligence and are busy living their lives. Meanwhile, the corporation selects for intellectual outliers and has them work long hours to outsmart the rest of us. If consumer choices could stop corporations from cannibalizing society, that would have happened by now. 

Besides, if we decide to blame the masses, we're just playing into the corporation's game of shrugging off externalities (recall the corporate messaging discussed in Part 3: "you did it yourself" and "just vote with your wallet"). The very fact that the corporation tells us this is the right response is how we know it is not.


We created a machine. We programmed that machine with a purpose. And our machine is now efficiently carrying out its (nonsensical) task. If we want our machine to do something different, we have no choice but to reprogram it. 

The problem with the current programming is that "maximize profits" is not equivalent to something like "maximize social wellbeing while carrying out [Specific Task X]." Nor are there effective safeguards in its programming to stop the machine from going completely against our best interests. But since the corporation is a "dumb AI," it doesn't know anything about this.

At this point, we run into the same problem the TECH-AI theorists face. It is much easier to build a hostile AI (which we did) than to simulate morality and common sense using logic-based rules – especially rules with safeguards that prevent a self-improving AI from circumventing that simulated morality. Besides, if you don't know you're building an AI – and in the case of the LE-AI, we did not – then it won't occur to you to build in safeguards until it's too late.


This paper has provided a framework for understanding the root of the problem. But I will also sketch out some logical solutions to the problem.

Let's skip past the ideas that have already failed: social pressure, special taxes, litigation, and government regulations. They failed because they chase after externalities and attempt to make the corporation internalize them. The problem with this approach is that it assumes that the corporation's core programming is fine and simply tries to rein it in with a set of control measures – which the AI easily escapes. To find a workable solution, we need to start from the realization that the program is not fine. It is fundamentally flawed and needs to be rewritten.

For help writing a new program, we can start by looking at science fiction. After all, sci-fi authors understood the threat of an AI takeover – they just got the form wrong. If we modify their ideas about TECH-AI, they may be useful in helping us deal with LE-AI.


The prolific sci-fi author Isaac Asimov famously laid out a core set of laws that should govern AI robots:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov's program is on the right track. It is an elegant set of logical rules guiding the machine to sensible decisions. But as anyone who has spent decades working inside corporations knows, none of these safeguards exist. Let's see what happens when we write the corporation's current programming in the form of Asimovian laws (admittedly, it's a bit messy trying to shove them into this template, but it is nevertheless a revealing exercise).


  1. A corporation may injure a human being or, through inaction, allow a human being to come to harm, so long as the profits gained from doing so are greater than forecasted financial damages to the corporation.
  2. A corporation must not obey the orders given to it by human beings except where such orders would serve the First Law or otherwise maximize profit.
  3. A corporation must protect its own existence above all else, and subordinate human interests.

Okay, we have a problem! Instead of a helpful robot, we essentially have a death machine grinding away on humanity.

Let's say we rewrite the flawed code in the spirit of Asimov's laws. What we would have, in that case, is stakeholder primacy. And that's what we want. It may not be perfect, because capitalism, like any system, is flawed, but it is far better than a death machine. How would this manifest itself in corporate law? Here are three specific ideas.


Conditional corporate personhood introduces the equivalent of a death penalty for corporations. Remember that corporations are legal entities, so to end a corporate "life," you simply revoke its corporate charter. The legal entity then ceases to exist; its continued operation becomes a violation of the law, and it can be physically shut down by state or federal agents.

Why are corporations allowed to continue existing after they have triggered a global financial collapse or colluded to manipulate an election? With no harsh penalties known in advance, there is no disincentive. By imposing such a penalty, we would dramatically change the risk/return calculations that are at the heart of all decision-making. It makes a big difference if a corporate financial analyst has to weigh a high probability of increased profit from some scam against a moderate probability of getting caught and having the corporation's profits cut to zero in perpetuity. You only live once, and introducing this risk changes everything. Consistently applied, terrifying disincentives work – few are foolish enough to commit the crime, so the punishment is effective even though it rarely needs to be applied.
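The risk/return shift described here can be sketched as a simple expected-value calculation. All numbers below are hypothetical, chosen only to illustrate the sign flip:

```python
# A sketch of the analyst's decision, before and after a "corporate death
# penalty" exists. All figures are hypothetical, in arbitrary units.

def expected_value(p_caught, gain, penalty):
    """Expected payoff of a scam: keep the gain if not caught, pay the penalty if caught."""
    return (1 - p_caught) * gain - p_caught * penalty

# Today: getting caught typically means a fine that is a fraction of the gain.
ev_fine = expected_value(p_caught=0.3, gain=100.0, penalty=20.0)

# Under conditional personhood: getting caught forfeits all future profits.
ev_death = expected_value(p_caught=0.3, gain=100.0, penalty=1000.0)
```

With the fine, the scam has positive expected value and is "rational" for a profit maximizer; once the penalty includes all future profits, the same probabilities make it a deeply losing bet.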

Corporate personhood could be made conditional with an annual vote at the national or state level. Citizens would vote on whether they consider a corporation to be a net benefit to society. For the entity to continue existing over the coming year, it would have to secure a positive vote. A more moderate version of this plan would replace the annual vote with a vote triggered in the event of extreme externalities (more on this in the next section). For example, if the cost to society in a given year outweighed the collective benefits, that could lead to a referendum through which citizens terminate the corporation or place it on probation, to be terminated unless it rectifies its behavior within an allotted span of time.

Conditional corporate personhood, extended only by human approval, resolves the control problem by giving the corporation the ultimate skin in the game: "serve our best interests, or else." It also resolves the problem of narrow AI not having evolved the same kind of judgement as biological entities by enslaving it to the entities with judgement. This is an elegant way to put the pressure on the highly intelligent humans inside the corporation to start using their judgement and perform intricate calculations to factor the impact to society in their decision-making. Note that, under this programming, society does not micromanage the corporation's actions. Corporations remain autonomous but face consequences for messing up. 

Human judgement may not be perfect but it is far more evolved than dumb-AI. This change would essentially take us back to the older model of the corporation, in which corporate entities were only granted special rights because they were seen as providing a net benefit to society. And by making corporations dual-reporting – to shareholder profits and stakeholder interests – this modification partially achieves our goal of stakeholder primacy.


Why is the generic "maximize profits" code used for every single one of the S&P 500 corporations? Early corporations had state-issued charters that stipulated their lifespan and purpose. By granting corporate personhood only to entities formed for a clearly defined, stakeholder-positive purpose, we could better align these entities with our best interests and reduce their "purpose drift" over time.

Why should the food industry be judged only on its profits and not on metrics tracking human health? Why should the news industry be judged only on its profits and not on creating informed citizens? Why not program Intel with a specific goal like "make the best CPUs without having a negative impact on society"? Purpose drift happens because the corporation's goal is too detached from our wellbeing. As a result, corporations can drift very far from society's expectations, to the point where they work against the interests we expect them to preserve (by supplying disinformation, unhealthy food, and so on).

Corporate personhood is a privilege, not a right. Why not grant it only to entities that have a specifically defined purpose and that can show, on their financial statements, that they continue to achieve that purpose? One way the corporation could be boxed in is by creating a fourth financial statement, one that presents metrics showing how well it is achieving its purpose, along with rules that trigger financial hits to the other three statements, or a vote on corporate personhood, if those measures slump. Trust in the media, for example, is currently at an all-time low. If a composite score of such metrics dipped below a certain threshold, it could trigger a consequence, such as doubling the corporate tax rate until public trust is regained. Current financial statements are critically flawed because they don't show us what is being achieved, only that profits are somehow being generated.
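As a sketch of how such a trigger might work – the metric names, values, and threshold below are all hypothetical, and a real version would need carefully designed and audited measures:

```python
# Hypothetical "fourth statement" trigger: an unweighted composite of
# purpose metrics (each scaled 0 to 1) that, when below a threshold,
# doubles the corporate tax rate. Names and numbers are illustrative only.

def effective_tax_rate(base_rate, purpose_score, threshold=0.5):
    """Double the tax rate while the composite purpose score is below the threshold."""
    return base_rate * 2 if purpose_score < threshold else base_rate

# Illustrative metrics for a media corporation.
metrics = {"public_trust": 0.32, "factual_accuracy": 0.55, "correction_rate": 0.40}
score = sum(metrics.values()) / len(metrics)   # simple unweighted composite
rate = effective_tax_rate(0.21, score)         # doubled until trust is regained
```

The point of the sketch is the mechanism, not the numbers: the machine faces an automatic, pre-announced financial consequence the moment its purpose metrics slump.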

Instead of just maximizing profit and drifting away from what humans want, a corporation would have a stated goal and be incentivized to stay on target. AI theorists call this "boxing." By limiting the AI's abilities, it becomes slightly less useful to its creator, but it is easier to control. And control is precisely what we are lacking.


The direct approach is to simply quantify externalities. We can fix corporatism's central mathematical error in the financial statements by modeling the damage to society, allocating it back to companies, and stating it as a hit to net income.

We need a quick review of accounting before we get into the details (don't worry, I'll keep this simple). What accountants refer to as "the bottom line" is not truly the bottom. It only shows income less costs as seen from inside the corporation – as if nothing outside exists. This abstraction is what allows corporations to increase net income by shrugging off costs to society. The math is not complete until the bottom line accounts for the damage operations cause to society. Consider the following imaginary income statement.

Junk Food Corp, Income Statement ($BB)

  • Revenue from selling tainted food, $25
  • Cost of food, -$10
  • Operating expenses, -$5
  • Operating income, $10
  • Taxes at 20%, -$2
  • Net income, $8
  • Allocated cost of externalities, -$12
  • True "bottom line" net income, -$4

If we look at the supposed bottom line (net income), it looks like $8B of value is being created – until, that is, we take into account the damage to society, which would include metabolic syndrome and government-subsidized wages. The corporation knows the real science (it obfuscates it for a reason), and it knows human behavior (our evolution as hunter-gatherers has programmed us to eat fatty and sugary foods first); it can predict what we will do better than we can. The full calculation (the true bottom line) ignores corporate excuses and simply holds the corporation accountable for the damage. The result, in this case, would be a $4B cost to society.
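The hypothetical statement above reduces to simple arithmetic:

```python
# Junk Food Corp's hypothetical income statement as arithmetic (figures in $B).
revenue           = 25.0    # revenue from selling tainted food
cost_of_food      = -10.0
operating_expense = -5.0
operating_income  = revenue + cost_of_food + operating_expense   # 10.0
taxes             = -0.20 * operating_income                     # -2.0
net_income        = operating_income + taxes                     # 8.0, the supposed "bottom line"

externalities     = -12.0   # allocated cost of damage to society
true_bottom_line  = net_income + externalities                   # -4.0: a net cost to society
```

Everything down to `net_income` is what today's income statement shows; the last two lines are the missing step.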

By including externalities on the income statement, the machine is no longer able to cheat. Any profit it makes from damaging or exploiting humanity results in corresponding charges for externalities. Under this more comprehensive accounting, no financial progress can be made through irresponsible decisions. The externality charge could take the form of actual cash compensation distributed to the damaged parties, or some type of non-cash charge that gives the aggrieved party (society) a claim on assets (prior to shareholders and creditors) that must eventually be resolved. For some externalities, there would be a time delay between the initial infliction of damage and the discovery or quantifiability of the damage. At that later point, the full damage (plus interest) could be charged to the corporation's net income, thereby incentivizing careful research and forecasting.
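The time-delayed charge could work like ordinary compound interest; a minimal sketch, with a hypothetical rate and delay:

```python
# Sketch of a time-delayed externality charge: damage inflicted years ago
# is charged when discovered and quantified, plus compound interest for
# the delay period. The 5% rate and 10-year delay are hypothetical.

def delayed_charge(damage, annual_rate, years_delayed):
    """Original damage grown at compound interest over the delay period."""
    return damage * (1 + annual_rate) ** years_delayed

charge = delayed_charge(12.0, 0.05, 10)   # ~19.5 ($B), charged at discovery
```

Because the delayed charge is always larger than the original damage, the corporation gains nothing by hiding harm and waiting it out.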

Corporations have a number of excuses for why this isn't done. They claim, for instance, that externalities are too difficult to calculate, that doing so would be a massive undertaking, and that those performing the calculations would be biased. But the fact is that it is entirely possible to estimate externalities. I know, because I can do it. We in finance build complex models that adequately estimate ethereal things like financial derivatives, and unknowns like the value of oil in a well. Society has built a massive system of accounting and auditing that carefully tracks every dollar. It was work, but we did it. Despite its flaws, the accounting system works. Auditors have a certain level of objectivity, and they have the power to reveal dishonesty inside the corporation, which can send a stock plummeting to zero. What we need is an analytical system – call it "externality analysis and estimation" – that would feed into our existing accounting system.

This modification works by using accounting logic to add a kind of morality to the corporation, instilling positive values that align the AI's goals with our own. What we are talking about here is functional capitalism: encouraging competition and profits without encouraging cheating via the externalization loophole.

Also, let's remember that this is about corporations. There is no requirement for a business, like your local bakery, to take this legal form. But if they do take this legal form and receive all the benefits that come from it, there needs to be a set of controls in place so they cannot abuse these privileges. 


In games like these, where stakes are high and victory improbable, a wise player takes no chances and installs every protection available. My professional recommendation as a financial analyst is to enslave the corporation to society, rather than continue acting as pathetic excuse-making slaves to the dumb-AI. And I wouldn't be shy about doing it, either.

But the difficult part is getting into a position of power, to write this new code into corporate law, and to put humans back in control. Gaining public support is difficult since the problem is too complicated for most people to understand. Any attempt to gain support from the masses will be met with corporate apologism and the tendency of those who are oppressed and brainwashed to defend their oppressor and protect corporate profits. There is also the circular nature of the problem: people living in dysfunctional times cannot conceive of a future state of functionality – nor do they have anything functional to grasp onto so they can make it to that future state.

As grim as this situation seems, let's keep plodding forward. Now that we understand the root problem and have proposed some solutions, let's move to the final step of this analysis where we define possible outcomes and assign probabilities. 


In an unsustainable situation like this, there are really only two possible outcomes. Either the trend continues for some unknowable period of time until we reach some collapse event, or we make changes to the system and fix the root problem. 


The cannibalization of society is the logical conclusion of a game where narrow-AI is programmed to maximize profits by maximizing externalities. If we don't regain control, society eventually collapses under the weight of the corporate AI.

The corporate AI needs humans for everything it does. If humans become too devastated by physical and mental degeneration, disillusioned with society, and unwilling to cooperate, eventually some event could trigger a system implosion. That trigger could be internal, or it could be the society getting outcompeted by another society. Essentially, the corporate AI fails as a result of its own "success," since narrow-AI, lacking smarts, doesn't know when to stop. The question, then, is what happens after the collapse? Does the root problem become evident in hindsight, so that the system can be rebuilt with tighter safeguards? Or will society misunderstand the root cause, rebuild the system the same way it was before, and doom itself to another series of collapses?

One twist in this scenario would be if the narrow, legal-entity-based AI invented a technologically based strong-AI, one that doesn't need human input. One thing that AI theorists have not seemed to grasp yet is that if a strong-AI is ever invented, it will be owned by the corporations, which are narrow-AI. Think about that for a moment. You would have multiple levels of AI, none of which we control. Intelligent AI would be at the service of an idiot-AI master. To gain control of the strong-AI, we would first need to get through the narrow-AI gatekeeper. There's also the possibility that the strong-AI could repurpose the corporate system – but would this be for better or worse? Anyhow, even if a strong AI is never invented, corporate-AI is currently inventing and programming other narrow, technologically based AI to do tasks like scan user data, run market exchanges, and trade in financial markets. Narrow TECH-AI is allowing corporations to become less and less dependent on their human employees.


The other possible outcome is that we regain control of the system long enough to rewrite the flawed corporate program.

Laws restoring judgment to humans – such as conditional corporate personhood, "boxing" the corporation with a more specific purpose, and simulating morality in the profit calculation – could put the machine back on a leash.

Of course, the conglomerate corporate entity will fight any law that conflicts with its current profit maximizing program. But since it is not intelligent, it will also mindlessly follow any new program. So, the trick is just to get that program into place – at a time when humans are rapidly losing leverage.


Since it requires no action, I am going to assign Outcome #1 as the base case (the one that is most probable). Humans tend to drift along, doing the same thing and getting the same result. Outcome #2 is less likely, since it would require radical and successful action. The situation is not entirely hopeless, but successful action will require a firm understanding of the problem.


Corporatism is broken capitalism and it is the root problem America faces in the 21st century. It is from this root that the majority of societal problems originate. The most elegant way of thinking about this complex phenomenon is by understanding it as an "AI takeover," a takeover in which the rules-based system we built – which is our entire society – no longer takes orders from us.

Perhaps creating some form of non-human control system was inevitable. We evolved in small groups, so we lack the natural ability to organize and control large groups. We needed, then, to invent something to meet this need. And the corporation does fulfill that need. But in summoning these entities, we didn’t take enough care and failed to program the safeguards that would allow us to retain control over them.

If there is anything positive to say about games that have already been played most of the way through, and which you are losing badly, it's that there aren’t very many moves left on the board. No matter the effort required, and no matter how low the probability of success, your last available moves are the only ones you've got. The other positive aspect of this situation is that it's easier to solve all problems simultaneously by addressing root cause, rather than chasing after the symptoms for decades, or even centuries, and getting nowhere.

We need to understand the problem, and I hope I have provided some of the insight that is needed.