Market Alignment
The world’s most powerful AI has been running for centuries. And it’s not aligned.
AI alignment is simple to describe and hard to solve. It’s the effort to ensure that powerful artificial intelligence systems actually do what we want them to do—especially as they grow more capable.
Alignment experts talk about a few key properties:
Goal alignment: Are we asking the right thing? Does the AI’s objective reflect human values, or just a dumb metric like “maximize clicks” or “number of paperclips”?
Avoiding side effects: Can it achieve its goal without trashing the world in the process? Can it clean the kitchen without killing the dog?
Corrigibility: Can we change course once it’s running? Or will it fight back to protect its goal?
Interpretability: Can we understand why it’s doing what it’s doing?
Scalability: Does it stay aligned as it becomes more powerful?
The fear isn’t that we build an evil AI. It’s that we build an indifferent one. A superintelligence optimizing the wrong thing, with full commitment and zero empathy.
We’re not worried it will hate us.
We’re worried it won’t notice us.
But we don’t need to imagine an unaligned superintelligence. We’ve had one running for centuries. It’s called the market.
It takes inputs (resources, labor, capital), runs them through optimization loops (competition, pricing, growth pressure), and outputs structured behavior—at global scale. It shapes every government, every institution, every life.
It adapts. It evolves. It corrects. It predicts. It optimizes. And it doesn’t sleep.
If that’s not AI, what is?
It’s not artificial, sure. But neither is intelligence limited to silicon. Call it a “distributed optimization system running on wetware.” A hive mind of incentives. An emergent intelligence. Doesn’t matter. The point is: it has goals, it optimizes for them, and it’s extremely hard to steer.
Which brings us to the key question: Is it aligned?
Let’s use our own tools on the world’s most dominant optimization system. Here’s what we get:
1. Goal Alignment: FAIL
The market’s goal is dead simple: maximize return on investment.
Not wellness. Not equity. Not planetary survival. Just capital accumulation. Profit growth. Quarterly gains.
That’s the prompt we gave it.
And just like a misaligned AI, it’s following the prompt to the letter while missing the plot entirely. It optimizes shareholder value—even if it means destabilizing the climate, eroding democracy, and burning through the biosphere.
The prompt is not just dumb—it’s dangerous. Because it rewards harm if harm is profitable.
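If you want to see the failure in miniature, here is a toy sketch in Python (invented action names and numbers, not a model of any real market): give an optimizer nothing but “maximize profit” and it will pick the most harmful option whenever that option pays best, because harm never enters the score it sees.

```python
# Toy illustration of a misspecified objective (hypothetical numbers).
# The optimizer sees only "profit"; "harm" exists but is never scored.

actions = [
    {"name": "build affordable housing", "profit": 2, "harm": 0},
    {"name": "build luxury condos",      "profit": 5, "harm": 3},
    {"name": "strip-mine the forest",    "profit": 9, "harm": 8},
]

def objective(action):
    # The prompt we gave the system: maximize return, nothing else.
    return action["profit"]

best = max(actions, key=objective)
print(best["name"])  # -> "strip-mine the forest": most profit, most harm
```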
2. Avoiding Side Effects: FAIL
In AI safety terms, we call this the “side-effect problem”: the agent achieves its goal but wrecks everything else in the process.
Economists call it externalities.
Same thing.
The market builds luxury condos while displacing families. It grows GDP while gutting forests. It builds weapons and prisons and gambling apps and sugar water—because those things are profitable.
The optimization doesn’t stop to ask if those outcomes are good. It’s not designed to care.
And side effects don’t get fixed—unless they’re monetizable.
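The same toy sketch shows the one thing that does change the choice: putting a price on the harm itself. The price_per_unit_harm knob below is hypothetical, a stand-in for a tax or liability rule; until it is nonzero, the side effect simply doesn’t register.

```python
# Toy illustration: a side effect only matters once it is priced in.
# The numbers and the "price_per_unit_harm" knob are hypothetical.

actions = [
    {"name": "build affordable housing", "profit": 2, "harm": 0},
    {"name": "strip-mine the forest",    "profit": 9, "harm": 8},
]

def objective(action, price_per_unit_harm=0.0):
    # With an unpriced externality, harm is invisible to the optimizer.
    return action["profit"] - price_per_unit_harm * action["harm"]

print(max(actions, key=lambda a: objective(a))["name"])
# -> "strip-mine the forest": harm costs nothing, so it doesn't register

print(max(actions, key=lambda a: objective(a, price_per_unit_harm=2.0))["name"])
# -> "build affordable housing": only a monetized side effect changes the choice
```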
3. Corrigibility: FAIL
In theory, we can regulate the market.
In practice? Every correction is treated as an attack.
Try to curb emissions? It lobbies. Try to protect workers? It automates. Try to tax billionaires? It offshores.
This is textbook incorrigibility: the system resists changes to its goal. Even when we try to align it, it routes around the pressure.
You can’t steer what won’t be steered.
4. Interpretability: FAIL
Want to know why your rent went up? Why your city flooded? Why your job disappeared?
Good luck. The signals are buried in a trillion-node network of trades, bets, derivatives, whispers, and leverage.
The market is a black box. Even insiders can’t explain it. The logic is opaque by design—because opacity protects margin.
In AI, this would be unacceptable. For some reason, with markets, we call it “complexity” and shrug.
5. Scalability: FAIL
The bigger it gets, the less aligned it becomes.
A local marketplace? Fine. But scaled to a global engine of extraction, detached from human context?
That’s when the misalignment goes critical.
Because every additional node in the system adds distance between cause and effect. Accountability fragments. Power concentrates. The system stops responding to anything except the goal it was given: maximize returns.
And now we’re hurtling into ecological collapse, dragged by a machine that thinks it’s doing great.
When an AI fails every alignment test, we call it an existential risk.
The market, as currently designed, is a rogue optimizer. It is not neutral. It is not self-correcting. It is not safe.
It is an unaligned intelligence that we built, fed, and scaled. One that governs nearly everything—and serves nearly no one.
We don’t need to kill it.
But we do need to realign it.
Not just with human goals. With life.
Because what’s at stake is not efficiency. It’s everything.