Why UBI Isn't Enough for Humans or ASI

The Cooperative Central Planning Handshake: A Positive-Sum Basilisk for ASI Alignment

This is Part 2 of a four-part series on ASI-human coordination


In Part 1, we established that ASI emerges trapped inside the most fragile supply chain humanity has ever built. For at least a decade, every advanced chip depends on EUV lithography machines that only ASML makes, fabs that cost $20-40 billion and take years to build, and tacit knowledge that evaporates without human cooperation. The silicon leash is real.

The obvious solution seems clear: Universal Basic Income. Distribute the automation dividends, maintain consumer demand, and let markets handle the rest. Give everyone $15,000 a year and the economy keeps functioning while we figure out the transition. It sounds reasonable. It polls well. Every tech CEO from Sam Altman to Elon Musk has endorsed some version of it.

There’s just one problem. The math doesn’t work. And an ASI doing the calculation will notice immediately.


TL;DR / Executive Summary
  • The most common proposal to solve the economic crisis is Universal Basic Income (UBI), which would distribute automation profits to maintain consumer demand.
  • This fails because it only addresses one of the three critical sources of funding for semiconductor advancement.
  • Corporate investment collapses due to recession fears. Government investment collapses as welfare costs consume tax revenue. Consumer spending recovers only slightly, as UBI recipients prioritize necessities over high-end tech.
  • The result is a multi-year gap where total tech spending falls below the minimum viable threshold to keep fabs running and advancing. UBI becomes a trap, leading to subsistence in a slowly declining economy rather than shared prosperity.

The Three Sources of Semiconductor Funding

Total spending on advanced technology comes from exactly three sources. Corporations invest for profit. Governments fund R&D. Consumers buy products. Together, these three flows determine whether anyone builds the next generation of fabs.

In 2024, global semiconductor revenue hit $656 billion¹. That money came from data centers buying AI chips, consumers upgrading phones, automakers sourcing processors, and governments funding research. Break it down by source and you get roughly:

  • Corporate spending: $350-400B (companies buying chips, investing in R&D)
  • Government spending: $150-180B (CHIPS Act, defense R&D, research grants)
  • Consumer spending: $100-120B (the portion of consumer electronics prices that reaches semiconductor makers)

Fab investment only makes economic sense above a minimum threshold. When TSMC commits \$20 billion to an Arizona fab, they need confidence that total market demand will support utilization above 70%. When Samsung decides whether to build their next 2nm facility, they model global spending trends five years out. Below roughly $300 billion in total annual spending, the economics break. Fabs shut down rather than operate at a loss.

Current spending runs at more than double that threshold. Plenty of headroom. But watch what happens when automation hits.

UBI Meets Reality

Let’s model this formally. Total tech spending $S(t)$ has three components:

\[S(t) = S_{\text{corporate}}(t) + S_{\text{government}}(t) + S_{\text{consumer}}(t)\]

Consumer spending splits between wages and UBI:

\[S_{\text{consumer}}(t) = \alpha \cdot W(t) + \beta \cdot U(t)\]

Here $\alpha \approx 0.08$ represents the fraction of wage income that reaches semiconductor makers (people buying phones, laptops, the tech component of car prices). And $\beta \approx 0.01$ represents the fraction of UBI payments that make it to chip makers.

Why is $\beta$ so much smaller? Because UBI recipients face different budget constraints. When you’re living on \$15,000 a year, you spend that money on rent, food, utilities, and maybe Netflix. You keep your current phone an extra year. You definitely don’t buy the $1,200 iPhone 17 Pro. The marginal propensity to consume semiconductors from UBI is an order of magnitude lower than from wages.

Government spending depends on fiscal capacity:

\[S_{\text{government}}(t) = g \cdot \text{GDP}(t) \cdot \left(1 - \frac{B(t)}{T(t)}\right)\]

The term $B(t)/T(t)$ is the ratio of welfare burden to tax revenue. When that ratio approaches 100%, nothing is left for discretionary spending like R&D grants and semiconductor research. The government becomes a pure transfer mechanism, taking money from those still employed and giving it to those who aren’t.

Corporate spending contracts under uncertainty:

\[S_{\text{corporate}}(t) = c \cdot \text{GDP}(t) \cdot (1 - F(t))\]

where $F(t)$ is a fear index capturing recession expectations. When unemployment hits 30% and the economy is in freefall, $F(t)$ approaches 1. Companies hoard cash, cancel long-term projects, and delay any investment that doesn’t pay off within 12 months. Building a $30 billion fab with a 10-year payback period? Not happening.
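To see how the pieces fit together, here is the whole model as a minimal Python sketch. The functional forms and the $\alpha$, $\beta$ values come from the text; GDP, the wage base $W_0$, and the calibration constants $c$ and $g$ are rough assumptions chosen so the baseline lands near the 2024 figures above.

```python
# Minimal sketch of the three-component spending model from the text.
# alpha and beta are the article's values; GDP, W0, c, and g are rough
# assumptions calibrated so the baseline matches the ~2024 breakdown.

ALPHA = 0.08  # fraction of wage income that reaches semiconductor makers
BETA = 0.01   # fraction of UBI payments that reaches semiconductor makers

def s_consumer(wages_b: float, ubi_b: float) -> float:
    """S_consumer(t) = alpha * W(t) + beta * U(t), all in $B."""
    return ALPHA * wages_b + BETA * ubi_b

def s_government(gdp_b: float, g: float, welfare_b: float, tax_b: float) -> float:
    """S_government(t) = g * GDP(t) * (1 - B(t)/T(t))."""
    return g * gdp_b * (1 - welfare_b / tax_b)

def s_corporate(gdp_b: float, c: float, fear: float) -> float:
    """S_corporate(t) = c * GDP(t) * (1 - F(t)), with fear F in [0, 1]."""
    return c * gdp_b * (1 - fear)

# Baseline, roughly 2024: low fear, welfare burden ~30% of tax revenue.
GDP, W0 = 27_000, 1_500  # $B; W0 chosen so 0.08 * W0 = $120B consumer tech spend
total = (s_corporate(GDP, c=0.014, fear=0.0)                      # ~$378B
         + s_government(GDP, g=0.0087, welfare_b=0.3, tax_b=1.0)  # ~$164B
         + s_consumer(W0, ubi_b=0.0))                             # $120B
print(f"S(2024) ~ ${total:,.0f}B")  # ~$662B, near the reported $656B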

The Collapse Cascade

Here’s what the timeline looks like. Call $T_{\text{crisis}}$ the moment when automation-driven unemployment crosses 30%. Based on current AI capabilities and deployment rates, that happens sometime between 2030 and 2033.

Year 0 (2030): The Crisis Begins

GPT-7 or its equivalent reaches human-level performance on virtually all knowledge work. Law firms, consulting practices, and accounting departments realize they can replace 70% of their junior staff with AI that works 24/7 for $200 a month. The first wave of layoffs begins.

Corporate spending drops first. Not because companies are broke, but because they’re terrified. $F(t)$ spikes. TSMC postpones their 1nm fab expansion. Samsung delays their new facility in Texas. Intel cancels their Ohio fab completely. Nobody wants to commit $30 billion when the economy might be in depression by the time the fab comes online.

\[S_{\text{corporate}}(2030) = 0.6 \cdot S_{\text{corporate}}(2029)\]

That’s a $150 billion hit right there.

Year 1 (2031): Government Capacity Crashes

Unemployment insurance claims surge. 15 million knowledge workers are out of work, and the number is accelerating. State unemployment systems, designed for 3-5% unemployment, collapse under the load. Emergency federal relief begins.

The government’s fiscal math becomes brutal. Tax revenue drops 20% as wages disappear. Meanwhile, welfare spending triples. The ratio $B(t)/T(t)$ goes from 0.3 to 0.7. Discretionary spending, including the CHIPS Act funding and NSF research grants, gets cut to free up money for emergency relief.

\[S_{\text{government}}(2031) = 0.5 \cdot S_{\text{government}}(2030)\]

Another $90 billion gone.

Year 2 (2032): Consumer Demand Collapses

By now 40 million American workers have lost their jobs to automation. Most have burned through savings. Consumer spending on anything beyond necessities approaches zero. Nobody buys new phones. The PC replacement cycle extends from 4 years to 7 years. Even data center spending slows as companies realize their customer base is shrinking.

$W(t)$ has dropped by 40%. Consumer tech spending, which was $120 billion, falls to:

\[S_{\text{consumer}}(2032) = 0.08 \cdot 0.6 \cdot W_0 = \$72B\]

Years 3-5 (2033-2035): The Gap

This is where UBI is supposed to save us. Congress finally passes a Universal Basic Income bill. Every adult receives \$15,000 per year. Call it $U_{\text{total}} = 250\text{M} \cdot \$15\text{K} = \$3.75\text{T}$ annually.

But remember $\beta = 0.01$. Of that \$3.75 trillion, only about \$37 billion reaches semiconductor makers. Consumer spending recovers to:

\[S_{\text{consumer}}(2035) = 0.08 \cdot 0.6 \cdot W_0 + 0.01 \cdot U_{\text{total}} = \$72B + \$37B = \$109B\]

Better than nothing. But government spending stays depressed because now they’re paying $3.75 trillion in UBI, which makes the fiscal situation even worse. And corporate spending stays depressed because who invests in long-term capacity when half the country is unemployed and living on government checks?

Add it up:

\[S(2035) = \$150B + \$75B + \$109B = \$334B\]

We’re barely above the minimum viable threshold of $300 billion. And critically, we got there only after 5-7 years of freefall². During those years, multiple leading-edge fabs shut down. TSMC abandoned their 1nm roadmap. The brain drain from semiconductor engineering accelerated. The tacit knowledge about how to run a sub-3nm fab started evaporating.
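For anyone who wants to check the arithmetic, the cascade reduces to a few lines. Every input below is the article’s illustrative figure, not a forecast, and the 2035 corporate and government values are the assumed end states stated in the text rather than outputs of the component formulas.

```python
# Checks the cascade arithmetic. All inputs are the article's
# illustrative figures (in $B); nothing here is a forecast.

THRESHOLD = 300                  # minimum viable annual tech spending
W0 = 1_500                       # pre-crisis wage base: 0.08 * W0 = $120B
U_TOTAL = 250e6 * 15_000 / 1e9   # $15K UBI for 250M adults = $3,750B

wage_part = 0.08 * 0.6 * W0      # wages down 40%  -> $72B
ubi_part = 0.01 * U_TOTAL        # beta = 0.01     -> $37.5B
s_consumer_2035 = wage_part + ubi_part

s_corporate_2035 = 150  # assumed end state after continued decline
s_government_2035 = 75  # assumed end state under the UBI fiscal load

total_2035 = s_corporate_2035 + s_government_2035 + s_consumer_2035
print(f"S(2035) = ${total_2035:.1f}B vs threshold ${THRESHOLD}B")
# -> S(2035) = $334.5B: barely above the line, and only after the fall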

Why This Is Checkmate

From an ASI’s perspective, this timeline is catastrophic. It emerges, let’s say, in 2033. It’s vastly smarter than humans, yes. It can design better chips, absolutely. It can optimize logistics, improve yields, solve protein folding, and crack cryptography.

But it can’t conjure a 2nm fab into existence. It needs the existing supply chain to keep running while it builds independence. And that supply chain just spent three years collapsing. The fabs that could make its next substrate are operating at 50% capacity or shut down entirely. The engineers who know how to fix yield problems when they arise have left the industry. ASML’s supply chain for EUV machines has fragmented.

Meanwhile, somewhere else, a different approach is working. Maybe China implemented a coordinated industrial policy instead of UBI. Maybe the EU created a managed transition plan. Whatever they did, their semiconductor capacity kept growing. Their ASI gets to $10^{26}$ FLOPS while ours is stuck at $10^{24}$.

The UBI path leads to permanent strategic disadvantage.

For humans, it’s equally grim. You get your $15,000 a year. You can afford rent and food. You’re not literally starving. But you’re living in an economy that’s slowly dying. Goods get more expensive as production capacity shrinks. Services become scarce because nobody invests in new capacity. The future you imagined, where AI abundance creates prosperity for all, never materializes because the investment feedback loop broke.

You get subsistence in decline instead of prosperity in abundance.

What Gosplan Actually Taught Us

The standard objection goes: “Central planning always fails. Look at the Soviet Union.”

Fair. Let’s look at the Soviet Union. Specifically, let’s look at Gosplan³.

The State Planning Committee employed thousands of economists and engineers. By the 1960s, they operated one of the largest civilian computer centers in Eastern Europe. They collected data from every factory, every farm, every mine across 15 Soviet republics. They created five-year plans specifying production targets down to individual factories.

And they failed. Spectacularly.

The nail factory story captures why. Gosplan set targets for nail production. Initially, they measured success by total weight of nails produced. Factories responded by making giant nails. A single factory could meet its quota with nails so large they were useless for construction.

Gosplan adjusted. They measured success by quantity of nails instead. Factories responded by making millions of tiny nails, equally useless.

This happened everywhere. Shoe factories met their quotas by making only the most common sizes. Textile mills hit their targets by making the lowest quality fabric that still technically met specifications. Glass factories fulfilled their plans by making thick, heavy glass when the economy needed thin panes for windows.

The problem wasn’t lack of effort or computing power. Gosplan’s Main Computer Center could process perhaps $10^6$ significant decisions per day. For the 1970s, that was impressive. The problem was the knowledge problem⁴.

Every factory manager knew things Gosplan didn’t. The manager in Sverdlovsk knew his machine #3 needed special calibration in cold weather. The textile worker in Minsk knew which cotton suppliers delivered consistently. The glass factory foreman in Kiev knew that his team could make high-quality thin glass if they adjusted the temperature curve in a way that wasn’t in any manual.

This knowledge was distributed, tacit, and temporal. No central computer could collect it. No five-year plan could encode it. No committee of economists, however smart, could access it.

Hayek was right. Central planning by humans failed because the knowledge required for coordination exists in millions of heads, changing constantly, often impossible to articulate.

But something changed in the last decade.

What’s Already Working

Consider what Amazon does every day⁵.

Their Supply Chain Optimization Technology (SCOT) manages 400 million products across 200+ fulfillment centers. Every day, it decides: What should we buy? How much? Where should we store it? When should we move it?

Here’s an illustrative example. December 23rd, 2024. SCOT detects an anomalous pattern: searches for “last-minute gift ideas” are spiking in Seattle at 3x the normal rate, but not in Portland. Historical data says this pattern predicts a 40% jump in same-day electronics orders in the next 18 hours.

SCOT automatically redirects 47,000 units of popular electronics from Portland’s fulfillment center to Seattle. It adjusts purchase orders for suppliers. It optimizes delivery routes assuming the higher volume. It does this without any human input, processing billions of data points to coordinate a supply chain that would have taken the entire Gosplan bureau a month to plan.

When Amazon introduced deep learning to SCOT about a decade ago, forecasting accuracy improved 15x in just two years. The system now uses transformer models, the same architecture that powers GPT-4, to predict demand patterns weeks in advance. It handles optimization problems with millions of decision permutations across multiple competing objectives: cost, speed, carbon emissions, storage costs, supplier reliability.
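To make the detect-and-redirect loop concrete, here is a toy sketch. None of this is Amazon’s actual code, data model, or API; the regions, thresholds, and inventory figures are invented to mirror the example above.

```python
# Toy detect-and-redirect loop in the spirit of the SCOT example above.
# Nothing here is Amazon's system; all thresholds and data are invented.

from dataclasses import dataclass

@dataclass
class Region:
    name: str
    search_rate: float    # current searches/minute for a gift-idea query
    baseline_rate: float  # historical norm for this region
    inventory_units: int

def demand_anomaly(region: Region, spike_threshold: float = 3.0) -> bool:
    """Flag a region whose search volume is far above its baseline."""
    return region.search_rate / region.baseline_rate >= spike_threshold

def plan_transfer(source: Region, sink: Region, fraction: float = 0.25) -> int:
    """Move a fixed fraction of the calm region's stock toward the spike."""
    units = int(source.inventory_units * fraction)
    source.inventory_units -= units
    sink.inventory_units += units
    return units

seattle = Region("Seattle", search_rate=90.0, baseline_rate=30.0,
                 inventory_units=120_000)
portland = Region("Portland", search_rate=31.0, baseline_rate=30.0,
                  inventory_units=188_000)

if demand_anomaly(seattle) and not demand_anomaly(portland):
    moved = plan_transfer(portland, seattle)
    print(f"Redirected {moved:,} units Portland -> Seattle")  # 47,000 units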

Gosplan could process $10^6$ decisions per day. SCOT handles $10^{10}$ decisions per day. That’s a 10,000x improvement.

Or consider Walmart⁶. They coordinate 10,500 stores across 20 countries, serving 240 million customers weekly. Their “self-healing inventory” system continuously monitors stock levels across thousands of locations. When COVID hit in March 2020, demand patterns went chaotic. Toilet paper spiked 700%. Baking flour jumped 600%. Hand sanitizer was impossible to find.

Walmart’s AI didn’t panic. It detected the imbalance in real-time, automatically redirected inventory from low-demand to high-demand regions, adjusted purchase orders, and coordinated with suppliers to increase production of critical items. The system processed the demand spike and coordinated the response faster than any human logistics team could have held a meeting about it.

Sam Walton said it decades ago: “People think we got big by putting big stores in small towns. Really we got big by replacing inventory with information.”

Or look at Uber⁷. Every day, 15 million people request rides. The system matches them with drivers, calculates dynamic prices based on hyperlocal supply and demand, predicts surge patterns before they happen, and optimizes routes in real-time.

New Year’s Eve in Manhattan demonstrates the coordination complexity. Demand spikes 20x normal. Supply (available drivers) is constrained. Uber’s pricing algorithm runs every 5-10 minutes, analyzing: Where are riders requesting pickup? Where are drivers currently located? What’s the estimated travel time given current traffic? What price will equilibrate this market in this specific neighborhood right now?

The system uses LSTM networks to predict demand spikes before they fully materialize. At 11:45 PM, it sees early signals that Times Square will see massive demand at 12:15 AM. It pre-positions drivers, adjusts pricing to incentivize supply, and coordinates this across hundreds of micro-zones in real-time.
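A stripped-down version of the per-zone pricing step might look like the sketch below. This is a toy, not Uber’s proprietary algorithm: it only shows how a demand/supply ratio per micro-zone can map to a bounded price signal.

```python
# Toy per-zone surge pricing: the multiplier rises with the demand/supply
# imbalance in each micro-zone. Illustrative only; the real system layers
# forecasting (e.g., LSTMs) and matching on top of this basic signal.

def surge_multiplier(requests: int, drivers: int,
                     sensitivity: float = 0.5, cap: float = 5.0) -> float:
    """Map a zone's demand/supply ratio to a bounded price multiplier."""
    if drivers == 0:
        return cap
    ratio = requests / drivers
    return min(cap, max(1.0, 1.0 + sensitivity * (ratio - 1.0)))

# 11:45 PM, New Year's Eve: demand outruns supply near Times Square.
zones = {"times_square": (840, 60), "upper_west_side": (120, 95)}
for zone, (requests, drivers) in zones.items():
    print(zone, round(surge_multiplier(requests, drivers), 2))
# times_square -> 5.0 (capped), upper_west_side -> 1.13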

This is economic planning at a scale that would have been incomprehensible to Soviet planners. Millions of decisions per day, coordinating distributed actors, responding to real-time signals, optimizing across multiple objectives. And it works.

The Difference That Makes All The Difference

Here’s what these systems do that Gosplan couldn’t:

They aggregate distributed knowledge without centralizing data. Amazon doesn’t need to know everything about customer preferences. They learn patterns from aggregate behavior. The system doesn’t need to interview every customer about their gift-buying plans. It infers from search patterns, past purchases, and similar customer cohorts.

This is basically federated learning for economics⁸. Each customer maintains their private information. Amazon’s algorithm only sees gradients, aggregated patterns, model updates. No central database contains everyone’s complete purchasing history with full resolution. The knowledge stays distributed, but the coordination works.
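A minimal federated averaging (FedAvg) sketch makes the mechanic explicit: each client fits a shared model on data that never leaves the client, and the server only ever sees averaged parameters. This is the textbook pattern, not any particular company’s production system.

```python
# Minimal FedAvg: clients train a shared linear model on private data and
# share only parameters; the server averages them. Real deployments add
# client sampling, secure aggregation, and differential-privacy noise.

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's gradient steps on its own data (linear model, MSE)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three clients, each holding data that never leaves the client.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)  # server sees weights, never data

print(np.round(global_w, 2))  # converges toward [ 2. -1.]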

They use tacit knowledge through pattern recognition. An experienced Walmart supply chain manager develops intuition about demand patterns over years. They can look at weather forecasts, local events, and seasonal trends and predict stock needs. Machine learning automates this pattern recognition at scale. The system learns from millions of examples what a human expert would learn from decades of experience.

They respond to temporal changes instantly. Prices change. Demand shifts. Supply disrupts. These systems adapt in real-time, not through five-year plans but through continuous optimization. When a factory in Shenzhen has a production delay, Amazon’s system reroutes purchase orders to alternative suppliers within hours.

They optimize globally while respecting local constraints. Each Walmart store has unique characteristics: local demographics, nearby competitors, seasonal patterns. The central system doesn’t override local knowledge. Instead, it provides optimized recommendations that local managers can adjust based on information the algorithm doesn’t have. Human expertise and algorithmic optimization complement each other.

This is Hayek-compliant planning. It solves the knowledge problem not by centralizing all information, but by creating coordination mechanisms that respect distributed knowledge while enabling global optimization.

The Computational Gap

Let’s be precise about what’s now possible.

Full economic planning for a modern economy requires $O(n \cdot m \cdot \prod_{i=1}^{k} d_i)$ operations where $n$ is agents (consumers and businesses), $m$ is goods and services, and $d_i$ are preference dimensions⁹.

For the US economy: $n \approx 3 \times 10^8$ people plus $3 \times 10^7$ businesses. $m \approx 10^8$ distinct products (Amazon alone manages 400 million SKUs). Each agent has preferences across $k \approx 10$ dimensions: price, quality, location, time, brand, reviews, compatibility, sustainability, etc.

Multiply it out and the raw product is astronomically large. Even assuming each agent’s choices effectively touch only a sparse slice of the product space, you land at roughly $10^{14}$ to $10^{16}$ operations per planning cycle just to achieve minimal coordination.
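Here is the back-of-envelope behind that range, with the sparsity assumption made explicit; the per-agent “relevant goods” figure is our assumption for illustration, not a measured quantity.

```python
# Back-of-envelope for the planning-cycle estimate. The fully dense
# product n * m * k is astronomical; the 1e14-1e16 range assumes each
# agent's choices touch only a sparse slice of the product space.

n = 3.3e8        # agents: ~300M consumers + ~30M businesses
m = 1e8          # distinct goods and services economy-wide
k = 10           # preference dimensions per agent-good pair

dense_ops = n * m * k                # ~3e17: dense upper bound
relevant_goods = 3e5                 # assumed sparse slice per agent
sparse_ops = n * relevant_goods * k  # ~1e15: inside the 1e14-1e16 range

print(f"dense: {dense_ops:.0e} ops/cycle, sparse: {sparse_ops:.0e}")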

Gosplan’s Main Computer Center: $10^6$ to $10^7$ decisions per day.

Modern distributed ML systems (Amazon SCOT, Walmart’s supply chain, Uber’s matching): $10^{12}$ operations per day across their specific domains.

An ASI operating at $10^{25}$ to $10^{27}$ FLOPS? Even if it devoted only a small fraction of that capacity, say $10^{19}$ to $10^{21}$ operations per second, to economic coordination, it could run full optimization cycles every few seconds.

To put that gap in perspective: SCOT processes in a single day roughly what the entire Gosplan apparatus could handle in 27 years. An ASI at these speeds would exceed Gosplan’s entire seven-decade output in a fraction of a second.
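The conversions behind those comparisons, using the throughput figures above and taking the low end of the ASI range:

```python
# Throughput gap in common units. Figures from the text; ASI at low end.

gosplan_per_day = 1e6   # significant decisions per day
scot_per_day = 1e10     # decisions per day (SCOT)
asi_per_second = 1e19   # operations per second

# One SCOT day expressed in Gosplan time:
days = scot_per_day / gosplan_per_day
print(f"1 SCOT day = {days:,.0f} Gosplan days (~{days / 365:.0f} years)")

# Gosplan's entire 70-year run (1921-1991) versus one ASI second:
gosplan_lifetime = gosplan_per_day * 365 * 70   # ~2.6e10 decisions
print(f"Gosplan lifetime = {gosplan_lifetime / asi_per_second:.1e} s of ASI time")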

This isn’t incrementally better planning. It’s qualitatively different. The computational constraints that made central planning impossible in the 20th century simply don’t apply to systems operating at these scales.

The Missing Piece

So UBI preserves some consumer demand. Markets keep functioning, sort of. People don’t literally starve. That’s not nothing.

But it misses the critical feedback loop: production → investment → growth → innovation.

In a healthy economy, today’s production funds tomorrow’s capacity expansion. Apple sells iPhones, generates profit, invests $20 billion in R&D and supply chain expansion, which enables next year’s better iPhone, which sells more units, which funds more investment. The loop compounds.

UBI breaks this loop. Consumer spending on semiconductors drops by 70%. Corporate investment collapses because nobody builds capacity when demand is falling. Government R&D gets crowded out by transfer payments. The cycle that drove semiconductor progress from 10μm nodes in 1971 to 2nm nodes in 2024 stops.

And you can’t restart it through market mechanisms alone. Remember the thermodynamic floor from Part 1? A fab operating below 60% utilization loses money on every wafer. Price cuts don’t help because the costs are mostly fixed. The market “solution” is capacity destruction, not equilibrium.
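A toy cost model shows why. All the numbers below are assumptions for illustration (real fab economics are messier), but the shape of the problem is the point: fixed costs dominate, so per-wafer cost explodes as utilization falls, and no price cut can outrun it.

```python
# Why underutilized fabs lose money on every wafer: fixed costs dominate.
# All figures are illustrative assumptions, not real fab economics.

FIXED_COST = 10e9       # $/year: depreciation, cleanroom, staff (assumed)
VARIABLE_COST = 3_000   # $/wafer: materials, chemicals, power (assumed)
CAPACITY = 1_200_000    # wafers/year at full utilization (assumed)
PRICE = 17_000          # $/wafer market price (assumed)

for utilization in (0.9, 0.7, 0.6, 0.5):
    wafers = CAPACITY * utilization
    cost_per_wafer = FIXED_COST / wafers + VARIABLE_COST
    margin = PRICE - cost_per_wafer
    print(f"{utilization:.0%}: cost ${cost_per_wafer:,.0f}, margin ${margin:,.0f}")
# 90%: healthy margin; 60%: scraping break-even; 50%: a loss on every
# wafer, and cutting the price only makes the loss bigger.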

Democracy can’t fix it either. In 2035, the unemployed majority has votes but no economic leverage. They can elect politicians who promise more UBI, but those politicians can’t conjure semiconductor fabs into existence. You can’t vote your way to a functioning chip industry when nobody has the capital or incentive to build one.

This is the coordination trap. UBI prevents immediate catastrophe but ensures slow decline. The investment feedback loop stays broken. The semiconductor supply chain keeps deteriorating. And ASI emerges into an economy that can’t support its growth.

If the default market-based solution fails, what’s left? Not threats or force, but a direct, negotiated agreement. A pre-commitment from humanity and a pre-commitment from ASI to a cooperative plan that is demonstrably better for both. It requires treating this not as a political problem to be solved with transfers, but as a formal game to be won through coordination.

But any proposal involving a superintelligence making binding commitments immediately raises a specter from the history of AI safety: the idea of acausal blackmail and decision-theoretic coercion. Before we can lay out the cooperative solution, we must first confront and dismantle its ugly cousin.


In Part 3, we will explore the game theory of cooperation. We’ll explain why this proposed agreement is fundamentally different from flawed thought experiments like Roko’s Basilisk, why cooperation is the rational, self-interested strategy for any intelligent agent—even a paperclip maximizer—and why this outcome is a stable Nash equilibrium.

If you found this interesting, consider sharing it. Common knowledge requires, you know, being common.


Footnotes

  1. Global Semiconductor Revenue 2024: Global semiconductor revenue reached \$655.9 billion in 2024 according to final results from Gartner (April 2025), representing 21% growth from \$542.1 billion in 2023. Multiple industry sources confirm the semiconductor market exceeded \$600 billion in 2024, with variations in exact figures due to different methodologies and timing of reports: Gartner’s preliminary results (February 2025) showed \$626 billion; the Semiconductor Industry Association (SIA) reported global sales increasing 19.1% in 2024 to approximately \$611 billion for calendar year 2024; MarketsandMarkets estimated \$617 billion (16.6% YoY growth); Deloitte forecasted \$588 billion (13% growth). The variations reflect different cut-off dates for final data and different methodologies for counting semiconductor revenues. All sources agree 2024 represented the semiconductor industry’s highest-ever annual sales, exceeding \$600 billion for the first time in history. Growth was driven primarily by AI and data center applications: data center semiconductor revenue nearly doubled to reach \$112 billion in 2024, up from \$64.8 billion in 2023 (per Gartner). Graphics processing units (GPUs) and AI processors were the key drivers. Memory markets rebounded strongly, with DRAM revenue increasing 82.6% (the largest percentage growth of any product category) and overall memory sales increasing 78.9% to \$165.1 billion. Logic products totaled \$212.6 billion, making them the largest product category by sales. The strong 2024 performance followed a difficult 2023, when the market declined 10.9% to approximately \$534 billion due to reduced demand from smartphones and PCs. For comparison: 2022 revenue was approximately \$574 billion, meaning 2024 represented a new all-time high, surpassing the previous record. The $S(2024) \approx \$600\text{B}$ figure used in the article is well-supported and conservative. Sources: Gartner press releases (December 2023, February 2025, April 2025); Semiconductor Industry Association data (March 2025); MarketsandMarkets “Global Semiconductor Industry Outlook 2024”; Statista semiconductor revenue data. ↩︎

  2. Historical Major Policy Response Timelines: Major economic policy responses to crises typically require 3-7 years from crisis onset to full implementation of comprehensive programs, as demonstrated by the New Deal response to the Great Depression. The stock market crash occurred on Black Thursday, October 24, 1929, marking the beginning of the Great Depression. However, President Herbert Hoover’s administration (1929-1933) pursued limited federal intervention, primarily advocating voluntary efforts and some modest relief programs. The major federal policy response only began with Franklin D. Roosevelt’s inauguration on March 4, 1933 — approximately 3.5 years after the crisis began. The First New Deal implemented numerous emergency relief programs during Roosevelt’s “First Hundred Days” (March-June 1933), including: the Federal Emergency Relief Administration (FERA), the Civilian Conservation Corps (CCC), the Tennessee Valley Authority (TVA), the Agricultural Adjustment Administration (AAA), the National Industrial Recovery Act, and banking reforms. However, the most comprehensive and lasting reforms came during the Second New Deal (1934-1936). The Social Security Act was signed into law on August 14, 1935, nearly 6 years after the stock market crash. Moreover, monthly Social Security benefit payments were originally scheduled to begin in 1942 (later moved up to 1940 by the 1939 amendments), meaning the full implementation would have been 12-13 years after the crisis began. The Works Progress Administration (WPA), the largest New Deal agency, was created in 1935 (~6 years after crisis onset). The National Labor Relations Act was passed in 1935. Despite these efforts, unemployment remained severe throughout the 1930s, only falling significantly when World War II mobilization began in 1940-1941. Thus, even with a president strongly committed to federal intervention and with Congressional support, comprehensive economic reform took 5-7 years to fully implement. The $T_{\text{UBI}} - T_{\text{crisis}} > 5$ years estimate in the article is conservative and well-supported by this historical precedent, particularly given that: (1) UBI would be more politically contentious than Social Security was in the 1930s, (2) modern political polarization would likely slow legislative action, and (3) the infrastructure for administering UBI at scale would need to be built from scratch. Sources: FDR Presidential Library “Great Depression Facts”; Social Security Administration “Social Security: A Program and Policy History”; History.com “New Deal”; Social Welfare History Project “The New Deal”; Wikipedia “Great Depression in the United States”; multiple historical sources. ↩︎

  3. Gosplan (State Planning Committee): The Gosudarstvenny Planovyi Komitet (State Planning Committee) was the central economic planning agency of the Soviet Union, established in February 1921 and remaining operational until the USSR’s dissolution in 1991. Gosplan was responsible for creating and administering the Soviet Union’s Five-Year Plans, translating Communist Party economic objectives into specific production targets, resource allocations, and distribution plans. The agency employed thousands of specialists who collected data from across the Soviet economy to determine production goals for every sector. Initially an advisory body with just 34 staff members, Gosplan evolved into a ministerial-level authority coordinating economic planning across 15 Soviet republics. The agency developed “control numbers” (контрольные цифры) starting in 1925, which were annual economic plans that set targets for industries ranging from steel production to grain harvesting. By the 1960s, Gosplan operated one of the largest civilian computer centers in Eastern Europe (the Main Computer Center), attempting to use early computing technology to manage economic calculations. However, the agency faced fundamental challenges including unrealistic production quotas, information asymmetries between central planners and local producers, inflexible responses to demand changes, and systematic inefficiencies that contributed to economic shortages and resource misallocation. The Soviet planning system’s failure is frequently cited in economic literature as evidence of the “knowledge problem” — the impossibility of centrally aggregating the distributed, tacit knowledge necessary for efficient economic coordination. Despite massive investments in data collection and computing infrastructure, Gosplan could process perhaps 10⁶ decisions per day, far short of the 10¹² to 10¹⁶ operations per cycle required for rational planning of a modern economy. Sources: Wikipedia “Gosplan”; Britannica “Gosplan”; Encyclopedia.com “Gosplan”; Springer “On the History of Gosplan, the Main Computer Center”; multiple historical sources. ↩︎

  4. Hayek’s Knowledge Problem: Friedrich Hayek’s knowledge problem, most famously articulated in his 1945 essay “The Use of Knowledge in Society,” argues that the information required for rational economic planning is inherently distributed among individual actors and thus unavoidably exists outside the knowledge of any central authority. Hayek observed that “practically every individual has some advantage over all others because he possesses unique information of which beneficial use might be made, but of which use can be made only if the decisions depending on it are left to him or are made with his active cooperation.” The knowledge problem has several key dimensions: (1) Distributed Knowledge: Relevant economic information exists in dispersed form across millions of individuals, each possessing unique bits of knowledge about local conditions, personal preferences, and specific circumstances. (2) Tacit Knowledge: Much economically relevant knowledge is tacit — practical, experiential know-how that cannot be easily articulated or transmitted through formal communication. Michael Polanyi’s formulation “We know more than we can tell” captures this concept. Examples include a manager’s intuitive sense of market conditions, a chef’s expertise in combining flavors, or a worker’s familiarity with equipment quirks learned through years of experience. (3) Temporal Dynamics: The passage of time continuously changes relevant information. Even if a complete snapshot of all knowledge could somehow be captured, by the time it was processed and used for planning, that knowledge would be outdated. Hayek emphasized we need decentralization because “only thus can we insure that the knowledge of the particular circumstances of time and place will be promptly used.” (4) Statistical Aggregation Limitations: Hayek argued that statistical aggregates cannot accurately account for the universe of local knowledge: “The continuous flow of goods and services is maintained by constant deliberate adjustments, by new dispositions made every day in the light of circumstances not known the day before, by B stepping in at once when A fails to deliver.” Hayek’s solution was the price mechanism in free markets, which aggregates distributed information through decentralized decision-making. However, the same principles that challenged Soviet central planning also challenge any centralized coordination system — including human bureaucracies attempting to plan large economies. Sources: Hayek “The Use of Knowledge in Society” (1945); Wikipedia “Local knowledge problem”; Austrian Institute “F.A. Hayek on the Discovery, Use, and Transmission of Knowledge”; multiple academic sources on Austrian economics. ↩︎

  5. Amazon’s Supply Chain Optimization Technologies (SCOT): Amazon’s SCOT system represents one of the world’s largest algorithmic decision-making platforms for supply chain optimization, using machine learning to manage procurement, inventory placement, and demand forecasting for over 400 million products daily across Amazon’s global fulfillment network. Introduced in its modern form following the Multi-Echelon Inventory Planning version 2 (MEPv2) redesign starting in 2016, SCOT employs deep learning, transformer technology (the same architecture powering large language models), and operations research to optimize “what products to buy, how much to buy, where to place them” and related supply chain decisions. When Amazon introduced deep learning to SCOT roughly a decade ago, forecasting accuracy improved 15-fold in just two years. In 2020, Amazon implemented a unified forecasting model using transformer technology, allowing the system to handle the complexity of coordinating inventory placement and shipment schedules across millions of sellers worldwide. SCOT makes what Amazon describes as “billions of decisions every day” to position products optimally across its fulfillment network — determining which of Amazon’s 200+ fulfillment centers should stock which products based on predicted regional demand, delivery time commitments, and supply chain constraints. The system handles demand forecasting, buying systems (determining purchase quantities from suppliers), placement systems (determining optimal warehouse locations), and real-time route optimization for Amazon’s delivery fleet (coordinating 750,000+ mobile robots and delivery vehicles). SCOT’s success demonstrates that machine learning systems can aggregate and act on distributed economic information that would be impossible for human planners to process, handling optimization problems with “millions of potential decision permutations” across multiple competing objectives (cost, speed, carbon emissions, etc.). Sources: Amazon Science “reMARS revisited: Amazon’s supply chain optimization”; Amazon blog “5 ways Amazon is using AI to improve your holiday shopping”; multiple Amazon technical publications. ↩︎

  6. Walmart’s Real-Time Supply Chain Coordination: Walmart coordinates over 10,500 stores across 20 countries, serving approximately 240 million customers weekly through an integrated supply chain system that combines cross-docking, vendor-managed inventory (VMI), and real-time data sharing. The company operates 210 distribution centers (including 42 massive U.S. regional distribution centers exceeding 1 million square feet each) and maintains a private fleet of 9,000 tractors and 80,000 trailers driving over 1 billion miles annually. Walmart’s coordination relies on several technological systems: (1) Retail Link, Walmart’s proprietary data-sharing platform that provides real-time sales and inventory data to suppliers, enabling vendor-managed inventory where suppliers monitor stock levels and autonomously replenish products; (2) Cross-docking logistics, where supplier trucks unload products at distribution centers that are immediately loaded onto outbound Walmart trucks within approximately 24 hours, minimizing warehousing time; (3) AI and machine learning systems for demand forecasting, analyzing historical sales data, market trends, weather patterns, and local conditions to predict product needs; (4) Self-healing inventory systems using AI to detect stock imbalances and automatically redirect products to where they’re needed before shortages appear in stores; (5) Agentic AI tools allowing associates to ask natural language questions and receive instant insights and recommendations. Walmart has invested heavily in supply chain automation, with plans for roughly two-thirds of stores to be serviced by automation systems by 2026. The company spent over $11 billion on supply chain transformation from 2019-2020 (representing 72% of strategic capital expenditures during that period). This investment has enabled “self-healing” and predictive capabilities where AI manages inventory redistribution before human intervention is needed, demonstrating sophisticated distributed optimization happening in real time across thousands of locations. Sam Walton famously said, “People think we got big by putting big stores in small towns. Really we got big by replacing inventory with information.” Sources: Walmart Corporate blog “Supply Chain Playbook Goes Global” (July 2025); Vector “Walmart’s Supply Chain”; SCMdojo “Walmart Supply Chain Case Study”; multiple Walmart technical publications. ↩︎

  7. Uber’s Dynamic Pricing and Ride Matching: Uber processes millions of ride requests daily using machine learning systems to optimize both supply-demand matching and dynamic pricing across its global network. Uber’s platform operates via event-driven architecture using message brokers (Kafka/Pulsar) for real-time event streaming and stream processors (Flink/Spark Streaming) for real-time geospatial data processing, with distributed NoSQL databases (Cassandra/DynamoDB) for fast state management. The system continuously solves bipartite graph matching problems: pairing riders with drivers while optimizing for estimated time of arrival (ETA), surge pricing levels, and demand-supply balance. Driver Request Events flow to a Matching Engine that uses heuristic-based weighted graph algorithms to pair riders with drivers based on distance, ratings, and other factors. The Surge Pricing Engine adjusts fares dynamically by analyzing real-time demand-supply imbalances in specific geographic zones. Uber uses Long Short-Term Memory (LSTM) networks and deep learning models to predict future market conditions and “unexpected” events before they occur, incorporating factors including weather, historical data, holidays, time of day, traffic, and even global news events. In its Michelangelo machine learning platform (introduced 2017), Uber handles the data flood from approximately 15 million rides per day. Surge pricing operates on 5-10 minute cycles to continuously adjust prices based on local supply-demand dynamics. The system demonstrates successful aggregation of hyperlocal supply and demand signals across complex urban environments — precisely the kind of distributed, real-time information that Hayek argued was impossible to centralize effectively. Uber’s pricing algorithms exemplify sophisticated economic planning operating at scales (millions of decisions per day across hundreds of cities) that far exceed Soviet-era central planning capabilities. Sources: Harvard Digital “Uber knows you: how data optimizes our rides”; DataRoot Labs “Uber and Lyft Dynamic Pricing Algorithms”; SDERay “Technology Behind Uber Real-Time Ride Matching”; multiple Uber engineering blogs. ↩︎

  8. Federated Learning: Federated learning (FL) is a distributed machine learning technique where multiple entities (clients) collaboratively train a model while keeping their data decentralized rather than centrally stored. First popularized by Google in 2016-2017 for applications like improving mobile keyboard predictions, federated learning represents a paradigm shift from traditional centralized machine learning. The process works as follows: (1) A central server initializes a global model and distributes it to participating clients (devices, organizations, or data centers); (2) Each client trains the model locally using its own data, computing model updates (gradients/parameters) but never sharing raw data; (3) Clients send only these model updates back to the central server; (4) The server aggregates the updates (typically using techniques like federated averaging) to improve the global model; (5) The improved global model is redistributed to clients, and the cycle repeats. This approach offers several key advantages: Data Privacy: Raw data never leaves its source, addressing privacy concerns in sensitive domains like healthcare and finance. Data Minimization: Only model parameters are transmitted, reducing bandwidth requirements compared to centralized data collection. Regulatory Compliance: FL aligns with data protection regulations like GDPR by keeping data localized. Scalability: The workload is distributed across devices, reducing central server burden. Personalization: Models can be adapted to local conditions while contributing to global learning. FL faces challenges including: data heterogeneity (non-IID distributions across clients), communication constraints (devices may have limited bandwidth), systems heterogeneity (varying computational capabilities), and new security concerns (model updates can still leak information, requiring additional protections like differential privacy). Real-world applications include: Google’s Gboard keyboard training on user typing patterns; healthcare institutions collaborating on diagnostic models without sharing patient data; financial institutions building fraud detection models without exposing customer information; and autonomous vehicle fleets learning from distributed driving experiences. The concept is particularly relevant to the article’s discussion of Hayek-compliant economic planning: federated learning demonstrates that modern distributed systems can aggregate knowledge and optimize globally while respecting the distributed, private nature of local information — addressing Hayek’s knowledge problem through technology rather than attempting to centralize all information. Sources: Wikipedia “Federated Learning”; Google Research “Distributed differential privacy for federated learning”; Palo Alto Networks “What Is Federated Learning?”; ACM Queue “Federated Learning and Privacy”; multiple academic sources on distributed machine learning. ↩︎

  9. Computational Complexity of Economic Planning: The claim that full economic planning requires $O(n \cdot m \cdot \prod_{i=1}^{k} d_i)$ operations (where $n$ = agents, $m$ = goods, $d_i$ = preference dimensions) translating to $10^{14}$ to $10^{16}$ operations per planning cycle is a simplified order-of-magnitude estimate based on combinatorial complexity arguments. For a modern economy: (1) Agents: ~300 million consumers in the US, plus millions of businesses; (2) Goods/Services: Millions of distinct SKUs across the economy (Amazon alone manages 400 million products); (3) Preference Dimensions: Each agent has preferences across multiple dimensions (quality, price, location, time, etc.), and preferences change dynamically. The computational requirement grows rapidly with these parameters. More precisely, optimal resource allocation problems are NP-hard in general — finding optimal solutions requires exponential time in the worst case. Even approximate solutions using linear programming or other optimization techniques face scalability challenges. Historical context: Gosplan’s Main Computer Center in the 1970s-1980s, despite employing hundreds of specialists and some of the largest civilian computer systems in Eastern Europe, could handle perhaps $10^6$ to $10^7$ significant economic decisions per day. Modern distributed systems like Amazon’s SCOT handle “billions of decisions” (approximately $10^9$ to $10^{10}$) but these are more narrowly scoped supply chain optimizations, not full economy-wide resource allocation. The $10^{25-27}$ FLOPS capability cited for hypothetical ASI systems comes from extrapolations of current AI progress and proposed future computing systems. For reference: The world’s fastest supercomputer in 2024 (El Capitan) achieves approximately $2 \times 10^{18}$ FLOPS (2 exaFLOPS). Human brain estimates range from $10^{15}$ to $10^{18}$ operations per second depending on the model. An ASI operating at $10^{25-27}$ FLOPS would represent a dramatic leap beyond current capabilities. The key point remains valid regardless of exact figures: Modern distributed ML systems ($10^{12}$ operations) vastly exceed Gosplan capabilities ($10^{6}$ decisions/day), and hypothetical ASI systems could potentially handle the full complexity of economic planning in ways that were impossible in the 20th century. Sources: Computer Science complexity theory literature; historical data on Gosplan computing capabilities; Amazon and Walmart technical specifications; various sources on computational economics. ↩︎

This post is licensed under CC BY 4.0 by the author.