What can $500bn actually do?


DISCLAIMER: This post may contain minor factual inaccuracies and/or statements that could be considered stupid, but it is (for better or worse) 100% written by a human.


On March 31, 1951 the first commercially available computer system, a UNIVAC I, was delivered to the U.S. Census Bureau. Its deployment represented the start of a transition that promised to revolutionise every aspect of the global economy. Looking back 75 years later, it’s clear that this was a reasonable promise.

Observationally, though, we are only now - a literal lifetime later - finally starting to approach an asymptote in the economic efficiency gains to be made via digitisation. Looking back, the steps that got us to this point form a kind of multi-step S-curve, with the following distinct waves of adoption (interspersed with periods of stalled growth):

  • 1951–1965 -> Human computers are replaced by UNIVAC & IBM systems (governments and large companies)
  • 1960–1975 -> Calculation-heavy, but not logically complex systems are moved to mainframes (banking & finance)
  • 1975–2000 -> Complex information systems are digitised, allowing for greater operational complexity and organisational scaling (large corporations)
  • 2000–2020 -> SaaS companies and hyperscalers bring widespread digitisation to consumers & SMEs
  • 2020–???? -> Lower costs and western market saturation push technology to developing economies (consumers & SMEs digitise before governments in some regions)

That puts approximately 70 years between the introduction of the first commercial computer systems and finally observing what looks to be some kind of market saturation. In the interest of gleaning something from this to apply to our current situation, let’s examine what influenced this pattern.


What influenced the spread of digitisation?

Computational limitations

Simply put: some advancements had to wait until the technology caught up.

Paradigmatic limitations

Some advances had to wait not for faster hardware, but for new ways of thinking about software. For example, object-oriented programming made it feasible to model complex systems. A mature, standardised web made it possible to build mega-scale SaaS products and 1bn+ user social networks.

Paradigmatic advancements are often initially blocked by computational limitations, but are not guaranteed by computational advancements.

Diffused knowledge & innovation

For organisations to adopt new technologies and yield the promised benefits, individuals must be embedded in the right place, with the right knowledge and, crucially, the right incentives.

Public sentiment

If those who steward technology in the eyes of the public are seen to misuse it, the public grows understandably concerned and mistrustful. This slows adoption and can lead to tightened regulation.

Widespread awareness/understanding

Widespread awareness and understanding are necessary for a technology to diffuse into consumer markets and be adopted as a day-to-day professional tool. Both are strongly connected to public sentiment and concerns.

Regulatory & legislative restrictions

GDPR is the obvious example here, but regulations in areas like consumer banking have arguably been more impactful (though they seem to be easing somewhat now).

Limitations in public infrastructure

No consumer-facing hyperscaler (Facebook, Google, Netflix, etc.) could exist without widespread home broadband access, for example.

Energy demands

This is an entirely new type of challenge that’s emerged in the last two or three years as a result of the immense scale of recent AI infrastructure investment.

If datacenter buildout continues to grow at the current rate, its electricity demands will soon outpace the new energy capacity brought online annually. To say this is a hard solve is an understatement.
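To make the shape of this problem concrete, here is a back-of-envelope sketch. All figures below (current demand, growth rate, annual capacity additions) are purely hypothetical assumptions for illustration, not sourced data; the point is only that compounding demand growth eventually outpaces roughly fixed annual capacity additions.

```python
# Back-of-envelope sketch with HYPOTHETICAL numbers (not sourced data):
# if datacentre electricity demand compounds annually while new generating
# capacity is added at a roughly fixed amount per year, the annual
# *increase* in demand eventually exceeds the annual additions.

demand_twh = 400.0        # hypothetical current datacentre demand (TWh/year)
growth_rate = 0.15        # hypothetical 15% annual demand growth
new_capacity_twh = 120.0  # hypothetical new generation added per year (TWh/year)

year = 0
while demand_twh * growth_rate <= new_capacity_twh:
    demand_twh *= 1 + growth_rate
    year += 1

print(f"Under these assumptions, the annual increase in demand "
      f"({demand_twh * growth_rate:.0f} TWh) exceeds annual new capacity "
      f"({new_capacity_twh:.0f} TWh) after ~{year} years.")
```

With these (again, made-up) inputs the crossover arrives in only a handful of years; because the demand side compounds and the supply side is roughly linear, the crossover is a matter of when, not if.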


How does this apply to AI Investment?

Well, the way I see it, some of these limitations can be addressed effectively with capital investment alone (albeit with diminishing returns). This includes:

  • Computational limitations

  • Widespread awareness/understanding

  • Regulatory & legislative restrictions

  • Paradigmatic limitations

The issue is that some of the factors outlined above cannot be effectively influenced by injections of capital:

  • Diffused knowledge & innovation
    • A note on Forward Deployed Engineers
  • Public sentiment
  • Energy demands
    • For the reasons outlined in the previous section, this is a challenging problem to overcome. It is also not something that Silicon Valley is particularly well-suited to solving.

What does all of this mean? (my opinions)

  • Companies are trying to solve issues that don’t scale with capital investment, with MASSIVE capital investment.
  • We’re currently in an environment where misguided and unsustainable investment into AI infrastructure buildout is widespread.
  • We are in a bubble. It will burst. I don’t claim to know when, to be fair, but I’m not above having a guess for the craic - let’s say within 18 months.
  • The quiet adoption of private credit financing by the hyperscalers will widen the fallout.
  • Companies that don’t over-capitalise now and have solid fundamentals will have an opportunity to pick up the pieces after the crash.

NOTE: This all works from the assumption that AGI isn’t achieved in the next 5–10 years.