Accelerated Computing Atlas

The Machine That Builds Intelligence

Modern AI is manufactured. NVIDIA designs, TSMC manufactures, ODMs assemble, clouds monetize — this atlas maps the whole chain, from power and lithography to inference workloads.

This atlas separates directly sourced facts from inferred relationships and forward-looking assumptions. Every profile carries a confidence label.


In about 60 seconds

How to use this atlas

  1. Start with a question. Twelve featured questions below. Pick the one you actually have.
  2. Follow the value chain. Left to right: ASML → TSMC → HBM & CoWoS → NVIDIA → cloud → AI workloads.
  3. Test a bottleneck scenario. Walk a structural shock through the system. Nine to choose from.
  4. Deep-dive a company or product. 73 companies and 75 products. Open a profile from any chip.
  5. Check the sources. Every claim ties back to a citation. Every profile carries a confidence label.

Featured questions

What do you want to understand?

Twelve questions, three audiences. Each opens a guided answer with sources.

The thesis

An AI model is the visible tip. Beneath it sits a global industrial system: gigawatts of power, EUV lithography, advanced-node wafers, stacked HBM, silicon interposers, NVLink fabrics, CUDA, hyperscaler racks. Each layer constrains the next.

Marginal progress in one layer can unlock outsized gains in the layers above it. Marginal failure in one layer can stall everything above it. Reasoning about AI without reasoning about this stack is reasoning about a screen with no machine behind it.

Power is the hidden scaling law. CUDA is software gravity. HBM is bandwidth, not storage. CoWoS is packaging as architecture.

The spine of the atlas

The value chain

Read it left to right. The default highlight follows the canonical path: ASML → TSMC → HBM & CoWoS → NVIDIA → cloud → AI workloads. Switch modes to rearrange the system around a different question; click any chip to open its profile.
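The canonical path reads as an ordered dependency chain: each stage feeds the next, so a stall upstream propagates downstream. A minimal sketch of that idea (the names and structure here are illustrative, not the atlas's actual data model):

```python
# The canonical value-chain path as an ordered list. Each stage depends
# on the one before it; everything after a stage is downstream of it.
# Illustrative only -- not the atlas's schema.
VALUE_CHAIN = [
    "ASML",          # EUV lithography machines
    "TSMC",          # advanced-node wafer fabrication
    "HBM & CoWoS",   # stacked memory and 2.5D packaging
    "NVIDIA",        # GPU design, NVLink, CUDA
    "Cloud",         # hyperscaler racks and monetization
    "AI workloads",  # training and inference
]

def downstream_of(stage: str) -> list[str]:
    """Everything that sits after `stage` on the canonical path."""
    i = VALUE_CHAIN.index(stage)
    return VALUE_CHAIN[i + 1:]

print(downstream_of("TSMC"))
# → ['HBM & CoWoS', 'NVIDIA', 'Cloud', 'AI workloads']
```

A flat list is the simplest model; the ecosystem view below the chain is where the real many-to-many relationships live.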

The wider field

Ecosystem overview

Where each company, product and constraint sits in the wider accelerated-computing system. The value chain above shows causality; this view shows scope.

What happens if a bottleneck breaks?

Nine shocks against the AI infrastructure system. Open one to see the affected entities, ordered effects, and likely winners and losers. Affected entities also light up across the atlas.
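Walking a shock through the system amounts to traversing a dependency graph from the broken node to everything downstream of it. A hedged sketch of that traversal, with a hypothetical supplier-to-consumer edge map (the atlas's real scenario data is richer):

```python
from collections import deque

# Hypothetical dependency edges: supplier -> consumers.
# Illustrative only; real shocks also have ordered effects and magnitudes.
DEPENDENTS = {
    "ASML": ["TSMC"],
    "TSMC": ["HBM & CoWoS", "NVIDIA"],
    "HBM & CoWoS": ["NVIDIA"],
    "NVIDIA": ["Cloud"],
    "Cloud": ["AI workloads"],
}

def shock(origin: str) -> list[str]:
    """Breadth-first walk: every entity downstream of a broken node."""
    affected: list[str] = []
    seen = {origin}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for consumer in DEPENDENTS.get(node, []):
            if consumer not in seen:
                seen.add(consumer)
                affected.append(consumer)
                queue.append(consumer)
    return affected

print(shock("TSMC"))
# → ['HBM & CoWoS', 'NVIDIA', 'Cloud', 'AI workloads']
```

Breadth-first order matches how the scenarios present effects: first-order impacts before second-order ones.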

Where it breaks

Nine constraints decide how fast AI infrastructure can scale. Industrial chokepoints sit on top — physical limits with multi-year lead times. Strategic constraints sit below — policy and ecosystem dynamics. Frontier risk sits last.

Plainly answered

The questions this atlas answers

Start with the question you actually have. Each answer opens a guided explanation with related companies, products, bottlenecks and sources.


How the system actually works

Seven flows that walk end-to-end through the stack. Each one names the companies, products and concepts that connect to make the whole system function. Click any step to open its profile.

12 companies that explain the stack

If you only learn 12 companies, learn these. Click any card for the full profile.

Pick a learning track

Different readers want different orderings. Pick the track that matches your role.

What's inside

A modern AI product is a stack of subsystems. Open an anatomy to see the parts that compose it.

The stack is moving

Six tracks on the same calendar. Memory, packaging, GPU architecture, data center, software and quantum advance together.

Side by side

Two dropdowns. Two entities. Roles, dependencies and NVIDIA links lined up next to each other.

Sources, assumptions & methodology

Where the claims in this atlas come from, how they are classified, and where the editorial process has flagged its own work.

How this atlas is built

  1. Sourced first. Where a claim is directly stated by an official document, SEC filing, vendor product page or comparable primary source, it is labeled Sourced.
  2. Inferred where reasonable. When a claim follows from multiple sourced facts but isn't itself a direct quote, it is labeled Inferred.
  3. Context, not claim. Industry common knowledge that should not be treated as a hard claim is labeled Market context.
  4. Forward-looking, marked. Roadmap entities (Rubin, Vera CPU, HBM4, quantum hardware) and forward-looking scenarios are labeled Forward-looking.
  5. Self-audited. The editorial audit log below records every claim that was tightened, kept with confidence, or flagged for review.
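The labeling scheme above can be sketched as a small data model: every claim carries exactly one confidence label and its citations. All field and type names here are hypothetical, chosen only to illustrate the methodology:

```python
from dataclasses import dataclass, field
from enum import Enum

class Confidence(Enum):
    """The four confidence labels used throughout the atlas."""
    SOURCED = "Sourced"
    INFERRED = "Inferred"
    MARKET_CONTEXT = "Market context"
    FORWARD_LOOKING = "Forward-looking"

@dataclass
class Claim:
    text: str
    confidence: Confidence
    citations: list[str] = field(default_factory=list)

# A Sourced claim must point back to at least one primary source.
claim = Claim(
    text="NVIDIA designs GPUs; TSMC fabricates them.",
    confidence=Confidence.SOURCED,
    citations=["NVIDIA 10-K"],
)
print(claim.confidence.value)
# → Sourced
```

Making the label a required field, rather than an optional annotation, is what forces every profile in the atlas to declare its confidence.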

Confidence legend

Sourced Inferred from sources Market context Forward-looking

Primary references

Every node, indexed

Filter, search, click through. Same data as the atlas, organized for scanning rather than navigating.

The next AI winners will understand the whole stack.

A model is downstream of a chip, which is downstream of a fab, which is downstream of a lithography machine, which is downstream of a power grid. Every layer compounds. Reasoning across the stack is the work.

Companion projects: The AI Atlas — the foundational papers. The AI Map — the full field as a five-layer stack.

Built by Pugalenthi Magendran. Sourced from NVIDIA disclosures, SEC filings and public industry references.