# AWS Nitro System
Notes from AWS Apprenticeship — November 2025. The hardware stack underneath EC2.
## Ultra-Short Summary
Nitro is AWS's custom hardware + hypervisor system that replaced the old Xen-based virtualisation. Instead of running virtualisation in software on the main CPU, AWS built dedicated hardware cards (Nitro Cards) that offload networking, storage, and security functions, freeing the main CPU almost entirely for the customer's workload.
## The Problem Nitro Solved
Old virtualisation (Xen):
Physical Server CPU
├── Hypervisor (manages VMs) — uses CPU cycles
├── Network handling — uses CPU cycles
├── EBS I/O handling — uses CPU cycles
└── Customer VM — gets the leftover CPU
With Nitro:
Physical Server CPU
└── Customer VM — gets nearly ALL the CPU
Nitro Card (separate dedicated hardware)
├── Hypervisor logic
├── Network I/O
└── EBS I/O
Result: better performance, more consistent latency, and stronger security isolation, because the hypervisor isn't running on the same CPU as the customer's code.
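The difference can be sketched with a toy model (the overhead numbers below are illustrative, not AWS measurements): if the hypervisor, networking, and EBS handling each consume a fixed share of the main CPU, offloading them to a Nitro Card returns that share to the customer VM.

```python
# Toy model of how much of the main CPU the customer VM gets.
# Overhead fractions are made-up illustrative values, not AWS data.
def customer_cpu_share(overheads_on_main_cpu):
    """Fraction of the main CPU left for the customer workload."""
    return 1.0 - sum(overheads_on_main_cpu)

# Xen-style: hypervisor, network, and EBS I/O all burn main-CPU cycles.
xen = customer_cpu_share([0.05, 0.10, 0.10])  # hypervisor, net, EBS

# Nitro-style: those tasks run on the Nitro Card, off the main CPU.
nitro = customer_cpu_share([])

print(f"Xen-style:   {xen:.0%} of the CPU for the customer")   # 75%
print(f"Nitro-style: {nitro:.0%} of the CPU for the customer")  # 100%
```

The point the model makes: the win isn't that the work disappears, it's that it moves to hardware the customer isn't paying for in CPU cycles.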
## Key Components
| Component | What It Does |
|---|---|
| Nitro Cards | Custom ASICs that handle I/O — networking, EBS storage |
| Nitro Hypervisor | Lightweight hypervisor running on the Nitro chip, not the main CPU |
| Nitro Security Chip | Hardware root of trust — firmware verification, secure boot |
| Graviton CPUs | AWS-designed ARM-based CPUs (T4g, M7g, C7g instance families) |
## Why Bare Metal Instances Exist
Because Nitro offloads I/O and management to dedicated cards, bare metal EC2 instances (*.metal) can run directly on the hardware with no virtualisation layer at all. Use cases:
- Workloads that need direct hardware access (nested virtualisation)
- Licensing models tied to physical cores
- High-performance computing where even small virtualisation overhead matters
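You can find these instance types programmatically: the EC2 `DescribeInstanceTypes` API reports `BareMetal` and `Hypervisor` per type. A sketch filtering a simulated response — the field names follow the EC2 API, but the sample records are made up (real code would page through boto3's `describe_instance_types`):

```python
# Filter instance types for bare metal vs Nitro-virtualised, mimicking
# the shape of an `aws ec2 describe-instance-types` response.
# Sample records are made up for illustration.
sample = [
    {"InstanceType": "m5.metal", "BareMetal": True,  "Hypervisor": None},
    {"InstanceType": "m5.large", "BareMetal": False, "Hypervisor": "nitro"},
    {"InstanceType": "m4.large", "BareMetal": False, "Hypervisor": "xen"},
]

bare_metal = [t["InstanceType"] for t in sample if t["BareMetal"]]
nitro_virt = [t["InstanceType"] for t in sample if t["Hypervisor"] == "nitro"]

print("bare metal:", bare_metal)
print("nitro virtualised:", nitro_virt)
```

The same filter is available on the CLI via `aws ec2 describe-instance-types --filters Name=hypervisor,Values=nitro` or `Name=bare-metal,Values=true`.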
## NVMe Storage
NVMe = Non-Volatile Memory Express — a protocol for SSDs.
- Much faster than SATA/AHCI, which was designed around spinning disks; NVMe was designed for flash
- Lower latency, higher IOPS
- Instance store volumes on EC2 use NVMe
- EBS volumes (including io2) attached to Nitro-based instances are also exposed through the NVMe interface
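You can see this directly on a Nitro-based Linux instance (illustrative; device names vary, and outside EC2 you'll simply see no NVMe devices):

```shell
# On a Nitro-based instance, EBS and instance-store volumes appear as
# NVMe block devices: /dev/nvme0n1, /dev/nvme1n1, ...
# List the NVMe device nodes, or print a note when none are visible
# (e.g. when run outside EC2):
ls /dev/nvme* 2>/dev/null || echo "no NVMe devices visible"
```

With the `nvme-cli` package installed, `sudo nvme list` shows the same devices with model strings identifying them as Amazon EC2 NVMe devices.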
## Mental Model
Old way: one server CPU doing everything (slow, noisy neighbours possible)
Nitro way:
Main CPU → purely customer workload
Nitro Card → handles network, storage, hypervisor
Nitro Security Chip → hardware-level security
It's like having a dedicated co-processor for all the infrastructure work.
## AWS Context
- Nitro is why newer instance types (M5, C5, R5, T3 and later) have better performance
- Graviton = Nitro + ARM — often 20-40% better price/performance than equivalent x86
- Bare metal instances = EC2 with zero virtualisation overhead
- Nitro Security Chip = part of the AWS shared responsibility model (AWS secures the hardware)
## 30-Second Takeaway
- Nitro = custom hardware that offloads hypervisor + I/O from the main CPU
- Result: customer gets near-bare-metal performance on virtualised EC2
- Graviton = AWS's ARM CPU, runs on Nitro, best price/performance
- Bare metal instances = Nitro without even the lightweight hypervisor
## Self-Quiz
- What did Nitro replace? What was the problem with the old approach?
- Why does offloading I/O to a dedicated card improve performance?
- What's the difference between a Nitro Card and a Graviton CPU?
- When would you choose a bare metal EC2 instance over a standard one?
- What is NVMe and where does it appear in AWS storage?
- How does the Nitro Security Chip relate to the AWS Shared Responsibility Model?