Trusted Execution Environments (TEE)

Running code that even sysadmins can't see. SGX enclaves, remote attestation, and the cryptographic primitives powering confidential computing.


🎯 What You'll Learn

  • Understand the TEE trust model and threat boundaries
  • Analyze Intel SGX architecture (enclaves, sealing, attestation)
  • Implement remote attestation verification
  • Evaluate TEE limitations and side-channel attacks

The Ultimate Trust Problem

In standard DevOps, you fear bugs. In crypto infrastructure, you fear the operator themselves.

How do you run a service where even the sysadmins can’t see the data inside?

The answer is Trusted Execution Environments (TEE).

Standard VM:       Hypervisor can read all memory
Standard Server:   Root user can read all memory
TEE Enclave:       Even root/hypervisor can't read enclave memory

This technology powers everything from Signal’s private contact discovery to SUAVE’s encrypted block building.


What You’ll Learn

By the end of this lesson, you’ll understand:

  1. TEE fundamentals - What enclaves protect (and don’t protect)
  2. Intel SGX deep dive - The dominant TEE implementation
  3. Remote attestation - Cryptographic proof of what’s running
  4. Practical limitations - Side channels, supply chain, and rollback attacks

The Foundation: What TEEs Protect

A TEE creates an enclave, an isolated execution environment in which:

Protected:
├── Memory contents (encrypted by hardware)
├── CPU registers during execution
├── Code integrity (measured at load time)
└── Secrets sealed to specific enclave identity

NOT Protected:
├── Execution timing (side channels)
├── Memory access patterns (cache attacks)
├── Input/output (must be encrypted by app)
└── Availability (host can kill enclave)

The host OS cannot read enclave memory, even with root access.


The “Aha!” Moment

Here’s the key insight for blockchain applications:

A TEE can run a program that the operator cannot modify or inspect, and anyone can verify this fact cryptographically. This enables “trustless” services run by untrusted parties. A relay can prove it’s running honest code without revealing the transactions it’s processing.

This is how you build censorship-resistant infrastructure operated by potentially malicious actors.


Intel SGX Architecture

Enclave Memory Model

┌─────────────────────────────────────────┐
│              Regular Memory              │  ← OS can read/write
├─────────────────────────────────────────┤
│     Enclave Page Cache (EPC)            │  ← Hardware encrypted
│  ┌─────────────────────────────────┐    │
│  │         Enclave                 │    │
│  │  ┌──────────┐  ┌──────────┐     │    │
│  │  │   Code   │  │   Data   │     │    │
│  │  └──────────┘  └──────────┘     │    │
│  │  ┌──────────┐                   │    │
│  │  │  Secrets │ (sealed keys)     │    │
│  │  └──────────┘                   │    │
│  └─────────────────────────────────┘    │
└─────────────────────────────────────────┘

The Memory Encryption Engine (MEE) encrypts EPC pages with a key that never leaves the CPU.

The MRENCLAVE Identity

Every enclave has a cryptographic identity:

# MRENCLAVE = SHA256 of enclave state at initialization
mrenclave = sha256(
    enclave_code +
    enclave_data +
    memory_layout +
    entry_points
)

# Any change to the code → Different MRENCLAVE
# This is the "fingerprint" of what's running
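
To make that concrete, here's a toy demonstration of the property that matters: change a single byte of anything measured into the enclave and the identity changes completely. This is just the simplified formula above in runnable form; the real measurement is accumulated page by page via the EADD/EEXTEND instructions during enclave load.

import hashlib

def toy_mrenclave(code: bytes, data: bytes, layout: bytes, entry_points: bytes) -> str:
    """Toy model of the MRENCLAVE computation shown above.

    The real measurement is built incrementally by the CPU as pages are
    added to the enclave; this only illustrates that the identity is a
    hash over everything loaded at initialization.
    """
    h = hashlib.sha256()
    for part in (code, data, layout, entry_points):
        h.update(part)
    return h.hexdigest()

build_a = toy_mrenclave(b"\x90\x90\xc3", b"config=1", b"layout", b"entry:0x1000")
build_b = toy_mrenclave(b"\x90\x90\xc3", b"config=2", b"layout", b"entry:0x1000")

print(build_a)
print(build_b)              # one changed byte -> entirely different identity
assert build_a != build_b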

Remote Attestation

How do you verify what’s running inside a remote enclave?

The Attestation Flow

┌──────────┐         ┌──────────┐         ┌─────────┐
│  Client  │         │  Enclave │         │ Intel   │
│(Verifier)│         │ (Prover) │         │ IAS/DCAP│
└────┬─────┘         └────┬─────┘         └────┬────┘
     │  1. Challenge       │                    │
     │────────────────────>│                    │
     │                     │                    │
     │                     │ 2. Request Quote   │
     │                     │    (EREPORT)       │
     │                     │                    │
     │  3. Quote           │                    │
     │<────────────────────│                    │
     │                     │                    │
     │  4. Verify Quote    │                    │
     │─────────────────────────────────────────>│
     │                     │                    │
     │  5. Attestation Result                   │
     │<─────────────────────────────────────────│
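
The report_data field is what ties a quote to a live session: the enclave bakes its freshly generated public key (and typically a hash of the verifier's challenge, for freshness) into the quote, so the key you later encrypt to is provably held by the attested code. Here's a minimal sketch of that binding, assuming a 32-byte session key (e.g., X25519); the quote itself would be produced by the SGX quoting enclave.

import hashlib

def build_report_data(enclave_pubkey: bytes, challenge: bytes) -> bytes:
    """Inside the enclave: fill the 64-byte report_data field.

    First 32 bytes: the enclave's session public key (the layout assumed
    by the verification code later in this lesson). Last 32 bytes: a hash
    of the verifier's challenge, proving the quote is fresh.
    """
    assert len(enclave_pubkey) == 32
    return enclave_pubkey + hashlib.sha256(challenge).digest()

def check_report_data(report_data: bytes, challenge: bytes) -> bytes:
    """Verifier side: confirm freshness, then return the bound public key."""
    assert len(report_data) == 64
    if report_data[32:] != hashlib.sha256(challenge).digest():
        raise ValueError("stale or replayed quote")
    return report_data[:32]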

Quote Structure

struct SGXQuote {
    version: u16,
    sign_type: u16,
    epid_group_id: [u8; 4],
    qe_svn: u16,
    pce_svn: u16,
    xeid: u32,
    basename: [u8; 32],
    report_body: ReportBody,
    signature: [u8; 64],     // Simplified: real quotes carry signature_len plus a variable-length signature
}

struct ReportBody {
    cpu_svn: [u8; 16],
    misc_select: u32,
    attributes: Attributes,
    mr_enclave: [u8; 32],    // Hash of enclave code
    mr_signer: [u8; 32],     // Hash of signing key
    isv_prod_id: u16,
    isv_svn: u16,
    report_data: [u8; 64],   // Custom data (e.g., public key)
}
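
Before you can check anything, you need to pull these fields out of the raw quote bytes. The sketch below parses only the simplified layout shown above; the offsets are illustrative assumptions (the production sgx_quote_t in sgx_quote.h has reserved fields and a variable-length signature), but it is enough to drive the verification code that follows.

import struct
from dataclasses import dataclass

@dataclass
class ReportBody:
    cpu_svn: bytes
    misc_select: int
    mr_enclave: bytes
    mr_signer: bytes
    isv_prod_id: int
    isv_svn: int
    report_data: bytes

@dataclass
class ParsedQuote:
    version: int
    sign_type: int
    report_body: ReportBody

def parse_sgx_quote(quote: bytes) -> ParsedQuote:
    """Parse the simplified quote layout shown above.

    Assumed offsets: 48-byte header (version .. basename), then a report
    body laid out as cpu_svn(16) | misc_select(4) | attributes(16) |
    mr_enclave(32) | mr_signer(32) | isv_prod_id(2) | isv_svn(2) |
    report_data(64). Real quotes add reserved fields; check the SDK
    headers before relying on these offsets.
    """
    version, sign_type = struct.unpack_from("<HH", quote, 0)
    body = quote[48:]
    (misc_select,) = struct.unpack_from("<I", body, 16)
    report_body = ReportBody(
        cpu_svn=body[0:16],
        misc_select=misc_select,
        mr_enclave=body[36:68],
        mr_signer=body[68:100],
        isv_prod_id=struct.unpack_from("<H", body, 100)[0],
        isv_svn=struct.unpack_from("<H", body, 102)[0],
        report_data=body[104:168],
    )
    return ParsedQuote(version, sign_type, report_body)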

Verification Code

def verify_attestation(quote: bytes, expected_mrenclave: bytes) -> bool:
    """Verify an SGX quote matches expected enclave."""
    
    # 1. Parse quote structure
    parsed = parse_sgx_quote(quote)
    
    # 2. Verify Intel's signature (ECDSA with Intel's public key)
    if not verify_intel_signature(parsed):
        return False
    
    # 3. Check MRENCLAVE matches expected code
    if parsed.report_body.mr_enclave != expected_mrenclave:
        return False
    
    # 4. Check security version numbers
    if parsed.report_body.isv_svn < MINIMUM_SVN:
        return False
    
    # 5. Extract enclave's public key from report_data
    enclave_pubkey = parsed.report_body.report_data[:32]
    
    return True

Sealing: Persistent Secrets

Enclaves can encrypt data that only they can decrypt later:

// Inside enclave: seal data to this specific enclave
sgx_status_t seal_data(
    uint8_t *data,
    uint32_t data_size,
    sgx_sealed_data_t *sealed_blob
) {
    uint32_t sealed_size = sgx_calc_sealed_data_size(0, data_size);

    return sgx_seal_data(
        0, NULL,              // No additional authenticated data
        data_size, data,      // Data to seal
        sealed_size,          // Size of the sealed output buffer
        sealed_blob           // Output sealed blob
    );
}

// Sealed data includes:
// - KEY_ID derived from MRENCLAVE or MRSIGNER
// - IV and MAC for authenticated encryption
// - Encrypted payload

Sealing modes:

  • MRENCLAVE: Only this exact code can unseal
  • MRSIGNER: Any code signed by the same key can unseal
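
The practical difference is which identity feeds the key derivation: a key bound to MRENCLAVE changes whenever the code changes, so an upgraded enclave can no longer unseal old data, while a key bound to MRSIGNER survives upgrades signed by the same vendor key. Here's a toy model of that derivation (illustrative only; the real key comes from the EGETKEY instruction, which also mixes in CPU SVN, OWNER_EPOCH, and other platform state).

import hmac, hashlib

def toy_seal_key(cpu_root_secret: bytes, policy: str,
                 mrenclave: bytes, mrsigner: bytes) -> bytes:
    """Toy sealing-key derivation: bind the key either to the exact
    enclave measurement or to the signer identity."""
    identity = mrenclave if policy == "MRENCLAVE" else mrsigner
    return hmac.new(cpu_root_secret, identity, hashlib.sha256).digest()

root = b"per-cpu secret that never leaves the die"
signer = hashlib.sha256(b"vendor signing key").digest()
v1 = hashlib.sha256(b"enclave build v1").digest()
v2 = hashlib.sha256(b"enclave build v2").digest()

# MRENCLAVE policy: the upgraded build derives a different key -> cannot unseal v1 data
assert toy_seal_key(root, "MRENCLAVE", v1, signer) != toy_seal_key(root, "MRENCLAVE", v2, signer)

# MRSIGNER policy: both builds derive the same key -> sealed data survives upgrades
assert toy_seal_key(root, "MRSIGNER", v1, signer) == toy_seal_key(root, "MRSIGNER", v2, signer)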

Limitations and Attacks

Side-Channel Attacks

TEEs protect data at rest and in use, but information can still leak through execution timing and memory access patterns:

# Cache-timing (prime+probe) attack, sketched in pseudocode
# The attacker measures how long it takes to re-access cache lines it primed;
# prime_cache, trigger_victim, rdtsc, and cache_lines stand in for low-level
# primitives that would be written in C/assembly in a real attack

def timing_attack(victim_enclave):
    # 1. Fill cache with known data
    prime_cache()
    
    # 2. Trigger victim enclave execution
    trigger_victim()
    
    # 3. Measure which cache lines were evicted
    for line in cache_lines:
        start = rdtsc()
        access(line)
        elapsed = rdtsc() - start
        
        if elapsed > THRESHOLD:
            print(f"Enclave accessed line {line}")

Mitigations:

  • Constant-time code (no data-dependent branches) - see the sketch after this list
  • Cache partitioning (Intel CAT)
  • Oblivious RAM (hides access patterns)
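
Constant-time code is the mitigation application developers control directly: the comparison below touches every byte regardless of where the first mismatch occurs, so execution time no longer depends on the secret. In Python, hmac.compare_digest gives you the same guarantee; the manual version just shows the idea.

import hmac

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """Compare two byte strings without early exit: the loop always runs
    over every byte, so timing does not reveal the first mismatch."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

# The standard library provides the same property:
assert constant_time_equal(b"secret", b"secret") == hmac.compare_digest(b"secret", b"secret")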

Rollback Attacks

The host can restore old sealed state:

Attack:
1. Enclave processes transaction T1, seals state S1
2. Enclave processes transaction T2, seals state S2
3. Attacker restores S1
4. Enclave unseals S1, "forgets" T2

Defense:
- Monotonic counters (limited availability)
- External state commitments (blockchain anchoring)
- Threshold schemes (multiple enclaves must agree)
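
Here's a minimal sketch of the monotonic-counter defense, simulated in Python with toy sealing (a real enclave would use sgx_seal_data and a platform or externally anchored counter): the key idea is that unsealing refuses any state written at a counter value older than the current trusted one.

import json, hmac, hashlib

SEAL_KEY = b"toy sealing key (derived inside the enclave in reality)"

def seal_state(state: dict, counter_value: int) -> bytes:
    """Toy sealing: bind the state to the counter value at write time."""
    payload = json.dumps({"state": state, "counter": counter_value}).encode()
    mac = hmac.new(SEAL_KEY, payload, hashlib.sha256).digest()
    return mac + payload

def unseal_state(blob: bytes, trusted_counter: int) -> dict:
    """Reject sealed blobs written before the current trusted monotonic
    counter value (rollback detection)."""
    mac, payload = blob[:32], blob[32:]
    if not hmac.compare_digest(mac, hmac.new(SEAL_KEY, payload, hashlib.sha256).digest()):
        raise ValueError("sealed blob tampered with")
    record = json.loads(payload)
    if record["counter"] < trusted_counter:
        raise ValueError("rollback detected: sealed state predates trusted counter")
    return record["state"]

# After processing T2 the trusted counter is 2, so a restored S1 blob is rejected.
s1 = seal_state({"last_tx": "T1"}, counter_value=1)
s2 = seal_state({"last_tx": "T2"}, counter_value=2)
unseal_state(s2, trusted_counter=2)      # OK
# unseal_state(s1, trusted_counter=2)    # raises: rollback detected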

TEE in Blockchain Applications

Use Case: Private Transaction Relay

class PrivateRelay:
    """Relay that can't see transaction contents."""
    
    def __init__(self):
        # Generate keypair inside enclave
        self.enclave_pubkey = generate_in_enclave()
    
    def submit_transaction(self, encrypted_tx: bytes):
        # Decrypt inside enclave
        # Execute simulation inside enclave
        # Re-encrypt result inside enclave
        
        # Neither the operator nor any relay code running
        # outside the enclave ever sees the plaintext transaction
        pass
    
    def get_attestation(self) -> bytes:
        """Prove we're running honest code."""
        return generate_quote(self.enclave_pubkey)
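
From the client's side, attestation and the relay fit together like this: fetch the quote, verify it against the MRENCLAVE of the relay code you audited, extract the public key bound into report_data, and only then encrypt your transaction to that key. A sketch using the helpers from earlier in this lesson, plus a hypothetical encrypt_to hybrid-encryption helper:

def submit_privately(relay: PrivateRelay, raw_tx: bytes,
                     expected_mrenclave: bytes) -> None:
    # 1. Ask the relay to prove what it is running
    quote = relay.get_attestation()

    # 2. Verify the quote (Intel signature, MRENCLAVE, SVN); see verify_attestation above
    if not verify_attestation(quote, expected_mrenclave):
        raise RuntimeError("relay is not running the expected enclave")

    # 3. Extract the enclave public key bound into report_data
    enclave_pubkey = parse_sgx_quote(quote).report_body.report_data[:32]

    # 4. Encrypt to that key, so only the attested enclave can read the tx
    encrypted_tx = encrypt_to(enclave_pubkey, raw_tx)   # hypothetical helper

    relay.submit_transaction(encrypted_tx)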

Use Case: Fair Ordering (SUAVE)

Without TEE:  Builder sees all transactions, can frontrun
With TEE:     Builder's enclave processes encrypted txs
              → Operator cannot read the txs or extract MEV outside the attested code
              → Anyone can verify the ordering logic via remote attestation

Practice Exercises

Exercise 1: Attestation Verification

# Given an SGX quote and expected MRENCLAVE:
quote_hex = "030002..."
expected_mrenclave = "a1b2c3..."

# Write code to:
# 1. Parse the quote
# 2. Verify the MRENCLAVE matches
# 3. Extract the enclave's public key

Exercise 2: Side-Channel Analysis

Consider this enclave code:

def check_password(input, secret):
    for i in range(len(secret)):
        if input[i] != secret[i]:
            return False
    return True

1. What side-channel vulnerability exists?
2. How would an attacker exploit it?
3. How would you fix it?

Exercise 3: Reproducible Builds

For SUAVE integration:
1. How do you ensure the code deployed matches GitHub source?
2. How do you handle dependencies?
3. What's the attestation verification flow in CI/CD?

Key Takeaways

  1. TEEs enable trustless services - Run code operators can’t modify or inspect
  2. Remote attestation is cryptographic proof - Verify exactly what’s running
  3. Side channels are real - TEEs protect data, not access patterns
  4. Key for blockchain - Enables private mempools, fair ordering, encrypted block building

What’s Next?

🎯 Continue learning: SUAVE Architecture - How Flashbots uses TEEs

🔬 Deep dive: Security Architecture for Trading Firms

Now you can design systems where even the operator can’t cheat. 🔐
