Speculative Design Fiction by RM (@nilsedison). Not affiliated with Apple Inc. Inspired by Jacob Miller.

Apple Silicon

MAI

The model is the silicon.

Available starting fall 2027

0.6B
transistors
2nm
TSMC N2P process
42 TOPS
Neural Engine throughput
0.2 TB/s
on-chip fabric bandwidth

Performance that
redefines intelligence.

M·AI doesn't just run models; it becomes them. With model weights etched directly into read-only memory, inference happens at the speed of silicon.

0×
faster inference than M4 Max
vs. GPU-bound model loading
0%
reduction in memory bandwidth
vs. traditional weight loading
0ms
first-token latency
vs. 120ms on M4 Ultra
0W
idle inference power draw
vs. 28W for comparable GPU

Testing conducted by Apple in September 2027 using preproduction M·AI hardware. All comparisons are approximate and based on internal benchmarks.

A new kind of chip

The model doesn't load. It's already there.

For the first time, an AI model's weights are manufactured directly into silicon: not stored in memory, not loaded from disk. They exist as physical circuits, ready at the speed of electricity.

Architecture

Five layers of intelligence, one chip.

M·AI reimagines the system-on-chip as a model-on-chip. The foundational model lives in read-only fabric, while a personal layer adapts in real time, learning your patterns without ever leaving the device.

The Secure Enclave+ ensures your personal fabric is cryptographically isolated, even from Apple.

Explore the architecture
Apple Intelligence Cloud Optional
On-device boundary
Intent Engine Reasoning
Personal Fabric Adaptive
Model ROM Fabric 38.6B params
Unified Memory + Neural Engine Hardware

Your intelligence.
Personally.

Personal Fabric learns your patterns, preferences, and context, adapting the foundation model to you entirely on device.

Day 1
Base Model
The ROM fabric provides general intelligence out of the box: no setup, no training period.
Week 1
Pattern Recognition
Personal Fabric begins learning your routines, communication style, and app preferences.
Month 1
Deep Adaptation
LoRA layers specialize to your behavior. Suggestions become proactive and contextual.
Ongoing
Continuous Learning
The model refines constantly, getting better at being yours without ever sending data off device.
42 MB Personal Fabric

Your personal layer is just 42 MB of on-device LoRA parameters: expressive enough to deeply adapt, small enough to stay entirely on chip.
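As a back-of-envelope check on that figure: the 42 MB comes from this page, while the fp16 precision, 4096-wide projections, and rank-8 adapters are purely illustrative assumptions, not published specs.

```python
# Rough sanity check on the 42 MB Personal Fabric budget.
# Assumptions (not specs): fp16 (2-byte) parameters, LoRA rank 8,
# 4096-wide projection matrices.

FABRIC_BYTES = 42 * 1024 * 1024   # 42 MB Personal Fabric
BYTES_PER_PARAM = 2               # fp16 (assumed)
D, R = 4096, 8                    # hidden width, LoRA rank (assumed)

total_params = FABRIC_BYTES // BYTES_PER_PARAM   # 22,020,096 parameters
params_per_adapter = 2 * D * R                   # A (r x d) plus B (d x r)
print(total_params // params_per_adapter)        # -> 336 adapted matrices
```

Under those assumptions, 42 MB is enough for a few hundred low-rank adapters, which is plausibly "expressive enough to deeply adapt" a large frozen base model.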

Privacy isn't a feature.
It's the architecture.

When the model lives in silicon, your data never needs to leave. Personal Fabric processes everything on device, in a cryptographically sealed enclave.

On-Device

Input processed locally by Neural Engine

Secure Enclave+

Personal Fabric encrypted at rest and in use

ROM Fabric

Immutable model weights in physical silicon

Private Cloud

Only when needed. End-to-end encrypted.

Intelligence, evolved

Your phone knows you. No one else does.

Every interaction makes M·AI more useful: your writing style, your calendar patterns, your creative preferences. All learned locally. All encrypted. All yours.

Contextual Intelligence

It acts before you ask.

The Intent Engine observes your context and surfaces what you need, generated entirely on device through your Personal Fabric.

Context
Calendar
2 PM conflict detected
Messages
Sarah awaiting reply
Location
School pickup in 45 min
M·AI
Intent Engine
Actions
Suggestion
Reschedule to 3:30 PM
Based on your usual pattern when conflicts arise with school pickup.
Accept ›
Draft ready
Reply to Sarah
Project proposal response. Matches your tone and writing style.
Review ›

M·AI across every device.

One architecture, tuned for each form factor. Your Personal Fabric syncs seamlessly.

iPhone
M·AI
Personal intelligence hub
Learn more ›
MacBook Pro
M·AI Pro
Professional powerhouse
Learn more ›
iPad Pro
M·AI
Creative canvas
Learn more ›
Apple Watch
M·AI Nano
On-wrist intelligence
Learn more ›
Vision Pro
M·AI Ultra
Spatial intelligence
Learn more ›

Upgrading to M·AI is instant.

Your Personal Fabric transfers seamlessly between devices. Setup takes seconds; your intelligence follows you.

1

Sign in

Use your Apple Account on any MยทAI device.

2

Transfer

Personal Fabric migrates via encrypted peer-to-peer.

3

Adapt

The model resumes learning, instantly familiar.

M·AI is here.

Available fall 2027.

The Technology

Key innovations behind M·AI.

Silicon

Model ROM Fabric

38.6 billion parameters manufactured directly into silicon using TSMC's N2P process. Model weights exist as physical circuit pathways, not bits in memory.

LoRA

Personal Fabric

Low-rank adaptation layers that continuously learn your patterns. Just 42 MB of parameters, stored in the Secure Enclave+, adapting in real time.
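The low-rank adaptation idea above can be sketched in a few lines of pure Python. The dimensions and weights below are toy illustrations, not M·AI internals: the frozen matrix W stands in for the ROM fabric, and the small A and B matrices stand in for the Personal Fabric.

```python
# Minimal LoRA forward-pass sketch: y = W x + alpha * B (A x).
# W is frozen (the "base model"); only the low-rank A/B pair adapts.

def matvec(m, v):
    """Multiply matrix m (list of rows) by vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def lora_forward(W, A, B, x, alpha=1.0):
    """Base output plus a scaled low-rank personal update."""
    base = matvec(W, x)               # frozen base-model path
    delta = matvec(B, matvec(A, x))   # rank-r adaptation path
    return [b + alpha * d for b, d in zip(base, delta)]

# Toy example: 2-dim model with a rank-1 adapter.
W = [[1.0, 0.0],
     [0.0, 1.0]]       # frozen identity "base weights"
A = [[1.0, 1.0]]       # 1x2: project input down to rank 1
B = [[0.5], [0.5]]     # 2x1: project back up
print(lora_forward(W, A, B, [2.0, 4.0]))  # -> [5.0, 7.0]
```

Because the base path and the adapter path are simply summed, the large weights can stay immutable while all learning happens in the tiny A/B matrices.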

Privacy

Secure Enclave+

A dedicated security processor that cryptographically isolates your Personal Fabric. Even Apple cannot access your learned patterns.

Interface

Intent Engine

Proactive intelligence that observes your context (calendar, messages, location) and acts before you ask. All processing happens on device.

Silicon

Neural Engine 42 TOPS

A dedicated neural processing unit delivering 42 trillion operations per second for inference, training, and real-time adaptation.
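One rough way to read that number against the 38.6B-parameter fabric, assuming the common rule of thumb of about two operations per parameter per generated token for dense inference (the rule of thumb is the assumption; the two figures are from this page):

```python
# Back-of-envelope peak decode throughput.
# Assumption: ~2 ops per parameter per token (one multiply, one add).

TOPS = 42e12      # Neural Engine throughput, ops/s (from this page)
PARAMS = 38.6e9   # Model ROM Fabric parameters (from this page)

ops_per_token = 2 * PARAMS            # ~77.2 billion ops per token
peak_tokens_per_s = TOPS / ops_per_token
print(round(peak_tokens_per_s))       # -> 544 tokens/s, theoretical peak
```

Real throughput would land well below this ceiling once activation math, memory traffic, and the Personal Fabric path are counted; the point is only that the stated compute is in the right range for the stated model size.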

LoRA

Fabric Sync

Your Personal Fabric syncs between M·AI devices using end-to-end encrypted peer-to-peer transfer. Your intelligence follows you everywhere.