OpenAI is building an agent-first smartphone, not an app phone
OpenAI is reportedly building a smartphone designed around AI agents instead of traditional apps. The device would continuously read user context and execute tasks directly.

Agent phone
In 2007, Steve Jobs redefined computing's interaction model with the iPhone. Mouse and keyboard gave way to touch, apps, and notifications. That model has held mobile computing for 18 years.
Cracks are starting to show.
The Information reported this week that OpenAI is building a smartphone designed around AI agents rather than traditional apps. The device continuously reads context, takes intent in voice or text, translates that intent into a task, and executes it. Targeted launch: late 2027.
The premise is simple: the open-app, find-the-menu, tap-the-button steps disappear.
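That reported loop — context in, intent in, task out — can be sketched in a few lines. This is a toy illustration only: the keyword matcher stands in for whatever LLM the device would actually run, and every name here (`Context`, `interpret`, `execute`) is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Ambient signals the device reportedly tracks continuously."""
    location: str
    time: str

def interpret(utterance: str, ctx: Context) -> dict:
    # Toy stand-in for the on-device model: map a spoken/typed
    # intent plus context to a structured task.
    if "coffee" in utterance.lower():
        return {"action": "order", "item": "coffee", "near": ctx.location}
    return {"action": "clarify", "prompt": "What would you like to do?"}

def execute(task: dict) -> str:
    # The agent acts directly; no app is opened, no menu is tapped.
    if task["action"] == "order":
        return f"Ordered {task['item']} near {task['near']}"
    return task["prompt"]

ctx = Context(location="downtown", time="08:30")
print(execute(interpret("Get me a coffee", ctx)))
```

The point of the sketch is the shape, not the logic: the user never selects an app; the agent resolves intent against context and acts.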
Sam Altman (OpenAI CEO): "Apps are the wrong abstraction for an agent age." Jony Ive (former Apple chief designer, OpenAI device collaborator): "Hardware should disappear into the conversation."
Who's involved — OpenAI, Apple, the user
For OpenAI this is the lock-in play. ChatGPT lives on web and mobile. If Apple Intelligence gets serious, users could drift. A first-party device closes that exit.
For Apple this is the first credible challenge to iPhone in 18 years. Short-term share won't move, but if agent-first wins users, Apple has to redesign at the OS level. Apple Intelligence in iOS 18-19 nibbles at the direction; OpenAI's bet is more aggressive.
For users the choice opens over 12-18 months. iPhone + Apple Intelligence vs OpenAI device + ChatGPT full stack. Few will replace iPhone in the short term. Power users may carry the OpenAI device as a second device first.
On X, voice-computing analyst Brian Roemmele framed it as "the first true voice-first computer." His argument: LLMs finally fix the accuracy, context, and privacy problems that voice interfaces have always had.
The numbers (per reporting)
| Spec | OpenAI device (reported) | iPhone 17 Pro | Pixel 10 Pro |
|---|---|---|---|
| Form factor | Compact, screenless + optional secondary screen | 6.3" OLED | 6.7" OLED |
| Primary interaction | Voice + camera + haptic | Touch + voice assist | Touch + voice assist |
| OS | OpenAI proprietary (agent-first) | iOS 19 | Android 16 |
| Chip | OpenAI-designed (TSMC 3nm) | Apple A19 Pro | Tensor G6 |
| Price (est.) | $400-600 (bundled with subscription) | $1,199 | $999 |
| Launch | Late 2027 | Sept 2025 | Oct 2025 |
The screenless compact form learns from Humane AI Pin's 2024 failure. Optional secondary screen as a modular add-on.
The chip is reportedly being built directly with TSMC. New architecture optimized for inference, not GPU-style training. Independent of NVIDIA and AMD.
Pricing leans on bundling — $400-600 with a ChatGPT Plus subscription ($20/mo). Carrier-subsidy negotiations underway.
Wins and losses
For OpenAI it's lock-in plus a new revenue line. Hardware revenue itself matters less than the subscription anchor for ChatGPT.
For Jony Ive it's the first meaningful device project since Apple. LoveFrom is reportedly involved on equity or revenue-share terms, not flat consulting.
For users — power users, voice-computing fans, AI early adopters — a new computing experience. General consumers leaving 18 years of app-touch muscle memory is a slow ask.
For Apple shareholders: one to monitor. iPhone revenue moves only if the OpenAI device sells in the 100M+ range.
Past cycles — new device categories
Humane AI Pin, 2024. Screenless wearable, real interest at launch, broke on slow response and weak accuracy. Effectively dead by year-end.
Rabbit R1, 2024. Hand-held AI companion. 300K preorders, broke on the "this is just a ChatGPT app" critique.
Google Glass, 2013. AR wearable. Privacy and social-acceptance walls. Pivoted to enterprise.
Apple Watch, 2015. The success template. Three things made it work: complementary to iPhone, clear use case (health), strong brand. OpenAI's device has to prove all three.
Pattern: new device categories require (1) clear use case, (2) complement to existing devices, (3) coherent UX. Whether OpenAI clears all three is the open question.
Counter-moves
Apple steps up Apple Intelligence. iOS 19 (expected Sept 2026) reportedly brings full LLM Siri, larger Apple models, and broader external-LLM choice (ChatGPT, Gemini, Claude).
Google answers via Pixel + Gemini integration. From Pixel 10 Pro on, Gemini 3.1 Ultra is at the OS layer.
Samsung, Xiaomi, and other Android OEMs ride Google's strategy. Differentiation through API integrations rather than building their own AI device.
Meta leans into Ray-Ban Meta Glasses successor and Quest VR with LLM integration. Skips the phone.
Skeptics, by name
Benedict Evans (formerly Andreessen Horowitz) on his blog — hardware is a different game. Software/LLM strength doesn't translate to device manufacturing, distribution, and service.
Marques Brownlee (MKBHD) cites Humane and Rabbit. AI devices are likely complement-not-replace for phones.
Both grant OpenAI the capital and Ive's design weight. The question is real-world UX and mass acceptance.
Stakes
- Wins: OpenAI — lock-in plus a new revenue line. Jony Ive / LoveFrom — first major post-Apple device. TSMC — new device-chip volume.
- Loses: Apple — first credible threat in 18 years; short-term immaterial, long-term to monitor. Humane / Rabbit-style AI device startups — direct OpenAI entry compresses room.
- Watching: US/EU regulators — agent-first OS data handling. Carriers — subsidy negotiations. Samsung/LG — entering the AI device category.
What changes
Devs: a new OS and platform arrives. Agent-first SDK/API design diverges from iOS and Android. SDK beta access likely opens around the 2027 launch.
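What "agent-first SDK design" could mean in practice: instead of shipping an app with screens, a developer might register callable capabilities that the device's agent dispatches to by name. Everything below is hypothetical — no OpenAI device SDK exists, and `skill`, `REGISTRY`, and the capability names are invented for illustration.

```python
from typing import Callable, Dict

# Hypothetical capability registry; on a real agent-first OS this
# registration would presumably happen through a platform SDK.
REGISTRY: Dict[str, Callable[..., str]] = {}

def skill(name: str):
    """Decorator registering a capability under a stable name."""
    def wrap(fn: Callable[..., str]) -> Callable[..., str]:
        REGISTRY[name] = fn
        return fn
    return wrap

@skill("restaurant.book")
def book_table(party_size: int, time: str) -> str:
    # The agent supplies structured arguments it parsed from the
    # user's spoken or typed intent; there is no UI layer.
    return f"Booked a table for {party_size} at {time}"

# Dispatch by capability name rather than launching an app:
print(REGISTRY["restaurant.book"](party_size=2, time="19:00"))
```

The divergence from iOS and Android is the unit of distribution: capabilities and their schemas, not apps and their view hierarchies.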
Founders: agent-first device opens new SaaS niches, but device install base under 10M in year one is the realistic ceiling. Treat it as complementary.
Investors — Apple, Google, Samsung, NVIDIA — short-term immaterial. Watch the 2027-2028 device category dynamics.
Consumers: minimal change short-term. Most likely path: power users adopt as a second device for 1-2 years post-launch. Mainstream uptake from 2028-2030.
3-Line Summary
- OpenAI is building an agent-first smartphone with Jony Ive.
- Late-2027 target, screenless compact form factor.
- 18-year iPhone paradigm under long-term threat — short-term immaterial.
Related articles

OpenAI Put a Terminal in Its API – From Model Company to Agent Platform
OpenAI's Responses API now includes Shell tool, hosted containers, Skills, and Context Compaction. An agent infrastructure that maintains accuracy across 5-million-token sessions.

GPT-5.5 Ships: Agentic Coding and Computer Use Just Stepped Up a Level
OpenAI released GPT-5.5 with major upgrades to multi-step agentic coding and computer use. SWE-Bench Verified passes 75% and OSWorld leaps to 56% — the largest single-generation jump for OpenAI in agent benchmarks.

OpenAI's Lilli Replaces Internal Knowledge Search with AI Agents
OpenAI's internal search system Lilli launches for enterprise. Can it replace Notion and Confluence?
