At OpenAI’s 2025 DevDay in San Francisco, CEO Sam Altman and the leadership team unveiled a sweeping vision: to recast OpenAI not merely as a model provider, but as a full-stack, agentic platform. The theme was clear—move beyond “ask me anything” to “ask me to do anything for you.” In the process, they introduced AgentKit, upgraded Codex, an expanded Apps SDK in ChatGPT, and announced a landmark AMD–OpenAI compute partnership.
Below, we dissect what was announced, why it matters, and how developers, enterprises, and the broader AI ecosystem should respond to the shift.

Key Highlights from DevDay 2025
OpenAI’s official DevDay overview provides a summary of the major launches: Apps inside ChatGPT, AgentKit, Sora 2, Codex updates, and new model releases. In particular, the launch of AgentKit marks a turning point for building production-grade autonomous agents.
A few anchors:
- AgentKit: a unified toolkit for designing, deploying, and optimizing agents
- Agent Builder: drag-and-drop workflow canvas with versioning and guardrails
- ChatKit: embeddable chat UI for agent experiences
- Connector Registry: central integration hub for data sources and tool access
- Eval tools and reinforcement fine-tuning support
- An Apps SDK enabling third-party apps to run within ChatGPT
- Updates to Codex, including a Slack integration and enterprise admin controls
- A new hardware/compute partnership with AMD to scale AI infrastructure
Geeky Gadgets also covered how AgentKit integrates with OpenAI’s model stack and offers visual tools for building agents.
The DevDay 2025 event was framed as the moment when ChatGPT evolves into an AI-first operating layer, not just a chatbot.
AgentKit: The Next-Gen Agent Development Framework
What Is AgentKit?
From OpenAI’s blog “Introducing AgentKit”:
“We’re launching AgentKit, a complete set of tools for developers and enterprises to build, deploy, and optimize agents.”
AgentKit is meant to remove friction from creating multi-tool, multi-step AI agents — a process that previously required stitching together orchestration, connectors, prompt tuning, UI, and evaluation pipelines.
Core Components & Features
Agent Builder
A visual canvas where developers compose agent workflows from nodes (tool calls, logic, guardrails, decision points). You can preview runs, version workflows, and instrument evaluation logic.
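Conceptually, a canvas like this compiles down to a graph of typed nodes. As a purely illustrative sketch (the node layout, field names, and tiny runner below are hypothetical, not Agent Builder's actual export format), a guardrailed workflow might be represented like this:

```python
# Hypothetical sketch: a workflow as a graph of guardrail, tool, and decision
# nodes wired by "next" edges. This is NOT Agent Builder's real schema.
WORKFLOW = {
    "start":     {"type": "guardrail", "check": "is_procurement_request", "next": "lookup"},
    "lookup":    {"type": "tool", "name": "vendor_lookup", "next": "decide"},
    "decide":    {"type": "decision", "condition": "within_budget",
                  "if_true": "provision", "if_false": "escalate"},
    "provision": {"type": "tool", "name": "issue_virtual_card", "next": None},
    "escalate":  {"type": "tool", "name": "notify_approver", "next": None},
}

def run_workflow(graph: dict, handlers: dict, node_id: str = "start") -> None:
    """Walk the graph, dispatching each node to a handler keyed by its type.

    Each handler receives the node and returns the id of the next node to
    visit (guardrail/decision handlers pick which edge to follow), or None
    to stop.
    """
    while node_id is not None:
        node = graph[node_id]
        node_id = handlers[node["type"]](node)
```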
ChatKit
A UI toolkit that lets you embed chat-driven agent experiences into your web or app front end, enabling users to interact with agents seamlessly.
Connector Registry
A central admin panel for managing data and tool connections (e.g., Dropbox, Google Drive, internal APIs), helping ensure consistent tool access across your agents.
Evaluation & Fine-Tuning
Built-in support for datasets, trace grading, prompt optimization, and reinforcement fine-tuning to measure and improve agent performance.
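To make "trace grading" concrete, here is a minimal, library-agnostic sketch of an offline eval loop; the dataset shape, the `run_agent` callable, and the pass/fail rubric are assumptions for illustration, not OpenAI's eval schema:

```python
from typing import Callable

# Tiny golden dataset: a prompt plus a fact we expect in a correct answer.
DATASET = [
    {"input": "How many vacation days do new hires get?", "must_contain": "15 days"},
    {"input": "What is the travel reimbursement limit?", "must_contain": "$75"},
]

def grade_trace(output: str, must_contain: str) -> bool:
    """Trivial rubric: pass if the expected fact appears in the agent's answer."""
    return must_contain.lower() in output.lower()

def evaluate(run_agent: Callable[[str], str]) -> float:
    """Run every example through the agent and report the overall pass rate."""
    passes = sum(grade_trace(run_agent(ex["input"]), ex["must_contain"]) for ex in DATASET)
    return passes / len(DATASET)

# Usage: evaluate(my_agent_fn) -> 0.5 means half the traces passed the rubric.
```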
According to a Medium guide, AgentKit is positioned as a unification of previous OpenAI experiments (Operator, Deep Research) into a more structured and safer system.
Real-World Use Example
OpenAI showed a demo built by Ramp, using AgentKit to automate a procurement workflow:
- A user issues a natural language request: “I need five more ChatGPT business seats.”
- The agent processes the query, applies internal policy logic, looks up vendor data, and provisions a virtual credit card, condensing a process that used to take weeks into minutes.
- Ramp reported going from a blank canvas to a working agent “in just a few hours,” cutting iteration cycles by roughly 70%.
This demo encapsulates OpenAI’s ambition: agents that carry out end-to-end decisions, not just surface-level responses.
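For a rough sense of how such a workflow translates into code, here's a hedged sketch using the published interface of OpenAI's open-source Agents SDK (`Agent`, `function_tool`, `Runner`); the policy and provisioning helpers are hypothetical stand-ins for internal systems, not anything OpenAI or Ramp has published:

```python
# Sketch only: assumes the openai-agents package (`pip install openai-agents`)
# and an OPENAI_API_KEY in the environment. The two tools are hypothetical.
from agents import Agent, Runner, function_tool

@function_tool
def check_purchase_policy(item: str, quantity: int) -> str:
    """Return whether internal policy allows buying `quantity` of `item`."""
    return f"{quantity} x {item}: approved" if quantity <= 10 else "needs manager approval"

@function_tool
def issue_virtual_card(vendor: str, amount_usd: float) -> str:
    """Provision a single-use virtual card for an approved purchase."""
    return f"Virtual card issued for {vendor}, limit ${amount_usd:.2f}"

procurement_agent = Agent(
    name="Procurement Agent",
    instructions="Check purchase policy first, then provision payment for approved requests.",
    tools=[check_purchase_policy, issue_virtual_card],
)

result = Runner.run_sync(procurement_agent, "I need five more ChatGPT Business seats.")
print(result.final_output)
```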
AgentKit vs. Previous SDKs & Agent Tools
Before AgentKit, OpenAI offered a Responses API and an Agents SDK (released in March 2025). AgentKit builds on these, layering visual orchestration, guardrail support, evaluation tooling, and integration infrastructure.
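For context, the Responses API is the lower-level primitive this tooling builds on: one endpoint that takes input, can invoke tools, and returns model output. A minimal call with the official `openai` Python SDK looks roughly like this (the model name is illustrative):

```python
# Minimal Responses API call with the official openai Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4.1",  # illustrative model name
    input="Summarize the key AgentKit components announced at DevDay 2025.",
)
print(response.output_text)  # convenience property that joins the text output
```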
Early reviews have compared it with no-code platforms such as n8n, noting that AgentKit is more tightly bound to OpenAI’s ecosystem and tools.
Apps SDK & Integration into ChatGPT
One of DevDay’s public-facing announcements centered on transforming ChatGPT into an app platform. The new Apps SDK lets developers build apps that run inside ChatGPT — complete with interactive UIs, full-screen views, and integration with chat flows.
In demos, partner apps such as Canva, Zillow, and Coursera ran directly inside ChatGPT, and users could interact with them without switching context.
Sam Altman called this move a shift toward “systems you can ask to do anything for you.” The vision: ChatGPT becomes a platform where the UI and model blend, not just a conversation wrapper.
While the Apps SDK is still in preview, it’s a significant step toward reconfiguring how software is consumed: instead of jumping between apps, the intelligence moves into your chat session.
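The Apps SDK builds on the open Model Context Protocol (MCP), so the server side of a ChatGPT app looks a lot like an ordinary MCP tool server. As a rough sketch, assuming the `mcp` Python package's FastMCP helper (the app name and tool are made up, and the Apps SDK's UI metadata layer is not shown here):

```python
# Rough MCP-server sketch; the Apps SDK layers interactive UI on top of this.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("listing-search")  # hypothetical app name

@mcp.tool()
def search_listings(city: str, max_price: int) -> str:
    """Return a short summary of listings under max_price in a city (demo data)."""
    return f"1 result in {city}: 123 Main St, ${max_price - 10_000:,}"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```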
The AMD Deal: Fueling the Compute Engine
A key underpinning of all these software ambitions is infrastructure. On DevDay, OpenAI announced a major compute partnership with AMD.
The deal reportedly involves deploying hundreds of thousands of AI chips, equivalent to multiple gigawatts of compute, starting in the second half of 2026. It also includes an option for OpenAI to acquire a stake in AMD.
The move is intended to secure supply, reduce dependency, and scale OpenAI’s hardware footprint — a critical bottleneck in AI deployment. Altman repeatedly cited compute availability as a limiting factor in scaling services.
The AMD deal is strategic: it gives OpenAI more control of the stack, ties it to a major chip vendor, and signals serious capital commitment to infrastructure.
OpenAI Agents, Platform Strategy & Future Direction
Agentic AI: The New Paradigm
DevDay 2025 heralds what many in the AI space already call “agentic AI” — models that act autonomously, chain tools, and manage tasks over time.
With AgentKit, OpenAI is turning that concept into a developer reality: you don’t just query models — you enlist them as intelligent actors within your workflows.
Platform vs Model Provider
OpenAI’s shift is stark: fewer announcements about simply shipping a new GPT-5 version (though that happened too) and far more emphasis on platform-level tooling such as agent orchestration, SDKs, UI embedding, and ecosystem growth.
The idea is to become not just a model vendor, but the foundation layer upon which intelligent apps and agents are built.
Implications for Developers & Businesses
- Reduced friction: AgentKit’s visual tooling can significantly reduce time and complexity in building agents.
- Distribution channel: Apps inside ChatGPT gain direct exposure to hundreds of millions of users.
- Ecosystem lock-in: Deep integration with OpenAI’s tools favors those committed to the platform.
- Need for safety and guardrails: More autonomy means more risk; embedded guardrails will matter.
- Hardware and cost pressure: With agents running across services, compute cost and efficiency become critical constraints.
Pricing, Documentation & Availability
Pricing & Access
OpenAI’s AgentKit announcement page outlines the product’s features and includes a “Pricing & Availability” section.
At launch, AgentKit is available to developers and enterprises under a preview or gated access model. The precise pricing tier and licensing haven’t been broadly disclosed yet.
Documentation & Onboarding
OpenAI’s DevDay microsite links directly to documentation for AgentKit, the Apps SDK, and other new capabilities.
Early guides (like a Medium walkthrough) already show, step by step, how to build agents, embed UI, and connect tools.
The expectation is that documentation, sample templates, and community resources will expand rapidly post-launch.
Challenges & Risks to Watch
Guardrails & safety
AgentKit enables powerful agent behavior. Ensuring agents behave safely, avoid hallucinations or misuse, and comply with user intent is paramount.
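As a library-agnostic illustration of the idea (this is not AgentKit's guardrail API), an input guardrail can be as simple as a pre-check that refuses or reroutes a request before the agent acts:

```python
import re
from typing import Callable

BLOCKED_PATTERNS = [r"\bwire\s+\$\d{5,}", r"\bdelete\s+all\b"]  # illustrative rules only

def with_input_guardrail(agent_fn: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap an agent callable so risky requests are refused before any tool runs."""
    def guarded(user_input: str) -> str:
        if any(re.search(p, user_input, re.IGNORECASE) for p in BLOCKED_PATTERNS):
            return "This request needs human approval before I can act on it."
        return agent_fn(user_input)
    return guarded
```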
Complexity of orchestration
Though visual tools help, coordinating multi-agent workflows, error recovery, and fallback logic remains nontrivial.
Cost & scalability
Running agents continuously, making external API calls, and chaining models can become costly — efficient orchestration and caching strategies will matter.
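One cheap lever is memoizing deterministic tool calls so repeated agent steps don't re-hit external APIs. A minimal sketch (the cache policy and TTL are illustrative choices, not a recommendation from OpenAI):

```python
import time
from functools import wraps

def cached_tool(ttl_seconds: int = 300):
    """Memoize a deterministic tool call for ttl_seconds to cut repeat API spend."""
    def decorator(fn):
        store: dict[tuple, tuple[float, object]] = {}
        @wraps(fn)
        def wrapper(*args):
            now = time.time()
            hit = store.get(args)
            if hit and now - hit[0] < ttl_seconds:
                return hit[1]  # fresh cached result, no external call
            result = fn(*args)
            store[args] = (now, result)
            return result
        return wrapper
    return decorator

@cached_tool(ttl_seconds=600)
def vendor_price_lookup(sku: str) -> float:
    return 42.0  # placeholder; a real tool would call an external pricing API here
```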
Platform dependency
Deep integration into OpenAI’s stack may limit portability to other model providers or multi-cloud architectures.
Competitive pressure
Other platforms (Anthropic, Google, Meta) are also building agent frameworks. Interoperability, standards, and open specifications may become battlegrounds.
Regulation and control
As agents gain more autonomy, oversight, auditability, and compliance capabilities will be critical for enterprise adoption.
What You Should Do Now
If you’re a developer, startup, or enterprise evaluating this shift, here’s a quick action plan:
- Get access early to AgentKit preview and Apps SDK to experiment.
- Prototype simple agents (e.g. customer support, task automation) to validate workflows.
- Monitor cost metrics and observe which agent designs are efficient vs. wasteful.
- Integrate safety layers early (guardrails, rejection logic, fallback paths).
- Plan for multi-model flexibility: anticipate whether you may want to use non-OpenAI models in the future (see the sketch after this list).
- Watch AMD and compute strategy — decisions made now will shape infrastructure tradeoffs down the line.
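On the multi-model point above, a thin provider-agnostic interface keeps that option open; a minimal sketch (the `ChatProvider` protocol and the second backend are placeholders, not real integrations):

```python
from typing import Protocol

class ChatProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider:
    def complete(self, prompt: str) -> str:
        from openai import OpenAI
        response = OpenAI().responses.create(model="gpt-4.1", input=prompt)  # illustrative model
        return response.output_text

class OtherVendorProvider:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("Placeholder for a non-OpenAI backend")

def run_task(provider: ChatProvider, prompt: str) -> str:
    # Agent/workflow code depends only on this narrow interface, not a vendor SDK.
    return provider.complete(prompt)
```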
Summary
OpenAI DevDay 2025 marked a strategic inflection point. The company is no longer content to push powerful models alone — it wants to build the environment, tooling, distribution channels, and compute backbone that lets those models operate as agentic software in everyday systems.
AgentKit is the flagship of that transition: a unified agent framework with visual builder, embedded UI, connectors, evaluation, and lifecycle support. Coupled with the Apps SDK for ChatGPT, it opens new pathways for developers to embed intelligence in apps and interactions. And the AMD compute deal underscores that software ambitions must be matched by hardware scalability.
The move to “AI that acts, not just answers” carries immense promise — and risk. How well OpenAI balances capability, safety, cost, and flexibility will determine whether this pivot succeeds or strains existing paradigms. For those building in AI now, DevDay 2025 is not just a set of announcements — it’s a call to rethink how we build the next generation of intelligent software.