From announcement to implementation: Building ChatGPT apps
Adam Shea
Director, Engineering

How OpenAI's new Apps SDK is reshaping user engagement — and what we learned building an application in 48 hours
At OpenAI DevDay 2025, the company unveiled its Apps SDK — a development framework that enables organizations to create integrated application experiences directly within ChatGPT, complete with custom user interface (UI) components and deep-linking capabilities.
This isn't just another API release. It represents a fundamental shift in how users will discover information, products and services. Much like the App Store's launch in 2008 reshaped mobile computing, ChatGPT apps signal a new paradigm in how users interact with applications through conversational AI.
For enterprise organizations, the strategic implications are significant:
- New engagement channel: Users can access your services without leaving their conversational workflow
- Discovery reimagined: ChatGPT can proactively suggest your app at contextually relevant moments
- Custom experiences: Rich UI components enable sophisticated interactions beyond text-based responses
- Seamless integration: Deep-linking connects conversational interfaces directly to your existing platforms
To understand the practical implications of this technology, our team at TELUS Digital built a proof of concept in 48 hours: a stock and news tracker with market data, interactive visualizations and custom UI components. The goal wasn't just to validate the technology — it was to surface the architectural decisions, integration challenges and implementation patterns that separate proof-of-concept demos from enterprise-grade applications.
What follows is a detailed technical account of that journey: the architecture choices we made, the pitfalls we encountered and the lessons learned building on this emerging platform. For technical teams evaluating ChatGPT app development, this is the implementation guide we wish we'd had at the start.
The technical challenge: Building beyond the basics
The OpenAI Apps SDK offers a new paradigm for AI-powered applications. Unlike traditional API integrations, ChatGPT apps leverage the Model Context Protocol (MCP) to create bidirectional communication between AI models and external tools. The result is a conversational interface that can dynamically invoke backend services, render custom UI components and maintain context across complex user interactions.
Architecture decisions: Understanding the MCP server model
The foundation of any ChatGPT app is the MCP server — a backend service that registers tools and resources ChatGPT can invoke. This isn't a traditional REST API; it's a protocol-driven architecture where the server advertises capabilities and the AI model decides when to use them.
Our initial server implementation was deceptively simple:
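The original snippet isn't reproduced here, but the shape is easy to sketch. What an MCP server fundamentally advertises (via `tools/list`) is a declarative tool description like the one below — the tool name, description and schema are illustrative, not the exact ones we shipped:

```typescript
// Sketch: the declarative tool description an MCP server advertises.
// ChatGPT reads this declaration and decides when to invoke the tool;
// the server only supplies the capability and its handler.
const getStockQuoteTool = {
  name: "get_stock_quote",
  description: "Fetch the latest price and daily change for a stock ticker.",
  inputSchema: {
    type: "object",
    properties: {
      symbol: {
        type: "string",
        description: "Ticker symbol, e.g. 'AAPL' or 'TSLA'",
      },
    },
    required: ["symbol"],
  },
};
```

Because the model — not your client code — decides when to call the tool, the description and schema function as a prompt as much as an API contract.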
The complexity emerged in the details. Tool registration order matters — tools must be registered before the server transport initializes. Port conflicts became a recurring issue in development, requiring process management strategies. And the relationship between tool definitions, input schemas and ChatGPT's invocation logic required careful design.
Key insight: MCP servers aren't just API endpoints. They're declarative interfaces that describe capabilities to an AI model. Your tool descriptions, parameter schemas and response formats directly influence how effectively ChatGPT can use your application.
Building tools: From simple to sophisticated
Tool registration follows a consistent pattern, but the design decisions around input schemas and response formats have significant implications:
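As a sketch of that pattern, here is a minimal name-to-handler registry standing in for the SDK's registration call — tool names, payloads and the fixed quote are illustrative, and real MCP handlers are async:

```typescript
// Sketch: a name -> (description, handler) registry, standing in for the
// SDK's tool registration. Kept synchronous here for brevity.
type ToolResult = {
  content: Array<{ type: "text"; text: string }>;
  structuredContent?: unknown;
};
type ToolHandler = (args: Record<string, unknown>) => ToolResult;

const tools = new Map<string, { description: string; handler: ToolHandler }>();

function registerTool(name: string, description: string, handler: ToolHandler): void {
  if (tools.has(name)) throw new Error(`tool "${name}" registered twice`);
  tools.set(name, { description, handler });
}

registerTool("get_stock_quote", "Latest price for a ticker symbol", (args) => {
  const symbol = String(args.symbol ?? "").toUpperCase();
  // A real handler would call a market-data API; a fixed quote keeps this runnable.
  const quote = { symbol, price: 123.45 };
  return {
    content: [{ type: "text", text: `${symbol}: $${quote.price}` }],
    structuredContent: quote, // the structured payload a UI component renders
  };
});
```

Note the split in the return value: `content` is what the model reads, while `structuredContent` is what your UI component consumes — designing both deliberately is one of those decisions with outsized implications.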
Pitfall #1: Input schema flexibility
One of the most critical design decisions is creating input schemas that accommodate both direct tool calls and conversational prompts. ChatGPT users often phrase requests naturally, which requires flexible parameter handling:
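A minimal sketch of that flexibility, assuming a hypothetical time-period parameter — the alias table and the `1M` default are illustrative:

```typescript
// Sketch: tolerate both structured calls ({ period: "1M" }) and conversational
// phrasings ("over the last month") for the same parameter.
function normalizePeriod(raw: string): string {
  const value = raw.trim().toLowerCase();
  const aliases: Record<string, string> = {
    "1d": "1D", "today": "1D",
    "1w": "1W", "last week": "1W", "past week": "1W",
    "1m": "1M", "last month": "1M", "over the last month": "1M",
    "1y": "1Y", "last year": "1Y", "past year": "1Y",
  };
  return aliases[value] ?? "1M"; // fall back to a sensible default rather than erroring
}
```

Falling back to a default instead of rejecting unknown phrasings matters in practice: a hard validation error surfaces to the user as a failed tool call mid-conversation.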
Key insight: Design input schemas to handle both structured API calls and conversational prompts. Use descriptive strings with examples rather than strict enums and include optional parameters for common variations.
The modern development approach: Component-based architecture
One of the most significant advantages of the Apps SDK is support for modern component-based development.
Project structure: Separation of concerns
Component development workflow
The development process follows modern React patterns:
Build process: From React to production
The build process uses `esbuild` to create optimized bundles:
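As a sketch, a single-widget build looks roughly like this — the entry point and output paths are illustrative:

```shell
# Bundle a widget entry point into a single browser-ready ESM file
npx esbuild src/components/StockDashboard.tsx \
  --bundle \
  --format=esm \
  --outfile=dist/stock-dashboard.js \
  --minify
```

One such invocation per component keeps bundles independent, which is what makes the on-demand loading described later possible.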
Server integration: Serving components
The MCP server serves React components through Express routes:
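A sketch of the serving side, reduced to the pure HTML-building step — the bundle URL, mount-point id and route in the comment are illustrative:

```typescript
// Sketch: build the HTML document that wraps a built component bundle.
// The server returns this page when ChatGPT requests the component resource.
function renderComponentPage(bundleUrl: string, rootId = "root"): string {
  return [
    "<!DOCTYPE html>",
    "<html>",
    '<head><meta charset="utf-8"></head>',
    "<body>",
    `<div id="${rootId}"></div>`, // React mounts here
    `<script type="module" src="${bundleUrl}"></script>`,
    "</body>",
    "</html>",
  ].join("\n");
}

// In our server this string backs an Express route, e.g.:
// app.get("/components/stock-dashboard", (_req, res) =>
//   res.type("html").send(renderComponentPage("/dist/stock-dashboard.js")));
```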
Resource registration: Modern component serving
Instead of embedding HTML strings, we register components as resources that serve complete HTML pages:
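A sketch of such a registration, following the conventions in OpenAI's Apps SDK examples (the `ui://` URI scheme and `text/html+skybridge` mime type); the widget name, HTML and metadata key usage shown in the comment are illustrative:

```typescript
// Sketch: a component resource registration. ChatGPT fetches this resource
// and renders the HTML (and the bundle it loads) inside its sandbox.
const dashboardResource = {
  uri: "ui://widget/stock-dashboard.html",
  mimeType: "text/html+skybridge",
  text:
    '<div id="root"></div>' +
    '<script type="module" src="https://example.com/dist/stock-dashboard.js"></script>',
};

// The matching tool then points at the widget via its metadata, e.g.:
// _meta: { "openai/outputTemplate": "ui://widget/stock-dashboard.html" }
```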
Pitfall #2: Component state management
React components need to be reactive to ChatGPT's data flow. The `window.openai` API provides access to tool inputs and outputs:
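A minimal sketch of reading that API defensively — `host` stands in for `window`, and the two fields shown are the ones we relied on:

```typescript
// Sketch: read tool data defensively from the window.openai global.
type OpenAiHost = { openai?: { toolInput?: unknown; toolOutput?: unknown } };

function readToolData(host: OpenAiHost): { input: unknown; output: unknown } {
  return {
    input: host.openai?.toolInput,
    output: host.openai?.toolOutput, // undefined until the tool call completes
  };
}
```

The key point is the `undefined` case: at first render the tool call may not have completed, so components must treat missing output as a normal state, not an error.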
Data flow architecture: Event-driven reactive patterns
One of the most challenging aspects of ChatGPT app development is understanding how data flows from tool invocation to UI rendering. The `window.openai` object provides access to tool inputs and outputs, but manually polling these values creates timing issues and unnecessary complexity.
The proper solution is to leverage the `openai:set_globals` event that ChatGPT dispatches whenever global values change, including `toolOutput`. This event-driven approach eliminates the need for retry logic and makes components naturally reactive to data updates.
The reactive hook pattern
Instead of manually checking `window.openai` values, we use React's `useSyncExternalStore` to subscribe to global updates:
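A sketch of the subscribe/snapshot pair that plugs into `useSyncExternalStore` — `host` stands in for `window`, which in the browser carries the `openai` global and dispatches `openai:set_globals` when values like `toolOutput` change:

```typescript
// Sketch: subscribe/snapshot pair for React's useSyncExternalStore.
type OpenAiGlobals = { toolOutput?: unknown };
type OpenAiHost = EventTarget & { openai?: OpenAiGlobals };

function makeToolOutputStore(host: OpenAiHost) {
  const subscribe = (onStoreChange: () => void): (() => void) => {
    host.addEventListener("openai:set_globals", onStoreChange);
    return () => host.removeEventListener("openai:set_globals", onStoreChange);
  };
  // Returns the same object reference between events, which keeps
  // useSyncExternalStore from re-rendering unnecessarily.
  const getSnapshot = (): unknown => host.openai?.toolOutput;
  return { subscribe, getSnapshot };
}

// In a component:
// const store = makeToolOutputStore(window as OpenAiHost);
// const toolOutput = useSyncExternalStore(store.subscribe, store.getSnapshot);
```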
Using reactive hooks in components
Components automatically re-render when data becomes available:
Pitfall #3: Manual polling and retry logic
Our initial approach used manual polling with retry mechanisms — this created race conditions and unnecessary complexity:
// ❌ Anti-pattern: Manual polling with retries
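A reconstruction of roughly what that loop looked like — the timings and the `host` parameter (standing in for `window`) are illustrative:

```typescript
// ❌ Sketch of the retry loop we removed: poll until toolOutput shows up,
// then give up after a fixed number of attempts.
async function waitForToolOutput(
  host: { openai?: { toolOutput?: unknown } },
  retries = 10,
  delayMs = 50,
): Promise<unknown> {
  for (let attempt = 0; attempt < retries; attempt++) {
    const output = host.openai?.toolOutput;
    if (output !== undefined) return output;
    await new Promise((resolve) => setTimeout(resolve, delayMs)); // timing-sensitive busy wait
  }
  throw new Error("toolOutput never arrived"); // arbitrary cutoff drops late data
}
```

The failure modes are visible in the code itself: the delay is a guess, the cutoff is arbitrary, and any data arriving after the final attempt is silently lost.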
Key insight: Enterprise applications require resilience, but that doesn't mean manual retry logic. The `openai:set_globals` event provides a native, reliable mechanism for reactive data updates. Build components that subscribe to these events rather than polling for data availability. This eliminates timing issues, reduces complexity and ensures components automatically update when new data arrives.
Security and permissions: The read-only annotation
In enterprise contexts, security isn't optional. Our analytics platform only displays data — it doesn't modify user preferences or execute transactions. Communicating this to users required proper permission annotations:
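A sketch of the annotation, using the MCP tool annotations field — the tool name and description are illustrative:

```typescript
// Sketch: annotating a tool that only reads data.
const newsTool = {
  name: "get_market_news",
  description: "Fetch recent headlines for a ticker. Does not modify any data.",
  annotations: {
    readOnlyHint: true, // surfaces to users as READ access rather than WRITE
  },
};
```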
Pitfall #4: Permission dialog confusion
Without `readOnlyHint: true`, users see "WRITE" permissions for tools that only read data — a trust issue that could prevent adoption in security-conscious organizations.
Key insight: Permission transparency builds trust. Always annotate tools with appropriate access levels and design your architecture to enforce least-privilege principles.
UI design: Modern component architecture
Our design system leverages React's component model for maintainable, reusable UI:
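The component code itself isn't reproduced here, but the reuse it buys can be sketched without React: shared helpers like the one below keep every card and widget rendering the same values the same way. The format and tone names are illustrative:

```typescript
// Sketch: a shared formatter our presentational components reuse,
// so price changes render consistently across the design system.
function formatChange(change: number): { label: string; tone: "up" | "down" | "flat" } {
  if (change > 0) return { label: `+${change.toFixed(2)}%`, tone: "up" };
  if (change < 0) return { label: `${change.toFixed(2)}%`, tone: "down" };
  return { label: "0.00%", tone: "flat" };
}
```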
CSS architecture: Component-scoped styles
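As a sketch of the scoping approach (class names illustrative): prefixing every rule with the component's root class keeps one widget's styles from colliding with another's inside ChatGPT's sandbox.

```css
/* Sketch: component-scoped styles via a root-class prefix */
.stock-dashboard { font-family: system-ui, sans-serif; }
.stock-dashboard .price-up { color: #0a7d33; }
.stock-dashboard .price-down { color: #b3261e; }
```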
Pitfall #5: Component lifecycle management
React components still need proper cleanup and state management:
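A sketch of the pattern, reduced to the attach/cleanup pair a `useEffect` body returns — the event name is illustrative:

```typescript
// Sketch: subscribe in an effect, return the cleanup so listeners
// don't outlive the component.
function attachThemeListener(target: EventTarget, onChange: () => void): () => void {
  target.addEventListener("theme-change", onChange);
  // React calls the returned function on unmount (and before each re-run)
  return () => target.removeEventListener("theme-change", onChange);
}

// In a component:
// useEffect(() => attachThemeListener(window, applyTheme), [applyTheme]);
```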
Performance optimization strategies
As our application grew in complexity, we implemented several optimization patterns:
- Code splitting: Components are built separately and loaded on demand
- Lazy loading: Components only initialize when their container elements exist
- State management: Proper React state patterns prevent unnecessary re-renders
Sandbox testing with ngrok
During development, ChatGPT's sandbox environment requires publicly accessible URLs for component resources. We used ngrok to expose our local development server for testing:
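Assuming the local server listens on port 3000 (adjust to your setup), starting the tunnel is a single command:

```shell
# Expose the local MCP server to ChatGPT's sandbox
ngrok http 3000
```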
This creates a public tunnel to your local server, providing URLs like `https://abc123.ngrok-free.dev/mcp` that ChatGPT can access. Update your MCP resource registrations to use the ngrok URL:
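One way to avoid hand-editing registrations on every restart is to derive URLs from an environment variable — the variable name `PUBLIC_BASE_URL` here is our convention, not an SDK requirement:

```typescript
// Sketch: build resource URLs from an env var so switching between
// localhost and the current ngrok tunnel is a one-line change.
function resourceUrl(path: string): string {
  const base = process.env.PUBLIC_BASE_URL ?? "http://localhost:3000";
  return new URL(path, base).toString();
}
```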
Key considerations for ngrok setup:
- Free tier limitations: ngrok free tier has session timeouts and bandwidth limits
- URL consistency: ngrok URLs change on restart unless you have a paid plan
- CORS handling: Ensure your server handles CORS for the ngrok domain
- Security: Never expose production servers through ngrok without proper authentication
Component deployment
Key takeaways for enterprise teams
Building this ChatGPT app surfaced several insights relevant to enterprise AI development:
1. Component-based architecture wins
Modern React development patterns translate well to ChatGPT apps. Invest in proper component architecture from the start.
2. Protocol-first architecture
MCP represents a shift from request-response APIs to capability-driven protocols. Design your tools as declarative capabilities, not imperative endpoints.
3. Resilience over perfection
Network conditions, state management and AI model behavior introduce variability. Build systems that gracefully handle uncertainty.
4. User trust through transparency
Clear permission models, read-only annotations and explicit data handling build the trust necessary for enterprise adoption.
5. Start simple, iterate quickly
Our most complex features started as basic tool definitions. Incremental development allowed us to validate assumptions before investing in sophisticated UX.
6. Event-driven data access
Don't rely on manual polling or retry mechanisms. Subscribe to the `openai:set_globals` event to reactively handle data updates. This eliminates timing issues and makes components naturally reactive to changes in `toolOutput` and other global values.
The path forward
OpenAI's introduction of apps in ChatGPT represents more than a new feature — it's a new development paradigm. For enterprise AI teams, this creates opportunities to build sophisticated, conversational interfaces that integrate seamlessly with existing workflows.
The technical challenges are real: protocol compliance, component state management, data flow and security considerations all require careful attention. But the potential is equally significant: AI-powered applications that feel native to the conversational interface, discoverable at the moment of need and capable of complex interactions without sacrificing user experience.
Our analytics platform demonstrates that production-ready ChatGPT apps are achievable with the right architectural approach. The journey from announcement to implementation taught us that the future of enterprise AI isn't just about better models — it's about better integration patterns, more thoughtful architectures and development practices that prioritize resilience and trust.
For teams building on platforms like Fuel iX™ or exploring ChatGPT app development, the opportunity is clear: the tools are ready, the protocols are maturing and the ecosystem is expanding. The question isn't whether to build — it's what to build next.
About this implementation
This article is based on real production development of a ChatGPT app using the OpenAI Apps SDK and Model Context Protocol. All code examples, pitfalls and solutions are drawn from actual implementation experience building an analytics dashboard platform.


