Wonder is in public alpha. The outputs are plausible, not good — but that's where canvas-to-code generally sits right now, not a problem specific to Wonder.
The angle it's taking is different from most. Canvas-to-code tools typically treat the canvas and the codebase as separate worlds: generate, export, paste, clean up. Wonder's MCP integrations (Claude Code, Cursor, Codex) let the design surface read your actual codebase before generating — your components, your tokens, your structure. What comes out has a better chance of fitting what's already there than something generated from a blank slate.
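To make the MCP hookup concrete: clients like Claude Code register MCP servers in a JSON config with an `mcpServers` map, each entry naming a launch command. The server name and `npx` package below are placeholders for illustration, not Wonder's actual published command — check their docs for the real one.

```json
{
  "mcpServers": {
    "wonder": {
      "command": "npx",
      "args": ["-y", "wonder-mcp"]
    }
  }
}
```

Once registered, the design surface can query the agent's view of your repo (components, tokens, file structure) before it generates anything, which is the whole point of the approach.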
In practice at alpha, you're getting 60-70% of the way to something shippable. The MCP approach is the right bet, though. A tool that knows your codebase before it starts has a structural advantage over tools that treat generation as a one-way export.
Free tier: 300 credits. Pro is $20/month for 3,000 credits and unlimited MCP calls. Worth trying at that price if you want to see what this approach looks like in practice.
Similar tools
Replit
Cloud-based development platform centered on AI Agent 3, which works autonomously for up to 200 minutes — testing its own code, spawning subagents, and connecting to 30+ services including Stripe, Figma, and Salesforce. Raised $250M at a $3B valuation in January 2026. Best for rapid prototyping and solo builders; cost can spiral on heavy usage.
Adora
Adora automatically captures every screen, modal, and user journey in your live product without manual event tagging, building a continuously updated visual library of what your app actually looks like in production. AI watches for usability issues 24/7 and links findings directly to the affected screens. Founded by ex-Canva execs, backed by Blackbird Ventures with a $9.9M seed, and used by teams at Canva, Notion, and Replit.
Agentation
Agentation is a floating toolbar you add to your React app that lets you click on UI elements and generate structured markdown feedback for AI coding agents. Instead of describing "the blue button in the sidebar," it captures CSS selectors, component names from the React fiber tree, and positional data so agents like Claude Code or Cursor can locate and fix the exact element. Free and open source, works with any AI agent that accepts text input.
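The fiber-tree lookup mentioned above can be sketched roughly as follows. React stamps each rendered DOM node with an internal property named `__reactFiber$<random>`; walking the fiber's `return` chain up to the first function or class component yields a human-readable component name. This relies on React internals, not a public API, and is an illustrative sketch of the technique — not Agentation's actual implementation.

```javascript
// Sketch: recover a React component name from a DOM node via the
// internal fiber reference. React's "__reactFiber$" key is an
// implementation detail (React 16+), not a public API.

function getFiberFromNode(node) {
  // React attaches the fiber under a randomized key on the DOM node.
  const key = Object.keys(node).find((k) => k.startsWith("__reactFiber$"));
  return key ? node[key] : null;
}

function getComponentName(node) {
  let fiber = getFiberFromNode(node);
  while (fiber) {
    const type = fiber.type;
    // Host elements have string types ("button", "div"); keep walking
    // up until we hit a function or class component.
    if (typeof type === "function") {
      return type.displayName || type.name || "Anonymous";
    }
    fiber = fiber.return;
  }
  return null;
}

// Usage with a mocked node (a real tool would pass event.target):
function SidebarButton() {}
const mockNode = {
  __reactFiber$abc123: {
    type: "button",
    return: { type: SidebarButton, return: null },
  },
};
getComponentName(mockNode); // → "SidebarButton"
```

Pairing a name like this with the element's CSS selector and bounding box is what turns "the blue button in the sidebar" into something an agent can locate deterministically.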
Antigravity
Google Antigravity is an agentic IDE built on a forked VS Code that lets AI agents autonomously plan, write code, run the terminal, and test in a browser with minimal hand-holding. It pairs with Google Stitch (their text-to-UI design tool) to form a full design-to-code pipeline. Available free in public preview with generous Gemini 3 Pro limits; also supports Claude and OpenAI models.