AI Design Workflow Platform
Skelara

Sketch interfaces, generate styled UI, and refine them in chat.

Project Details

Status: Live
Product: AI Design Workflow Platform
Timeline: 21 Weeks
Tech Stack
Next.js 15 · React 19 · TypeScript · Convex · Inngest · OpenAI GPT-5 Mini · AI SDK · Tailwind CSS 4 · Redux Toolkit · Polar
The Problem

Early product ideas usually break apart across whiteboards, mood boards, design files, and code generators. That means every revision loses context, visual consistency slips between screens, and teams end up redrawing or reprompting the same concept instead of iterating on one persistent source of truth.

The Solution

Skelara combines sketching, style exploration, AI generation, conversational redesign, and export into one browser workspace. Users can turn a rough frame into styled HTML, extend it into supporting workflow pages, and keep the full project history, design references, and billing context tied to the same project.

21 Weeks · Prototype-to-Platform Build
< 200ms · Streaming Preview Refresh
1s · Canvas Autosave Debounce
No Install · Browser-Based Access

What It Does

Infinite Canvas

Users can draw frames, shapes, connectors, and text on a persistent canvas instead of starting from a blank prompt. That keeps loose product thinking visible long enough to become something structured.

AI Style Guide

Mood-board uploads are turned into reusable colour and typography tokens before any screen generation happens. This gives the model visual direction that survives across iterations instead of resetting every time.
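To make the idea concrete, here is a minimal sketch of what reusable style tokens and their injection into a prompt could look like. The field names, example values, and `tokensToPromptFragment` helper are illustrative assumptions, not Skelara's actual schema:

```typescript
// Hypothetical shape for the tokens a mood-board pass might emit.
interface StyleGuide {
  palette: { name: string; hex: string; role: "primary" | "secondary" | "ui" }[];
  typography: { role: string; family: string; sizePx: number; weight: number }[];
}

// Example token set, as might be extracted from a single reference image.
const guide: StyleGuide = {
  palette: [
    { name: "Indigo 600", hex: "#4f46e5", role: "primary" },
    { name: "Amber 400", hex: "#fbbf24", role: "secondary" },
    { name: "Slate 100", hex: "#f1f5f9", role: "ui" },
  ],
  typography: [
    { role: "heading", family: "Inter", sizePx: 28, weight: 700 },
    { role: "body", family: "Inter", sizePx: 16, weight: 400 },
  ],
};

// Serialising the tokens into every generation prompt is what lets the
// visual direction survive across iterations instead of resetting.
function tokensToPromptFragment(g: StyleGuide): string {
  const colours = g.palette.map((c) => `${c.role}: ${c.name} (${c.hex})`).join("; ");
  const type = g.typography.map((t) => `${t.role}: ${t.family} ${t.sizePx}px/${t.weight}`).join("; ");
  return `Colours -> ${colours}. Typography -> ${type}.`;
}
```

Because the tokens are stored as structured data rather than prose, the same guide can drive both prompt construction and UI previews.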

Frame-to-UI Generation

A selected wireframe frame is captured as an image and streamed back as editable HTML with Tailwind classes. The generated screen appears directly beside the sketch, so the jump from concept to interface is immediate.
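The streaming behaviour can be sketched as an accumulator that appends each markup chunk to a canvas shape and re-measures it as it grows. The chunk source, shape fields, and measurement heuristic below are stand-ins, not Skelara's real interfaces:

```typescript
// Fake chunk source standing in for a streamed model response.
async function* fakeModelStream(): AsyncGenerator<string> {
  yield `<div class="p-6 rounded-xl bg-white shadow">`;
  yield `<h1 class="text-2xl font-bold">Dashboard</h1>`;
  yield `</div>`;
}

interface CanvasShape {
  html: string;
  height: number;
}

// Placeholder heuristic: real code would measure the rendered DOM height.
function measure(html: string): number {
  return 40 + html.length / 4;
}

// Append each chunk and re-measure, so the container grows with the markup
// and the screen appears beside the sketch while it is still streaming.
async function streamIntoShape(shape: CanvasShape): Promise<CanvasShape> {
  for await (const chunk of fakeModelStream()) {
    shape.html += chunk;
    shape.height = measure(shape.html);
  }
  return shape;
}
```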

Design Chat

Generated screens can be revised through a chat loop that includes the current HTML, project style guide, and optional wireframe context. That makes iteration feel like editing a design conversation instead of starting over.
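One plausible way to assemble that context is to bundle the current HTML, style guide, and optional wireframe notes into a single message array before each revision request. The structure and field names here are assumptions for illustration:

```typescript
// Context carried into every revision turn so the model edits the existing
// screen instead of regenerating from scratch.
interface RevisionContext {
  currentHtml: string;
  styleGuide: string;
  wireframeDescription?: string;
}

type ChatMessage = { role: "system" | "user"; content: string };

function buildRevisionMessages(ctx: RevisionContext, request: string): ChatMessage[] {
  const system = [
    "You are revising an existing screen. Return the complete updated HTML.",
    `Style guide: ${ctx.styleGuide}`,
    // Wireframe context is optional and only included when present.
    ctx.wireframeDescription ? `Wireframe context: ${ctx.wireframeDescription}` : null,
  ]
    .filter((s): s is string => s !== null)
    .join("\n");

  return [
    { role: "system", content: system },
    { role: "user", content: `Current HTML:\n${ctx.currentHtml}\n\nChange request: ${request}` },
  ];
}
```

Keeping the current HTML inside the user turn is what makes each response read as a diff against the last state rather than a fresh generation.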

Workflow Expansion

Once the main screen exists, Skelara can generate complementary pages like dashboards, settings, profiles, and data tables in the same visual system. This turns a single concept into a broader product flow quickly.

Autosaved Projects

Projects are stored with thumbnails, canvas state, viewport data, and generated output so users can resume work instead of rebuilding sessions. Fast autosave and background persistence reduce the risk of losing in-progress exploration.
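The autosave pattern described above is a trailing debounce: rapid canvas edits collapse into a single write once input pauses. A minimal sketch, with the 1s window and save callback as stand-ins for whatever Skelara actually wires up:

```typescript
class AutosaveDebouncer<T> {
  private pending: T | undefined;
  private timer: ReturnType<typeof setTimeout> | undefined;

  constructor(
    private save: (state: T) => void,
    private delayMs = 1000,
  ) {}

  // Each call replaces the pending snapshot and restarts the window,
  // so a burst of edits produces one save with the latest state.
  schedule(state: T): void {
    this.pending = state;
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(() => this.flush(), this.delayMs);
  }

  // Persist immediately (e.g. on tab close) instead of waiting out the delay.
  flush(): void {
    if (this.timer) clearTimeout(this.timer);
    this.timer = undefined;
    if (this.pending !== undefined) {
      this.save(this.pending);
      this.pending = undefined;
    }
  }
}
```

The explicit `flush` is the piece that reduces the risk of losing in-progress work: it lets a lifecycle event force the last snapshot to disk before the window expires.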

In Action

Every workspace lives in the projects dashboard — thumbnails show the last canvas state so users can pick up exactly where they left off.
Rough layout ideas take shape on the canvas before any AI is involved — selecting a frame surfaces the Generate Design action in one click.
Reference images are pinned to the moodboard first, giving the model concrete visual direction before it extracts colours and type styles.
From a single reference upload, Skelara produces a complete named palette — primary, secondary, and UI component colours — ready to drive every generated screen.

Highlights

What We Built

Skelara is an AI-assisted design workspace for founders, product designers, and front-end teams who want to move from rough interface ideas to editable UI concepts without jumping between disconnected tools. The product is built as a Next.js 15 and React 19 web app, with Redux managing the client-side canvas interaction model and Convex handling project data, auth-backed storage, and uploaded assets. Users create a project, sketch frames on an infinite canvas, upload visual references into a mood board, generate a style guide, and then turn individual frames into streaming HTML and Tailwind UI. Inngest is used for background workflows such as autosave and billing-event handling, while Polar manages subscriptions and credit-based access. The result is a single browser workspace where idea capture, visual direction, AI generation, and persistence all stay in the same context.
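The credit-based access mentioned above can be pictured as a small gate in front of each AI route. The route names, costs, and entitlement shape below are invented for illustration and are not Polar's or Skelara's real schema:

```typescript
// Hypothetical per-user entitlement record kept in sync with billing events.
interface Entitlement {
  credits: number;
}

// One plausible pricing table mapping AI routes to credit costs.
const ROUTE_COST: Record<string, number> = {
  "generate-design": 5,
  "style-guide": 3,
  "chat-revision": 1,
};

// Deduct credits up front; the route only runs when this returns true.
function consumeCredits(user: Entitlement, route: string): boolean {
  const cost = ROUTE_COST[route];
  if (cost === undefined || user.credits < cost) return false;
  user.credits -= cost;
  return true;
}
```

Centralising the check like this is what lets several AI routes share one entitlement source instead of each route tracking usage independently.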

The Hardest Problems

The hardest engineering problem here is preserving context across multiple AI steps instead of treating each prompt as stateless. Skelara has to combine a wireframe snapshot, stored style-guide tokens, optional inspiration images, and the current generated HTML so the next response feels like a revision rather than a reset. A second challenge is making generation feel interactive on the canvas, which is why output streams directly into canvas shapes and the container re-measures itself as markup grows. The background architecture also carries real product complexity: credits are consumed across several AI routes, Polar webhooks need to stay in sync with user entitlements, and Inngest functions bridge those events back into Convex. On top of that, model-generated HTML has to be sanitized before rendering, which makes security one of the sharpest edges in the whole system.
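To show the categories of risk that sanitization has to strip, here is an illustrative (emphatically not production-grade) pass over model-generated HTML; a real implementation should use a vetted library such as DOMPurify rather than regexes:

```typescript
// Toy sanitiser sketch: removes the three most obvious injection vectors
// in untrusted model output. Regex-based filtering like this is bypassable
// and exists here only to illustrate what must be stripped.
function sanitizeGeneratedHtml(html: string): string {
  return html
    // Drop whole <script>...</script> blocks.
    .replace(/<script\b[\s\S]*?<\/script>/gi, "")
    // Strip inline event handlers like onclick="..." or onload='...'.
    .replace(/\son\w+\s*=\s*(?:"[^"]*"|'[^']*'|[^\s>]+)/gi, "")
    // Neutralise javascript: URLs in href/src attributes.
    .replace(/(href|src)\s*=\s*(["']?)\s*javascript:[^"'>\s]*/gi, "$1=$2#");
}
```

The sharp edge is exactly that a regex approach like this can be evaded with encoding tricks, which is why a DOM-based sanitizer is the right tool before rendering generated markup.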

What We Learned

This codebase makes it clear that the real product is not just "AI makes a screen" but "AI remembers the design conversation." Style-guide generation, wireframe context, and workflow-page expansion matter because users need continuity between the first concept and the fifth revision. The architecture also shows a useful split between immediate and deferred work: canvas interaction and streamed feedback stay in the client, while persistence and entitlement workflows are pushed into background infrastructure. If this project continues toward launch, the biggest improvement should be hardening the HTML-sanitization layer and tightening the written product story, because the implementation is already further along than the documentation. The most important lesson is that design-to-code tools only feel credible when context survives every handoff.

The Result

Skelara already covers the core loop needed for early product exploration in one place. A user can sign in, create a project, sketch a layout, derive a style system from reference images, generate a polished screen, expand it into adjacent workflow pages, refine the result in chat, and export output without leaving the workspace. There is no production URL documented in the repository, so this write-up treats the product as a near-launch work in progress rather than a publicly released SaaS. Even in that state, the repo demonstrates a coherent product thesis: keep the messy front end of interface ideation inside one persistent system so iteration speed compounds instead of resetting.

Like what you see?

Let's build your next product together.