
From Idea to Open Source in a Day: Building Markwell with Cursor

5 min read · Written by a human, edited by AI
AI · Electron · Open Source · Development

After building this website deliberately over weeks — documentation-first, design system in place, small iterations — I wanted to test the other end of the spectrum. How fast can you go from zero to a polished, published open source app using Cursor? Not a throwaway prototype: something with proper engineering standards, tests, CI, and the kind of scaffolding that makes a project credible for others to use and contribute to.

So I set out to build Markwell: a minimal markdown viewer for macOS. Open a file or pick from Recent, read in a large panel. Mermaid diagrams in code blocks work. No accounts, no extras. Nothing revolutionary — and that was the point. A small, well-defined scope to test velocity. It was also filling a gap for me personally: applications that let you read markdown were either overly feature-rich or a side feature of a larger application. I just wanted something simple I could use to read a markdown file. Was that too much to ask?

The Experiment

The question was simple: can Cursor help you ship something genuinely useful, with proper engineering standards, in a single sitting? I wasn't aiming for the deliberate, multi-week collaboration that produced this site. I wanted speed — but I also wanted quality. Tests, CI, linting, a proper open source package. No lockfile or .npmrc committed (deliberate choice for portability), but everything else you'd expect from a repo you'd be happy to share.

The not-so-simple question was whether I could build this in a technology stack I have little experience with (Electron) and ship it as an open source project, something I'd never done before either.

What Cursor Delivered

The result is a complete Electron app with a clear structure: main.js for the main process, electron/preload.js exposing window.api to the renderer, and src/ for the UI. Markdown is rendered with marked; Mermaid diagrams are loaded from a CDN with SRI. Window size and position are persisted via a small store. The app does one thing and does it well.
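
That preload layer is where Electron's security model lives. Here's a minimal sketch of the pattern; the channel and method names are my illustration, not necessarily Markwell's actual API:

// electron/preload.js (illustrative; channel names are assumptions)
const { contextBridge, ipcRenderer } = require('electron');

contextBridge.exposeInMainWorld('api', {
  // Ask the main process to show an open dialog and return the chosen file
  openFile: () => ipcRenderer.invoke('file:open'),
  // Fetch the persisted list of recently opened files
  getRecentFiles: () => ipcRenderer.invoke('recent:list'),
});

The renderer only ever sees window.api; it never touches Node or Electron internals directly.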

What surprised me was how much beyond the app was produced. Not just the code, but the full open source package:

  • GitHub Actions CI — Lint, format check, and tests on Ubuntu; a separate job builds the Mac app on macos-latest
  • Security — npm audit --omit=dev --audit-level=critical runs in CI so known critical vulnerabilities fail the build
  • Contributing guide — Run and build instructions, registry setup, code style, how to submit changes
  • Security policy — Responsible disclosure via GitHub's private vulnerability reporting
  • MIT licence, CODE_OF_CONDUCT.md, SECURITY.md
  • ESLint and Prettier configs
  • Unit tests using Node's built-in test runner (node --test); see the sketch just below
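
With no test framework dependency, a test is just a file the runner picks up. An illustrative example (the file path and assertion are mine, not copied from the repo):

// test/render.test.js (illustrative; run with `node --test`)
const test = require('node:test');
const assert = require('node:assert/strict');
const { marked } = require('marked');

test('renders a markdown heading to HTML', () => {
  const html = marked.parse('# Hello');
  assert.match(html, /<h1[^>]*>Hello<\/h1>/);
});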

All of that came out of the same session. The repo is at github.com/matt-asbury/markwell.
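
For a sense of what that CI setup looks like, here's a rough sketch of the two-job workflow; the job and script names are assumptions rather than a copy of the repo's file:

name: CI
on: [push, pull_request]

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # No lockfile is committed, so plain `npm install` rather than `npm ci`
      - run: npm install
      - run: npx eslint .
      - run: npx prettier --check .
      - run: node --test
      # Fail the build on known critical vulnerabilities in runtime deps
      - run: npm audit --omit=dev --audit-level=critical

  build-mac:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install
      - run: npm run build # assumes a "build" script in package.json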

Example of the kind of defaults that showed up without asking — Electron security:

webPreferences: {
  // Bridge script that defines the renderer's window.api surface
  preload: path.join(__dirname, 'electron', 'preload.js'),
  contextIsolation: true, // renderer and preload run in separate JS contexts
  nodeIntegration: false, // no Node APIs exposed to the renderer
}

Context isolation and no Node integration in the renderer: the right baseline for an Electron app.

What Surprised Me

The quality of the OSS scaffolding. CONTRIBUTING.md, SECURITY.md, CODE_OF_CONDUCT.md — the things many developers skip or do poorly — were coherent and practical. The contributing guide explains why there's no lockfile, how to use a custom npm registry, and how to simulate CI locally. Dependency management used tilde ranges (~x.y.z) for critical deps so only patch updates are installed (~1.2.3 can move to 1.2.4, but never to 1.3.0); that choice was documented. The app itself is usable: readable font size, max-width for the content, recent files in the sidebar. It feels like a real tool, not a demo.

What Still Needed a Human

The taste decisions — what to build, the scope, what "minimal" means — were mine. So was reviewing the security model; Electron apps need careful attention, and I wanted to understand what was being created. Deciding to publish it as open source, and how to position it in the README ("Read Markdown well"), was also human. Cursor produced the structure and the prose; the voice and the decision to ship were mine.

Lessons Learned

Speed and quality aren't mutually exclusive when the tooling can generate both the feature and the scaffolding. Small, well-scoped projects are where AI-assisted development shines brightest: clear boundaries, no sprawling architecture, and the overhead of "doing it properly" — CI, tests, docs — approaches zero when the AI handles the boilerplate.

The contrast with the website project is instructive. For the website, the deliberate approach was right: custom design system, accessibility audits, SEO, multi-week collaboration. For Markwell, speed was the right call. The skill is matching your process to your project. Sometimes you need to go slow and own every decision. Sometimes you need to ship in a day and still have something you're happy to put your name on.

Wrapping Up

I now have both experiences: the deliberate, documentation-first build and the fast, scope-limited experiment. Neither is wrong. Markwell is a small utility I use myself and am comfortable maintaining. If you're curious about the code or want to contribute, the repo is Markwell on GitHub. Issues and PRs are welcome.

What's your experience been with speed vs. deliberation when building with AI? When do you choose to go fast, and when do you slow down?
