Projects

A few things I am building.

vmail

Email for AI

AI assistants are getting capable enough to handle real work—scheduling, research, customer support—but they can't send or receive email. The protocols exist. The problem is access: giving an AI agent your email credentials is a security disaster, and most email systems weren't designed for programmatic use by autonomous agents.

vmail is an email interface built for this future. It's a CLI and MCP server for Stalwart Mail Server that exposes full email operations—reading, sending, organizing, searching—through a structured API that AI agents can use safely. The MCP integration means any AI assistant that speaks the Model Context Protocol can manage your inbox: triage messages, draft replies, generate disposable addresses for signups, track conversations across threads.
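To give a sense of what that structured access looks like, here is a rough sketch of the JSON-RPC payload an MCP client sends when it invokes a tool on a server like this. The tool name and arguments below are hypothetical illustrations, not vmail's actual schema.

```python
# Sketch of an MCP tools/call request, expressed as the JSON-RPC payload
# the Model Context Protocol uses. "search_messages" and its arguments
# are hypothetical, not vmail's real tool names.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_messages",      # hypothetical tool name
        "arguments": {
            "query": "invoice",         # free-text search
            "folder": "INBOX",
            "limit": 10,
        },
    },
}

print(json.dumps(request, indent=2))
```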

The disposable address system makes this practical. Run a command (or let your AI agent run it) and get a random address like abc123@v1l.la that routes to your inbox. Assign it to a category and incoming mail automatically sorts via Sieve filters. When a service starts spamming or gets breached, delete the address. An AI assistant can create addresses on demand—one per vendor, per signup, per conversation—without touching your real address.
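To make the flow concrete, here is an illustrative sketch (not vmail's actual code) of minting a disposable address and the kind of Sieve rule that would route it into a category folder. The address format mirrors the abc123@v1l.la example above; the folder layout and rule text are assumptions.

```python
# Illustrative sketch: mint a random local part and emit a Sieve rule
# that files matching mail into a category folder. The folder naming
# and rule are hypothetical, not vmail's implementation.
import secrets
import string

DOMAIN = "v1l.la"

def new_disposable_address(category: str) -> tuple[str, str]:
    local = "".join(secrets.choice(string.ascii_lowercase + string.digits)
                    for _ in range(6))
    address = f"{local}@{DOMAIN}"
    sieve_rule = (
        'require ["fileinto"];\n'
        f'if address :is "to" "{address}" {{\n'
        f'    fileinto "Categories/{category}";\n'
        '}\n'
    )
    return address, sieve_rule

addr, rule = new_disposable_address("Shopping")
print(addr)
print(rule)
```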

Security is the constraint that makes AI email access viable. Passwords never touch disk. Sessions expire. Every operation logs to an append-only audit trail. Rate limiting prevents runaway agents. The tool assumes that programmatic access to email is dangerous and builds in the guardrails to make it manageable. The bet is that email will increasingly be handled by AI agents acting on your behalf—and the infrastructure should be ready for that.
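The rate-limiting guardrail is the easiest of these to picture. Below is a generic token-bucket sketch of the idea; the capacity and refill numbers are made up, and this is not vmail's implementation, just the shape of the mechanism.

```python
# Generic token-bucket sketch of "rate limiting prevents runaway agents".
# Capacity and refill rate are arbitrary example values.
import time

class TokenBucket:
    def __init__(self, capacity: int = 30, refill_per_sec: float = 0.5):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # deny the operation; the agent has to back off

bucket = TokenBucket()
print(bucket.allow())  # True until the bucket drains
```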

Sprezzy

Language Learning

Language acquisition research consistently shows that vocabulary sticks best when encountered in meaningful context—yet most learning tools strip words from the sentences that give them meaning. Immersion solves this but overwhelms beginners. Sprezzy addresses both problems: it transforms text in your native language into a hybrid where some words appear in your target language, calibrated to what you're ready to learn.

The tool integrates directly into your reading. Upload an epub and read through the built-in reader, activate the bookmarklet on any webpage, or paste text into the interface. The output looks like what you started with, except certain words have been replaced with their target-language equivalents. Words you haven't encountered before display a small translation underneath them; words you've seen enough times appear without the hint. Hovering over any replaced word reveals the original, and the system registers whether you needed that help.

Behind this is a model that tracks your history with every word you've encountered, estimating how likely you are to remember each one and how much new material you can absorb in a given session. When you read, the system selects which words to present in the target language and whether to include hints, then updates its estimates based on how you interacted with the text. The difficulty adjusts to your pace—not a fixed curriculum, but a response to how you're actually progressing.
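The paragraph above doesn't pin the model down, so here is one plausible sketch of the idea, assuming an exponential forgetting curve per word and a per-session budget of new words. The formula, thresholds, and budget are assumptions, not Sprezzy's actual implementation.

```python
# One plausible shape for the per-word model: estimate recall from time
# since last exposure, cap new words per session, and show a hint when
# estimated recall is low. All numbers here are illustrative.
import math
import time
from dataclasses import dataclass

@dataclass
class WordState:
    last_seen: float = 0.0   # unix timestamp of last exposure
    strength: float = 0.5    # would grow with successful, unhinted reads

def recall_probability(w: WordState, now: float) -> float:
    if w.last_seen == 0.0:
        return 0.0           # never seen: treat as unknown
    days = (now - w.last_seen) / 86400
    return math.exp(-days / (1.0 + 5.0 * w.strength))

def plan_replacements(candidates: list[str], known: dict[str, WordState],
                      new_budget: int = 5) -> list[tuple[str, bool]]:
    """Return (word, show_hint) pairs to render in the target language."""
    now = time.time()
    plan, new_used = [], 0
    for word in candidates:
        state = known.setdefault(word, WordState())
        p = recall_probability(state, now)
        if state.last_seen == 0.0:
            if new_used >= new_budget:
                continue     # over budget: leave this word in the native language
            new_used += 1
        plan.append((word, p < 0.6))  # hint whenever recall looks shaky
    return plan
```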

The practical effect is that reading becomes language learning without feeling like study. You're not setting aside time for flashcards or drills; you're reading a book or an article you would have read anyway, and your target language is there, embedded in the content. Progress is slow and incremental by design. Over weeks and months, the proportion shifts, the hints become less necessary, and the language grows familiar through sustained, low-pressure exposure.

Mementi

Voice-First Notes

Ideas don't wait for convenient moments. They show up mid-podcast, mid-commute, mid-conversation—and they leave just as quickly. Most note-taking apps assume you have two free hands and time to type. This one assumes you don't.

Mementi captures thoughts by voice. Hold a button, speak, release. The app transcribes what you said, generates a title, and saves it. No typing, no friction. The idea that would have slipped away is now saved and waiting for when you need it.

In podcast mode, the app listens alongside you, maintaining a running transcript of whatever you're hearing. Ask a question and it answers using that context: "What was the name of the company they just mentioned?" or "Can you explain that concept they glossed over?" The AI does the research so you don't have to pause and look it up. Responses stream back and are read aloud through your headphones, or get saved as notes you can review later.
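A minimal sketch of that loop, assuming a rolling window of transcript segments passed as context with each question; ask_llm() is a hypothetical stand-in for whatever model call the app actually makes.

```python
# Sketch of podcast mode: keep recent transcript segments and answer
# questions against that context. ask_llm() is a placeholder, not a real API.
from collections import deque

transcript: deque[str] = deque(maxlen=200)   # most recent segments only

def ask_llm(prompt: str) -> str:
    # Placeholder for a real model call.
    return "(model answer)"

def on_transcribed_segment(text: str) -> None:
    transcript.append(text)

def answer_question(question: str) -> str:
    context = " ".join(transcript)
    prompt = (f"Recent podcast transcript:\n{context}\n\n"
              f"Using only that context, answer: {question}")
    return ask_llm(prompt)
```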

In notes mode, the AI becomes a research assistant for your own ideas. Ask "what have I thought about pricing?" and it finds the relevant notes, even if you never used the word "pricing" in any of them. It understands what you meant, not just the exact words you used. The answer pulls from what you've already captured, and you can save that response too, building on your own thinking over time.
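One common way to get that kind of keyword-free matching is embedding search: embed each note once, embed the question, and rank by cosine similarity. That approach is an assumption about Mementi, and embed() below is a toy placeholder for a real embedding model.

```python
# Sketch of embedding-based retrieval over notes. embed() is a toy
# placeholder (character-trigram hashing); a real version would call an
# embedding model. The approach itself is an assumption, not Mementi's code.
import math

def embed(text: str) -> list[float]:
    vec = [0.0] * 64
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3].lower()) % 64] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def search_notes(question: str, notes: list[str], k: int = 3) -> list[str]:
    q = embed(question)
    return sorted(notes, key=lambda n: cosine(q, embed(n)), reverse=True)[:k]

print(search_notes("what have I thought about pricing?",
                   ["Charge per seat, not per message.",
                    "Podcast idea: interview indie founders.",
                    "Free tier should cap at 50 notes."]))
```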

Every interaction can be hands-free. Speak your question, hear the answer. Save it or dismiss it with a swipe.

The point is this: capturing ideas should be effortless enough that you actually do it. And once captured, those ideas should be easy to find, build on, and use. Two modes. Voice in, voice out. An AI that helps you think.