Zonko Graveyard

Last updated: 18 February 2026

Experiments we built, learned from, and consciously stopped pursuing.

Our philosophy: if the ecosystem is moving fast, we want to stay close to the frontier, know who is building what, and understand what is changing in real time. We then bring those insights together into holistic products instead of optimizing for one isolated feature. We do not always build to maximize users immediately. Often, we build because we are genuinely curious and want to understand a space deeply.

1. Agentic AI image editing tool

What we built: A system that could understand a prompt and auto-compose contextually accurate visuals, combining real-world references with generated content. Think: "Generate Ganpati Bappa celebrations on real Mumbai roads" or "Place a company banner on Hiranandani Gardens using real building context."

Why we built it: To get our hands dirty with image and video generation models across open-source and closed-source stacks. We wanted to understand how to route context on the fly.

What we actually learned: Long, detailed prompts often perform worse than short, generic ones. The bigger surprise was that the tool was capable, but without clear use cases surfaced, most users couldn't figure out what to do with it. This became one of our core principles: in consumer AI, discovery of use cases is a bigger problem than capability itself. Just being powerful isn't enough. You have to show people what's possible.

Where it went: The technical depth from this made building our on-brand image generator (below) dramatically faster.

2. Poke.com-style assistant

What we built: An AI assistant focused on tool-calling flows and persistent memory.

Why we built it: To get real-world experience with tool calling, memory systems, and the challenges you only see in production. We also used it to experiment with different agent frameworks.

What we actually learned: A deep dive into tool calling: where it works and where it breaks for different use cases, and how different models handle the exact same use case in very different ways.
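The per-model divergence shows up even at the payload level. Here is a minimal sketch of a model-agnostic dispatch layer; the `get_weather` tool, the payload shapes, and the `dispatch` helper are illustrative assumptions, not our production code.

```python
import json

def get_weather(city: str) -> str:
    # Stub tool implementation, for illustration only.
    return f"28C and humid in {city}"

# One tool registry shared across providers.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Execute a provider-returned tool call.

    Assumes an OpenAI-style payload: {"name": ..., "arguments": "<json string>"}.
    Some providers instead return arguments as a parsed dict, or nest the call
    differently; this normalization step is exactly the kind of per-model
    divergence we kept running into.
    """
    name = tool_call["name"]
    args = tool_call["arguments"]
    if isinstance(args, str):  # arguments serialized as JSON text
        args = json.loads(args)
    return TOOLS[name](**args)

print(dispatch({"name": "get_weather", "arguments": '{"city": "Mumbai"}'}))
```

The normalization in `dispatch` is small here, but it is where most of the cross-model breakage concentrated in practice.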

Where it went: This is why we were able to ship memory and tool calling on Howdy so quickly. The hard lessons were already paid for.

3. On-brand asset generator

What we built: A tool that generates branded creatives from plain prompts by inferring a brand's visual and textual language. "Create a Republic Day visual for Zomato", and it just works.

Why it is in the graveyard: A few people tried it and liked it, but we chose to drop it.

4. StudyAnything.ai

This was the hardest one to kill.

What we built: You give it something you want to learn; it understands intent, generates a course so you can go deep, and lets you chat and learn with a voice teacher.

Why we built it: We keep learning different things inside the team. We wanted to build the tool we wished existed.

Why it is in the graveyard: People who tried it liked it, but repeat usage stayed low. We also saw an upstream problem: most users froze on the blank "what do you want to learn?" prompt. It reinforced that discovery often matters more than capability.

Where it went: Live @ studyanything.ai.

5. AI-native dating matchmaking MVP

What we built: An MVP where AI understands people and preferences to do meaningfully better matchmaking.

Why it is in the graveyard: The concept worked, but the market size didn't feel attractive enough for a company-level bet.

6. AI influencer video pipeline

What we built: A tool where creating a character and generating videos for that character was a one-click workflow. Built purely for internal use, to solve our own workflow problem of making video content at scale without manual production.

Frontier explorations

Alongside product bets, we regularly run focused sprints to build deep intuition across AI modalities. These are not products. They are deliberate investments in understanding what is possible, what is changing, and where the real levers are.

Voice stack: End-to-end voice pipelines, tool calling inside voice flows, hosting tradeoffs, and latency and cost optimization across the full stack (including 100x+ cost improvements from self-hosting).

Vibe coding platform: Gave coding agents a VM and full freedom. Explored Claude Agents SDK behavior with real cloud infrastructure.

Music generation: Evaluated open-source music models in real workflows. Mapped what is possible and what is still broken.

Generative UI SDK: Built a developer SDK for generating UI from structured model outputs. Paused when Vercel launched JSON Renderer.

Two other ideas we're excited about, but not actively working on right now:

  • AI-native cloud: Make building AI products radically simpler, spanning context management, memory, tool calling, and model routing alongside compute, VMs, and cloud infrastructure primitives.
  • AI-native hedge fund: A fund that is built from day one around AI-native workflows for research, execution, and iteration.

Why this page exists

Sometimes an experiment turns into a product. Sometimes it turns into a capability we use later. Sometimes it just turns into conviction about what's real and what isn't.

Three principles we've learned the hard way:

Discovery > Capability. In consumer AI, it doesn't matter how powerful your product is if people can't figure out what to do with it. Solving discovery is harder and more important than adding features.

Best model ≠ best strategy. In consumer apps where monetization takes time, cost structure is existential. The difference between the best closed-source model and a well-deployed open-source alternative can be 100-300x in cost. That can make or break a product.
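The arithmetic behind that claim is simple. The prices below are placeholder assumptions for illustration, not real quotes from any provider:

```python
# Hypothetical per-token prices ($ per 1M output tokens); both are assumptions.
closed_price_per_1m = 15.00  # frontier closed-source API
open_price_per_1m = 0.10     # self-hosted open model at good utilization

ratio = closed_price_per_1m / open_price_per_1m
print(f"Cost ratio: {ratio:.0f}x")  # prints "Cost ratio: 150x"

# At consumer scale the absolute gap is what kills you:
monthly_tokens_m = 500  # 500M output tokens/month, also an assumption
gap = (closed_price_per_1m - open_price_per_1m) * monthly_tokens_m
print(f"Monthly cost gap: ${gap:,.0f}")  # prints "Monthly cost gap: $7,450"
```

With plausible numbers the ratio lands squarely in the 100-300x range, and the absolute monthly gap is the difference between a viable free tier and an impossible one.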

Nothing compounds if you don't let it. Every experiment here deposited a technical capability: voice, tool calling, generative UI, cost optimization, agent architecture. We consciously design experiments so the learnings flow into what we're building next. And we kill things fast; most experiments on this page lasted weeks (or less), not months.

Currently experimenting with

  • Stealth image generation and companion app for India (30,000+ users have tried it).
  • An AI-native assistant and social app (productivity, plus staying closer to the 25 most important people in your life).

We want to automate as much of the company as possible: making creators' videos, running product ops, analytics, tech, fixing the box, shipping new features, and designing.

Built by the Zonko team since mid-December 2025.

We're hiring. If this is how you want to work, join us.