AI coding tools and code privacy
Nov 15, 2025

We built Ziva, an AI agent for the Godot game engine.
TL;DR
- Your code only lives in memory. We send it to Google Gemini to generate output, then throw it away.
- When you reload a chat from history, it only loads the conversation and tool history; the code is filled in from the current project state.
- Even if our database were hacked, an attacker would only find chat conversations from the last 90 days, not your actual code.
Our Threat Model
We built Ziva assuming:
- Our database and logs will eventually get breached (it happens to everyone)
- Google sees everything we send them (they’re the LLM provider)
- Your unreleased game code is sensitive IP
So we designed around that: Don’t store sensitive stuff.
Architecture
When you use Ziva to automate game development, here’s what happens:
- Plugin sends your message + file contents to our server
- Server forwards everything to Google Gemini
- Gemini streams response back
- Only the conversation (questions/answers/tool results) gets stored
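To make that last step possible, file contents travel as messages tagged as context. The sketch below is illustrative rather than our actual plugin code; the metadata field matches the filter shown next, but the message shape and helper name are assumptions:

// Sketch: how the plugin could attach file contents as non-persisted context messages
type ChatMessage = {
  role: "user" | "assistant" | "tool";
  content: string;
  metadata?: { isContextMessage?: boolean };
};

function buildRequest(userPrompt: string, openFiles: Record<string, string>): ChatMessage[] {
  // File contents are marked as context so the server drops them before saving
  const contextMessages: ChatMessage[] = Object.entries(openFiles).map(
    ([path, source]) => ({
      role: "user",
      content: `File: ${path}\n${source}`,
      metadata: { isContextMessage: true },
    }),
  );
  // Only the plain prompt (and the model's reply) ends up in the database
  return [...contextMessages, { role: "user", content: userPrompt }];
}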
The core filtering logic is kept simple and validated with E2E tests:
// Filter out context messages - they should not be persisted to DB for privacy
const messagesToSave = messages.filter(
  (msg) => !msg.metadata?.isContextMessage,
);

We Don’t Want To Store It
We run a cron job that deletes chats with no activity for 90+ days. Not because we’re nice, but because old data is a liability. Less data = less to leak.
// src/lib/cleanup-old-chats.ts
// Runs on a schedule; `prisma` is the shared Prisma client used across the server
const ninetyDaysAgo = new Date(Date.now() - 90 * 24 * 60 * 60 * 1000);

// Delete every chat with no activity for 90+ days
const result = await prisma.chat.deleteMany({
  where: {
    updatedAt: { lt: ninetyDaysAgo },
  },
});

Safety Built-in
All our server code uses a custom monitorlog method with basic heuristics to detect code leakage. Alerts are fired straight to the on-call engineer to investigate.
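For the curious, here is a minimal sketch of the idea. The function name matches what we use, but the patterns and the alerting hook are illustrative, not our production rules:

// Sketch: log wrapper that flags payloads which look like source code
const CODE_LEAK_PATTERNS = [
  /\bfunc\s+\w+\s*\(/, // GDScript function definitions
  /\bextends\s+\w+/,   // class inheritance
  /res:\/\//,          // Godot resource paths
];

function monitorlog(message: string, payload: string): void {
  const looksLikeCode = CODE_LEAK_PATTERNS.some((pattern) => pattern.test(payload));
  if (looksLikeCode) {
    // Illustrative alert hook; in reality this pages the on-call
    alertOncall(`Possible code leakage in log: ${message}`);
    return; // never write the suspicious payload to the log store
  }
  console.log(message, payload);
}

// Placeholder for whatever paging integration is actually wired up
function alertOncall(reason: string): void {
  console.error("[ALERT]", reason);
}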
We’ve also designed the schema so every chat has a userId foreign key. It would be hard to accidentally write code exposing other users’ chats. Account deletions cascade delete everything.
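Concretely, that looks something like the sketch below. Model and field names are assumptions, `prisma` is the shared client from above, and the cascade itself is declared on the relation in the Prisma schema with onDelete: Cascade:

// Sketch: chats can only be fetched through the owning user's id
async function getChatsForUser(userId: string) {
  return prisma.chat.findMany({
    where: { userId }, // the foreign key makes cross-user reads an explicit mistake
    orderBy: { updatedAt: "desc" },
  });
}

// Sketch: deleting the account removes the user row; their chats
// are removed by the cascade rule on the relation
async function deleteAccount(userId: string) {
  await prisma.user.delete({ where: { id: userId } });
}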
Remaining Gaps & Future Improvements
Some areas we could realistically improve:
Your conversation data is not encrypted at rest. If someone gets database access, they can read your chats. Your code context isn’t there, but the conversation history is.
Tool call output is stored. This is a tradeoff: storing less would mean less to leak, but conversation restoration would be far less useful.
Google sees your code. That’s how LLMs work. We send them your code, they send back responses. We could use a local model, but we have other investment areas that are a priority right now.
Want better privacy features? Got concerns? Hit us up:
- Feature requests: GitHub Issues
- Community: Discord Server
- Security: minimum $500 bounty for a customer code leak