Your Life Is Not Training Data
What data sovereignty actually means in 2026
Part 4 of the series: Building Linubra
Most AI products are honest about what they are, if you read carefully enough.
The terms of service tell you your inputs may be used to improve the model. The privacy policy explains your data lives on shared infrastructure in a jurisdiction you didn’t pick. The marketing page says “private by design” and means “we use HTTPS.” None of this is a lie. It’s just buried under enough language that most people never get there.
For a tool that stores your grocery list or your movie preferences, this bargain is fine. The data is low stakes. The convenience is high. It’s a fair trade.
A reasoning memory engine is not that kind of tool.
A Reasoning Memory Engine Is Not a Notes App
Linubra captures meetings where budget commitments were made. Medical observations about your children. Injury logs. Family conflicts. Strategic decisions about your company. The names, relationships, and private context behind the people you trust most.
That’s not a notes app. It’s a running record of your cognitive life — the decisions you made, the patterns in your health, the commitments other people made to you, the emotional shape of the conversations that mattered.
A tool of that sensitivity can’t run on the standard AI-industry bargain. Not because the bargain is dishonest (it usually isn’t), but because the data it would consume is categorically different from the data most AI products handle. If your reasoning memory engine is also somebody’s training corpus, you don’t have a reasoning memory engine. You have a data-harvesting product with a useful interface.
That distinction is the one every architectural decision in this project has to pass through.
Four Commitments, Four Architectural Choices
Sovereignty isn’t a marketing phrase. It’s a short list of things you commit to and then build around. Ours is four items long.
Your data trains nothing. Linubra processes inputs through Gemini on Vertex AI under a contract that prohibits using customer data for model training. Your memories aren’t federated into a shared model. Your patterns don’t inform outputs served to other users. The intelligence the system builds belongs to your graph, not to a foundation model somewhere.
Your data lives in dedicated storage. Every user’s data sits in its own Google Cloud Storage bucket, per environment. No shared storage layer — which means no single misconfiguration of shared infrastructure can expose one user’s data to another. This is more expensive to run than shared storage. It’s the baseline.
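To make the isolation concrete, here is a minimal sketch of what a per-user, per-environment bucket naming scheme can look like. The function name, prefix, and environment labels are illustrative assumptions, not Linubra’s actual implementation:

```typescript
// Illustrative sketch only — names and scheme are hypothetical.
// Bucket names must be globally unique and lowercase, so a stable
// prefix + environment + sanitised user ID gives each user their
// own bucket in each environment.
type Env = "dev" | "staging" | "prod";

function userBucketName(env: Env, userId: string): string {
  // Replace anything outside [a-z0-9-] so the ID is bucket-safe.
  const safeId = userId.toLowerCase().replace(/[^a-z0-9-]/g, "-");
  return `linubra-${env}-user-${safeId}`;
}
```

The point of a scheme like this is that user separation is structural: access policy is attached per bucket, so granting one user’s credentials access to another user’s bucket requires an explicit act, not an accidental query.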
Authentication is built for the actual threat model. The web app uses HttpOnly cookies for session management — not localStorage, not URL-embedded tokens. HttpOnly cookies can’t be read by JavaScript, which means an XSS bug can’t exfiltrate them. For a tool handling this kind of data, this is the right call. It’s also harder to implement. We took the harder path on purpose.
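The difference between the two approaches is visible in the cookie header itself. This sketch (the function and token values are hypothetical, not Linubra’s code) builds a Set-Cookie header with the attributes that make the session inaccessible to page scripts:

```typescript
// Illustrative sketch: the attributes on a hardened session cookie.
function sessionCookieHeader(token: string, maxAgeSeconds: number): string {
  return [
    `session=${token}`,
    "HttpOnly",        // invisible to document.cookie — XSS can't read it
    "Secure",          // only ever sent over HTTPS
    "SameSite=Strict", // withheld on cross-site requests (CSRF mitigation)
    "Path=/",
    `Max-Age=${maxAgeSeconds}`,
  ].join("; ");
}
```

A token in localStorage is readable by any script running on the page, so a single XSS bug hands an attacker the session. An HttpOnly cookie is sent by the browser but never exposed to JavaScript at all, which is why it’s the harder-but-right path for data at this sensitivity.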
We do not sell access to your data. No ads. No data brokerage. No “anonymised aggregate insights” product that turns out to be less anonymised than claimed. Linubra is a subscription. The revenue model is aligned with user value, not data volume.
What We Deliberately Didn’t Do
There’s a version of this product that would be cheaper and easier to build. Shared storage across all users. Session tokens in localStorage so we could share auth state more simply. A deal with a model vendor that traded lower API costs for participation in “model improvement programmes”. A line of business monetising the behavioural patterns in the knowledge graph as aggregated insights for B2B buyers.
We haven’t done any of these things, and we aren’t going to.
Not because we’re naive about business models. Because the core promise — a reasoning memory engine should function as an autonomous chief of staff for your entire life — is only credible if the vault is actually closed. You can’t build a tool people trust with their most sensitive data if the data isn’t genuinely protected. The product collapses without the trust. The trust requires the architecture. The architecture requires the constraints.
Data sovereignty isn’t a feature. It’s the thing the whole rest of the product is built on top of.
What This Means If You Use It
Your inputs are processed to build your knowledge graph. That graph is yours. Not shared. Not used for training. Not monetised.
If you stop using Linubra, your data can be exported and permanently deleted. The export tooling is being built now and ships before general availability. Portability is part of sovereignty — owning your data means being able to leave with it.
If you have specific compliance requirements (GDPR, HIPAA, anything else), get in touch. I’ll tell you exactly what we can and can’t support, without ambiguity, and without the “reach out to our solutions team” runaround.
Your life is not training data. That isn’t a slogan. It’s the constraint the whole system is built around — and it’s why the knowledge graph builds itself from what you say instead of demanding your constant administrative attention.