Linubra
Published in The Linubra Journal

Why We Built Linubra

Most second-brain tools make you do all the work. We built one that thinks.

Part 1 of the series: Building Linubra

Patrick Lehmann · 5 min read
[Illustration: raw inputs — audio waveforms, text snippets, images — flowing through a reasoning layer into a structured knowledge graph]

Every knowledge worker has lived the same five minutes. A conversation three weeks ago — lunch, a phone call, a hallway chat — and now you need one specific thing out of it. A name. A number. A commitment somebody made.

You open your notes. Nothing. You search your email. Close, but not quite. You scroll your calendar trying to reconstruct the day. Twenty minutes in, you either find a fragment or give up.

That’s the problem. Not “we don’t have enough apps.” Not “our search is too slow.” The problem is that the thing you actually want — what somebody said to you, three weeks ago, in context — lives in your head, and your head is the only place it ever lived.


Two Kinds of Tool. Both Broken.

Walk into this problem from one direction and you land on manual note-taking. Obsidian, Notion, Roam, the whole “second brain” school. Every insight typed by hand. Every connection linked by hand. Every tag, every backlink, every folder, maintained by hand. For the rare disciplined note-taker this works. For the rest of us the system stays empty — not from laziness, but from the sheer weight of the maintenance tax.

Walk into it from the opposite direction and you land on passive recording. Rewind, Limitless, the ambient capture tools. They log everything — screen, audio, transcripts — and promise you can search it later. And you can. You get keyword search over transcripts. You get timestamped recordings. What you don’t get is an answer.

Ask a passive recorder “What did Mark say about the Q3 budget?” and it returns a transcript snippet with those words in it. Ask it “How has Mark’s position on the Q3 budget shifted over the last month?” and the room goes quiet. It captured the data. It didn’t build the knowledge.

Is there really a difference between losing a memory and being unable to find it? On the day you need it, no.


What a Reasoning Memory Engine Actually Does

Linubra sits between the two. Capture is effortless — speak into your phone, paste a link, jot a note. No manual filing. No tags. No backlinks.

But unlike a passive recorder, it doesn’t stop at storage. When a memory lands, the engine runs five things against it:

  1. Extract. People mentioned, action items, decisions, sentiment, location, time. Structured JSON, not a summary.
  2. Resolve entities. “Dr. Schmidt”, “Patricia”, and “the neuroscientist from Berlin” all point to the same person. The graph knows this.
  3. Embed semantically. The memory lands in a vector space where conceptually similar memories cluster, even if the words don’t match.
  4. Detect contradictions. A new claim that conflicts with an older one gets flagged, not silently overwritten.
  5. Connect. People to events. Events to projects. Projects to commitments. Commitments to deadlines. A graph, not a pile.
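The five steps above can be sketched as a tiny ingestion pipeline. Everything here is illustrative, not Linubra's actual code: the capitalised-word "extractor" and the hand-filled alias table stand in for model-backed components, and steps 3 and 4 (embedding, contradiction detection) are omitted because they would call an embedding model and a claim comparator.

```python
from dataclasses import dataclass, field

# Toy ingestion pipeline. All names are illustrative, not Linubra's code.
# Steps 3 (embedding) and 4 (contradiction detection) are stubbed out.

def extract_entities(text: str) -> list[str]:
    # Step 1 (toy): treat capitalised words as entity mentions.
    # A real extractor would be an LLM returning structured JSON.
    return [w.strip(".,") for w in text.split() if w[:1].isupper()]

@dataclass
class Memory:
    text: str
    entities: list[str] = field(default_factory=list)

@dataclass
class Graph:
    aliases: dict[str, str] = field(default_factory=dict)       # alias -> canonical name
    edges: list[tuple[str, str]] = field(default_factory=list)  # entity co-occurrence
    memories: list[Memory] = field(default_factory=list)

    def resolve(self, mention: str) -> str:
        # Step 2: map known aliases to one canonical node.
        return self.aliases.get(mention, mention)

    def ingest(self, raw: str) -> Memory:
        m = Memory(text=raw)
        m.entities = [self.resolve(e) for e in extract_entities(raw)]  # steps 1 + 2
        # Step 5: connect co-mentioned entities, so memories accrete
        # into a graph instead of a pile.
        for a in m.entities:
            for b in m.entities:
                if a < b:
                    self.edges.append((a, b))
        self.memories.append(m)
        return m

g = Graph()
g.aliases["Schmidt"] = "Patricia Schmidt"
m = g.ingest("Lunch with Schmidt about the Aurora budget")
```

Even in this toy, the alias table is what makes "Schmidt" and "Patricia Schmidt" one node; the real system has to learn those aliases from context rather than from a hand-filled dict.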

All five run in the background while you go about your day. What comes out the other side isn’t a transcript archive. It’s a graph of your world that grows more useful every time you add to it.

The payoff is the questions it can answer. Not “find me the file that contains X”, which is what search does. Questions that need synthesis — brief me on Project Aurora before tomorrow’s meeting, when did Sarah mention her birthday, what did I commit to the engineering team this quarter. A 2025 paper in Nature Scientific Reports found that knowledge-graph-augmented retrieval improves factual accuracy by 13.6% over plain vector RAG. That number is the reason we build a graph instead of flat-searching embeddings.
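The gap between lookup and synthesis is easiest to see in code. Below is a minimal sketch with toy two-dimensional vectors and invented memories: flat vector search returns the single nearest snippet and stops, while graph-augmented retrieval expands from that hit to every memory sharing an entity with it.

```python
import math

def cosine(a: tuple, b: tuple) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy store: (text, toy 2-d embedding, entities linked in the graph).
# Texts, vectors, and names are invented for illustration.
memories = [
    ("Sarah pushed the Aurora deadline to May", (0.9, 0.1), {"Sarah", "Aurora"}),
    ("Budget review trimmed the Aurora scope",  (0.8, 0.2), {"Aurora"}),
    ("Sarah mentioned her birthday is in June", (0.1, 0.9), {"Sarah"}),
]

def flat_search(query_vec, k=1):
    # Plain vector RAG: return the k nearest snippets and stop.
    return sorted(memories, key=lambda m: -cosine(query_vec, m[1]))[:k]

def graph_search(query_vec):
    # Graph-augmented: take the top vector hit, then expand to every
    # memory that shares an entity with it.
    seed = flat_search(query_vec, k=1)[0]
    return [m for m in memories if m[2] & seed[2]]

q = (0.85, 0.15)  # toy embedding of "brief me on Project Aurora"
```

Note that the expansion also drags in the birthday memory through the shared "Sarah" node; real retrieval would rank the neighbourhood rather than return it wholesale, but the point stands: a briefing needs the neighbourhood, not the single nearest snippet.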

And because the graph crosses domains, it can catch things siloed tools can’t. A pattern between work stress and running injury risk. A deadline that moved three times and nobody wrote down. The story of one of those, the knee and the board meeting, comes later in this series.


The Bet on Long-Context

None of this was possible two years ago. The architecture only works because a model can now hold hundreds of memories in a single context window and reason across them without losing the thread.

Specifically, we bet on Gemini 3 Pro. Four capabilities made the call:

  • Audio in directly, without a separate transcription step.
  • Entity resolution across memories separated by weeks.
  • Structured extraction from messy, half-formed speech.
  • Consistency across a context full of real memories, not benchmark needles.
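The structured-extraction capability can be sketched as a prompt plus strict validation. In the sketch below, `call_model` is a stub standing in for a real model call, and the prompt wording and field names are my assumptions for illustration, not Linubra's actual schema or the Gemini SDK.

```python
import json

# Sketch of the structured-extraction step. The prompt wording, field
# names, and call_model stub are illustrative assumptions only.

EXTRACTION_PROMPT = (
    'Return JSON with keys "people", "action_items", "decisions" '
    "for the note below. Lists of strings only.\n\nNote: {note}"
)

def call_model(prompt: str) -> str:
    # Placeholder for a real LLM call; stubbed so the sketch runs.
    return json.dumps({
        "people": ["Mark"],
        "action_items": ["send Mark the Q3 numbers"],
        "decisions": ["freeze the Q3 budget"],
    })

def extract(note: str) -> dict:
    raw = call_model(EXTRACTION_PROMPT.format(note=note))
    data = json.loads(raw)
    # Validate before anything enters the graph: malformed model output
    # must fail loudly, never silently corrupt stored knowledge.
    for key in ("people", "action_items", "decisions"):
        if not isinstance(data.get(key), list):
            raise ValueError(f"missing or malformed field: {key}")
    return data

result = extract("uh so Mark said let's just freeze Q3, I'll send him the numbers")
```

The validation step is the part that matters: half-formed speech produces half-formed model output often enough that the schema check, not the prompt, is what keeps the graph clean.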

The bet comes with costs we’re not going to pretend aren’t real. It’s compute-intensive. Entity resolution across long time horizons still fails in ways we’re actively fixing. And the privacy question — your memories, in someone else’s cloud — is the one we take most seriously, which is why your data never trains the model and why we wrote a separate post on what data sovereignty actually means when the data is this personal.


What Comes Next

This is the first post in a series about how the system works under the hood. The ones already written are linked at the end of this post.

Still coming: the knowledge graph schema (PostgreSQL, pgvector, property graph vs triple store, and why we picked what we picked), and how entity resolution decides that “Mark” and “Marcus from accounting” are the same person without asking you.

If you’re building in this space, or you’re tired of losing important details to the gap between your meetings and your notes, I’d like to hear from you.


Written by

Patrick Lehmann

Software Architect & AI Engineer

Founder of Linubra. Building tools that capture reality and retrieve wisdom. Software architect with a passion for AI-powered knowledge systems and the intersection of memory science and technology.

More from Patrick Lehmann


The Hidden Cost of Your Second Brain

Every note-taking system carries a hidden maintenance tax. Most people never add it up. When you do, the number is roughly half a year of focused work every five years — spent not thinking, not creating, not deciding. Spent filing.

Patrick Lehmann · 4 min read

Your Voice Is the Interface

Every voice app on the market treats speech as a transcription problem. It isn't. It's a reasoning problem with a convenient input device attached to it.

Patrick Lehmann · 7 min read

The Knee, the Board Meeting, and a Pattern

A founder's board-meeting stress caused a running injury three months later. Every data point was captured. None of them lived in the same place, so nobody — not the founder, not the physiotherapist, not any of his apps — ever saw the pattern.

Patrick Lehmann · 5 min read

Your Voice Notes Now Have a To-Do List

Everything you said you'd do, in one place. Grouped by when it's due, filtered by who it's for, one tap away from the voice note it came from.

Patrick Lehmann · 4 min read

Recommended reading


Your Life Is Not Training Data

The standard 'free in exchange for your data' bargain works fine for a notes app. It doesn't work for a tool that remembers your meetings, your injuries, your finances, and the people you trust. Here's what the architecture looks like when you take that seriously.

Patrick Lehmann · 4 min read
