Agents are different for personalization

29 Dec 2025

3 minute read

In the past year or so, AI models have effectively learned to take actions. Models can search, read what they find, reason about it, and search again based on what they learned. This loop is what makes tools like Claude Code useful for navigating large codebases.

This has transformed how many of us, myself included, write code. But it hasn’t yet changed personalization, i.e. how models understand what we care about, our background, and our intentions beyond a single chat. For most chatbot users, personalization is still limited to a system prompt: a few sentences of static context. Memory features don’t seem that effective at actually learning things about the user.

Part of the problem is the models themselves: adaptivity and continual learning remain quite fundamental limitations. But I think an understated, and much easier to address, bottleneck is data. Models are not being given artifacts of who the user is and what their intentions are. Meanwhile, many of us have lots of information that could help models understand us: our writing, documents, journals, the things we’ve read and liked, or just personal websites.

Instead of a system prompt, we can give models a searchable artifact of our writing, notes, and reading history: something agents can explore when a question might benefit from context, and write to when they want to remember things for later.

Over the break, I built a very simple tool to do this. It’s called whorl, and you can install it here.

whorl is a local server that holds any text you give it (journal entries, website posts, documents, reading notes) and exposes an MCP interface that lets models search and query it. Point it at a folder or upload files.
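To make the shape of this concrete, here’s a minimal sketch of such a server using the official MCP Python SDK. The folder path, tool name, and brute-force substring search are my assumptions for illustration, not whorl’s actual implementation.

```python
# whorl_sketch.py: a minimal sketch of a whorl-like MCP server, using the
# official MCP Python SDK ("pip install mcp"). Folder layout, tool name,
# and the naive substring search are illustrative assumptions.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

NOTES_DIR = Path.home() / "notes"  # the folder you point it at (assumed)

mcp = FastMCP("whorl-sketch")

@mcp.tool()
def search(query: str, max_results: int = 5) -> str:
    """Return snippets from notes whose text contains the query."""
    hits: list[str] = []
    for path in NOTES_DIR.rglob("*.md"):  # plain-text notes, for simplicity
        text = path.read_text(errors="ignore")
        i = text.lower().find(query.lower())
        if i != -1:
            snippet = text[max(0, i - 100) : i + 200].strip()
            hits.append(f"{path.name}: ...{snippet}...")
        if len(hits) >= max_results:
            break
    return "\n\n".join(hits) or "no matches"

if __name__ == "__main__":
    mcp.run()  # stdio transport, so MCP clients like Claude Code can attach
```

Once registered with a client (for example via `claude mcp add` in Claude Code), the model can call `search` whenever a question might benefit from your context.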

I gave it my journals, website, and miscellaneous docs, and started using Claude Code with the whorl MCP. Its responses were much more personalized to my actual context and experiences.

First I asked it:

do a deep investigation of this personal knowledge base, and make a world model of the user. this is a text that another model could be prompted with, and would lead them to interacting with the user in a way the user would enjoy more

It ran a bunch of bash and search calls, thought for a bit, and produced a detailed profile of me, linked here. My guess is that its quality beats many low-effort system prompts.

I’m an ML researcher, so I asked it to recommend papers and explain the motivation behind each recommendation. Many of these I’d already read, but it made some interesting suggestions, well above my usual experience with these kinds of prompts. See here.

These prompts are where the effect of personalization is clearest, but the tool is also useful in general chat conversations, letting the model search for details that might be relevant.

The model can also use the MCP to modify and correct the artifact it’s given, optimizing for later interactions, especially if you host a “user guide” there like the one I linked. I’ve been thinking about this notion of user world models for personalization for a while now, and I think the first step is exactly this: good, intentional data and structure that is easy for agents to read, add to, and use to understand you.
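Extending the sketch above, the write path can be a single additional tool. Again hedged: the tool name and the append-only memory file are assumptions, not whorl’s actual API.

```python
# Write-back tool for the same sketch server: the agent appends corrections
# or new facts to a memory file inside the knowledge base, where later
# sessions can find them via search. Names are illustrative assumptions.
from datetime import date

@mcp.tool()
def remember(note: str) -> str:
    """Append a timestamped note to the user's memory file."""
    memory = NOTES_DIR / "agent-memory.md"
    with memory.open("a") as f:
        f.write(f"- {date.today().isoformat()}: {note}\n")
    return f"saved to {memory.name}"
```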

This is a very early-stage exploration, and I’m not sure what the best use cases for this kind of personalization are, but I want to find out! If you’ve already spent a lot of time producing content, give that content to your agents: it sets a solid baseline for what any personalization method should be able to do, and it’s just straightforwardly useful.



