Hi, I'm Clint, one of the co-authors of this paper.
I'd like to quickly summarize what is different about our approach and why it matters.
Our work was inspired by brilliant research done at MIT CSAIL on "Recursive Language Models" (RLMs). One of the controversies has been whether these models are just a formalization of what agents like Claude Code already do vs. whether they bring new capabilities to the table.
By outperforming Claude on the major long-context benchmark, we provide a strong signal that something fundamentally new is happening. (In other words, it's not "just Claude Code" because it demonstrably outperforms Claude Code in the long-context regime.)
Where our contribution, LCM, differs from RLMs is how we handle recursion. RLMs use "symbolic recursion" -- i.e., they have an LLM write a script to recursively call itself in order to manipulate the context, which is stored in a REPL. This provides maximum flexibility... but it often goes wrong, since the LLM may write imperfect scripts.
LCM attempts to decompose the recursion from RLMs into deterministic primitives so that the control flow can be managed by an engine rather than left to the whims of the LLM. In practice, this means we replace bespoke scripts with two mechanisms:
(1) A DAG-based context management system that works like paged virtual memory, except for managing conversations and files; and
(2) Operator-level recursion, like "Map" for LLMs, which lets one tool call process thousands of tasks.
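To give a rough sense of the shape of these two mechanisms, here is a toy sketch (illustrative only, not our actual API; `ContextNode`, `llm_map`, and `call_llm` are stand-in names):

```python
from dataclasses import dataclass, field
from concurrent.futures import ThreadPoolExecutor

@dataclass
class ContextNode:
    """A node in the context DAG: a summary 'page' that points at full text."""
    summary: str       # what the main model sees in its active window
    full_text: str     # ground truth, kept out of the active window
    children: list = field(default_factory=list)

def llm_map(prompt: str, items: list[str], call_llm) -> list[str]:
    """Operator-level recursion: one tool call fans out over many items.
    The engine, not the model, owns the loop and the parallelism."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(lambda item: call_llm(f"{prompt}\n\n{item}"), items))
```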
An analogy we draw in the paper is the evolution from GO-TO statements (of Dijkstra's "Considered Harmful" fame) to structured programming. RLMs are maximally expressive, but all of that power comes with the risk of things going awry. We have built a more mechanistic system, which can provide stronger guarantees when deployed in production with today's models.
Happy to answer any questions! Thanks for taking a look at the paper!
I've echoed the sentiment here on HN (and elsewhere) that these kinds of mechanisms seem to be a pathway to extending context longer and longer and longer and I wish I could toy around with this technology right now (can I?). I'm so excited!!
Your work is the shoulders-built-on-shoulders upon which other giants shall keep on building. Thank you so much.
This looks super useful! And it’s intellectually appealing to think that the LLM will have the ability to think back precisely and we can rely on DAG tooling to reason about and keep track of history (and correct history).
Have you considered making an openclaw plugin/PR for it? I understand you have your own coding CLI tool, but this doesn't look so hard that it couldn't be implemented elsewhere.
Yes, that is actually the next thing we are shipping!
We have heard from a ton of OpenClaw users that the biggest barrier to them getting everything they want out of their agents is that memory is not a solved problem.
LCM could be a great solution to that. Stay tuned -- will ship it ASAP.
Riffing on this a little, there are a few things that would be useful:
1 - global namespace - for the gateway agent/coordinator - would make inspecting results of subagent tasks much safer and more efficient, and bring all the benefits of precision across compaction boundaries for the main chat thread. I could see giving the subagents access to it, or just prompting them fresh and storing results in the global memory - probably the second is better.
2 - permissioned memory spaces - stuff that a given subagent should know without giving it global memory access. Then a gateway could mark some stuff 'available' as part of prompting.
This would be a super useful set of primitives - from reading the paper, I think you could do this relatively cheaply, maybe with a tagging system for branches/nodes in the DAG. openclaw already keeps some sort of track of what subagents should have access to in the form of skills, but I haven't looked into the actual permissions architecture.
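Something roughly like this is what I have in mind - purely a sketch of the tagging idea, not a claim about how LCM actually stores nodes:

```python
from dataclasses import dataclass, field

@dataclass
class TaggedNode:
    """A DAG node carrying permission tags (names here are hypothetical)."""
    content: str
    tags: set[str] = field(default_factory=set)  # e.g. {"global"} or {"subagent:research"}

def visible_to(node: TaggedNode, agent_tags: set[str]) -> bool:
    """A node is visible if it is global or the agent holds one of its tags."""
    return "global" in node.tags or bool(node.tags & agent_tags)
```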
Did somebody say 'global namespace'? I spent years working on one of those as part of Urbit... In general, I think you're right. Each conversation is an append-only log at the lowest layer, and I see no reason not to expose that fact as a global namespace, as long as permissions are handled gracefully.
Of course getting permissions to work well might be easier said than done, but I like this direction.
Cool. I agree (consistent with your GOTO analogy) that imposing structure on the model (or a human) can constrain the search space and lead to better choices given a fixed decision budget.
> deterministic primitives
Are agent-map and LLM-map the only two options you've given the model for recursive invocations? No higher-level, er, reduction operators to augment the map primitives?
Hi, I'm the other author on this paper. You've asked a good question. I had originally planned on writing an agentic_reduce operator to complement the agentic_map operator, but the more I thought about it, the more I realized I couldn't come up with a use case for it that wasn't contrived. Instead, having the main agent write scripts that perform aggregations on the result of an agentic_map or llm_map call made a lot more sense.
It's quite possible that's wrong. If so, I would write llm_reduce like this: it would spawn a sub-task for every pair of elements in the list, which would call an LLM with a prompt telling it how to combine the two elements into one. The output type of the reduce operation would need to be the same as the input type, just like in normal map/reduce. This allows for a tree of operations to be performed, where the reduction is run log(n) times, resulting in a single value.
That value should probably be loaded into the LCM database by default, rather than putting it directly into the model's context, to protect the invariant that the model should be able to string together arbitrarily long sequences of maps and reduces without filling up its own context.
I don't think this would be hard to write. It would reuse the same database and parallelism machinery that llm_map and agentic_map use.
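For concreteness, a sketch of that hypothetical llm_reduce might look like the following - not shipped code, just the pairwise tree reduction described above, with `call_llm` standing in for the machinery llm_map already uses:

```python
from concurrent.futures import ThreadPoolExecutor

def llm_reduce(combine_prompt: str, items: list[str], call_llm) -> str:
    """Pairwise tree reduction: log(n) rounds, each combining pairs in parallel."""
    while len(items) > 1:
        pairs = [(items[i], items[i + 1]) for i in range(0, len(items) - 1, 2)]
        leftover = [items[-1]] if len(items) % 2 else []  # odd element carries over
        with ThreadPoolExecutor(max_workers=8) as pool:
            combined = list(pool.map(
                lambda p: call_llm(f"{combine_prompt}\n\nA:\n{p[0]}\n\nB:\n{p[1]}"),
                pairs,
            ))
        items = combined + leftover
    return items[0]  # would be written to the LCM database, not the model's context
```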
Cool! It'll be interesting to follow your work. I've been thinking, as well, about quorum and voting systems that might benefit from some structure. The primitives you've described are great for the "do N things one time each" case, but sometimes I (and the AI) want "do one thing N times: pick the best somehow". (I mean, you can express that with map/reduce over small integers or something, but still: different flavor.) You can even bring public choice theory into it.
Do you have any resources or YouTube videos that might also help someone understand the LCM context management a bit better? I think there's something to this, but I'm having trouble wrapping my head around it. I learn well with analogies and I'm trying to really grok the concept here. If there are other ways you could explain it, it would be appreciated. Mind you, I have built my own agents from scratch, so I'm not a total novice in these areas. My agents already manage context with sub-agents and multi-layered conversational histories with RAG thrown in there. But I don't want to make wrong assumptions about your implementations and miss the nuanced important bits. Regardless, I'll try my best to reread the article and hash it out on my own. Thanks for the paper.
We don't have any other materials yet, but let's see if this lands for you. I can run you through a couple simpler versions of the system, why they don't work, and how that informs our ultimate design.
The most basic part of the system is "two layers". Layer 1 is the "ground truth" of the conversation - the whole text the user sees. Layer 2 is what the model sees, i.e., the active context window.
In a perfect world, those would be the same thing. But, as you know, context lengths aren't long enough for that, so we can't fit everything from Layer 1 into Layer 2.
So instead we keep a "pointer" to the appropriate part of Layer 1 in Layer 2. That pointer takes the form of a summary. But it's not a summary designed to contain all information. It's more like a "label" that makes sure the model knows where to look.
The naive version of the system would allow the main model to expand Layer 2 summaries by importing all of the underlying data from Layer 1. But this doesn't work well, because then you just end up re-filling the Layer 2 context window.
So instead you let the main model clone itself, the clone expands the summary in its context (and can do this for multiple summaries, transforming each into the original uncompressed text), and then the clone returns whatever information the main thread requires.
Where this system would not fully match the capabilities of RLMs is that, by writing a script that calls itself e.g. thousands of times, an RLM has the ability to make many more recursive tool calls than can fit in a context window. So we fix that using operator-level recursion, i.e., we give the LLM a tool, map, that executes arbitrary recursion, without the LLM having to write a custom script to accomplish that.
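If it helps to see the two-layer idea in code, here's a toy illustration - the names are simplified, `summarize`/`ask_clone` stand in for real model calls, and the actual system stores this in a DAG rather than flat lists:

```python
ground_truth: list[str] = []      # Layer 1: the full conversation, append-only
active_context: list[dict] = []   # Layer 2: what the main model actually reads

def compact(lo: int, hi: int, summarize) -> None:
    """Swap a span of Layer 1 out of the active window, leaving a pointer-style label."""
    label = summarize(ground_truth[lo:hi])  # a label for lookup, not a lossless digest
    active_context.append({"summary": label, "span": (lo, hi)})

def expand_in_clone(pointer: dict, question: str, ask_clone) -> str:
    """A clone of the main model expands the span and returns only what is needed,
    so the main context window never refills with the raw text."""
    lo, hi = pointer["span"]
    raw = "\n".join(ground_truth[lo:hi])
    return ask_clone(f"{question}\n\n{raw}")
```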
> Because expansion can recover arbitrarily large volumes of earlier conversation, this tool is restricted to sub-agents spawned via the Task tool; the main agent cannot call it directly. This restriction prevents uncontrolled context growth in the primary interaction loop.
What if lcm_expand is called on a summary that covers thousands of messages and immediately floods the sub-agent's own context window?
Does lcm_expand only unroll one "layer" of the DAG, with more unrolled if needed by another subagent?
By construction, individual summaries are not typically large enough to overload the context window when expanded.
The reason that the volume is potentially arbitrarily large is that one sub-agent can call lcm_expand multiple times - either vertically or horizontally. But that's a process that occurs gradually as the tool is used repeatedly.
This has not been a problem in our testing, but if it were a problem it would be easy to prevent sub-agents from invoking lcm_expand once their context buffer has reached a specified threshold.
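That guard could be as simple as something like this - hypothetical, not current behavior, and the attribute names and threshold are stand-ins:

```python
EXPAND_TOKEN_THRESHOLD = 120_000  # illustrative number, not from the paper

def guarded_expand(subagent, node_id):
    # `context_tokens` and `lcm_expand` are hypothetical handles on the sub-agent
    if subagent.context_tokens >= EXPAND_TOKEN_THRESHOLD:
        return ("lcm_expand refused: this sub-agent's context is near its limit; "
                "return what you have, or spawn a fresh sub-agent to keep expanding.")
    return subagent.lcm_expand(node_id)
```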
Another question is, why would earlier conversations need to be stored and recalled? They're irrelevant. Only records of the initial requirements and the work done, or work in progress, need to be stored.
You could definitely build a coding agent that way, and it sounds like you've done it. We store the conversation history because:
1. In our use of coding agents, we find that there are often things referenced earlier in the conversation (API keys, endpoint addresses, feedback to the agent, etc.) that it's useful to have persist.
2. This is a general-purpose LLM memory system, which we've just used here to build a coding agent. But it is also designed for personal assistants, legal LLMs, etc.
Seems that this would be useful for subagents as well. You could still allow an agent down the line to inspect the thinking traces/steps of a subagent by creating a mapping of the content, thus keeping it compressed but accessible if requested.
Our system uses sub-agents as a core part of its architecture.
That terminology can be confusing, because in other cases (and sometimes in our own architecture, like when executing thousands of operations via MAP) a sub-agent may be a smaller model given less complex individual tasks.
But the core mechanism we use for simulating unlimited context is to allow the main model to spin up instances of itself (sub-agents) with the previously summarized portion of the context expanded into its full, uncompressed state.
Expanding summaries into full text in sub-agents rather than the main thread is a critical part of our architecture, because it prevents the main context window from filling up.
1. It does not store chat history, reasoning traces, etc., only workflow artifacts (requirements, codebase analysis, implementation plan, etc.). I frankly do not believe those things are relevant.
2. It is significantly simpler and more lightweight, using only markdown files.
Is this more than keeping history before compaction, making it all available via tools, and some backend bookkeeping for parallel tool calls? I'm not exactly sure what is interesting here.
Much of this feels like a technical report of what they did, and makes me feel like we've reached the ICO whitepaper phase. I have very similar features in my custom coding agent; they seem pretty common sense to have. Are you really throwing away the compacted history? Saving it doesn't seem like a feature; the opposite seems like a gap. Same for making it available via tools/search, pretty standard stuff. Then too, the ADK framework I use handles parallel agents/tools.