I do wonder about the usefulness of this massive context-dumping exercise. 100M tokens is a ridiculous amount. Usually, to get good results on practical tasks, you need to think carefully about what you actually put into context.
I also have my gripes about the way 2-hop retrieval is presented here, with Figure 3 being the canonical example of what I would consider too trivial/misleading: the exact string "Eric Watts" appears in both the question and the context. That raises the natural question of how this compares to an LLM with a grep tool.
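For concreteness, here's roughly what I mean by a "grep tool" baseline. This is just a sketch (the function name, window size, and example text are all mine, not from the paper): if the question contains the literal string needed to locate the answer, a plain exact-match search over the context already surfaces the relevant passage, no long-context attention required.

```python
import re

def grep_baseline(context: str, entity: str, window: int = 200) -> list[str]:
    """Return text snippets surrounding each exact match of `entity`.

    Mimics an LLM with a grep tool: locate the literal query string in the
    context, then read only the text around each hit.
    """
    snippets = []
    for m in re.finditer(re.escape(entity), context):
        start = max(0, m.start() - window)
        end = min(len(context), m.end() + window)
        snippets.append(context[start:end])
    return snippets

# Hypothetical example in the style of the Figure 3 question:
context = ("... thousands of unrelated tokens ... "
           "Eric Watts joined the lab in 2019 and later moved to Denver. "
           "... more filler ...")
print(grep_baseline(context, "Eric Watts"))
```

If a benchmark question can be answered by this kind of lookup, it tells you very little about whether the model can actually reason over 100M tokens.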
What I would find more interesting is practical synthesis over such a large context, where you can't just string-match your way to answers. For example, dump all of Intel's x86 manuals into context and then ask the LLM to write assembly.
So basically there's no excuse not to see ChatGPT and Claude release 10M -> 100M context models within 6 months or so. Less than 9% degradation is crazy. Hopefully DeepSeek and Qwen4 can implement this too.
MSA: Memory Sparse Attention (github.com)
81 points by chaosprint | 21 March 2026 | 7 comments
Comments