Search-R1: Training LLMs to Reason and Leverage Search Engines with RL

(arxiv.org)

Comments

perbu 3 April 2025
This is the magical thing that happens when AI research happens in the open. DeepSeek published their model and their methodology, and then the nice people at the University of Illinois were able to build on it.

When OpenAI was launched, this is what I thought it was going to be like. Something, something for the betterment of mankind.

vessenes 3 April 2025
A couple of comments. What’s not that interesting here is that adding search to an LLM increases accuracy — this is known, and largely implemented via RAG or other search pipelines which then stuff information into the context.
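
For readers less familiar with that baseline, here is a minimal, purely illustrative sketch of the retrieve-then-stuff-into-context pattern described above; the toy retriever, corpus, and llm_generate callable are placeholder assumptions, not anything from the paper:

```python
def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Toy lexical retriever: rank passages by word overlap with the query."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: len(q_terms & set(p.lower().split())), reverse=True)
    return ranked[:k]


def answer_with_rag(question: str, corpus: list[str], llm_generate) -> str:
    """Classic pipeline: retrieve once, stuff the passages into the prompt, generate."""
    passages = retrieve(question, corpus)
    prompt = (
        "Answer the question using only the passages below.\n\n"
        + "\n\n".join(passages)
        + f"\n\nQuestion: {question}\nAnswer:"
    )
    return llm_generate(prompt)
```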

What might be interesting here is that they are thinking about a taxonomy of tool-use cases, and exploring training on them and therefore optimizing how the model uses them.

This to me is a proof of concept — an interesting one, but just a proof of concept. You can see from their example search that the model over-relied on search; it didn’t need to re-search three times to get the answer.

A next step that I think would be useful would be updating the reward function to penalize search, pushing the model to use search only when it needs to and not before. This to me is a likely framework going forward, where MCP tool costing matters, and it would be really useful to have in the next generation of tool-calling LLMs.

In the case of search we’d hopefully get a really useful signal and outcome for times the model is unsure — it would call a friend, and get good info! And for times it’s sure, we’d have taught it not to waste reward on that.
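
A rough sketch of what such a search-penalizing reward could look like; the function name, weights, free-search budget, and exact-match check here are illustrative assumptions, not Search-R1's actual reward:

```python
# Hypothetical outcome reward with a small per-search penalty, nudging the
# policy to query the search engine only when it is unsure. Not the paper's code.

def reward_with_search_penalty(
    predicted_answer: str,
    gold_answer: str,
    num_search_calls: int,
    correct_reward: float = 1.0,
    search_cost: float = 0.1,
    max_free_searches: int = 1,
) -> float:
    """Exact-match outcome reward minus a cost for each search beyond a free budget."""
    correct = predicted_answer.strip().lower() == gold_answer.strip().lower()
    outcome = correct_reward if correct else 0.0
    extra_searches = max(0, num_search_calls - max_free_searches)
    return outcome - search_cost * extra_searches


# A correct answer found with three searches scores lower than one found with one:
print(reward_with_search_penalty("Paris", "paris", num_search_calls=3))  # 0.8
print(reward_with_search_penalty("Paris", "paris", num_search_calls=1))  # 1.0
```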

deepsquirrelnet 3 April 2025
This is pretty cool. I have a similar model that’s 8 days into training on msmarco.

So far I only have the “cold start” data posted, but I’m planning on posting a full distillation dataset.

https://huggingface.co/datasets/dleemiller/lm25

ccux0013 7 April 2025
As far as I know, the idea behind Search-R1 stemmed from DeepRetrieval (search for it on GitHub), though the latter has gained much less attention. Also, DeepRetrieval was trained using real search engines, not just BM25. If you check their training log, they reached incredible performance (65% vs. the 25% SOTA) much earlier.

abidhusain 3 April 2025
Leveraging reinforcement learning (RL) for LLMs is a fascinating evolution in search technology. The potential to improve search engines so they can reason intelligently and process data in real time could revolutionize the entire industry.

DeathArrow 3 April 2025
Can someone ELI5 how reinforcement learning works with a transformer-based architecture?

sachinaag 3 April 2025
I wonder if Perplexity uses similar methods under the hood or if it is a completely different approach.