I ran a quick experiment investigating how DeepSeek-R1 performs on agentic tasks, despite not supporting tool use natively, and I was quite impressed by the preliminary results. This experiment runs DeepSeek-R1 in a single-agent setup, where the model not only plans the actions but also formulates them as executable Python code. On a subset1 of the GAIA validation split, DeepSeek-R1 outperforms Claude 3.5 Sonnet by 12.5% absolute, from 53.1% to 65.6% correct, and other models by an even larger margin:
The experiment followed the model usage recommendations from the DeepSeek-R1 paper and the model card: don't use few-shot examples, avoid adding a system prompt, and set the temperature to 0.5 - 0.7 (0.6 was used). You can find further evaluation details here.
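Concretely, these recommendations translate into request parameters like the following (a minimal sketch assuming an OpenAI-compatible chat API; the model name and task content are placeholders, not taken from the experiment):

```python
# Hypothetical request builder following the DeepSeek-R1 usage guidelines:
# no system message, no few-shot examples, temperature in the 0.5-0.7 range.
def build_request(task: str, model: str = "deepseek-r1") -> dict:
    return {
        "model": model,
        "temperature": 0.6,  # midpoint of the recommended 0.5-0.7 range
        "messages": [
            # A single user message; deliberately no "system" role message
            # and no few-shot examples, as recommended by the model card.
            {"role": "user", "content": task},
        ],
    }

request = build_request("What is the capital of France?")
```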
Approach
DeepSeek-R1's strong coding capabilities enable it to act as an agent without being explicitly trained for tool use. By allowing the model to generate actions as Python code, it can flexibly interact with environments through code execution.
Tools are implemented as Python code that is included directly in the prompt. This can be a simple function definition or a module of a larger package - any valid Python code. The model then generates code actions that call these tools.
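For example, a tool can be as simple as a Python function whose source is placed verbatim in the prompt (a hypothetical example; the actual tools used in the experiment are not shown here):

```python
import inspect

# Hypothetical tool: its source is included in the prompt, so the model
# can call it from the code actions it generates.
def web_search(query: str, max_results: int = 5) -> list[str]:
    """Return a list of result snippets for the given search query."""
    # Placeholder implementation; a real tool would call a search API.
    return [f"result {i} for '{query}'" for i in range(max_results)]

# The tool's source code becomes part of the prompt text.
tool_source = inspect.getsource(web_search)
prompt = f"You can use the following tools in your code actions:\n\n{tool_source}"
```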
Results from executing these actions are fed back to the model as follow-up messages, driving the next steps until a final answer is reached. The agent framework is a simple iterative coding loop that mediates the conversation between the model and its environment.
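The loop can be sketched as follows (a simplified illustration with a stubbed model and a made-up `FINAL:` stop convention; the real agent framework differs in its details):

```python
import io
import contextlib

# Minimal sketch of an iterative code-action loop: the model proposes
# Python code, the code is executed, and the output is fed back as a
# follow-up message until the model emits a final answer.
def run_agent(model, task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = model(messages)  # model returns a string
        messages.append({"role": "assistant", "content": reply})
        if reply.startswith("FINAL:"):  # stop condition (made up here)
            return reply.removeprefix("FINAL:").strip()
        # Otherwise treat the reply as a code action and execute it.
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(reply, {})
        # Feed the execution result back as a follow-up message.
        messages.append({"role": "user", "content": f"Output:\n{buf.getvalue()}"})
    return "no answer within step limit"

# Stub model: first emits a code action, then a final answer.
def stub_model(messages):
    if len(messages) == 1:
        return "print(6 * 7)"
    return "FINAL: 42"

answer = run_agent(stub_model, "Compute 6 * 7")
```

The stub stands in for a chat model call; in the experiment, DeepSeek-R1 fills this role and the executed results drive the subsequent turns.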
Conversations
DeepSeek-R1 is used as a chat model in my experiment, where the model autonomously pulls additional context from its environment by using tools, e.g. by using a search engine or fetching data from web pages. This drives a conversation with the environment that continues until a final answer is reached.
In contrast, o1 models are known to perform poorly when used as chat models, i.e. they don't attempt to pull context during a conversation. According to the linked article, o1 models perform best when they have the full context available, with clear instructions on what to do with it.
Initially, I also tried a full-context-in-a-single-prompt approach at each step (with results from previous steps included), but this led to significantly lower scores on the GAIA subset. Switching to the conversational approach described above, I was able to reach the reported 65.6% performance.
This raises an interesting question about the claim that o1 isn't a chat model - maybe this observation was more relevant to older o1 models that lacked tool use capabilities? After all, isn't tool use support an important mechanism for enabling models to pull additional context from their environment? This conversational approach certainly seems effective for DeepSeek-R1, though I still need to conduct similar experiments with o1 models.
Generalization
Although DeepSeek-R1 was mainly trained with RL on math and coding tasks, it is remarkable that generalization to agentic tasks with tool use via code actions works so well. This ability to generalize to agentic tasks is reminiscent of recent research by DeepMind showing that RL generalizes whereas SFT memorizes, although generalization to tool use wasn't investigated in that work.
Despite its ability to generalize to tool use, DeepSeek-R1 often produces very long reasoning traces at each step, compared to other models in my experiments, limiting the usefulness of this model in a single-agent setup. Even simpler tasks sometimes take a long time to complete. Further RL on agentic tool use, be it via code actions or not, could be one option to improve efficiency.
Underthinking
I also observed the underthinking phenomenon with DeepSeek-R1. This is when a reasoning model frequently switches between different reasoning thoughts without sufficiently exploring promising paths to reach a correct solution. This was a major contributor to the overly long reasoning traces produced by DeepSeek-R1. It can be seen in the recorded traces that are available for download.
Future experiments
Another common application of reasoning models is to use them for planning only, while using other models for executing actions. This could be a potential new feature of freeact, if this separation of roles proves useful for more complex tasks.
I'm also curious about how reasoning models that already support tool use (like o1, o3, ...) perform in a single-agent setup, with and without generating code actions. Recent developments like OpenAI's Deep Research or Hugging Face's open-source Deep Research, which also uses code actions, look interesting.
1
Exploring DeepSeek R1's Agentic Capabilities Through Code Actions
Abel Gregorio edited this page 2 months ago