Exploring DeepSeek-R1's Agentic Capabilities Through Code Actions


I ran a quick experiment investigating how DeepSeek-R1 performs on agentic tasks, despite not supporting tool use natively, and I was quite impressed by initial results. The experiment runs DeepSeek-R1 in a single-agent setup, where the model not only plans the actions but also generates the actions as executable Python code. On a subset of the GAIA validation split, DeepSeek-R1 outperforms Claude 3.5 Sonnet by 12.5% absolute, from 53.1% to 65.6% correct, and other models by an even larger margin:

The experiment followed model usage recommendations from the DeepSeek-R1 paper and the model card: don't use few-shot examples, avoid adding a system prompt, and set the temperature to 0.5 - 0.7 (0.6 was used). You can find more evaluation details here.
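As a minimal sketch, those recommendations translate into a request payload along these lines (the model name and task text are placeholders, not taken from the actual experiment):

```python
# Hypothetical chat request following the DeepSeek-R1 usage recommendations:
# no system message, no few-shot examples, temperature in the 0.5 - 0.7 range.
request = {
    "model": "deepseek-reasoner",  # placeholder model identifier
    "messages": [
        # No system message; the task goes directly into the first user turn.
        {"role": "user", "content": "<task description>"},
    ],
    "temperature": 0.6,
}
```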

Approach

DeepSeek-R1's strong coding capabilities enable it to act as an agent without being explicitly trained for tool use. By allowing the model to generate actions as Python code, it can flexibly interact with environments through code execution.

Tools are implemented as Python code that is included directly in the prompt. This can be a simple function definition or a module from a larger package - any valid Python code. The model then generates code actions that call these tools.
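As a sketch of this idea (the tool name and behavior are illustrative, not the actual tools used in the experiment), a tool and a model-generated code action calling it might look like this:

```python
# Hypothetical tool, shown to the model as plain Python source in the prompt.
# The name and behavior are illustrative; a real tool would do actual I/O.
def fetch_page(url: str) -> str:
    """Return the text content of a web page (stubbed here for illustration)."""
    return f"<contents of {url}>"

# A code action generated by the model is just Python code calling the tool:
page = fetch_page("https://example.com")
```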

Results from executing these actions are fed back to the model as follow-up messages, driving the next steps until a final answer is reached. The agent framework is a simple iterative coding loop that mediates the conversation between the model and its environment.
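The loop described above can be sketched as follows. The `model` and `execute` callables are assumed interfaces for illustration, not the actual freeact API:

```python
# Minimal sketch of the iterative coding loop: the model emits code actions,
# the environment executes them, and execution results are fed back as
# follow-up messages until the model produces a final answer.
def agent_loop(model, execute, task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = model(messages)  # model plans and emits a code action or an answer
        messages.append({"role": "assistant", "content": reply["text"]})
        if reply.get("final_answer") is not None:
            return reply["final_answer"]  # conversation ends with a final answer
        result = execute(reply["code"])  # run the code action in the environment
        messages.append({"role": "user", "content": f"Execution result:\n{result}"})
    raise RuntimeError("no final answer within step budget")
```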

Conversations

DeepSeek-R1 is used as a chat model in my experiment, where the model autonomously pulls additional context from its environment by using tools, e.g. by querying a search engine or fetching information from web pages. This drives a conversation with the environment that continues until a final answer is reached.

In contrast, o1 models are known to perform poorly when used as chat models, i.e. they don't try to pull context during a conversation. According to the linked article, o1 models perform best when they have the full context available, with clear instructions on what to do with it.

Initially, I also tried a full-context-in-a-single-prompt approach at each step (with results from previous steps included), but this led to significantly lower scores on the GAIA subset. Switching to the conversational approach described above, I was able to reach the reported 65.6% performance.

This raises an interesting question about the claim that o1 isn't a chat model - perhaps this observation was more relevant to older o1 models that lacked tool use capabilities? After all, isn't tool use support an important mechanism for enabling models to pull additional context from their environment? This conversational approach certainly seems effective for DeepSeek-R1, though I still need to run comparable experiments with o1 models.

Generalization

Although DeepSeek-R1 was mainly trained with RL on math and coding tasks, it is remarkable that generalization to agentic tasks with tool use via code actions works so well. This ability to generalize to agentic tasks is reminiscent of recent research by DeepMind showing that RL generalizes whereas SFT memorizes, although generalization to tool use wasn't investigated in that work.

Despite its ability to generalize to tool use, DeepSeek-R1 often produces very long reasoning traces at each step, compared to other models in my experiments, limiting the usefulness of this model in a single-agent setup. Even simpler tasks sometimes take a long time to complete. Further RL on agentic tool use, whether via code actions or not, could be one option to improve efficiency.

Underthinking

I also observed the underthinking phenomenon with DeepSeek-R1. This is when a reasoning model frequently switches between different approaches without sufficiently exploring promising paths to reach a correct solution. This was a major cause of the very long reasoning traces produced by DeepSeek-R1. It can be seen in the recorded traces that are available for download.

Future experiments

Another common application of reasoning models is to use them for planning only, while using other models for generating code actions. This could be a potential new feature of freeact, if this separation of roles proves useful for more complex tasks.

I'm also curious how reasoning models that already support tool use (like o1, o3, ...) perform in a single-agent setup, with and without generating code actions. Recent developments like OpenAI's Deep Research or Hugging Face's open-source Deep Research, which also uses code actions, look interesting.