ReAct (Reasoning+Acting) is a prompting technique for large language models (LLMs) that combines reasoning and acting to improve performance on complex tasks. It was introduced by Yao et al. in 2022 in their paper "ReAct: Synergizing Reasoning and Acting in Language Models"[1].

Overview


ReAct prompts LLMs to generate both reasoning traces and task-specific actions in an interleaved manner. This allows the model to:

  1. Induce, track, and update action plans
  2. Handle exceptions
  3. Gather and incorporate external information

The key idea is to leverage the synergies between reasoning and acting, similar to how humans approach complex tasks.
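The interleaved format can be illustrated with a minimal prompt template. This is a hypothetical sketch (the template text and action names are illustrative, not the exact wording from the paper):

```python
# Hypothetical ReAct-style prompt template showing the interleaved
# Thought / Action / Observation format. The wording and the action
# set (Search, Lookup, Finish) follow the paper's examples loosely.
REACT_TEMPLATE = """Answer the question by interleaving Thought, Action, and \
Observation steps. Available actions: Search[query], Lookup[term], Finish[answer].

Question: {question}
Thought 1:"""

prompt = REACT_TEMPLATE.format(question="What is the capital of France?")
print(prompt)
```

In practice, few-shot example traces in this same format are prepended before the question so the model imitates the structure.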

How it works


A ReAct prompt typically consists of:

  1. A task description
  2. Few-shot examples showing the desired format
  3. Alternating "Thought", "Action", and "Observation" steps

For example:

```
Question: What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?

Thought 1: I need to search for information about the Colorado orogeny and its eastern sector.

Action 1: Search[Colorado orogeny]

Observation 1: The Colorado orogeny was an episode of mountain building in Colorado and surrounding areas.

Thought 2: I need more specific information about the eastern sector.

Action 2: Lookup[eastern sector]

Observation 2: The eastern sector extends into the High Plains and is called the Central Plains orogeny.

Thought 3: Now I need to find the elevation range of the High Plains.

Action 3: Search[High Plains elevation range]

Observation 3: The High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130 m).

Thought 4: I have found the answer.

Action 4: Finish[1,800 to 7,000 ft ```

Applications


ReAct has been shown to be effective in various tasks, including:

  • Question answering (e.g., HotpotQA)
  • Fact verification (e.g., FEVER dataset)
  • Decision-making in complex environments (e.g., ALFWorld, WebShop)

Advantages


Compared to other prompting techniques, ReAct offers several benefits:

  • Improved performance on knowledge-intensive tasks
  • Better handling of multi-step reasoning
  • Increased interpretability of model decisions
  • Reduced hallucination through grounding in external information

Limitations


Some limitations of ReAct include:

  • Increased prompt length due to the inclusion of reasoning steps
  • Potential for error propagation if early reasoning steps are incorrect
  • Dependence on the quality of external information sources


References


[1] Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., & Cao, Y. (2022). ReAct: Synergizing Reasoning and Acting in Language Models. arXiv preprint arXiv:2210.03629.