Understanding Agentic Workflows (ReAct, Autonomous Commits, and Plan-and-Execute) | From Zero to AI Hero

As a developer, you use tools to complete multi-step tasks. An agentic workflow takes this idea and applies it to AI, giving a Large Language Model (LLM) the ability to think, plan, use external tools, and execute complex, multi-step tasks like fixing a bug or writing a feature. The core logic enabling these autonomous tools is often the ReAct framework.

Agentic Workflows: A Quick Summary

| Component | Role in Agentic Workflows | Simple Analogy | Key Framework |
| --- | --- | --- | --- |
| Agent | The central LLM that manages the task. | The project manager. | ReAct, Plan-and-Execute |
| Tools | External capabilities the agent can call (e.g., file editor, terminal, search). | A developer’s hands: keyboard, mouse, web browser. | File I/O, git, google:search |
| Workflow | The sequence of “thought” and “action” steps the agent takes. | The to-do list for solving a bug. | Chain-of-Thought (CoT) |
| Core Value | Allows AI to go beyond simple text generation to complete actions in the real world (your codebase). | Turning an idea into a completed, committed piece of code. | |

What is an Agentic Workflow?

In the first three articles of this series, we focused on how the AI generates text:

  1. RAG (Article #1) gives the AI context.
  2. Embeddings (Article #2) help the AI find that context.
  3. Chain-of-Thought (Article #3) helps the AI reason about the context.

An agentic workflow takes that reasoning and turns it into concrete actions. An agent is an LLM that runs in a loop, continuously performing three steps until the task is complete:

  1. Observe: Read the environment (code, error logs, user request).
  2. Think/Plan: Decide what to do next.
  3. Act/Tool Use: Execute the planned step using an external tool.

These steps allow the AI to tackle large tasks, like writing an “autonomous commit” that fixes a complex issue, by breaking them down into small, solvable units.
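
This loop is small enough to sketch directly. The snippet below is a minimal illustration, assuming a hypothetical `call_llm` helper and a toy `TOOLS` registry; any real model API and tool implementations would slot into those two places.

```python
# Minimal sketch of the Observe -> Think/Plan -> Act loop described above.
# `call_llm` and the TOOLS registry are hypothetical placeholders.

def call_llm(prompt: str) -> dict:
    """Hypothetical model call. Expected to return a decision such as:
    {"thought": "...", "tool": "run_tests", "args": {}, "done": False}"""
    raise NotImplementedError("Plug in your model provider here.")

TOOLS = {
    "read_file": lambda path: open(path).read(),   # toy file tool
    "run_tests": lambda: "2 passed, 0 failed",     # stubbed terminal call
}

def run_agent(task: str, max_steps: int = 10) -> str:
    observation = task                              # 1. Observe: start from the user request
    for _ in range(max_steps):
        decision = call_llm(f"Task: {task}\nLatest observation: {observation}")
        if decision["done"]:                        # 2. Think/Plan: the model decides what comes next
            return decision["thought"]
        tool = TOOLS[decision["tool"]]              # 3. Act: run the chosen tool
        observation = tool(**decision["args"])      # the result becomes the next observation
    return "Stopped: step limit reached."
```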

ReAct Framework for Developer Agents

The ReAct framework is the most common design pattern for building developer agents. ReAct stands for Reasoning and Acting: it explicitly asks the LLM to alternate between an internal Thought step and an external Action step.

Here is an example of a developer agent using ReAct to fix a bug:

| Step Type | Output | Description |
| --- | --- | --- |
| User Input | “Fix the bug reported in Jira ticket JIRA-101 where the user search is timing out.” | The initial prompt. |
| Thought | “I need to understand the error. First, I will search the codebase for the file related to ‘user search’ using RAG, and then I will search the internet for the specific timeout error code mentioned in the ticket.” | The LLM uses Chain-of-Thought to plan. |
| Action | `google:search(query="JIRA-101 timeout error code")` | The agent calls a tool to get external information. |
| Observation | `[Search Result: "Common database lock issue with Postgres on large queries."]` | The tool returns a result, which the agent observes. |
| Thought | “The problem is likely a missing database index. I will now retrieve the related code files to confirm the query and add the index command.” | The agent updates its plan based on the observation. |
| Action | `file_editor(action="edit", file="db_setup.sql", content="CREATE INDEX...")` | The agent uses a file editing tool to make the fix. |
| Thought | “The fix is applied. I will now run the test suite to ensure no regressions were introduced.” | The agent plans the next verification step. |

This continuous cycle of Thought (Reasoning) and Action (Acting) is what makes the ReAct framework so effective for complex, real-world development tasks.
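
In practice, the Thought/Action/Observation cycle is driven by a harness that parses the model’s text output, runs the requested tool, and feeds the result back. The sketch below is illustrative only: `complete` is a hypothetical LLM call, the two tools are canned stubs mirroring the table above (with a simplified single-argument `file_editor`), and real frameworks use more robust action parsing.

```python
import re

# Illustrative ReAct harness: the model writes "Thought: ..." and 'Action: tool("arg")'
# lines; this loop runs the action and appends an "Observation: ..." line for the
# next turn. `complete` and both tools are hypothetical stand-ins, not a specific SDK.

ACTION_RE = re.compile(r'Action:\s*(\w+)\("(.*)"\)')

def complete(transcript: str) -> str:
    """Hypothetical LLM call returning the next Thought/Action block as plain text."""
    raise NotImplementedError

def search(query: str) -> str:
    return "Common database lock issue with Postgres on large queries."

def file_editor(instruction: str) -> str:
    return "Applied edit: CREATE INDEX idx_users_name ON users (name);"

TOOLS = {"search": search, "file_editor": file_editor}

def react(task: str, max_turns: int = 8) -> str:
    transcript = f"Task: {task}\n"
    for _ in range(max_turns):
        step = complete(transcript)              # model emits its next Thought and Action
        transcript += step + "\n"
        match = ACTION_RE.search(step)
        if match is None:                        # no Action line means the agent is finished
            break
        name, argument = match.groups()
        result = TOOLS[name](argument)           # Act: run the named tool (single string arg here)
        transcript += f"Observation: {result}\n" # feed the result back for the next Thought
    return transcript
```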

Beyond ReAct: Plan-and-Execute

A simpler, and sometimes faster, alternative to ReAct is the Plan-and-Execute model. In this model, the LLM generates the entire plan upfront, before taking any action.

  1. Plan: “I will do Step 1, then Step 2, then Step 3.”
  2. Execute: The agent executes Steps 1, 2, and 3 without stopping to reconsider.

While this can be faster for straightforward tasks, it is less robust. If Step 1 fails, the agent may blindly execute Steps 2 and 3 anyway, compounding the failure. ReAct is superior for debugging workflows because it adapts its thinking after every single action, making it more resilient to unexpected observations.
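
Here is a rough sketch of the Plan-and-Execute shape, assuming a hypothetical `plan_llm` call that returns the whole plan as a list of tool steps; contrast it with the ReAct loop above, which re-plans after every observation.

```python
# Plan-and-Execute sketch: one planning call up front, then every step runs
# without re-planning. `plan_llm` and the tools dict are hypothetical placeholders.

def plan_llm(task: str) -> list[dict]:
    """Hypothetical planner. Might return, for example:
    [{"tool": "read_file", "args": {"path": "db_setup.sql"}},
     {"tool": "apply_patch", "args": {"diff": "..."}},
     {"tool": "run_tests", "args": {}}]"""
    raise NotImplementedError

def plan_and_execute(task: str, tools: dict) -> list[str]:
    plan = plan_llm(task)              # 1. Plan: the full step list is fixed here
    results = []
    for step in plan:                  # 2. Execute: no Thought step between actions,
        # so a failure in an early step is not noticed before the later ones run.
        results.append(str(tools[step["tool"]](**step["args"])))
    return results
```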

Autonomous Commits: The Goal of Agentic Workflows

The ultimate goal of many developer agents is to produce an Autonomous Commit: a set of changes, along with a high-quality commit message, that addresses a task without human intervention.

For this to work, the agent must be proficient in several areas:

  • Context: It must find and use the right files via RAG (Article #1).
  • Reasoning: It must structure its fix logically using Chain-of-Thought (Article #3).
  • Specialization: If the agent is built using a customized model via LoRA fine-tuning for code LLMs (Article #5), it can better match the company’s unique coding style and commit conventions.
  • Safety: It must operate within strict guardrails in AI code generation (Article #7) to ensure the commit does not introduce insecure code or leak internal data.
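
As a rough illustration of how these pieces come together at the end of a run, the sketch below shows the last mile of an autonomous commit. The `run_agent_fix` and `write_commit_message` helpers are hypothetical, standing in for the agent loop itself and a message-drafting prompt; the pytest and git invocations are ordinary CLI calls.

```python
import subprocess

# Sketch of the last mile of an autonomous commit: apply the agent's fix, verify,
# then commit. `run_agent_fix` and `write_commit_message` are hypothetical stand-ins.

def sh(*cmd: str) -> str:
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def run_agent_fix(task: str) -> None:
    """Hypothetical: the agent loop from earlier, editing files via its tools."""
    raise NotImplementedError

def write_commit_message(task: str, diff: str) -> str:
    """Hypothetical: one more LLM call that drafts a conventional commit message."""
    raise NotImplementedError

def autonomous_commit(task: str) -> str:
    run_agent_fix(task)
    sh("python", "-m", "pytest", "-q")              # guardrail: failing tests raise and block the commit
    message = write_commit_message(task, sh("git", "diff"))
    sh("git", "add", "-A")
    sh("git", "commit", "-m", message)
    return sh("git", "rev-parse", "HEAD").strip()   # hash of the newly created commit
```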

Frequently Asked Questions (FAQs)

  1. What is the biggest challenge of agentic workflows?

    The main challenge is reliability. The most common failure mode is “tool hallucination”: the agent decides to use a tool that does not exist, or calls a real tool with the wrong arguments. Managing the complexity of the internal Thought process and keeping execution fast and reliable are also crucial.
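
One common mitigation is to validate every requested tool call before running it and to return errors to the agent as observations so it can correct itself. The dispatcher below is a sketch with an illustrative registry, not a specific framework’s API.

```python
# Sketch of a guard against tool hallucination: unknown tool names are rejected
# and the error is handed back as an observation instead of crashing the run.

def safe_dispatch(name: str, args: dict, tools: dict) -> str:
    if name not in tools:
        return (f"Error: unknown tool '{name}'. "
                f"Available tools: {sorted(tools)}")   # fed back so the next Thought can recover
    try:
        return str(tools[name](**args))
    except TypeError as exc:                           # wrong arguments also go back as text
        return f"Error calling {name}: {exc}"
```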

  2. How do agents handle complex project structure?

    Agents rely heavily on code embeddings. When an agent needs to know which file to modify, it embeds its current goal (e.g., “fix shipping cost calculation”) and searches the vector database for the closest code vector, allowing it to accurately choose the right tool command (file_editor) and file path.
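
A minimal sketch of that lookup, assuming a hypothetical `embed` function and pre-computed vectors for each file in the repository:

```python
import numpy as np

# Sketch: embed the agent's goal, compare it against pre-computed file embeddings,
# and return the closest file path. `embed` is a hypothetical embedding call.

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("Call your embedding model here.")

def pick_file(goal: str, file_vectors: dict[str, np.ndarray]) -> str:
    g = embed(goal)  # e.g. goal = "fix shipping cost calculation"
    def cosine(v: np.ndarray) -> float:
        return float(np.dot(g, v) / (np.linalg.norm(g) * np.linalg.norm(v)))
    return max(file_vectors, key=lambda path: cosine(file_vectors[path]))
```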

  3. Does an agent use a different kind of LLM?

    No, the agent is often built using the same core LLM (like a powerful Transformer model). The difference is the prompting: the agent’s system prompt is highly structured, telling the LLM to always output in the “Thought, Action, Observation” format of ReAct.
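
For example, a ReAct-style system prompt might look roughly like this (illustrative wording, not a canonical prompt):

```python
# Illustrative ReAct-style system prompt; only the prompt, not the model, changes.
REACT_SYSTEM_PROMPT = """You are a coding agent. Work on the task step by step.
On every turn, reply in exactly this format:

Thought: <your reasoning about what to do next>
Action: <one tool call, e.g. search("query") or read_file("path")>

Stop after the Action line and wait for an Observation.
When the task is finished, reply with:

Thought: <summary of the completed work>
Final Answer: <what you changed and why>

Available tools: search, read_file, file_editor, run_tests."""
```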

  4. How does LoRA Fine-Tuning help an agent?

    The customization method of LoRA fine-tuning can teach an agent specialized internal logic, such as: “When debugging an authentication error, always check the JWT library version before the database permissions.” This makes the agent’s initial plan (its first Thought) more effective.
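
For instance, a single (hypothetical) supervised record of the kind such a fine-tuning dataset might contain:

```python
# Hypothetical fine-tuning record illustrating how team-specific debugging order
# could be baked into the agent's first Thought via LoRA training data.
lora_example = {
    "prompt": "Task: users report intermittent 401 errors after the last deploy. "
              "Write your first Thought.",
    "completion": "Thought: Per team convention, check the pinned JWT library "
                  "version before inspecting database permissions.",
}
```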

  5. Are agentic workflows a type of multi-modality?

    They can be. An agent might use a tool to analyze a UI screenshot and then use that visual information to decide on the next code action. This blending of text and visual data is an example of multi-modal AI for developer workflows.

Conclusion

Agentic workflows, powered by the ReAct framework, represent the most advanced stage of AI integration in software development. They move AI beyond simple chatbots into true collaborators capable of planning, acting, and completing tasks like autonomous commits. By building these systems, companies are transforming the way code is written, fixed, and deployed, pointing toward significantly faster and more efficient development cycles.

Aditya Gupta