Updated on Nov 06, 2024

AI Agents are Neuro-Symbolic Systems


We believe AI's short- to mid-term future belongs to agents, and that the long-term future of AGI may evolve from agentic systems. Our definition of agents covers any neuro-symbolic system in which we merge neural AI (such as an LLM) with semi-traditional software.

With agents, we allow LLMs to integrate with code, enabling AI to search the web, perform math, and plug into essentially anything we can build with software. The scope of use cases where AI can integrate with the broader world of software is phenomenal.

In this introduction to AI agents, we will cover the essential concepts that make them what they are and why that will make them the core of real-world AI in the years to come.


Neuro-Symbolic Systems

Neuro-symbolic systems consist of both neural and symbolic computation, where:

  • Neural refers to LLMs, embedding models, or other neural network-based models.
  • Symbolic refers to traditional, rule-based computation expressed in symbolic logic, such as code.

Both neural and symbolic AI originate from the early philosophical approaches to AI: connectionism (now neural) and symbolism. Symbolic AI is the more traditional AI. Diehard symbolists believed they could achieve true AGI via written rules, ontologies, and other logical functions.

The other camp were the connectionists. Connectionism emerged in 1943 with a theoretical neural circuit but truly kicked off with Rosenblatt's perceptron paper in 1958 [1][2]. Both of these approaches to AI are fascinating but deserve more time than we can give them here, so we will leave further exploration of these concepts for a future chapter.

Most important to us is understanding where symbolic logic outperforms neural-based compute and vice-versa.

| Neural | Symbolic |
| --- | --- |
| Flexible, learned logic that can cover a huge range of potential scenarios. | Mostly hand-written rules, which can be very granular and fine-tuned but are hard to scale. |
| Hard to interpret why a neural system does what it does; very difficult or even impossible to predict its behavior. | Rules are written down and can be understood. When unsure why a particular output was produced, we can look at the rules/logic to understand. |
| Requires huge amounts of data and compute to train state-of-the-art models, making it hard to add new abilities or update with new information. | Code is relatively cheap to write, can be updated with new features easily, and the latest information can often be added almost instantaneously. |
| When trained on broad datasets, often lacks performance when exposed to unique scenarios that are not well represented in the training data. | Easily customized to unique scenarios. |
| Struggles with complex computations such as mathematical operations. | Performs complex computations very quickly and accurately. |

Pure neural architectures struggle with many seemingly simple tasks. For example, an LLM cannot provide an accurate answer if we ask it for today’s date.

Retrieval Augmented Generation (RAG) is commonly used to provide LLMs with up-to-date knowledge on a particular subject or access to proprietary knowledge.
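
To make the contrast concrete, below is a minimal sketch (our own illustration) of the symbolic side of the date example: a plain Python function that returns today's date exactly, something a static LLM cannot do on its own. Later sections cover how an agent can expose a function like this to an LLM as a tool.

```python
from datetime import date

# Symbolic computation: a plain function that always returns the correct,
# current date -- information a static LLM does not have after training.
def get_todays_date() -> str:
    return date.today().isoformat()

print(get_todays_date())  # e.g. "2024-11-06"
```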

Giving LLMs Superpowers

By 2020, it was becoming clear that neural AI systems could not perform tasks symbolic systems typically excelled in, such as arithmetic, accessing structured DB data, or making API calls. These tasks require discrete input parameters that allow us to process them reliably according to strict written logic.

In 2022, researchers at AI21 developed Jurassic-X, an LLM-based "neuro-symbolic architecture". Neuro-symbolic refers to merging the "neural computation" of large language models (LLMs) with the more traditional (i.e., symbolic) computation of code.

Jurassic-X used the Modular Reasoning, Knowledge, and Language (MRKL) system [3]. The researchers developed MRKL to solve the limitations of LLMs, namely:

  • Lack of up-to-date knowledge, whether that is the latest in AI or something as simple as today's date.
  • Lack of proprietary knowledge, such as internal company docs or your calendar bookings.
  • Lack of reasoning, i.e. the inability to perform operations that traditional software is good at, like running complex mathematical operations.
  • Lack of ability to generalize. Back in 2022, most LLMs had to be fine-tuned to perform well in a specific domain. This problem is still present today but is far less prominent, as SotA models generalize much better and can use tools relatively well (although we could certainly apply the MRKL approach to improve tool-use performance even today).

MRKL represents one of the earliest forms of what we would now call an agent; it is an LLM (neural computation) paired with executable code (symbolic computation).
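
As a rough illustration of the idea (our own simplification, not AI21's implementation), the sketch below routes purely arithmetic queries to a symbolic calculator module and everything else to the LLM:

```python
import re

def calculator(expression: str) -> str:
    # Symbolic expert: exact arithmetic via Python's evaluator (no builtins exposed).
    return str(eval(expression, {"__builtins__": {}}))

def llm(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g. an API request).
    return f"<LLM response to: {prompt}>"

def mrkl_route(query: str) -> str:
    # Crude router: send pure arithmetic to the calculator, everything else to the LLM.
    if re.fullmatch(r"[\d\s\.\+\-\*\/\(\)]+", query):
        return calculator(query)
    return llm(query)

print(mrkl_route("17 * 23"))        # handled symbolically -> "391"
print(mrkl_route("What is MRKL?"))  # handled neurally
```

In the real MRKL system the routing is far more capable, deciding between many expert modules (calculators, databases, APIs), but the division of labour is the same: neural computation decides, symbolic computation executes.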

ReAct and Tools

There is a misconception in the broader industry that an AI agent is an LLM contained within some looping logic that can generate inputs for and execute code functions. This definition of agents originates from the huge popularity of the ReAct agent framework and the adoption of a similar structure with function/tool calling by LLM providers such as OpenAI, Anthropic, and Ollama.

ReAct agent flow with the Reasoning-Action loop [4]. When the action chosen specifies to use a normal tool, the tool is used and the observation returned for another iteration through the Reasoning-Action loop. To return a final answer to the user, the LLM must choose action "answer" and provide the natural language response, finishing the loop.

Our "neuro-symbolic" definition is much broader but certainly does include ReAct agents and LLMs paired with tools. This agent type is the most common for now, so it's worth understanding the basic concept behind it.

The Reason Action (ReAct) method encourages LLMs to generate iterative reasoning and action steps. During reasoning, the LLM describes what steps are to be taken to answer the user's query. Then, the LLM generates an action, which we parse into an input to some executable code, which we typically describe as a tool/function call.

ReAct method. Each iteration includes a Reasoning step followed by an Action (tool call) step. The Observation is the output from the previous tool call. During the final iteration the agent calls the answer tool, meaning we generate the final answer for the user.

Following the reasoning and action steps, the tool call returns an observation. Our logic passes this observation back to the LLM, which uses it to generate subsequent reasoning and action steps.

The ReAct loop continues until the LLM has enough information to answer the original input. Once the LLM reaches this state, it calls a special answer action with the generated answer for the user.
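
The sketch below strips this loop down to its skeleton. The llm_step function is a toy stand-in for a real LLM call that would return reasoning, a chosen action, and an action input (for example as JSON); the loop structure around it is the part that matters.

```python
from datetime import date

# Tool (symbolic) side: plain functions keyed by name.
def get_todays_date(_: str) -> str:
    return date.today().isoformat()

tools = {"get_todays_date": get_todays_date}

def llm_step(scratchpad: str) -> dict:
    # Toy stand-in for a real LLM call: ask for the date first,
    # then answer once an observation appears in the scratchpad.
    if "Observation:" not in scratchpad:
        return {"reasoning": "I need today's date.",
                "action": "get_todays_date", "action_input": ""}
    return {"reasoning": "I now have enough information.",
            "action": "answer", "action_input": "Today's date is in the observation above."}

def react_agent(query: str, max_steps: int = 5) -> str:
    scratchpad = f"Question: {query}\n"
    for _ in range(max_steps):
        step = llm_step(scratchpad)                  # reasoning + action selection
        scratchpad += f"Thought: {step['reasoning']}\n"
        if step["action"] == "answer":               # special answer action ends the loop
            return step["action_input"]
        observation = tools[step["action"]](step["action_input"])
        scratchpad += (f"Action: {step['action']}({step['action_input']})\n"
                       f"Observation: {observation}\n")
    return "Max steps reached without an answer."

print(react_agent("What is today's date?"))
```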

Not Only LLMs and Tool Calls

LLMs paired with tool calling are powerful but far from the only approach to building agents. Using the definition of neuro-symbolic, we cover architectures such as:

  • Multi-agent workflows that involve multiple LLM-tool (or other agent structure) combinations.
  • More deterministic workflows with predefined neural model-tool paths that may fork or merge as the use case requires.
  • Embedding models that detect user intent and route to the appropriate tool or LLM based on similarity in vector space (see the sketch at the end of this section).

These are just a few high-level examples of alternative agent structures. Far from being niche designs, we find these alternatives frequently perform better than the more common ReAct or Tool agents. We will cover all of these examples and more in future chapters.
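
As one sketch of the third pattern, intent detection in vector space can be as simple as comparing a query embedding against example utterances for each route. The bag-of-words embed function below is a toy stand-in for the real embedding model you would use in practice:

```python
import numpy as np

# Toy stand-in for a real embedding model: a bag-of-words vector is
# enough to illustrate routing by similarity in vector space.
VOCAB = ["date", "today", "time", "weather", "forecast", "rain", "email", "send", "message"]

def embed(text: str) -> np.ndarray:
    words = text.lower().split()
    return np.array([float(w in words) for w in VOCAB])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Each route maps example utterances to a tool or downstream agent.
routes = {
    "datetime_tool": ["what is the date today", "what time is it"],
    "weather_tool": ["what is the weather forecast", "will it rain today"],
    "email_agent": ["send an email", "write a message to my team"],
}

def route(query: str) -> str:
    q = embed(query)
    scores = {name: max(cosine(q, embed(u)) for u in utterances)
              for name, utterances in routes.items()}
    return max(scores, key=scores.get)

print(route("is it going to rain this afternoon"))  # -> "weather_tool"
```

Because no LLM generation is involved at routing time, this kind of decision is fast and deterministic, which is often exactly what a production agent needs.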


Agents are fundamental to the future of AI, but that doesn't mean we should expect that future to come from agents in their most popular form today. ReAct and Tool agents are great and handle many simple use cases well, but the scope of agents is much broader, and we believe thinking beyond ReAct and Tools is key to building future AI.


You can sign up for the Aurelio AI newsletter to stay updated on future releases in our comprehensive course on agents.


References

[1] The curious case of Connectionism (2019) https://www.degruyter.com/document/doi/10.1515/opphil-2019-0018/html

[2] F. Rosenblatt, The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain (1958), Psychological Review

[3] E. Karpas et al. MRKL Systems: A Modular, Neuro-Symbolic Architecture That Combines Large Language Models, External Knowledge Sources and Discrete Reasoning (2022), AI21 Labs

[4] S. Yao et al. ReAct: Synergizing Reasoning and Acting in Language Models (2022), ICLR