langchain - LangGraph tool calling - LLM doesn't call provided tools - Stack Overflow


I'm currently trying to learn how to develop an agentic AI by following the LangGraph Academy video. The video uses OpenAI's GPT, but I decided to use Llama 3.2 3B since it is free.

Below is my code.

tools definition

def multiply(a: int, b: int) -> int:
    """Multiply a and b.

    Args:
        a: first int
        b: second int
    """
    return a * b

def add(a: int, b: int) -> int:
    """Adds a and b.

    Args:
        a: first int
        b: second int
    """
    return a + b

def divide(a: int, b: int) -> float:
    """Divide a and b.

    Args:
        a: first int
        b: second int
    """
    return a / b

tools = [add, multiply, divide]
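
As an aside, bind_tools derives a JSON schema for each tool from the function's signature and docstring. A quick way to inspect exactly what the model will be shown (a sketch on my part, assuming a recent langchain-core) is:

from langchain_core.utils.function_calling import convert_to_openai_tool

# Print the OpenAI-format tool schema derived from the function
print(convert_to_openai_tool(multiply))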

chatbot definition

llm_with_tools = llm.bind_tools(tools)
def assistant(state: MessagesState):
    sys_message = [SystemMessage(content="You are a helpful assistant tasked with performing arithmetic on a set of inputs. You are encouraged to call a function for each of the given question!")]
    return {'messages': [llm_with_tools.invoke(sys_message + state['messages'])]}
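
To isolate the model from the graph, a minimal check (my own sketch, not part of the tutorial) is to invoke the bound model directly and inspect tool_calls on the returned AIMessage:

# Bypass the graph entirely and ask the bound model directly
resp = llm_with_tools.invoke("What is 3 multiplied by 4?")
print(resp.tool_calls)  # a non-empty list means the model emitted a tool call

If this list is already empty, the problem lies in the model or its wrapper rather than in the graph wiring.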

graph building and compiling

builder = StateGraph(MessagesState)
builder.add_node('assistant', assistant)
builder.add_node('tools', ToolNode(tools))
builder.add_edge(START, 'assistant')
builder.add_conditional_edges('assistant', tools_condition)
builder.add_edge('tools', 'assistant')
state_graph = builder.compile()
messages = [HumanMessage(content='What is 99 multiplied by 99?')]
resp = state_graph.invoke({'messages': messages})
for m in resp['messages']:
    m.pretty_print()

With the given definitions and graph, below is the LLM's result when I turn on debugging mode:

{
  "generations": [
    [
      {
        "text": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December 2023\nToday Date: 31 Jan 2025\n\nYou are a helpful assistant tasked with performing arithmetic on a set of inputs. You are encouraged to call a function for each of the given question!<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWhat is 99 multiplied by 99?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nTo calculate 99 multiplied by 99, I'll use the built-in multiplication function.\n\n```python\nresult = 99 * 99\nprint(result)\n```\n\nThe result of 99 multiplied by 99 is: 9801",
        "generation_info": null,
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December 2023\nToday Date: 31 Jan 2025\n\nYou are a helpful assistant tasked with performing arithmetic on a set of inputs. You are encouraged to call a function for each of the given question!<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWhat is 99 multiplied by 99?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nTo calculate 99 multiplied by 99, I'll use the built-in multiplication function.\n\n```python\nresult = 99 * 99\nprint(result)\n```\n\nThe result of 99 multiplied by 99 is: 9801",
            "type": "ai",
            "id": "run-19f4ccaa-a9ba-46f1-905a-aab7d3455f02-0",
            "tool_calls": [],
            "invalid_tool_calls": []
          }
        }
      }
    ]
  ],
  "llm_output": null,
  "run": null,
  "type": "LLMResult"
}

As you can see, tool_calls is an empty list. Any idea what I'm doing wrong? Or is it a limitation of the LLM? AFAIK Llama 3.2 3B supports function/tool calling. Any explanation or help is very much appreciated.


2 Answers


I'm not sure I understand the exact problem. With the small sample below, a similar tool is called correctly on my side.

I'm running llama3.2:3b through Ollama:

ollama pull llama3.2:3b

Then I run this sample script in Python:

from langchain.schema import HumanMessage, SystemMessage
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.graph import StateGraph, MessagesState, START
from langchain_ollama import ChatOllama

def multiply(a: int, b: int) -> int:
    """Multiply a and b.

    Args:
        a: first int
        b: second int
    """
    print("TOOL:Multiplying", a, "and", b)
    return a * b

tools = [multiply]

llm = ChatOllama(
    model="llama3.2:3b",
    temperature=0,
)

llm_with_tools = llm.bind_tools(tools)

def assistant(state: MessagesState):
    print("\n=== Assistant Node ===")
    print("Input state:", state)
    sys_message = [SystemMessage(content="You are a helpful assistant tasked with performing arithmetic on a set of inputs. You are encouraged to call a function for each of the given question!")]
    response = llm_with_tools.invoke(sys_message + state['messages'])
    print("LLM Response:", response)
    return {'messages': [response]}

builder = StateGraph(MessagesState)
builder.add_node('assistant', assistant)
builder.add_node('tools', ToolNode(tools))
builder.add_edge(START, 'assistant')
builder.add_conditional_edges('assistant', tools_condition)
builder.add_edge('tools', 'assistant')
state_graph = builder.compile()

messages = [HumanMessage(content='What is 99 multiplied by 99?')]
resp = state_graph.invoke({'messages' : messages})
for m in resp['messages']:
    m.pretty_print()

If the response shows the model answering directly instead of calling the function, it means the model did not recognize that tool use was required. Try adjusting the system prompt.
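
For example, a more forceful instruction (just a sketch; the wording that works best will vary by model) might look like:

# Hypothetical, more explicit system prompt that forbids answering directly
sys_message = [SystemMessage(content=(
    "You are a calculator assistant. Never compute arithmetic yourself. "
    "For every question, respond ONLY by calling one of the provided tools "
    "(add, multiply, divide) with the appropriate arguments."
))]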
