PandasAI give exactly the same result with different models - Stack Overflow

asked by admin on 2025-04-20

I am using the Titanic dataset and want to use an LLM for data exploration.

I used pandasai with both OpenAI and a local model, and the responses are exactly the same. Is that really possible?

from pandasai import SmartDataframe

# OpenAI model
from pandasai.llm import OpenAI
model = OpenAI(api_token=OPENAI_KEY)

# Local model (note: use one model at a time; this assignment
# overwrites the OpenAI model above)
from pandasai.llm.local_llm import LocalLLM
model = LocalLLM(api_base='http://localhost:11434/v1', model='deepseek-r1')

Q1 = "What are the column names?"
Q2 = "What is the correlation score between age and fare?"

smd = SmartDataframe('Titanic-Dataset.csv',
                     config={'llm': model,
                             'max_tokens': 200,
                             'temperature': 0})
smd.chat(Q1)

Both models return: *The column names are: PassengerId, Survived, Pclass, Name, Sex, Age, SibSp, Parch, Ticket, Fare, Cabin, Embarked.*
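One reason identical answers are plausible: pandasai asks the LLM to generate pandas code and then executes that code against the dataframe, so the final answer comes from a deterministic pandas computation rather than from free-form model sampling; with `temperature=0`, different models can easily emit equivalent code. A minimal sketch of the pandas computations that would answer Q1 and Q2 directly, using a hypothetical miniature dataframe in place of the real CSV:

```python
import pandas as pd

# Hypothetical miniature stand-in for Titanic-Dataset.csv; the real file
# has columns PassengerId, Survived, Pclass, Name, Sex, Age, SibSp,
# Parch, Ticket, Fare, Cabin, Embarked.
df = pd.DataFrame({
    "Age":  [22.0, 38.0, 26.0, 35.0, 54.0],
    "Fare": [7.25, 71.28, 7.93, 8.05, 51.86],
})

# Q1: the column names come straight from the frame; no model is involved
print(list(df.columns))  # ['Age', 'Fare']

# Q2: Pearson correlation is a deterministic pandas computation,
# so any correctly generated code yields the same number
print(df["Age"].corr(df["Fare"]))
```

Whichever LLM is used, as long as both generate code equivalent to `df.columns` and `df["Age"].corr(df["Fare"])`, the executed results will match exactly.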
