Messages and chat history

PydanticAI provides access to the messages exchanged during an agent run. These messages can be used both to continue a coherent conversation and to understand how the agent performed.

Accessing Messages from Results

After running an agent, you can access the messages exchanged during that run from the result object.

Both RunResult (returned by Agent.run, Agent.run_sync) and StreamedRunResult (returned by Agent.run_stream) have the following methods:

all_messages(): returns all messages, including messages from prior runs
new_messages(): returns only the messages from the current run

StreamedRunResult and complete messages

On StreamedRunResult, the messages returned from these methods will only include the final result message once the stream has finished.

E.g. once you've awaited one of the result's streaming coroutines, such as StreamedRunResult.stream_text(), which is used in the example below.

Note: if you use .stream_text(delta=True), the final result message will not be added to the result messages, as in this case the result content is never built as one string.
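
For illustration, here is a minimal sketch (assuming the same agent as in the examples below) of streaming with delta=True; since only text deltas are streamed, no final response message is appended to the run's messages:

from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')


async def main():
    async with agent.run_stream('Tell me a joke.') as result:
        # stream only the incremental text chunks rather than the accumulated text
        async for delta in result.stream_text(delta=True):
            print(delta)
        # because the text was streamed as deltas, the final ModelResponse with the
        # complete text is not added to the run's messages
        print(result.all_messages())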

Example of accessing methods on a RunResult:

run_result_messages.py
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')

result = agent.run_sync('Tell me a joke.')
print(result.data)
#> Did you hear about the toothpaste scandal? They called it Colgate.

# all messages from the run
print(result.all_messages())
"""
[
    ModelRequest(
        parts=[
            SystemPromptPart(
                content='Be a helpful assistant.',
                timestamp=datetime.datetime(...),
                dynamic_ref=None,
                part_kind='system-prompt',
            ),
            UserPromptPart(
                content='Tell me a joke.',
                timestamp=datetime.datetime(...),
                part_kind='user-prompt',
            ),
        ],
        kind='request',
    ),
    ModelResponse(
        parts=[
            TextPart(
                content='Did you hear about the toothpaste scandal? They called it Colgate.',
                part_kind='text',
            )
        ],
        model_name='gpt-4o',
        timestamp=datetime.datetime(...),
        kind='response',
    ),
]
"""
(This example is complete, it can be run 'as is')

Example of accessing methods on a StreamedRunResult:

streamed_run_result_messages.py
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')


async def main():
    async with agent.run_stream('Tell me a joke.') as result:
        # incomplete messages before the stream finishes
        print(result.all_messages())
        """
        [
            ModelRequest(
                parts=[
                    SystemPromptPart(
                        content='Be a helpful assistant.',
                        timestamp=datetime.datetime(...),
                        dynamic_ref=None,
                        part_kind='system-prompt',
                    ),
                    UserPromptPart(
                        content='Tell me a joke.',
                        timestamp=datetime.datetime(...),
                        part_kind='user-prompt',
                    ),
                ],
                kind='request',
            )
        ]
        """

        async for text in result.stream_text():
            print(text)
            #> Did you hear
            #> Did you hear about the toothpaste
            #> Did you hear about the toothpaste scandal? They called
            #> Did you hear about the toothpaste scandal? They called it Colgate.

        # complete messages once the stream finishes
        print(result.all_messages())
        """
        [
            ModelRequest(
                parts=[
                    SystemPromptPart(
                        content='Be a helpful assistant.',
                        timestamp=datetime.datetime(...),
                        dynamic_ref=None,
                        part_kind='system-prompt',
                    ),
                    UserPromptPart(
                        content='Tell me a joke.',
                        timestamp=datetime.datetime(...),
                        part_kind='user-prompt',
                    ),
                ],
                kind='request',
            ),
            ModelResponse(
                parts=[
                    TextPart(
                        content='Did you hear about the toothpaste scandal? They called it Colgate.',
                        part_kind='text',
                    )
                ],
                model_name='gpt-4o',
                timestamp=datetime.datetime(...),
                kind='response',
            ),
        ]
        """
(This example is complete, it can be run 'as is'; you'll need to add asyncio.run(main()) to run main)

Using Messages as Input for Further Agent Runs

The primary use of message histories in PydanticAI is to maintain context across multiple agent runs.

To use existing messages in a run, pass them to the message_history parameter of Agent.run, Agent.run_sync or Agent.run_stream.

If message_history is set and not empty, a new system prompt is not generated; we assume the existing message history includes a system prompt.

Reusing messages in a conversation
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')

result1 = agent.run_sync('Tell me a joke.')
print(result1.data)
#> Did you hear about the toothpaste scandal? They called it Colgate.

result2 = agent.run_sync('Explain?', message_history=result1.new_messages())
print(result2.data)
#> This is an excellent joke invented by Samuel Colvin, it needs no explanation.

print(result2.all_messages())
"""
[
    ModelRequest(
        parts=[
            SystemPromptPart(
                content='Be a helpful assistant.',
                timestamp=datetime.datetime(...),
                dynamic_ref=None,
                part_kind='system-prompt',
            ),
            UserPromptPart(
                content='Tell me a joke.',
                timestamp=datetime.datetime(...),
                part_kind='user-prompt',
            ),
        ],
        kind='request',
    ),
    ModelResponse(
        parts=[
            TextPart(
                content='Did you hear about the toothpaste scandal? They called it Colgate.',
                part_kind='text',
            )
        ],
        model_name='gpt-4o',
        timestamp=datetime.datetime(...),
        kind='response',
    ),
    ModelRequest(
        parts=[
            UserPromptPart(
                content='Explain?',
                timestamp=datetime.datetime(...),
                part_kind='user-prompt',
            )
        ],
        kind='request',
    ),
    ModelResponse(
        parts=[
            TextPart(
                content='This is an excellent joke invented by Samuel Colvin, it needs no explanation.',
                part_kind='text',
            )
        ],
        model_name='gpt-4o',
        timestamp=datetime.datetime(...),
        kind='response',
    ),
]
"""
(This example is complete, it can be run 'as is')
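
The message_history parameter can be used the same way with streamed runs. Here is a minimal sketch (reusing the agent from the example above) that continues the conversation inside Agent.run_stream:

from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')

result1 = agent.run_sync('Tell me a joke.')


async def main():
    # pass the earlier messages so the streamed run keeps the conversation context
    async with agent.run_stream(
        'Explain?', message_history=result1.new_messages()
    ) as result2:
        async for text in result2.stream_text():
            print(text)

(As before, you'll need to add asyncio.run(main()) to run main.)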

Storing and Loading Messages (to JSON)

While maintaining conversation state in memory is enough for many applications, you may often want to store the message history of an agent run on disk or in a database. This might be for evals, for sharing data between Python and JavaScript/TypeScript, or any number of other use cases.

The intended way to do this is to use a TypeAdapter.

We export ModelMessagesTypeAdapter that can be used for this purpose, or you can create your own.

Here's an example showing how:

Serializing messages to JSON
from pydantic_core import to_jsonable_python

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessagesTypeAdapter  # (1)!

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')

result1 = agent.run_sync('Tell me a joke.')
history_step_1 = result1.all_messages()
as_python_objects = to_jsonable_python(history_step_1)  # (2)!
same_history_as_step_1 = ModelMessagesTypeAdapter.validate_python(as_python_objects)

result2 = agent.run_sync(  # (3)!
    'Tell me a different joke.', message_history=same_history_as_step_1
)
  1. Alternatively, you can create a TypeAdapter from scratch:
    from pydantic import TypeAdapter
    from pydantic_ai.messages import ModelMessage
    ModelMessagesTypeAdapter = TypeAdapter(list[ModelMessage])
    
  2. Alternatively, you can serialize to/from JSON directly:
    from pydantic_core import to_json
    ...
    as_json_objects = to_json(history_step_1)
    same_history_as_step_1 = ModelMessagesTypeAdapter.validate_json(as_json_objects)
    
  3. You can now continue the conversation using the history same_history_as_step_1, even though a new agent run is created.

(This example is complete, it can be run 'as is')
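
As a concrete use of the JSON round trip, here is a minimal sketch that writes a run's history to a file with to_json and restores it with ModelMessagesTypeAdapter.validate_json before a later run; the file name is just an illustration:

from pathlib import Path

from pydantic_core import to_json

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessagesTypeAdapter

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')

result1 = agent.run_sync('Tell me a joke.')

# serialize the run's messages to JSON bytes and write them to disk
history_path = Path('chat_history.json')  # illustrative file name
history_path.write_bytes(to_json(result1.all_messages()))

# later, possibly in another process: load and validate the stored messages
restored_history = ModelMessagesTypeAdapter.validate_json(history_path.read_bytes())

result2 = agent.run_sync('Tell me a different joke.', message_history=restored_history)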

Other ways of using messages

Since messages are defined by simple dataclasses, you can manually create and manipulate them, e.g. for testing.

The message format is independent of the model used, so you can use messages in different agents, or the same agent with different models.

In the example below, we reuse the messages from the first agent run, which uses the openai:gpt-4o model, in a second agent run that uses the google-gla:gemini-1.5-pro model.

Reusing messages with a different model
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')

result1 = agent.run_sync('Tell me a joke.')
print(result1.data)
#> Did you hear about the toothpaste scandal? They called it Colgate.

result2 = agent.run_sync(
    'Explain?',
    model='google-gla:gemini-1.5-pro',
    message_history=result1.new_messages(),
)
print(result2.data)
#> This is an excellent joke invented by Samuel Colvin, it needs no explanation.

print(result2.all_messages())
"""
[
    ModelRequest(
        parts=[
            SystemPromptPart(
                content='Be a helpful assistant.',
                timestamp=datetime.datetime(...),
                dynamic_ref=None,
                part_kind='system-prompt',
            ),
            UserPromptPart(
                content='Tell me a joke.',
                timestamp=datetime.datetime(...),
                part_kind='user-prompt',
            ),
        ],
        kind='request',
    ),
    ModelResponse(
        parts=[
            TextPart(
                content='Did you hear about the toothpaste scandal? They called it Colgate.',
                part_kind='text',
            )
        ],
        model_name='gpt-4o',
        timestamp=datetime.datetime(...),
        kind='response',
    ),
    ModelRequest(
        parts=[
            UserPromptPart(
                content='Explain?',
                timestamp=datetime.datetime(...),
                part_kind='user-prompt',
            )
        ],
        kind='request',
    ),
    ModelResponse(
        parts=[
            TextPart(
                content='This is an excellent joke invented by Samuel Colvin, it needs no explanation.',
                part_kind='text',
            )
        ],
        model_name='gemini-1.5-pro',
        timestamp=datetime.datetime(...),
        kind='response',
    ),
]
"""

Examples

For a more complete example of using messages in conversations, see the chat app example.