liblloyal 1.0.0
Branched Inference for llama.cpp
Result from parsing model output.
#include <lloyal/chat_out.hpp>
Public Attributes

| std::string | content | Main response text (visible to user) |
| std::string | reasoning_content | Extracted thinking/reasoning blocks (empty if none) |
| std::vector<ToolCall> | tool_calls | Extracted tool calls (empty array if none) |
Result from parsing model output.
For thinking models (e.g. Qwen3), reasoning_content contains text from <think>...</think> blocks while content contains the visible response. Store both fields separately in your message history so that chat_in::format() can reconstruct the template correctly on subsequent turns.
Definition at line 77 of file chat_out.hpp.
std::string lloyal::chat_out::ParseResult::content
Main response text (visible to user)
Definition at line 78 of file chat_out.hpp.
std::string lloyal::chat_out::ParseResult::reasoning_content
Extracted thinking/reasoning blocks (empty if none)
Definition at line 79 of file chat_out.hpp.
std::vector<ToolCall> lloyal::chat_out::ParseResult::tool_calls
Extracted tool calls (empty array if none)
Definition at line 80 of file chat_out.hpp.