liblloyal 1.0.0
Branched Inference for llama.cpp
chat_out.hpp File Reference

Chat Output Parsing.
#include "common.hpp"
#include <llama/llama.h>
#include <chat.h>
#include <peg-parser.h>
#include <exception>
#include <string>
#include <vector>
Classes

struct lloyal::chat_out::ToolCall
    A single tool call extracted from model output.
struct lloyal::chat_out::ParseResult
    Result from parsing model output.
Namespaces

namespace lloyal
    Boundary Tracker Stub for OSS liblloyal.
namespace lloyal::chat_out
    Chat output parsing (tool calls, reasoning, content).
Functions

ParseResult lloyal::chat_out::parse(const std::string &output, common_chat_format format, common_reasoning_format reasoning_format = COMMON_REASONING_FORMAT_NONE, bool is_partial = false, bool thinking_forced_open = false, const std::string &parser_data = "")
    Parse model output with explicit format.

ParseResult lloyal::chat_out::parse(const llama_model *model, const std::string &output, bool is_partial = false)
    Parse model output with auto-detected format from model template.
Detailed Description

Chat Output Parsing.
Wraps llama.cpp's common_chat_parse() to extract structured content from model output: plain text, reasoning/thinking content, and tool calls. Stateless — each call is independent.
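A minimal usage sketch of the explicit-format overload. The member names on `ParseResult` (`content`, `reasoning`, `tool_calls`) and `ToolCall` (`name`, `arguments`) are assumptions inferred from the brief descriptions above, not confirmed signatures; `COMMON_CHAT_FORMAT_CONTENT_ONLY` is a format value from llama.cpp's chat API:

```cpp
#include <iostream>
#include <string>
// #include "chat_out.hpp"  // declares lloyal::chat_out::parse

// Hypothetical caller; exact ParseResult/ToolCall member names are assumed.
void handle_output(const std::string &raw) {
    using namespace lloyal::chat_out;

    // Explicit format, no reasoning extraction, complete (non-partial) output.
    ParseResult res = parse(raw, COMMON_CHAT_FORMAT_CONTENT_ONLY);

    std::cout << "content: " << res.content << "\n";
    for (const ToolCall &tc : res.tool_calls) {
        std::cout << "tool: " << tc.name << " args: " << tc.arguments << "\n";
    }
}
```

Because parsing is stateless, the same call can be repeated on growing output with `is_partial=true` during streaming, then once more with `is_partial=false` on the final text.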
Typically paired with chat_in::format(), which renders the chat messages into the prompt whose completion is parsed here.
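A sketch of that round trip, using the auto-detecting `parse(model, output)` overload listed above. The `chat_in::format()` signature and the `generate()` helper are hypothetical placeholders for illustration, not part of this header:

```cpp
// Hypothetical round trip (signatures of chat_in::format and generate
// are assumptions for illustration):
//   1. chat_in::format() builds the prompt from chat messages,
//   2. your sampling loop produces raw model output,
//   3. chat_out::parse() structures that output using the model's template.
std::string prompt = lloyal::chat_in::format(model, messages);
std::string output = generate(ctx, prompt);         // your sampling loop
auto res = lloyal::chat_out::parse(model, output);  // format auto-detected
if (!res.tool_calls.empty()) {
    // dispatch tool calls, then feed results back through chat_in::format()
}
```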
Definition in file chat_out.hpp.