liblloyal 1.0.0
Branched Inference for llama.cpp
Chat Input Formatting with Full Format Awareness. More...
#include "common.hpp"
#include "tokenizer.hpp"
#include <llama/llama.h>
#include <chat.h>
#include <nlohmann/json.hpp>
#include <algorithm>
#include <exception>
#include <string>
#include <vector>
Go to the source code of this file.
Classes | |
| struct | lloyal::chat_in::FormatInputs |
| Input parameters for chat formatting. More... | |
| struct | lloyal::chat_in::FormatResult |
| Result from chat template formatting with full format awareness. More... | |
Namespaces | |
| namespace | lloyal |
| Boundary Tracker Stub for OSS liblloyal. | |
| namespace | lloyal::chat_in |
| Chat input formatting with full format awareness. | |
Functions | |
| FormatResult | lloyal::chat_in::format (const llama_model *model, const FormatInputs &inputs) |
| Format chat messages using model's chat template with full format awareness. | |
| bool | lloyal::chat_in::validate (const std::string &template_str) |
| Validate chat template syntax. | |
| std::vector< llama_token > | lloyal::chat_in::fallback_to_eog (const llama_model *model) |
| Get EOG token as fallback when template parsing fails. | |
| std::string | lloyal::chat_in::get_token_safe (const llama_model *model, llama_token token) |
| Get token text safely. | |
| std::vector< llama_token > | lloyal::chat_in::get_turn_separator (const llama_model *model) |
| Get turn separator tokens for the model's chat template. | |
Chat Input Formatting with Full Format Awareness.
Provides high-level chat template processing that passes through all format-awareness fields (tools, grammar, reasoning) and returns all output fields from common_chat_params. This enables callers to use format-aware grammar constraining and output parsing.
Uses llama.cpp's common library (chat.h) for template processing.
Definition in file chat_in.hpp.