liblloyal 1.0.0
Composable primitives for llama.cpp inference
lloyal::ChatTemplateResult Struct Reference

Result from complete chat template processing.
#include <helpers.hpp>
Public Attributes

  std::string prompt
      Formatted chat prompt ready for tokenization.

  std::vector<std::string> additional_stops
      Template-specific stop tokens (e.g., "<|im_end|>", "<|eot_id|>").
Detailed Description

Result from complete chat template processing.

Contains the formatted prompt and the dynamically detected stop tokens specific to the model's chat template (ChatML, Llama-3, etc.).
Definition at line 113 of file helpers.hpp.
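
The struct's shape follows directly from the member listing above; a minimal sketch of the declaration, reconstructed from this page rather than copied from helpers.hpp, so ordering and qualifiers may differ:

    #include <string>
    #include <vector>

    namespace lloyal {

    struct ChatTemplateResult {
      std::string prompt;                        // formatted chat prompt, ready for tokenization
      std::vector<std::string> additional_stops; // template-specific stop tokens, e.g. "<|im_end|>"
    };

    } // namespace lloyal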
Member Data Documentation

std::vector<std::string> lloyal::ChatTemplateResult::additional_stops

Template-specific stop tokens (e.g., "<|im_end|>", "<|eot_id|>").
Definition at line 115 of file helpers.hpp.
std::string lloyal::ChatTemplateResult::prompt
Formatted chat prompt ready for tokenization.
Definition at line 114 of file helpers.hpp.
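
To show how the two fields are typically consumed together, here is a hedged usage sketch. The producing call is left abstract because this page does not name liblloyal's template-processing function; only the two documented fields are taken from the source. The stop check itself is plain C++17:

    #include <string>
    #include <string_view>
    #include <vector>

    // Returns true once `text` ends with any template-specific stop token,
    // signalling that generation should halt.
    bool hit_stop(std::string_view text, const std::vector<std::string>& stops) {
      for (const auto& s : stops) {
        if (text.size() >= s.size() &&
            text.compare(text.size() - s.size(), s.size(), s) == 0) {
          return true;
        }
      }
      return false;
    }

    // Sketch of the surrounding decode loop (assumed shape, not liblloyal API):
    //   lloyal::ChatTemplateResult r = /* complete chat template processing */;
    //   tokenize(r.prompt);                // prompt is ready for tokenization
    //   while (/* sampling */) {
    //     accumulated += next_piece;
    //     if (hit_stop(accumulated, r.additional_stops)) break;  // e.g. "<|im_end|>"
    //   }

Because additional_stops is detected per template (ChatML vs. Llama-3, etc.), checking it alongside the model's built-in EOS token keeps generation from running past the turn boundary.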