liblloyal 1.0.0
Composable primitives for llama.cpp inference
lloyal::chat_template Namespace Reference

Classes

struct  FormatResult
 Result from chat template formatting. NOTE: Named FormatResult, NOT ChatTemplateResult. More...
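
 As an illustrative sketch only (member names below are assumptions, not taken from this page; see chat_template.hpp for the actual definition), FormatResult bundles the formatted prompt and stop tokens returned by format():

     #include <string>
     #include <vector>

     // Hypothetical shape of FormatResult -- field names are illustrative.
     struct FormatResult {
         std::string prompt;                    // fully formatted prompt text
         std::vector<std::string> stop_tokens;  // stop sequences implied by the chat template
     };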
 

Functions

FormatResult format (const llama_model *model, const std::string &messages_json, const std::string &template_override="")
 Format chat messages using model's chat template with fallback.
 
bool validate (const std::string &template_str)
 Validate chat template syntax.
 

Function Documentation

◆ format()

FormatResult lloyal::chat_template::format ( const llama_model * model,
                                             const std::string & messages_json,
                                             const std::string & template_override = "" )
inline

Format chat messages using model's chat template with fallback.

Orchestration logic:

  1. Calls format_chat_template_complete() from helpers.hpp
  2. If template processing fails (empty prompt), falls back to simple format
  3. Handles JSON parsing errors

Fallback hierarchy:

  1. template_override (if provided)
  2. Model's built-in template
  3. ChatML template
  4. Simple "role: content" format (added by this layer)
Parameters
    model              Llama model (for template and vocab)
    messages_json      JSON string with messages array
    template_override  Optional custom template
Returns
FormatResult with formatted prompt and stop tokens

Definition at line 57 of file chat_template.hpp.
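
Example (illustrative sketch): the snippet assumes the caller already holds a loaded llama_model* from llama.cpp, that the header is included as chat_template.hpp as listed above, and that FormatResult exposes the formatted prompt through a member named prompt (a hypothetical name; check the struct definition).

    #include <iostream>
    #include <string>

    #include "chat_template.hpp"  // assumed include path for lloyal::chat_template

    void build_prompt(const llama_model *model) {
        // Messages in the usual OpenAI-style JSON shape.
        const std::string messages_json = R"([
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user",   "content": "Hello!"}
        ])";

        // No template_override: format() falls back through the model's built-in
        // template, then ChatML, then the simple "role: content" format.
        auto result = lloyal::chat_template::format(model, messages_json);

        // `prompt` is an illustrative member name for the formatted prompt
        // described under Returns above.
        std::cout << result.prompt << std::endl;
    }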

◆ validate()

bool lloyal::chat_template::validate ( const std::string & template_str )
inline

Validate chat template syntax.

Calls validate_chat_template_helper() from helpers.hpp. Does NOT require a model (syntax-only validation).

Parameters
    template_str  Template string to validate
Returns
True if syntax is valid, false otherwise (never throws)

Definition at line 130 of file chat_template.hpp.
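
Example (illustrative sketch, syntax-only; no model is required; the include path is assumed):

    #include <iostream>
    #include <string>

    #include "chat_template.hpp"  // assumed include path for lloyal::chat_template

    int main() {
        const std::string tmpl =
            "{% for message in messages %}"
            "{{ message.role }}: {{ message.content }}\n"
            "{% endfor %}";

        // validate() never throws; it returns false for malformed templates.
        if (lloyal::chat_template::validate(tmpl)) {
            std::cout << "template syntax OK\n";
        } else {
            std::cout << "template syntax invalid\n";
        }
        return 0;
    }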