lloyal.node API Reference - v1.0.7

    Interface ContextOptions

    Options for creating an inference context

    interface ContextOptions {
        modelPath: string;
        embeddings?: boolean;
        nCtx?: number;
        nSeqMax?: number;
        nThreads?: number;
        poolingType?: PoolingType;
    }
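Since `ContextOptions` is a plain interface, it can be satisfied by an ordinary object literal; only `modelPath` is required. The sketch below re-declares the interface so it is self-contained, and the model path is hypothetical.

```typescript
// Self-contained sketch: in real use, ContextOptions would be imported
// from lloyal.node rather than re-declared here.
type PoolingType = unknown; // stand-in for the library's PoolingType enum

interface ContextOptions {
    modelPath: string;
    embeddings?: boolean;
    nCtx?: number;
    nSeqMax?: number;
    nThreads?: number;
    poolingType?: PoolingType;
}

// Minimal options: every optional field falls back to its default
// (nCtx: 2048, nSeqMax: 1, nThreads: 4, embeddings: false).
const generationOptions: ContextOptions = {
    modelPath: "./models/example.Q4_K_M.gguf", // hypothetical path
};

// Explicit overrides: a larger context and multiple independent
// KV cache sequences for parallel decoding or branching.
const branchingOptions: ContextOptions = {
    modelPath: "./models/example.Q4_K_M.gguf",
    nCtx: 8192,
    nSeqMax: 4, // four independent sequences
    nThreads: 8,
};
```

Passing the minimal object and letting defaults apply is usually preferable to restating them, since the defaults shown here come from the reference above.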
    Properties

    modelPath: string

    Path to the .gguf model file

    embeddings?: boolean

    Enable embedding extraction mode

    When true, context is optimized for embedding extraction. Use with encode() and getEmbeddings() methods. Default: false (text generation mode)

    nCtx?: number

    Context size in tokens (default: 2048)

    nSeqMax?: number

    Maximum number of sequences for multi-sequence support

    Set > 1 to enable multiple independent KV cache sequences. Useful for parallel decoding or conversation branching. Default: 1 (single sequence)

    nThreads?: number

    Number of CPU threads used for inference (default: 4)

    poolingType?: PoolingType

    Pooling type for embedding extraction

    Only relevant when embeddings=true. Default: MEAN for embedding contexts, NONE otherwise
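For an embedding-extraction context, the sketch below sets `embeddings: true` and omits `poolingType` so it falls back to MEAN, the default for embedding contexts noted above. The interface is re-declared to keep the sketch self-contained; the model path is hypothetical, and the `PoolingType` stand-in only mirrors the two defaults named in this reference.

```typescript
// Self-contained sketch: ContextOptions re-declared from the reference above.
type PoolingType = "MEAN" | "NONE"; // stand-in; the real enum lives in lloyal.node

interface ContextOptions {
    modelPath: string;
    embeddings?: boolean;
    nCtx?: number;
    nSeqMax?: number;
    nThreads?: number;
    poolingType?: PoolingType;
}

// Embedding-extraction context: with embeddings enabled, the context is
// meant to be used with the encode() and getEmbeddings() methods rather
// than for text generation. poolingType is omitted, so MEAN applies.
const embeddingOptions: ContextOptions = {
    modelPath: "./models/embedding-model.Q8_0.gguf", // hypothetical path
    embeddings: true,
    nCtx: 512, // embedding inputs are typically short, so a small context suffices
};
```

Whether a context is created for generation or for embeddings is fixed by this flag at creation time, so separate contexts are needed when both modes are required.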