Context
❓What is Context and Why Is It So Important to Understand?
Context refers to the information that defines how a Char behaves and responds during a conversation. It includes elements like the Char Template, Included Conversation, and recent chat messages.
Understanding context is essential because:
- It drives behavior: A strong template and example messages set the tone for how the Char interprets input and crafts responses.
- It's dynamic: You can update context on the fly, by editing messages or changing the template, to shift the Char's personality, mood, or purpose instantly.
- It unlocks creativity and control: Mastering context gives you powerful control over how your Char sounds, thinks, and interacts — whether you're creating a helpful assistant or a roleplay character.
Just like we need context to understand situations, so does AI. Context is all you need!
Use tools like Inspect Context to see exactly what the model is using at any given time. Alternatively, you can use the Debug window in the bottom-right corner (you also need to enable it via → Chat → Debug Context).
The Role of the AI Model
The AI model powering the Char has a Maximum Context Size (see Presets), which limits how much information it can process at once. If the current context exceeds this limit, the oldest messages get trimmed (they may be transferred into Memory, if that feature is activated).
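The trimming behavior can be sketched roughly like this. This is a simplified illustration only, not the app's actual implementation: the function names and the 4-characters-per-token estimate are assumptions.

```python
from collections import deque

def estimate_tokens(text: str) -> int:
    # Rough heuristic: about 4 characters per token for English text.
    return max(1, len(text) // 4)

def build_context(template: str, messages: list[str], max_tokens: int) -> list[str]:
    """Keep the template plus as many recent messages as fit the token budget."""
    budget = max_tokens - estimate_tokens(template)
    kept = deque()
    for msg in reversed(messages):  # walk from the newest message backwards
        cost = estimate_tokens(msg)
        if cost > budget:
            break  # this and all older messages are trimmed from the context
        kept.appendleft(msg)
        budget -= cost
    return [template] + list(kept)

# With a tiny budget, the long old message is dropped and only recent ones survive:
msgs = ["old message " * 10, "recent", "newest"]
ctx = build_context("You are a helpful Char.", msgs, max_tokens=20)
```

Here the template always stays in context, while history is cut from the oldest end, which mirrors the behavior described above.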
Why don't we just set a higher Context Size in the Presets then?
Using more context comes at a cost:
- Slower performance
- Higher memory use
- Model-dependent: Output quality can degrade with longer context, and models are designed and trained to handle a certain maximum context length, so a larger size may not be supported.
Always choose a context size that fits your computer's capabilities and the AI model's limits.
How are Tokens related to this?
Context is measured in tokens because AI models don't directly process raw text. Instead, tokens are the format the model actually works with (numeric IDs). Text is first converted into tokens, then processed, and finally converted back into human-readable text.
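The text → tokens → text round trip can be illustrated with a toy word-level tokenizer. This is purely a teaching sketch: real models use subword schemes such as BPE, and these vocabulary IDs are made up.

```python
# Toy word-level tokenizer; real models use subword tokenization (e.g. BPE).
vocab = {"Hello": 0, "world": 1, "!": 2}
id_to_word = {i: w for w, i in vocab.items()}

def encode(words: list[str]) -> list[int]:
    # Text is converted into numeric token IDs, the format the model works with.
    return [vocab[w] for w in words]

def decode(ids: list[int]) -> str:
    # After processing, token IDs are converted back into human-readable text.
    return " ".join(id_to_word[i] for i in ids)

tokens = encode(["Hello", "world", "!"])  # numeric IDs, e.g. [0, 1, 2]
text = decode(tokens)
```

Counting context in tokens rather than characters is why two messages of the same length on screen can cost different amounts of context.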
Almost anything can be broken down into tokens; that's why different models can handle different kinds of input, like text, images, audio, video, and more.