compass.validation.content.ParseChunksWithMemory
- class ParseChunksWithMemory(text_chunks, num_to_recall=2)
Bases: object
Iterate through text chunks while caching prior LLM decisions.
This helper stores an in-memory cache of prior validation results so each chunk can optionally reuse outcomes from earlier LLM calls. The design supports revisiting a configurable number of preceding text chunks when newer chunks lack sufficient context.
- Parameters:
text_chunks (list of str) – List of strings, each of which represents a chunk of text. The order of the strings should be the order of the text chunks. This validator may refer to previous text chunks to answer validation questions.
num_to_recall (int, optional) – Number of chunks to check for each validation call. This count includes the original chunk. For example, if num_to_recall=2, the validator first checks the chunk at the requested index and then the previous chunk as well. By default, 2.
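The recall window implied by num_to_recall can be sketched as a small helper (illustrative only; recall_indices is a hypothetical name, not part of the API, and the window is assumed to clamp at the first chunk):

```python
def recall_indices(ind, num_to_recall):
    """Indices consulted for a validation call at chunk `ind`, newest first."""
    # Walk backward from `ind`, visiting at most `num_to_recall` chunks,
    # and never stepping below index 0.
    return list(range(ind, max(ind - num_to_recall, -1), -1))
```

Under this reading, recall_indices(3, 2) yields [3, 2], while recall_indices(0, 2) yields just [0] because there is no earlier chunk to fall back on.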
Methods
parse_from_ind(ind, key, llm_call_callback)
Validate a chunk by consulting current and prior context
- async parse_from_ind(ind, key, llm_call_callback)
Validate a chunk by consulting current and prior context
Cached verdicts are reused to avoid redundant LLM calls when neighboring chunks have already been assessed. If the cache lacks a verdict, the callback is executed and the result stored.
- Parameters:
ind (int) – Index of the chunk to inspect. Must be less than the number of available chunks.
key (str) – JSON key expected in the LLM response. The same key is used to populate the decision cache.
llm_call_callback (callable) – Awaitable invoked with (key, text_chunk) that returns a boolean indicating whether the chunk satisfies the LLM validation check.
- Returns:
bool – True if the selected or recalled chunk satisfies the check, False otherwise.
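A minimal sketch of the cache-and-recall flow described above, assuming a per-key cache of boolean verdicts (ParseChunksWithMemorySketch and looks_like_county_text are hypothetical stand-ins, not the library's implementation or a real LLM call):

```python
import asyncio

class ParseChunksWithMemorySketch:
    """Illustrative re-implementation of the caching/recall behavior."""

    def __init__(self, text_chunks, num_to_recall=2):
        self.text_chunks = text_chunks
        self.num_to_recall = num_to_recall
        self._cache = {}  # cache[key][ind] -> bool verdict for chunk `ind`

    async def parse_from_ind(self, ind, key, llm_call_callback):
        cache = self._cache.setdefault(key, {})
        # Check the requested chunk first, then up to num_to_recall - 1
        # preceding chunks, stopping at the first chunk that passes.
        for check_ind in range(ind, max(ind - self.num_to_recall, -1), -1):
            if check_ind not in cache:
                cache[check_ind] = await llm_call_callback(
                    key, self.text_chunks[check_ind]
                )
            if cache[check_ind]:
                return True
        return False


calls = []

async def looks_like_county_text(key, chunk):
    # Stand-in for a real LLM call; records each invocation.
    calls.append(chunk)
    return "county" in chunk

async def demo():
    parser = ParseChunksWithMemorySketch(
        ["The county adopted an ordinance.", "Setbacks apply to turbines."],
        num_to_recall=2,
    )
    first = await parser.parse_from_ind(0, "county", looks_like_county_text)
    # Chunk 1 fails on its own, but the cached verdict for chunk 0 rescues it,
    # so the callback runs only once per chunk overall.
    second = await parser.parse_from_ind(1, "county", looks_like_county_text)
    return first, second, len(calls)

first, second, n_llm_calls = asyncio.run(demo())
```

Here both calls return True, yet the callback fires only twice in total, illustrating how cached verdicts from neighboring chunks avoid redundant LLM calls.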