optimize_prompt()
Optimize a prompt for reduced token usage.
Usage
optimize_prompt(
    prompt,
    *,
    level=OptimizationLevel.MODERATE,
)

Applies text-level compression techniques that preserve semantic meaning while reducing token count. Useful when targeting models with small context windows (8K–32K).
Parameters
prompt : str | PromptBuilder
    A prompt string or a talk_box.prompt_builder.PromptBuilder instance. If a PromptBuilder, its built output is used.

level : str | OptimizationLevel = OptimizationLevel.MODERATE
    How aggressively to compress. "light" is formatting-only, "moderate" (default) condenses structure, and "aggressive" maximizes compression.
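The source does not spell out which transformations each level applies, but the tiering can be illustrated with a rough sketch in plain Python. This is illustrative only; `compress` below is a hypothetical stand-in, not talk_box's internal logic:

```python
import re

def compress(text: str, level: str = "moderate") -> str:
    """Illustrative sketch of tiered prompt compression (not talk_box internals)."""
    # "light": formatting only -- trim each line and collapse runs of spaces/tabs.
    out = "\n".join(line.strip() for line in text.splitlines())
    out = re.sub(r"[ \t]+", " ", out)
    if level == "light":
        return out
    # "moderate": additionally condense structure -- drop blank lines.
    out = "\n".join(line for line in out.splitlines() if line)
    if level == "moderate":
        return out
    # "aggressive": maximize compression -- flatten everything onto one line.
    return " ".join(out.splitlines())

prompt = "Please  analyze:\n\n  - revenue\n  - churn\n"
for lvl in ("light", "moderate", "aggressive"):
    print(lvl, len(compress(prompt, lvl)))
```

Each level subsumes the previous one, so output length is monotonically non-increasing from "light" to "aggressive".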
Returns
OptimizeResult
    The optimized text with before/after token counts.
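The reference does not enumerate OptimizeResult's fields; a minimal sketch consistent with the attributes used in the example below (`text`, `reduction_pct`) might look like the following. The field names `tokens_before` and `tokens_after` are assumptions, not talk_box's documented API:

```python
from dataclasses import dataclass

@dataclass
class OptimizeResult:
    """Sketch of the result type; only `text` and `reduction_pct`
    appear in the docs -- the token-count field names are assumed."""
    text: str            # the optimized prompt text
    tokens_before: int   # token count of the input (assumed field name)
    tokens_after: int    # token count after optimization (assumed field name)

    @property
    def reduction_pct(self) -> float:
        # Percentage of tokens saved relative to the input.
        if self.tokens_before == 0:
            return 0.0
        return 100.0 * (self.tokens_before - self.tokens_after) / self.tokens_before

r = OptimizeResult(text="Analyze sales.", tokens_before=200, tokens_after=150)
print(f"Saved {r.reduction_pct:.0f}% tokens")
```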
Examples
import talk_box as tb

# Build a structured prompt, then optimize it.
builder = (
    tb.PromptBuilder()
    .persona("analyst", "data science")
    .task_context("Analyze sales data")
    .constraint("Be concise")
    .example("Q: Revenue?", "A: $1.2M")
)

result = tb.optimize_prompt(builder)
print(f"Saved {result.reduction_pct:.0f}% tokens")
print(result.text)