
Caching and result reuse in agentic AI

Introduction

Caching saves time by storing computed results so the same work is not done twice. Result reuse means returning those stored results instead of recomputing them.

Caching and result reuse are especially helpful:

When running the same AI task multiple times with the same input
When training models that need repeated data processing steps
When generating predictions for repeated queries
When debugging or testing AI code, to avoid waiting for long runs
When working with slow or expensive computations in AI pipelines
Syntax
Python
cache = {}

# To store a result
cache[key] = result

# To reuse a result
if key in cache:
    result = cache[key]
else:
    result = compute()
    cache[key] = result

Use a dictionary or similar structure to store results with keys.

Check whether the key exists before computing; if it does, reuse the stored result.
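In Python, this check-then-store pattern is also available ready-made in the standard library as functools.lru_cache, which memoizes a function's results keyed by its arguments. A minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=128)  # Keep up to 128 most recently used results
def square(x):
    return x * x

square(4)  # Computed and stored in the cache
square(4)  # Returned straight from the cache
print(square.cache_info())  # Shows hits, misses, and current cache size
```

lru_cache requires the function's arguments to be hashable, and maxsize bounds the memory the cache can use.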

Examples
This example stores the output of an expensive function and reuses it if the same input appears again.
Python
import time

def expensive_function(data):
    time.sleep(1)  # Simulate a slow, expensive computation
    return data.upper()

cache = {}

input_data = 'data1'

if input_data in cache:
    output = cache[input_data]
else:
    output = expensive_function(input_data)
    cache[input_data] = output
This function caches predictions from a model to avoid repeated computation.
Python
def cached_predict(model, input_data, cache):
    # Return a stored prediction if this input was seen before
    if input_data in cache:
        return cache[input_data]
    # Otherwise compute, store, and return it (input_data must be hashable)
    prediction = model.predict(input_data)
    cache[input_data] = prediction
    return prediction
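Dictionary keys must be hashable, so inputs such as lists or arrays need a stable conversion before they can serve as cache keys. A minimal sketch, assuming the input is a list of numbers (make_key and cached_sum are illustrative names, with sum standing in for a model prediction):

```python
def make_key(input_data):
    # Convert a list of numbers into a hashable tuple key
    return tuple(input_data)

cache = {}

def cached_sum(input_data):
    key = make_key(input_data)
    if key in cache:
        return cache[key]
    result = sum(input_data)  # Stand-in for an expensive model call
    cache[key] = result
    return result
```

The same idea extends to other unhashable inputs: serialize them into a canonical, hashable form (tuple, string, or hash digest) before using them as keys.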
Sample Program

This program computes squares of numbers slowly but caches results to reuse when the same number appears again.

Python
import time

cache = {}

def slow_square(x):
    time.sleep(1)  # Simulate slow computation
    return x * x

inputs = [2, 3, 2, 4, 3]
outputs = []

for num in inputs:
    if num in cache:
        result = cache[num]
        print(f"Reusing cached result for {num}: {result}")
    else:
        result = slow_square(num)
        cache[num] = result
        print(f"Computed result for {num}: {result}")
    outputs.append(result)

print("Final outputs:", outputs)
Output

Computed result for 2: 4
Computed result for 3: 9
Reusing cached result for 2: 4
Computed result for 4: 16
Reusing cached result for 3: 9
Final outputs: [4, 9, 4, 16, 9]
Important Notes

Caching saves time but uses memory to store results.

Choose cache keys carefully: two different inputs must never map to the same key, or the wrong result will be reused.

Clear or invalidate the cache whenever the underlying data or model changes, so stale results are not served.
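One simple way to guard against stale entries is to store a timestamp with each result and recompute once a time-to-live has passed. A minimal sketch (cached_compute and ttl_seconds are illustrative names, not from the original):

```python
import time

cache = {}

def cached_compute(key, compute, ttl_seconds=60):
    # Reuse the stored value only while it is younger than the TTL
    now = time.time()
    if key in cache:
        value, stored_at = cache[key]
        if now - stored_at < ttl_seconds:
            return value
    # Missing or expired: recompute and store with a fresh timestamp
    value = compute()
    cache[key] = (value, now)
    return value
```

A TTL trades freshness for speed: a short TTL recomputes more often, a long one risks serving outdated results after the data or model changes.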

Summary

Caching stores results to avoid repeating work.

Result reuse speeds up AI tasks by using saved answers.

Always check cache before computing to save time.