AI for Everyone: Time & Space Complexity of AI Hallucination Checks
We want to understand how often AI hallucinations happen as AI processes more information.
How does the chance of hallucination grow when AI handles bigger or more complex tasks?
Analyze the time complexity of the following AI response generation process.
```javascript
function generateResponse(input) {
  let hallucinationCount = 0;
  // Visit every token of the input exactly once.
  for (let i = 0; i < input.length; i++) {
    let token = processToken(input[i]);
    // Count the token if it is flagged as a hallucination.
    if (isHallucination(token)) {
      hallucinationCount++;
    }
  }
  return hallucinationCount;
}
```
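To try this yourself, you need something for `processToken` and `isHallucination` to do. The lesson doesn't define them, so the stubs below are placeholder assumptions purely for illustration, using a made-up `"???"` sentinel to mark a hallucinated token:

```javascript
// Hypothetical stubs: the lesson does not define these helpers,
// so these bodies are placeholders for illustration only.
function processToken(token) {
  return token; // pretend each token is "processed" unchanged
}

function isHallucination(token) {
  return token === "???"; // treat a made-up sentinel as a hallucination
}

function generateResponse(input) {
  let hallucinationCount = 0;
  // Visit every token of the input exactly once.
  for (let i = 0; i < input.length; i++) {
    let token = processToken(input[i]);
    if (isHallucination(token)) {
      hallucinationCount++;
    }
  }
  return hallucinationCount;
}

console.log(generateResponse(["the", "???", "cat", "???"])); // → 2
```

Whatever the real helpers do, the loop structure is what determines the complexity: one pass, one check per token.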
This code checks each part of the input to see if it causes an AI hallucination and counts them.
Look for repeated checks or loops that happen as input grows.
- Primary operation: Looping through each input token to check for hallucination.
- How many times: Once for every token in the input.
As the input size grows, the AI checks more tokens one by one.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 checks |
| 100 | 100 checks |
| 1000 | 1000 checks |
Pattern observation: The number of checks grows directly with input size.
Time Complexity: O(n)
This means the time to check for hallucinations grows in a straight line as input gets bigger.
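You can reproduce the table above with a small instrumented loop that simply counts how many checks run for an input of n placeholder tokens:

```javascript
// Count how many hallucination checks the loop performs for an
// input of n tokens. The tokens themselves are placeholders;
// only the number of loop iterations matters here.
function countChecks(n) {
  const input = new Array(n).fill("token");
  let checks = 0;
  for (let i = 0; i < input.length; i++) {
    checks++; // one check per token
  }
  return checks;
}

for (const n of [10, 100, 1000]) {
  console.log(`n = ${n}: ${countChecks(n)} checks`);
}
```

The count always equals n, which is exactly what O(n) growth looks like in practice.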
[X] Wrong: "AI hallucinations happen randomly and checking more input won't affect how often they occur."
[OK] Correct: Actually, the more input the AI processes, the more chances there are for hallucinations to appear, so the checking work grows with input size.
Understanding how AI processes input and where errors like hallucinations can appear helps you think clearly about AI behavior and performance.
"What if the AI checked groups of tokens together instead of one by one? How would that change the time complexity?"
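One way to explore that question is a hypothetical batched variant that processes tokens in groups of `batchSize` (a parameter invented here for illustration). The outer loop runs roughly n / batchSize times, but each batch still inspects every token inside it, so the total work remains O(n): batching changes the constant factor, not the growth rate.

```javascript
// Hypothetical batched check: process tokens in groups of `batchSize`.
// The outer loop runs about n / batchSize times, but every token is
// still examined once, so the overall time complexity stays O(n).
function generateResponseBatched(input, batchSize) {
  let hallucinationCount = 0;
  for (let start = 0; start < input.length; start += batchSize) {
    const batch = input.slice(start, start + batchSize);
    for (const token of batch) {
      if (token === "???") { // placeholder hallucination test
        hallucinationCount++;
      }
    }
  }
  return hallucinationCount;
}

console.log(generateResponseBatched(["a", "???", "b", "???", "c"], 2)); // → 2
```

A genuinely faster approach would need to skip tokens entirely, which a correctness check like this cannot safely do.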