Why LLMs Can't Count
Ask an AI to reverse the word "Lollipop" and it might struggle. Why? Because it doesn't see letters. It sees tokens.
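To see why, here is a minimal sketch of the problem. The vocabulary and the greedy splitter below are purely illustrative, not a real tokenizer's output, but they show the core issue: reversing a sequence of tokens is not the same operation as reversing a sequence of letters.

```python
# Illustrative only: a toy subword vocabulary, NOT a real tokenizer's.
toy_vocab = {"Lol", "lipop"}

def toy_tokenize(word):
    """Greedy longest-match split against the toy vocabulary."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in toy_vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # fall back to a single character
            i += 1
    return tokens

tokens = toy_tokenize("Lollipop")
print(tokens)                       # ['Lol', 'lipop']
print("".join(reversed(tokens)))    # 'lipopLol' -- reversing the tokens...
print("Lollipop"[::-1])             # 'popilloL' -- ...is not reversing the letters
```

A model operating on `['Lol', 'lipop']` has no direct view of the eight individual characters, which is exactly why character-level tasks trip it up.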
Visualizing the Input
Below is how a Transformer model actually "sees" the sentence "The quick brown fox": not as words, but as a sequence of integer token IDs. Each ID is an index into the model's embedding table, not a meaningful value in itself.

The   → 464
quick → 2091
brown → 7842
fox   → 1902

[ 464, 2091, 7842, 1902 ]
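The lookup step can be sketched in a few lines. The table below is random and tiny (the sizes are assumptions for illustration); the point is only that each ID selects a row, so the model works with vectors, never with the spelling of "fox".

```python
import numpy as np

# Token IDs from the example above.
token_ids = [464, 2091, 7842, 1902]

# A tiny, random embedding table: one row per vocabulary entry.
# vocab_size and d_model are illustrative, not real model sizes.
rng = np.random.default_rng(0)
vocab_size, d_model = 8192, 4
embedding_table = rng.standard_normal((vocab_size, d_model))

# The model never sees the string "fox"; it sees row 1902 of this table.
vectors = embedding_table[token_ids]
print(vectors.shape)  # (4, 4): four tokens, each a 4-dimensional vector
```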
Context Window Drift
As the conversation grows, the attention mechanism has to relate each new token to every earlier one. That relevance is recomputed at every step as a set of probability weights over the context; it's not stored memory.
User: Explain quantum entanglement like I'm 5.
Model: Imagine you have two magic dice. No matter how far apart they are, if you roll a 6 on one...
This abstraction layer is both the power and the weakness of current GenAI models.