Running a large language model (LLM) is expensive, and a surprising amount of that cost comes down to memory, not computation.
As LLMs expand their context windows to process massive documents and long, intricate conversations, they run into a brutal hardware reality known as the "Key-Value (KV) cache ...
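The memory pressure comes from the KV cache growing linearly with context length: every generated or ingested token stores a key and a value vector per attention head, per layer. As a rough sizing sketch (assuming a hypothetical 7B-class decoder with 32 layers, 32 heads of dimension 128, and fp16 storage; these numbers are illustrative, not tied to a specific model):

```python
def kv_cache_bytes(num_layers: int, num_heads: int, head_dim: int,
                   seq_len: int, batch_size: int = 1,
                   bytes_per_elem: int = 2) -> int:
    """Total bytes held by the K and V tensors across all layers."""
    # Factor of 2: one tensor for keys, one for values.
    return (2 * num_layers * num_heads * head_dim
            * seq_len * batch_size * bytes_per_elem)

# Assumed example dimensions: 32 layers, 32 heads, head_dim 128, fp16.
per_token = kv_cache_bytes(32, 32, 128, seq_len=1)
print(f"KV cache per token: {per_token // 1024} KiB")      # 512 KiB

at_4k = kv_cache_bytes(32, 32, 128, seq_len=4096)
print(f"KV cache at 4k context: {at_4k / 2**30:.1f} GiB")  # 2.0 GiB
```

At half a mebibyte per token, a single 4k-token sequence already occupies about 2 GiB of accelerator memory before any batching, which is why long contexts are memory-bound rather than compute-bound.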