A technique that caches the key-value (KV) attention states of a repeated prompt prefix, so subsequent requests reuse the precomputed states rather than recomputing them. Prompt caching significantly reduces latency and cost for applications with long, stable system prompts.
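The mechanism can be illustrated with a minimal sketch: key the cache on a hash of the prompt prefix, and run the expensive prefill pass only on a cache miss. All names here (`PromptCache`, `fake_prefill`) are hypothetical, and the "KV states" are stand-in strings rather than real attention tensors.

```python
import hashlib

class PromptCache:
    """Minimal sketch of a prefix KV cache (illustrative, not a real inference engine)."""

    def __init__(self):
        self._store = {}   # prefix hash -> cached "KV states"
        self.hits = 0
        self.misses = 0

    def _key(self, prefix: str) -> str:
        return hashlib.sha256(prefix.encode()).hexdigest()

    def get_or_compute(self, prefix: str, compute_kv):
        """Return KV states for `prefix`, computing them at most once."""
        k = self._key(prefix)
        if k in self._store:
            self.hits += 1
        else:
            self.misses += 1
            self._store[k] = compute_kv(prefix)  # expensive attention prefill
        return self._store[k]

# Stand-in for the expensive prefill pass over the prefix tokens.
expensive_calls = []
def fake_prefill(prefix):
    expensive_calls.append(prefix)
    return f"kv-states-for-{len(prefix)}-chars"

cache = PromptCache()
system_prompt = "You are a helpful assistant."  # shared prefix across requests
for _ in range(3):
    kv = cache.get_or_compute(system_prompt, fake_prefill)

print(cache.hits, cache.misses, len(expensive_calls))  # → 2 1 1
```

In a real serving stack the cached value would be the prefix's attention tensors, and eviction, memory limits, and exact-prefix token matching all matter; the sketch only shows the reuse pattern.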