The maximum amount of text (measured in tokens) an LLM can process in a single request, encompassing both the prompt and the generated output. Larger context windows—now exceeding 1 million tokens in some models—enable processing of long documents, codebases, and meeting transcripts in one pass.
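Because the prompt and the generated output share one budget, fitting a request into the window is simple arithmetic. A minimal sketch (all numbers and names here are hypothetical, not tied to any particular model):

```python
def fits_in_context(prompt_tokens: int, max_output_tokens: int, context_window: int) -> bool:
    """Check whether a request fits: prompt and output draw from the same token budget."""
    return prompt_tokens + max_output_tokens <= context_window

# Hypothetical 128k-token window: reserve room for the output when sizing the prompt.
print(fits_in_context(100_000, 20_000, 128_000))  # → True
print(fits_in_context(120_000, 20_000, 128_000))  # → False (prompt leaves no room for output)
```

In practice, token counts come from the model's own tokenizer, so the same text can cost a different number of tokens on different models.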