A neural network component that allows a model to dynamically focus on the most relevant parts of its input when producing each output element. Self-attention is the core innovation of the Transformer architecture and is responsible for LLMs' ability to handle long, complex contexts.
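The mechanism described above can be sketched as scaled dot-product self-attention: each token is projected into query, key, and value vectors, and every output is a softmax-weighted mix of the values, with weights given by query–key similarity. This is a minimal single-head sketch in NumPy; the function name `self_attention` and the projection matrices `Wq`, `Wk`, `Wv` are illustrative, not from any particular library.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (illustrative sketch).

    X: (seq_len, d_model) token embeddings.
    Wq, Wk, Wv: (d_model, d_k) projection matrices.
    Returns: (seq_len, d_k) context-aware token representations.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Pairwise relevance of every token to every other token,
    # scaled to keep dot products in a stable range.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over each row: attention weights sum to 1 per token.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a relevance-weighted average of the value vectors.
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 tokens, model dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                                 # (4, 8)
```

Because the attention weights are recomputed for every input, each token's representation dynamically emphasizes whichever other tokens are most relevant to it, which is what lets the model track dependencies across long contexts.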