A neural network component that lets a model weigh the relevance of every part of its input when producing each output element, so each position can draw on context from the entire sequence rather than a fixed window. Self-attention is the core innovation of the Transformer architecture and underpins LLMs' ability to handle long, complex contexts.
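To make the mechanism concrete, here is a minimal sketch of single-head scaled dot-product self-attention in NumPy. The array sizes and random weights are illustrative assumptions, not taken from any real model: each input position is projected into a query, a key, and a value; the query-key dot products are scaled and softmaxed into attention weights; and each position's output is the weighted average of the values.

```python
# A minimal sketch of single-head scaled dot-product self-attention.
# All shapes and weights below are illustrative, not from any real model.
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q, w_k, w_v: (d_model, d_k)."""
    q = x @ w_q                         # queries: what each position is looking for
    k = x @ w_k                         # keys: what each position offers
    v = x @ w_v                         # values: the content to be mixed
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)     # (seq_len, seq_len) pairwise relevance
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ v                  # weighted sum of values per position

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 16, 8        # hypothetical sizes for the demo
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)                        # (5, 8): one context-aware vector per position
```

Because the attention weights are recomputed from the input itself, each output vector mixes information from exactly the positions that matter for it, which is what lets the model "focus dynamically" rather than apply a fixed pattern.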