A training paradigm where a model generates its own supervision signal from unlabelled data—for example, by predicting masked words in a sentence. Self-supervised pre-training on internet-scale text or images is what enables foundation models to acquire broad world knowledge.
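The masked-word idea can be sketched in a few lines: the training pair is manufactured from the raw text itself, so no human labels are needed. This is a minimal illustration (function name and masking scheme are illustrative, not any specific library's API):

```python
import random

def make_mlm_example(sentence, mask_token="[MASK]", seed=0):
    """Turn an unlabelled sentence into a (masked input, target) pair.

    The target word is the model's supervision signal -- it comes
    from the data itself, with no human annotation.
    """
    rng = random.Random(seed)
    tokens = sentence.split()
    i = rng.randrange(len(tokens))            # pick one position to hide
    target = tokens[i]                        # label = the hidden word
    masked = tokens[:i] + [mask_token] + tokens[i + 1:]
    return " ".join(masked), target

inp, tgt = make_mlm_example("the cat sat on the mat")
print(inp)   # sentence with one word replaced by [MASK]
print(tgt)   # the word the model must predict
```

A real pre-training run repeats this over billions of sentences, which is how a foundation model absorbs broad world knowledge without labelled datasets.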