A training technique in which a smaller "student" model learns to replicate the behaviour of a larger "teacher" model, typically by matching the teacher's output distributions (soft targets) rather than only the hard labels. Distillation produces compact, fast models suited to latency-sensitive or resource-constrained deployments while retaining much of the teacher's quality.
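A minimal sketch of the core training signal, assuming the common temperature-scaled formulation: the student minimizes the KL divergence between the teacher's and its own softened output distributions. The function names and the temperature value are illustrative, not from the original text.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # relative preferences among non-top classes.
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence KL(teacher || student) over softened outputs;
    # the student is trained to drive this toward zero.
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Identical logits give zero loss; diverging logits give a positive loss.
teacher = np.array([3.0, 1.0, 0.2])
student = np.array([2.5, 1.2, 0.4])
loss = distillation_loss(student, teacher)
```

In practice this term is usually combined with a standard cross-entropy loss on the true labels, weighted by a mixing coefficient.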