A family of techniques—including LoRA, prefix tuning, and adapters—that adapt large pre-trained models to new tasks by training only a small fraction of their parameters. Parameter-efficient fine-tuning (PEFT) dramatically reduces compute and storage costs compared to full fine-tuning, since only the small set of task-specific weights must be optimized and saved.
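The idea can be sketched with LoRA: the frozen pre-trained weight matrix is augmented with a trainable low-rank update, so only the two small factor matrices are optimized. This is a minimal illustrative sketch (the dimensions, rank, and function names here are assumptions for demonstration, not a library API):

```python
import numpy as np

# Minimal LoRA sketch: frozen weight W plus a trainable low-rank
# update B @ A of rank r. Only A and B would be trained.
d_out, d_in, r = 768, 768, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init
                                            # so training starts from W exactly

def lora_forward(x):
    # Effective weight is W + B @ A, computed without materializing it.
    return W @ x + B @ (A @ x)

full_params = W.size            # parameters updated by full fine-tuning
lora_params = A.size + B.size   # parameters updated by LoRA
print(f"trainable fraction: {lora_params / full_params:.2%}")
```

At rank 8 on a 768x768 layer, the trainable fraction is about 2% of the full weight matrix, which is where the compute and storage savings come from.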