Parameter-efficient fine-tuning (PEFT) is a family of techniques, including LoRA, prefix tuning, and adapters, that adapts large pre-trained models to new tasks by training only a small fraction of their parameters while the base model's weights stay frozen. PEFT dramatically reduces compute and storage costs compared to full fine-tuning.
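To make the mechanics concrete, here is a minimal LoRA sketch in PyTorch. It is an illustration, not a specific library's API: the `LoRALinear` class and its `r` and `alpha` parameters are hypothetical names chosen for this example. The pre-trained weight stays frozen, and only two small low-rank factors `A` and `B` are trained, so the effective weight becomes W + (alpha/r)·BA.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update (illustrative sketch)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scale = alpha / r
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero so the update is zero and training begins
        # exactly at the pre-trained model.
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap one projection layer and count trainable parameters.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.1f}%)")
```

With rank r = 8 on a 768×768 layer, only about 2% of the layer's parameters are trainable, which is the source of PEFT's compute and storage savings: a fine-tuned task can be stored as just the small A and B factors rather than a full copy of the model.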