Parameter-efficient fine-tuning (PEFT) is a family of techniques—including LoRA, prefix tuning, and adapters—that adapts large pre-trained models to new tasks by training only a small fraction of their parameters. By leaving most weights frozen, PEFT dramatically reduces compute and storage costs compared to full fine-tuning.
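As an illustration of the idea, here is a minimal LoRA sketch in NumPy: the pre-trained weight matrix stays frozen, and only a low-rank update (two small matrices) is trained. All names, dimensions, and the scaling hyperparameter below are illustrative assumptions, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes; rank r is chosen much smaller than the dimensions.
d_in, d_out, r = 512, 512, 8
W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # trainable, zero-init so the update starts at 0
alpha = 16                                  # LoRA scaling hyperparameter (assumed value)

def lora_forward(x):
    # Frozen path plus the scaled low-rank correction (alpha / r) * B @ A.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((4, d_in))
y = lora_forward(x)

full_params = W.size
trainable_params = A.size + B.size
print(trainable_params / full_params)  # → 0.03125, i.e. ~3% of the weights are trained
```

Because `B` is zero-initialized, the adapted layer starts out identical to the frozen model, and training moves only the two small factors—here about 3% of the original parameter count.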