A family of techniques, including LoRA (low-rank adaptation), prefix tuning, and adapters, that adapt large pre-trained models to new tasks by training only a small fraction of the parameters while keeping the original weights frozen. PEFT dramatically reduces compute and storage costs compared to full fine-tuning: each task requires storing only a small set of adapter weights rather than a full copy of the model.
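As a concrete illustration, here is a minimal NumPy sketch of the LoRA idea for a single linear layer: the frozen weight `W` is augmented with a trainable low-rank update `B @ A`, and only the two small factors are trained. The dimensions and scaling here are illustrative choices, not values from any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 64, 64, 4

# Frozen pre-trained weight (never updated during fine-tuning).
W = rng.normal(size=(d_out, d_in))

# Trainable low-rank factors. B starts at zero, so the adapted layer
# initially matches the pre-trained one exactly (standard LoRA init).
A = rng.normal(size=(rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def lora_forward(x):
    # Adapted layer: y = W x + B A x; only A and B would receive gradients.
    return W @ x + B @ (A @ x)

x = rng.normal(size=(d_in,))
# With B = 0 the adapter contributes nothing yet.
assert np.allclose(lora_forward(x), W @ x)

# Parameter savings: full weight matrix vs. the LoRA factors alone.
full_params = W.size               # 64 * 64 = 4096
lora_params = A.size + B.size      # 4*64 + 64*4 = 512
print(f"trainable: {lora_params} of {full_params} "
      f"({lora_params / full_params:.1%})")
```

Here the adapter trains 512 parameters instead of 4096 (12.5%); in real transformer layers with much larger `d_in`/`d_out` and small rank, the fraction is far smaller, which is where the compute and storage savings come from.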