
Q. What does quantization do in LLaMA models?

  • (A) Improves screen resolution
  • (B) Reduces model size and memory usage
  • (C) Increases dataset size
  • (D) Adds new vocabulary
  • Correct Answer - Option(B)
  • Filed under category: Llama

Explanation by: Arogya
Quantization compresses model weights by storing them at lower numeric precision (for example, 8-bit or 4-bit integers instead of 16- or 32-bit floats). This reduces model size and memory usage, which is what makes it possible to run LLaMA models on low-RAM devices.
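To make the idea concrete, here is a minimal sketch of symmetric per-tensor int8 quantization in Python with NumPy. It is only an illustration of the principle, not how LLaMA toolchains such as llama.cpp actually quantize (those use more elaborate block-wise formats); the function names `quantize_int8` and `dequantize` are made up for this example.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map the largest
    absolute weight to 127 and round everything else to int8."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int8 values."""
    return q.astype(np.float32) * scale

# Illustrative fp32 weight matrix (random stand-in for a real layer)
w = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize_int8(w)

print(f"fp32 size: {w.nbytes / 1e6:.1f} MB")  # ~67 MB
print(f"int8 size: {q.nbytes / 1e6:.1f} MB")  # ~17 MB, a 4x reduction
print(f"max abs error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```

The 4x shrink from fp32 to int8 (or roughly 8x with 4-bit schemes) is why quantized LLaMA variants fit in consumer GPU or laptop memory at the cost of a small, bounded rounding error per weight.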
