Local command extraction
- Crucial for splitting free-form user instructions into discrete, actionable commands.
- Transitioned from OpenAI’s GPT-3.5 Turbo (API-dependent, latency issues) to a fine-tuned Llama 3.2 3B model running locally.
Benefits of the new implementation:
- Performance comparable to the GPT-3.5 Turbo baseline.
- Reduced latency and no dependence on an internet connection.
- Fine-tuning results published on Hugging Face.
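The notes above do not specify the model's output format; as a minimal sketch, assume the fine-tuned model is prompted to reply with a JSON object containing a `commands` array. The reply schema and example commands below are assumptions for illustration; the local model call (via `transformers` or llama.cpp) is only noted in a comment:

```python
import json

def extract_commands(model_output: str) -> list[str]:
    """Parse a model reply of the form {"commands": [...]} into a command list.

    Returns an empty list when the reply is not valid JSON, so a malformed
    generation degrades gracefully instead of crashing the pipeline.
    """
    try:
        data = json.loads(model_output)
    except json.JSONDecodeError:
        return []
    commands = data.get("commands", [])
    return [c.strip() for c in commands if isinstance(c, str) and c.strip()]

# In the real system the reply would come from the fine-tuned Llama 3.2 3B
# served locally (e.g. via transformers or llama.cpp). The reply string here
# is a hypothetical example used only to show the parsing step.
reply = '{"commands": ["turn_on_lights", "set_volume 30"]}'
print(extract_commands(reply))  # -> ['turn_on_lights', 'set_volume 30']
```

Keeping the extraction step tolerant of invalid JSON matters more with a small local model than with an API model, since a 3B model is more likely to occasionally emit malformed output.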