Optimizing LLMs for cost and quality
This technical webinar covers fine-tuning models for performance, model quality optimization, and DevOps for LLM apps, plus a full demo showing how to fine-tune Llama 3.1-8B to outperform GPT-4o at redacting personally identifiable information (PII) from your datasets.
Low quality and high costs are two of the biggest blockers to scaling LLMs. In this technical session, we walk through our "crawl, walk, run" approach and show a path to using open source models to achieve better quality with faster, cheaper models in your production GenAI applications.
In this on-demand webinar, you will:
Learn why fine-tuning models is critically important
Learn a proven "crawl, walk, run" approach to model quality optimization
See what a continuous development cycle looks like for LLM apps
See a demo showing how a fine-tuned Llama 3.1-8B outperforms GPT-4o at redacting personally identifiable information (PII) from datasets
This 50-minute session is for engineers, product leaders, and technical founders. Apply the learnings to your prototype or use them to optimize your existing GenAI app.