Daniel Han - Low Level Technicals of LLMs

Video Available!

Open Models: This workshop will be split into three one-hour blocks:

  1. How to analyze & fix LLMs - how to find and fix bugs in Gemma, Phi-3, Llama & tokenizers
  2. Finetuning with Unsloth - continued pretraining, reward modelling, QLoRA & more (a code sketch follows below)
  3. Deep dive into LLM technicals - hand deriving derivatives, SOTA finetuning tricks (a worked derivation follows below)
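For a flavour of block 2, here is a minimal QLoRA setup in the style of Unsloth's example notebooks. The checkpoint name and hyperparameters are illustrative assumptions; check the current Unsloth docs for the exact arguments:

    from unsloth import FastLanguageModel

    # Load a 4-bit quantised base model: QLoRA keeps the base weights frozen
    # in 4-bit and trains small LoRA adapters on top.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/llama-3-8b-bnb-4bit",  # illustrative checkpoint
        max_seq_length=2048,
        load_in_4bit=True,
    )

    # Attach LoRA adapters to the attention and MLP projections.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,            # LoRA rank
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )

From here the model drops into a standard TRL SFTTrainer training loop.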
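And for a taste of block 3, a classic hand derivation: for softmax cross-entropy with logits z and a one-hot label y, the gradient collapses to a single subtraction. This is a standard result, shown here as an example of the kind of derivation covered, not a formula specific to the talk:

    L = -\sum_i y_i \log p_i, \qquad p_i = \frac{e^{z_i}}{\sum_j e^{z_j}}
    \implies \frac{\partial L}{\partial z_k} = p_k - y_k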

It's recommended you have Python with PyTorch and Unsloth installed (or use Google Colab / Kaggle online). College-level maths and programming would be helpful.
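If you're setting up locally, the typical install is via pip (a minimal sketch; see the Unsloth README for the exact command matching your CUDA and PyTorch versions):

    pip install unsloth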

Daniel Han

Hey, I'm Daniel, the algos guy behind Unsloth. I love making LLM training go fast! We're the team that fixed 8 bugs in Google's Gemma, fixed a 2048 sliding-window-attention (SWA) issue in Phi-3, and found tokenization issues and fixed untrained tokens in Llama-3. I run Unsloth with my brother Michael!
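As a taste of what "untrained tokens" means in practice, here is a rough sketch of one way to flag them: embedding rows that never received gradient updates tend to sit near their initialization, so their norms are tiny compared to trained rows. The model name and the norm heuristic below are illustrative assumptions, not the exact method used for Llama-3:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "meta-llama/Meta-Llama-3-8B"  # illustrative; any causal LM works
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)

    # Embedding matrix: one row per vocabulary entry.
    embeddings = model.get_input_embeddings().weight  # (vocab_size, hidden_dim)
    norms = embeddings.norm(dim=-1)

    # Flag rows whose norm is far below the vocabulary-wide median.
    threshold = 0.1 * norms.median()
    suspect_ids = (norms < threshold).nonzero(as_tuple=True)[0]
    for token_id in suspect_ids.tolist()[:20]:
        print(token_id, repr(tokenizer.convert_ids_to_tokens(token_id)))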

Our open-source package makes LLM finetuning 2x faster and uses 70% less VRAM with no accuracy degradation. I used to work at NVIDIA making GPU algos go fast, and helped NASA engineers process data from a Mars rover faster!


Buy Tickets

We have now sold out of Early Bird tickets, and General Admission has also sold out. Please join us online for the free livestream.
