
Chapter 7: Finetuning to Follow Instructions

Main Chapter Code

  • 01_main-chapter-code contains the main chapter code and exercise solutions

Bonus Materials

  • 02_dataset-utilities contains utility code for preparing an instruction dataset (see the prompt-formatting sketch after this list)
  • 03_model-evaluation contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API (see the local-model query sketch below)
  • 04_preference-tuning-with-dpo implements preference finetuning with Direct Preference Optimization (DPO) (see the DPO loss sketch below)
  • 05_dataset-generation contains code to generate and improve synthetic datasets for instruction finetuning (see the response-improvement sketch below)
  • 06_user_interface implements an interactive user interface for chatting with the pretrained LLM (see the minimal chat UI sketch below)
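
To make the dataset-preparation step concrete, here is a minimal sketch of Alpaca-style prompt formatting, the common convention for turning an instruction entry into a single training prompt. The function name and the exact template wording are illustrative, not necessarily identical to the utilities in 02_dataset-utilities.

```python
def format_input(entry):
    # Alpaca-style prompt template (exact wording is illustrative)
    instruction_text = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request."
        f"\n\n### Instruction:\n{entry['instruction']}"
    )
    # The input field is optional; skip the section when it is empty
    input_text = f"\n\n### Input:\n{entry['input']}" if entry.get("input") else ""
    return instruction_text + input_text


sample = {
    "instruction": "Rewrite the sentence in passive voice.",
    "input": "The chef cooked the meal.",
}
print(format_input(sample))
```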
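
For model evaluation with a local Llama 3 model, one common pattern is to ask the locally served model to score a candidate response on a numeric scale. The sketch below assumes an Ollama server running on its default port with a llama3 model already pulled; the scoring prompt itself is an illustrative assumption, not the bonus folder's exact wording.

```python
import json
import urllib.request

def query_local_model(prompt, model="llama3",
                      url="http://localhost:11434/api/chat"):
    # Assumes a local Ollama server with the model already pulled;
    # payload and endpoint follow Ollama's REST chat API
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode("utf-8")
    request = urllib.request.Request(
        url, data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["message"]["content"]

score = query_local_model(
    "Score the response 'Paris.' to the instruction "
    "'Name the capital of France.' on a scale from 0 to 100. "
    "Respond with the integer only."
)
print(score)
```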
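
The core idea behind DPO is a logistic loss on how much the finetuned policy prefers the chosen response over the rejected one, relative to a frozen reference model. Here is a minimal PyTorch sketch of that loss, assuming sequence-level log-probabilities have already been computed; the tensor names and the beta value are illustrative, not the bonus notebook's exact implementation.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logprobs, policy_rejected_logprobs,
             reference_chosen_logprobs, reference_rejected_logprobs,
             beta=0.1):
    # Log-ratios between the trainable policy and the frozen reference
    chosen_logratios = policy_chosen_logprobs - reference_chosen_logprobs
    rejected_logratios = policy_rejected_logprobs - reference_rejected_logprobs
    # DPO maximizes the margin (chosen - rejected) via a logistic loss
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()

# Toy batch of two sequence-level log-probabilities
loss = dpo_loss(
    torch.tensor([-12.0, -9.5]), torch.tensor([-14.0, -11.0]),
    torch.tensor([-12.5, -9.8]), torch.tensor([-13.5, -10.5]),
)
print(loss)
```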
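
Improving an existing synthetic dataset entry typically amounts to asking a stronger model to rewrite a weak response. The sketch below uses the official openai Python client as one possible backend; the model name, prompt wording, and helper name are assumptions for illustration, and it requires an OPENAI_API_KEY in the environment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def improve_response(instruction, response, model="gpt-4"):
    # Ask a stronger model to rewrite a weak response (illustrative prompt)
    prompt = (
        f"Instruction: {instruction}\n"
        f"Current response: {response}\n\n"
        "Rewrite the response so it is more accurate, complete, and "
        "concise. Return only the improved response."
    )
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

print(improve_response("Name three primary colors.", "Red and blue."))
```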
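
Finally, a chat-style user interface can be wired up in a few lines with a framework such as Chainlit; whether this matches the bonus folder's exact stack is an assumption here. The sketch echoes a placeholder instead of calling a real model, since the point is only the shape of the message loop.

```python
import chainlit as cl

@cl.on_message
async def main(message: cl.Message):
    # Placeholder: the real app would tokenize the input, run the
    # finetuned GPT model, and decode the generated token IDs here
    reply = f"(model output for: {message.content})"
    await cl.Message(content=reply).send()
```

Saved as app.py, this runs with `chainlit run app.py` and serves the chat UI in the browser.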