# Chapter 7: Finetuning to Follow Instructions

## Main Chapter Code
- [01_main-chapter-code](01_main-chapter-code) contains the main chapter code and exercise solutions; a sketch of the prompt format used for instruction finetuning follows below
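
The main chapter formats each instruction example into an Alpaca-style prompt before tokenization. Below is a minimal sketch of such a formatting helper; the function name `format_input` and the field names (`instruction`, `input`, `output`) are assumptions for illustration and may differ slightly from the notebook.

```python
def format_input(entry):
    # Alpaca-style prompt: a fixed header, the instruction,
    # and an optional input section
    instruction_text = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request."
        f"\n\n### Instruction:\n{entry['instruction']}"
    )
    input_text = f"\n\n### Input:\n{entry['input']}" if entry["input"] else ""
    return instruction_text + input_text


# Example usage with a hypothetical dataset entry
entry = {
    "instruction": "Convert the sentence to passive voice.",
    "input": "The chef cooked the meal.",
    "output": "The meal was cooked by the chef.",
}
prompt = format_input(entry) + f"\n\n### Response:\n{entry['output']}"
print(prompt)
```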
## Bonus Materials
- [02_dataset-utilities](02_dataset-utilities) contains utility code that can be used for preparing an instruction dataset
- [03_model-evaluation](03_model-evaluation) contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API
- [04_preference-tuning-with-dpo](04_preference-tuning-with-dpo) implements code for preference finetuning with Direct Preference Optimization (DPO); a minimal loss sketch follows this list
- [05_dataset-generation](05_dataset-generation) contains code to generate and improve synthetic datasets for instruction finetuning
- [06_user_interface](06_user_interface) implements an interactive user interface for interacting with the pretrained LLM
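
At its core, DPO computes a loss from the log-probabilities of the preferred ("chosen") and dispreferred ("rejected") responses under the policy model and a frozen reference model. The sketch below is a minimal PyTorch illustration of that loss, not the exact implementation in the bonus notebook; the argument names and the default `beta` value are assumptions.

```python
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logprobs, policy_rejected_logprobs,
             reference_chosen_logprobs, reference_rejected_logprobs,
             beta=0.1):
    """Minimal DPO loss sketch.

    Each argument is a tensor holding, per batch example, the summed
    log-probability of a response under the policy or reference model.
    """
    # How strongly each model prefers the chosen over the rejected response
    policy_logratios = policy_chosen_logprobs - policy_rejected_logprobs
    reference_logratios = reference_chosen_logprobs - reference_rejected_logprobs
    logits = policy_logratios - reference_logratios
    # -log(sigmoid(beta * logits)), averaged over the batch
    return -F.logsigmoid(beta * logits).mean()


# Toy example with random log-probabilities
torch.manual_seed(123)
loss = dpo_loss(*(torch.randn(4) for _ in range(4)))
print(loss)
```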