**Fix (#301): total training iters may equal warmup_iters.** If `total_training_iters` equals `warmup_iters` (for example, `total_training_iters = len(train_loader) * n_epochs = 4 * 5 = 20` with `warmup_iters = 20`), the learning-rate calculation in `hparam_search.py` raised a `ZeroDivisionError`:
```shell
Traceback (most recent call last):
  File "LLMs-from-scratch/ch05/05_bonus_hparam_tuning/hparam_search.py", line 191, in <module>
    train_loss, val_loss = train_model(
                           ^^^^^^^^^^^^
  File "/mnt/raid1/docker/ai/LLMs-from-scratch/ch05/05_bonus_hparam_tuning/hparam_search.py", line 90, in train_model
    progress = (global_step - warmup_iters) / (total_training_iters - warmup_iters)
               ~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ZeroDivisionError: division by zero
```
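
The fix needs to handle the case where no decay iterations remain after warmup. The sketch below is an illustrative guard, assuming a linear-warmup-plus-cosine-decay schedule as in Appendix D; the function name `get_lr` and its parameters are placeholders, not the exact code in `hparam_search.py`:

```python
import math


def get_lr(global_step, warmup_iters, total_training_iters, initial_lr, peak_lr, min_lr):
    """Linear warmup to peak_lr, then cosine decay to min_lr (illustrative sketch)."""
    if global_step < warmup_iters:
        # Linear warmup from initial_lr to peak_lr
        return initial_lr + (peak_lr - initial_lr) * (global_step / max(warmup_iters, 1))

    # Guard against division by zero when total_training_iters == warmup_iters,
    # e.g. warmup_iters = 20 and len(train_loader) * n_epochs = 4 * 5 = 20
    decay_iters = total_training_iters - warmup_iters
    if decay_iters <= 0:
        return min_lr

    progress = (global_step - warmup_iters) / decay_iters
    return min_lr + (peak_lr - min_lr) * 0.5 * (1 + math.cos(math.pi * progress))


# The failing configuration from the traceback above no longer divides by zero:
print(get_lr(global_step=20, warmup_iters=20, total_training_iters=20,
             initial_lr=1e-5, peak_lr=1e-3, min_lr=1e-5))
```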

# Optimizing Hyperparameters for Pretraining

The `hparam_search.py` script, based on the extended training function in Appendix D, *Adding Bells and Whistles to the Training Loop*, is designed to find optimal hyperparameters via grid search.
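
Conceptually, grid search trains and evaluates one model for every combination of hyperparameter values. A minimal sketch of the enumeration step, using a hypothetical, much smaller grid than the one in the script:

```python
import itertools

# Hypothetical, reduced grid for illustration; the real HPARAM_GRID in
# hparam_search.py defines more hyperparameters and more values per key.
HPARAM_GRID = {
    "peak_lr": [1e-4, 5e-4],
    "weight_decay": [0.0, 0.1],
    "batch_size": [2, 4],
}

# Grid search trains one model per element of the Cartesian product.
keys = list(HPARAM_GRID.keys())
configs = [dict(zip(keys, combo)) for combo in itertools.product(*HPARAM_GRID.values())]

print(f"{len(configs)} configurations to evaluate")  # 2 * 2 * 2 = 8
print(configs[0])  # e.g. {'peak_lr': 0.0001, 'weight_decay': 0.0, 'batch_size': 2}
```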

**Note:** This script will take a long time to run. You may want to reduce the number of hyperparameter configurations explored in the `HPARAM_GRID` dictionary at the top of the script.
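
Because the number of training runs is the product of the number of values tried per hyperparameter, dropping even a few values per key shrinks the search substantially. A back-of-the-envelope check with hypothetical grid sizes:

```python
from math import prod

# Hypothetical grid sizes: number of values tried per hyperparameter.
values_per_hparam = {"peak_lr": 5, "weight_decay": 4, "batch_size": 3, "n_epochs": 3}
print(prod(values_per_hparam.values()))    # 180 full training runs

# Keeping only two values per hyperparameter cuts the search to 2**4 = 16 runs.
print(prod(2 for _ in values_per_hparam))  # 16
```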