A very good implementation of Focal Loss can be found here. Focal loss down-weights the contribution of well-classified examples, and the strength of that down-weighting grows with the gamma parameter. PyTorch itself also ships with a large set of loss functions, and it is free and open-source software released under the Modified BSD license. Beyond the choice of loss, proper hyperparameter tuning can make the difference between a good training run and a failing one.
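The gamma-controlled down-weighting can be sketched numerically. This is a minimal illustration of the standard focal-loss formula FL(p_t) = -(1 - p_t)^gamma * log(p_t) from Lin et al., not the implementation linked above; the function name and the optional alpha weight are assumptions for illustration.

```python
import math

def focal_loss(p_t: float, gamma: float = 2.0, alpha: float = 1.0) -> float:
    """Focal loss for a single example, where p_t is the model's
    predicted probability for the true class. (Illustrative sketch,
    not the linked implementation.)"""
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

# With gamma = 0 the modulating factor (1 - p_t)^gamma is 1, so we
# recover plain cross-entropy, -log(p_t).
ce_easy = focal_loss(0.9, gamma=0.0)

# With gamma = 2, an easy example (p_t = 0.9) is scaled by (0.1)^2 = 0.01,
# while a hard example (p_t = 0.1) is scaled by only (0.9)^2 = 0.81 --
# so easy examples contribute far less to the total loss.
fl_easy = focal_loss(0.9, gamma=2.0)
fl_hard = focal_loss(0.1, gamma=2.0)
```

Increasing gamma therefore shifts the training signal toward hard, misclassified examples, which is the usual motivation for focal loss on imbalanced datasets.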
High-level APIs that currently play a role similar to Lightning include Keras, Ignite, and fast.