Weights are stored in FP16 precision by default. Only the TinyLlama models are available in both FP32 and FP16 variants.
Naming convention: {P}?{F}?{E}?-{base_model}-epoch{1|2|3|5}-{fp16}?
Legend: P - Pretrained; F - Finetuned; E - Edited.
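
As a rough illustration, the sketch below parses checkpoint names that follow this convention. The regex, the `parse_checkpoint_name` helper, and the example name are assumptions made for this sketch, not part of the release; in particular, it assumes the base model name may itself contain hyphens and that the `-fp16` suffix is simply present or absent.

```python
import re

# Hypothetical parser for the naming convention above:
# {P}?{F}?{E}?-{base_model}-epoch{1|2|3|5}-{fp16}?
NAME_RE = re.compile(
    r"^(?P<flags>[PFE]*)"       # optional P/F/E markers (Pretrained/Finetuned/Edited)
    r"-(?P<base>.+)"            # base model name (may contain hyphens)
    r"-epoch(?P<epoch>[1235])"  # training epoch: 1, 2, 3, or 5
    r"(?P<fp16>-fp16)?$"        # optional precision suffix
)

def parse_checkpoint_name(name: str) -> dict:
    """Split a checkpoint name into its components; raises ValueError on mismatch."""
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized checkpoint name: {name!r}")
    return {
        "pretrained": "P" in m["flags"],
        "finetuned": "F" in m["flags"],
        "edited": "E" in m["flags"],
        "base_model": m["base"],
        "epoch": int(m["epoch"]),
        "fp16": m["fp16"] is not None,
    }

# Example with a hypothetical checkpoint name:
# parse_checkpoint_name("PF-tinyllama-epoch3-fp16")
# -> {'pretrained': True, 'finetuned': True, 'edited': False,
#     'base_model': 'tinyllama', 'epoch': 3, 'fp16': True}
```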