Until recently, fine-tuning large language models (LLMs) on a single GPU was a pipe dream.
Even on machines with multiple 80 GB GPUs, training runs routinely crash with `torch.cuda.OutOfMemoryError`: all available device memory has been used up, and the standard advice is simply to reduce the batch size.
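That advice is usually applied as a retry loop: halve the batch size whenever a step runs out of memory. A minimal sketch of the pattern, assuming a hypothetical `train_step` callable standing in for one training iteration (with PyTorch it would raise `torch.cuda.OutOfMemoryError`, a subclass of `RuntimeError`, when the batch does not fit):

```python
def train_with_backoff(train_step, batch_size, min_batch_size=1):
    """Halve the batch size on out-of-memory errors until a step succeeds.

    `train_step` is a placeholder for one training iteration; it is
    expected to raise a RuntimeError when the batch does not fit.
    """
    while batch_size >= min_batch_size:
        try:
            train_step(batch_size)
            return batch_size  # this batch size fits in memory
        except RuntimeError:
            batch_size //= 2  # retry with half the batch
    raise RuntimeError("even the minimum batch size does not fit in memory")
```

Shrinking the batch only goes so far, though: below some size the model weights, gradients, and optimizer states alone exceed device memory, which is why memory-efficient fine-tuning techniques matter.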