When I start fine-tuning yolo11n.pt (or any other YOLO variant), the python.exe process takes over 8 GB of RAM, so I have tried everything I can think of to reduce the memory usage. Below is my training configuration. Is this memory usage normal? Does it affect accuracy, and is there a way to improve accuracy without causing an OOM error? (My dataset only contains 2,000 images.)
import multiprocessing as mp

import torch
from ultralytics import YOLO

if __name__ == '__main__':  # Main guard is required on Windows when using spawn + dataloader workers
    if torch.cuda.is_available():
        torch.cuda.empty_cache()

    mp.set_start_method('spawn', force=True)  # Ensures proper process handling on Windows
    mp.freeze_support()                       # Needed for Windows multiprocessing

    model = YOLO('yolo11n.pt')
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    results = model.train(
        data=data_yaml_path,   # path to my dataset YAML, defined elsewhere
        epochs=150,            # Increased epochs
        batch=2,               # Reduced batch size
        imgsz=512,             # Reduced image size
        amp=True,
        # Stabilization parameters
        lr0=0.001,             # Lower learning rate
        lrf=0.001,
        cache='disk',
        # Other parameters
        project='fall_detection_optimized',  # Save results in a new folder
        name='branch_first_phase',
        device=device,
        optimizer='AdamW',
        workers=1,
        patience=25,
        weight_decay=0.0005,
        exist_ok=True,
        plots=True,
        save_period=50
    )

    # Save the best model
    model.save('Fall_detection_optimized_best.pt')
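For reference, this is roughly how I'm checking the memory usage of the training process. The 8 GB / 5 GB figures above actually come from Task Manager, so this snippet is just an illustrative sketch using psutil, not the exact measurement I quoted:

import os

import psutil

def log_process_memory(tag: str) -> None:
    # Resident set size (RSS) of the current python.exe process, in GB
    rss_gb = psutil.Process(os.getpid()).memory_info().rss / 1024 ** 3
    print(f'[{tag}] process RSS: {rss_gb:.2f} GB')

# Example usage around training:
# log_process_memory('before train')
# results = model.train(...)
# log_process_memory('after train')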
So I lowered everything and set cache='disk', but it still takes about 5 GB. I'm wondering whether these reduced settings hurt accuracy, and what I can do to improve accuracy without running out of memory.
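In case it helps, these are the extra knobs I'm considering trying next. This is an untested sketch on my side; the parameter names are the standard Ultralytics train() arguments, but I don't know how much each one actually saves:

# Possible further memory reductions I'm considering (untested):
results = model.train(
    data=data_yaml_path,
    epochs=150,
    batch=2,
    imgsz=512,
    amp=True,
    cache=False,    # don't cache images at all (slower I/O, possibly lower RAM than 'disk'?)
    workers=0,      # load data in the main process, avoiding extra worker copies
    device=device,
)

Would any of these meaningfully reduce memory, or would they just slow training down?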