YOLOv8-N

Family: yolov8

One-stage detector with a CSPDarknet backbone

Parameters

3.2M

FLOPs

8.7G

Input Size

640px

License

MIT

Architecture

Type

one-stage

Backbone

CSPDarknet

Neck

PAFPN

Head

Decoupled
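
For illustration, the sketch below shows what a decoupled head looks like in PyTorch: classification and box regression run through separate convolutional branches on top of each PAFPN output scale. This is a minimal sketch under assumed channel widths, layer counts, and reg_max, not LibreYOLO's actual implementation.

import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    """Illustrative decoupled head: separate cls and box-regression branches."""

    def __init__(self, in_channels: int = 64, num_classes: int = 80, reg_max: int = 16):
        super().__init__()
        # Classification branch: per-cell class logits
        self.cls_branch = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(in_channels, num_classes, 1),
        )
        # Regression branch: DFL-style distribution over the 4 box sides
        self.reg_branch = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(in_channels, 4 * reg_max, 1),
        )

    def forward(self, x):
        return self.cls_branch(x), self.reg_branch(x)

# One head instance runs on each PAFPN output (e.g. strides 8, 16, 32).
head = DecoupledHead()
p3 = torch.randn(1, 64, 80, 80)    # stride-8 feature map for a 640px input
cls_logits, box_dist = head(p3)    # shapes: (1, 80, 80, 80) and (1, 64, 80, 80)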

Benchmark Results
Performance on COCO val2017 across different hardware configurations
Hardware | mAP@50-95 | FPS | Latency | VRAM
NVIDIA A100 (TensorRT FP16) | 37.4% | 193.3 | 5.2 ms | 179 MB
NVIDIA T4 (TensorRT FP16) | 37.4% | 80.3 | 12.5 ms | 202 MB
CPU (ONNX Runtime) | 37.4% | 10.4 | 96.6 ms | 193 MB
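
The CPU row above can be approximated with a simple timing loop. The sketch below measures mean latency and FPS with ONNX Runtime on a random input; the exported model filename, provider choice, and iteration counts are assumptions, and preprocessing/NMS are not included, so results will not match the table exactly.

import time
import numpy as np
import onnxruntime as ort

# Load an exported model (filename assumed) on the CPU execution provider
session = ort.InferenceSession("yolov8n.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)

# Warm-up so graph optimisation and allocation do not skew the measurement
for _ in range(10):
    session.run(None, {input_name: dummy})

runs = 100
start = time.perf_counter()
for _ in range(runs):
    session.run(None, {input_name: dummy})
mean_s = (time.perf_counter() - start) / runs
print(f"Mean latency: {mean_s * 1000:.1f} ms  ({1 / mean_s:.1f} FPS)")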
Speed Breakdown (A100 TensorRT)
End-to-end latency breakdown showing preprocessing, inference, and postprocessing times
Preprocess: 1.4 ms
Inference: 1.4 ms
Postprocess (NMS): 2.3 ms
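
A breakdown like the one above can be collected by timing each pipeline stage separately. The snippet below is a generic sketch of that pattern; the stage bodies are placeholders standing in for the real preprocessing, forward pass, and NMS.

import time
from contextlib import contextmanager

timings = {}

@contextmanager
def stage(name):
    # Record wall-clock time for the enclosed block, in milliseconds
    start = time.perf_counter()
    yield
    timings[name] = (time.perf_counter() - start) * 1000.0

with stage("preprocess"):      # resize/letterbox + normalisation
    time.sleep(0.0014)
with stage("inference"):       # forward pass on the accelerator
    time.sleep(0.0014)
with stage("postprocess"):     # confidence filtering + NMS
    time.sleep(0.0023)

print({name: f"{ms:.1f} ms" for name, ms in timings.items()})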
Usage with LibreYOLO
from libreyolo import YOLO

# Load model
model = YOLO.from_pretrained("https://huggingface.co/Libre-YOLO/yolov8n")

# Run inference
results = model.predict("image.jpg")

# Process results
for box in results.boxes:
    print(f"Class: {box.cls}, Confidence: {box.conf:.2f}")
Tags: production-ready · real-time · edge-friendly
Notes

Nano (smallest) variant of the YOLOv8 family; best suited for edge deployment.
