Vision Analysis

YOLO-NAS-M

Family: yolonas

A one-stage detector with a Neural Architecture Search (AutoNAC) backbone.

Parameters: 51.1M
GFLOPs: 88.9
Input Size: 640px
Best mAP: 50.5%
License: Apache-2.0

Architecture

Type: one-stage
Backbone: Neural Architecture Search (AutoNAC)
Neck: PANet
Head: Decoupled (anchor-free, distribution focal loss)
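A head trained with distribution focal loss predicts each box edge as a discrete distribution over integer bins rather than a single scalar; at inference, the edge offset is decoded as the expectation of that distribution. A minimal decode sketch in NumPy (the bin count and logits are illustrative, not the model's actual values):

```python
import numpy as np

def dfl_decode(logits):
    # logits: (n_bins,) raw scores for one box edge.
    # Softmax over bins, then take the expected bin index as the offset.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return float(np.dot(probs, np.arange(len(logits))))

logits = np.array([0.1, 2.0, 0.3, 0.0])  # distribution peaks at bin 1
print(dfl_decode(logits))                # decoded offset lands near bin 1
```

Because the output is an expectation over bins, the decoded offset is continuous even though the bins are integers, which is what lets an anchor-free head localize at sub-bin precision.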

Benchmark Results

Performance on COCO val2017 across different hardware configurations

Hardware      Runtime        mAP@50-95   FPS    Latency   VRAM
NVIDIA A100   PyTorch FP32   50.5%       17.0   58.9 ms   363 MB

Speed Breakdown (NVIDIA A100)

Preprocess: 4.5 ms
Inference: 20.3 ms
Postprocess (NMS): 34.1 ms
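The stage times above account for the end-to-end figure in the benchmark table: they sum to the 58.9 ms latency, whose reciprocal gives the 17.0 FPS (assuming sequential, batch-1 processing). A quick sanity check:

```python
# Stage timings from the breakdown above, in milliseconds
stages_ms = {"preprocess": 4.5, "inference": 20.3, "postprocess_nms": 34.1}

total_ms = sum(stages_ms.values())  # end-to-end latency: 58.9 ms
fps = 1000.0 / total_ms             # throughput at batch 1: ~17.0 FPS
print(round(total_ms, 1), round(fps, 1))
```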

Usage with LibreYOLO

from libreyolo import LIBREYOLO

# Load model (auto-downloads from HuggingFace if not found locally)
model = LIBREYOLO("libreyolonasm.pth")

# Run inference
result = model("image.jpg", conf=0.25, iou=0.45)

# Process results
print(f"Found {len(result)} objects")
print(result.boxes.xyxy)   # bounding boxes (N, 4)
print(result.boxes.conf)   # confidence scores (N,)
print(result.boxes.cls)    # class IDs (N,)
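The three `boxes` arrays share a common first dimension, so post-hoc filtering is a single boolean mask. A sketch using NumPy stand-ins for the arrays (the values are invented; only the shapes follow the layout listed above):

```python
import numpy as np

# Stand-ins shaped like result.boxes.{xyxy, conf, cls}; values are made up
xyxy = np.array([[12.0, 30.0, 110.0, 220.0],
                 [50.0, 60.0, 90.0, 100.0]], dtype=np.float32)
conf = np.array([0.92, 0.31], dtype=np.float32)
cls = np.array([0, 56], dtype=np.int64)

# Keep only detections above a stricter confidence threshold
keep = conf >= 0.5
print(xyxy[keep].shape, cls[keep].tolist())
```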
Tags: anchor-free, nms
Paper: 51.55% mAP
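Since the postprocess stage runs NMS, here is a minimal greedy NMS sketch using the same iou=0.45 threshold as the usage example above. This is an illustration of the algorithm, not the library's internal implementation:

```python
import numpy as np

def box_iou(a, b):
    # Intersection-over-union of two xyxy boxes
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def nms(boxes, scores, iou_thr=0.45):
    # Greedy NMS: keep the best-scoring box, drop boxes that overlap it
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = int(order[0])
        keep.append(i)
        rest = order[1:]
        mask = np.array([box_iou(boxes[i], boxes[j]) <= iou_thr for j in rest],
                        dtype=bool)
        order = rest[mask]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # the second box overlaps the first and is dropped
```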

Related Models (yolonas)