RT-DETR-L

rtdetr

Real-time transformer detector with an HGNetv2 backbone

Parameters

32.0M

FLOPs

110.0G

Input Size

640px

License

Apache-2.0

Architecture

Type

transformer

Backbone

HGNetv2

Neck

HybridEncoder

Head

DETR
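The three architecture stages above form a standard detection pipeline: the backbone extracts multi-scale features, the hybrid encoder fuses them, and the DETR-style head decodes object queries directly into boxes. A minimal sketch of that data flow (shapes, strides, and the 300-query default are illustrative assumptions, not the library's actual implementation):

```python
# Illustrative data flow for RT-DETR-L with a 640x640 input.
# Shapes and module boundaries are assumptions for exposition only.

def backbone(image_hw):
    # HGNetv2 emits feature maps at strides 8, 16, and 32 (assumed).
    h, w = image_hw
    return [(h // s, w // s) for s in (8, 16, 32)]

def hybrid_encoder(feature_shapes):
    # HybridEncoder attends over the coarsest map and fuses scales;
    # the multi-scale shapes are preserved.
    return feature_shapes

def detr_head(feature_shapes, num_queries=300):
    # The DETR decoder maps object queries straight to (box, class)
    # pairs, which is why no NMS step is required.
    return [("box", "class")] * num_queries

shapes = backbone((640, 640))
print(shapes)                                   # [(80, 80), (40, 40), (20, 20)]
print(len(detr_head(hybrid_encoder(shapes))))   # 300
```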

Benchmark Results
Performance on COCO val2017 across different hardware configurations
| Hardware | mAP@50-95 | FPS | Latency | VRAM |
| --- | --- | --- | --- | --- |
| NVIDIA A100 (TensorRT FP16) | 52.9% | 92.9 | 10.8 ms | 1447 MB |
| NVIDIA T4 (TensorRT FP16) | 52.9% | 34.7 | 28.8 ms | 1490 MB |
| CPU (ONNX Runtime) | 53.1% | 4.2 | 235.9 ms | 1407 MB |
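The FPS and latency columns are two views of the same measurement: throughput is approximately 1000 divided by the per-image latency in milliseconds. A quick sanity check of the table's figures:

```python
# Cross-check reported FPS against 1000 / latency_ms for each row.
rows = {
    "A100 (TensorRT FP16)": (92.9, 10.8),
    "T4 (TensorRT FP16)": (34.7, 28.8),
    "CPU (ONNX Runtime)": (4.2, 235.9),
}
for name, (fps, latency_ms) in rows.items():
    implied = 1000 / latency_ms
    print(f"{name}: reported {fps} FPS, implied {implied:.1f} FPS")
```

The reported and implied values agree to within rounding, so the two columns are consistent.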
Speed Breakdown (A100 TensorRT)
End-to-end latency breakdown showing preprocessing, inference, and postprocessing times
Preprocess: 1.2 ms
Inference: 7.1 ms
Postprocess: 2.5 ms

Note that postprocessing here is decoding only: with a DETR head the model is end-to-end and needs no NMS.
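The three stage timings should sum to the end-to-end A100 latency reported in the benchmark table (10.8 ms), which they do:

```python
# Verify the stage breakdown adds up to the end-to-end latency.
stages = {"preprocess": 1.2, "inference": 7.1, "postprocess": 2.5}
total = sum(stages.values())
print(f"total: {total:.1f} ms")  # total: 10.8 ms
```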
Usage with LibreYOLO
from libreyolo import YOLO

# Load model
model = YOLO.from_pretrained("https://huggingface.co/Libre-YOLO/rtdetr-l")

# Run inference
results = model.predict("image.jpg")

# Process results
for box in results.boxes:
    print(f"Class: {box.cls}, Confidence: {box.conf:.2f}")
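A common next step after the loop above is to keep only confident detections. The snippet below uses the same `box.cls` / `box.conf` attributes shown in the usage example; the `Box` dataclass is a stand-in so it runs without the library installed, and with LibreYOLO you would filter `results.boxes` directly:

```python
from dataclasses import dataclass

# Stand-in for a detection result; real boxes come from results.boxes.
@dataclass
class Box:
    cls: str
    conf: float

boxes = [Box("person", 0.91), Box("dog", 0.34), Box("car", 0.78)]

# Keep detections at or above a 0.5 confidence threshold.
keep = [b for b in boxes if b.conf >= 0.5]
print([b.cls for b in keep])  # ['person', 'car']
```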
Tags

transformer, no-nms, efficient