Releases: roboflow/supervision
supervision-0.5.2
supervision-0.5.1
🛠️ Fixed
- Fixed `Detections.__getitem__` method not returning the mask for the selected item.
- Fixed `Detections.area` crashing for mask detections.
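The area fix above concerns mask detections. A minimal numpy sketch of the underlying idea (box area from `xyxy` coordinates, mask area as a foreground-pixel count; illustrative only, not the library's implementation):

```python
import numpy as np

# Box area from (x_min, y_min, x_max, y_max) coordinates.
xyxy = np.array([[0, 0, 10, 5], [2, 2, 4, 4]], dtype=float)
box_area = (xyxy[:, 2] - xyxy[:, 0]) * (xyxy[:, 3] - xyxy[:, 1])  # [50., 4.]

# Mask area as the number of foreground pixels in each boolean mask.
masks = np.zeros((2, 8, 8), dtype=bool)
masks[0, :4, :4] = True   # 16-pixel square
masks[1, 0, :] = True     # 8-pixel row
mask_area = masks.sum(axis=(1, 2))  # [16, 8]
```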
🏆 Contributors
supervision-0.5.0
🚀 Added
- `Detections.mask` to enable segmentation support. (#58)
- `MaskAnnotator` to allow easy `Detections.mask` annotation. (#58)
- `Detections.from_sam` to enable native Segment Anything Model (SAM) support. (#58)

🌱 Changed
- `Detections.area` behaviour to work not only with boxes but also with masks. (#58)
🏆 Contributors
supervision-0.4.0
🚀 Added
- `Detections.empty` to allow easy creation of empty `Detections` objects. (#48)
- `Detections.from_roboflow` to allow easy creation of `Detections` objects from Roboflow API inference results. (#56)
- `plot_images_grid` to allow easy plotting of multiple images on a single plot. (#56)
- Initial support for Pascal VOC XML format with `detections_to_voc_xml` method. (#56)

🌱 Changed
- `show_frame_in_notebook` refactored and renamed to `plot_image`. (#56)
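Pascal VOC annotations nest each box under an `object`/`bndbox` element. The element names below follow the VOC format, but the helper's name and signature are illustrative assumptions, not supervision's `detections_to_voc_xml` API:

```python
import xml.etree.ElementTree as ET

def boxes_to_voc_xml(filename, boxes, labels):
    """Serialize (x_min, y_min, x_max, y_max) boxes into a Pascal VOC
    annotation string. Sketch only -- the library's exporter handles
    additional fields such as image size, truncated, and difficult."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    for (x_min, y_min, x_max, y_max), label in zip(boxes, labels):
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = label
        bndbox = ET.SubElement(obj, "bndbox")
        ET.SubElement(bndbox, "xmin").text = str(x_min)
        ET.SubElement(bndbox, "ymin").text = str(y_min)
        ET.SubElement(bndbox, "xmax").text = str(x_max)
        ET.SubElement(bndbox, "ymax").text = str(y_max)
    return ET.tostring(root, encoding="unicode")

xml_str = boxes_to_voc_xml("image.jpg", [(10, 20, 110, 220)], ["person"])
```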
🏆 Contributors
supervision-0.3.2
supervision-0.3.1
supervision-0.3.0
๐ Added
New methods in sv.Detections
API:
from_transformers
- convert Object Detection ๐ค Transformer result intosv.Detections
from_detectron2
- convert Detectron2 result intosv.Detections
from_coco_annotations
- convert COCO annotation intosv.Detections
area
- dynamically calculated property storing bbox areawith_nms
- initial implementation (only class agnostic) ofsv.Detections
NMS
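Class-agnostic NMS repeatedly keeps the highest-confidence box and suppresses any remaining box whose IoU with it exceeds a threshold, ignoring class labels. A compact numpy sketch of the technique (not the library's implementation):

```python
import numpy as np

def nms(xyxy, confidence, iou_threshold=0.5):
    """Class-agnostic non-maximum suppression; returns indices of kept boxes."""
    order = confidence.argsort()[::-1]  # highest confidence first
    areas = (xyxy[:, 2] - xyxy[:, 0]) * (xyxy[:, 3] - xyxy[:, 1])
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection of box i with every remaining box.
        x1 = np.maximum(xyxy[i, 0], xyxy[rest, 0])
        y1 = np.maximum(xyxy[i, 1], xyxy[rest, 1])
        x2 = np.minimum(xyxy[i, 2], xyxy[rest, 2])
        y2 = np.minimum(xyxy[i, 3], xyxy[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        iou = inter / (areas[i] + areas[rest] - inter)
        order = rest[iou <= iou_threshold]  # drop heavily overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)  # the overlapping second box is suppressed
```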
🌱 Changed
- Make `sv.Detections.confidence` field `Optional`.
🏆 Contributors
supervision-0.2.0
💪 Killer features
- Support for `PolygonZone` and `PolygonZoneAnnotator` 🔥
Code example

```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

# initiate polygon zone
polygon = np.array([
    [1900, 1250],
    [2350, 1250],
    [3500, 2160],
    [1250, 2160]
])
video_info = sv.VideoInfo.from_video_path(MALL_VIDEO_PATH)
zone = sv.PolygonZone(polygon=polygon, frame_resolution_wh=video_info.resolution_wh)

# initiate annotators
box_annotator = sv.BoxAnnotator(thickness=4, text_thickness=4, text_scale=2)
zone_annotator = sv.PolygonZoneAnnotator(zone=zone, color=sv.Color.white(), thickness=6, text_thickness=6, text_scale=4)

# extract video frame
generator = sv.get_video_frames_generator(MALL_VIDEO_PATH)
iterator = iter(generator)
frame = next(iterator)

# detect
model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)
detections = detections[detections.class_id == 0]
zone.trigger(detections=detections)

# annotate
labels = [f"{model.names[class_id]} {confidence:0.2f}" for _, confidence, class_id, _ in detections]
frame = box_annotator.annotate(scene=frame, detections=detections, labels=labels)
frame = zone_annotator.annotate(scene=frame)
```
- Advanced `sv.Detections` filtering with pandas-like API.

```python
detections = detections[(detections.class_id == 0) & (detections.confidence > 0.5)]
```
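The pandas-like syntax works because comparisons on detection fields produce boolean numpy arrays, which then index back into the detections. The mechanics in plain numpy (illustrative):

```python
import numpy as np

class_id = np.array([0, 0, 1, 0])
confidence = np.array([0.9, 0.3, 0.8, 0.7])

# Element-wise comparisons yield boolean masks; `&` combines them.
mask = (class_id == 0) & (confidence > 0.5)  # [True, False, False, True]
selected = np.flatnonzero(mask)              # indices of surviving detections
```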
- Improved integration with `YOLOv5` and `YOLOv8` models.

```python
import torch
import supervision as sv

model = torch.hub.load('ultralytics/yolov5', 'yolov5x6')
results = model(frame, size=1280)
detections = sv.Detections.from_yolov5(results)
```

```python
from ultralytics import YOLO
import supervision as sv

model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)
```
🚀 Added
- `supervision.get_polygon_center` function - takes in a polygon as a 2-dimensional `numpy.ndarray` and returns the center of the polygon as a `Point` object
- `supervision.draw_polygon` function - draw a polygon on a scene
- `supervision.draw_text` function - draw text on a scene
- `supervision.ColorPalette.default()` class method - to generate a default `ColorPalette`
- `supervision.generate_2d_mask` function - generate a 2D mask from a polygon
- `supervision.PolygonZone` class - to define polygon zones and validate if `supervision.Detections` are in the zone
- `supervision.PolygonZoneAnnotator` class - to draw `supervision.PolygonZone` on a scene
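The two core geometric ideas behind zones can be sketched without the library: a polygon center (here approximated as the vertex mean, which may differ from `get_polygon_center`'s exact definition) and a point-in-polygon test, the kind of check a zone performs on detection anchor points:

```python
import numpy as np

def polygon_center(polygon):
    """Approximate the polygon center as the mean of its vertices (sketch)."""
    return polygon.mean(axis=0)

def point_in_polygon(point, polygon):
    """Ray-casting test: is `point` strictly inside `polygon`?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edge crossings of a horizontal ray extending right from the point.
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

square = np.array([[0, 0], [10, 0], [10, 10], [0, 10]])
center = polygon_center(square)          # [5., 5.]
print(point_in_polygon((5, 5), square))  # True
print(point_in_polygon((20, 5), square)) # False
```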
🌱 Changed
- `VideoInfo` API - changed the property name `resolution` -> `resolution_wh` to make it more descriptive; converted `VideoInfo` to a `dataclass`
- `process_frame` API - changed argument name `frame` -> `scene` to make it consistent with other classes and methods
- `LineCounter` API - renamed class `LineCounter` -> `LineZone` to make it consistent with `PolygonZone`
- `LineCounterAnnotator` API - renamed class `LineCounterAnnotator` -> `LineZoneAnnotator`
🏆 Contributors
supervision-0.1.0
🚀 Added
- Add project license
- `DEFAULT_COLOR_PALETTE`, `Color`, and `ColorPalette` classes
- Initial implementation of `Point`, `Vector`, and `Rect` classes
- `VideoInfo` and `VideoSink` classes as well as `get_video_frames_generator`
- `show_frame_in_notebook` util
- `draw_line`, `draw_rectangle`, `draw_filled_rectangle` utils added
- Initial version of `Detections` and `BoxAnnotator` added
- Initial implementation of `LineCounter` and `LineCounterAnnotator` classes
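The geometry primitives introduced here can be pictured as simple dataclasses. The field names and the `bottom_right` helper below are assumptions for illustration, not supervision's exact API:

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

@dataclass
class Vector:
    start: Point
    end: Point

@dataclass
class Rect:
    x: float
    y: float
    width: float
    height: float

    @property
    def bottom_right(self) -> Point:
        # Opposite corner from the (x, y) origin of the rectangle.
        return Point(self.x + self.width, self.y + self.height)

r = Rect(x=10, y=20, width=100, height=50)
corner = r.bottom_right  # Point(x=110, y=70)
```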