Java model inference library for the TensorFlow Object Detection API. Allows real-time localization and identification of multiple objects in a single image or a batch of images. Works with all pre-trained zoo models and https://github.com/tensorflow/models/tree/865c14c/research/object_detection/data[object labels].
The ObjectDetectionService
takes an image or a batch of images and outputs a list of predicted object bounding boxes,
represented by ObjectDetection.
For models supporting Instance Segmentation,
the detected object masks are returned as well. The JsonMapperFunction permits
converting the list of ObjectDetection instances into a JSON array.
Add the object-detection
dependency to the pom (use the latest version available):
<dependency>
    <groupId>org.springframework.cloud.fn</groupId>
    <artifactId>object-detection-function</artifactId>
    <version>${revision}</version>
</dependency>
The ExampleObjectDetection.java
sample demonstrates how to use the ObjectDetectionService
for detecting objects in input images. It also shows how to
convert the result into JSON format and augment the input image with the detected object bounding boxes.
ObjectDetectionService detectionService = new ObjectDetectionService(
"https://download.tensorflow.org/models/object_detection/faster_rcnn_nas_coco_2018_01_28.tar.gz#frozen_inference_graph.pb", //(1)
"https://raw.githubusercontent.com/tensorflow/models/master/research/object_detection/data/mscoco_label_map.pbtxt", //(2)
0.4f, //(3)
false, //(4)
true); //(5)
byte[] image = GraphicsUtils.loadAsByteArray("classpath:/images/object-detection.jpg"); //(6)
List<ObjectDetection> detectedObjects = detectionService.detect(image); //(7)
1. Downloads and loads the pre-trained frozen_inference_graph.pb model directly from the faster_rcnn_nas_coco.tar.gz archive in the TensorFlow model zoo. Mind that the first run will download a few hundred MB. Subsequent runs will use the cached copy (5) instead.
2. Object category labels (e.g. names) for the model.
3. Confidence threshold - only objects with an estimate above the threshold are returned.
4. Indicates that this is not a mask (i.e. not an instance segmentation) model type.
5. Cache the model on the local file system.
6. Load the input image to evaluate.
7. Detect the objects in the image and represent the result as a list of ObjectDetection instances.
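The confidence-threshold step in (3) boils down to a simple filter over the per-detection estimates. The following standalone sketch illustrates the idea; the estimate values are illustrative, not actual model output:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ThresholdSketch {

    // Keeps only the detections whose confidence estimate exceeds the threshold.
    public static List<Float> filterByThreshold(List<Float> estimates, float threshold) {
        return estimates.stream()
                .filter(e -> e > threshold)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // With the 0.4f threshold from the example above:
        List<Float> kept = filterByThreshold(List.of(0.998f, 0.35f, 0.62f, 0.1f), 0.4f);
        System.out.println(kept); // [0.998, 0.62]
    }
}
```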
Next you can convert the result into JSON format.
String jsonObjectDetections = new JsonMapperFunction().apply(detectedObjects);
System.out.println(jsonObjectDetections);
[{"name":"person","estimate":0.998,"x1":0.160,"y1":0.774,"x2":0.201,"y2":0.946,"cid":1},
{"name":"kite","estimate":0.998,"x1":0.437,"y1":0.089,"x2":0.495,"y2":0.169,"cid":38},
{"name":"person","estimate":0.997,"x1":0.084,"y1":0.681,"x2":0.121,"y2":0.848,"cid":1},
{"name":"kite","estimate":0.988,"x1":0.206,"y1":0.263,"x2":0.225,"y2":0.314,"cid":38}]
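The x1/y1/x2/y2 values in the JSON output appear to be normalized to the [0, 1] range, so scaling them by the image dimensions yields absolute pixel coordinates. A minimal sketch; the 640x480 image size is hypothetical:

```java
public class BoundingBoxSketch {

    // Scales a normalized [0..1] coordinate to an absolute pixel value.
    public static int toPixels(float normalized, int imageSize) {
        return Math.round(normalized * imageSize);
    }

    public static void main(String[] args) {
        // First "person" detection from the JSON above, projected onto a
        // hypothetical 640x480 image.
        int x1 = toPixels(0.160f, 640); // 102
        int y1 = toPixels(0.774f, 480); // 372
        int x2 = toPixels(0.201f, 640); // 129
        int y2 = toPixels(0.946f, 480); // 454
        System.out.println("person box: (" + x1 + "," + y1 + ") -> (" + x2 + "," + y2 + ")");
    }
}
```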
Use the ObjectDetectionImageAugmenter to draw the detected objects on top of the input image.
byte[] annotatedImage = new ObjectDetectionImageAugmenter().apply(image, detectedObjects); // (1)
IOUtils.write(annotatedImage, new FileOutputStream("./object-detection-function/target/object-detection-augmented.jpg")); //(2)
1. Augments the image with the detected object bounding boxes (uses Java2D internally).
2. Stores the augmented image as an object-detection-augmented.jpg image file.
Tip
|
Set the ObjectDetectionImageAugmenter#agnosticColors property to true to use a monochrome color scheme.
|
The ExampleInstanceSegmentation.java
sample shows how to use the ObjectDetectionService
for Instance Segmentation.
NOTE: It requires a trained model that supports masks,
as well as setting the instance segmentation (e.g. useMasks)
flag to true.
ObjectDetectionService detectionService = new ObjectDetectionService(
"https://download.tensorflow.org/models/object_detection/mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz#frozen_inference_graph.pb", // (1)
"https://raw.githubusercontent.com/tensorflow/models/master/research/object_detection/data/mscoco_label_map.pbtxt", // (2)
0.4f, // (3)
true, // (4)
true); // (5)
byte[] image = GraphicsUtils.loadAsByteArray("classpath:/images/object-detection.jpg");
List<ObjectDetection> detectedObjects = detectionService.detect(image); // (6)
String jsonObjectDetections = new JsonMapperFunction().apply(detectedObjects); // (7)
System.out.println(jsonObjectDetections);
byte[] annotatedImage = new ObjectDetectionImageAugmenter(true) // (8)
.apply(image, detectedObjects);
IOUtils.write(annotatedImage, new FileOutputStream("./object-detection-function/target/object-detection-segmentation-augmented.jpg"));
1. Uses one of the four pre-trained MASK models.
2. Object category labels (e.g. names) for the model.
3. Confidence threshold - only objects with an estimate above the threshold are returned.
4. Use masks output - instructs the pre-trained model to use the extended fetch names, which include the instance segmentation masks as well.
5. Cache model - create a local copy of the model to speed up consecutive runs.
6. Evaluate the model to predict the objects in the input image.
7. Convert the detected objects into a JSON array. NOTE: with masks there is an additional field: mask.
8. Draw the detected objects on top of the input image. Mind that the true constructor parameter stands for drawing the detected masks. If false, only the bounding boxes are shown.
All pre-trained detection_model_zoo.md models are supported. The following URI notation can be used to download any of the models directly from the zoo.
http://<zoo model tar.gz url>#frozen_inference_graph.pb
The frozen_inference_graph.pb
is the frozen model file name within the archive.
Note
|
For some models this name may differ. You have to download and open the archive to find the real name. |
Tip
|
To speed up the bootstrap you may consider extracting the frozen_inference_graph.pb file and caching it
locally. Then you can use the file://path-to-my-local-copy URI scheme to access it.
|
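Putting the notation together: the model URI is the archive URL, a `#`, and the graph file name inside the archive; an extracted local copy can be referenced with the file: scheme instead. A small sketch - the ssd_mobilenet archive is just one example from the zoo, and the local path is hypothetical:

```java
public class ModelUriSketch {

    // Builds the "<archive-url>#<graph-file>" URI described above.
    public static String zooModelUri(String archiveUrl, String graphFile) {
        return archiveUrl + "#" + graphFile;
    }

    public static void main(String[] args) {
        String remote = zooModelUri(
                "https://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz",
                "frozen_inference_graph.pb");
        // Hypothetical path to a locally extracted copy of the frozen graph:
        String local = "file:///tmp/models/frozen_inference_graph.pb";
        System.out.println(remote);
        System.out.println(local);
    }
}
```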
The following models can be used for Instance Segmentation
as well:
In addition to the model, the ObjectDetectionService
requires a list of labels that correspond to the categories detectable by the selected model.
All labels files are available in the object_detection/data folder.
Note
|
It is important to use the labels that correspond to the model being used! The table below highlights this mapping. |
Model | Labels |
---|---|
Tip
|
For performance reasons you may consider downloading the required label files to the local file system. |