Model Zoo¶
Using pretrained models¶
Some popular models are already built in nnio. Example of using SSD MobileNet object detection model on CPU:
import nnio

# Load model
model = nnio.zoo.onnx.detection.SSDMobileNetV1()
# Get preprocessing function
preproc = model.get_preprocessing()
# Preprocess your numpy image
image = preproc(image_rgb)
# Make prediction
boxes = model(image)
Here boxes is a list of nnio.DetectionBox instances.
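A common next step is to filter the returned boxes by confidence. The sketch below uses a namedtuple as a stand-in for nnio.DetectionBox; the attribute names label and score are assumptions made for illustration, so check the actual DetectionBox fields before relying on them:

```python
from collections import namedtuple

# Stand-in for nnio.DetectionBox; `label` and `score` are assumed
# attribute names, used here only to illustrate the filtering pattern.
Box = namedtuple("Box", ["label", "score"])

# With nnio installed, `boxes` would come from `model(image)`.
boxes = [Box("person", 0.91), Box("dog", 0.42), Box("person", 0.77)]

# Keep only confident detections
confident = [b for b in boxes if b.score >= 0.5]
print([b.label for b in confident])  # prints ['person', 'person']
```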
ONNX¶
Classification¶
- class nnio.zoo.onnx.classification.MobileNetV2¶
MobileNetV2 classifier trained on ImageNet
Model is taken from the ONNX Model Zoo.
- __init__()¶
- forward(image, return_scores=False, return_info=False)¶
- Parameters
image – np array. Input image
return_scores – bool. If True, return class scores.
return_info – bool. If True, return inference time.
- Returns
str: class label.
- get_preprocessing()¶
- Returns
nnio.Preprocessing object.
- property labels¶
- Returns
list of ImageNet classification labels
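With return_scores=True the call also yields per-class scores. As a minimal sketch of picking the top-k classes from such a score vector (the label list and scores below are stand-ins, not real model output):

```python
import numpy as np

# Stand-in score vector; with nnio this would come from
#   label, scores = model(image, return_scores=True)
labels = ["cat", "dog", "car", "plane"]
scores = np.array([0.1, 0.7, 0.15, 0.05])

# Indices of the top-2 classes, highest score first
top2 = np.argsort(scores)[::-1][:2]
print([labels[i] for i in top2])  # prints ['dog', 'car']
```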
Detection¶
- class nnio.zoo.onnx.detection.SSDMobileNetV1¶
SSDMobileNetV1 object detection model trained on COCO dataset.
Model is taken from the ONNX Model Zoo.
Here is the webcam demo of this model working.
- __init__()¶
- forward(image, return_info=False)¶
- Parameters
image – np array. Input image
return_info – bool. If True, return inference time.
- Returns
list of nnio.DetectionBox
- get_preprocessing()¶
- Returns
nnio.Preprocessing object.
- property labels¶
- Returns
list of COCO labels
Re-Identification¶
- class nnio.zoo.onnx.reid.OSNet¶
Omni-Scale Feature Network for Person Re-ID taken from here and converted to onnx.
Here is the webcam demo of this model working.
- __init__()¶
- forward(image, return_info=False)¶
- Parameters
image – np array. Input image of a person.
return_info – bool. If True, return inference time.
- Returns
np.array of shape [512] – person appearance vector. Vectors can be compared by cosine or Euclidean distance.
- get_preprocessing()¶
- Returns
nnio.Preprocessing object.
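As a sketch of comparing two appearance vectors by cosine similarity (the vectors here are random stand-ins; real ones would come from the OSNet model as shown in the comment):

```python
import numpy as np

# With nnio, appearance vectors would come from the model:
#   model = nnio.zoo.onnx.reid.OSNet()
#   vec = model(preproc(person_image))  # shape [512]
def cosine_similarity(a, b):
    """Cosine similarity of two 1-D vectors; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
vec_a = rng.normal(size=512)
vec_b = vec_a + 0.1 * rng.normal(size=512)  # a slightly perturbed copy
vec_c = rng.normal(size=512)                # an unrelated vector

print(cosine_similarity(vec_a, vec_b))  # close to 1: likely the same person
print(cosine_similarity(vec_a, vec_c))  # near 0: likely different people
```

A threshold on this similarity (tuned on your own data) then decides whether two detections are the same person.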
OpenVINO¶
Detection¶
- class nnio.zoo.openvino.detection.SSDMobileNetV2(device='CPU', lite=True, threshold=0.5)¶
SSDMobileNetV2 object detection model trained on COCO dataset.
Model is converted to the OpenVINO format.
Here is the webcam demo of an analogous model (nnio.zoo.onnx.detection.SSDMobileNetV1) working.
- __init__(device='CPU', lite=True, threshold=0.5)¶
- Parameters
device – str. Choose Intel device: CPU, GPU, MYRIAD. If there are multiple devices in your system, you can use indices, e.g. MYRIAD:0, but this is not recommended since Intel automatically chooses a free device.
lite – bool. If True, use the SSDLite version, a lighter variant of the model.
threshold – float. Detection threshold. It affects the sensitivity of the detector.
- forward(image, return_info=False)¶
- Parameters
image – np array. Input image
return_info – bool. If True, return inference time.
- Returns
list of nnio.DetectionBox
- get_preprocessing()¶
- Returns
nnio.Preprocessing object.
- property labels¶
- Returns
list of COCO labels
Re-Identification¶
- class nnio.zoo.openvino.reid.OSNet(device='CPU')¶
Omni-Scale Feature Network for Person Re-ID taken from here and converted to openvino.
Here is the webcam demo of this model (onnx version) working.
- __init__(device='CPU')¶
- Parameters
device – str. Choose Intel device: CPU, GPU, MYRIAD. If there are multiple devices in your system, you can use indices, e.g. MYRIAD:0, but this is not recommended since Intel automatically chooses a free device.
- forward(image, return_info=False)¶
- Parameters
image – np array. Input image of a person.
return_info – bool. If True, return inference time.
- Returns
np.array of shape [512] – person appearance vector. Vectors can be compared by cosine or Euclidean distance.
- get_preprocessing()¶
- Returns
nnio.Preprocessing object.
EdgeTPU¶
Classification¶
- class nnio.zoo.edgetpu.classification.MobileNet(device='CPU', version='v2')¶
MobileNet V2 (or V1) classifier trained on ImageNet
Model is taken from the google-coral repo
- __init__(device='CPU', version='v2')¶
- Parameters
device – str. CPU by default. Set TPU or TPU:0 to use the first EdgeTPU device, TPU:1 to use the second EdgeTPU device, etc.
version – str. Either v1 or v2.
- forward(image, return_scores=False)¶
- Parameters
image – np array. Input image
return_scores – bool. If True, return class scores.
- Returns
str: class label.
- get_preprocessing()¶
- Returns
nnio.Preprocessing object.
- property labels¶
- Returns
list of ImageNet classification labels
Detection¶
- class nnio.zoo.edgetpu.detection.SSDMobileNet(device='CPU', version='v2', threshold=0.5)¶
MobileNet V2 (or V1) SSD object detector trained on COCO dataset.
Model is taken from the google-coral repo.
Here is the webcam demo of an analogous model (nnio.zoo.onnx.detection.SSDMobileNetV1) working.
- __init__(device='CPU', version='v2', threshold=0.5)¶
- Parameters
device – str. CPU by default. Set TPU or TPU:0 to use the first EdgeTPU device, TPU:1 to use the second EdgeTPU device, etc.
version – str. Either v1 or v2.
threshold – float. Detection threshold. Affects the detector’s sensitivity.
- forward(image, return_info=False)¶
- Parameters
image – np array. Input image
return_info – bool. If True, return inference time.
- Returns
list of nnio.DetectionBox
- get_preprocessing()¶
- Returns
nnio.Preprocessing object.
- property labels¶
- Returns
list of COCO labels
- class nnio.zoo.edgetpu.detection.SSDMobileNetFace(device='CPU', threshold=0.5)¶
MobileNet V2 SSD face detector.
Model is taken from the google-coral repo.
- __init__(device='CPU', threshold=0.5)¶
- Parameters
device – str. CPU by default. Set TPU or TPU:0 to use the first EdgeTPU device, TPU:1 to use the second EdgeTPU device, etc.
threshold – float. Detection threshold. Affects the detector’s sensitivity.
- forward(image, return_info=False)¶
- Parameters
image – np array. Input image
return_info – bool. If True, return inference time.
- Returns
list of nnio.DetectionBox
- get_preprocessing()¶
- Returns
nnio.Preprocessing object.
Re-Identification¶
- class nnio.zoo.edgetpu.reid.OSNet(device='CPU')¶
Omni-Scale Feature Network for Person Re-ID taken from torchreid and converted to tflite.
This is the quantized version. It is not as accurate as its onnx and openvino versions.
Here is the webcam demo of this model (onnx version) working.
- __init__(device='CPU')¶
- Parameters
device – str. CPU by default. Set TPU or TPU:0 to use the first EdgeTPU device, TPU:1 to use the second EdgeTPU device, etc.
- forward(image, return_info=False)¶
- Parameters
image – np array. Input image of a person.
return_info – bool. If True, return inference time.
- Returns
np.array of shape [512] – person appearance vector. Vectors can be compared by cosine or Euclidean distance.
- get_preprocessing()¶
- Returns
nnio.Preprocessing object.
Segmentation¶
- class nnio.zoo.edgetpu.segmentation.DeepLabV3(device='CPU')¶
DeepLabV3 semantic segmentation model trained on the Pascal VOC dataset.
Model is taken from the google-coral repo.
- __init__(device='CPU')¶
- Parameters
device – str. CPU by default. Set TPU or TPU:0 to use the first EdgeTPU device, TPU:1 to use the second EdgeTPU device, etc.
- forward(image)¶
- Parameters
image – np array. Input image
- Returns
numpy array of shape [batch, 513, 513]. Each pixel holds an integer class index. Class labels are available through the .labels attribute of this object.
- get_preprocessing()¶
- Returns
nnio.Preprocessing object.
- property labels¶
- Returns
list of Pascal VOC labels
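As a sketch of working with the returned segmentation map, the snippet below counts pixels per class with plain NumPy. The map here is a synthetic stand-in with the documented [batch, 513, 513] shape; with nnio it would come from calling the model as shown in the comment:

```python
import numpy as np

# Stand-in segmentation map; with nnio this would be
#   seg_map = model(preproc(image))  # shape [batch, 513, 513], int class ids
seg_map = np.zeros((1, 513, 513), dtype=np.int64)
seg_map[0, 100:200, 100:300] = 15  # a rectangular region of one class

# Count how many pixels each class occupies
classes, counts = np.unique(seg_map, return_counts=True)
for cls, n in zip(classes, counts):
    print(f"class {cls}: {n} pixels")
```

The class indices can then be mapped to names via the model's .labels property.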