Run Inference

On any of the Docker containers, you can run a sample inference to produce an output video:

python3 inference.py --device DEVICE --input_video INPUT_VIDEO --out_dir OUT_DIR \
                [--model_path MODEL_PATH] [--threshold THRESHOLD]  [--input_width INPUT_WIDTH]\
                [--input_height INPUT_HEIGHT] [--out_width OUT_WIDTH] [--out_height OUT_HEIGHT]

Where:

DEVICE should be one of x86, edgetpu, or jetson.

INPUT_VIDEO is the path to the input video file.

OUT_DIR is the directory in which the output video file will be saved.

MODEL_PATH is the path to the model file or directory. For x86 devices it should be a directory that contains the saved_model directory, for edgetpu devices it should be a compiled TFLite file, and for jetson devices it should be a TensorRT engine file.

THRESHOLD is the detector's confidence threshold for detecting objects.

INPUT_WIDTH and INPUT_HEIGHT are the width and height of the model's input.

OUT_WIDTH and OUT_HEIGHT are the width and height of the output video.
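
For example, on an x86 container an invocation might look like the following. The video path, model directory, and threshold value below are placeholders; substitute the paths and values that match your setup (and use a TFLite file or TensorRT engine for edgetpu or jetson devices, respectively):

python3 inference.py --device x86 --input_video /data/videos/sample.mp4 --out_dir /data/output \
                --model_path /data/models/my_saved_model_dir --threshold 0.5 \
                --input_width 300 --input_height 300 --out_width 1280 --out_height 720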