I followed this tutorial to retrain MobileNet SSD V1 with TensorFlow GPU as described, reached a loss of 0.5 after training on the GPU (more info about the config below), and got model.ckpt. This is the command I used for training: python ../models/research/object_detection/legacy/train.py --logtostderr --train_dir=./data/ --pipeline_config_path=./ssd_mobilenet_v1_pets.config And this is the command for freezing (generating the .pb file): python ../models/research/object_detection/export_inference_graph.py --input_type image_tensor --pipeline_config_path ./ssd_mobilenet_v1_pets.config
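For completeness, a sketch of what the full freezing invocation typically looks like for export_inference_graph.py; the checkpoint step number and the output directory below are placeholders I am assuming, not values taken from the question:

```sh
# Assumed full export command; model.ckpt-XXXX (trained checkpoint step)
# and ./exported_model (output directory) are placeholders.
python ../models/research/object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path ./ssd_mobilenet_v1_pets.config \
    --trained_checkpoint_prefix ./data/model.ckpt-XXXX \
    --output_directory ./exported_model
```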
Tag: object-detection-api
Get the bounding box coordinates in the TensorFlow object detection API tutorial
I am new to both Python and TensorFlow. I am trying to run the object detection tutorial file from the TensorFlow Object Detection API, but I cannot find where to get the coordinates of the bounding boxes when objects are detected. Relevant code: the place where I assume the bounding boxes are drawn is like this: I tried printing output_dict['detection_boxes']
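A minimal sketch of one way to turn the normalized entries of output_dict['detection_boxes'] into pixel coordinates; the helper name, the dummy arrays, and the 0.5 score threshold are my own illustration, while the [ymin, xmin, ymax, xmax] layout and the output_dict keys follow the API's tutorial notebook:

```python
import numpy as np

def boxes_to_pixels(detection_boxes, detection_scores, image_shape,
                    score_threshold=0.5):
    """Convert normalized [ymin, xmin, ymax, xmax] boxes to pixel coords."""
    height, width = image_shape[:2]
    pixel_boxes = []
    for box, score in zip(detection_boxes, detection_scores):
        if score < score_threshold:   # skip low-confidence detections
            continue
        ymin, xmin, ymax, xmax = [float(v) for v in box]
        pixel_boxes.append((xmin * width, ymin * height,
                            xmax * width, ymax * height))
    return pixel_boxes

# Dummy data shaped like the tutorial's output_dict entries:
boxes = np.array([[0.1, 0.2, 0.5, 0.6]])   # normalized coordinates
scores = np.array([0.9])
print(boxes_to_pixels(boxes, scores, image_shape=(480, 640, 3)))
# -> [(128.0, 48.0, 384.0, 240.0)]
```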
How to reduce the number of training steps in TensorFlow’s Object Detection API?
I am following Dat Tran's example to train my own object detector with TensorFlow's Object Detection API. I successfully started to train on the custom objects. I am using a CPU to train the model, but it takes around 3 hours to complete 100 training steps. I suppose I have to change some parameter in the .config file. I tried to convert .ckpt to
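A hedged pointer on where the step count lives, since the question asks which .config parameter to change: the pipeline config (ssd_mobilenet_v1_pets.config in the earlier question) has a num_steps field inside train_config, and lowering it shortens training. The surrounding values below are placeholders, not taken from the question:

```
# Excerpt of a pipeline .config (values are placeholders):
train_config: {
  batch_size: 24
  num_steps: 200   # total training steps; lower this to finish sooner
}
```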