
Crowd-Behavior-Analysis-using-Faster-RCNN

Crowd Analysis for Congestion Control Early Warning System on Foot Over Bridge.

This repository walks through the setup of a TensorFlow 1.x based object detection model, as used in our paper.

Steps to follow

  • Set up the object detection directory structure and a TensorFlow-GPU enabled Anaconda virtual environment.
  • Generate the labelled dataset.
  • Generate label map and configure the training parameters.
  • Train the object detector model and export the inference graph for testing.

Step 1: Setup

Download the following files:

  • Tensorflow object detection API repository from here.
  • An object detection model from here - we used the Faster R-CNN Inception V2 COCO model.
  • Download the files from this repository. Extract the Tensorflow object detection API repository and navigate to
<model-master>/research/object_detection/

Extract the object detection model and the files from this repository there. Then append the following paths to the PYTHONPATH environment variable:

PYTHONPATH=<Model master path>;<Model master path>\research;<Model master path>\research\slim;
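
For a single session this can be done in the command prompt with set (or persistently with setx); <Model master path> stands for your extraction directory:

C:\> set PYTHONPATH=<Model master path>;<Model master path>\research;<Model master path>\research\slim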

Step 2: Anaconda virtual environment

I recommend using Miniconda instead of Anaconda because it ships with a minimal set of pre-installed packages.

Open the anaconda command prompt and type the following:

Note: Before installing tensorflow-gpu you need to install CUDA and cuDNN in compatible versions. For more details, refer here.

- C:\> conda create -n objdet python=3.6
- C:\> activate objdet
- (objdet) C:\> pip3 install tensorflow-gpu==1.12
- (objdet) C:\> conda install -c anaconda protobuf
- (objdet) C:\> pip3 install pillow
- (objdet) C:\> pip3 install lxml
- (objdet) C:\> pip3 install Cython
- (objdet) C:\> pip3 install pandas
- (objdet) C:\> pip3 install contextlib2
- (objdet) C:\> pip3 install matplotlib
- (objdet) C:\> pip3 install opencv-python
- (objdet) C:\> pip3 install jupyter
- (objdet) C:\> cd C:\<model-master>\research
- (objdet) C:\> protoc --python_out=. .\object_detection\protos\anchor_generator.proto .\object_detection\protos\argmax_matcher.proto .\object_detection\protos\bipartite_matcher.proto .\object_detection\protos\box_coder.proto .\object_detection\protos\box_predictor.proto .\object_detection\protos\eval.proto .\object_detection\protos\faster_rcnn.proto .\object_detection\protos\faster_rcnn_box_coder.proto .\object_detection\protos\grid_anchor_generator.proto .\object_detection\protos\hyperparams.proto .\object_detection\protos\image_resizer.proto .\object_detection\protos\input_reader.proto .\object_detection\protos\losses.proto .\object_detection\protos\matcher.proto .\object_detection\protos\mean_stddev_box_coder.proto .\object_detection\protos\model.proto .\object_detection\protos\optimizer.proto .\object_detection\protos\pipeline.proto .\object_detection\protos\post_processing.proto .\object_detection\protos\preprocessor.proto .\object_detection\protos\region_similarity_calculator.proto .\object_detection\protos\square_box_coder.proto .\object_detection\protos\ssd.proto .\object_detection\protos\ssd_anchor_generator.proto .\object_detection\protos\string_int_label_map.proto .\object_detection\protos\train.proto .\object_detection\protos\keypoint_box_coder.proto .\object_detection\protos\multiscale_anchor_generator.proto .\object_detection\protos\graph_rewriter.proto .\object_detection\protos\calibration.proto .\object_detection\protos\flexible_grid_anchor_generator.proto
- (objdet) C:\<model-master>\research> python setup.py build
- (objdet) C:\<model-master>\research> python setup.py install

You can test the setup as follows:

(objdet) C:\<model-master>\research\object_detection> jupyter notebook object_detection_tutorial.ipynb

The jupyter notebook should execute without any errors.
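
As an extra sanity check, you can verify inside the objdet environment that TensorFlow imports and detects the GPU (a minimal TF 1.x sketch):

import tensorflow as tf

print(tf.__version__)              # expect 1.12.0
print(tf.test.is_gpu_available())  # expect True once CUDA/cuDNN are set up correctly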

Step 3: Dataset

You are required to generate TFRecords to feed the model. These can be generated from .xml files that contain features such as the filename, width, height, class, and the bounding box coordinates xmin, ymin, xmax, and ymax.

  • Collect images containing the object of interest (heads, in our case).
  • Label the images using the LabelImg tool (GitHub). This generates a .xml file for each image.
  • Convert the .xml files to .csv files using xml_to_csv.py as follows:

(objdet) C:\<model-master>\research\object_detection> python xml_to_csv.py

The files test_labels.csv and train_labels.csv will be generated in the \object_detection\images folder.
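
For reference, this conversion walks the LabelImg .xml annotations and flattens each bounding box into one CSV row. A minimal sketch of the idea (the repository's xml_to_csv.py may differ in details):

import glob
import os
import xml.etree.ElementTree as ET

import pandas as pd


def xml_to_csv(xml_dir):
    # One output row per bounding box in each Pascal VOC .xml file.
    rows = []
    for xml_file in glob.glob(os.path.join(xml_dir, '*.xml')):
        root = ET.parse(xml_file).getroot()
        width = int(root.find('size/width').text)
        height = int(root.find('size/height').text)
        for obj in root.findall('object'):
            box = obj.find('bndbox')
            rows.append((root.find('filename').text,
                         width, height,
                         obj.find('name').text,
                         int(box.find('xmin').text),
                         int(box.find('ymin').text),
                         int(box.find('xmax').text),
                         int(box.find('ymax').text)))
    columns = ['filename', 'width', 'height', 'class',
               'xmin', 'ymin', 'xmax', 'ymax']
    return pd.DataFrame(rows, columns=columns)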

  • Open the C:\<model-master>\research\object_detection\generate_tfrecord.py file and edit the row_label conditions to match your class label names (see the sketch after the commands below).
  • Generate the TFRecords with the following commands:

(objdet) C:\<model-master>\research\object_detection> python generate_tfrecord.py --csv_input=images\train_labels.csv --image_dir=images\train --output_path=train.record

(objdet) C:\<model-master>\research\object_detection> python generate_tfrecord.py --csv_input=images\test_labels.csv --image_dir=images\test --output_path=test.record
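
The row_label edit mentioned above maps each class name to its integer id from the label map; in the commonly used generate_tfrecord.py template the function looks roughly like this (edit the conditions to match your own classes):

def class_text_to_int(row_label):
    # Return the id assigned to this class in labelmap.pbtxt.
    if row_label == 'head':
        return 1
    else:
        return None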

You can also build your own dataset using the Open Images Dataset.

Step 4: Label map and configuration

  • Create a new file, labelmap.pbtxt, at C:\<model-master>\research\object_detection\training\.
  • Edit the file using the format below to represent the different classes you need. In our example we need only one class, i.e. head.
item {
  id: 1
  name: 'head'
}
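
Each class gets its own item block with a unique id. For a hypothetical two-class problem (the second class name is purely illustrative) the file would look like:

item {
  id: 1
  name: 'head'
}
item {
  id: 2
  name: 'helmet'
}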
  • To configure the training pipeline, go to C:\<model-master>\research\object_detection\samples\configs and copy the faster_rcnn_inception_v2_pets.config file into C:\<model-master>\research\object_detection\training\.
  • Edit the faster_rcnn_inception_v2_pets.config file as below:
    • Update the num_classes parameter at line 9 as per your requirement (for me it was 1).
    • At line 106 change fine_tune_checkpoint to "C:/<model-master>/research/object_detection/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt".
    • At lines 123 and 125 update the input_path and label_map_path to "C:/<model-master>/research/object_detection/train.record" and "C:/<model-master>/research/object_detection/training/labelmap.pbtxt" respectively.
    • Update the num_examples parameter at line 130 with the number of images in the \images\test directory.
    • At lines 135 and 137 update the input_path and label_map_path to "C:/<model-master>/research/object_detection/test.record" and "C:/<model-master>/research/object_detection/training/labelmap.pbtxt" respectively (see the excerpt below).
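
After these edits, the affected fragments of the config should look roughly as follows (the exact line numbers can shift between API versions, so search by field name if they do):

model {
  faster_rcnn {
    num_classes: 1
    ...
  }
}
train_config: {
  fine_tune_checkpoint: "C:/<model-master>/research/object_detection/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"
  ...
}
train_input_reader: {
  tf_record_input_reader {
    input_path: "C:/<model-master>/research/object_detection/train.record"
  }
  label_map_path: "C:/<model-master>/research/object_detection/training/labelmap.pbtxt"
}
eval_config: {
  num_examples: <number of test images>
  ...
}
eval_input_reader: {
  tf_record_input_reader {
    input_path: "C:/<model-master>/research/object_detection/test.record"
  }
  label_map_path: "C:/<model-master>/research/object_detection/training/labelmap.pbtxt"
  ...
}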

Step 5: Begin training

Enter the following command to begin training of your model:

(objdet) C:\<model-master>\research\object_detection>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config

To view the training progress you can use tensorboard as follows:

(objdet) C:\<model-master>\research\object_detection>tensorboard --logdir=training

Once training is finished you can export the inference graph to \object_detection\inference_graph with the following command, replacing XXXX with the step number of the latest checkpoint in the training folder:

(objdet) C:\<model-master>\research\object_detection>python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training/model.ckpt-XXXX --output_directory inference_graph

Step 6: Test the model

I have shared my code, which detects human heads and displays an Alert message if there is congestion. More details can be found in our paper, listed in the Citation section.

You may need to update some path variables before running it.
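
For orientation, the core of a TF 1.x test script of this kind loads the exported frozen graph, runs the detection tensors on an image, and thresholds the results. A minimal sketch (file names and the congestion threshold are assumptions, not the exact values used in the paper):

import cv2
import numpy as np
import tensorflow as tf

PATH_TO_GRAPH = 'inference_graph/frozen_inference_graph.pb'  # exported in Step 5

# Load the frozen inference graph.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    image = cv2.imread('test.jpg')                       # example image path
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image_expanded = np.expand_dims(image_rgb, axis=0)   # shape [1, H, W, 3]

    # Standard output tensors of a TF object detection API frozen graph.
    boxes, scores, classes, num = sess.run(
        [graph.get_tensor_by_name(name + ':0') for name in
         ('detection_boxes', 'detection_scores',
          'detection_classes', 'num_detections')],
        feed_dict={graph.get_tensor_by_name('image_tensor:0'): image_expanded})

    # Count confident head detections; raise an alert above a threshold.
    head_count = int((scores[0] > 0.5).sum())
    if head_count > 20:  # assumed congestion threshold
        print('Alert: possible congestion ({} heads detected)'.format(head_count))
    else:
        print('{} heads detected'.format(head_count))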

Citation

@inproceedings{punn2019crowd,
  title={Crowd analysis for congestion control early warning system on foot over bridge},
  author={Punn, Narinder Singh and Agarwal, Sonali},
  booktitle={2019 Twelfth International Conference on Contemporary Computing (IC3)},
  pages={1--6},
  year={2019},
  organization={IEEE}
}
