Suspicious Activity Detection in Surveillance Video Using Fully Convolutional Networks Segmentation
Keywords:
Segmentation, FCN, Object Modeling, Suspicious Activity Detection, Surveillance Video
Abstract
In recent years, suspicious activity detection has been used to detect traffic in different surveillance videos with high accuracy and high speed in daytime. The surveillance video detection pipeline includes adaptive background modeling, object modeling, object tracking, activity recognition, and segmentation. Semantic segmentation plays a major role in analysing surveillance video for suspicious activity detection. U-Net, one of the popular Fully Convolutional Network (FCN) architectures, is applied for image segmentation. This method can detect different anomalous activities in the videos.
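As a concrete illustration of the FCN approach described in the abstract, the sketch below builds a small U-Net-style encoder-decoder in PyTorch that produces a per-pixel segmentation mask for a single video frame. This is a minimal sketch under stated assumptions: the layer widths, network depth, single-channel output, and the SmallUNet name are illustrative choices, not the exact architecture or training setup used in this work.

# Minimal sketch of a U-Net-style fully convolutional network for
# per-pixel segmentation of a surveillance frame, assuming PyTorch.
# Layer widths, depth, and the single-channel output are illustrative
# assumptions, not the architecture reported in the paper.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class SmallUNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 upsampled + 64 skip channels
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)    # 32 upsampled + 32 skip channels
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                    # full-resolution features
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        # Decoder: upsample and concatenate the matching encoder features.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                 # per-pixel segmentation logits


if __name__ == "__main__":
    # Example: segment a single 256x256 RGB frame (random data here).
    frame = torch.randn(1, 3, 256, 256)
    mask_logits = SmallUNet()(frame)
    print(mask_logits.shape)  # torch.Size([1, 1, 256, 256])

In a full pipeline, the predicted masks for foreground objects would feed the subsequent object tracking and activity recognition stages; those stages are outside the scope of this sketch.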