Unusual Activity Detection in Surveillance Video using Machine Learning and Discriminative Deep Belief Network Techniques
Keywords:
Convolutional Neural Network, Discriminative Deep Belief Network, Recurrent Neural Network, video surveillance

Abstract
In recent years, video surveillance systems have been widely adopted around the world, driven by security concerns and low hardware cost. Anomaly detection is one of the active research areas in the field of video surveillance. This study discusses existing clustering-based techniques, such as EM clustering, and classification-based anomaly detection techniques in video surveillance. A video surveillance system comprises background modeling, object detection, object tracking, activity recognition, and classification. Recently, machine learning based anomaly detection techniques have played a significant role in classifying events as normal or abnormal. Newer approaches, such as the combination of Convolutional Neural Networks (CNNs) with Recurrent Neural Networks and cascaded deep learning, are robust algorithms for large datasets. The extracted features are fed to a Discriminative Deep Belief Network (DDBN). Labeled videos of suspicious activities are also fed to the DDBN and their features extracted. The features extracted using the CNN are then compared against the features extracted from the labeled sample videos of classified suspicious actions using the DDBN, and the various suspicious actions are detected in the given video.
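The pipeline the abstract describes (CNN feature extraction, then comparison against features of labeled suspicious-action samples) can be sketched in miniature. This is only an illustrative, hypothetical stand-in, not the authors' implementation: a toy convolution-plus-pooling feature extractor takes the place of a trained CNN, and a nearest-neighbour comparison takes the place of the DDBN's discriminative classification; the function names and shapes are assumptions made for the sketch.

```python
import numpy as np

def conv_features(frame, kernels):
    """Toy stand-in for a CNN feature extractor: each kernel is applied
    as a valid 2-D convolution, then global average pooling reduces the
    response map to a single feature value per kernel."""
    h, w = frame.shape
    feats = []
    for k in kernels:
        kh, kw = k.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * k)
        feats.append(out.mean())  # global average pooling
    return np.array(feats)

def classify(feats, labeled_feats, labels):
    """Stand-in for the DDBN's discrimination step: compare the query
    features against features extracted from labeled sample videos and
    return the label of the nearest one."""
    dists = np.linalg.norm(labeled_feats - feats, axis=1)
    return labels[int(np.argmin(dists))]
```

In use, features would be extracted from labeled sample clips once, stored, and then every incoming frame's features compared against that bank to flag suspicious actions.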
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors contributing to this journal agree to publish their articles under the Creative Commons Attribution 4.0 International License, allowing third parties to share their work (copy, distribute, transmit) and to adapt it, provided that the authors are credited and that, in the event of reuse or distribution, the terms of this license are made clear.
