Spatiotemporal Edges for Arbitrarily Moving Video Classification in Protected and Sensitive Scenes
Authors
Asadzadehkaljahi, Maryam; Halder, Arnab; Pal, Umapada; Shivakumara, Palaiahnakote
Abstract
Classification of arbitrarily moving objects, including vehicles and human beings, in real environments such as protected and sensitive areas is challenging due to the arbitrary deformations and directions caused by shaky cameras and wind. This work adopts a spatiotemporal approach for classifying arbitrarily moving objects. The intuition behind the approach is that the behavior of objects moving arbitrarily because of wind or a shaky camera is inconsistent and unstable, whereas the behavior of static objects is consistent and stable. The proposed method segments foreground objects from the background using the difference between the median frame and each individual frame, which yields foreground information for every frame. The method then finds static and dynamic edges by subtracting the Canny edges of the foreground information from the Canny edges of the respective input frames. The ratio of the number of static edges to the number of dynamic edges in each frame is used as a feature. The features are normalized to avoid the problems of imbalanced feature scales and irrelevant features. For classification, 10-fold cross-validation is used to select the training and testing samples, and a random forest classifier performs the final classification of frames into those containing static objects and those containing arbitrarily moving objects. To evaluate the proposed method, we construct our own dataset, which contains videos of static objects and of objects moving arbitrarily because of shaky cameras and wind. The results on this video dataset show that the proposed method achieves state-of-the-art performance (a 76% classification rate), which is 14% better than the best existing method.
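The abstract describes a pipeline of median-frame background subtraction, Canny edge differencing, an edge-ratio feature, and a random forest evaluated with 10-fold cross-validation. The sketch below is a minimal illustration of that pipeline, not the authors' code: it assumes OpenCV and scikit-learn, and the Canny thresholds, the exact way static and dynamic edges are separated, and all variable names are illustrative assumptions.

```python
# Minimal sketch of the described pipeline (illustrative, not the authors' implementation).
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler

def edge_ratio_features(frames):
    """One feature per frame: ratio of static to dynamic edge pixels (assumed definition)."""
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    median = np.median(np.stack(gray), axis=0).astype(np.uint8)  # median frame as background
    feats = []
    for g in gray:
        fg = cv2.absdiff(g, median)                    # frame difference -> foreground information
        frame_edges = cv2.Canny(g, 50, 150)            # Canny edges of the input frame
        fg_edges = cv2.Canny(fg, 50, 150)              # Canny edges of the foreground
        static = cv2.subtract(frame_edges, fg_edges)   # edges remaining after removing foreground edges
        dynamic = fg_edges                             # edges contributed by moving regions
        feats.append(np.count_nonzero(static) / max(np.count_nonzero(dynamic), 1))
    return np.array(feats).reshape(-1, 1)

# Usage sketch: X are per-frame edge-ratio features, y are frame labels
# (0 = static scene, 1 = arbitrarily moving), both assumed to be available.
# X = MinMaxScaler().fit_transform(edge_ratio_features(frames))
clf = RandomForestClassifier(n_estimators=100, random_state=0)
# scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation as in the paper
```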
Citation
Asadzadehkaljahi, M., Halder, A., Pal, U., & Shivakumara, P. (2023). Spatiotemporal Edges for Arbitrarily Moving Video Classification in Protected and Sensitive Scenes. Artificial Intelligence and Applications. https://doi.org/10.47852/bonviewAIA3202526
| Journal Article Type | Article |
| --- | --- |
| Acceptance Date | Jan 17, 2023 |
| Publication Date | Feb 8, 2023 |
| Deposit Date | Nov 15, 2024 |
| Publicly Available Date | Nov 18, 2024 |
| Journal | Artificial Intelligence and Applications |
| Peer Reviewed | Peer Reviewed |
| DOI | https://doi.org/10.47852/bonviewAIA3202526 |
Files
Published Version (2.9 MB, PDF)