A comprehensive survey on digital video forensics: Taxonomy, challenges, and future directions

Abdul Rehman Javed, Zunera Jalil, Wisha Zehra, Thippa Reddy Gadekallu, Doug Young Suh, Md Jalil Piran

Research output: Contribution to journal › Short survey › peer-review

16 Scopus citations


With the explosive growth of smartphone technology, uploading and downloading video has become a routine part of digital social networking. Video content carries valuable information, as more incidents are recorded now than ever before. In this paper, we present a comprehensive survey of information extraction from video content and forgery detection. In this context, we review modern techniques such as computer vision (CV) and various machine learning (ML) algorithms, including deep learning (DL), proposed for video forgery detection. Furthermore, we discuss persistent general, resource, legal, and technical challenges, as well as challenges in applying DL to the problem at hand, such as the theory behind DL and CV, limited datasets, real-time processing, and the difficulties that emerge when ML techniques are used with heterogeneous Internet of Things (IoT) devices. Moreover, this survey presents prominent video analysis products used for video forensics investigation and analysis. In summary, this survey provides a detailed and broad investigation of information extraction and forgery detection in video content under one umbrella, which, to the best of our knowledge, has not been presented before.

Original language: English
Article number: 104456
Journal: Engineering Applications of Artificial Intelligence
State: Published - Nov 2021


Keywords:
  • Anti-forensics
  • Computer vision (CV)
  • Deep learning (DL)
  • Digital forensics
  • Evidence extraction
  • Forgery detection
  • Legal aspects
  • Machine learning (ML)
  • Video forensics
  • Video forgery

