A robust facemask forgery detection system in video


A deepfake video is created using a combination of Artificial Intelligence (AI), AI software, and a personal computer (PC). Deepfaking can also be used to fabricate images and audio. In this document we provide insights into our review. We first present our dataset, then describe the details and reproducibility of the experimental settings used to evaluate the detection results. It is common to find deepfake videos in which only a small section of the footage is manipulated (e.g., the target face appears only briefly, so the manipulated time span is limited). To reflect this, we remove our system's dependence on a fixed duration: each video contributes clips to the training, validation, and test sections. Frame groups are extracted from each video sequentially (without frame skips). Training of the entire pipeline completes once the validation stage reaches ten epochs. Among the classification systems evaluated, the Convolutional Neural Network (CNN) was the best and most reliable. Fake videos typically use low-quality pictures to mask faults, or rely on the general public regarding camera defects as unexplainable phenomena. This is a common trope in Unidentified Flying Object (UFO) videos: ghostly orbs are lens flares; blurry shapes on a face are compression artifacts. In this study, we implement a sophisticated, learned method to recognize forged images. Our test results on a range of manipulated videos show that we can reliably predict whether a video has been manipulated using a simple convolutional Long Short-Term Memory (LSTM) architecture.
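The data-preparation step described above — cutting each video into consecutive fixed-length frame groups with no frame skips, and letting every video contribute clips to the training, validation, and test sections — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the function names, clip length, and split fractions are assumptions chosen for clarity.

```python
def make_clips(num_frames, clip_len):
    """Cut a video of `num_frames` frames into consecutive,
    non-overlapping clips of `clip_len` frame indices each
    (sequential extraction, no frame skips)."""
    return [list(range(start, start + clip_len))
            for start in range(0, num_frames - clip_len + 1, clip_len)]

def split_clips(clips, val_frac=0.1, test_frac=0.1):
    """Assign one video's clips to train/validation/test so that
    each video appears in all three sections, as the abstract
    describes. Split fractions here are illustrative."""
    n = len(clips)
    n_test = max(1, int(n * test_frac))
    n_val = max(1, int(n * val_frac))
    test = clips[-n_test:]
    val = clips[-(n_test + n_val):-n_test]
    train = clips[:-(n_test + n_val)]
    return train, val, test

# Example: a 300-frame video cut into 20-frame clips.
clips = make_clips(num_frames=300, clip_len=20)
train, val, test = split_clips(clips)
```

In a full pipeline, each clip's frames would be passed through a CNN feature extractor and the resulting per-frame feature sequence fed to an LSTM classifier; the sketch above covers only the clip-extraction and splitting logic.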

Author: Firas Husham Almukhtar

Journal Name: Periodicals of Engineering and Natural Sciences

Date of Publication: June 2022

URL: http://pen.ius.edu.ba/index.php/pen/article/view/3072/1166

DOI: http://dx.doi.org/10.21533/pen.v10i3.3072