Deepfake Detection using Eulerian Video Magnification

A project developed as part of the Data Mining and Cybersecurity for Business Intelligence Summer Programme at BGU International

Table of Contents 🗓

  1. About The Project
  2. Libraries Used
  3. Getting Started
  4. Roadmap
  5. Contact
  6. Acknowledgments

About The Project 🚀

This project focuses on developing a video content authenticity detection system.

We determine whether a face in a video is real or fake by applying Eulerian Video Magnification (EVM) to the video and then analysing the magnified frames with our model.

There are myriad possible approaches to this problem. The most common are: using a CNN to detect edge or regional anomalies, identifying spatial and temporal inconsistencies, or leveraging the experience of a pre-existing model to classify the video as pristine or fake.
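To give a feel for the magnification step, here is a minimal sketch of temporal band-pass amplification, the core idea behind Eulerian Video Magnification. The function name `amplify_temporal`, the pass band, the amplification factor, and the use of SciPy's Butterworth filter (SciPy is not in the project's library list) are all illustrative assumptions, not the project's actual implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def amplify_temporal(frames, fs, lo=0.8, hi=3.0, alpha=20.0):
    """Band-pass each pixel's intensity over time and amplify the variation.

    frames: (T, H, W) float array of grayscale frames
    fs:     video frame rate in Hz
    lo, hi: pass band in Hz (0.8-3.0 Hz covers roughly 48-180 bpm)
    alpha:  amplification factor (illustrative value)
    """
    b, a = butter(2, [lo, hi], btype="band", fs=fs)
    variation = filtfilt(b, a, frames, axis=0)  # zero-phase temporal filter per pixel
    return frames + alpha * variation

# Synthetic example: a faint 1.2 Hz pulse hidden in otherwise static frames
fs = 30.0
t = np.arange(120) / fs
pulse = 0.01 * np.sin(2 * np.pi * 1.2 * t)
frames = 0.5 + pulse[:, None, None] * np.ones((1, 4, 4))
out = amplify_temporal(frames, fs)
```

After amplification the subtle periodic intensity change is roughly `1 + alpha` times larger, which is what makes the pulse visible to a downstream classifier.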

(back to top)

Libraries Used

  • NumPy
  • sys (standard library)
  • dlib
  • scikit-image (skimage)
  • OpenCV (cv2)
  • math (standard library)
  • Matplotlib
  • os (standard library)
  • pandas
  • Pillow (PIL)
  • TensorFlow

(back to top)

Getting Started

  • First, run itercrop on the frames extracted from the video.
  • Next, run stich.ipynb to create a stitched video of the cropped facial region from the extracted frames.
  • Now, run main.py with the newly generated video to get the heart rate of the identified person.

Prerequisites

The aforementioned libraries must be installed for the code to run properly. Since this is a Python project, the third-party libraries can be installed with pip (sys, math, and os ship with Python):

  • pip
    python -m pip install --upgrade pip
  • libraries
    pip install numpy dlib scikit-image opencv-python matplotlib pandas pillow tensorflow

👉 Download this file to run the facial extractor

Roadmap

  • Identify possible datasets
    • Balance the data
    • Create meta.csv (labels)
  • Create a preprocessing pipeline
    • Extract frames from the videos
    • Facial identification and landmark extraction
    • Crop the face according to the extracted landmarks
    • Stitch the frames into a video ready for EVM
  • Create an FFT from the ROI to identify the frequency band for the EVM
  • Create an implementation of Eulerian Video Magnification
  • Train an LSTM-based classifier to label the final processed video as "Pristine" or "Fake"
  • Create an automated pipeline for end-to-end processing
  • Deploy the model
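The FFT step in the roadmap, estimating the dominant pulse frequency from a region of interest, can be sketched as below. The function name `estimate_heart_rate`, the per-frame mean-intensity signal, and the band limits are illustrative assumptions, not the project's actual code:

```python
import numpy as np

def estimate_heart_rate(roi_means, fs, lo=0.8, hi=3.0):
    """Estimate heart rate (bpm) from a per-frame mean-intensity signal.

    roi_means: 1-D array, mean pixel intensity of the face ROI per frame
    fs:        frame rate in Hz; lo/hi bound the plausible pulse band (48-180 bpm)
    """
    x = roi_means - np.mean(roi_means)          # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)        # restrict to the pulse band
    peak = freqs[band][np.argmax(power[band])]  # dominant in-band frequency
    return 60.0 * peak                          # Hz -> beats per minute

# Synthetic check: a 1.2 Hz signal sampled at 30 fps corresponds to 72 bpm
fs = 30.0
t = np.arange(300) / fs
bpm = estimate_heart_rate(0.5 + 0.01 * np.sin(2 * np.pi * 1.2 * t), fs)
```

Restricting the search to a physiologically plausible band is what keeps camera noise and lighting flicker from dominating the peak.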

(back to top)

Contact

Rishit Saraf - @rishitsaraf - [email protected]
Devansh Pratap Singh - [email protected]

Project Link: https://github.com/rishitsaraf/remoteheartrate_deepfake_detection

(back to top)

Acknowledgments

Use this space to list resources you find helpful and would like to give credit to. I've included a few of my favorites to kick things off!

(back to top)
