A Comprehensive Review of Real-Time Deepfake Detection Using Light Distribution and Illumination Consistency Analysis


Islam Mu'ayyad Dhiyab

Abstract

Recent rapid advances in deep learning and generative models have made it possible to synthesize highly realistic deepfakes. Classical detection approaches that rely on static visual artifacts, spatial irregularities, or frequency-domain anomalies are difficult to deploy in real-time systems. This paper delivers an in-depth overview of recent real-time deepfake detection methodologies that exploit the physical behavior of light, including illumination-distribution analysis, corneal-reflection analysis, and vibration-induced effects. By exploiting the inherent interaction between light and real facial surfaces, these methods introduce active physical probing signals that existing generative models cannot easily emulate, even as they continue to improve. Experimental results in the literature suggest that light-based techniques and physical-response models achieve high temporal consistency, robustness, and accuracy, often outperforming traditional data-driven methods in efficiency on live video. However, several problems remain unsolved, including illumination variation, hardware limitations of capture devices, and artifacts introduced by JPEG compression, and few well-annotated datasets for illumination-based detection have been published. To overcome these limitations, future research should pursue hybrid architectures that combine physical light cues with state-of-the-art deep learning models (e.g., YOLO-based detectors) to build robust, explainable, real-time detection systems.
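The core idea behind the active-probing methods surveyed here is that a screen or light source emits a known temporal illumination pattern, and a genuine face reflects that pattern with a short latency, whereas a deepfake rendering pipeline fails to reproduce the modulation. The minimal sketch below illustrates this principle with simulated brightness traces; the function names, the lag-tolerant correlation score, and the 0.6 decision threshold are illustrative assumptions, not parameters taken from any specific paper reviewed.

```python
import random
import statistics

def _normalize(xs):
    """Zero-mean, unit-variance copy of a brightness trace."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs) or 1e-9
    return [(x - m) / s for x in xs]

def probe_response_score(probe, brightness, max_lag=5):
    """Peak normalized cross-correlation between the emitted probe
    signal and the observed mean face brightness, searched over small
    temporal lags to tolerate display/camera latency."""
    p, f = _normalize(probe), _normalize(brightness)
    n = len(p)
    best = -1.0
    for lag in range(max_lag + 1):
        c = sum(a * b for a, b in zip(p[: n - lag], f[lag:])) / (n - lag)
        best = max(best, c)
    return best

def is_live_face(probe, brightness, threshold=0.6):
    # A real face reflects the probe, so correlation is high; a deepfake
    # pipeline typically does not reproduce the modulation in time.
    return probe_response_score(probe, brightness) >= threshold

# Toy demo: pseudo-random on/off screen flashes over 200 frames.
random.seed(0)
probe = [float(random.randint(0, 1)) for _ in range(200)]
# "Real" face: delayed (2-frame lag), attenuated, noisy reflection.
real = [0.8 * probe[(i - 2) % 200] + 0.2 * random.gauss(0, 1) for i in range(200)]
# "Fake" face: brightness unrelated to the probe.
fake = [random.gauss(0, 1) for _ in range(200)]
```

A pseudo-random flash pattern is the usual choice in this family of methods because an attacker cannot anticipate it, and the lag search keeps the check robust to the unknown round-trip delay between display and camera.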

Article Details

How to Cite
[1]
I. M. Dhiyab, “A Comprehensive Review of Real-Time Deepfake Detection Using Light Distribution and Illumination Consistency Analysis”, Rafidain J. Eng. Sci., vol. 4, no. 1, pp. 407–431, Mar. 2026, doi: 10.61268/5twsxv77.
Section
Computer Engineering


