Date of Award

Winter 2025

Project Type

Thesis

Program or Major

Cybersecurity Engineering

Degree Name

Master of Science

First Advisor

Michael Jonas

Second Advisor

Timothy Finan

Third Advisor

Jeremiah Johnson

Abstract

The democratization of Artificial Intelligence and Machine Learning (AI/ML) tools, combined with the short-form nature of modern media consumption, has created a ripe landscape for generating and weaponizing synthetic media. The availability of pre-trained, open-source generative models enables would-be attackers who lack funding and enterprise-grade hardware to conduct convincing information-based attacks that harm, manipulate, and deceive. This thesis models such a threat actor using an open-source deepfake generation pipeline that requires minimal computational resources and can produce deepfakes in the audio, image, and video domains. The model is then used to identify, measure, and evaluate the capabilities of a low-resourced attacker from a detection-focused perspective. The effectiveness and realism of the artifacts produced by this pipeline are assessed through comparison with real media samples and closed-source model outputs. Results show that, with only an 8 GiB GPU, a threat actor can efficiently run powerful open-source generative models locally. Importantly, this research demonstrates that these generative models can produce outputs that bypass current open-source detection tools. Finally, the limitations of open-source detection tools are revealed, highlighting the widening evolutionary gap between generative models and defensive systems. This growing gap underscores the urgent need for scalable, generalizable detection frameworks deployed at the point of media consumption.
