Artificial intelligence is an umbrella term for a machine’s ability to learn, adapt, and overcome challenges by acquiring data. Much like humans build knowledge from previous experience (such as learning that a stove is hot), machines are trained to carry out tasks more proficiently by analysing large and growing volumes of data, otherwise known as ‘big data’. Advances in neuroscience and machine learning have emulated the brain’s complex neuron pathways to create artificial neural networks, alongside learning algorithms that teach themselves games such as chess. The scope of artificial intelligence ranges from self-driving cars to precision surgery to, possibly, even writing feature films.

Enter Deepfake.

Deepfake applies the aforementioned artificial neural networks to detect facial patterns and movements and map them onto another subject’s face, creating the illusion of the subject saying and doing things they never did. While similar AI models have been used to achieve photorealism in computer graphics or to motion-capture character models in video games, deepfake algorithms are becoming more sophisticated – and thus more difficult to detect. The looming threat of deepfakes against prominent global leaders and celebrities is compounded by the sheer volume of user data collected by tech empires such as Facebook and Google.
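The core trick behind many face-swap deepfakes is a shared encoder paired with one decoder per identity: both faces are compressed into the same latent space, and swapping happens by decoding one person’s latent with the other person’s decoder. The sketch below illustrates only that architecture with toy linear algebra – the ‘faces’ are random feature vectors, the shared encoder is a fixed random projection rather than a trained network, and the decoders are fit by least squares. It is a minimal conceptual sketch, not a working deepfake.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": 8-dimensional feature vectors for two identities, A and B.
faces_a = rng.normal(size=(200, 8))
faces_b = rng.normal(size=(200, 8)) + 1.0   # offset so the identities differ

# Shared encoder: here, a fixed random projection down to a 4-d latent space.
# (Real deepfake pipelines train this jointly with both decoders.)
encoder = rng.normal(size=(8, 4))

def fit_decoder(faces):
    """Fit an identity-specific linear decoder by least squares."""
    latents = faces @ encoder
    decoder, *_ = np.linalg.lstsq(latents, faces, rcond=None)
    return decoder

dec_a = fit_decoder(faces_a)   # reconstructs identity A from the shared latent
dec_b = fit_decoder(faces_b)   # reconstructs identity B from the shared latent

# The "swap": encode A's faces, but decode them with B's decoder.
swapped = (faces_a @ encoder) @ dec_b

# Reconstruction error for identity A through its own decoder.
recon_err_a = float(np.mean((faces_a @ encoder @ dec_a - faces_a) ** 2))
```

Because both identities pass through the same latent space, the swap needs no paired footage of the two people – which is exactly what makes the technique so easy to abuse.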

Audio data – much like a fingerprint – is a unique metric for user authentication; exposing it therefore leaves a user’s personal and financial details vulnerable. While it’s no surprise that Google’s smart assistant, Google Home, uploads recorded queries to its cloud, the user can take measures to prevent that audio data from being shared. A hacker can’t realistically brute-force through Google’s encryption and other security measures to collect your data. Instead, they use more ingenious tactics, such as social engineering, to collect a snippet of a user’s voice.

Many scam callers impersonating bank officials use ‘phishing’ techniques to acquire a user’s passwords and credit card details by creating a sense of urgency. In almost all cases, they begin their attack with a simple question: “Can you hear me?” If you answer “Yes?”, that tiny bit of data alone is enough for hackers to manipulate your voice through – once again – artificial neural networks. The more data harvested, the better the deep learning algorithms perform and the more indistinguishable the cloned voice becomes. Once it’s paired with deepfake video, it can be nearly impossible to determine the authenticity of a clip.
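The ‘more data, better clone’ effect can be illustrated with a toy statistical model. Assume, purely for illustration, that a speaker has a fixed ‘voiceprint’ vector and that every harvested clip gives the attacker a noisy observation of it – real systems use learned speaker embeddings, not this simplification. Averaging more clips drives the attacker’s estimate closer to the true voiceprint:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 16-d "voiceprint" for a target speaker (an assumption for
# illustration -- real voice cloning uses learned speaker embeddings).
true_voice = rng.normal(size=16)

def harvest_samples(n, noise=1.0):
    """Each harvested clip yields a noisy observation of the voiceprint."""
    return true_voice + rng.normal(scale=noise, size=(n, 16))

def clone_error(samples):
    """Distance between the attacker's averaged estimate and the real voiceprint."""
    estimate = samples.mean(axis=0)
    return float(np.linalg.norm(estimate - true_voice))

err_small = clone_error(harvest_samples(2))    # e.g. a single "Yes?"
err_large = clone_error(harvest_samples(500))  # e.g. a harvested call history
```

Since the averaging error shrinks roughly with the square root of the number of clips, even a modest archive of recorded calls gives an attacker a far closer match than one stolen word.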

Meanwhile, the increasing notoriety of personal data harvesting in media apps such as TikTok makes the situation all the more dangerous. Every photo you post on the Internet remains in the cloud forever as part of your ‘digital footprint’. Even as governments implement stricter cybersecurity laws to combat cyber attacks and social engineering, the methods used to collect data and deepfake a user are more subversive than ever. Users can protect themselves by staying aware of what they post online, and by reading up on social engineering and other scams used to trick people into revealing sensitive information (more posts on this in the future). Deepfake technology may be revolutionary, but it is not perfect, and researching countermeasures against its malicious uses will give us a decisive advantage.