
Ready or Not: Deepfakes Are Here

February 21, 2024

Cheap, sophisticated deepfake technology has lowered the barrier to entry for scammers of all skill levels. As always, the goal is either a cash grab or a point of access to valuable systems or data. The current wave of deepfakes is rearing its ugly head in two big areas: the hiring and job search process and financial services fraud. While there is software to help with deepfake discernment, good old-fashioned, low-tech solutions are also required.

From Annoying to Harmful

If you’re lucky, the deepfakes you’ve encountered in the wild range from entertaining to mildly annoying. They’re all over social media and YouTube. Some are deeply harmful to the celebrities who are often involved, and others get you hyped for movies that don’t actually exist. (You have to admit that using AI and deepfake software to create a preview for a fake Terminator movie is pretty on the nose. Skynet would be proud.)

The same software that can make you think John Cena is starring in Terminator 7 might also convince you that your boss wants you to wire a bunch of money someplace it shouldn’t actually go. That’s exactly what happened to a finance worker in Hong Kong. The unidentified employee received a message from his company’s CFO that seemed odd. He questioned its validity but was soon reassured by a follow-up video call. The call featured not only the CFO but other colleagues the worker recognized–all of whom were actually deepfakes. The situation ended with the worker remitting HK$200 million (about 25.6 million USD) to the scammers.

Deepfakes in the Hiring and Job Search Process

Hackers love a vulnerable target. The job search and hiring process puts both sides of the equation in a position of high hopes and big emotions. It also necessitates a frank exchange of information as prospective employees and employers get to know one another.

Because the job search and hiring process is so vulnerable, deepfake scammers have already found novel ways to insert themselves. They are largely out to acquire access–either to individuals’ data or to companies’ data and systems.

Here’s an example where the prospective employee is the one being exploited. A scammer finds a recruiter or hiring manager’s profile picture on LinkedIn and generates a deepfaked likeness. That fake persona can then be used to lure candidates through a spoofed LinkedIn profile and/or email address, using details from an actual job posting as bait. Candidates looking for jobs online are eager to be agreeable when a hiring manager shows interest in their application. A candidate could easily be convinced to share a resume, fill out forms, or submit references that would reveal a lot of personal information.

On the flip side, scammers can easily create fake personas and use them to apply for jobs. If they get far enough into the interview process (or even get hired), they can gain access to systems, learn insider details of company functions, and more. 

Safeguarding Against Deepfakes in the Hiring Process

Job seekers should always verify the legitimacy of potential employers by checking their online presence–both their website and networking platforms. Are leadership names consistent? Does the company maintain a presence on networking platforms? Do users who look legitimate interact with their posts? 

Looking for a job is hard work. But don’t be overly accommodating to every recruiter. Avoid oversharing personal information–remove your home address from your resume; a city and state are enough. And always be skeptical of job offers that seem too good to be true.

Both job seekers and employers need to stick to reputable job search platforms. Employers can help candidates verify that job postings are legitimate by making sure the listings on their own website match those on every platform where they cross-post. It may also be helpful for employers to state on the “careers” portion of their site where they do or do not cross-post openings. For example: “We only cross-post job openings on Glassdoor. Listings posted elsewhere should not be trusted.”

Employers should invest in and commit to using secure communication channels for all candidate screenings and interviews. These channels should employ some form of identity verification or authentication. When possible, interviews should be conducted in person.

Deepfakes in Banking & Finance

Chinese hackers appear to be leading the way in mobile banking and finance fraud. A new piece of malware called “GoldPickaxe” targets elderly victims, primarily by posing as a government service app used to collect pensions. (The Android version of the malware can actually pose as over twenty different applications used by the Thai government, financial sector, and utility industry.)

The scammers use these spoofed apps to convince victims to scan their faces so the scammers can build deepfakes. Those deepfakes are then used to bypass the biometric security checks recently implemented at many Southeast Asian banks.

As a bonus, the iOS and Android versions of GoldPickaxe also collect their victims’ ID documents and photos, intercept incoming SMS, and proxy traffic through the compromised device. 

Safeguards Against Deepfakes in Banking

Relying on software alone to detect deepfakes is not enough. The quality of deepfakes is evolving at a much faster rate than that of detection tools. Developing an overall culture of caution among consumers (and your employees) is a better way to slow the spread of deepfake scams. Businesses can lead the way by offering training on common scams and evolving threats. 

Businesses should also develop a strong verification process for transactions that involve moving money or data. The verification process should take a layered approach, with checks across multiple communication channels. Logging into a banking app, for example, could require a password and a biometric scan, as well as a one-time code sent via SMS to a phone number on record.
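
As a rough illustration of that layered approach, here is a minimal sketch of what the authorization logic might look like. Every helper function below (check_password, check_face_scan, check_sms_code) is a hypothetical placeholder rather than any real banking API; the point is simply that each factor is checked independently and failing any one of them blocks the transaction.

```python
# Minimal sketch of layered transaction verification. All helpers are
# hypothetical placeholders; a real deployment would back them with a
# credential store, a biometric/liveness SDK, and an SMS gateway.

def check_password(user_id: str, password: str) -> bool:
    # Placeholder: compare against a salted hash in the credential store.
    raise NotImplementedError

def check_face_scan(user_id: str, face_scan: bytes) -> bool:
    # Placeholder: run a liveness check and match against the enrolled template.
    raise NotImplementedError

def check_sms_code(user_id: str, code: str) -> bool:
    # Placeholder: verify the one-time code sent to the phone number on record.
    raise NotImplementedError

def authorize_transfer(user_id: str, password: str, face_scan: bytes, sms_code: str) -> bool:
    """Authorize only when every independent factor passes."""
    return (
        check_password(user_id, password)
        and check_face_scan(user_id, face_scan)
        and check_sms_code(user_id, sms_code)
    )
```

The value of layering is that a deepfaked face alone is not enough; a scammer would also need the victim’s password and access to the phone number on record.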

Organizations should also develop mechanisms to encourage customers and employees to report suspicious activity. 

For custom information security and compliance solutions, reach out to Asylas at 615-622-4591 or email info@asylas.com. Or complete our contact form.