Privacy

Protecting Virtual Privacy in the Face of Evolving Artificial Intelligence

January 17, 2024
[Image: young girl looking at a smartphone]

Advancements in artificial intelligence have made it more important than ever to evaluate how we protect our kids’ virtual privacy. 

For a couple of decades now, parents have fretted over what to share online about their kids. The concern has mainly centered on the kids’ personal privacy: whether they would want a particular photo visible to the world. In those terms, sharing carefully selected photos on family blogs or social media sites felt innocent enough when the platforms were new. But through the years, enough scary stories emerged that many parents made their accounts private or opted to (somewhat creatively) cover their children’s faces on profiles that remained public. Now many parents are realizing that those old strategies no longer work.

Recent advances in AI have caught us flat-footed and less knowledgeable than we thought we were. Even people as wealthy and powerful as Taylor Swift are not immune to the risks. The popular singer recently had her voice deepfaked and used in Facebook ads for Le Creuset cookware. Many other celebrities, including MrBeast, Gayle King, and Tom Hanks, have had AI versions of their likeness or voice used to sell products without their permission.

How can we be realistic about virtual privacy when it comes to ourselves and our kids? Engaging with the better things that tech has to offer can enrich our lives and relationships. But staying aware and alert to emerging threats is non-negotiable.

AI’s Unique Threats to Virtual Privacy

Last year, Deutsche Telekom highlighted the potential risks of “sharenting” in its video “Ella Without Consent.” The video is part of the telecom company’s ShareWithCare campaign, which asks parents to consider the risks of posting personal photos and videos of their children online. In the video, little Ella is aged via deepfake to deliver a warning from the future. She terrifies her parents with her all-too-realistic face and voice. And she informs them that this AI-generated self could just as easily be used in disturbing graphic content and fraud schemes as in the warning film.

It’s, frankly, a bit heavy-handed, but that seems to be what it takes to get the message across. 

The “Ella” video offers us a hypothetical. But AI is being used to perpetrate real scams right now. Software that clones voices is readily available and easy for the general public to use, and scammers have quickly developed ways to put it to work.

An attorney named Gary Schildhorn testified to a Senate panel in November 2023 about his experience with a terrifying voice scam. Schildhorn went into “action mode” when he received a phone call from his 32-year-old “son” about a serious car accident that had resulted in the son’s arrest. The deepfake of his son’s voice was so real that it elicited a visceral reaction in the father. Blood pumping and nerves frazzled, he engaged in a series of phone calls with the scammers that led him to the precipice of depositing thousands of dollars at a Bitcoin ATM. He was saved only because he first checked in with his son’s wife, who confirmed that there had been no accident. The son was fine: not under arrest, not injured, and not facing DUI charges.

Not every intended victim is so lucky. An 82-year-old Texas man was caught up in the same scam: his son-in-law was supposedly in a serious accident and at fault for the crash. He would need to be bailed out of jail to the tune of $17,000. The son-in-law’s faked voice was so convincing on the other end of the line that the older man paid up. 

Practical Tips for Staying Aware and Alert

Let’s assume you’re a pretty normal person. You have a social media account or two. Your kids attend school somewhere. You use credit cards and shop online. You have multiple phones, tablets, and computers in your household. Your life is built around a certain amount of tech and there’s really no going back. There is no way to ensure 100% safety. But there are many steps you can take to avoid becoming a victim of the worst scams and privacy infringements. 

First, clean up your own act. Make your social media pages private and don’t accept friend requests from people you don’t know in real life. Even if your account is limited to friends and family, take a beat to consider what you’re posting before it goes live. Never post pictures that could be used maliciously, such as anything embarrassing or showing a child who isn’t fully clothed.

Next, talk to your kids about the dangers of having an online presence. This conversation will differ depending on their age. But it needs to happen before you provide them with any connected device, even your own phone handed over for a quick game. Internet Matters, a British nonprofit, offers age-based guidance for parents.

Teach and use good password hygiene. We’ve written about password guidelines too many times to count. But that’s because it really matters! Kids do not instinctively know what makes a good password. Encourage them to use passphrases that are easy to remember but hard to crack. And help older children set up a password manager.
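If you’re curious how little machinery a strong passphrase actually requires, here’s a minimal Python sketch using the standard library’s secrets module. The short word list is just a stand-in for illustration; a real generator should draw from a large published list, such as the EFF’s diceware word list.

```python
import secrets

# Stand-in word list for illustration only. A real generator should use a
# large published list, e.g., the EFF long word list (~7,776 words).
WORDS = [
    "maple", "lantern", "cobalt", "otter", "breeze", "quartz",
    "meadow", "saddle", "pepper", "harbor", "walnut", "ember",
]

def make_passphrase(num_words: int = 4, separator: str = "-") -> str:
    """Pick words with a cryptographically secure RNG (secrets, not random)."""
    return separator.join(secrets.choice(WORDS) for _ in range(num_words))

print(make_passphrase())  # e.g., "otter-quartz-maple-ember"
```

Four or five random words drawn from a big list give far more entropy than a short “complex” password, and a kid can actually remember the result.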

If your child is under 16, consider freezing their credit to protect them from identity theft. Child identity theft is more common than you probably realize. About a million kids are victims every year (see data from 2017, 2021). Both the FTC and Bankrate offer advice on why and how to see if your child has a credit report, and whether or not to subsequently freeze their credit. The freeze will stay in place until you tell the credit bureaus to remove it or until the child chooses to do so (after they’ve turned 16). 

The voice cloning scams mentioned above are still rare and random occurrences. But when they succeed, the stakes are incredibly high. To avoid the emotional and financial terror associated with such calls, consider adopting a family safe word or phrase. It should be a particular question or phrase that only your immediate family would know. Then, if you’re contacted about a loved one who’s in distress, you can ask for the safe word to prove the situation is real.

Government and Industry Safeguards

Mr. Schildhorn’s testimony is evidence that the government is paying attention to scams and other privacy incursions driven by advances in AI. But what steps are leaders in government and industry actually taking to create safeguards for the public?

While it seems like the tech sector is mainly focused on growing the capabilities of AI, whatever the cost, there are some players working on ways to limit bad actors who leverage the advances. McAfee Corp. introduced Project Mockingbird earlier this month at CES. Mockingbird is an AI-powered deepfake audio detection technology. McAfee hopes that the software will help prevent the spread of disinformation and the perpetuation of voice scams.

Congress is doing a lot of talking when it comes to artificial intelligence. In September 2023, the U.S. Senate and top tech leaders held a marathon closed-door meeting to discuss whether the government needs to play a role in regulating AI. The House and Senate held several hearings in 2023 on AI and the future of elections, the risks and opportunities of AI, AI and human rights, and advances in deepfake technology.

There is, at least, a sense of urgency around regulating AI. Senate Intelligence Committee Chairman Mark Warner acknowledges that the government acted too slowly with social media. To that end, Senators Richard Blumenthal and Josh Hawley recently released a bipartisan blueprint for a U.S. AI Act.

Just last week, the House began circulating a discussion draft of a bill that addresses deepfakes. The No AI Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act proposes making it illegal to create a “digital depiction” of any person, living or dead, without permission, including their appearance or voice. Violations would carry a fine of up to $50,000 per infraction. Tennessee Governor Bill Lee announced a similar bill last week as well. The Ensuring Likeness Voice and Image Security (ELVIS) Act updates an existing state law to include protection from new generative AI tools.

Conclusion

As with most risks, it’s better to be aware and alert than it is to be afraid. Knowing that advances in AI have elevated the risk of virtual privacy infringement is the first step to staying safe. Don’t wait for industry or the government to make improvements that will protect you or your kids. Continue learning all that you can as technology evolves. And watch this space! We will continue to stay on top of changes and bring you our best advice.

For custom information security and compliance solutions, reach out to Asylas at 615-622-4591 or email info@asylas.com. Or complete our contact form.
