Be Skeptical: AI and Fakes

Artificial intelligence has made it shockingly easy for scammers to create fake voices, fake videos, fake photos, and fake messages that look and sound completely real. The days when you could trust your own eyes and ears online are over. Here’s what you need to know — and how to protect yourself.


What Scammers Can Fake Now

🗣️ Voice Cloning

What it is: With just a few seconds of audio — pulled from a social media video, voicemail greeting, or phone call — AI can create a near-perfect copy of a person’s voice, then make that voice say anything the scammer types.

How scammers use it: They call you sounding exactly like your grandchild, your son or daughter, or your spouse. They say they’re in trouble and need money immediately. This takes the grandparent scam to a terrifying new level — because it actually sounds like the person you love.

How to protect yourself: Create a family code word — a secret word or phrase that only your family knows. If someone calls claiming to be a family member and asking for money, ask for the code word. If they can’t provide it, hang up and call your family member directly at their real number.

🎥 Deepfake Videos

What it is: AI can create realistic videos of real people saying and doing things they never actually said or did. These videos can be very convincing, even to careful viewers.

How scammers use it: Fake videos of celebrities, news anchors, politicians, or doctors are used to promote investment scams, fake products, and health fraud. You might see a video of a well-known person endorsing a cryptocurrency, a miracle supplement, or a “guaranteed” money-making opportunity. The person in the video never said any of it.

How to protect yourself: Be skeptical of any video promoting money-making opportunities or health products — especially on social media. Search for the claim on trusted news sites before believing it. If a celebrity is endorsing something in a social media ad, it’s very likely fake.

📸 Fake Photos

What it is: AI can generate realistic photos of people who don’t exist, or alter real photos to create fake situations. These images are used for fake profiles, fake products, and fake news stories.

How scammers use it: Romance scammers use AI-generated faces for their dating profiles. Fake stores use AI-generated product photos to sell items that don’t exist. Misleading news articles use fake images to manipulate your emotions and opinions.

How to protect yourself: Look closely at photos. AI-generated faces often have telltale oddities — mismatched earrings, blurry backgrounds, strange teeth, or distorted hands. For profile photos, do a reverse image search (for example, by uploading the image to Google Images) to see if it appears elsewhere under a different name. Don’t trust a photo just because it looks professional.

✉️ AI-Written Scam Messages

What it is: Scam emails and texts used to be easy to spot because of spelling mistakes, bad grammar, and awkward phrasing. AI has changed that. Scammers now use AI to write polished, professional messages that are almost impossible to distinguish from legitimate communication.

How scammers use it: AI-generated phishing emails look exactly like real messages from your bank, your doctor’s office, or a company you do business with. They use your name, reference real services, and write in perfect English. AI can also generate personalized messages at massive scale — meaning millions of convincing scam emails can be created in minutes.

How to protect yourself: Perfect grammar is no longer a sign that a message is safe. Focus on what the message is asking you to do rather than how well it’s written. If it asks you to click a link, call a number, send money, or share personal information — verify through official channels first.

📰 Fake News and Misinformation

What it is: AI can generate entire fake news articles, complete with realistic headlines, quotes, and details. These are spread on social media, in emails, and on websites designed to look like real news outlets.

How scammers use it: Fake news stories are used to create panic, manipulate opinions, promote scam products, and drive people to malicious websites. During elections, natural disasters, and health crises, fake stories multiply rapidly.

How to protect yourself: Check the source. Is it a news outlet you recognize and trust? Search for the same story on other major news sites — if only one obscure site is reporting it, be very cautious. Be especially skeptical of stories that make you feel strong emotions like anger or fear, as these are designed to make you share before you think.


Your Skeptic’s Toolkit

  • Set up a family code word. Agree on a secret word with your close family members. Use it to verify identity on any unexpected call asking for money or personal information.
  • Verify before you trust. If something seems urgent, unexpected, or too good to be true — stop and verify through a separate channel. Call the person or company directly using a number you already have.
  • Don’t trust a voice, video, or photo just because it looks real. AI can fake all of these convincingly. Focus on what’s being asked of you, not how real it looks or sounds.
  • Check claims on trusted news sites. Before believing or sharing a shocking story, search for it on established news outlets like your local newspaper’s website, AP News, or Reuters.
  • Slow down. Scammers want you to act fast. The single best defense against any scam — AI-powered or not — is to pause, think, and verify before doing anything.
  • Talk to someone you trust. Before making any decisions based on an unexpected call, message, or video, talk it over with a family member or friend.
