Thu. Jun 13th, 2024


The rise of AI has the potential to transform numerous aspects of our lives. However, this powerful technology also poses new challenges, particularly in the realm of social media scams. If you don’t have keen eyes, sharp ears, and a bit of critical thinking, you may end up falling victim.



If you’re wondering what sort of scams I’m talking about, then read on. I’m going to share some of the most common AI-related scams I’ve seen myself and heard from others who’ve experienced them.


Fake AI Tool Ads

While scrolling through Facebook and other social media platforms, you’ll likely notice ads for AI tools. Some of these ads are authentic and promote real tools, but many are scams designed to spread trojans and other malware onto your device. Some of these scam ads impersonate well-known AI tools such as ChatGPT, Gemini, and Microsoft Copilot to appear more believable.

Take, for example, the ad below about the Sora AI model from OpenAI.


A fake Facebook page claiming to be Sora from OpenAI and promoting a fake AI tool

As of this writing, Sora hasn’t been released to the public. However, just like this ad, whenever a company announces a new AI tool and it gains traction, some social media pages pretend to promote that tool when, in reality, they’re trying to scam you. Sometimes the pages don’t pose as the company itself but as third parties promoting the AI tools; the technique is much the same. Here’s one I found the other day.

A fake Facebook page promoting Sora video AI from OpenAI yet to be released


Another common tactic appears when one of these AI tools receives a major update. The fake pages create ads related to the update and ask you to download the latest version of the tool. Here’s an example of just that.

A fake Facebook page promoting malware disguised as a new updated Gemini AI

What’s more concerning is that some people are actually falling for these scams; the engagement on these ads suggests as much. Once you download the provided “software,” unzip it, and install it, you’re installing malware on your device. Chances are the scammer will gain access to your whole system, including your social media credentials and other sensitive information.

AI-Generated Social Profile Scam

A snapshot of the Instagram profile of Liam Nikuro, an AI influencer who doesn't exist in real life


While scammers try to steal your data with AI-related ads, another emerging threat involves creating entirely fake social media profiles. There was a time when people created bots for follower counts and engagement, but you could easily identify them. With AI image generation, anyone can craft realistic pictures of people at scale.

We now have AI influencers: virtual humans who don’t exist, with thousands of followers. While this may not look that bad at first glance, malicious actors can use the same idea to scam others. Imagine a supplement company using these AI influencers to promote its dangerous products; people will fall into the influencer trap and buy them. Some companies go further and generate before-and-after images to “prove” their product works. Mass-producing fake testimonials is another plausible use of AI social profiles.


These AI-generated profiles can also be engineered to hold seemingly genuine conversations, building rapport with unsuspecting users. This is particularly dangerous in romance scams, where scammers exploit emotional vulnerabilities to extract money or personal information.

Impersonation Scam

This type of scam takes AI-generated social profiles to the next level. Scammers don’t stop at creating random profiles; they target real people and generate profiles that match their appearance and personality by feeding real data about that person to AI models. Celebrities and internet personalities are the biggest victims of this.

Scammers are using AI to develop remarkably convincing impersonations of real people. They might use deepfakes and AI-generated photos of popular personalities, influencers, or even your friends and family. These impersonations are then used to gain your trust and exploit you financially and emotionally. Some AI technologies can even clone human voices, which makes these scams all the more damaging.


Fake AI Products and Subscriptions

This isn’t a scam specific to AI tools, but with AI being such a trendy topic, scammers can exploit people’s interest even more. One variant advertises some kind of cutting-edge AI software; only after you buy and use it do you realize it’s a bootleg version of existing tools that offers minimal functionality compared to what was advertised.

Other scammers offer subscriptions to AI tools at a cheaper price. What they really do is buy one subscription and share that account with multiple people. If someone else, or the seller, changes the account’s credentials, you won’t be able to use it anymore. Even worse, sometimes they’ll block you immediately after receiving your money. If something on social media looks too good to be true, it probably is. See the ad below promoting a ChatGPT Plus subscription for only $8 per year.


An example of a Facebook page selling ChatGPT yearly subscription at a lower price

Then, of course, we have the so-called AI gurus who don’t sell you tools but rather products related to them. Common offerings include “proven” AI prompts, tool guides, courses, and anything about making money with AI. In reality, these products contain only surface-level information and nothing worth paying for. Funnily enough, they’re making money with AI by teaching people how to make money with AI.

This is not to say there’s anything wrong with selling AI-related products, or that there aren’t real AI experts. But far too many people position themselves as experts, repackaging freely available information so they can make a fortune at others’ expense.


How to Protect Yourself From These Scams

As scams proliferate on social media, you need to be more watchful in all your online interactions: who you’re connecting with, who you’re buying from, and what information you’re consuming. AI tools are improving quickly, but fortunately it’s still possible to spot AI-generated news, photos, and even videos. Detection tools can help, but training yourself to tell what’s real from what’s not will serve you well in the days ahead.

Be wary of ads from unofficial sources. You can often identify them by bad grammar, suspicious links, and unrealistic promises. The official pages of companies are usually verified (check for the blue tick on Facebook and X) and have a solid engagement history.
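If you’re comfortable with a little code, the link-checking habit above can be sketched as a small script. This is only a minimal illustration, assuming a hand-maintained list of official domains (the domains and URLs below are examples, not an exhaustive or authoritative list):

```python
from urllib.parse import urlparse

# Example allowlist of official domains (assumed; maintain your own).
OFFICIAL_DOMAINS = {"openai.com", "google.com", "microsoft.com"}

def is_official_link(url: str) -> bool:
    """Return True only if the link's host is an official domain or a
    proper subdomain of one (e.g. chat.openai.com)."""
    host = urlparse(url).hostname or ""
    # A lookalike such as "openai.com.download-free.xyz" fails both checks,
    # because the registered domain is what comes last in the hostname.
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)
```

The point of the subdomain check is that scammers often put the real brand at the *front* of a fake hostname; only the end of the hostname tells you who actually controls the site.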

An example of ChatGPT's official Facebook page, verified by Meta


Be extra careful when buying online. Only buy from trusted sellers with a good reputation; if you find something cheaper than the official price, something is likely fishy. Do your own research: search for reviews, check the company’s website, and consult fact-checking organizations.

Don’t take everything you see on social media at face value. If you spot suspicious profiles, ads, messages, or other activity, report it to the platform. This helps social media companies identify and remove scams faster.

As AI tools keep improving, we should be more cautious online. New opportunities will open up, as will new challenges to overcome. By following the tips above, you can reduce the risk of falling victim and keep your social media experience a positive one.




By John P.
