Technology is a tool, and a tool like ChatGPT is like a flamethrower. It can be used for good, yes, but it can also be used for A LOT of bad. According to a recent report from Norton Cyber Security, ChatGPT is being used for scams.
This should come as no surprise. Fun fact: bad people exist, and they have been using artificial intelligence for their own benefit for a while. Recently, a large phishing scam was spotted on LinkedIn. It used an AI-generated image as its profile picture and led people to a site that would steal their information.
There are ChatGPT scams out there
At this point, you should expect a fair amount of the content you see on the internet to be AI-generated, whether it's an image or a news article.
What makes this such a pain in the neck is that it's extremely difficult to distinguish between content written by humans and content written by AI. This means that people can generate whatever they want and, more often than not, the reader will have no idea it was AI-generated.
The report says that ChatGPT has been used in numerous phishing schemes. The bad actors feed it examples of text from the entities or people they're trying to impersonate, and the chatbot mimics the style of those examples to create convincing messages.
That way, people are more likely to believe the message came from a legitimate source, and they'll click without a second thought.
Another way that ChatGPT is being used for bad is through its coding capabilities. The chatbot can adapt to different programming languages and generate pieces of code that you can use in your apps. This means that even amateur programmers can have it generate code for pretty much any malicious task without ever really learning how to code themselves.
These are just a few examples of how ChatGPT can be used as a tool to cause harm. It's important to keep an eye out for these types of scams in order to stay safe.