While email scams have been around for a long time, email remains a widely used platform for phishing attacks simply because it is still effective. Most users who recognize these emails as malicious simply ignore and delete them; few ever try to interact with the sender. In an ironic twist, however, Netsafe, a pro-online safety organization based in New Zealand, has developed an artificial intelligence bot that will, in essence, “scam” the scammers.
Users initiate Re:scam by forwarding an email they suspect of being a scam to Netsafe's email address, email@example.com. Netsafe then checks the message to confirm that it really is a scam. Once the email is confirmed to be malicious, a proxy email address is used to engage the scammer, and the bot floods the sender's address with a variety of messages related to the scam attempt. The goal of Re:scam is to waste the scammer's time while appearing as natural as possible, in effect posing as a potential victim who is curious about the scam.
The chatbot uses its AI to mimic the email habits of an actual person, borrowing from multiple personas while adding little quirks like spelling errors and humor to make it seem more authentic. Re:scam was designed to engage as many scammers as possible, for as long as possible—potentially disrupting their operations and damaging their profits.
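Netsafe has not published Re:scam's implementation, but the general technique described above, choosing among canned personas and injecting small "human" quirks such as spelling slips, can be sketched as follows. The persona lines, function names, and typo rate here are all hypothetical illustrations, not Re:scam's actual code.

```python
import random

# Hypothetical persona replies a scam-baiting bot might rotate through.
PERSONAS = [
    "Thanks for reaching out! This sounds very exciting, tell me more.",
    "My nephew said I should be careful online, but you seem trustworthy.",
    "I am very interested, but how exactly does the payment part work?",
]

def add_typos(text, rate=0.05, rng=None):
    """Randomly swap adjacent letters inside words to mimic typing slips."""
    rng = rng or random.Random()
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def generate_reply(rng=None):
    """Pick a persona line and lightly distort it for authenticity."""
    rng = rng or random.Random()
    return add_typos(rng.choice(PERSONAS), rate=0.05, rng=rng)
```

A real system would also need to parse incoming mail, keep per-thread state so the conversation stays coherent, and vary timing so replies do not arrive suspiciously fast; this sketch only shows the quirk-injection idea.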
AI is already used in security software, primarily via machine learning, but this initiative proves that even in its early days, AI can also be applied in unconventional ways. As AI and machine learning become more advanced, security should be expected to keep in step with new AI-related technologies.
While Re:scam can help slow down email scammers, following the best practices for these kinds of scams is still the most effective way to defend against phishing-related attacks:
Users should never give away personal information unless they are sure of the identity of the person or organization asking for the information.
Users should always take the context of an email or message into account. For example, while African royalty could conceivably be asking for money to help claim an inheritance, it is highly unlikely that they would ask for help from random strangers on the internet. Even more plausible scenarios should be taken with a grain of salt.
If it seems suspicious, it probably is. Users should always err on the side of caution when it comes to messages and emails—especially those from strangers.