Hackers have a lot to gain from generative AI tools such as ChatGPT. While the tools are still too immature to run malicious campaigns with minimal human input, they can be used to supercharge human-run campaigns in ways not seen before.

This is according to new analysis from IBM's Security Intelligence X-Force team. The researchers detailed an experiment in which they pitted human-written phishing emails against those written by ChatGPT. The goal was to see which emails would achieve a higher click-through rate, measured both by how many recipients opened the emails and by how many clicked the malicious links inside.
