OpenAI reports that it has banned more than 20 hacker groups since the beginning of this year. These groups used OpenAI's models for a range of purposes, from generating web content for political influence campaigns to assisting in malware development.
One such group, STORM-0817, used OpenAI models for tasks such as translating website text, scraping data from Instagram, and developing malware. Monitoring its activity revealed an attempt to build new Android malware designed to track targets by extracting data from their phones, along with server code for controlling infected devices. The group's identity remains unclear, but it has been observed translating material into Persian, and its malware appears to have been tested against individuals in Iran.
Another reported case involves the creation of fake profiles intended to sway political opinion. In one instance, an operation used OpenAI's API to generate fake profiles and engage with people on Twitter, though the accounts lacked credibility.
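For context, generating text through OpenAI's API takes only a few lines of code, which is why the same interface that powers legitimate applications can also be scripted at scale for influence operations. The sketch below is a generic illustration of a chat-completion call using the official Python SDK; the model name and prompt are placeholders for illustration, not details taken from the report.

```python
# Minimal sketch of programmatic text generation with the OpenAI Python SDK.
# Model name and prompt are illustrative only; this shows the general mechanics
# of API-driven content generation, not any specific actor's tooling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {
            "role": "user",
            "content": "Write a short, neutral social media post about a local park cleanup.",
        }
    ],
)

print(response.choices[0].message.content)
```

Because calls like this can be looped over lists of accounts and topics, abuse monitoring focuses on patterns of automated, repetitive generation rather than any single request.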
OpenAI’s policy is to store inputs to the system for 30 days in order to monitor misuse. Users can request that no data be stored at all, but this option is only available to approved customers.
TLDR: OpenAI has banned over 20 hacker groups that were using its services for activities ranging from political manipulation to malware development. One group, STORM-0817, was found developing Android malware and scraping data from Instagram. Users can request that their data not be stored, subject to approval.