Hackers Target OpenAI’s Press Account in Latest Cybersecurity Breach
Hackers compromised OpenAI’s press account on X on September 23, marking the sixth cybersecurity breach the organization has suffered since January 2023. The hackers used the account to promote a fake “OPENAI” token via a phishing link, falsely claiming users could claim tokens and gain access to upcoming beta programs.
X users first noticed suspicious activity on the “OpenAI Newsroom” account at 10:26 p.m. UTC. Screenshots of the since-deleted posts show the attackers promoting an “OPENAI” token that purported to combine artificial intelligence with blockchain technology. The phishing link directed users to a blacklisted website that asked them to connect their cryptocurrency wallets, most likely in an attempt to harvest login credentials.
The hackers disabled comments on the posts, a common tactic to keep other users from warning about the scam. Although the malicious content was quickly removed, neither OpenAI nor its CEO, Sam Altman, has publicly addressed the matter.
This hack is the latest in a series targeting OpenAI-affiliated accounts on X. In September 2024, hackers targeted OpenAI researcher Jason Wei’s account; Chief Scientist Jakub Pachocki’s and CTO Mira Murati’s accounts were compromised in June 2024 and June 2023, respectively. In each case, the hackers pushed the same fake “OPENAI” token.
Earlier, in 2023, OpenAI also suffered a significant breach when a hacker broke into the company’s internal forum and exposed sensitive employee data and correspondence. Fortunately, the hacker did not gain access to OpenAI’s core systems.
As the company works to recover from these incidents, the repeated attacks have drawn criticism, and security experts have urged OpenAI to respond more aggressively. Some advocate adopting two-factor authentication to help prevent further attacks.
Though OpenAI has not yet responded to the most recent intrusion, the frequency of such incidents highlights the growing need for stronger cybersecurity practices in the face of ongoing threats.