The New York Times reported last year that a hacker gained access to OpenAI's internal messaging systems and extracted details about the company's artificial intelligence technologies. The stolen information came from an online forum where employees discussed OpenAI's latest advancements. Notably, the hacker did not penetrate the systems where OpenAI houses and develops its AI products, including ChatGPT.
OpenAI, backed by Microsoft Corp., reportedly informed employees and the company's board of the breach at an all-hands meeting in April of last year but chose not to disclose the incident publicly, since no customer or partner data was compromised. Executives also judged that the breach did not constitute a national security threat, believing the hacker to be a private individual with no known ties to a foreign government.
Despite the disruption caused by the breach, OpenAI did not report the incident to federal law enforcement agencies. The news comes amid broader concerns about the security and potential misuse of AI technology, particularly after instances in which AI models were exploited for deceptive activity online.
In response to ongoing security challenges, including geopolitical concerns over AI technology, the Biden administration has weighed regulatory measures to safeguard advanced AI models. The effort is part of a broader global initiative in which 16 companies have committed to developing AI technology safely amid evolving regulatory landscapes and emerging risks.