OpenAI launches bug bounty program with rewards up to $20K
AI research company OpenAI announced today that it has launched a bug bounty program allowing registered security researchers to discover vulnerabilities in its product line and get paid for reporting them through the Bugcrowd crowdsourced security platform.
The rewards are based on the severity and impact of the reported issues and range from $200 for low-severity security flaws up to $20,000 for exceptional discoveries.
“The OpenAI Bug Bounty Program is a way for us to recognize and reward the valuable insights of security researchers who contribute to keeping our technology and company secure,” OpenAI said.
“We invite you to report vulnerabilities, bugs, or security flaws you discover in our systems. By sharing your findings, you will play a crucial role in making our technology safer for everyone.”
However, while the OpenAI Application Programming Interface (API) and its ChatGPT artificial-intelligence chatbot are in-scope targets for bounty hunters, the company asked researchers to report model issues via a separate form unless they have a security impact.
“Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed. Addressing these issues often involves substantial research and a broader approach,” OpenAI said.
“To ensure that these concerns are properly addressed, please report them using the appropriate form, rather than submitting them through the bug bounty program. Reporting them in the right place allows our researchers to use these reports to improve the model.”
Other out-of-scope issues include jailbreaks and safety bypasses that users have been exploiting to trick the chatbot into ignoring the safeguards implemented by OpenAI engineers.
Last month, OpenAI disclosed a ChatGPT payment data leak that the company blamed on a bug in the open-source Redis client library used by its platform.
Because of the bug, ChatGPT Plus subscribers began seeing other users’ email addresses on their subscription pages. Following a growing stream of user reports, OpenAI took the ChatGPT bot offline to investigate the issue.
In a post-mortem published days later, the company explained that the bug caused the ChatGPT service to expose chat queries and personal information for roughly 1.2% of Plus subscribers.
The exposed info included subscriber names, email addresses, payment addresses, and partial credit card information.
“The bug was discovered in the Redis client open-source library, redis-py. As soon as we identified the bug, we reached out to the Redis maintainers with a patch to resolve the issue,” OpenAI said.
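For illustration only, the short Python sketch below shows the general class of bug that can lead to this kind of cross-user data exposure: a pooled connection returned to the pool with an unread reply still buffered will hand that stale reply to the next caller. The FakeConnection and NaivePool classes are hypothetical stand-ins invented for this example, not redis-py’s actual code or OpenAI’s implementation.

```python
# Hypothetical illustration of the response-mixup bug class: a pooled
# connection that is returned to the pool with an unread reply still
# buffered hands that stale reply to the next caller.
# This is NOT redis-py's actual code; all names here are invented.

from collections import deque


class FakeConnection:
    """Simulates a client connection with a reply buffer."""

    def __init__(self):
        self._replies = deque()

    def send_command(self, key):
        # Pretend the server answers immediately; the reply sits in
        # the buffer until someone reads it.
        self._replies.append(f"data-for:{key}")

    def read_reply(self):
        return self._replies.popleft()


class NaivePool:
    """A pool that never checks whether a returned connection is 'clean'."""

    def __init__(self):
        self._free = [FakeConnection()]

    def acquire(self):
        return self._free.pop()

    def release(self, conn):
        # Bug: the connection goes back into the pool even though a
        # reply may still be waiting to be read.
        self._free.append(conn)


pool = NaivePool()

# User A's request is abandoned after sending but before reading the reply.
conn = pool.acquire()
conn.send_command("user-a:billing")
pool.release(conn)  # stale reply still buffered on the connection

# User B reuses the same connection and receives user A's data.
conn = pool.acquire()
conn.send_command("user-b:billing")
print(conn.read_reply())  # -> data-for:user-a:billing (wrong user!)
```

Running the sketch prints user A’s buffered reply in response to user B’s request, which mirrors the symptom described above: one subscriber seeing another subscriber’s information.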
While the company did not link today’s announcement to this recent incident, the bug might have been discovered earlier, and the data leak potentially avoided, had OpenAI already been running a bug bounty program that allowed researchers to test its products for security flaws.
Source: https://www.bleepingcomputer.com/news/security/openai-launches-bug-bounty-program-with-rewards-up-to-20k/