OpenAI Unleashes Bug Bounty Program With Rewards Up to $20,000

The company behind the wildly popular ChatGPT will pay researchers anywhere from hundreds to tens of thousands of dollars to report bugs in its code.

Image: Tada Images (Shutterstock)

OpenAI, the maker of the ever popular and powerful ChatGPT, has announced a pretty sweet deal for security researchers called the Bug Bounty Program. In exchange for finding bugs in OpenAI’s software, the company is willing to hand out anywhere from $200 to $20,000.

OpenAI announced the Bug Bounty Program on its website yesterday, citing transparency and collaboration as reasons for opening up the debugging effort to the general public. The reward for identifying security flaws ranges from $200 for “low-severity findings” to a whopping $20,000 for “exceptional discoveries.” The program is run in partnership with Bugcrowd, a cybersecurity firm that focuses on a crowdsourced approach to identifying flaws in software, and OpenAI says Bugcrowd will handle the process of receiving user-submitted bug reports as well as distributing payouts.

“OpenAI’s mission is to create artificial intelligence systems that benefit everyone. To that end, we invest heavily in research and engineering to ensure our AI systems are safe and secure. However, as with any complex technology, we understand that vulnerabilities and flaws can emerge,” the company wrote on its website. “The OpenAI Bug Bounty Program is a way for us to recognize and reward the valuable insights of security researchers who contribute to keeping our technology and company secure.”

Given ChatGPT’s meteoric rise since its release to the public last fall, crowdsourcing the hunt for ghosts in the machine is a pretty sensible way for OpenAI to bolster its security, and there will probably be plenty of tech fanboys willing to sign up. There is also the possibility that this is an attempt by OpenAI to drum up public goodwill by opening access to its inner machinations, much the same way Elon Musk did by making parts of Twitter’s recommendation algorithm open source.

ChatGPT has truly taken the world by storm over the past few months: it has passed an MBA-level exam at Wharton, written an article for Gizmodo, and even pretended to be blind to convince a human to solve a captcha. The rapid progression and effectiveness of the AI have worried some experts, however, so much so that 500 top technologists (and Elon Musk) have demanded a pause on developing more powerful systems, citing the potential hazards the tech may present in an uncertain future.

Want to know more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to The Best Free AI Art Generators, The Best ChatGPT Alternatives, and Everything We Know About OpenAI’s ChatGPT.