
People Are Pirating GPT-4 By Scraping Exposed API Keys

Why pay for $150,000 worth of OpenAI access when you could just steal it?
Image: Jack Guez/Contributor

People on the Discord for the r/ChatGPT subreddit are advertising stolen OpenAI API tokens that have been scraped from other people’s code, according to chat logs, screenshots and interviews. People using the stolen API keys can then use GPT-4 while racking up usage charges on the victim’s OpenAI account.

In one case, someone has stolen access to a valuable OpenAI account with an upper limit of $150,000 worth of usage, and is now offering that access for free to other members, including via a website and a second dedicated Discord server. That server has more than 500 members. 


People who want to use OpenAI's large language models like GPT-4 need to make an account with the company and associate a credit card with it. OpenAI then gives them a unique API key which allows them to access OpenAI's tools. For example, an app developer can use code to implement ChatGPT or other language models in their app. The API key gives them access to those tools, and OpenAI charges a fee based on usage. “Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps),” OpenAI warns users. If the key is stolen or exposed, anyone can start racking up charges on that person's account.
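For illustration, this is roughly what using an API key looks like in a developer's own code. It is a minimal sketch: the endpoint and "Bearer" header come from OpenAI's public API documentation, and the environment-variable name is just a common convention, not something taken from the reporting.

```python
import os
import requests

# The key is read from an environment variable rather than written into the code,
# so it never ends up in a public repository. (OPENAI_API_KEY is a conventional name.)
api_key = os.environ["OPENAI_API_KEY"]

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(response.json())
```

Anyone holding the key can send the same request, and every request is billed to the account the key belongs to, which is why an exposed key is effectively an open tab.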

The method by which the pirate gained access highlights a security risk that paying OpenAI users need to keep in mind. The person says they scraped a website that allows people to collaborate on coding projects, according to screenshots. In many cases, the authors of code hosted on the site, called Replit, likely did not realize they had included their OpenAI API keys in publicly accessible code, exposing them to third parties.

“My acc [account] is still not banned after doing crazy shit like this,” the pirate, who goes by the handle Discodtehe, wrote in the r/ChatGPT Discord server Wednesday.

Do you know anything else about how people are maliciously using AI? We'd love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, or email joseph.cox@vice.com.


In the past few days, Discodtehe’s use of at least one stolen OpenAI API key appears to have ramped up. They shared multiple screenshots of the account usage increasing over time. One recent screenshot shows usage this month of $1,039.37 out of $150,000.

“If we have enough people they might not ban us all,” Discodtehe wrote on Wednesday.

Discodtehe appears to have been scraping exposed API keys for some time, though. In one Discord message from March, they wrote, “the other day I scraped repl.it and found over 1000 working openai api keys.”

“I didn’t even do a full scrape, I only looked at about half of the results,” they added.

Replit is an online tool for writing code collaboratively. Users can make projects, what Replit calls “Repls,” which are public by default, Cecilia Ziniti, Replit’s general counsel and head of business development, told Motherboard in an email. Replit offers a mechanism for handling API keys called Secrets, Ziniti added.

“Some people accidentally do hard code tokens into their Repl's code, rather than storing them in Secrets. Ultimately, users are responsible for safeguarding their own tokens and should not be storing them in public code,” Ziniti said. 
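The difference Ziniti describes is small in the code itself. Here is a minimal sketch of the two patterns, assuming the documented behavior that Replit injects Secrets into a running Repl as environment variables; the key value and variable name are placeholders.

```python
import os

# The pattern that gets scraped: a key written directly into a public Repl's source.
# api_key = "sk-..."   # anyone who views or scrapes the Repl can copy this

# The pattern Replit recommends: store the key as a Secret, which is injected
# into the running Repl as an environment variable and never appears in the code.
api_key = os.environ.get("OPENAI_API_KEY")
```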

Ziniti said the company scans projects for popular API key types, such as those from GitHub. After being alerted to this new API key issue by Motherboard, Ziniti said, “Going forward, Replit will be reviewing our token scanning system to ensure that users are warned about accidentally exposing ChatGPT tokens.”
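Scans like the one Ziniti describes typically amount to pattern matching over source files. Below is a rough sketch of how a check for OpenAI-style keys might work, assuming the familiar “sk-” prefix; the exact regex is an illustration, not the rule Replit or OpenAI actually uses.

```python
import re
from pathlib import Path

# Illustrative pattern: "sk-" followed by a long run of key-like characters.
# Real key formats vary, so production scanners use more precise rules.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9]{20,}")

def scan_directory(root: str) -> list[tuple[str, str]]:
    """Return (file, match) pairs for anything that looks like an exposed key."""
    hits = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for match in KEY_PATTERN.findall(text):
            hits.append((str(path), match))
    return hits

if __name__ == "__main__":
    for filename, key in scan_directory("."):
        print(f"Possible exposed key in {filename}: {key[:8]}...")
```

Running a scan like this on your own projects before publishing them is one way to catch the accidental hardcoding described above.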



A ChatGPT community member told Motherboard that Discodtehe “should definitely stop.”

“This is a steadily growing industry and so of course there’ll be crime in it sooner or later, but I’m shocked at how quickly it’s become an issue. The theft of corporate accounts is bad for sure, but I’m personally more bothered about the way these guys are willing to rob regular people who posted their keys by mistake,” they added. Motherboard granted the person anonymity so they didn’t face retaliation from other community members. 


A screenshot of the site offering free GPT-4 access. Image: Motherboard.

Discodtehe went a step further than just scraping tokens. Another Discord server, called ChimeraGPT, is offering “free access to GPT-4 and GPT-3.5-turbo!,” according to chat logs viewed by Motherboard. Discodtehe said in another message that ChimeraGPT is using the same organization as the stolen API key discussed in the r/ChatGPT Discord server. Motherboard found a GitHub repository that recommends using ChimeraGPT for getting a free API key. At the time of writing, that server had 531 members.

Discodtehe said in another message they also created a website where people can request free access to the OpenAI API. (Ironically, this site is also hosted on Replit; shortly before publication the site became inaccessible).

The site tells users to enter their email address, click a link sent by OpenAI to accept the invite, and set the organization “weeeeee,” which Discodtehe appears to be using, as their default for billing.


“enjoy free gpt-4 api access,” the website concludes. On Wednesday, the organization linked to the OpenAI account had 27 members, according to one screenshot. By Thursday, that number had jumped to 40, according to another.


Discodtehe did not respond to a request for comment. A manager of the r/ChatGPT Discord server called “Dawn” told Motherboard their volunteer mods cannot check every project, and that “we are issuing a ban on the user.”

An OpenAI spokesperson told Motherboard in an email that “We conduct automated scans of big open repositories and we revoke any OpenAI keys discovered. We advise that users not reveal their API key after they are generated. If users think their API key may have been exposed, we urge them to rotate their key immediately.”

The community member, however, said, “I think OpenAI holds a little bit of culpability here for how their authentication process works too though.”

“You don’t hear about API access to Google Cloud accounts getting stolen like this because Google has better auth[entication] procedures. I hope OpenAI’s integration with Microsoft brings some better security for users going forward,” they said.

Discodtehe referred to the usage as “just borrowing” in another message. They wrote that the usage is “just quote, no bills have been paid yet.”

“In the end, OpenAI will likely foot the bill,” they said. 

OpenAI did not immediately respond to a follow-up question asking if it would foot the bill.
