The Expert's View with Michael Novinson


Will Generative AI's Use in Cyber Tools Exceed Expectations?

To What Extent Will Security Tools Benefit From Linking Arms With OpenAI's ChatGPT?

Cybersecurity product launches almost never receive mainstream press attention.


But when the product in question builds on the generative AI that has taken the world by storm since November, a different set of rules applies.

Microsoft's unveiling of Security Copilot in March was covered not just by trade publications writing for business or technology leaders but also by the world's biggest news outlets. The Washington Post said Copilot will accelerate the "never-ending arms race" between hackers and defenders, though the paper noted it's unclear whether generative AI "will change the tech industry as dramatically as leaders are predicting."

"Cybersecurity in general is the major challenge of our times," Microsoft CEO Satya Nadella told The Post last month. "Can we give the analyst's speed a 10X boost? Can we bring a novice analyst and change their learning curve?"

Security Copilot is laser-focused on the labor shortage, enabling organizations to address a growing number of threats despite the continued dearth of cybersecurity talent. Its most impressive features include a simple natural language interface as well as the ability to quickly reverse-engineer threats. It's still unclear how Copilot will link with third-party vendors and data sources (see: The Security Perks and Perils of OpenAI on Microsoft Bing).

Embracing AI Chatbots for Security Queries

Though Security Copilot attracted the most attention because it was developed by the company that brought generative AI chatbots to the masses, it's neither the first nor the only security product to incorporate OpenAI's models into its design. The distinction of being first belongs to Orca Security, which in January began forwarding security alerts to GPT-3 to generate remediation instructions.
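Orca hasn't published the details of its integration, but the underlying pattern is simple to sketch. The hypothetical Python snippet below, which assumes the official openai client library and a made-up alert, simply forwards the alert's details to a GPT model and asks for remediation steps:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative alert only -- not an actual Orca Security payload
alert = {
    "title": "S3 bucket allows public read access",
    "resource": "arn:aws:s3:::example-bucket",
    "severity": "high",
}

prompt = (
    "You are a cloud security assistant. Explain the risk posed by the "
    f"following alert and list concrete remediation steps.\n\nAlert: {alert}"
)

# Any chat-capable model would do; gpt-3.5-turbo is used here for illustration
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # keep remediation advice conservative
)

print(response.choices[0].message.content)

The model's reply is free text, so in practice a vendor would validate and format it before surfacing it to analysts.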

In the two months between Orca's announcement and the debut of Security Copilot, several other security vendors identified ways to enhance their products with generative AI. Kubernetes security company Armo now pre-trains ChatGPT so that customers can use natural language to create security policies, get a description of said rules and receive remediation suggestions to fix failed controls.
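Armo's implementation is proprietary, but a generic natural-language-to-policy flow can be sketched in much the same way. This hypothetical example, again assuming the openai Python client, asks a chat model to turn a plain-English request into Kubernetes NetworkPolicy YAML for a human to review before applying:

from openai import OpenAI

client = OpenAI()

# Hypothetical plain-English request from a customer
request = ("Only allow pods labeled app=frontend to reach the payments "
           "service on TCP port 8443; deny all other ingress.")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Return only valid Kubernetes NetworkPolicy YAML, no prose."},
        {"role": "user", "content": request},
    ],
    temperature=0,  # policy generation should be as deterministic as possible
)

# Review the generated YAML before applying it with kubectl
print(response.choices[0].message.content)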

Logpoint, meanwhile, integrated ChatGPT into its security orchestration, automation and response technology so that customers can investigate whether using SOAR playbooks with ChatGPT is right for them. And AlertEnterprise launched a chatbot powered by ChatGPT to streamline user access to information on physical access, identity access management, visitor management and security and safety reporting.

Island Security, which unveiled its Enterprise Browser with built-in security controls for the workspace in February 2022, announced in January that it was the first company to integrate ChatGPT into a browser. A month later, Microsoft confirmed that ChatGPT would be integrated with both Bing and Microsoft's web browser Edge.

Accenture Security has looked for ways to automate some cyber defense-related tasks with ChatGPT, and companies such as Coro and Trellix have explored the possibility of embedding ChatGPT in their offerings.

The Future of Generative AI in Security Products

But the biggest wave of security products tapping into OpenAI or ChatGPT will crest over the next two weeks in conjunction with RSA Conference 2023, which runs April 24-27 in San Francisco. Recorded Future on Tuesday started using the OpenAI GPT model to process threat intelligence and generate real-time assessments of the threat landscape after training the model on more than 10 years of its own research.

That same day, data security and management vendor Cohesity revealed plans to put forth an AI-ready data structure that will advance generative AI initiatives around threat detection, classification and anomaly detection. OpenAI itself got in on the action, launching a vulnerability disclosure program on Bugcrowd that will pay $200 for "low-severity" findings and up to $20,000 for "exceptional discoveries."

And despite a flurry of related product launches since the November debut of ChatGPT, the generative AI land rush has only just begun. The gap between the companies using artificial intelligence as mere window dressing and those using it to exponentially improve their technology will become apparent to all in the months and years to come.



About the Author

Michael Novinson

Managing Editor, Business, ISMG

Novinson is responsible for covering the vendor and technology landscape. Prior to joining ISMG, he spent four and a half years covering all the major cybersecurity vendors at CRN, with a focus on their programs and offerings for IT service providers. He was recognized for his breaking news coverage of the August 2019 coordinated ransomware attack against local governments in Texas as well as for his continued reporting around the SolarWinds hack in late 2020 and early 2021.



