OpenAI’s SearchGPT: A Game Changer Or Pandora’s Box For Cybersecurity Professionals?

This novel search tool promises faster, more relevant results by combining the power of AI models with real-time web data. But for cybersecurity pros, how well it handles misinformation remains to be seen.

by Mihir Bagwe July 25, 2024

OpenAI is throwing its hat into the AI search ring with SearchGPT, a prototype designed to revolutionize how users find information online. This novel tool promises faster, more relevant results by combining the power of AI models with real-time web data.

But for cybersecurity professionals accustomed to navigating a minefield of misinformation and disinformation, SearchGPT raises intriguing questions about its potential impact on the online threat landscape.

“We think there is room to make search much better than it is today.” – Sam Altman, CEO of OpenAI

Boosting Efficiency or Amplifying Disinformation?

As noted by the Russian-American computer scientist and podcaster Lex Fridman, it’s been a “crazy week” for all things AI.

First Elon Musk announced a push for Grok 2 and 3, then Meta released Llama 3.1, which in turn was topped by Mistral AI, whose Mistral Large 2 release yesterday reportedly beats Llama’s latest version on code and math. Google DeepMind on Wednesday said its AI systems achieved silver-medal standard on International Mathematical Olympiad problems. And just as AI pundits were catching their breath from this week’s thrill ride, OpenAI on Thursday morning announced its competitor to the Google and Perplexity AI search engines, called SearchGPT.

SearchGPT claims the ability to directly answer user queries with up-to-date information and clear source attribution. “We believe that by enhancing the conversational capabilities of our models with real-time information from the web, finding what you’re looking for can be faster and easier,” OpenAI said.

This streamlined approach could significantly reduce the time spent sifting through irrelevant search results, a boon particularly for security practitioners who are constantly battling information overload. However, concerns linger about the model’s ability to discern trustworthy sources from malicious ones. Disinformation campaigns are a growing scourge in the cyber realm, and AI-powered search engines could inadvertently amplify their reach if not carefully calibrated.

Transparency and Trust: Cornerstones of AI Search Security

To earn the trust of cybersecurity professionals, SearchGPT must prioritize transparency in its source selection and ranking algorithms. Clear explanations of how the model prioritizes information and identifies reliable sources will be crucial. Furthermore, the ability for users to refine searches based on specific criteria like publication date or source credibility will empower security personnel to make informed decisions about the information they consume.

A Symbiotic Relationship with Publishers

Several media and news outlets, including The New York Times, have sued OpenAI in recent months over alleged copyright violations. They argue that OpenAI illegally trained its AI models on their published work without consent or compensation, profiting from protected and allegedly plagiarized material. OpenAI has dismissed these claims, saying the practice falls under “fair use.”

But to avoid similar issues in the future, the company has partnered with publishers, including The Atlantic and News Corp, in building SearchGPT.

“We’ve partnered with publishers to build this experience and continue to seek their feedback. In addition to launching the SearchGPT prototype, we are also launching a way for publishers to manage how they appear in SearchGPT, so publishers have more choices,” OpenAI said. “Importantly, SearchGPT is about search and is separate from training OpenAI’s generative AI foundation models. Sites can be surfaced in search results even if they opt out of generative AI training.”
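In practice, this kind of publisher choice is typically expressed through crawler directives in a site’s robots.txt file. The snippet below is a hypothetical sketch, assuming OpenAI maintains separate user agents for search crawling (OAI-SearchBot) and for model training (GPTBot); publishers should verify the current agent names and behavior against OpenAI’s own documentation.

User-agent: OAI-SearchBot   # hypothetical sketch: allow the search crawler so pages can surface in SearchGPT
Allow: /

User-agent: GPTBot          # block the training crawler, opting the site out of generative AI training
Disallow: /

Under a configuration like this, a site would remain eligible to appear in SearchGPT results while withholding its content from model training, mirroring the separation OpenAI describes above.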

Robert Thomson, chief executive at News Corp., said, “Sam and the truly talented team at OpenAI innately understand that for AI-powered search to be effective, it must be founded on the highest-quality, most reliable information furnished by trusted sources. For the heavens to be in equilibrium, the relationship between technology and content must be symbiotic and provenance must be protected.”

“AI search is going to become one of the key ways that people navigate the internet, and it’s crucial, in these early days, that the technology is built in a way that values, respects, and protects journalism and publishers. We look forward to partnering with OpenAI in the process, and creating a new way for readers to discover The Atlantic,” said Nicholas Thompson, CEO of The Atlantic.

OpenAI’s commitment to partnering with publishers is a positive step. By ensuring that high-quality content from reputable sources is prominently displayed, SearchGPT could contribute to a more secure information ecosystem. Publisher controls over how their content appears within the tool also offer a degree of oversight essential for maintaining trust and data integrity. It remains to be seen how well the search tool filters out misinformation, but for now, it’s a start.

The Learning Curve: Feedback and Open Communication

OpenAI’s proactive approach of seeking community feedback on SearchGPT is commendable. By fostering open communication and actively incorporating security expertise into the development process, the potential risks associated with SearchGPT can be mitigated. Using the prototype as a learning platform to understand how malicious actors might exploit AI search engines will be key to ensuring its long-term viability in the cybersecurity landscape.

The Final Verdict: A Promising Future, But Vigilance is Key

SearchGPT presents a compelling vision for a future where finding accurate and relevant information online is a seamless experience. For cybersecurity professionals, the potential benefits of enhanced efficiency and streamlined research are undeniable.

However, ensuring the tool doesn’t become a breeding ground for misinformation requires a focus on transparency, source credibility, and ongoing collaboration with the security community. OpenAI’s follow-through on these principles will determine whether SearchGPT becomes a valuable asset or a significant new challenge in cybersecurity’s mis- and disinformation landscape.

The SearchGPT prototype is currently available only to a select group of users via a waitlist. “We will learn from the prototype, make it better, and then integrate the tech into ChatGPT to make it real-time and maximally helpful,” Altman said.
