These are the ‘secret rules’ behind Microsoft’s ChatGPT-powered Bing search
Microsoft’s newly unveiled early-stage test of Bing search, powered by OpenAI’s generative artificial intelligence (AI) tool ChatGPT, operates under a host of ‘secret’ rules that govern the results on its conversational platform. A report by The Verge published a list of rules that the ChatGPT-powered Bing search service disclosed, adding that Microsoft itself has confirmed these rules are genuine.
The report cites Caitlin Roulston, director of communications at Microsoft, as confirming the existence of the rules behind the results that Bing search produces, as well as the internal codename, Sydney. The codename reportedly referred to an earlier version of an AI-powered conversation platform that Microsoft was working on before the ChatGPT-powered Bing search came to light.
The Verge discovered the list of rules during its early access to the platform by reverse-engineering input phrases to the tool, that is, by asking questions that ultimately lead the search tool into revealing the source logic behind the results it produces. To be sure, generative AI tools such as Bing, as well as Google’s newly introduced Bard, have been trained on billions of data points, and use plain-text inputs to generate a conversational search result, instead of the lists of web pages seen in standard search engines.
[Image credit: Dmitri Brereton on Twitter]
The rules in question include clauses requiring search results on the platform to be “informative, visual, logical, actionable, positive, interesting, entertaining and engaging,” while avoiding results that are “vague, controversial or off-topic”. The full list can be accessed in The Verge’s report.
Microsoft unveiled the ChatGPT-powered Bing search at a conference in the US on February 7. The company has clarified that while ChatGPT is one of the tools powering the conversational search platform, the service is also connected to the internet so it can offer recent search results rather than only dated ones. As per Bing’s own search results, as well as OpenAI’s previous disclosures, ChatGPT’s data set was trained on inputs up to sometime in 2021.
However, the service was flagged for returning a host of erroneous results during its initial demonstration, including wrong figures from financial reports and incorrect product descriptions. Google’s own AI-powered search platform, Bard, which was unveiled on February 6, also produced a misleading search result as part of its initial demo, leading to a loss of around $100 billion in Google’s stock value; the stock fell 7.7% on the day following the launch. Dmitri Brereton, a US-based independent AI and search engine expert, published a list of errors that the ChatGPT-powered Bing produced.
The firms behind these tools, Microsoft and Google, have maintained that the services are at an early stage of testing and may initially return errors. Google chief Sundar Pichai added that Bard is currently based on a limited model of the company’s conversational AI algorithm, Language Model for Dialogue Applications (LaMDA), which will be expanded as more users join the platform.