You can now keep your conversations with ChatGPT private, thanks to a new update that allows you to disable your chat history with the AI.
On Tuesday, OpenAI, the company behind the viral chatbot, introduced new data control features that let users switch off ‘Chat History & Training’ in their settings.
Doing so makes the experience similar to browsing in Google Chrome’s ‘incognito mode’: no one can see what you’ve been asking the chatbot.
Once disabled, ChatGPT will not save users’ conversation history or use it to improve its artificial intelligence, the company confirmed to Reuters.
‘While history is disabled, new chats will be deleted from our systems within 30 days – and reviewed only when needed to monitor for abuse – and won’t be used for model training,’ said OpenAI on its website.
If you don’t opt out, your existing conversations will still be saved and may be used by the company for model training.
Previously, all conversations with ChatGPT were saved on the left-hand side of the screen.
Now, when chat history is disabled, the conversation will not appear in your history and cannot be recovered.
There is no limit on the number of conversations you can have while history and training are disabled. This applies to both free and Plus subscriptions.
How do I turn off chat history on ChatGPT?
To disable chat history and model training, go to Settings on ChatGPT.
Select ‘Show’ next to Data Controls and switch off the ‘Chat History & Training’ toggle, which is enabled by default.
You can turn chat history back on at any time by clicking the ‘Enable chat history’ button at the top-left of the screen.
For users who want to keep their chat history but not have it used to train the AI, the company is working on a new offering called ChatGPT Business that will opt end users out of model training by default.
‘We plan to make ChatGPT Business available in the coming months,’ said OpenAI.
In the meantime, you can opt out of having your data used to improve the company’s AI by filling out a form asking OpenAI to do so. Once you submit the form, new conversations will not be used to train the language model.
The move comes as scrutiny has grown over how ChatGPT and other AI chatbots manage hundreds of millions of users’ data, commonly used to improve, or ‘train’, AI.
Last month, Italy banned ChatGPT for possible privacy violations, saying OpenAI could resume the service if it met demands such as giving consumers tools to object to the processing of their data. France and Spain also began probing the service.
Mira Murati, OpenAI’s chief technology officer, told Reuters the company is compliant with European privacy law and is working to reassure regulators.
The new features did not arise from Italy’s ChatGPT ban, she said, but from a months-long effort to put users ‘in the driver’s seat’ regarding data collection.
‘We’ll be moving more and more in this direction of prioritizing user privacy,’ Murati said, with the goal that ‘it’s completely eyes off and the models are super aligned: they do the things that you want to do’.
User information has helped OpenAI make its software more reliable and reduce political bias, among other issues, she said, but added that the company still has challenges to tackle.