ChatGPT starts addressing privacy concerns

By Jalaja Ramanunni

OpenAI has introduced a much-awaited feature that lets users keep their ChatGPT data private.

Generative AI tools such as ChatGPT are used in the digital advertising industry, particularly in search advertising.

Users can now turn off chat history in ChatGPT, letting them choose which conversations can be used to train its models; conversations started with history disabled will not be used for training.

The new feature was rolled out on 25th April, according to OpenAI’s blog post.

Enabling the feature also means the chat will not appear in the user's history sidebar. However, even with chat history turned off, OpenAI will still retain users' chats for 30 days to monitor for abuse, the blog states. OpenAI says it will review these chats only when necessary, and after 30 days they will be permanently deleted.

Data privacy on the platform has been a rising concern for many. In March this year, ChatGPT was banned in Italy over privacy concerns.

“Right now, we are in the pioneer phase. New tools are being launched and new applications for AI are being found almost every day. After that will come regulations, a new reality for our generation where tech progresses faster than laws,” said Benjamin Schwartz, Business Director, BPG Group.

OpenAI also revealed plans to offer a ChatGPT Business subscription for users who need more control over their data and for enterprises that want to manage their end users.

“We plan to make ChatGPT Business available in the coming months,” the blog post states.