AI Roundup: Staying Safe with Generative AI

Chatbots powered by generative AI are now widely available to consumers. Tools such as OpenAI's ChatGPT and Microsoft Copilot are used across a wide variety of industries, including health care, tourism, banking, and education. These tools have differing privacy policies governing use and data management, with different opt-out options. Knowing the risks and how to manage them can go a long way toward staying safe with generative AI.

Generative AI, through popular tools such as ChatGPT and Microsoft Copilot, is changing the way people work across industries. Recent surveys indicate that the technology's adoption rate is growing, and early adopters are looking for ways to expand its use cases.

According to Salesforce survey data, 29% of respondents in the UK use generative AI, a figure that rises to 45% in the US and 49% in Australia. In terms of potential use cases, around 75% of generative AI users want to use the technology to automate tasks and handle communications, and 38% of respondents say they look forward to using AI for fun. The data suggests that many are enamored with the technology, using it for everything from entertainment to business.

However, many users are still unaware of, or simply ignore, the technology's privacy risks. Popular generative AI tools such as ChatGPT, Gemini, Microsoft Copilot, and even Musk's Grok, though highly accessible, have different privacy policies covering use and data retention. Many people never read these policies and end up sharing personal information while using the tools.

This is where information and safe practices come into the picture. Before you dig deep into these generative AI tools, it's crucial to understand the basics of their privacy policies and to use the opt-out tools when necessary. Whether you're already using generative AI or still in the planning stage, here are some tips and strategies to protect your privacy.

Review the tool's Privacy Policy

Before using a tool, read its privacy policy first. You need to be comfortable with how your data is handled and retained.

At this step, ask some probing questions to help you assess and compare your options: How will the tool use the information? Is there a way to turn off data sharing, or at least configure how long the data is retained? Does it provide a quick way to opt out?

These are just some of the questions to ask when comparing the tools' privacy policies. Work only with tools that are transparent about how they use and protect your data.

Don't over-share sensitive data

Be careful about what you share when using generative AI tools. Don't be too trusting of these new tools, and always exercise caution when submitting information online, whether personal or work-related.

Even big companies have expressed concerns over employees using generative AI tools to complete their work. Once you share information, the AI provider gets access to it and could use it for training. As such, some companies will only approve custom or specially configured AI models that act as a firewall between large language models and proprietary information.

As always, it's better to exercise caution when using generative AI tools like ChatGPT. Using ChatGPT to summarize a story may not be a problem, but using the same tool to summarize a client's legal document can be. Submitting your clients' or your company's data is riskier still: the model's provider can see that content, it may be used for training, and the developers may gain access to it.
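One practical safeguard is to scrub obvious personal details from a prompt before it ever leaves your machine. The sketch below is a minimal illustration using only Python's standard library; the patterns and placeholder labels are assumptions for the example and are far from exhaustive, so treat it as a starting point rather than a complete anonymizer.

```python
import re

# A few common PII patterns; hypothetical and deliberately simple.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII with placeholder tokens before the
    text is sent to any third-party generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize: contact Jane at jane.doe@example.com or 555-123-4567."
print(redact(prompt))
```

Running a prompt through a filter like this before pasting it into a chatbot keeps the identifying details on your side, even if the rest of the conversation is retained by the provider.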

If you want to make sure your shared data isn't used to improve the generative AI model, you can turn this feature off in ChatGPT's settings. Click your name, open "Settings", find "Data Controls", and turn off the switch that says "Improve the model for everyone".

You can also use ChatGPT's "temporary chat" feature. Click the menu that says "ChatGPT", then select "Temporary chat". When this feature is active, your chat will not appear in your history, and the system will not save your conversations.

Other generative AI tools offer similar controls. In Perplexity, there's a switch called "AI data retention" under settings. If you're using Google's Gemini, click "Activity" at the lower left, then on the next page open the "Turn off" menu.

Opt in only when necessary

Companies are adopting generative AI tools to improve their workflows. Some AI tools are integrated into existing products, which makes adoption faster and more seamless. Microsoft, for example, recently introduced Copilot for Microsoft 365, which integrates with Word, Excel, and PowerPoint. The good news is that Microsoft states it doesn't share consumer information with third parties without permission, and it doesn't use customers' data to train and develop Copilot without consent.

Microsoft also offers users the option to opt in and share data. Opting in enhances the functionality of certain Copilot features, but it also means giving up some control over how your data is used. If you become uncomfortable later, you can always opt out and withdraw your consent.

Generative AI tools like ChatGPT and Copilot are extremely helpful for finding ideas and information online. However, searching and sharing data along the way can also affect your privacy. During a search, these services often store cookies or remember information to speed up future queries. If you're using generative AI tools for search, set a short retention period, and whenever possible delete chats once you have what you need. Again, this can prevent your information from being used in model training.
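If you keep local copies of your chats, a short retention window can also be enforced with a few lines of code. The sketch below assumes a hypothetical local log where each entry carries an ISO-8601 timestamp; it illustrates the idea of pruning by age and is not any particular tool's actual storage format.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 7  # a short retention window; adjust to taste

def prune_chat_log(entries, now=None):
    """Keep only chat entries newer than RETENTION_DAYS.

    Each entry is assumed to be a dict with an ISO-8601 'timestamp'
    key (a made-up format for this example)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [e for e in entries
            if datetime.fromisoformat(e["timestamp"]) >= cutoff]

log = [
    {"timestamp": "2020-01-01T00:00:00+00:00", "text": "old chat"},
    {"timestamp": "2024-06-10T00:00:00+00:00", "text": "recent chat"},
]
recent = prune_chat_log(log, now=datetime(2024, 6, 12, tzinfo=timezone.utc))
```

Running something like this on a schedule means stale conversations disappear from your own records, mirroring the short retention period you set on the service side.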

The good news is that the developers of these tools make it easy to manage or delete your chats and search logs. If you're using Microsoft's chatbot, open your search history and review and delete your past conversations. If you're using ChatGPT on the web, click your email address (at the bottom left), then "Settings", then "Data controls". There you can stop ChatGPT from using your data to train the model while still keeping access to your chat history.

You can also wipe your conversations from the platform by clicking the trash can icon next to them on the main screen, or by clicking your email address, selecting "Clear conversations", and then "Confirm clear conversations" to remove your chat history.

Generative AI tools like ChatGPT, Microsoft's Copilot, and Google's Gemini are changing how we work. A wide range of industries now rely on the technology, and individual users enjoy its features for both work and entertainment.

But as with any technology, it's crucial to know its limits and use it responsibly, and this is especially true for AI tools that run on your data. If you're going to use generative AI, err on the side of caution: read and verify the privacy policies, watch what you share, and opt out when necessary.