Is ChatGPT Safe? – Everything You Need to Know

Users of ChatGPT need to be aware of how it uses their data and what they need to do to protect their privacy. By following a few simple steps, you can use ChatGPT while staying safe online.

While ChatGPT is safe for casual use, users should take special care when using it for work and avoid entering sensitive data. OpenAI uses the data users enter to improve model performance, and information absorbed into the model may eventually become available to the general public.

While the safest approach would be to not use AI at all, AI is the future of technology. With AI being integrated into many major software systems, people will need to learn to work safely alongside it rather than attempt to avoid it altogether.

Let’s look at some ways people can stay safe while still using ChatGPT, see what OpenAI, the creator of ChatGPT, is doing in regard to data security, and review emerging technologies that will make chatbots like ChatGPT more acceptable in the workplace.

[Image: a lock and key in front of a computer screen. Keeping data safe when using ChatGPT depends on user behavior.]

Staying Safe While Using ChatGPT

The primary concern with entering data into ChatGPT is that users do not know what is being done with it.

By default, ChatGPT keeps a record of your conversation history and uses the information you enter to further train and improve its model. Users can opt out of data collection, and OpenAI has recently added an incognito-style setting that lets users chat privately; conversations started with history disabled are retained for 30 days and then permanently deleted.

The Machine Learning (ML) and Artificial Intelligence (AI) technology used in ChatGPT is often criticized as being a black box. Once data goes into the model, it is ingested and assimilated into the model's parameters. These models contain so much data that it is difficult for humans to explain how the model uses any given piece of it when determining an answer to a prompt.

Once information is entered into a large language model like ChatGPT it may become impossible to fully remove it from the model.

This has several key implications.

  • Do not enter information into ChatGPT that you wouldn’t want to appear in someone else’s search result – OpenAI states that information entered into ChatGPT may be used for future training.
  • Once data is entered into ChatGPT it may be impossible to remove – ChatGPT is a highly complex system that even its creators and academics working on similar projects do not fully understand.
  • Users can still use ChatGPT for generic questions – We use ChatGPT extensively for technical research, writing SQL queries, Excel formulas, and VBA, or for researching background information on a broad range of topics.
  • Replace Google searches with ChatGPT – In many cases, you can use ChatGPT much like you use Google. People mostly type broad topics into Google and do not enter personal or company information in the search box.
  • Don’t generate company-specific content with ChatGPT – When using ChatGPT to write e-mails or letters, do not copy and paste specific information such as names, addresses, and dollar amounts into ChatGPT. Use generic terms or prompt ChatGPT to use placeholder names (see the sketch after this list).

If users avoid entering information that’s too specific or sensitive into ChatGPT, they can leverage it as a productivity tool in their day-to-day lives and at work.
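To make the placeholder advice above concrete, here is a minimal sketch in Python of scrubbing a prompt before it leaves your machine. The regex patterns and placeholder tokens below are illustrative choices for this example, not a production-ready filter:

```python
import re

# Illustrative patterns only - extend these for your own data.
REDACTIONS = [
    (re.compile(r"\$\d[\d,]*(\.\d{2})?"), "[AMOUNT]"),          # dollar amounts
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),        # e-mail addresses
    (re.compile(r"\b\d{3}[-.]?\d{3}[-.]?\d{4}\b"), "[PHONE]"),  # US phone numbers
]

def scrub(prompt: str) -> str:
    """Replace sensitive specifics with generic placeholders."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

# Prints: "Draft a payment reminder to [EMAIL] about the overdue [AMOUNT] invoice."
print(scrub("Draft a payment reminder to jane.doe@acme.com about the overdue $4,250.00 invoice."))
```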

How does ChatGPT use your personal information?

OpenAI explains in depth how it uses a user’s personal information in its privacy policy. Some of the use cases are standard business operations you would see at most companies.

Most interesting is the use of information to improve their service and conduct research.

An excerpt from the OpenAI privacy policy explaining key uses of user data:

Whether ChatGPT is safe to use will largely depend on the data users enter into it. The OpenAI privacy policy clearly states that data is used to improve services and conduct research.

What does OpenAI do to keep your data safe?

OpenAI has a number of safeguards in place to keep user data safe and secure. They comply with a number of key pieces of data privacy legislation. In addition to these compliance measures, OpenAI will also work directly with companies to meet custom requirements such as HIPAA compliance.

GDPR (General Data Protection Regulation): An EU law from 2018 that gives individuals control over their personal data and requires organizations to be transparent about how they handle such data.

CCPA (California Consumer Privacy Act): A California law from 2020 that grants consumers rights to know about and control the personal information collected about them by businesses.

SOC 2 (System and Organization Controls 2): An auditing procedure ensuring a company’s non-financial reporting controls effectively protect its data security, availability, processing integrity, confidentiality, and privacy.

Bug Bounty Program: OpenAI leverages crowdsourcing and the online community of programmers and cyber security professionals to identify bugs in its software. Rewards range from a couple hundred to a few thousand dollars based on the severity of the bug.

You can read OpenAI’s full and most recent security policy here.

OpenAI data usage for consumer services FAQ

OpenAI provides additional information and regularly updates its frequently asked questions on data usage. The FAQ contains several helpful links for non-API products, current procedures for requesting data deletion, and further detail on how to delete your OpenAI account and what OpenAI does with your data.

The current FAQ can be found here: Data usage for consumer services FAQ | OpenAI Help Center

Does OpenAI sell your data from ChatGPT?

OpenAI states that it does not use the data users put into ChatGPT to sell products or services, but it does use the information entered into ChatGPT to train and improve its AI models.

They go on to explain that there are two ways they handle data. Information sent to ChatGPT through their API, the connection protocol used by third-party services, is not used to train their AI model. However, information entered directly into ChatGPT on their website is used for model training.
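For illustration, here is a minimal sketch of the API route using the openai Python package (the pre-1.0 ChatCompletion interface; newer releases of the package expose an equivalent client object). Under OpenAI's stated policy, data submitted through the API is not used for model training by default, unlike prompts typed into the ChatGPT website:

```python
import os
import openai

# API traffic falls under OpenAI's API data policy: by default it is not
# used to train models, unlike prompts entered on the ChatGPT website.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain GDPR in one paragraph."}],
)
print(response.choices[0].message.content)
```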

OpenAI gives users the ability to opt out of having their data used for training purposes.

Opting Out of ChatGPT Data Collection

OpenAI gives users of its products the ability to opt out of data collection by filling out an online form. Once you fill out and submit the form, OpenAI will no longer use the data you enter directly into products like ChatGPT and DALL·E 2 for training purposes.

The ChatGPT User Content Opt Out Form will not remove any data that has already been entered into ChatGPT; it only prevents future data from being used for training.

[Screenshot: ChatGPT user content opt out request and data retention policy.]

You can find the opt out form here: OpenAI User Content Opt Out Request

Another recent addition to ChatGPT is an in-private or incognito mode that lets users turn data collection on or off.

How to use ChatGPT in Incognito Mode

Users who don’t mind most of their prompts being used for training, but want to keep certain conversations confidential, can turn on the equivalent of in-private or incognito mode in ChatGPT.

To turn on ChatGPT Incognito mode follow these steps:

  • Click the three dots (ellipsis) next to your name in the bottom left corner of ChatGPT.
  • Click on Settings.
  • Click Data Controls.
  • Toggle Chat history & training off.

[Screenshot: ChatGPT data control settings for disabling chat history & training.]

Incognito mode was a recent feature addition to ChatGPT. Turning the chat history & training setting off stops ChatGPT from using your data for training and prevents your conversation history from being saved.

Is ChatGPT Safe for Work?

As with any new technology, people have a number of cyber security and data governance concerns with regard to ChatGPT. Recent headlines have raised even more concern, with Fortune 500 companies like JPMorgan restricting employees’ access to ChatGPT altogether.

Certain industries, such as government contracting and finance, regularly work with highly confidential information. Even though most employees in these industries are required to go through routine cyber security training, it only takes one employee’s mistake to let sensitive data out into the world.

Because of the high risk of this data reaching the public through ChatGPT, we believe that many companies are correct in limiting employee access to ChatGPT for the time being.

Fortunately, Large Language Models (LLMs) like ChatGPT continue to improve, and ChatGPT is now available to large corporations in a more secure form.

Why Companies Should Consider Azure OpenAI Services

Microsoft has invested over $10 billion in OpenAI. It is aggressively integrating ChatGPT into many of its products, and much of OpenAI’s backend runs on the Microsoft Azure cloud.

Azure OpenAI Services are a way for companies to utilize OpenAI’s ChatGPT language model while combining it with the security and data governance features of the Azure Cloud.

Features are expected to continue improving over time. Microsoft has shown a number of recent product releases and improvements that will help companies combine data sources while letting employees access only the information related to their job roles.

By combining these features, companies will be able to deploy a version of ChatGPT that keeps their proprietary information separate from the general public and compartmentalized, so that only people with a need to access the data can do so through a ChatGPT-style interface.
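As a rough sketch of what that looks like in practice, the same openai Python package can be pointed at a company's own Azure OpenAI resource instead of OpenAI's public endpoint. The resource name, deployment name, and API version below are placeholders you would replace with your own:

```python
import os
import openai

# All values below are placeholders for your own Azure OpenAI resource.
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
openai.api_version = "2023-05-15"  # an Azure OpenAI API version current at time of writing
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

response = openai.ChatCompletion.create(
    engine="YOUR-DEPLOYMENT-NAME",  # the model deployment created in your Azure resource
    messages=[{"role": "user", "content": "Summarize our data retention options."}],
)
print(response.choices[0].message.content)
```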

On-premises Language Models are Coming

For ChatGPT deployments that require an even higher level of security than a segmented cloud, new language models are emerging that are faster and more efficient.

ChatGPT uses a massive amount of computing power to piece together answers to prompts. Because of these requirements, it is not currently feasible to run a solution on premises or make it available on a mobile device.

Stanford University built its own ChatGPT-style model based on technology from Meta, which allowed researchers to create and train a large language model with minimal resources.

As the technology progresses, more of it will run on local hardware or on a centralized server that an organization does not have to connect to a cloud data center. Companies will have more control over who gets access to the data, data retention policies, and data governance.

Stanford researchers make a new ChatGPT with less than $600 (stanforddaily.com)
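As a rough illustration of the on-premises direction, here is a minimal sketch using the Hugging Face transformers library to run a small open model entirely on local hardware. The gpt2 model is just a stand-in; an organization would substitute whatever licensed model it hosts:

```python
from transformers import pipeline

# Once the weights are downloaded, generation runs entirely on local
# hardware; no prompt or response ever leaves the machine.
generator = pipeline("text-generation", model="gpt2")  # stand-in for a locally hosted model

result = generator("Our data governance policy requires", max_new_tokens=40)
print(result[0]["generated_text"])
```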

Cyber Security Best Practices

Companies and users of ChatGPT should also keep in mind general cyber security best practices. As AI continues to proliferate through the workplace, specific privacy policies will be subject to change and new creative use cases will arise.

To stay ahead of the rapidly changing AI environment, companies will need to invest in training their employees regularly and follow cyber security best practices even if they are using on-premises or private segmented clouds.

  1. Regular Training: Conduct regular cybersecurity training sessions to keep employees updated on the latest threats and security practices.
  2. Access Control: Implement strict access control policies. Ensure that employees have access only to the information necessary for their roles.
  3. Phishing Awareness: Educate employees about phishing scams and how to identify and avoid suspicious emails or messages.
  4. Data Protection: Adhere to data protection regulations to ensure sensitive information is handled and stored securely. It’s crucial to comply with legislations like GDPR, CCPA, and any applicable federal regulations.
  5. Password Management: Implement robust password management practices. Encourage the use of strong, unique passwords for all accounts and consider the use of a secure password manager. Regularly changing passwords can also enhance security (a short generation sketch follows this list).
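To make item 5 concrete, generating a strong, unique password is easy to automate. Here is a minimal sketch using Python's standard-library secrets module; the 20-character length and full printable alphabet are arbitrary choices for this example:

```python
import secrets
import string

# Character pool: letters, digits, and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```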

Is it Ethical to Use ChatGPT for Work?

For companies that allow employees to access ChatGPT, we believe it is an excellent productivity tool. A number of AI-powered tools are making creative jobs easier and can greatly increase productivity or complement the creative process.

In our view, using ChatGPT for work is not only ethical, but will become a necessity in the near future.

This might take the form of internal AI tools set up to be more secure than publicly available ones, but competitors will be using AI to reduce costs, forcing most if not all businesses to do the same.

Likewise, employees are constantly competing for promotions, and those who embrace AI will be able to outcompete co-workers who avoid the new technology.

We use AI in the workplace on a daily basis. Whether content generated directly by AI counts as plagiarism is a philosophical question, but we often use it as a starting point for content creation and then rewrite it in our own words, adding our own original thoughts.

Is ChatGPT Dangerous?

Going beyond data security and compliance, there is an overall debate as to whether AI is safe for society.

Sam Altman, the CEO and one of the founders of OpenAI, recently testified before the United States Congress about the safety concerns surrounding ChatGPT.

It’s a fascinating video and viewpoint as governments around the world look to build a regulatory framework for controlling AI.

Many of Sam Altman’s viewpoints are well thought out, but we also have to wonder whether AI will become an arms race, where it won’t matter if one country puts a box around it while others do not.

5 Potential Ways ChatGPT Could Be Used by Cyber Criminals

As governments race to understand and control AI, it can still be used by people with less benevolent intentions. While we don’t think ChatGPT can directly be used to hack a system, it can be used in a number of ways to make hackers and cyber criminals more productive in their endeavors.

Here are five potential ways that hackers could misuse ChatGPT, a specific type of Large Language Model:

  1. Phishing: ChatGPT could be used to generate convincing phishing messages that trick people into revealing sensitive information, such as passwords or credit card numbers.
  2. Disinformation: By generating realistic and persuasive narratives, ChatGPT could be used to spread disinformation or fake news on a large scale, manipulating public opinion.
  3. Spam and Scams: Malicious actors could use ChatGPT to produce sophisticated spam emails or craft deceptive scam messages, making them harder to recognize and more effective.
  4. Deepfake Content Generation: ChatGPT could create convincing deepfake text that could be used in various harmful ways, including impersonation, spreading false rumors, or misrepresenting events.
  5. Impersonation: By mimicking the writing styles of specific individuals or entities, ChatGPT could be used for impersonation attacks, causing reputational damage or misleading readers.

These are just a few examples of how hackers could use ChatGPT, which means that following cyber security best practices will become increasingly important.

Conclusion

As ChatGPT and similar technology are integrated into solutions like Microsoft Word and Google Docs, it’s likely that their providers’ terms of service will shield them from liability for how they use the information you input into their products.

Many of these products are fundamental to hundreds of millions of people’s daily jobs and lives outside of work. Google, for example, drew scrutiny in 2018 for allowing third-party apps to access Gmail data.

Google still allows third-party apps scan your Gmail data (cnn.com)

ChatGPT and other similar solutions are going to change many aspects of people’s everyday lives, both in the workplace and outside of it. People should strive to understand the technology and begin using it as often as is practical.

Cyber security and data governance concerns are legitimate, and it will ultimately be up to the people entering information into AI systems to use their best judgment to avoid inadvertent data leaks.
