This article was originally published on TechDay

In today's rapidly evolving tech landscape, artificial intelligence (AI) holds immense potential to transform industries and revolutionise how we work and live. At the same time, powerful AI tools and apps, such as ChatGPT, are quickly changing the cybersecurity and privacy landscape.

From data breaches to ransomware attacks, privacy and cybersecurity threats are a growing challenge for businesses worldwide. As AI technology becomes increasingly prevalent, it is vital to examine the ethical ramifications and responsibilities associated with its development and application.

You may know that hackers and other malicious actors are already using artificial intelligence. Just as businesses harness the power of AI to drive innovation and efficiency, cybercriminals seek to exploit its capabilities for their own nefarious purposes. OpenAI itself sparked controversy when it officially released a public-facing version of ChatGPT.

While ChatGPT is a powerful and impressive conversational AI chatbot, it comes with pitfalls. From privacy concerns and security breaches to the undisclosed data it was trained on, these tools raise issues that need to be addressed.

For instance, in March 2023, a security breach meant some ChatGPT users saw conversation headings in their sidebars that didn't belong to them. Inadvertent sharing of users' chat histories is a significant concern for organisations, and it is particularly problematic given the tool's widespread usage and popularity.

According to Reuters, ChatGPT had about 100 million monthly active users by the end of January 2023. Although OpenAI patched the bug that caused the data breach, an Italian data privacy regulator demanded that the company stop processing the data of Italian users.

ChatGPT's model was also trained on the collective writing of people worldwide, refined through reinforcement learning from human feedback and reward models that rank the best responses. Unfortunately, this means the same biases that plague the real world can surface in the model. In some cases, ChatGPT has produced answers that discriminate based on race, gender, and minority status, which OpenAI is working to mitigate.

As more organisations and their people experiment with ChatGPT's unprecedented capabilities, Generative AI is dominating headlines, sparking new debates about its potential benefits, risks, and broader implications for business and society.

Today, we explore the realm of Generative AI and its revolutionary offspring, ChatGPT. While the possibilities seem endless, it's crucial to address growing concerns around data protection and privacy. In collaboration with cybersecurity expert Netskope, we unpack the complexities of this dynamic landscape.

Moving Past the Dystopian View of AI Technology
Popular science-fiction films depict a dystopian view of AI technology. However, real-world applications of AI are more nuanced and complex.  

A good example is ChatGPT. Amassing around 100 million monthly active users just two months after launch, ChatGPT is easily the fastest-growing consumer application in history.

AI has emerged as a powerful, revolutionary force that is reshaping industries, streamlining processes, and redefining what's possible.

However, as with any breakthrough technology, it's important to understand the potential risks that come with it. Individuals and businesses should not use the tool without fully understanding its data privacy and ownership implications.

While some organisations made the early decision to block ChatGPT and similar tools, we believe there's a place for Generative AI in the workplace. Its ability to unlock creativity and efficiency will transform the workforce and the performance uplift organisations gain from their people, just as the PC, mobile devices, and the Internet did. Contrary to some of the myths circulating, you can use Generative AI technology like ChatGPT while keeping your company's critical data secure.

Does ChatGPT Collect Data? 

ChatGPT collects IP addresses, browser types, and browser settings, and uses cookies to track browsing activity. OpenAI may share this data with its vendors or third parties without notifying you.

Another important privacy concern is that users are automatically opted in to having their prompt data used to train OpenAI's models. Although ChatGPT offers an opt-out in its terms and conditions for data collection and management, it also notes that opting out may limit the type of answers users receive.

Essentially, users who choose not to contribute their personal data to model training cannot retain their chat history. This may discourage users from opting out, since losing access to previous responses and conversations considerably impacts the user experience.

Before using ChatGPT or other Generative AI technologies, businesses should carefully vet the service, considering its security protocols, data-handling practices, and what happens to collected data in future.

Business Threats and Challenges  

Like all new technologies, Generative AI introduces a layer of complexity relating to threats and data protection that organisations must proactively address, putting adequate security measures in place to protect users and data:

  • Accidental data loss: ChatGPT is an AI language model that generates responses based on user input. This creates a risk of accidental disclosure of confidential information when users don't understand the implications of what they share.
  • Synthetic media and deepfakes: Creating realistic fake media content can lead to reputational damage, fraud, and disinformation.  
  • Intellectual Property (IP) theft: Generative AI can potentially be used to reverse-engineer proprietary algorithms or designs, leading to IP theft.  
  • Social engineering and advanced phishing attacks: AI models can create highly convincing phishing messages, making it challenging to differentiate legitimate communication from malicious attacks.  
  • Other risks and issues: Widespread use of Generative AI can erode trust in digital content, optimise and automate cyberattacks, and create new regulatory challenges.  
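One way organisations address the accidental data loss risk above is to screen prompts for sensitive content before they ever leave the network. As a minimal illustration only — the patterns and function name below are our own assumptions, not part of any Netskope product or the ChatGPT API — a pre-send filter might look like this:

```python
import re

# Hypothetical sensitive-data patterns; real DLP tooling uses far more
# sophisticated detection than these illustrative regexes.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder
    before the prompt is sent to an external AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact_prompt("Contact bob@example.com, my key is sk-abcdefgh12345678"))
```

A guardrail like this lets employees keep using Generative AI tools while reducing the chance that credentials or personal data end up in a third party's training set.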

Navigating the Sensitive Data Landscape  

Netskope Threat Labs analysed the posting behaviour of enterprise users over a four-week period. They found that source code, intellectual property, passwords and keys, and regulated data were among the most frequently posted types of sensitive information.

With hundreds of posts per 10,000 enterprise users, the potential for data exposure is evident. However, organisations can mitigate these risks and establish safe, effective guardrails for their users. Leading security expert Netskope provides the comprehensive protection and controls organisations need to enable the responsible use of Generative AI tools in the workplace.

The Privacy Predicament  

Privacy is a paramount concern in the realm of Generative AI. Users and organisations must exercise caution when sharing sensitive information and understand the limitations of data security provided by ChatGPT.  

Embracing the Power of AI while Upholding Privacy  

The Missing Link, in partnership with Netskope, recognises that prohibitive measures such as blocking AI tools only stifle innovation and creativity, leaving businesses behind the curve. Instead, we advocate for the safe and responsible use of AI tools, combining the right cultural orientation with modern data protection technology.

By fostering a culture of understanding and education, we empower businesses to leverage the benefits of Generative AI without compromising sensitive data.  

Let us embrace AI's potential while safeguarding privacy as an integral part of the innovation journey.  

Find out more.



Matt Dunn

Head of Automation