Safeguarding our data in the age of generative AI

Jonathan Mepsted, Vice President – Middle East & Africa at Netskope, looks at how organisations can enable GenAI tools without compromising their overall security.

We have entered a period of unprecedented technological growth. Artificial intelligence (AI), especially Generative AI (GenAI), has become a powerful force for change that is already reshaping industries, enhancing human potential, and influencing how we interact with information and one another. GenAI applications such as ChatGPT have captured the public’s interest, including for business use. We should not underestimate GenAI’s potential for innovation, but we must also be aware of its security risks, which is why the conversation is quickly shifting towards how to safely enable GenAI tools in business environments.

As AI systems proliferate across a wide range of sectors and applications, they pose particular security issues that call for a proactive and flexible strategy from the security sector to protect against sensitive data leaks. In the era of AI and ChatGPT, cybersecurity is crucial, and the security sector needs to adapt and respond so that businesses are adequately protected. Netskope research found that for every 10,000 enterprise users, an organisation experiences approximately 183 incidents of sensitive data being posted to ChatGPT per month, with source code among the most common types of leaked data. Clearly, we cannot allow the unchecked use of these tools in business settings, however exciting, fast, and productive they may be.

Protecting data

One of the main concerns with AI-based technology is data privacy. GenAI works by learning from existing data and generating new, realistic data based on the characteristics of its training data. As at least one major brand learned the hard way, once source code is entered into a GenAI tool like ChatGPT, it can become publicly accessible: ChatGPT took information that was supposed to remain confidential and used it elsewhere.

Consider what might happen if other forms of sensitive data see similar exposure, whether personally identifiable information (PII) or other forms of intellectual property. Unauthorised access to sensitive data poses severe risks to organisations, potentially leading to financial losses. Luckily, modern data protection controls already offer a solution. Steve Foster, Netskope’s Head of Solutions Engineering, rightly said: “ChatGPT and other generative AI applications can seem like a new and daunting challenge for data protection, but actually, many of the same approaches that work for securing data in the cloud work just as well for securing data around generative AI systems.” Experts suggest that security teams need visibility: automated tools that continuously monitor which applications employees can access, what data they are uploading to those applications, and the context in which they intend to access and share that data.
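To illustrate what that kind of visibility might look like in practice, here is a minimal sketch of screening an outbound prompt for obvious sensitive patterns before it reaches a GenAI service. The pattern names and function are hypothetical, for illustration only; a real DLP engine combines far richer detection than simple regular expressions.

import re

# Illustrative patterns only; production DLP adds ML classification,
# file fingerprinting, and exact data matching on top of pattern rules.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)[\"'\s:=]+[A-Za-z0-9/+=_-]{16,}"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Please debug this: api_key = 'sk_live_4f9a8b7c6d5e4f3a2b1c'"
matches = screen_prompt(prompt)
if matches:
    print("Blocked upload to GenAI app; matched:", ", ".join(matches))
else:
    print("Prompt forwarded to GenAI app")

In practice, checks like this sit inline between the user’s browser and the GenAI service rather than inside application code, so employees keep the tool while the organisation keeps its data.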

Boost accountability

Another way organisations can safeguard their data while still permitting their employees to enjoy the convenience provided by these applications is to ensure transparency and accountability. By fostering a culture of continuous learning, adopting cutting-edge technologies, and prioritising ethics, privacy, and fairness, we can create a secure and trustworthy AI landscape that empowers innovation and protects against malicious intent.

Employees must be aware when they are interacting with AI systems, and companies should be clear about the limitations and potential biases of those systems. On a larger scale, governments around the world have been trying to regulate the generative AI industry, which is growing exponentially and at a speed that current procedures struggle to match. Italy was the first western country to ban ChatGPT, citing privacy concerns related to its data model; its data protection authority said it would investigate whether ChatGPT complied with the General Data Protection Regulation (GDPR).

What’s the solution?

All of this safeguarding leads to another obvious question: shouldn’t companies just block ChatGPT and other GenAI apps? Doing so risks falling behind the industry curve and wilfully ignoring the productivity these tools represent. If internet security has taught us anything over the last two decades, it’s that ‘allow or block’ decisions can’t be binary; what’s needed is context, weighing each action against security, performance, and business productivity.
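As a sketch of what a non-binary, context-aware decision could look like, consider the following illustrative policy logic. The application names, user groups, and verdicts here are hypothetical and do not represent any particular vendor’s engine.

from dataclasses import dataclass

@dataclass
class Request:
    app: str                  # e.g. "chatgpt"
    user_group: str           # e.g. "engineering", "marketing"
    contains_sensitive: bool  # result of a DLP scan on the content

def decide(request: Request) -> str:
    """Return a context-aware verdict: allow, coach (warn and log), or block."""
    if request.contains_sensitive:
        return "block"   # sensitive data never leaves, regardless of context
    if request.app == "chatgpt" and request.user_group == "engineering":
        return "coach"   # allow, but show a real-time reminder of policy
    return "allow"

print(decide(Request("chatgpt", "engineering", contains_sensitive=False)))  # coach

The point of the middle ‘coach’ verdict is that the same application can be safe in one context and risky in another; the decision follows the data and the user, not just the app.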

There is a solution to this: allowing companies to continue using ChatGPT-like tools while also protecting their proprietary data. Businesses can do this by utilising modern data protection capabilities that incorporate AI and machine learning (ML) into critical security processes, including the safe use of generative AI tools.

Several important capabilities distinguish fully modern tools from incomplete Data Loss Prevention (DLP) solutions. For example, modern DLP tools have the deep contextual awareness needed to identify, analyse, and protect both structured and unstructured data. With the help of automated ML data classification and Train Your Own Classifiers (TYOC) technology, these security solutions can automatically identify and categorise new data on a ‘train-and-forget’ basis, ensuring data protection adapts and applies well beyond what standard data identification can achieve.
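The ‘train your own classifier’ idea can be sketched in a few lines: given a handful of labelled example documents, a lightweight text classifier learns to flag new data that resembles the sensitive examples. The snippet below uses scikit-learn purely as an illustration of the concept; it is not Netskope’s TYOC implementation, and the training examples are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples; real training would use an organisation's own corpus.
texts = [
    "customer name, address and card number on file",   # sensitive
    "internal source code for the payments service",    # sensitive
    "lunch menu for the office cafeteria this week",    # benign
    "meeting room booking instructions for visitors",   # benign
]
labels = ["sensitive", "sensitive", "benign", "benign"]

# Train once ('train-and-forget'), then classify new data automatically.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["spreadsheet of customer card numbers"]))
# likely ['sensitive'], given the overlap with the sensitive examples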

Modern Tools

Additionally, in modern DLP tools, AI-based threat protection provides unparalleled speed and exceptional results in detecting a wide range of threats, including polymorphic malware, novel phishing web domains, zero-day threats, and malicious web content. AI’s uses also extend to balancing security with network performance. AI-driven SD-WAN capabilities, for example, help defenders proactively monitor the network, provide predictive insights, simplify management, minimise support tickets, and troubleshoot devices remotely, resulting in improved network performance and user experience.

The emergence of generative AI represents a milestone in human history. As in other previous computing milestones with vast potential, cybersecurity safeguards are not optional. The dynamic and evolving nature of AI introduces new attack surfaces, novel vulnerabilities, and unprecedented threats that necessitate an agile and proactive response from the security industry.

Together, with the right mindset coupled with the right investments in technology, we can forge a future where AI and humanity coexist symbiotically, unleashing the true potential of this transformative technology while safeguarding what matters most: ourselves, our data, and our shared values. Only through a collective effort and a forward-thinking approach can we ensure that AI remains a force for good, driving progress and prosperity while protecting our digital world.
