Laptop screen showing ChatGPT. Image: Adobe Stock

Netskope’s new application of its security tools includes data analysis of generative AI inputs and real-time user engagement features such as policy and risk coaching on the use of ChatGPT. It not only monitors the data that users feed to generative AI models and other large language models such as Google Bard and Jasper; it can also block those inputs if they include sensitive data or code.
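
Netskope hasn’t published the internals of that screening step, but the general pattern is straightforward: inspect a prompt before it leaves the enterprise boundary and block it if it matches sensitive-data detectors. The sketch below is a minimal Python illustration under that assumption; the pattern names and regexes are invented for the example and are far cruder than a commercial DLP engine’s classifiers.

    import re

    # Illustrative detectors only; a real DLP engine layers ML classifiers,
    # fingerprinting and exact-match dictionaries on top of pattern matching.
    SENSITIVE_PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    }

    def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
        """Return (allowed, matched_rules) for a prompt bound for an LLM."""
        matches = [name for name, pattern in SENSITIVE_PATTERNS.items()
                   if pattern.search(prompt)]
        return (not matches, matches)

    allowed, hits = screen_prompt("Debug this: -----BEGIN RSA PRIVATE KEY-----")
    if not allowed:
        print(f"Blocked: prompt matched {hits}")  # Blocked: prompt matched ['private_key']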

The new suite of capabilities is aimed at ensuring that employees, whether on premises or remote, use ChatGPT and other generative AI applications in a way that doesn’t compromise enterprise data, according to the company.

Netskope said its data showed that:

  • Roughly 10% of enterprise organizations are actively blocking ChatGPT use by teams.
  • One percent of enterprise employees actively use ChatGPT daily.
  • Each user submits, on average, eight ChatGPT prompts per day.
  • ChatGPT usage is growing 25% monthly in enterprises.

According to research by data security firm Cyberhaven, at least 10.8% of company employees have tried using ChatGPT in the workplace, and 11% of the data employees paste into ChatGPT is confidential.

Zero-trust approach to protecting data fed to AI

James Robinson, deputy chief information security officer at Netskope, said the company’s generative AI solution includes its Security Service Edge with Cloud XD, Netskope’s zero-trust engine for data and threat protection across apps, cloud services and web traffic, which also enables adaptive policy controls.

“With deep analysis of traffic, not just at the domain level, we can see when the user is requesting a login, or uploading and downloading data. Because of that, you get deep visibility; you can set up actions and safely enable services for users,” he said.
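
That “deeper than the domain” claim is the key architectural point: a proxy that only sees domains can allow or deny chat.openai.com wholesale, but one that inspects traffic can distinguish a login from a bulk upload and apply different actions to each. Netskope hasn’t documented how Cloud XD does this; a toy version of activity classification, using invented heuristics and field names, might look like:

    from dataclasses import dataclass

    @dataclass
    class HttpEvent:
        # Fields a forward proxy could extract after TLS inspection.
        domain: str
        method: str
        path: str
        request_bytes: int
        response_bytes: int

    def classify_activity(event: HttpEvent) -> str:
        """Map a raw HTTP event to a user activity rather than a domain verdict."""
        if "login" in event.path or "auth" in event.path:
            return "login"
        if event.method == "POST" and event.request_bytes > 10_000:
            return "upload"
        if event.method == "GET" and event.response_bytes > 10_000:
            return "download"
        return "browse"

    event = HttpEvent("chat.openai.com", "POST", "/backend-api/conversation",
                      request_bytes=25_000, response_bytes=2_000)
    print(classify_activity(event))  # upload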

According to Netskope, its generative AI access control and visibility features include:

  • Visibility for IT into specific ChatGPT usage and trends within the organization, via the industry’s broadest software-as-a-service discovery (a dynamic database of 60,000+ applications) and advanced analytics dashboards.
  • The company’s Cloud Confidence Index, which classifies new generative AI applications and evaluates their risks.
  • Granular context and instance awareness via the company’s Cloud XD analytics, which discerns access levels and data flows through application accounts.
  • Visibility through a web category for generative AI domains, by which IT teams can configure access control and real-time protection policies and manage traffic (a simplified policy sketch follows this list).
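
To make the list above concrete, the sketch below shows what a per-app policy for a generative AI web category could look like, with a coach/block action and a DLP profile attached to each entry. The field names and structure are hypothetical, not Netskope’s actual configuration schema.

    # Hypothetical policy table for a "Generative AI" web category.
    GENAI_POLICY = [
        {"app": "ChatGPT", "risk": "medium", "action": "coach",
         "dlp_profile": "pii-and-source-code"},
        {"app": "Bard", "risk": "medium", "action": "coach",
         "dlp_profile": "pii-and-source-code"},
    ]

    def action_for(app_name: str) -> str:
        """Look up the access decision for an app, denying by default."""
        for rule in GENAI_POLICY:
            if rule["app"] == app_name:
                return rule["action"]
        return "block"  # unclassified generative AI apps are blocked outright

    print(action_for("ChatGPT"))       # coach
    print(action_for("UnknownAIApp"))  # block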

Managing access to LLMs isn’t a binary problem

Netskope’s new capabilities, part of its Intelligent Security Service Edge platform, reflect a growing awareness in the cybersecurity community that access to these new AI tools is not a binary “use” or “don’t use” decision.

“The main players, including our competitors, will all gravitate toward this,” said Robinson. “But it’s a granular problem, because it’s not a binary world anymore. Whether it’s members of your staff or other tech or business teams, people will use ChatGPT or other tools, so they need access, or they will find ways, for good or bad.”

“But I think most people are still in the binary mode of thinking,” he added, noting that there is a tendency to reach for firewalls as the tool of choice to manage osmosis of data into and out of an organization. “As security leaders, we should not just say ‘yes’ or ‘no.’ Rather, we should focus more on ‘know’ because this is a granular problem. To do that, you need a comprehensive program.”

SEE: Companies are spending more on AI, cybersecurity (TechRepublic)

Real-time user engagement: popup coaching, warnings and alerts

Robinson said the user experience includes a real-time “visual coaching” message popup to warn users about data security policies and the potential exposure of sensitive data.

“In this case, you will see a popup window if you are beginning to log in to a generative AI model that might, for example, remind you of policies around use of these tools, just when you are going onto the website,” said Robinson. He said the Netskope platform would also use a DLP engine to block uploads of sensitive information to the LLM, such as personally identifiable information, credentials, financials or other data covered by policy (Figure A).

Figure A

A Netskope popup window warns the user that they will not be allowed to upload sensitive data to the LLM. Image: Netskope

“This could include code, if they are trying to use AI to do a code review,” added Robinson, who explained that Cloud XD is applied here as well.
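
A bare-bones version of that decision, covering both PII and source code, could be expressed as below. The detectors are deliberately crude stand-ins; the point is the three-way allow/coach/block outcome rather than the matching logic.

    import re

    # Stand-in detectors; real DLP profiles are far more sophisticated.
    PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")                  # U.S. SSN shape
    CODE_PATTERN = re.compile(r"(\bdef |\bclass |\bimport |#include)")  # source-code hints

    def inspect_upload(text: str) -> str:
        """Decide whether to block, coach or allow a pending LLM upload."""
        if PII_PATTERN.search(text):
            return "block"  # hard stop on personally identifiable information
        if CODE_PATTERN.search(text):
            return "coach"  # warn the user before they paste source code
        return "allow"

    print(inspect_upload("My SSN is 123-45-6789"))  # block
    print(inspect_upload("def review(pr): ..."))    # coach
    print(inspect_upload("Summarize this memo."))   # allow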

SEE: Salesforce puts generative AI into Tableau (TechRepublic)

The platform’s interactive features include queries that ask users to clarify their use of AI when they take an action that violates policy or runs counter to the system’s recommendations. Robinson said this feedback helps security teams evolve their data policies around the use of chatbots.

“As a security team, I’m not able to go to every business user and ask why they are uploading certain data, but if I can bring this intelligence back, I might discern that we need to change or alter our policy engine,” he said.
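
A sketch of that feedback loop, with invented names: log each user’s stated justification alongside the flagged action, then review the log to decide whether the policy itself should change.

    import json, time

    JUSTIFICATION_LOG = "genai_justifications.jsonl"  # hypothetical sink

    def record_justification(user: str, app: str, action: str, reason: str) -> None:
        """Append a user's stated reason for a flagged action, for later policy review."""
        entry = {"ts": time.time(), "user": user, "app": app,
                 "action": action, "reason": reason}
        with open(JUSTIFICATION_LOG, "a") as fh:
            fh.write(json.dumps(entry) + "\n")

    # Aggregating these entries can surface legitimate use cases that
    # warrant a policy change rather than a blanket block.
    record_justification("jdoe", "ChatGPT", "upload-blocked",
                         "summarizing public documentation")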