Oct 12, 2023 | The Hacker News | Data Security / Artificial Intelligence

ChatGPT has transformed the way businesses generate textual content, potentially delivering a quantum leap in productivity. However, Generative AI innovation also introduces a new dimension of data exposure risk, as employees inadvertently type or paste sensitive business data into ChatGPT or similar applications. Data Loss Prevention (DLP) solutions, the go-to answer for comparable risks, are ill-equipped for this one, since they focus on file-based data protection.

A new report by LayerX, “Browser Security Platform: Guard your Data from Exposure in ChatGPT” (Download here), sheds light on the challenges and risks of ungoverned ChatGPT usage. It paints a comprehensive picture of the potential hazards for businesses and then offers a potential solution: browser security platforms. Such platforms provide real-time monitoring and governance over web sessions, effectively safeguarding sensitive data.

ChatGPT Data Exposure: By the Numbers

  • Employee usage of GenAI apps has surged by 44% in the last three months.
  • GenAI apps, including ChatGPT, are accessed 131 times a day per 1,000 employees.
  • 6% of employees have pasted sensitive data into GenAI apps.

Types of Data at Risk

  • Sensitive/Internal Information
  • Source Code
  • Client Data
  • Regulated PII
  • Project Planning Files

Data Exposure Scenarios

  1. Unintentional Exposure: Employees may inadvertently paste sensitive data into ChatGPT.
  2. Malicious Insider: A rogue employee could exploit ChatGPT to exfiltrate data.
  3. Targeted Attacks: External adversaries could compromise endpoints and conduct ChatGPT-oriented reconnaissance.

Why File-Based DLP Solutions Are Inadequate

Traditional DLP solutions are designed to protect data at rest in files, not data typed or pasted into live web sessions, leaving them ineffective against the risks posed by ChatGPT.

3 Common Approaches to Mitigating Data Exposure Risks

  1. Blocking Access: Effective but unsustainable due to productivity loss.
  2. Employee Education: Addresses unintentional exposure but lacks enforcement mechanisms.
  3. Browser Security Platform: Monitors and governs user activity within ChatGPT, effectively mitigating risks without compromising productivity.

What Sets Browser Security Platforms Apart?

Browser security platforms offer real-time visibility and enforcement capabilities on live web sessions. They can monitor and govern all means by which users provide input to ChatGPT, offering a level of protection that traditional DLP solutions cannot match.
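As a rough illustration of how such input monitoring might work, the sketch below checks pasted text against a few regular-expression patterns for common sensitive-data shapes before it reaches a prompt box. The patterns and the `containsSensitiveData` helper are hypothetical assumptions for illustration, not any vendor's actual detection logic.

```javascript
// Illustrative patterns for data that should not reach a GenAI prompt.
// These are simplified assumptions, not production-grade detectors.
const SENSITIVE_PATTERNS = [
  /\b\d{3}-\d{2}-\d{4}\b/,                // US Social Security number format
  /\b(?:AKIA|ASIA)[0-9A-Z]{16}\b/,        // AWS access key ID shape
  /\b[\w.+-]+@[\w-]+\.[\w.]+\b/,          // email address (regulated PII)
  /-----BEGIN (?:RSA )?PRIVATE KEY-----/, // private key material
];

// Returns true if the text matches any pattern. A browser security
// extension would call this from a paste/input event handler and
// block or alert before the data reaches the ChatGPT prompt.
function containsSensitiveData(text) {
  return SENSITIVE_PATTERNS.some((re) => re.test(text));
}

// Example wiring (only meaningful inside a browser content script):
if (typeof document !== "undefined") {
  document.addEventListener("paste", (event) => {
    const pasted = event.clipboardData?.getData("text") ?? "";
    if (containsSensitiveData(pasted)) {
      event.preventDefault(); // block the paste
      console.warn("Paste blocked: possible sensitive data detected.");
    }
  });
}
```

Real platforms inspect far more than clipboard text (typed input, file uploads, extension traffic), but the principle is the same: intercept the input path inside the browser session itself, where file-based DLP has no visibility.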

A Three-Tiered Approach to Security

Browser security platforms offer three levels of protection:

  1. ChatGPT Access Control: Tailored for users who interact with highly confidential data, this level restricts access to ChatGPT.
  2. Action Governance in ChatGPT: This level focuses on monitoring and controlling data insertion actions like paste and fill, mitigating the risk of direct sensitive data exposure.
  3. Data Input Monitoring: The most granular level, it allows organizations to define specific data that should not be inserted into ChatGPT.

A browser security platform allows for a mix of blocking, alerting, and allowing actions across these three levels, enabling organizations to customize their data protection strategies.
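The mix of blocking, alerting, and allowing across the three tiers could be modeled as a simple policy lookup. Everything below, including the tier names, the action verbs, and the `evaluateAction` function, is a hypothetical sketch of how such a policy might be expressed, not a real product API.

```javascript
// Hypothetical policy sketch: maps a protection tier and an attempted
// user action to a verdict of "block", "alert", or "allow".
const POLICY = {
  // Tier 1: users handling highly confidential data get no ChatGPT access.
  accessControl: { open: "block", paste: "block", type: "block" },
  // Tier 2: govern risky insertion actions like paste/fill; allow typing.
  actionGovernance: { open: "allow", paste: "alert", type: "allow" },
  // Tier 3: allow actions, but inspect content against defined data rules.
  dataInputMonitoring: { open: "allow", paste: "allow", type: "allow" },
};

function evaluateAction(tier, action, matchesDataRule = false) {
  const verdict = POLICY[tier]?.[action] ?? "block"; // default-deny
  // At the most granular tier, a match on a defined data rule escalates
  // an otherwise allowed action to a block.
  if (tier === "dataInputMonitoring" && matchesDataRule) return "block";
  return verdict;
}
```

Under this framing, an organization could keep a finance team on access control while assigning engineering to action governance, tuning each tier's verdicts independently.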

Securing and Enabling ChatGPT

The browser security platform is the only solution today that can effectively guard against data exposure risks in ChatGPT, enabling organizations to harness the full potential of AI-driven text generators without compromising on data security. For more insights, download the report.


Source: thehackernews.com/