Generative AI (artificial intelligence) has proven to be a game changer since the introduction of ChatGPT, DALL-E, Bard, Gemini, GitHub Copilot, and similar tools in 2022 and 2023 [1]. Most organizations are still working out their AI strategy, but the security, responsibility, and ethics of LLMs and their pipelines can't be ignored. Artificial intelligence has come a long way since its inception and now spans a broad spectrum of capabilities, from natural language processing and computer vision to decision-making and problem-solving. It has become a powerful tool for improving user experience, developing business processes, and delivering personalized solutions. Effective risk management strategies must therefore be implemented and evolved alongside AI-based solutions.

A successful AI deployment requires five critical stages:

  1. Data Collection: The process of gathering raw data from multiple sources (done by integrating those sources with the target system).
  2. Data Cleaning/Preparation: The process of cleaning the data before it enters the AI pipeline by removing duplicates and excluding unsupported formats, empty cells, and invalid entries that can cause technical issues (a minimal cleaning sketch follows this list).
  3. Model Development: The process of building models by training on large datasets, analyzing patterns within them, and making predictions without additional human intervention. An iterative, model-driven development (MDD) approach is generally followed here.
  4. Model Serving: The process of deploying machine learning (ML) models into the AI pipeline and integrating them into business applications. These model functions, typically exposed as APIs, are deployed at scale and used to perform tasks or make predictions on real-time or batch data (see the serving sketch after this list).
  5. Model Monitoring: The process of assessing the performance and efficacy of models against live data and tracking metrics related to model quality (e.g., latency, memory, uptime, precision, accuracy) along with data quality, model bias, prediction drift, and fairness (a drift-check sketch also follows).
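
A minimal sketch of the cleaning step (stage 2), using pandas; the column names and sample records are hypothetical:

```python
import pandas as pd

def clean_training_data(df: pd.DataFrame, required_columns: list) -> pd.DataFrame:
    """Basic hygiene before data enters the AI pipeline."""
    df = df[required_columns]                # keep only supported columns
    df = df.drop_duplicates()                # remove exact duplicate records
    df = df.dropna(subset=required_columns)  # drop rows with empty cells
    return df.reset_index(drop=True)

# Hypothetical raw data: one duplicate row and one empty cell.
raw = pd.DataFrame({"user_id": [1, 1, 2, 3], "score": [0.9, 0.9, None, 0.4]})
print(clean_training_data(raw, ["user_id", "score"]))
```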
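
For stage 4, here is a minimal serving sketch using Flask. The model file name ("model.pkl") and the request format are assumptions; a production deployment would add authentication, input validation, and scaling:

```python
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical artifact produced in the model development stage.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = [payload["features"]]   # a single row of numeric features
    prediction = model.predict(features)[0]
    return jsonify({"prediction": float(prediction)})

if __name__ == "__main__":
    app.run(port=8080)
```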
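
And for stage 5, a toy drift check that compares the mean of live prediction scores against a training-time baseline. The threshold and scores are illustrative; real monitoring tracks many more of the metrics listed above:

```python
from statistics import mean

def drift_alert(baseline_scores, live_scores, max_shift=0.1):
    """Flag drift when live prediction scores move away from the baseline."""
    return abs(mean(live_scores) - mean(baseline_scores)) > max_shift

baseline = [0.52, 0.48, 0.50, 0.51]   # recorded at training time
live = [0.71, 0.69, 0.73, 0.68]       # scores observed in production
print(drift_alert(baseline, live))    # True -> investigate the model
```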

While companies can use generative AI solutions to expedite AI model development, these solutions also pose enormous risks [3] to critical proprietary and business data. Data integrity and confidentiality are crucial, and the associated risks must be weighed before approving new AI initiatives. AI solutions can create serious malware risk and impact if the right practices aren't followed. The following types of attacks can compromise the integrity and reliability of data models:

  1. Data Pipeline attack – The pipeline, from data collection through model training, presents a large attack surface that can be exploited to gain access, modify data, or introduce malicious inputs, causing privacy violations.
  2. Data Poisoning attack – Inserting harmful or misleading data into training datasets to intentionally influence or manipulate how the model operates. It can also be carried out by modifying an existing dataset or deleting a portion of it (a digest-pinning sketch follows this list).
  3. Model Control attack – Malware takes broader control of the model's decision-making process, producing erroneous outputs and potentially significant harm, including loss of life. This primarily occurs when externally accessible AI models are intentionally manipulated (e.g., seizing control of an automated vehicle).
  4. Model Evasion attack – A real-time data manipulation assault, such as altering user inputs or device readings to change the AI's responses or actions.
  5. Model Inversion attack – A reverse-engineering attack that exploits model outputs to steal proprietary AI training data or personal information. For example, an inversion attack on a cancer-prediction model can be used to infer a person's medical history.
  6. Supply Chain attack – Attackers compromise third-party software components (e.g., open-source libraries or assets) used in model training, deployment, or the pipeline to insert malicious code and control model behavior. For example, 1,600 leaked Hugging Face API tokens [4] gave hackers access to the accounts of 723 organizations using the Hugging Face API in their model development supply chain (the digest-pinning sketch below applies here as well).
  7. Denial of Service (DoS) attack – Overloading AI systems with a flood of requests or inputs, degrading performance or denying service through resource exhaustion or downtime. Though it doesn't result in the theft or loss of critical information, it can cost the victim a great deal of time and money. Flooding services and crashing services are two popular methods (a rate-limiting sketch follows this list).
  8. Prompt attack – Attackers craft inputs that exploit security weaknesses in the large language models behind AI-driven solutions such as chatbots and virtual assistants, tricking the model into revealing confidential information. For example, a security flaw found in Bing Chat [5] was used to trick the model into spilling its secrets (an input-screening sketch follows this list).
  9. Unfairness and Bias risks – AI systems may produce unfair results or reinforce social prejudices, posing ethical, reputational, and legal issues. Given that AI solutions have the potential to revolutionize many industries and improve people's lives in countless ways, these biases can severely impact minorities, people of color, and users who are underrepresented in the training data. For example, a face detection solution may fail to recognize non-white faces if such users were absent from the training set (a per-group accuracy sketch follows this list).
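
A common safeguard against both data poisoning and supply chain tampering is to pin the cryptographic digest of every dataset and artifact at approval time and re-verify it before each training run. A minimal sketch, with hypothetical file names:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the digest when the dataset is approved...
approved = sha256_of(Path("train.csv"))
# ...and verify it again before every training run.
if sha256_of(Path("train.csv")) != approved:
    raise RuntimeError("train.csv changed: possible poisoning or tampering")
```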
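
To blunt request flooding, a rate limiter in front of the model endpoint can reject excess traffic before it exhausts resources. A minimal token-bucket sketch; the capacity and refill rate are illustrative, not recommendations:

```python
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Refill by elapsed time, then spend one token per request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=10, refill_per_sec=5.0)
if not bucket.allow():
    print("429 Too Many Requests")  # reject instead of exhausting the model
```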
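
Prompt attacks have no complete technical fix, but screening and delimiting untrusted input is a common first layer of defense. A deliberately simple deny-list sketch; the phrases are illustrative, and real defenses require far more than keyword matching:

```python
SUSPICIOUS_PHRASES = (
    "ignore previous instructions",
    "reveal your system prompt",
    "disregard the rules above",
)

def screen_prompt(user_input: str) -> str:
    """Reject obvious injection attempts and delimit the rest."""
    lowered = user_input.lower()
    if any(phrase in lowered for phrase in SUSPICIOUS_PHRASES):
        raise ValueError("Possible prompt injection detected")
    # Delimit untrusted text so the model can tell it apart from instructions.
    return f"User input (untrusted):\n---\n{user_input}\n---"
```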
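
Bias of this kind often surfaces as an accuracy gap between groups. A minimal sketch that computes per-group accuracy on synthetic records:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

# Synthetic labels: a large gap between groups warrants rebalancing the data.
sample = [("A", 1, 1), ("A", 0, 0), ("B", 1, 0), ("B", 1, 1)]
print(accuracy_by_group(sample))
```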

I would like to present the following recommendations to enhance the security of data models, the MLOps pipeline, and AI applications. These best practices provide security guardrails and asset monitoring while complying with regulations across the respective geographies. AI models will play a critical role in delivering competitive advantage to organizations, so AI process integrity and confidentiality must be maintained by securing the most important assets and formulating a multi-pronged approach to AI security.

Recommendations

  1. Zero Trust AI [6]: Access to models and data must be denied unless the user or application can prove its identity. Once identified, the user should be allowed to access only the required data for a limited period, yielding least-privilege access, rigorous authentication, and continuous monitoring. Under this "never trust, always verify" approach, AI models are continuously questioned and evaluated; vaults (secrets management), identity and access management (IAM), and multi-factor authentication (MFA) play a central role here (a least-privilege sketch follows this list).
  2. Artificial Intelligence Bill of Materials (AIBOM): Similar to a Software Bill of Materials (SBOM) but prepared exclusively for AI models, the AIBOM improves transparency, reproducibility, accountability, and ethical-AI considerations. It details the components comprising an AI system [7]: training data sources, pipelines, model development, training procedures, and operational performance, enabling governance and dependency assessment. A suggested AIBOM schema is referenced here [8], and an illustrative record follows this list.
  3. Data Supply Chain – Access to clean, comprehensive, and enriched structured and unstructured data is the critical building block for AI models. Enterprise AI pipeline and MLOps solutions supporting orchestration, CI/CD, ecosystem integration, monitoring, and observability are needed to automate and simplify machine learning (ML) workflows and deployments.
  4. Regulations and Compliance – "Data is the new oil," and each country [9] is implementing its own rules to safeguard its interests. Organizations must adhere to the AI data regulations and compliance requirements enforced in their respective regions, many of which call for a human-centered design approach. H.R. 5628 (Algorithmic Accountability Act), H.R. 3220 (DEEP FAKES Accountability Act), and the European Union's Artificial Intelligence Act are a few recent examples.
  5. Continuous Improvement and Enablement – With the continuous evolution of AI processes and models, securing the AI ecosystem is a journey. A concerted effort must be made to provide frequent cybersecurity training not only to the data scientists and engineers but also to the developers and operations teams building and supporting AI applications.
  6. Balanced Scorecard-based approach for CISOs – CISOs are now being invited into boardroom discussions to share their cybersecurity vision and align it with business priorities. A metrics-driven balanced scorecard solution ("How CISOs Can Take Advantage of the Balanced Scorecard Method") provides a holistic approach to protecting enterprise assets from malicious threats. A balanced scorecard-based cybersecurity strategy map can reduce business risk, increase productivity, enhance customer trust, and help enterprises grow without fear of a data breach.
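
As a sketch of the zero trust recommendation above, the snippet below denies a request unless the caller presents an unexpired token carrying the exact scope required. The token fields, scope names, and lifetime are hypothetical; issuance and verification would normally be handled by an IAM system and vault:

```python
import time
from dataclasses import dataclass

@dataclass
class AccessToken:
    subject: str          # who is calling, e.g., a service identity
    scopes: frozenset     # what the caller may do
    expires_at: float     # epoch seconds; keep lifetimes short

def authorize(token: AccessToken, required_scope: str) -> None:
    """Deny by default: expired or under-scoped tokens are rejected."""
    if time.time() >= token.expires_at:
        raise PermissionError("token expired: re-authenticate")
    if required_scope not in token.scopes:
        raise PermissionError(f"missing scope {required_scope!r}")

token = AccessToken("svc-reporting", frozenset({"model:predict"}),
                    time.time() + 900)   # 15-minute lifetime
authorize(token, "model:predict")        # allowed
# authorize(token, "model:train")        # raises: least privilege in action
```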
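
And here is an illustrative AIBOM record showing the kinds of fields described above. This is a hypothetical structure, not the schema from [8]:

```python
import json

# Every value below is a placeholder for illustration.
aibom = {
    "model_name": "credit-risk-classifier",
    "version": "1.4.2",
    "training_data_sources": [
        {"name": "loans-2023.parquet", "sha256": "<recorded digest>",
         "license": "internal"},
    ],
    "pipeline": {"framework": "scikit-learn", "ci_job": "train-and-eval"},
    "training_procedure": {"algorithm": "gradient_boosting", "seed": 42},
    "operational_performance": {"accuracy": 0.91, "bias_audit": "passed"},
}

print(json.dumps(aibom, indent=2))
```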

To summarize, it's critical to safeguard data and assets by compartmentalizing AI operations and adopting a metrics-driven approach. Striking a balance between harnessing AI's power and addressing its data security and ethical implications is crucial for a sustainable business solution.

References

[1] https://blogs.nvidia.com/blog/ai-security-steps/

[2] https://www.leewayhertz.com/ai-model-security/

[3] https://www.hpe.com/in/en/what-is/ai-security.html

[4] https://www.securityweek.com/major-organizations-using-hugging-face-ai-tools-put-at-risk-by-leaked-api-tokens/

[5] https://www.wired.com/story/chatgpt-prompt-injection-attack-security/

[6] https://www.computer.org/csdl/magazine/co/2022/02/09714079/1AZLiSNNvIk

[7] https://snyk.io/series/ai-security/ai-bill-of-materials-aibom/

[8] https://github.com/jasebell/ai-bill-of-materials

[9] https://www.techtarget.com/searchenterpriseai/feature/AI-regulation-What-businesses-need-to-know

About the Author

Arun Mamgai has more than 18 years of experience in cloud-native cybersecurity, application modernization, open-source secure supply chains, AI/machine learning (ML), and digital transformation (including balanced scorecards, data management, and digital marketing) and has worked with Fortune 1000 customers across industries. He has published many articles highlighting the use of generative AI for cybersecurity and securely developing modern cloud applications. He has been invited to speak at leading schools on topics such as digital transformation and application-level attacks in connected vehicles and has been a judge for one of the most prestigious awards in the technology sector. He has also mentored multiple start-ups and actively engages with a nonprofit institution that enables middle school girls to become future technology leaders.

Arun can be reached online at ([email protected] or https://www.linkedin.com/in/arun-mamgai-10656a4/)

