By: Rajat Kohli, Partner at Zinnov
There is something to be learned from epic fantasy sagas like Harry Potter: every few years, a gifted wizard appears who doesn’t yet know their own potential and has the power to do great things. But left untamed, without guidance or direction, they can cause real harm, wreaking havoc of massive proportions. In the real world today, that wizard is AI.
President Biden’s AI executive order (EO) is an attempt to give powerful stakeholders not only technological direction, but also longevity of impact through self-sustaining machinery with mechanisms for reliability, equity, and accountability. Aimed at balancing AI’s innovation potential against its risks, this order will set the tone for future AI regulations.
Impact on Business and Workflows
The EO approaches AI regulation from multiple angles. Provisions like disclosure of development processes, model weight ownership, and the results of “red-teaming” and safety testing for dual-use foundation models clearly demonstrate the aim to bolster transparency. Additionally, the directive to identify existing standards and practices relating to the authentication, labelling, testing, and detection of synthetic content, and to develop guidance around techniques like watermarking, is a good first step towards mitigating problems like bias, hallucination, and misuse of data.
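To make the idea of authenticating synthetic content concrete, the sketch below attaches a keyed signature to generated text so a provider could later verify its origin. This is a hypothetical, metadata-level illustration only, not a robust statistical watermark of the kind the EO’s guidance contemplates; the key and tag format are invented for the example.

```python
import hashlib
import hmac

# Hypothetical provider-held signing key (invented for illustration).
SECRET_KEY = b"provider-signing-key"

def sign_content(text: str) -> str:
    """Attach an HMAC tag so the provider can later verify the content's origin."""
    tag = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return f"{text}\n[ai-generated:{tag}]"

def verify_content(signed: str) -> bool:
    """Check that the trailing tag matches the body; tampering breaks the match."""
    body, _, footer = signed.rpartition("\n[ai-generated:")
    if not footer.endswith("]"):
        return False
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(footer[:-1], expected)

signed = sign_content("A generated paragraph.")
print(verify_content(signed))        # True for untampered content
print(verify_content(signed + "!"))  # False once the content is altered
```

A real scheme would embed the signal in the content itself so it survives copy-paste, but even this simple form shows why standardized tag formats and key management matter for detection to work across providers.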
As a result of the order’s many provisions, testing of language models against multiple frameworks to ensure compliance will see a boost. Typically, software integration and algorithm testing are outsourced to system integrators (SIs) like TCS, Infosys, Wipro, among others. Hence, these players are likely to come up with dedicated solutions and toolkits for such workloads.
Another area that can see a surge is LM-Ops (language model operations) tooling within generative AI. Prompts made to tools like ChatGPT must adhere to content safety regulations and need to be flagged when there is an issue like bias or harmful language. Prompt optimization is therefore a critical area, and because of generative AI’s rapid development, the new role of prompt engineer is gaining importance by the day.
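As a minimal sketch of the kind of prompt-safety gate an LM-Ops pipeline might run before a prompt reaches a model: the category names and patterns below are invented for illustration, and production systems use trained classifiers rather than keyword lists.

```python
import re

# Hypothetical safety categories and patterns (illustrative only).
FLAG_PATTERNS = {
    "harmful_language": re.compile(r"\b(attack|exploit|harm)\b", re.IGNORECASE),
    "bias_probe": re.compile(r"\bwhich (race|gender) is (better|smarter)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the safety categories the prompt triggers; an empty list means it passes."""
    return [name for name, pattern in FLAG_PATTERNS.items() if pattern.search(prompt)]

print(screen_prompt("How do I exploit this system?"))  # ['harmful_language']
print(screen_prompt("Summarize this article for me"))  # [] — prompt passes
```

The value of such a gate is less the matching logic than the audit trail: every flagged prompt becomes evidence a company can show when demonstrating compliance.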
Similarly, data annotation and data labelling are also likely to get a boost. Transparency in the development and use of AI requires clean data sets – the quality of the output is only as good as the data the model is trained on. Hence, technical capabilities that are precursors to developing an AI model are key. For example, Google used Snorkel AI to replace 100K+ hand-annotated labels in critical ML pipelines for text classification, leading to a 52% performance improvement.
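A hand-rolled sketch of programmatic labeling in the spirit of Snorkel-style weak supervision (not the actual Snorkel API) shows how heuristic labeling functions can stand in for hand annotation: several noisy rules vote on each example, and a simple majority produces the label. The spam/ham heuristics are invented for illustration.

```python
from collections import Counter

SPAM, HAM, ABSTAIN = 1, 0, -1

def lf_contains_link(text):      # heuristic: links often indicate spam
    return SPAM if "http" in text else ABSTAIN

def lf_starts_with_greeting(text):  # heuristic: greetings suggest legitimate mail
    return HAM if text.lower().startswith(("hi", "hello")) else ABSTAIN

def lf_all_caps(text):           # heuristic: shouting suggests spam
    return SPAM if text.isupper() else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_link, lf_starts_with_greeting, lf_all_caps]

def weak_label(text):
    """Majority vote over non-abstaining labeling functions; ABSTAIN if none fire."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS if lf(text) != ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

print(weak_label("CLICK http://deal.example NOW"))  # 1 (spam)
print(weak_label("Hello team, notes attached"))     # 0 (ham)
```

Real weak-supervision frameworks learn per-function accuracies instead of taking a flat majority, but the principle is the same: encode annotator knowledge as code once, then label at scale.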
With the EO’s aim to promote the safe, secure, and trustworthy development and use of AI, regulation takes center stage, shaping a future in which companies large and small can profit from AI while its unintended consequences are minimized.
Market Dynamics: How the AI Order Affects Players
All businesses that use AI will be impacted by the executive order, but the impact is not binary; there is nuance. It depends on a company’s technological investment in AI and the complexity of its workloads.
It’s a no-brainer that AI adoption requires large investments, and large enterprises are well-positioned to make them. They have the capital to undertake core AI development initiatives like building custom AI models the way Meta and Google did with LLaMA and Bard. Once the regulations come into effect, their offerings will need to comply with the set standards.
SMBs, on the other hand, may not have the monetary capacity to commit large sums to complex technology projects. This disadvantage is compounded by the fact that SMBs are a prime target for cyberattacks, and generative AI’s plethora of vulnerabilities only heightens those security concerns. For SMBs, simple workloads, such as deploying a customer support chatbot, are more feasible. Once the regulations are in effect, SMBs can integrate regulation-compliant products and offerings into their workflows and reap the benefits that AI brings. In parallel, they can build LM-Ops solutions and dedicated toolkits the way small-scale ISVs do, and expand their offerings.
The regulations coming out of the order are poised to drive change, be it in the pace of business cycles, the types of solutions and offerings, business workflows, or even the opening of new revenue streams.
AI-volution and the Future
There has been a steady shift in certain processes, particularly those concerning safety and the flagging of content. Traditionally, issues around safety and flagging disparities like bias were handled in retrospect, after AI applications were developed. While this approach does help identify discrepancies and feed learnings into future work, it doesn’t solve the problem at hand.
Responsible AI by design, as an approach, weaves in transparency, fairness, and accountability right from the design phase. It focuses on integrating responsible AI principles during the development of AI applications and foundation models, not after the fact. It means training algorithms in such a way that they adhere to the regulations from the start.
Anthropic is a good example of a company that’s scaling AI safely and responsibly with intentionality. It has instituted a responsible scaling policy (RSP) aimed at mitigating catastrophic risks, in tandem with other measures: mitigating bias, toxicity, and misinformation, protecting customer privacy, building robust and reliable systems, and using constitutional AI. In fact, its tool Claude 2.1 has made significant gains in honesty, with a 2x decrease in false statements compared to Claude 2.0, improving the overall accuracy of the tool for its users.
Both regulation and intentionality are crucial to ensure that AI evolves in the right direction. The EO uses a combination approach of guidelines and regulation, encouraging companies to develop responsible AI applications and models.
The future of AI is bright, but also uncertain. President Biden’s executive order lays a framework to steer this powerful tool in a direction that maximizes business and societal benefit while minimizing harm. Regulation and intentional development focused on responsibility are crucial to ensure AI evolves safely and for the greater good. There will always be risks, but if we act thoughtfully the benefits will far outweigh them.
About the Author
Rajat Kohli is a leader and industry expert currently operating in the capacity of a Partner for Zinnov’s Digital Transformation practice. He has worked extensively with clients across the globe on end-to-end facets of strategic business consulting and advisory engagements delivering enduring results.
He has been associated with Zinnov for the past 14 years, driving top- and bottom-line impact for Fortune 500 Global Technology Giants and SaaS Unicorns. He has led organizations through rapid turnarounds and disruptive transformations, helping them conceptualize and implement their Market Expansion, Sales Enablement, and Product Strategy.
Rajat has acquired domain experience and deep understanding of Digital Transformation, SMB, Cloud Infrastructure, Artificial Intelligence, Go-to-market (GTM) strategy, Market Opportunity Assessment, IT Outsourcing, Account, and Partner Strategy. He has been instrumental in successfully leading teams across client engagements in the Retail and Banking sector.
Rajat is also a regular contributor of thought leadership and strategic POVs, bringing his strategic thinking and well-honed industry and functional experience to the forefront.
Rajat can be found on LinkedIn https://www.linkedin.com/in/rajat-kohli-2aa2068/ and at our company website https://zinnov.com/
Source: www.cyberdefensemagazine.com