The inner workings of AI-powered search are more complex than those of plain-text search via Google when it comes to extracting answers from websites and databases. Fortanix is trying to build a security wall to protect both the search query and the extraction of data from AI systems.
Generative AI technologies will ultimately provide fine-grained responses based on associativity of information, and retrieve information from sources users may not be aware of, says Richard Searle, vice president of confidential computing at Fortanix.
“What we’re finding … is that there is a deeper focus happening in the AI domain around privacy, and consent permissioning of information. These are obviously the core cases,” Searle says.
Fortanix is effectively building a security wall around the way AI search queries are handled: it secures the prompts where users enter AI search queries, and that protection extends to the data retrieved from large language models (LLMs) and delivered to customers.
“We think there’s a significant market there. The partners that we’re working with within the AI space are talking about search to their customers already today,” Searle says.
Fortanix’s private AI search is pinned on confidential computing, which creates secure vaults that are accessible only to authorized parties via keys. The data is processed inside the vault and never leaves it.
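As a rough illustration of that pattern (a sketch of the general confidential-computing approach, not Fortanix's actual implementation), the Python snippet below shows a query encrypted on the client side so that only code passing an attestation check can obtain the decryption key. The attestation value and key-release function are hypothetical placeholders.

```python
# Sketch of the "secure vault" pattern: the query is encrypted before it leaves
# the client, and the data key is released only to code that passes an
# attestation check. The measurement value and key-release service are
# hypothetical placeholders, not Fortanix APIs.
from cryptography.fernet import Fernet

# Key held by the key-release service; it never leaves that service in a real
# deployment (the client would wrap data to the enclave's public key instead of
# sharing a symmetric key -- symmetric Fernet just keeps this sketch short).
DATA_KEY = Fernet.generate_key()

def release_key_if_attested(enclave_measurement: str) -> bytes:
    """Hypothetical key-release check: hand out the key only when the enclave's
    measurement matches the expected, trusted build."""
    EXPECTED_MEASUREMENT = "sha256:enclave-build-1234"  # assumed value
    if enclave_measurement != EXPECTED_MEASUREMENT:
        raise PermissionError("attestation failed; key withheld")
    return DATA_KEY

# Client side: encrypt the search query before it is sent anywhere.
encrypted_query = Fernet(DATA_KEY).encrypt(b"clinical trial results for drug X")

# Enclave side: only an attested enclave obtains the key and can decrypt,
# so the plaintext query exists only inside the "vault".
key = release_key_if_attested("sha256:enclave-build-1234")
plaintext_query = Fernet(key).decrypt(encrypted_query)
print(plaintext_query.decode())
```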
“Data is going to be subject to not only the data protection regulations that we have today, but also these emerging AI regulations,” Searle says.
Fortanix’s technology is a component in the emerging field of confidential AI, which extends confidential computing to AI workloads typically associated with GPUs and accelerators, said Mark Russinovich, chief technology officer for Microsoft Azure, during a panel discussion at the Open Confidential Computing Conference in March.
Protecting Vector Databases
Private search with AI relies on data extraction from knowledge graphs or vector databases, which can draw information from a wide range of sources, including static conventional databases.
At Nvidia’s GPU Technology Conference, the company’s CEO, Jensen Huang, described vector databases as a new style of database that takes structured or unstructured data and reindexes it by encoding the meaning of the data.
“Now this becomes an AI database, and that AI database in the future once you create it, you can talk to it,” Huang said.
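To make that concrete, the toy Python sketch below (not tied to any particular product) embeds documents and a query as vectors and retrieves the closest match by cosine similarity; the hash-based embed() function is a stand-in for a real embedding model.

```python
# Toy illustration of vector search: documents and the query are embedded as
# vectors, and retrieval is nearest-neighbor by cosine similarity. The embed()
# function is a stand-in for a real embedding model, not a production encoder.
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Deterministic pseudo-embedding derived from a hash; a trained model
    would instead map similar meanings to nearby vectors."""
    seed = int.from_bytes(hashlib.sha256(text.lower().encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

documents = [
    "Quarterly revenue report for the banking division",
    "Clinical trial outcomes for the new therapy",
    "Data center energy usage statistics",
]
index = np.stack([embed(d) for d in documents])   # the "vector database"

query_vec = embed("hospital treatment results")
scores = index @ query_vec                        # cosine similarity (unit vectors)
best = int(np.argmax(scores))
print(f"Closest document: {documents[best]!r} (score {scores[best]:.3f})")
```

With a real embedding model, the query about hospital treatment would land nearest the clinical-trial document; that semantic reindexing is what Huang means by being able to "talk to" the database.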
Fortanix’s goal is to ensure privacy for the search initiator, whether a human or a machine user, and to protect the privacy and integrity of the information that might be held within the vector embeddings.
“Confidential computing, I think has a very important role to play,” Fortanix’s Searle says.
Private Search Differs From Conventional Search
Confidential AI prevents the leakage of AI information, which helps meet regulatory requirements. The layer also anonymizes the information so that a user’s intent or identity is protected. That is the opposite of conventional search, where user information and motives drive Google Ads and analytics.
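One way to picture the anonymization side (an assumed pattern for illustration, not a description of Fortanix's product) is to replace the user identifier with a salted hash and strip obvious direct identifiers before a query is logged or passed onward, as in the Python sketch below.

```python
# Sketch of query anonymization before logging: the user ID becomes a salted,
# one-way pseudonym, and simple direct identifiers are scrubbed from the query
# text. This is an illustrative pattern, not Fortanix's implementation.
import hashlib
import os
import re

SALT = os.urandom(16)  # per-deployment secret; rotating it unlinks old pseudonyms

def pseudonymize_user(user_id: str) -> str:
    """Replace a user ID with a salted, irreversible pseudonym."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

def scrub_query(query: str) -> str:
    """Remove simple direct identifiers (emails, long numbers) from the query."""
    query = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[email]", query)
    query = re.sub(r"\b\d{7,}\b", "[number]", query)
    return query

record = {
    "user": pseudonymize_user("alice@example.com"),
    "query": scrub_query("account balance for alice@example.com, card 4111111111111111"),
}
print(record)  # no raw identity or card number reaches the log
```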
A higher level of AI privacy is a cornerstone for industries such as healthcare and banking, which face strict regulatory requirements.
“Imagine that you want to do search in a scientific domain — you might want to be able to use data from a different institution to the one you’re working in that has some specific expertise in a particular field. Showing the privacy of that data and the accuracy of this research is important,” Searle says.
State of Confidential AI
Confidential AI is an emerging concept, and search fits into that profile. Intel and AMD have chips with hooks for secure enclaves. Microsoft, Google, and Amazon are providing confidential computing virtual machines in their cloud services.
Companies can bring data sets together to train foundation models effectively, and these data sets are becoming high-value assets for top corporations, said Ian Buck, vice president and general manager of Nvidia’s hyperscale and high-performance computing division, during the panel discussion.
The IT industry is rapidly approaching the deployment of confidential computing in data centers, edge computing and PCs, but there’s a lot more to do with confidential AI, said Intel’s chief technology officer, Greg Lavender, during the panel.
“Privacy and safety and responsible AI are going to be big forcing functions for this as the government adopts AI technologies from the industry,” Lavender said.
Source: www.darkreading.com