Op-ed: 'Confidential computing' as a solution to risks posed by LLMs
30 June 2023
Generative artificial intelligence and large language models present many opportunities as well as risks, Ayal Yogev, cofounder and CEO of multi-cloud confidential computing platform Anjuna, writes in TechCrunch. Yogev said the solution to the "complex and serious security concerns" LLMs pose is confidential computing, which "protects data while in use and ensures code integrity." With confidential computing, "data and IP are completely isolated from infrastructure owners and made only accessible to trusted applications running on trusted CPUs."