Op-ed: 'Confidential computing' as a solution to risks posed by LLMs
30 June 2023

Generative artificial intelligence and large language models (LLMs) present many opportunities, as well as risks, Ayal Yogev, cofounder and CEO of multi-cloud confidential computing platform Anjuna, writes in TechCrunch. Yogev said the solution to the "complex and serious security concerns" LLMs pose is confidential computing, which "protects data while in use and ensures code integrity." Through confidential computing, "data and IP are completely isolated from infrastructure owners and made only accessible to trusted applications running on trusted CPUs."