Cloud providers play a vital role in the rising use and deployment of AI systems. Generative AI foundation models in general get better when they’re trained on larger sets of data, and cloud providers let organizations store and process more data, as well as serve apps at scale that use that data.
As with any emerging technology, it’s crucial to address the unique threats that target AI while also addressing the risks facing any cloud app. We offer our recommendations in Best Practices for Securely Deploying AI on Google Cloud, a new research report published today.
At the heart of our analysis is how we drive requirements and recommendations for securing AI workloads on Google Cloud. We explain the security capabilities that Google provides, and the steps that we advise customers to take. We review essential security domains, address model-specific concerns including prompt injection, and outline proactive measures for future-proofing AI governanceand resilience.
“As the world focuses on the potential of AI — and governments and industry work on a regulatory approach to ensure AI is safe and secure — we believe that AI represents an inflection point for digital security,” we wrote in the report.