How can generative AI help better protect our systems, and what is needed for the dream of systems with self-defense and self-healing to become a reality? What are the easiest quick wins, and what is the longer-term outlook?
It was a privilege to drive this discussion with an outstanding team of experts: Birat Niraula, who leads security for Google’s Enterprise networks; Clint Mazur, who leads Container platforms, DevOps, and automation at Cigna Evernorth; and Mark Rushing, who leads technology for Citi's Cyber operations.
Here are just a few valuable insights from the speakers:
- Start by adopting AI to augment human work. Low-risk, high-value use cases include:
– Augmenting the work of SOC analysts and threat hunters by providing additional data and broader coverage.
– Improving developer productivity via summaries of wiki pages, security guidelines, compliance checks, and recommendations on bug fixes.
– Improving testing via analysis of edge cases that developers are not thinking about.
- Self-defense and self-healing should be introduced gradually, with AI-enabled change emulation:
– Apply self-healing to systems that can recover through automation. Start by remediating cases that should never be happening, where the sickness is clearly worse than any risk from the medicine (see the sketch after this list).
– Use AI to better understand the operational aspects. Don't just auto-remediate: understand why things are happening, learn, and predict the impact of changes to production.
– Follow your change management processes even when things are automated, and have humans review high-risk changes (e.g. large-scale rollouts).
- Consider risk vs. reward:
– Consider the business problem first, then bring in the right technology.
– Don't just slap AI onto everything; apply it where it will bring the most value!
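To make the human-in-the-loop idea from the self-healing bullet a bit more concrete, here is a minimal Python sketch of what such a gate might look like. Everything in it (the Finding and Remediation classes, assess_risk, open_change_ticket, and so on) is a hypothetical illustration under my own assumptions, not any speaker's implementation or a specific product.

```python
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    LOW = "low"    # well understood, easily reversible (e.g. restart a stuck pod)
    HIGH = "high"  # broad blast radius (e.g. a large-scale rollout)


@dataclass
class Finding:
    """A condition that should never be happening, surfaced by monitoring."""
    system: str
    description: str


@dataclass
class Remediation:
    """A proposed fix, produced by an AI-assisted analysis step."""
    finding: Finding
    action: str
    rationale: str  # the "why" - understanding, not just auto-remediating
    risk: Risk


def assess_risk(remediation: Remediation) -> Risk:
    # Hypothetical policy: the risk label comes from the analysis step;
    # anything without a tested rollback would be treated as high risk.
    return remediation.risk


def open_change_ticket(remediation: Remediation) -> str:
    # Every change, automated or not, goes through change management.
    print(f"[change-mgmt] ticket opened for {remediation.finding.system}: {remediation.action}")
    return "CHG-0001"  # placeholder ticket id


def apply(remediation: Remediation, ticket: str) -> None:
    print(f"[auto-heal] {ticket}: applying '{remediation.action}' ({remediation.rationale})")


def queue_for_human_review(remediation: Remediation, ticket: str) -> None:
    print(f"[review] {ticket}: high-risk change held for a human approver")


def handle(remediation: Remediation) -> None:
    ticket = open_change_ticket(remediation)
    if assess_risk(remediation) is Risk.LOW:
        apply(remediation, ticket)                   # safe to self-heal
    else:
        queue_for_human_review(remediation, ticket)  # humans review high-risk changes


if __name__ == "__main__":
    stuck = Finding("payments-api", "pod crash-looping on a known bad config")
    fix = Remediation(stuck, "roll back config to last known-good version",
                      "config diff matches a previously seen failure", Risk.LOW)
    handle(fix)
```

The point of the sketch is the gate, not the plumbing: the change-management step and the human review path stay in place even when the remediation itself is automated.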
This was fun! More insights can be found at https://lnkd.in/gTyuvWg5