Tech Inside

Cloud-based AI prone to toxic combinations, leaves sensitive data vulnerable: report

Cloud and AI are undeniable game changers for businesses. However, both introduce complex cyber risks when combined, according to a recent risk report.

Tenable, an exposure management company, said in a release that cloud-based AI is prone to avoidable toxic combinations that leave sensitive AI data and models vulnerable to manipulation, tampering, and leakage.

The Tenable Cloud AI Risk Report 2025 highlights the current state of security risks in cloud AI development tools and frameworks, as well as in AI services offered by the three major cloud providers: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

Key findings from the report include:

  • Cloud AI workloads aren’t immune to vulnerabilities: Approximately 70% of cloud AI workloads contain at least one unremediated vulnerability. In particular, Tenable Research found CVE-2023-38545—a critical curl vulnerability—in 30% of cloud AI workloads.
  • Jenga-style cloud misconfigurations exist in managed AI services: 77% of organizations have the overprivileged default Compute Engine service account configured in Google Vertex AI Notebooks, putting all services built on that default account at risk.
  • AI training data is susceptible to data poisoning, threatening to skew model results: 14% of organizations using Amazon Bedrock do not explicitly block public access to at least one AI training bucket, and 5% have at least one overly permissive bucket.
  • Amazon SageMaker notebook instances grant root access by default: As a result, 91% of Amazon SageMaker users have at least one notebook that, if compromised, could grant unauthorized access and allow modification of every file on it.
Photo: Tenable
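The Bedrock finding above turns on whether an S3 training bucket "explicitly blocks public access." As an illustrative sketch (not from the report), the check below shows what that means in practice: S3's Public Access Block feature exposes four flags, and a bucket is fully locked down only when all four are enabled. The helper function name is hypothetical.

```python
# The four flags of S3's Public Access Block configuration. A training
# bucket is only "explicitly blocked" when every one of them is enabled.
REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def public_access_fully_blocked(config: dict) -> bool:
    """Return True only if every Public Access Block flag is set to True."""
    return all(config.get(flag) is True for flag in REQUIRED_FLAGS)

# A bucket missing even one flag counts as not explicitly blocked:
fully_blocked = {flag: True for flag in REQUIRED_FLAGS}
partial = {"BlockPublicAcls": True, "IgnorePublicAcls": True}

print(public_access_fully_blocked(fully_blocked))  # True
print(public_access_fully_blocked(partial))        # False
```

In a real audit these flag values would come from the bucket's actual Public Access Block settings; the point of the sketch is simply that a partial configuration, like the 14% of organizations Tenable flags, still leaves the bucket exposed.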

“When we talk about AI usage in the cloud, more than sensitive data is on the line,” said Liat Hayun, VP of Research and Product Management, Cloud Security at Tenable. “If a threat actor manipulates the data or AI model, there can be catastrophic long-term consequences, such as compromised data integrity, compromised security of critical systems and degradation of customer trust.”

She added, “Cloud security measures must evolve to meet the new challenges of AI and find the delicate balance between protecting against complex attacks on AI data and enabling organizations to achieve responsible AI innovation.”
