In their rush to get an edge over rivals with AI, many organisations are leaving themselves exposed to critical security gaps by measuring security through the frequency and severity of past incidents rather than reducing risk proactively, according to cybersecurity firm Tenable.
As many as a third of organisations have already suffered an AI-related breach, the company said today in a report that puts the blame on security capabilities that have not caught up with the pace of AI adoption.
Instead of prioritising proactive risk reduction and long-term resilience, some 43 per cent of organisations track security incident frequency and severity, metrics that only have value after a compromise has occurred, according to Tenable.
This “rearview mirror mindset”, the company argues, can provide an illusion of security. Its findings are from a study it commissioned and developed with the Cloud Security Alliance, which surveyed more than 1,000 IT and security professionals worldwide.
Focusing on the right issues also matters. While security teams concentrate on emerging “AI-native” risks such as model manipulation, most AI breaches stem from long-standing, preventable issues: exploited software vulnerabilities (21 per cent), insider threats (18 per cent), and misconfigured systems (16 per cent).
Organisations also reported an average of 2.17 cloud-related breaches in the last 18 months, with just 8 per cent considering them “severe.”
This gap indicates that many incidents may be downplayed, masking the real level of risk, especially since underlying causes such as misconfigured cloud services (33 per cent) and excessive permissions (31 per cent) are preventable.

Unfortunately, industry leaders are applying 21st-century technology to a 20th-century security mindset, said Liat Hayun, vice-president of product and research at Tenable.
“They are measuring the wrong things and worrying about futuristic AI threats while ignoring the foundational weaknesses that attackers are exploiting today,” she added. “This isn’t a technology problem; it’s a leadership and strategy issue.”
According to Tenable, leaders who rely on reactive metrics will face significant challenges, including a lack of visibility (28 per cent) and overwhelming complexity (27 per cent). Only 20 per cent of respondents focus on unified risk assessment and just 13 per cent on tool consolidation.
In a separate study, tech giant IBM also found that AI adoption is outpacing AI security and governance. A large proportion of organisations lacked AI controls and governance policies, it revealed.
Worryingly, the findings suggest that AI is already an easy, high-value target for threat actors. Thirteen per cent of organisations say their AI models or applications have been breached, while 8 per cent are unsure whether they have been breached.
Of those compromised, 97 per cent report not having AI access controls in place. As a result, 60 per cent of these AI-related security incidents led to compromised data and 31 per cent caused operational disruption.