Meeting ISO 27001 & PCI-DSS in 2025: What the Compliance World Misses About AI Security

Why AI Security Is Quickly Becoming the Blind Spot in Compliance

In 2025, the pressure to meet standards like ISO 27001 and PCI DSS 4.0.1 is higher than ever. Organisations are racing to interpret new requirements, pass audits, and keep stakeholders reassured. Yet, many are missing a crucial piece: the way artificial intelligence (AI) changes the threat model for compliance, especially now that AI is woven into business-critical operations and payment systems.

While compliance teams track policy updates and tick off checklists, attackers exploit the unique, rapidly evolving vulnerabilities that AI introduces. This post shines a light on critical AI-related gaps in current compliance strategies, why legacy controls are no longer enough, and how businesses can upgrade their approach to both meet regulations and actually secure their environments.


PCI DSS 4.0.1: Where We Are and What’s Changed

March 31, 2025 was a landmark deadline. PCI DSS version 4.0.1 is now fully in effect, bringing in dozens of new requirements (47, in fact) that are now mandatory for all merchants and service providers handling payment card data. The emphasis is on flexibility, tailored implementation, and a more risk-led mindset. Notable additions include:

  • Sensitive Authentication Data (SAD) controls: Encryption, retention, and disposal are heavily prescribed, with a requirement that SAD and PAN data use different cryptographic keys.
  • Customised security controls: There’s greater recognition that one size doesn’t fit all; firms can choose alternative controls if they meet rigorous custom validation.
  • Continuous security testing: Regular vulnerability assessment, penetration testing, and network monitoring have been upgraded.
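The SAD/PAN key-separation requirement above can be illustrated with a minimal sketch. Here, distinct purpose-specific keys are derived from one master key using HMAC-SHA256 with different context labels, so SAD and PAN material never share a key. The labels and the in-memory master key are illustrative only; in production the master key would live in an HSM or KMS.

```python
import hmac
import hashlib

def derive_key(master_key: bytes, context: str) -> bytes:
    """Derive a purpose-specific key from a master key via HMAC-SHA256.

    Using a distinct context label per data class means SAD and PAN
    are protected by different cryptographic keys, as PCI DSS 4.0.1
    requires.
    """
    return hmac.new(master_key, context.encode("utf-8"), hashlib.sha256).digest()

master = b"example-master-key"  # illustrative; fetch from an HSM/KMS in practice
sad_key = derive_key(master, "sensitive-authentication-data")
pan_key = derive_key(master, "primary-account-number")

assert sad_key != pan_key  # independent keys per data class
```

The same pattern extends to rotating each data class's key independently, which simplifies demonstrating key separation to an assessor.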

But even with these updates, PCI DSS still largely assumes that threat surfaces look like they did a few years ago: fixed databases, human-accessed interfaces, static applications.

With AI tools handling billions of transactions, making credit decisions, and screening for fraud, new exposures emerge that these standards barely touch.


ISO 27001: Strong on Process, but Can It Keep Up?

ISO 27001 remains the gold standard for information security management systems (ISMS). The 2022 refresh introduced more explicit references to cloud security and supplier risk, but didn’t overhaul its approach for the reality of pervasive AI.

ISO 27001’s real strength is in its process discipline: risk assessments, asset registers, roles and responsibilities. Unfortunately, without a deep understanding of how AI models process, store, and act upon sensitive data, many businesses are flying blind on risk. ISO 27001 can help identify which AI assets fall in scope, but does not alone guarantee that “secure by design” translates to “secure in practice” for machine learning systems.



How AI Upends Classic Compliance

So what is it that compliance teams and auditors are missing as they review AI-centric environments?

1. Model Explainability and Transparency

Most regulatory texts now mention the need for “explainable AI”, but integrating it into audit and compliance work baffles many non-specialists. Key issues:

  • AI decisions often can’t be traced with traditional logs. When a model denies a transaction, what data influenced the choice? The lack of clear audit trails undermines incident investigations and can hide systemic bias.
  • For fraud and payment use cases, explainability isn’t a luxury; it’s a requirement for compliance with both data protection law and PCI DSS when disputes arise.
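One way to close the audit-trail gap described above is to log, for every model decision, both the inputs and each feature's contribution to the outcome. The sketch below shows a minimal JSON decision record; the field names and the attribution values (which in practice might come from a technique such as SHAP) are hypothetical.

```python
import json
import datetime

def log_decision(model_id, model_version, inputs, decision, attributions):
    """Build an auditable JSON record of a single model decision.

    'attributions' maps each input feature to its contribution to the
    outcome, giving investigators a trace of *why* a transaction was
    approved or denied.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "attributions": attributions,
    }
    return json.dumps(record, sort_keys=True)

entry = log_decision(
    "fraud-scorer", "2025.03.1",
    {"amount": 4210.0, "country_mismatch": True},
    "denied",
    {"amount": 0.31, "country_mismatch": 0.55},
)
```

Shipping these records to the same tamper-evident log store as your other audit events keeps model decisions inside the existing evidence chain an auditor already reviews.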

2. Dynamic (Not Static) Risk

The classic compliance “snapshot” (annual assessments, point-in-time pentests) struggles with AI models. These systems learn, update, and adapt without explicit releases. A model passed as secure last quarter may have drifted (that is, changed its behaviour because of new training data), exposing fresh risk that goes untested for months.

  • Example: A customer risk scoring AI, after an automated model update, starts leaking personal data in its decision output: a violation of PCI DSS and, likely, the UK GDPR.
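Drift of the kind described above can be caught between audits with simple distribution monitoring. The sketch below computes a Population Stability Index (PSI) over binned model scores; the baseline/current proportions are made up, and the 0.25 alert threshold is a common rule of thumb rather than a standard-mandated value.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned score distributions.

    Both inputs are bin proportions summing to 1. A common rule of
    thumb: PSI above 0.25 signals significant drift worth investigating.
    """
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # floor empty bins to avoid log/divide-by-zero
        a = max(a, 1e-6)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # score distribution at last assessment
current = [0.05, 0.15, 0.30, 0.50]   # score distribution this week
drifted = psi(baseline, current) > 0.25  # True: flag for re-validation
```

Running a check like this on a schedule, and treating a breach of the threshold as a change event requiring re-validation, turns the annual “snapshot” into continuous assurance.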

3. Data Lineage and Provenance

PCI DSS and ISO 27001 expect clear, documented data flows. In AI ecosystems, data can traverse microservices, third-party APIs, and internal systems, with transformations, filtering, or enrichment steps along the way.

  • Can you prove the full journey of a transaction, including model decisions and all underlying datasets?
  • Most organisations can’t, making them non-compliant and at risk during audits or breaches.
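A tamper-evident lineage chain is one way to answer the "full journey" question above. In this sketch, each processing step hashes its own content together with the previous record's hash, so any retroactive edit is detectable; the step names and service identifiers are hypothetical.

```python
import hashlib
import json

def lineage_record(payload: dict, prev_hash: str) -> dict:
    """Append one step to a hash-linked data lineage chain.

    Each record's hash covers its payload plus the previous record's
    hash, so modifying any earlier step breaks every later link.
    """
    body = json.dumps(payload, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode("utf-8")).hexdigest()
    return {"payload": payload, "prev_hash": prev_hash, "hash": digest}

# A transaction's journey across microservices, enrichment, and scoring:
chain = [lineage_record({"step": "ingest", "source": "payments-api"}, "0" * 64)]
chain.append(lineage_record({"step": "enrich", "service": "geo-lookup"}, chain[-1]["hash"]))
chain.append(lineage_record({"step": "score", "model": "fraud-scorer:2025.03.1"}, chain[-1]["hash"]))
```

Presenting such a chain at audit time demonstrates exactly which datasets and model versions touched a given transaction.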


4. Built-In Tools and Shadow AI

New compliance risks come from the unmonitored use of powerful cloud-native tools. For example:

  • Snapshot management, API tokens, or lifecycle rules in cloud environments like AWS S3 or Azure Blob can be exploited to delete logs or backup data, removing the very records you need for compliance.
  • Shadow AI (unregistered or unsanctioned models used by staff) introduces invisible risk, as these don’t appear in asset registers or risk assessments, yet they can access sensitive data or make impactful decisions.
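Shadow AI of the kind described above can be surfaced by reconciling what the asset register says against what is actually observed in production (for example, model names appearing in API gateway logs). The model names below are invented for illustration.

```python
def find_shadow_ai(asset_register: set[str], observed_models: set[str]) -> set[str]:
    """Return models seen in production but absent from the ISMS asset register."""
    return observed_models - asset_register

registered = {"fraud-scorer", "credit-risk-v2"}                       # from the ISMS
observed = {"fraud-scorer", "credit-risk-v2", "hr-cv-screener"}       # from gateway logs
shadow = find_shadow_ai(registered, observed)  # {"hr-cv-screener"}
```

Run as a scheduled reconciliation job, this turns shadow AI from an invisible risk into a finding that feeds straight back into the risk assessment.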

Filling the Gaps: Operational Strategies That Make a Difference

Build AI-Aware Governance into Your ISMS

  • Integrate model inventory, version control, and risk ratings into your asset management process.
  • Assign clear business ownership for every AI service, treating models as living, evolving systems, not just “black box” code.
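The two points above can be made concrete as a structured asset-register entry: each model carries a version, a named business owner, a risk rating, and the data classes it touches. The fields and values here are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelAsset:
    """One AI system as an entry in the ISMS asset register."""
    model_id: str
    version: str
    business_owner: str   # named individual accountable for the model
    risk_rating: str      # e.g. "high" for anything touching cardholder data
    data_classes: list = field(default_factory=list)

register = [
    ModelAsset("fraud-scorer", "2025.03.1", "Head of Payments", "high",
               ["PAN", "transaction-history"]),
]
```

Keeping this register in version control alongside model release notes gives auditors a single, diffable record of how each AI asset has changed.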

Audit Trails and Documentation for Models

  • Extend PCI DSS and ISO 27001’s documentation requirements to cover:
      • Model architecture and training sources
      • Versioning history and update logs (including who authorised changes)
      • Result explainability, including example inputs and outputs
  • Test whether you could explain, to an external auditor, why a model made a specific decision in a live scenario.

Continuous Monitoring, Augmented for AI

  • Beyond traditional logging, monitor for:
      • Model drift: Is the model’s performance or behaviour suddenly changing?
      • Adversarial inputs: Is someone testing your AI for weaknesses?
      • Unauthorised retraining: Are new datasets or scripts being introduced outside of process?
  • Embrace both policy (role-based access, least privilege) and technical controls (automated alerts on critical AI events).
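One of the technical controls above, alerting on unauthorised retraining, can be sketched as a simple reconciliation between retraining events and approved change tickets. The event fields and ticket IDs are hypothetical placeholders for whatever your MLOps platform and change-management system actually emit.

```python
def unauthorised_retraining(events: list[dict], approved_changes: set[str]) -> list[dict]:
    """Flag retraining events that lack a matching approved change ticket."""
    return [
        e for e in events
        if e["type"] == "retrain" and e.get("change_id") not in approved_changes
    ]

events = [
    {"type": "retrain", "model": "fraud-scorer", "change_id": "CHG-1042"},
    {"type": "retrain", "model": "credit-risk-v2", "change_id": None},
]
alerts = unauthorised_retraining(events, {"CHG-1042"})  # flags the credit-risk-v2 event
```

Wiring the flagged events into your existing SIEM alerting keeps AI change control inside the monitoring process auditors already expect to see.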

Responding to Emerging Threats

  • Include AI-specific incident scenarios in your runbooks: think model poisoning, data extraction, or high-impact automation errors.
  • Make sure the incident response team can collaborate with data scientists and ML engineers for investigation.


Train for Tomorrow’s Audit, Today

  • Upskill auditors and risk teams on AI concepts, so they’re not “checking the box” on irrelevant controls.
  • Bring in external AI compliance experts for deep-dives, especially ahead of major recertification efforts.
  • Use the NIST AI Risk Management Framework (AI RMF) as a practical overlay alongside ISO 27001 and PCI DSS for holistic coverage.

Strategic Recommendations for 2025 and Beyond

  1. Integrate, Don’t Bolt On: Treat AI governance as a primary lens in your risk assessment; don’t just wrap extra policies around legacy frameworks.
  2. Automate and Orchestrate: Use tools that track model lineage, automate compliance checks, and provide real-time visibility into your AI environment.
  3. Close the Talent Gap: Build blended compliance and AI teams, not siloed functions. Run cross-discipline exercises.
  4. Think Beyond Compliance: Focusing on checklists alone won’t defend your data or reputation. Build a culture of proactive, intelligent AI security.
  5. Stay Engaged: Standards are evolving. Participate in PCI SSC, ISO, or NIST working groups where possible. Watch for pending legislation on AI explainability, transparency, and safety.

Ready for a Health Check?

As AI regulation and attacker sophistication both accelerate, traditional compliance can’t keep pace unless it adapts. At EJN Labs, we help UK and global clients bridge these new gaps: mapping AI risk, remediating control weaknesses, and ensuring your compliance effort truly covers your modern environment.

If you want to review your AI and data security strategy, or benchmark your compliance maturity, get in touch at ejnlabs.com, or email contact@ejnlabs.com.

Explore more about cyber security and how we can help you prepare for tomorrow’s challenges in our articles section.


By thinking proactively and treating AI security as a pillar of compliance, rather than an afterthought, you can ensure your business stays both certified and genuinely secure in 2025 and beyond.
