Sherlocked Security – AI Act / NIS2 Readiness
Technical and Governance Readiness for EU AI Regulation and NIS2 Directive
1. Statement of Work (SOW)
Service Name: AI Act & NIS2 Compliance Readiness
Client Type: AI/ML Solution Providers, Critical Infrastructure Entities, Cloud Platforms, Digital Service Providers, Data Processors
Service Model: Risk Classification, Governance Assessment, Technical Safeguards Validation, Regulatory Mapping, Remediation Roadmap
Compliance Coverage: EU AI Act, NIS2 Directive, ISO/IEC 42001, ISO 27001/27036, ENISA Guidelines, GDPR, OWASP LLM Top 10
Assessment Types:
- AI System Risk Classification (Minimal, Limited, High, Unacceptable)
- NIS2 Security & Incident Response Control Mapping
- AI Governance & Lifecycle Accountability Review
- Technical & Organizational Security Measures Assessment
- Third-Party Risk & Supply Chain Exposure Review
2. Our Approach
[AI/NIS2 Asset Discovery] → [Regulatory Scope Analysis] → [Risk Classification & Gap Assessment] → [Controls Review (Org + Tech)] → [Remediation Strategy] → [Governance & Audit Framework Setup] → [Continuous Compliance Monitoring]
3. Methodology
[AI Model & System Inventory] → [Risk Categorization (EU AI Act)] → [NIS2 Critical Entity Review] → [Security Policy & Procedure Review] → [LLM/ML Attack Surface Assessment] → [Third-Party Contract Analysis] → [Audit & Readiness Report Generation]
4. Deliverables to the Client
- AI Risk Categorization Matrix (per EU AI Act)
- NIS2 Compliance Gap Report & Maturity Score
- AI System Governance & Lifecycle Policy Review
- Cybersecurity Safeguards and Response Plan (NIS2)
- Third-Party & Supply Chain Risk Analysis
- Technical Controls Mapping (LLM-specific, Cloud, Infra)
- Governance Documentation Kit (AI Logs, DPIAs, ROPAs)
- Continuous Monitoring & Incident Notification Strategy
5. What We Need from You (Client Requirements)
- Inventory of AI systems and models (LLMs, classification, recommendation engines, etc.)
- Details of ML pipelines, training data governance, and risk assessments
- Organizational cybersecurity policy and NIS2 risk management documentation
- List of critical services, suppliers, and IT/OT dependencies
- Incident response plans and notification procedures
- Third-party contract SLAs and data sharing protocols
- NDA and Scope Confirmation
6. Tools & Technology Stack
- AI/ML Model Tracking: MLflow, Weights & Biases, Amazon SageMaker
- Security Frameworks: ISO/IEC 27001, 42001, NIST AI RMF, ENISA Guidelines
- Vulnerability Assessment: Trivy, Clair, OWASP AI Security Testing Tools
- Governance Tools: TrustArc, OneTrust, OpenGPTGuard, Confluence
- Threat Detection: Falco, CrowdStrike, SentinelOne, Wazuh
- DPIA & ROPA Generators: PrivIQ, GDPR365, Microsoft Compliance Manager
7. Engagement Lifecycle
1. Kickoff & Inventory Collection → 2. AI Act Risk Categorization → 3. NIS2 Criticality Assessment → 4. Governance & Technical Gap Review → 5. Risk & Exposure Report → 6. Remediation & Advisory Support → 7. Continuous Monitoring & Compliance Enablement
8. Why Sherlocked Security?
| Feature | Sherlocked Advantage |
|---|---|
| Dual-Focused Compliance Expertise | Combined AI regulation and cybersecurity directive coverage |
| Risk-Based Control Mapping | Technical and organizational safeguards aligned to specific risk class |
| LLM/AI-Specific Security Expertise | Tailored assessments for large language models and ML attack surfaces |
| Audit-Ready Governance Documentation | End-to-end policy kits for AI logs, audit trails, and incident handling |
| Third-Party Risk Mitigation Strategy | Deep analysis of supplier contracts, MLaaS dependencies, and data flows |
9. Real-World Case Studies
AI Risk Categorization for Predictive Healthcare Startup
Issue: The company had built a diagnostic AI without classifying its risk tier under the EU AI Act.
Impact: Unintentional exposure to regulatory penalties for an unclassified high-risk use.
Fix: Conducted a risk assessment, updated documentation, enforced model explainability, and published transparency reports.
Outcome: The startup aligned with the EU AI Act's high-risk requirements, enabling further scaling in the European market.
NIS2 Assessment for a Critical Infrastructure Provider
Issue: The energy company lacked formal cybersecurity governance per NIS2 requirements.
Impact: Exposure to fines, breaches, and regulatory non-conformance.
Fix: Performed a gap analysis, developed incident notification playbooks, and implemented monitoring for key assets.
Outcome: Achieved NIS2 technical and procedural compliance ahead of regulatory deadlines.
10. SOP – Standard Operating Procedure
- AI Inventory & Risk Categorization (AI Act)
  - Identify all AI systems in use or in development.
  - Classify by risk (minimal, limited, high, unacceptable) based on application context (e.g., biometrics, critical infrastructure, employment).
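The classification step above can be sketched as a rule-based triage over each system's use contexts. The context keywords and tier mapping below are illustrative assumptions, not an official taxonomy; the EU AI Act's annexes (e.g., Annex III for high-risk uses) remain the authoritative source.

```python
# Illustrative rule-based risk-tier triage for an AI system inventory.
# The context categories below are simplified examples; the EU AI Act's
# annexes are the authoritative classification source.

# Use contexts grouped by (simplified) severity.
UNACCEPTABLE = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK = {"biometric_identification", "critical_infrastructure",
             "employment_screening", "medical_diagnosis"}
LIMITED_RISK = {"chatbot", "content_generation"}  # transparency duties apply

def classify_ai_system(contexts: set[str]) -> str:
    """Return the highest applicable risk tier for a system's use contexts."""
    if contexts & UNACCEPTABLE:
        return "unacceptable"
    if contexts & HIGH_RISK:
        return "high"
    if contexts & LIMITED_RISK:
        return "limited"
    return "minimal"

# Hypothetical inventory entries, as collected in the step above.
inventory = {
    "resume-ranker": {"employment_screening"},
    "support-bot": {"chatbot"},
    "demand-forecaster": {"sales_forecasting"},
}
tiers = {name: classify_ai_system(ctx) for name, ctx in inventory.items()}
print(tiers)
```

A triage like this only pre-sorts the inventory; each "high" or "unacceptable" hit still needs a human legal review against the Act's actual annex wording.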
- NIS2 Scope & Critical Asset Assessment
  - Define critical services and infrastructure covered under NIS2.
  - Perform risk evaluation across digital services, cloud platforms, and dependencies.
- Governance & Policy Review
  - Review policies for AI explainability, human oversight, and transparency.
  - Ensure incident response procedures exist for both AI failure and cyberattacks.
  - Verify that ML lifecycle governance and risk logs (model changes, data updates) are maintained.
- Security Control Mapping & Gap Identification
  - Map security controls to NIS2 expectations (e.g., vulnerability handling, authentication, logging).
  - Identify missing or weak security controls in the ML lifecycle and infrastructure.
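The mapping step can be prototyped as a set comparison between implemented controls and an expected baseline. The control identifiers below are hypothetical shorthand for illustration, not an official NIS2 control catalogue; a real mapping should be derived from the directive's risk-management measures and national transpositions.

```python
# Illustrative gap check against a hypothetical, simplified set of
# NIS2-aligned control expectations (identifiers are our own shorthand).

NIS2_EXPECTED = {
    "vulnerability_handling": "Documented vulnerability disclosure and patching",
    "mfa": "Multi-factor authentication for privileged access",
    "logging": "Centralised security event logging",
    "incident_response": "Tested incident response and notification plan",
    "supply_chain": "Supplier security assessment process",
}

# Controls found in place during the review (example data).
implemented = {"mfa", "logging"}

gaps = {cid: desc for cid, desc in NIS2_EXPECTED.items() if cid not in implemented}
coverage = len(implemented & NIS2_EXPECTED.keys()) / len(NIS2_EXPECTED)

for cid, desc in sorted(gaps.items()):
    print(f"GAP {cid}: {desc}")
print(f"coverage: {coverage:.0%}")
```

The resulting gap list feeds directly into the remediation plan; the coverage ratio is a rough maturity indicator, not a compliance verdict.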
- Third-Party & Supply Chain Risk Evaluation
  - Review contracts with AI vendors and infrastructure providers.
  - Verify SLAs for incident notification, data localization, and regulatory support.
- Remediation Planning & Documentation
  - Develop AI risk statements, DPIAs, and regulatory declarations.
  - Implement technical remediations (e.g., input validation, access controls, logging).
  - Build an audit trail of actions taken.
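One way to build the audit trail mentioned above is a hash-chained, append-only log, where each entry commits to its predecessor so later tampering is detectable. This is a minimal sketch; a production system would add trusted timestamps, signatures, and durable storage.

```python
# Illustrative hash-chained audit trail: each entry embeds the previous
# entry's SHA-256 hash, so editing any past entry breaks verification.
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(trail: list[dict], action: str, actor: str) -> None:
    prev_hash = trail[-1]["hash"] if trail else GENESIS
    body = {"action": action, "actor": actor, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    trail.append({**body, "hash": digest})

def verify(trail: list[dict]) -> bool:
    prev = GENESIS
    for entry in trail:
        body = {k: entry[k] for k in ("action", "actor", "prev")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

trail: list[dict] = []
append_entry(trail, "DPIA approved for resume-ranker", "compliance-lead")
append_entry(trail, "Input validation added to inference API", "ml-engineer")
print(verify(trail))   # intact chain
trail[0]["action"] = "tampered"
print(verify(trail))   # tampering breaks the chain
```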
- Post-Assessment Monitoring & Advisory
  - Design and deploy continuous compliance monitoring dashboards.
  - Update model governance and cybersecurity safeguards regularly.
  - Assist with regulator engagements and ongoing reporting.
11. AI Act & NIS2 Readiness Checklist
1. AI Risk Classification & Transparency
- Inventory all deployed and in-development AI systems
- Classify according to EU AI Act criteria (minimal, limited, high-risk, or unacceptable)
- Document model purpose, intended audience, and potential impact
- Provide human oversight mechanisms and fail-safes
- Publish algorithm transparency and decision-logic summaries
2. AI Governance & Lifecycle Controls
- Define roles for AI accountability (data scientist, reviewer, compliance lead)
- Maintain model lifecycle logs and risk registers
- Enforce versioning and traceability of training datasets
- Integrate fairness, bias mitigation, and explainability techniques
- Retain access logs and model input/output audit trails
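Dataset traceability and role accountability from the checklist above can be captured in a minimal lifecycle record. The field names below are assumptions for illustration, not a standard schema.

```python
# Illustrative model lifecycle record tying a model version to a training
# dataset fingerprint and the accountable roles. Field names are our own
# assumptions, not a regulatory or industry-standard schema.
from dataclasses import dataclass, asdict
import hashlib

@dataclass(frozen=True)
class ModelRecord:
    model_name: str
    version: str
    training_data_sha256: str  # fingerprint of the dataset snapshot used
    accountable_owner: str     # e.g., compliance lead
    reviewer: str
    bias_checks_passed: bool

def fingerprint(data: bytes) -> str:
    """Hash a serialized dataset snapshot for traceability."""
    return hashlib.sha256(data).hexdigest()

snapshot = b"example serialized training dataset snapshot"
record = ModelRecord(
    model_name="resume-ranker",
    version="2.3.0",
    training_data_sha256=fingerprint(snapshot),
    accountable_owner="compliance-lead",
    reviewer="ml-reviewer",
    bias_checks_passed=True,
)

# A registry keyed by (name, version) gives the versioning and traceability
# the checklist calls for; in practice this would live in an ML metadata store.
registry = {(record.model_name, record.version): asdict(record)}
```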
3. NIS2 Cybersecurity Controls
- Implement patch management, asset discovery, and vulnerability scans
- Apply identity/access management (IAM), MFA, and segmentation
- Maintain incident detection and rapid response plans
- Train staff on NIS2-related security roles and obligations
- Monitor for log anomalies, endpoint compromise, and OT/IT asset changes
4. Third-Party & Supply Chain Assurance
- Maintain a live inventory of third-party services, APIs, and dependencies
- Define SLAs for data protection and vulnerability notification
- Validate cloud provider conformance with NIS2 cybersecurity standards
- Ensure MLaaS providers comply with AI risk disclosures and audits
5. Compliance Documentation & Audit Readiness
- Prepare AI Act declarations and EU-required conformity documentation
- Maintain DPIAs, ROPAs, and incident logs
- Store evidence of model testing, bias mitigation, and failover processes
- Document cybersecurity readiness metrics for NIS2 audits
6. Continuous Monitoring & Reporting
- Track compliance metrics with real-time dashboards
- Set alerts for AI drift, model performance degradation, and new vulnerabilities
- Perform regular self-assessments aligned with NIS2 and AI Act revisions
- Establish notification workflows for security or model failures
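Drift alerting like the above is often implemented with the Population Stability Index (PSI) over binned feature distributions. The 0.10/0.25 thresholds below are common rules of thumb, not regulatory values, and the bin proportions are example data.

```python
# Illustrative drift alert using the Population Stability Index (PSI)
# between a reference distribution and a live one (pre-binned proportions,
# same bin edges assumed for both).
import math

def psi(expected: list[float], actual: list[float]) -> float:
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # clamp to avoid log(0) on empty bins
        a = max(a, 1e-6)
        total += (a - e) * math.log(a / e)
    return total

reference = [0.25, 0.25, 0.25, 0.25]  # distribution at model validation time
live = [0.10, 0.20, 0.30, 0.40]       # distribution observed in production

score = psi(reference, live)
if score >= 0.25:
    print(f"ALERT: significant drift (PSI={score:.3f})")
elif score >= 0.10:
    print(f"WARN: moderate drift (PSI={score:.3f})")
else:
    print(f"OK (PSI={score:.3f})")
```

A check like this can run on a schedule and feed the notification workflows above, alongside performance-degradation and vulnerability alerts.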