CRA Article 14: Embedded Vulnerability Reporting
CRA Article 14 vulnerability reporting: 24-hour, 72-hour, and 14-day deadlines explained, plus how to set up a minimum viable PSIRT for embedded teams.
Article 14 of the Cyber Resilience Act is the provision that will catch embedded product teams most off-guard. It's not the vulnerability-handling process itself (most firmware teams have some form of this, even if informal) — it's the mandatory external reporting chain with hard, legally-binding deadlines.
The reporting obligation kicks in on 11 September 2026, more than a year ahead of full CRA enforcement. That means companies that are otherwise not CRA-ready still need to have their ENISA notification process working by September 2026.
This post breaks down the full Article 14 mechanism, clarifies the regulatory interpretation of "actively exploited," and outlines a minimum viable PSIRT setup for small embedded teams.
The Article 14 Reporting Chain
Article 14 establishes a multi-stage reporting obligation for two distinct event types: actively exploited vulnerabilities and severe incidents.
Stage 1: 24-Hour Early Warning
Under Article 14(2)(a), upon becoming aware of an actively exploited vulnerability in their product, a manufacturer must submit an early warning to ENISA within 24 hours.
The early warning must contain:
- Identification of the product (name, version)
- Nature of the vulnerability (brief description)
- Whether the manufacturer is aware of any malicious actors exploiting it
- Whether corrective or mitigating measures are available
This is notification only—you don't need a patch ready. You need to notify.
Stage 2: 72-Hour Vulnerability Notification
Under Article 14(2)(b), within 72 hours of becoming aware, the manufacturer must submit a more detailed vulnerability notification to ENISA.
This must include:
- Product identifier
- Affected versions
- Nature of the vulnerability (CVSS-style severity, attack vector, conditions for exploitation)
- Status of corrective measures (patch available, in development, or workaround only)
- Planned timeline for patch release (if not yet available)
- Any mitigating factors that reduce real-world exploitability
Stage 3: 14-Day Final Report
Under Article 14(2)(c), the manufacturer must submit a final report to ENISA no later than 14 days after a corrective or mitigating measure becomes available. Note that, unlike the first two stages, this clock is anchored to the availability of a fix rather than to the moment of becoming aware.
The final report must include:
- Full technical description of the vulnerability
- CVE identifier (or request for one if not yet assigned)
- Root cause analysis
- Corrective measures taken or planned
- Vulnerability impact assessment
After the 14-day report, ENISA may request additional information and can share relevant information with national cybersecurity authorities (CSIRTs and market surveillance authorities).
The Severe Incident Track
Alongside the vulnerability reporting track, Article 14(3) creates a parallel requirement for severe incidents:
- 24-hour early warning to ENISA upon becoming aware of a severe incident that has an impact on the security of the product
- 72-hour incident notification with fuller details
- Final report within one month of submitting the 72-hour incident notification
A "severe incident" is defined in the CRA as an incident that negatively affects the ability of a product with digital elements to protect the availability, authenticity, integrity, or confidentiality of data, or that has led or is capable of leading to the introduction of malicious code.
For most embedded firmware teams, the vulnerability reporting track (not the incident track) is the primary obligation to manage.
What "Actively Exploited" Actually Means
This is the regulatory interpretation question that matters most for your triage process.
The CRA does not define "actively exploited" with precision in the text—this is an area where ENISA guidance and market surveillance practice will define the standard. Based on ENISA's published guidance (2025) and the regulation's intent, the working definition for compliance purposes is:
A vulnerability is actively exploited when:
- There is evidence that a threat actor has used the vulnerability against at least one system in the wild (not in a controlled research environment)
- OR ENISA or a national CSIRT has flagged the vulnerability as actively exploited in their databases
This is different from CVE "known exploited" standards in two important ways:
- Threat intelligence vs. public record: You don't need to wait for a CVE to appear on CISA's KEV list or similar databases. If your own threat intelligence, security researcher report, or customer incident report indicates exploitation in the wild, the 24-hour clock starts.
- Your product specifically vs. the class of vulnerability: If a library you use has a known-exploited CVE (e.g., in CISA KEV), but that CVE is in a code path not present in your firmware build, it's likely not "actively exploited in your product." However, if a CVE in a library you use has been exploited and the vulnerable code path is present in your firmware, the clock is running.
Practical implication: Your vulnerability triage process needs to determine within hours — not days — whether a newly disclosed CVE affects a code path that's actually present and exploitable in your firmware. This is one of the reasons VEX (Vulnerability Exploitability eXchange) documents are becoming important for CRA compliance. (See our post on SBOM and VEX for firmware.)
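As an illustration, here is what a minimal OpenVEX statement recording a "vulnerable code not present" determination might look like. The author, document ID, product identifier, and CVE number are all placeholders:

```json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/2026-0001",
  "author": "Example GmbH PSIRT",
  "timestamp": "2026-01-15T10:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2025-12345" },
      "products": [ { "@id": "pkg:generic/example-firmware@2.4.1" } ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_present"
    }
  ]
}
```

A statement like this doubles as the documented triage rationale that market surveillance authorities can review later.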
What Triggers the Clock
Article 14(2) says the clock starts when the manufacturer "becomes aware" of the vulnerability. This has a specific regulatory meaning:
- Receipt of a vulnerability report from a security researcher
- Internal discovery of a vulnerability via security testing or code review
- Notification from a component vendor (e.g., a vulnerability in a chipset SDK)
- Monitoring of CVE databases for software you ship
- Customer incident reports indicating exploitation
You cannot argue you "weren't aware" if:
- The vulnerability was published in a CVE database for a component you ship and you have no monitoring in place
- A security researcher notified your published security contact and you didn't read it
- Your vulnerability disclosure policy (VDP) inbox was unmonitored
This means passive monitoring of CVE databases for all components in your firmware is a compliance requirement, not just good practice.
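One way to automate that monitoring is the OSV.dev `/v1/query` API, which accepts a package name, ecosystem, and version. The sketch below uses a hypothetical component inventory and only builds the query payloads; actually sending them and diffing the results day-over-day is left to your CI job.

```python
import json
from urllib import request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

# Hypothetical firmware component inventory: (name, ecosystem, version).
# Ecosystem strings must match OSV.dev's naming (e.g. "PyPI", "Debian", "crates.io").
COMPONENTS = [
    ("zlib", "Debian", "1:1.2.13.dfsg-1"),
    ("requests", "PyPI", "2.31.0"),
]

def build_osv_queries(components):
    """Build one OSV /v1/query payload per component (offline; nothing is sent)."""
    return [
        {"package": {"name": name, "ecosystem": eco}, "version": version}
        for name, eco, version in components
    ]

def post_query(payload):
    """Send a single query to OSV.dev (requires network access)."""
    req = request.Request(OSV_QUERY_URL, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

queries = build_osv_queries(COMPONENTS)
```

Generating the component list from your SBOM rather than maintaining it by hand keeps the monitoring in sync with what you actually ship.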
The ENISA Single Reporting Platform
Article 16 mandates that ENISA establish and maintain a single reporting platform for CRA vulnerability notifications. As of early 2026, ENISA has published the technical specification for this platform and is in the process of onboarding manufacturers.
Key practical points:
- The platform will be available at a URL published by ENISA (check ENISA's SRP page for registration details)
- Notifications must be submitted via the platform, not via email to national CSIRTs
- ENISA will route relevant information to the appropriate national CSIRT based on the manufacturer's location and the affected member states
- API access will be available for automated submission (important for manufacturers with large vulnerability volumes)
Action item: Register your organisation on the ENISA reporting platform before September 2026. Don't wait until you have a vulnerability to report—the registration process takes time, and you don't want to be figuring out the platform mechanics during a 24-hour incident response window.
Article 14 and Coordinated Vulnerability Disclosure
Article 14 interacts with coordinated vulnerability disclosure (CVD) in ways that create tension for security researchers and manufacturers.
Article 14(8) requires manufacturers to inform impacted users of the affected product "without undue delay" after becoming aware of an actively exploited vulnerability, potentially before a public disclosure or a patch is available. This differs from standard CVD practice, where disclosure timing is coordinated between the researcher and the vendor.
Recital 97 provides some nuance: manufacturers should coordinate with ENISA when disclosure timing affects security. In practice:
- For vulnerabilities reported via CVD with an agreed embargo: the 24-hour ENISA notification can happen under embargo (ENISA is not required to make it public immediately)
- The 14-day final report triggers ENISA information-sharing with national CSIRTs, but not necessarily public disclosure
For manufacturers operating a bug bounty or VDP: Your programme rules should explicitly mention CRA reporting obligations. Security researchers need to understand that disclosures to you trigger regulatory timelines.
The CVD Policy Requirement (Annex I, Part II)
Separate from but related to Article 14, Annex I Part II requires manufacturers to have a coordinated vulnerability disclosure policy as part of the essential requirements. This is distinct from the Article 14 reporting obligation.
Your CVD policy must:
- Define a contact point for receiving vulnerability reports (e.g., security@yourcompany.com, published in security.txt)
- Describe the process for receiving, triaging, and handling reports
- Set expectations for response timelines and researcher communication
- Define your disclosure timeline and process
Minimum viable CVD policy for small embedded teams:
- Publish a security.txt file at /.well-known/security.txt pointing to your security contact
- Maintain a monitored security@yourcompany.com inbox
- Commit to acknowledging reports within 5 business days
- Define a 90-day default disclosure timeline (aligned with industry norm)
- Document the CVD policy on your product's security documentation page
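A minimal security.txt per RFC 9116 might look like this (the domain, dates, and URLs are placeholders; Contact and Expires are the required fields):

```txt
Contact: mailto:security@yourcompany.com
Expires: 2027-01-01T00:00:00Z
Policy: https://yourcompany.com/security/cvd-policy
Canonical: https://yourcompany.com/.well-known/security.txt
Preferred-Languages: en
```

Remember to refresh the Expires timestamp periodically; an expired file signals an unmaintained programme to researchers.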
The CVD policy must be published and accessible — it's part of the technical documentation reviewable by market surveillance authorities.
Setting Up a Minimum Viable PSIRT
PSIRT stands for Product Security Incident Response Team. For large companies, this is a dedicated team. For small embedded product companies, it can be one or two people with defined responsibilities and documented processes.
The CRA doesn't use the term "PSIRT," but the obligations in Article 14 and Annex I Part II effectively require PSIRT-equivalent capabilities.
Minimum Viable PSIRT for a 10–50 Person Embedded Team
Roles (can be combined):
- PSIRT Lead (1 person): Owns the vulnerability management process, makes triage decisions, signs off on ENISA notifications. This should be a senior engineer or security-aware engineering manager.
- PSIRT Analyst (1 person): Monitors CVE databases, triages incoming reports, prepares notification drafts. Can be a developer with security interest.
- Legal/Compliance contact: Notified for all ENISA submissions. Not necessarily an FTE; this can be an external counsel relationship.
Tooling (minimum):
- CVE monitoring: Subscribe to NVD/OSV feeds for all OS and library packages in your firmware. Tools like OSV-Scanner can run against your SBOM, ideally integrated into your CI pipeline.
- Ticketing: A dedicated queue (Jira security project, GitHub private security advisories, or similar) for vulnerability reports—separate from general bug tracking, with appropriate access controls.
- Runbooks: Written procedures for the 24-hour, 72-hour, and 14-day ENISA notifications. These should be checklists, not essays.
Process (minimum):
- Inbound monitoring: Daily CVE feed review for all components in shipping firmware
- Triage SLA: 48-hour triage decision for any new CVE affecting your firmware (affected or not, exploitable or not)
- Escalation trigger: Any CVE with CVSS ≥ 8.0 or any CVE marked exploited in CISA KEV or ENISA databases triggers immediate PSIRT Lead review
- ENISA notification trigger: Evidence of active exploitation → immediate 24-hour clock start
- Patch SLA: Critical/actively-exploited vulnerabilities: patch within 30 days. High severity: 90 days. Medium/Low: next scheduled release.
- Documentation: All triage decisions documented with rationale (needed for technical documentation file)
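The escalation and notification triggers above can be sketched as a simple decision function. This is a deliberate simplification of a real triage flow, and all names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class TriagedCVE:
    cve_id: str
    cvss: float
    in_kev: bool                     # marked exploited in CISA KEV or ENISA databases
    code_path_present: bool          # vulnerable code actually in the firmware build
    exploited_in_our_product: bool   # evidence of in-the-wild exploitation against our product

def triage_action(cve: TriagedCVE) -> str:
    """Map a triaged CVE to the next action per the process above (sketch)."""
    if not cve.code_path_present:
        return "record_not_affected"      # document the rationale, e.g. in a VEX statement
    if cve.exploited_in_our_product:
        return "start_24h_enisa_clock"    # Article 14(2)(a) early warning
    if cve.cvss >= 8.0 or cve.in_kev:
        return "escalate_to_psirt_lead"   # immediate human review
    return "schedule_fix_per_sla"         # patch per the severity SLA

# Example: a KEV-listed CVE in a present code path escalates immediately
action = triage_action(TriagedCVE("CVE-2025-0001", 4.0, True, True, False))
```

The point of encoding this is consistency: every CVE gets the same decision tree, and the resulting action is trivially logged for the technical documentation file.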
The End-of-Life Security Update Obligation
Article 13(8) and Annex I Part II require manufacturers to handle vulnerabilities effectively and deliver security updates throughout the product's support period, which must reflect the expected product lifetime. This obligation runs for the entire support period, not just until the next product ships.
For firmware teams, "deliver" means:
- Having an OTA update mechanism (or documented manual update procedure) — see our guide on CRA-compliant OTA firmware updates for implementation details
- Publishing security advisories with each security update
- Maintaining a record of all security updates delivered
The CRA does prescribe a floor: under Article 13(8), the support period must reflect the expected use time of the product and be at least five years, unless the product is expected to be in use for less than that. Market surveillance authorities will also compare your stated support period against the "expected product lifetime" for your product category. For consumer IoT, five years is both the legal baseline and the emerging market norm; for industrial products, 10+ years is expected.
Article 64: Fine Exposure
Article 64 sets the penalty regime. Violations of the Article 14 vulnerability reporting obligations fall under Article 64(2), the same tier as Annex I essential requirements violations:
- Maximum fine: €15 million or 2.5% of worldwide annual turnover, whichever is higher
This is the highest fine tier in the CRA. Failure to notify ENISA is a discoverable, binary fact—if you didn't submit the notification, there's no ambiguity. The lower fine tier under Article 64(3) — €10 million or 2% of turnover — applies to violations of other obligations such as those in Articles 18–23, 28, and 30–33, but not to Article 14.
The September 2026 reporting deadline gives you limited time. Start building your PSIRT capability now, register on the ENISA reporting platform as soon as onboarding opens for your organisation, and put your CVD policy in place this quarter.
The Stack Canary assessment tool evaluates your current vulnerability handling practices and identifies specific gaps against the Article 14 and Annex I requirements.
Based on Regulation EU 2024/2847, Article 14, Article 64, Annex I Part II, and ENISA vulnerability reporting guidance (2025). This does not constitute legal advice. Consult qualified legal counsel for compliance decisions.