The NIS2 Incident Reporting Framework: Step-by-Step Guide

Who should read this: Incident response teams, CISOs, compliance officers, legal teams, and anyone responsible for incident reporting and regulatory notification.

One of NIS2's most concrete obligations is incident reporting. When something goes wrong--a cyberattack, a data breach, a system failure--you must notify your regulator. The notification deadlines are tight: an early warning within 24 hours and a fuller notification within 72 hours (24 hours for trust service providers). There is no flexibility. Miss a deadline, and you face sanctions.

Article 23 lays out the framework. This guide walks through the reporting requirements, timelines, what you must report, and how to build the capability to meet these obligations.

What Is a "Significant Incident"?

You do not report every incident. You report only "significant incidents" as defined in Article 23(3).

An incident is significant if:

(a) It has caused or is capable of causing severe operational disruption of the services or financial loss for the entity concerned, or

(b) It has affected or is capable of affecting other natural or legal persons by causing considerable material or non-material damage.

In practical terms, a significant incident is one that materially disrupts your services, causes substantial financial loss, or harms other people or organisations. A single infected computer that is quickly isolated does not meet this threshold. A ransomware attack that encrypts your critical systems and brings them down for hours or days does.

The phrase "capable of" is important. You do not wait to assess actual impact. If you suspect an incident could cause severe disruption if left unaddressed, it is significant. This means you must assess incidents quickly and err on the side of reporting.

Article 23(11) empowers the Commission to adopt implementing acts defining significance more specifically for particular entity types. For example, for cloud providers or DNS operators, significance might be defined more precisely (e.g., a DNS outage affecting X percent of traffic, or cloud provider downtime affecting X customers). If you are a DNS, cloud, data centre, or CDN provider, check the implementing acts for your sector for precise significance thresholds.
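The two limbs of the Article 23(3) significance test can be encoded as a simple triage aid. The sketch below is illustrative only: the class and field names are our own, and real classification decisions belong with your incident responders and legal team, not a boolean function.

```python
from dataclasses import dataclass

@dataclass
class IncidentAssessment:
    """Hypothetical triage record; field names are illustrative, not from the Directive."""
    severe_disruption_possible: bool   # Art. 23(3)(a): caused or capable of causing severe operational disruption
    financial_loss_possible: bool      # Art. 23(3)(a): caused or capable of causing financial loss for the entity
    third_party_damage_possible: bool  # Art. 23(3)(b): considerable material or non-material damage to others

def is_significant(a: IncidentAssessment) -> bool:
    # An incident is significant if EITHER limb of Article 23(3) is met.
    # "Capable of" counts: set these flags on suspected potential impact,
    # not only on confirmed impact.
    return (a.severe_disruption_possible
            or a.financial_loss_possible
            or a.third_party_damage_possible)
```

Under this framing, the quickly isolated single workstation maps to all-false (not significant), while ransomware that could take down critical systems maps to true on the first flag alone.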

The Reporting Timeline: Three-Stage Process

NIS2 requires a three-stage reporting process with strict deadlines.

Stage 1 – Early Warning (24 Hours)

Within 24 hours of becoming aware of a significant incident, you must submit an early warning to your competent authority or CSIRT.

The early warning is brief. It should indicate:

That you have identified a significant incident (entity name, incident description).

Whether the incident is suspected of being caused by unlawful or malicious acts (i.e., is it a cyber attack, or is it a technical failure, natural disaster, or accident?).

Whether the incident could have cross-border impact (does it affect services to customers in other Member States or other countries?).

The early warning gives regulators immediate awareness of the incident so they can coordinate response, alert other Member States if cross-border impact is likely, and provide initial guidance.

Trust service providers have a tighter deadline: 24 hours for the full incident notification (see below), not just the early warning.

Stage 2 – Incident Notification (72 Hours)

Within 72 hours of becoming aware of the significant incident, you must submit a full incident notification.

The incident notification should include:

Updated information from the early warning (is it confirmed to be malicious? Is cross-border impact confirmed?).

An initial assessment of the incident including:

Severity (how severe is the impact? Low, medium, high, critical?).

Impact (what systems are affected? What services? How many users/customers?).

Indicators of compromise (if the incident is a cyber attack, what technical indicators identify the attack--IP addresses, domains, file hashes, etc.? These help the regulator understand the threat and coordinate with other Member States).

The incident notification is more detailed than the early warning. Your incident response team should have gathered enough information by the 72-hour mark to provide an initial assessment.

For trust service providers, this is where the tighter deadline applies: they must submit their initial notification within 24 hours, not 72 hours.

Stage 3 – Intermediate and Final Reports

After the 72-hour incident notification, the process depends on the incident's status.

If the incident is still ongoing and you need more time to complete your investigation, you may provide intermediate reports at the request of your CSIRT or competent authority. These reports update them on your progress in mitigating or resolving the incident.

Within one month of submitting the initial incident notification (72-hour mark), you must submit a final report including:

A detailed description of the incident, including its severity and impact.

The type of threat or root cause that is likely to have triggered the incident (e.g., phishing and credential harvesting, a vulnerable web application, an insider threat, a supply chain attack, a malware infection, etc.).

Applied and ongoing mitigation measures (what have you done to contain and resolve the incident? Are systems being restored? Are further improvements being made to prevent recurrence?).

Where applicable, cross-border impact (is the incident affecting customers or services in other EU Member States or non-EU countries?).

If the incident is still ongoing when the final report falls due, you must provide a progress report at that time and a final report within one month of concluding your handling of the incident.
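The three-stage clock can be sketched as a small deadline calculator, keyed off the moment of becoming aware. This is a planning aid under stated assumptions: the function and key names are our own, the final-report deadline legally runs one month from when you actually submit the incident notification (approximated here as 30 days from the 72-hour deadline), and a still-ongoing incident shifts the final report as described above.

```python
from datetime import datetime, timedelta

def reporting_deadlines(aware_at: datetime, trust_service_provider: bool = False) -> dict:
    """Sketch of the Article 23(4) reporting clock.

    Trust service providers must submit the incident notification within
    24 hours instead of 72; the early warning deadline is 24 hours for all.
    """
    notification_window = timedelta(hours=24 if trust_service_provider else 72)
    return {
        "early_warning": aware_at + timedelta(hours=24),
        "incident_notification": aware_at + notification_window,
        # One month approximated as 30 days, counted from the notification
        # deadline rather than the actual submission time -- adjust in practice.
        "final_report_latest": aware_at + notification_window + timedelta(days=30),
    }
```

Treat the computed times as latest-permissible deadlines; "without undue delay" means you submit earlier whenever you can.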

Who Do You Report To?

You report to your CSIRT (Computer Security Incident Response Team) or, where applicable, your competent authority.

Article 23(1) specifies: "Each Member State shall ensure that essential and important entities notify, without undue delay, its CSIRT or, where applicable, its competent authority in accordance with paragraph 4 of any incident that has a significant impact..."

In most Member States, you report to the national CSIRT. A few Member States designate a different competent authority to receive NIS2 reports. You must determine which applies in your Member State. Contact your Member State's national cybersecurity authority (usually a government agency with responsibility for cybersecurity, often housed in the interior ministry, defence ministry, or digital affairs department) to confirm whether you report to the CSIRT or a specific competent authority.

The reporting mechanism is typically secure email or an online portal. Your CSIRT publishes contact information and instructions on how to submit notifications. Use only official channels; do not send notifications to random email addresses.

If you notify the competent authority (rather than the CSIRT directly), the competent authority is required to forward your notification to the CSIRT. So in either case, both your competent authority and the CSIRT will see your notification.

Notification of Service Recipients

Alongside notifying your regulator, you must also notify customers or service recipients if they are likely to be affected by the incident.

Article 23(1) states: "Where appropriate, entities concerned shall notify, without undue delay, the recipients of their services of significant incidents that are likely to adversely affect the provision of those services."

"Without undue delay" is vague. In practice, you should notify service recipients as soon as you can confirm that they are affected and as soon as you have enough information to provide meaningful guidance.

Article 23(2) adds: "Where applicable, Member States shall ensure that essential and important entities communicate, without undue delay, to the recipients of their services that are potentially affected by a significant cyber threat any measures or remedies that those recipients are able to take in response to that threat."

This means you should not only tell customers that there is an incident, but also tell them what they can do about it. For example, if there is a phishing campaign targeting your customers, tell them to be suspicious of phishing emails and change passwords. If there is a vulnerability in your service, tell customers to apply patches or upgrade.

The notification should be clear, timely, and actionable. Jargon-filled or delayed notifications undermine trust.

What Information Must You Report?

The Directive specifies certain information that must be included in your notification:

Entity information: Your entity's name, sector, size, primary service or activity.

Incident description: What happened? When did you first become aware of it? What systems or services were affected?

Malicious or unlawful act: Is this a deliberate cyber attack, or is it an accident, technical failure, or natural disaster?

Severity and impact: How bad is it? How many users/customers are affected? Is your service down, degraded, or fully operational?

Indicators of compromise: If it is a cyber attack, what technical indicators identify the attack (malicious IP addresses, domains, file hashes, command-and-control servers, etc.)?

Root cause: What is the likely cause? E.g., phishing and social engineering, vulnerable application, unpatched system, insider threat, supply chain compromise, malware infection, etc.

Mitigation measures: What have you done to contain, remediate, and prevent recurrence?

Cross-border impact: Does the incident affect customers in other EU Member States or non-EU countries?

Your Member State may require additional information via implementing acts. Check your national regulator's guidance.
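As a working aid, the data points above can be collected in a single structured record so nothing is missed under deadline pressure. The field names below are our own invention; your national CSIRT's portal or reporting form defines the authoritative schema.

```python
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class IncidentNotification:
    """Illustrative template for the Article 23 data points; not an official schema."""
    entity_name: str
    sector: str
    incident_description: str
    first_aware: str                      # ISO 8601 timestamp of becoming aware
    suspected_malicious: Optional[bool]   # attack vs. failure/accident; None if unknown
    severity: str                         # e.g. "low" / "medium" / "high" / "critical"
    affected_services: list = field(default_factory=list)
    users_affected: Optional[int] = None
    indicators_of_compromise: list = field(default_factory=list)  # IPs, domains, hashes
    likely_root_cause: Optional[str] = None
    mitigation_measures: list = field(default_factory=list)
    cross_border_member_states: list = field(default_factory=list)

def to_submission(n: IncidentNotification) -> dict:
    """Serialise the record for a portal or secure-email submission (assumed JSON-like)."""
    return asdict(n)
```

Filling such a record progressively -- a few fields for the early warning, more by the 72-hour notification, all of them for the final report -- keeps the three stages consistent with each other.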

Building Incident Response Capability

To comply with Article 23 reporting, you must have the capability to detect, investigate, and report incidents within the tight deadlines. This requires:

Detection Capability

You must be able to detect significant incidents. This requires:

Security monitoring tools (SIEM, intrusion detection, endpoint detection and response) that generate alerts for suspicious activity.

Log collection from critical systems (servers, firewalls, domain controllers, applications) so you have visibility into what is happening.

Alert triage and escalation processes so significant alerts are identified and escalated to incident responders quickly.

A definition of significant incident that your team understands so they can classify incidents accurately.
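The triage-and-escalation step above can be sketched as a routing rule. The severity and scope labels here are assumptions for illustration, not values from any specific SIEM; your own thresholds should come from your significant-incident definition.

```python
def escalate(alert: dict) -> str:
    """Route a monitoring alert. Assumed shape: {"severity": str, "scope": str},
    with scope one of "single_host", "multi_host", "service_wide"."""
    if alert["severity"] in {"high", "critical"} or alert["scope"] == "service_wide":
        # Potential significant incident: page the on-call responder and
        # start assessing against the 24-hour early-warning clock.
        return "page_on_call"
    if alert["severity"] == "medium":
        return "queue_for_triage"  # reviewed within business hours
    return "log_only"
```

The design point is that anything plausibly significant goes straight to a human who can start the reporting clock assessment, rather than waiting in a queue.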

Incident Response Team

You need a team with clear roles and 24/7 availability:

Incident coordinator: Manages the overall incident response, ensures proper escalation, and coordinates with other teams.

Technical investigator: Analyzes logs, gathers forensic evidence, identifies root cause and indicators of compromise.

Communications: Drafts customer notifications, communicates with regulators, updates senior management.

Legal/compliance: Advises on regulatory notification obligations, data protection requirements, and legal liability.

24/7 point of contact: When an incident is identified, there must be a way to reach incident responders immediately, 24 hours a day, 7 days a week.

Playbooks and Procedures

Your team needs documented procedures for handling different incident types:

Ransomware: Contain the attack (isolate infected systems), assess impact, activate backup systems if necessary, assess ransom demands and regulatory reporting obligations, investigate root cause.

Data breach: Identify what data was accessed or exfiltrated, assess whether it is personal data (triggering GDPR notification), assess business impact, contain the breach, notify customers and regulators.

System outage: Assess the scope of the outage, restore service using backup or alternative systems, investigate root cause, communicate with customers.

Supply chain compromise: Identify how your supply chain was affected, isolate compromised systems, alert downstream customers if you are a service provider, remediate.

Testing: Conduct quarterly tabletop incident response exercises in which your team simulates handling different incident scenarios. This builds muscle memory and identifies gaps in your capability.

Documentation and Evidence Preservation

During incident response, you must preserve evidence:

Preserve system logs and forensic data from affected systems (do not overwrite them).

Document your investigation steps and findings (what did you find? What does it mean?).

Preserve communications (emails, messages) that are relevant to the incident.

Maintain chain of custody if evidence may be needed for legal proceedings (law enforcement investigation).

This evidence becomes part of your final report to regulators and may be needed for internal investigation, legal proceedings, or cyber insurance claims.

Information Protection During Reporting

Article 23(6) states: "Where appropriate...the CSIRT, the competent authority or the single point of contact shall, in accordance with Union or national law, preserve the entity's security and commercial interests as well as the confidentiality of the information provided."

This means that information you provide to regulators is protected. Your sensitive technical details about your systems, your root cause analysis, your remediation measures--these should not be publicly disclosed by regulators without your consent.

However, Article 23(7) allows public disclosure if "public awareness is necessary to prevent a significant incident or to deal with an ongoing significant incident, or where disclosure...is otherwise in the public interest." For example, if there is a widespread vulnerability affecting many entities, the regulator may disclose the incident to alert the public.

Your notification to regulators is confidential, but this confidentiality is not absolute. It is balanced against public interest in awareness of threats.

The "Safe Harbor" Provision

Article 23(1) includes an important provision: "The mere act of notification shall not subject the notifying entity to increased liability."

This is a "safe harbor": notifying a regulator of an incident cannot be used against you to increase your legal liability. However, the safe harbor covers only the act of notification itself. It does not protect you from liability if you caused the incident through negligence or recklessness (e.g., failing to patch known vulnerabilities).

The safe harbor encourages organisations to report incidents honestly and completely without fear that the report itself will be used against them.

Cross-Border Incidents

If your significant incident affects customers or services in multiple Member States, your notification must give the authorities what they need to identify and coordinate on that cross-border impact.

Article 23(1) requires you to "report, inter alia, any information enabling the CSIRT or, where applicable, the competent authority to determine any cross-border impact of the incident."

Article 23(8) allows the single point of contact in one Member State to forward your notification to the single points of contact of other affected Member States at the request of the CSIRT or competent authority.

In practice, if you have a cross-border incident (e.g., a cloud provider whose customers span multiple Member States), identify all affected Member States in your notification and confirm with your CSIRT or competent authority that the other affected Member States are being informed.

Practical Incident Reporting Checklist

Here is a practical checklist for incident reporting:

Have you identified who your national CSIRT or competent authority is? Do you have their secure reporting email or portal?

Does your incident response team have the contact information and escalation procedures to activate incident response within hours of discovering an incident?

Have you defined what constitutes a "significant incident" for your organisation (in consultation with your regulator and implementing acts if available)?

Does your incident response playbook include steps for gathering the information required by Article 23 (severity, impact, indicators of compromise, root cause, mitigation measures, cross-border impact)?

Do you have detection capability (monitoring tools, log collection) to identify incidents quickly?

Have you documented your incident response procedures and tested them (tabletop exercises, incident drills)?

Do you have a 24/7 point of contact for incident response activation?

Do you know your notification deadlines (early warning within 24 hours; incident notification within 72 hours, or 24 hours for trust service providers; final report within one month)?

Have you identified which customers or service recipients you must notify if an incident occurs?

Do you have templates for customer notifications that comply with Article 23(2) (explaining what measures recipients can take)?

Have you coordinated incident reporting with your legal and compliance teams so you understand data protection notification requirements (GDPR), potential legal liability, and cyber insurance implications?

Key Takeaways

- A significant incident is one that causes or is capable of causing severe operational disruption or financial loss to your entity, or considerable damage to other persons or organisations; you must determine significance quickly so you can meet reporting deadlines.

- The NIS2 reporting timeline is three-stage: early warning within 24 hours (indicating whether the incident is malicious and if it has cross-border impact), incident notification within 72 hours with initial severity and impact assessment (24 hours for trust service providers), and final report within one month with detailed root cause analysis and mitigation measures.

- You must report to your national CSIRT or designated competent authority using official secure channels; you must also notify service recipients "without undue delay" if they are likely to be affected and explain what measures they can take to protect themselves.

- Building compliance capability requires detection tools, a 24/7 incident response team with clear roles, documented playbooks for different incident types, evidence preservation procedures, and regular tabletop exercises to test your capability.

- The safe harbor provision protects you from increased liability merely for notifying regulators of an incident, but does not shield you from liability if you caused the incident through negligence; information you report is protected from public disclosure unless disclosure is necessary for public safety.

- Trust service providers face a tighter timeline: 24-hour notification requirement (rather than 72 hours) for the initial incident notification when the incident affects their trust services.