
When an industrial network becomes unstable, pharmaceutical production feels the impact immediately. We were engaged to understand, diagnose, and stabilize an environment where each interruption could jeopardize the production of a vital drug. Here is how our team conducted the network audit for a global pharmaceutical leader to restore performance, stability, and cybersecurity.
Faced with recurring network outages impacting its production lines, a global pharmaceutical player called on DATIVE to audit its industrial network. The objective: identify the potential root causes, improve the stability of communications and strengthen the cybersecurity and performance of its OT infrastructure.

In the pharmaceutical industry, the slightest instability can have very real consequences. An interrupted batch, a disconnected PLC, a frozen supervision system... and the entire production chain stops: every microsecond counts. Because commissioning and validation activities are among the most demanding, these instabilities can also affect finished product quality and lead to significant financial losses.
Production continuity, batch traceability and regulatory compliance are the three pillars on which a pharmaceutical site’s reliability is built. When an industrial network suffers intermittent outages, the risk goes far beyond the technical perimeter. It becomes an economic, public health and regulatory issue.
We see this frequently: IT/OT convergence, essential for digital transformation, introduces new dependencies and multiplies points of fragility. In an environment subject to the NIS2 directive, IEC 62443 standards and Data Integrity principles, a simple network anomaly can compromise far more than a signal: it may call into question the quality of a product intended for patients.
It was in this demanding context that a global manufacturer of injectable drugs approached us.
The objective was clear: understand the origin of recurring network outages and restore lasting stability on its industrial network.

When we arrived at this drug production site, the maintenance teams were facing random interruptions, sometimes brief but long enough to disrupt production. These outages were neither localised nor systematic, which made diagnosis particularly complex.
We therefore defined a three-part action plan: understand the causes, secure communications and propose concrete measures to strengthen the resilience of the OT network. The client's expectations were straightforward: a clear diagnosis, prioritised recommendations and a holistic cyber hardening plan, all without disrupting ongoing production.
To achieve this, we deployed an industrial IDS sensor on site. Configured in port mirroring mode, this tool allowed us to observe traffic in real time without impacting operations, in a fully passive way. The analysis was carried out over several days to cover production cycles and obtain a representative view of the industrial network.
Facing intermittent outages or network instabilities? Contact our experts for an OT network audit.
On site, we first validated the mirroring configuration and spoke with automation and maintenance teams. These discussions are essential to understand operational practices, recent changes and sensitive assets.
The IDS sensor was then connected to the industrial network cabinet. Within a few hours, it had already identified nearly 200 heterogeneous communicating devices, highlighting the density and complexity of this pharmaceutical plant’s OT environment.
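For illustration, the kind of passive inventory produced at this stage can be sketched in a few lines of Python, assuming a capture exported from the mirror port (the file name is hypothetical and the real sensor does far more than this):

```python
# Passive device inventory from a mirror-port capture (illustrative sketch).
from collections import defaultdict
from scapy.all import rdpcap, Ether, IP

packets = rdpcap("mirror_port.pcap")   # hypothetical capture file
devices = defaultdict(set)             # MAC address -> set of IPv4 addresses seen

for pkt in packets:
    if pkt.haslayer(Ether) and pkt.haslayer(IP):
        devices[pkt[Ether].src].add(pkt[IP].src)

print(f"{len(devices)} distinct communicating devices observed")
for mac, ips in sorted(devices.items()):
    print(mac, "->", ", ".join(sorted(ips)))
```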
Our method is based on a cross-functional approach: combining technical analysis, behavioural observation and a cyber perspective. We examined IPv4 and IPv6 flows, measured TCP retransmissions, looked for abnormal behaviours and evaluated the consistency of network configurations.
This approach makes it possible to correlate seemingly unrelated symptoms (unusual latency, a saturated port, a silent PLC) in order to understand the root causes of disruptions.
We also assessed traffic quality via performance indicators: average latency, packet loss, collisions and traffic prioritisation (QoS). Finally, we reviewed switch sizing and load balancing to verify that the infrastructure could properly handle the volume generated by production and supervision systems.
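As an illustration of how such indicators can be derived from a purely passive capture, here is a minimal sketch that estimates latency from TCP handshake round-trip times; the capture file name is hypothetical, and real measurements would also account for packet loss and QoS markings:

```python
# Rough latency indicator: TCP handshake round-trip time (SYN -> SYN/ACK),
# computed offline from the passive capture. Illustrative sketch only.
from scapy.all import rdpcap, IP, TCP

packets = rdpcap("mirror_port.pcap")   # hypothetical capture file
pending = {}                           # (client, server, sport, dport, seq) -> SYN timestamp
rtts = []

for pkt in packets:
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        continue
    ip, tcp = pkt[IP], pkt[TCP]
    if tcp.flags == "S":                               # client SYN
        pending[(ip.src, ip.dst, tcp.sport, tcp.dport, tcp.seq)] = pkt.time
    elif tcp.flags == "SA":                            # server SYN/ACK
        key = (ip.dst, ip.src, tcp.dport, tcp.sport, tcp.ack - 1)
        if key in pending:
            rtts.append(float(pkt.time - pending.pop(key)))

if rtts:
    print(f"handshakes: {len(rtts)}, mean RTT: {1000 * sum(rtts) / len(rtts):.2f} ms")
```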

The detailed analysis revealed several factors. Taken individually, they could have seemed minor. Together, they painted the picture of a high-performing network weakened by heterogeneous configurations.
Several unnecessary IPv6 flows were detected on PLCs and industrial workstations. This configuration introduced a risk of address conflicts and random outages. We recommended disabling IPv6 on devices that did not require this protocol.
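In practice, a quick pass over the capture can list every device emitting IPv6 traffic so that the need for the protocol can be reviewed host by host; a hedged sketch, with a hypothetical capture file name:

```python
# List devices emitting IPv6 traffic so the protocol can be reviewed
# device by device before deciding where to disable it (sketch only).
from collections import Counter
from scapy.all import rdpcap, Ether, IPv6

packets = rdpcap("mirror_port.pcap")   # hypothetical capture file
ipv6_talkers = Counter()

for pkt in packets:
    if pkt.haslayer(Ether) and pkt.haslayer(IPv6):
        ipv6_talkers[(pkt[Ether].src, pkt[IPv6].src)] += 1

for (mac, ip6), count in ipv6_talkers.most_common():
    print(f"{mac}  {ip6}  {count} IPv6 packets")
```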
Some supervision workstations had two IP addresses on the same network. This dual attachment caused packet loss and routing errors. A standardised configuration was recommended.
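Dual attachments of this kind can be detected from ARP traffic alone; a minimal sketch, assuming the same passive capture:

```python
# Flag interfaces announcing more than one IPv4 address on the segment,
# based on ARP traffic observed in the capture (illustrative sketch).
from collections import defaultdict
from scapy.all import rdpcap, ARP

packets = rdpcap("mirror_port.pcap")   # hypothetical capture file
mac_to_ips = defaultdict(set)

for pkt in packets:
    if pkt.haslayer(ARP):
        mac_to_ips[pkt[ARP].hwsrc].add(pkt[ARP].psrc)

for mac, ips in mac_to_ips.items():
    if len(ips) > 1:
        print(f"{mac} announces several addresses: {sorted(ips)}")
```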
We observed packet retransmissions and incomplete connections, suggesting desynchronisation between PLCs or recent configuration changes. These anomalies are often invisible to operators but degrade overall stability. We advised our client to verify the configuration of devices with the highest retransmission rates.
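To help prioritise that verification, the capture can be ranked by retransmissions per source. A possible sketch using tshark's expert analysis through pyshark (assuming tshark is installed locally; the file name is hypothetical):

```python
# Rank devices by number of TCP retransmissions, using tshark's expert
# analysis exposed through pyshark (requires tshark; sketch only).
from collections import Counter
import pyshark

cap = pyshark.FileCapture("mirror_port.pcap",          # hypothetical capture file
                          display_filter="tcp.analysis.retransmission")
retrans_by_host = Counter()

for pkt in cap:
    if hasattr(pkt, "ip"):
        retrans_by_host[pkt.ip.src] += 1
cap.close()

for host, count in retrans_by_host.most_common(10):
    print(f"{host}: {count} retransmitted segments")
```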
Abnormal activity was detected on a filling PLC, characterised by a massive burst of TCP packets. Without prompt remediation, this type of behaviour can saturate a segment and trigger cascading outages impacting the production chain.
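Bursts of this kind can be spotted by counting packets per source over short time windows; a rough sketch, where the one-second window and the alert threshold are assumptions to be tuned per site:

```python
# Detect abnormal traffic bursts: sources exceeding a packets-per-second
# threshold in any one-second window (threshold is a hypothetical value).
from collections import defaultdict
from scapy.all import rdpcap, IP

PPS_THRESHOLD = 2000                    # hypothetical alert threshold
packets = rdpcap("mirror_port.pcap")    # hypothetical capture file
per_second = defaultdict(int)           # (source IP, second) -> packet count

for pkt in packets:
    if pkt.haslayer(IP):
        per_second[(pkt[IP].src, int(pkt.time))] += 1

for (src, second), count in sorted(per_second.items()):
    if count > PPS_THRESHOLD:
        print(f"{src} sent {count} packets during second {second}")
```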
We observed Profinet-DCP frames sent in multicast by a Siemens device. While legitimate in some scenarios, this discovery traffic can become overly chatty and affect network stability; it may reflect repeated discovery attempts or a misconfigured PLC.
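One way to quantify this chattiness is to count DCP identify frames per sender; a sketch under the assumption that the frames use the standard DCP multicast address and Profinet EtherType, possibly inside a VLAN tag:

```python
# Count Profinet-DCP identify frames per sender (illustrative sketch).
from collections import Counter
from scapy.all import rdpcap, Ether, Dot1Q

PN_DCP_MCAST = "01:0e:cf:00:00:00"      # DCP identify multicast address
PN_ETHERTYPE = 0x8892                   # Profinet EtherType

packets = rdpcap("mirror_port.pcap")    # hypothetical capture file
dcp_senders = Counter()

for pkt in packets:
    if not pkt.haslayer(Ether) or pkt[Ether].dst.lower() != PN_DCP_MCAST:
        continue
    ethertype = pkt[Dot1Q].type if pkt.haslayer(Dot1Q) else pkt[Ether].type
    if ethertype == PN_ETHERTYPE:
        dcp_senders[pkt[Ether].src] += 1

for mac, count in dcp_senders.most_common():
    print(f"{mac} sent {count} DCP identify frames")
```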
Several IP cameras were connected to the industrial network. While useful for visual supervision, these devices generated a significant video stream. On undersized switches, this traffic created temporary overloads and communication loss. We recommended a separate physical or virtual network to isolate these flows.
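Ranking sources by volume makes such devices easy to spot and helps decide which flows to move onto a dedicated VLAN or physical network; a minimal top-talkers sketch, with a hypothetical file name:

```python
# Identify the heaviest talkers (e.g. IP cameras) by bytes observed,
# to decide which flows to isolate on a separate network (sketch only).
from collections import Counter
from scapy.all import rdpcap, IP

packets = rdpcap("mirror_port.pcap")    # hypothetical capture file
bytes_by_source = Counter()

for pkt in packets:
    if pkt.haslayer(IP):
        bytes_by_source[pkt[IP].src] += len(pkt)

for src, total in bytes_by_source.most_common(10):
    print(f"{src}: {total / 1e6:.1f} MB observed on the mirror port")
```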
Some network printers were connected to the production network. These devices unnecessarily widen the attack surface and expose the environment to known print-service vulnerabilities such as PrintNightmare. We recommended removing them or isolating them on an office subnet.
These findings confirm a frequently observed reality: network disruptions rarely result from a single cause. They emerge from the accumulation of small imbalances which, together, undermine the overall performance of the industrial process.
Have your OT infrastructures assessed before weak signals turn into critical incidents.
We structured our recommendations across three time horizons: short, medium and long term, enabling a progressive and controlled rollout of the security measures.
Combined, these measures not only improve immediate network stability but also embed security and performance into a continuous improvement approach. Resilience becomes an operational reflex.

At the end of our engagement, the results were tangible.
Intermittent outages disappeared. TCP retransmission rates stabilised. Communications between PLCs and supervision systems are now smooth and predictable.
The network regained its stability, and above all its visibility: every flow and every dependency is now known and controlled.
Maintenance teams, initially faced with hard-to-interpret anomalies, now benefit from clear indicators and a consolidated supervision framework. This performance improvement naturally reinforced the site’s cybersecurity posture. A controlled network is, by nature, a more secure network.
By identifying weak signals before they became critical, the network audit conducted by DATIVE enabled this pharmaceutical giant to restore the stability its production relied on.
Cybersecurity and performance are not mutually exclusive. They complement and reinforce each other. In an environment where every batch matters, ensuring the continuity and security of the industrial network means safeguarding confidence in the medicine itself.
Experiencing network instabilities or anomalies in your OT environments? Contact DATIVE’s experts for a comprehensive diagnosis combining performance, reliability and industrial cybersecurity.