Industry at the AI edge – governance can’t wait (Reader Forum)

Edge AI adoption in industry is accelerating, but without governance frameworks, organizations risk inefficiency, security gaps and IT/OT conflict. Success depends on unified monitoring, clear accountability and aligning people, processes and technology to safely manage distributed, resource-intensive edge environments.

Edge AI is coming to a factory floor near you. In January, the World Economic Forum shared that intelligent industrial deployments are expected to jump from 20% to 50% by the end of the decade. In theory, this drives more autonomous decision-making closer to the data source, reshaping how businesses operate, compete, and grow. In practice, industrial operators need to keep in mind that adoption isn’t a one-and-done decision.

Deployment without governance guardrails creates significant teething pains. For example, older devices long managed with a “set and forget” mentality now need strict resource allocation. Also, many frameworks are designed for centralized cloud environments rather than distributed edge environments. And the gap between IT and OT quickly becomes an ownership vacuum when no one is clearly accountable if something goes wrong.

Teams moving with the moment must remember that optimizing production isn’t possible through adoption alone. Instead, success requires deploying this technology alongside the frameworks meant to safely oversee it.

Resource consumption dilemma

Montoya – AI compounds IT/OT challenge

Edge AI is an evolution in two parts. First, Industry 4.0 and the embrace of intelligent efficiency generate massive amounts of data from the factory floor. Rather than sending raw streams to the cloud – straining bandwidth and adding latency – producers are increasingly processing information at the edge. This way, a PLC or industrial device only sends summaries, reducing volume and improving response times.

This is a tall order for production machinery that, until recently, focused solely on deterministic industrial control in air-gapped environments. AI further compounds the challenge. Devices today aren’t just transferring data; they’re also running or feeding machine learning models locally. The result is compute-intensive workloads on legacy hardware – a problem, since industrial ecosystems aren’t always the “smartest”.

Production machinery is built to last decades while still getting the job done. These devices were never designed for AI workloads or enterprise-grade cybersecurity, and edge AI delivers both demands at once – a one-two punch that requires a corresponding shift in monitoring and resource management to prevent devices from maxing out or opening backdoors. Without that visibility, operators have little warning before a device becomes overwhelmed, fails, and grinds production to a halt.

Tension between cloud and edge

Most AI frameworks were built under the assumption of a centralized architecture with models in the cloud or a data center. As such, they’re easier to access, update, and monitor. The edge flips this thinking since models are distributed across devices in various locations, making it harder to pull logs, push updates, or roll back a model. Modern ecosystems are rapidly adapting to support edge machine learning lifecycle management – often faster than organizations can update their internal governance. 

The bottleneck, in other words, isn’t the technology but the people and processes.

Operating at the edge demands governance at the edge. Teams must be able to monitor model input and output with a clear chain of command. This is particularly important in regulated industries where compliance requires human oversight, audit trails, and validated decision-making. If not, we’re looking at a governance gap with knock-on effects across budget allocation, approval processes, and incident response. Nowhere is this clearer than when an edge model makes an incorrect autonomous decision and teams point fingers across the IT/OT divide.

Teams must be on same page

Rolling out before teams are ready only exacerbates the divide between IT and OT. If adoption occurs before governance is agreed, duties fall through the cracks because edge AI sits squarely at the intersection of both domains – the cybersecurity and networking considerations of IT and the production and uptime focus of OT.

Resource consumption is again a good example. Device status and performance metrics, previously managed from a purely operational standpoint, now demand continuous, IT-style monitoring to ensure continuity. At the same time, device performance remains integral to uptime and production. Applying this technology, then, isn’t a task for IT or OT alone but for both at once.

As I wrote recently for RCR Wireless, the two must recognize that the only way forward is together. Teams that can technically and culturally realign are far better positioned for the intelligent industrial future than those still arguing over who’s doing what. Getting this right matters.

The answer here isn’t to forgo innovation but to know the risks and onboard cautiously.

Unified monitoring for IT/OT

Admins can’t enforce policies on systems they can’t see, so start by unlocking network oversight. This means deploying unified monitoring that understands both IT and OT protocols across the full stack. By doing so, teams can back decisions with data and make approval processes enforceable. From there, teams establish what “normal” looks like across resource consumption, traffic patterns, and decision outputs, giving threshold alerts and rollback triggers a meaningful baseline to work from on every edge device.
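The baseline-and-threshold idea above can be illustrated with a minimal sketch: learn what “normal” looks like from a sliding window of recent samples, then flag anything that drifts too far from it. The class, parameters and thresholds below are assumptions for illustration, not a monitoring product’s API.

```python
from collections import deque
import statistics

class BaselineMonitor:
    """Rolling baseline with threshold alerts for one edge metric.

    Illustrative sketch: estimate "normal" from a sliding window,
    then flag samples more than `k` standard deviations away.
    Class and parameter names are assumptions, not a product API.
    """
    def __init__(self, window=60, k=3.0):
        self.samples = deque(maxlen=window)  # recent history only
        self.k = k

    def observe(self, value):
        """Record a sample; return True if it breaches the baseline."""
        alert = False
        if len(self.samples) >= 10:  # need some history first
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1e-9
            alert = abs(value - mean) > self.k * stdev
        self.samples.append(value)
        return alert

mon = BaselineMonitor(window=30, k=3.0)
for v in [50, 51, 49, 50, 52, 48, 50, 51, 49, 50]:  # normal CPU %
    mon.observe(v)
alarm = mon.observe(95)  # sudden spike trips the alert
```

Real deployments would track many metrics per device and feed alerts into rollback or escalation workflows, but the core loop is the same: no baseline, no meaningful threshold.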

There’s risk and reward in this evolution. On the one hand, manufacturing is already the most attacked industry and accelerating smart industrial deployments without proper guardrails threatens further cybersecurity exposure. But, on the other hand, serious efficiencies await those who get this right. This is something we recently saw with a global cable manufacturer implementing unified monitoring across its production environment to improve incident response, increase cross-team visibility, and catch machine issues before they cause costly shutdowns.

This is a balancing act that, enacted cautiously and conscientiously, promises to future-proof industrial strategies, maintain resilience across global value chains, and drive sustained growth.

David Montoya is presales director at Paessler GmbH. With deep expertise in manufacturing and IT/OT convergence, Montoya helps teams deliver proactive issue prevention and monitoring solutions that deploy fast and scale on their terms.
