Recap from the Technical Deep Dive with COO/CPO Kirill Sokolinsky and VP of Sales & Marketing Chris Nixon
EyeOTmonitor’s latest customer webinar took a technical deep dive into one of the most critical capabilities of the platform: the Events and Alerts Management system. Led by COO/CPO Kirill Sokolinsky and moderated by Chris Nixon, the session offered real-world examples, live configuration demos, and a detailed roadmap for what’s next.
Today’s networks are increasingly complex, made up of not only traditional routers and switches, but also radios, edge devices, surveillance cameras, and IoT endpoints with varying protocols. Being able to detect early indicators of failure — like a drop in signal strength, anomalous bandwidth, or interface errors — can dramatically reduce downtime, truck rolls, and support escalations.
At the heart of EyeOTmonitor’s monitoring platform is a powerful events and alerts engine that uses severity logic, tagging, and real-time data collection to surface the issues that actually matter.
It’s not just about sending an alert. It’s about surfacing what matters — and doing it intelligently.
Kirill began the session by breaking down how EyeOTmonitor conceptually understands and monitors a device. Each device is categorized based on three dimensions, which together determine its severity state:
Services refer to how the system connects with and validates device functionality. EyeOTmonitor checks for responsiveness using standard network protocols.
Metrics tracked include latency, jitter, stability, and availability. Visual indicators (such as a changing icon color) give instant insight into whether these services are healthy.
Collected via SNMP, ONVIF, or APIs, properties include metrics such as CPU load, temperature, and signal strength. They reflect the “health” of the device itself beyond basic network reachability.
Every networked device — whether wired or wireless — relies on interfaces. EyeOTmonitor tracks per-interface metrics such as operational status, bandwidth, and error counts.
These three categories — services, properties, interfaces — are the foundation of EyeOTmonitor’s Severity Rules Engine.
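As a minimal sketch of that three-dimensional model (the class and field names below are illustrative, not EyeOTmonitor’s actual schema), a device’s overall state can be derived as the worst severity observed across its services, properties, and interfaces:

```python
from dataclasses import dataclass, field

# Severity levels ordered from best to worst.
SEVERITIES = ["normal", "warning", "severe", "critical"]

@dataclass
class Device:
    """Illustrative model: a device is described along three dimensions."""
    name: str
    services: dict = field(default_factory=dict)    # e.g. {"icmp": "normal"}
    properties: dict = field(default_factory=dict)  # e.g. {"cpu_load": "warning"}
    interfaces: dict = field(default_factory=dict)  # e.g. {"eth0": "normal"}

    def severity(self) -> str:
        """Overall state is the worst severity across all three dimensions."""
        states = [*self.services.values(),
                  *self.properties.values(),
                  *self.interfaces.values()]
        if not states:
            return "normal"
        return max(states, key=SEVERITIES.index)

cam = Device("lobby-cam",
             services={"icmp": "normal"},
             properties={"cpu_load": "warning"},
             interfaces={"eth0": "normal"})
print(cam.severity())  # worst-case across dimensions -> "warning"
```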
The Severity Rules Engine lets users define thresholds that move a device or individual property from a normal state to a warning, severe, or critical one. This affects both the UI visualization (such as color-coded icons and alerts in the side panel) and the triggers for event generation.
Users can define thresholds at the device, property, or interface level.
You can think of this as both visual feedback and logic enforcement. If a property crosses a threshold, you’ll see it — and you can act on it.
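One way to picture a severity rule is as an ordered set of thresholds checked from worst to best. The rule structure below is an assumption for illustration, not EyeOTmonitor’s actual rule format:

```python
def evaluate_severity(value, rule):
    """Return the first severity whose threshold the value crosses.

    `rule` maps severity -> (comparison, threshold); checked worst-first.
    Illustrative structure only, not EyeOTmonitor's real rule schema.
    """
    comparisons = {">": lambda v, t: v > t, "<": lambda v, t: v < t}
    for severity in ("critical", "severe", "warning"):
        if severity in rule:
            op, threshold = rule[severity]
            if comparisons[op](value, threshold):
                return severity
    return "normal"

# Hypothetical rule: transceiver temperature in degrees C, higher is worse.
temp_rule = {"warning": (">", 60), "severe": (">", 65), "critical": (">", 70)}
print(evaluate_severity(55, temp_rule))  # "normal"
print(evaluate_severity(72, temp_rule))  # "critical"
```

A value of 72 skips past the warning and severe checks because the critical threshold is tested first.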
Tags are one of the most powerful and flexible features in EyeOTmonitor. They allow you to apply logic broadly or narrowly, without rewriting rules for each device type or protocol.
Example:
Set one rule for all “Access Cameras” regardless of how they’re monitored (SNMP or ONVIF).
Example:
Set an event to trigger if any radio’s local RSSI falls below -70dBm, regardless of model.
This abstraction layer makes large-scale monitoring scalable and consistent.
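The radio example above can be sketched like this (all names are hypothetical): a rule targets a tag rather than a device list, so one rule covers every matching device regardless of vendor or monitoring protocol.

```python
# Hypothetical sketch: rules target tags, so one rule covers every
# tagged device regardless of vendor or how it is monitored.
devices = [
    {"name": "rooftop-ptp-1", "tags": {"radio"}, "local_rssi": -64},
    {"name": "rooftop-ptp-2", "tags": {"radio"}, "local_rssi": -74},
    {"name": "lobby-cam",     "tags": {"access-camera"}, "local_rssi": None},
]

# "Trigger if any radio's local RSSI falls below -70 dBm, regardless of model."
rule = {"tag": "radio", "metric": "local_rssi", "below": -70, "event": "Low RSSI"}

def matching_events(devices, rule):
    """Yield (device, event) for tagged devices whose metric breaches the rule."""
    for d in devices:
        if rule["tag"] in d["tags"] and d.get(rule["metric"]) is not None:
            if d[rule["metric"]] < rule["below"]:
                yield d["name"], rule["event"]

print(list(matching_events(devices, rule)))  # [('rooftop-ptp-2', 'Low RSSI')]
```

The camera never matches because it lacks the tag, and the healthy radio never matches because its RSSI stays above the threshold.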
Kirill stressed a key point: EyeOTmonitor separates events from alerts on purpose. Events are logged for later analysis and trend-spotting; alerts actively notify someone.
You might care about CPU spikes across a month but don’t need an email every time it happens. That’s an event, not an alert.
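That distinction can be sketched as follows: every rule breach becomes an event in the log, but only rules that opt in to notification produce an alert. The `notify` flag and function names below are illustrative assumptions:

```python
from datetime import datetime, timezone

event_log = []    # everything is recorded here for later analysis
alert_queue = []  # only notification-worthy items land here

def record(rule_name, device, notify=False):
    """Log an event; escalate to an alert only if the rule opts in."""
    event = {"rule": rule_name, "device": device,
             "time": datetime.now(timezone.utc)}
    event_log.append(event)
    if notify:
        alert_queue.append(event)

# A CPU spike is worth tracking over a month, not an email every time:
record("CPU spike", "core-switch", notify=False)
# A camera losing video should notify someone:
record("No Video", "lobby-cam", notify=True)

print(len(event_log), len(alert_queue))  # 2 1
```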
The heart of the session was a series of live configuration demos. Here are the key examples:
Goal: Detect when a camera stops sending video to a recorder.
Approach: Monitor TX bandwidth on the camera’s interface. Trigger a “No Video” event if bandwidth drops below 250kbps.
Outcome: Event is logged, and interface visually enters a severe state.
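A minimal sketch of this demo’s logic, assuming a sampled TX bandwidth reading in kbps (the function and return structure are illustrative):

```python
NO_VIDEO_THRESHOLD_KBPS = 250  # from the demo: below this, assume no video

def check_camera_tx(tx_kbps):
    """Return a 'No Video' event and a severe interface state when TX
    bandwidth collapses, mirroring the webinar demo; otherwise healthy."""
    if tx_kbps < NO_VIDEO_THRESHOLD_KBPS:
        return {"event": "No Video", "interface_state": "severe"}
    return {"event": None, "interface_state": "normal"}

print(check_camera_tx(4200))  # camera streaming normally
print(check_camera_tx(12))    # camera stopped sending video
```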
Goal: Detect signal degradation on point-to-point and point-to-multipoint links.
Approach: Track local RSSI and tag radios by vendor (e.g., Ubiquiti Wave).
Outcome: Events are generated across multiple links and visible in the event log.
Goal: Catch physical port failures even if the device remains reachable.
Approach: Create events tied to “Interface Operational Status.”
Outcome: Event triggers, port visually marked red, validating the detection logic.
Goal: Catch overheating on SFP modules.
Approach: Set interface-level rule for optical transceiver temperature > 70°C. Trigger a critical event.
Outcome: Event logged and property marked critical.
Kirill also demonstrated how a device’s overall state is derived from its worst-case property unless otherwise configured.
In one live tweak, he adjusted the latency threshold on a camera from 4ms to 2ms, and the device UI updated from green to yellow in real time.
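The live tweak can be reproduced in miniature: with a measured latency of about 3ms (an assumed reading for illustration), tightening the threshold from 4ms to 2ms flips the state immediately.

```python
def latency_state(latency_ms, warning_threshold_ms):
    """Green (normal) while latency stays under the threshold, else yellow (warning)."""
    return "normal" if latency_ms < warning_threshold_ms else "warning"

measured = 3.0  # assumed current reading, in ms

print(latency_state(measured, warning_threshold_ms=4))  # "normal"  (green)
print(latency_state(measured, warning_threshold_ms=2))  # "warning" (yellow)
```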
Several enhancements are in development to give users even more control and context:
Delta detection will catch how a value changes over time, not just its absolute level.
We’re building logic that focuses not just on values — but on how those values behave over time.
This enables use cases like: if interface errors increase by 500 packets in 5 minutes, trigger an alert.
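The interface-errors example (500 extra errors within 5 minutes) could be sketched as a sliding-window delta check. All names here are assumptions, not the planned implementation:

```python
from collections import deque

class DeltaDetector:
    """Trigger when a counter grows by more than `max_delta` within `window_s`."""

    def __init__(self, max_delta, window_s):
        self.max_delta = max_delta
        self.window_s = window_s
        self.samples = deque()  # (timestamp_s, counter_value)

    def observe(self, t, value):
        self.samples.append((t, value))
        # Drop samples that have fallen out of the window.
        while self.samples and t - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        # Compare the newest value against the oldest still inside the window.
        return value - self.samples[0][1] > self.max_delta

# "If interface errors increase by 500 packets in 5 minutes, trigger alert."
det = DeltaDetector(max_delta=500, window_s=300)
print(det.observe(0, 1000))    # False: first sample, no growth yet
print(det.observe(120, 1100))  # False: +100 in 2 minutes
print(det.observe(240, 1600))  # True: +600 within the 5-minute window
```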
The session closed with gratitude and an open call for collaboration:
We’re humbled by the turnout and your continued trust. Everything we’ve built — and are building — comes directly from conversations with users like you.
Chris emphasized EyeOTmonitor’s commitment to ongoing customer support.
If you’re ready to start building your own intelligent monitoring strategy, reach out to the EyeOTmonitor team to get started.