Wednesday, August 29, 2018

Testing MQTT Telemetry with an Arbitrary Function

Do you need to test your IoT platforms against arbitrary telemetry data from
a variety of sensors?

You can easily generate ANY data you want from ANY sensor you want
using MIMIC IoT Simulator.

The screenshot below shows a Node-RED subscriber client graphing a
simulated sensor publishing a sine function with configurable
amplitude. In this case we simply increased the amplitude every minute.



The code to achieve this is shown; it consists of a total of 10 lines,
including comments, and took only minutes of effort.

Since the amplitude is parametrized, the same code can drive multiple
sensors sending the same function with different amplitudes at any
point in time. Any other parameter can likewise be made unique for each
simulated sensor.

The next screenshot shows 4 such sensors with different amplitudes
simultaneously. Any more would clutter the single graph too much.
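MIMIC scenarios are scripted in MIMIC's own action scripts, which are not reproduced here. As a rough illustration of the same idea, here is a minimal Python sketch (the function and parameter names are ours, not MIMIC's) that generates one sine-wave telemetry payload per sensor, parametrized by amplitude:

```python
import json
import math

def sine_sample(t, amplitude, period=60.0):
    """Return one telemetry payload (JSON string) for time t seconds,
    a sine wave with the given amplitude and period."""
    value = amplitude * math.sin(2 * math.pi * t / period)
    return json.dumps({"t": t, "value": round(value, 3)})

# Four simulated sensors running identical code with different amplitudes:
amplitudes = {"sensor1": 1.0, "sensor2": 2.0, "sensor3": 5.0, "sensor4": 10.0}
for name, a in amplitudes.items():
    # t = period/4 is the peak of the sine, so value equals the amplitude
    print(name, sine_sample(15.0, a))
```

In a real setup each payload would be published to the broker on that sensor's topic; the point is that one parametrized function serves any number of sensors.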


Wednesday, August 1, 2018

Video: Monitor end-to-end latency of your IoT Application with 10,000 Sensors

This 5-minute YouTube video shows how to monitor response time to an
MQTT broker in an IoT application with 10,000 active publishers.


This is important not only during the selection process, but also for
ongoing monitoring and troubleshooting, as outlined in our previous blog
post "IoT Sensors Need to be Managed", and in this Gartner report.

We are following the testing methodology outlined in our previous post
"MQTT performance methodology using MIMIC MQTT Simulator" to
minimize the interference between the test equipment and the system
under test.

We are using the open-source Node-RED flows published in our GitHub
repository to measure and graph end-to-end latency between a publisher
and a subscriber once a second for several brokers.
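The actual measurement is done by the Node-RED flows in the repository. Conceptually, the publisher embeds a send timestamp in the payload and the subscriber subtracts it from the receive time. A minimal Python sketch of that timestamp-in-payload technique (names are illustrative, not taken from the flows):

```python
import json
import time

def make_ping(now=None):
    """Publisher side: build a payload carrying the send time in milliseconds."""
    now = time.time() if now is None else now
    return json.dumps({"sent_ms": int(now * 1000)})

def latency_ms(payload, now=None):
    """Subscriber side: end-to-end latency of one received message."""
    now = time.time() if now is None else now
    sent = json.loads(payload)["sent_ms"]
    return int(now * 1000) - sent

# Simulated round trip: a message sent at t=10.0 s, received at t=10.05 s
print(latency_ms(make_ping(10.0), now=10.05), "ms")  # → 50 ms
```

In practice the publisher and subscriber clocks must agree; running both ends on one machine, as in our measurement setup, sidesteps clock synchronization.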

This simulates the latency between your sensors that are publishing
telemetry and your application that is consuming the telemetry.

We are using MIMIC MQTT Simulator to deploy 10,000 publisher clients
to the MessageSight broker on our intranet.

When we turn the switch on, the first 2,000 publishers start.
We'll time-lapse this process for brevity. Notice how we are graphing
the number of active sensors sending telemetry.

The graphs show the latency for 4 brokers. Only the bottom line, for
MessageSight, is influenced by our active sensors. The others are
purely controls, to verify that our measuring and graphing are
correct. In particular, the mosquitto line should be steady, since
that broker is doing nothing.

The public broker graphs will be unpredictable.

Notice how the blue MessageSight line is mostly steady around 0
milliseconds.  The white mosquitto line is steady around 50 milliseconds,
and should remain so for the duration of the experiment. (It turns out
the reason for the 50 ms delay is explained here).

If your application has real-time requirements, then response time
is a vital parameter to monitor. Even if not, response time
degradation can point to problems in your setup, especially at high
scale.

As more sensors become active, the blue latency graph becomes more
erratic. This is expected, as the broker is doing more work. In this
experiment, each sensor is only sending a small message every second,
and you can see the messages per second at the bottom of the MIMICview
graphical user interface.
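As a back-of-envelope check on what "a small message every second" means at this scale, here is a quick calculation. The 100-byte payload is our assumption for illustration, not a figure from the video:

```python
def aggregate_load(sensors, msgs_per_sec_each, payload_bytes):
    """Back-of-envelope broker ingest load for a uniform message profile."""
    msgs_per_sec = sensors * msgs_per_sec_each
    bytes_per_sec = msgs_per_sec * payload_bytes
    return msgs_per_sec, bytes_per_sec

msgs, byte_rate = aggregate_load(10_000, 1, 100)  # assumed 100-byte payload
print(msgs, "msgs/s,", byte_rate / 1e6, "MB/s")   # → 10000 msgs/s, 1.0 MB/s
```

Even modest per-sensor traffic adds up: doubling the message frequency or payload size doubles the aggregate load the broker must sustain.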

There are many variables that can impact latency: the distance
between sensors, brokers, and applications; the message profile, that
is, the average payload size and message frequency; the QoS of the
messages; the topic hierarchy being published to; the number of
subscriber clients and their performance; the retention policy for
messages; and many more. Only your particular requirements can tell
whether the performance is acceptable for you.

(This is a follow-up to our earlier video).