With the growing scale of IoT environments exposed to outside access over the internet comes the requirement of proving the resilience of your IoT infrastructure. How do you know your IoT application can withstand expected and unexpected loads, recover from faults, and resist malicious attacks?
Even in the absence of attackers, devices break and misbehave all the time. How does your infrastructure handle the so-called crying baby, i.e. a sensor publishing too frequently, and how can you be assured it does not interfere with the rest of your deployment?
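As an illustration of the crying-baby problem, a minimal sliding-window rate check can flag a sensor publishing far above its expected rate. This is a sketch in plain Python, not MIMIC's or any broker's actual mechanism; the class name and thresholds are placeholders:

```python
import time
from collections import deque

class RateMonitor:
    """Flag a 'crying baby' sensor: one publishing above its allowed rate.
    Illustrative sketch only; the limit and window are placeholder values."""

    def __init__(self, max_msgs_per_window, window_seconds=1.0):
        self.max_msgs = max_msgs_per_window
        self.window = window_seconds
        self.timestamps = deque()

    def record(self, now=None):
        """Record one message; return True if the sensor exceeds its limit."""
        now = time.monotonic() if now is None else now
        self.timestamps.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_msgs
```

For example, a monitor allowing 10 messages per second flags a sensor that publishes every 50 ms as soon as the 11th message arrives within one window.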
There are plenty of references about this requirement, e.g. this post about IoT Hub throttling:
"We’re already dealing with serious-scale connectivity when we talk Internet of Things, and we impose the throttling limits on IoT Hub to protect against what otherwise looks like Denial of Service (DoS) attacks on the ..."
or this Gartner report, which contains:
"... An IoT solution may be made up of hundreds or thousands of devices. To test all of the devices in their real environments may be expensive or dangerous. However, you also need to ensure that your IoT platform and back-end systems can handle the load of all of those devices and correctly send and receive data as necessary."
Some platforms document their throughput testing (e.g. Solace) or their throttling policies (e.g. Azure and Amazon). For others this information is hard to find. But even where it is documented, throughput depends heavily on your specific application profile, and policies are subject to change at any time. Unless you have tested thoroughly before deployment and can rely on QoS guarantees, you will likely run into problems in production.
To test your specific performance requirements you will need to set up a test that recreates your specific message patterns, connectivity patterns, scale, message processing, and so on, not just once, but continually.
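Recreating message patterns at scale means merging many independent per-device publish schedules into one time-ordered event stream. A minimal sketch in plain Python (device names and rates below are hypothetical, and real tests would attach payloads and connectivity behavior as well):

```python
import heapq

def simulate_fleet(device_rates, duration_sec):
    """Produce a time-ordered stream of publish events for a simulated fleet,
    each device firing at its own fixed rate (messages per second).
    Illustrative sketch; device names and rates are placeholders."""
    # Seed a min-heap with each device's first publish time.
    heap = [(1.0 / rate, name, 1.0 / rate)
            for name, rate in device_rates.items()]
    heapq.heapify(heap)
    events = []
    while heap:
        t, name, interval = heapq.heappop(heap)
        if t > duration_sec:
            continue  # this device's stream has run past the test window
        events.append((round(t, 6), name))
        heapq.heappush(heap, (t + interval, name, interval))
    return events
```

For a 3-second run with a 2 msg/s temperature sensor and a 1 msg/s GPS tracker, the generator interleaves 6 and 3 events respectively, in strict time order.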
MIMIC IoT Simulator is specifically designed for rapid development, regression testing, continuous tuning, thorough training, and compelling demonstration of large-scale IoT environments.
MIMIC provides a virtual IoT lab with unlimited scale and a flexible,
programmable simulation framework to customize your performance testing.
As a quick example, we ran a simple test publishing from one sensor at 10 messages per second to a broker, and it ran indefinitely. We then increased the rate to 100 messages per second, and in multiple separate tests the sensor was disconnected after 20 minutes. A look at the packet exchange with Wireshark revealed an explicit disconnect initiated by the broker.
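One plausible explanation for that consistent 20-minute cutoff is a broker-side token bucket: tokens refill at a fixed rate, each message consumes one, and a client that drains the bucket is disconnected. The sketch below is purely a hypothesis to illustrate the mechanism; the refill rate and bucket size are chosen to reproduce the observed timing, and the actual broker's policy was not disclosed:

```python
class TokenBucketBroker:
    """Hypothetical model of broker-side throttling (NOT any specific
    broker's documented policy). Tokens refill at `rate` per second up to
    `burst`; each message costs one token; an empty bucket means disconnect."""

    def __init__(self, rate=50.0, burst=60000.0):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = 0.0
        self.connected = True

    def on_message(self, now):
        # Refill for elapsed time, capped at the bucket size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1.0:
            self.connected = False  # broker-initiated disconnect
            return False
        self.tokens -= 1.0
        return True

def survives(msgs_per_sec, duration_sec, broker):
    """Publish at a fixed rate; return True if the client stays connected."""
    interval = 1.0 / msgs_per_sec
    t = 0.0
    while t < duration_sec and broker.connected:
        broker.on_message(t)
        t += interval
    return broker.connected
```

With these placeholder parameters (50 tokens/s refill, 60,000-token bucket), 10 messages per second never drains the bucket, while 100 messages per second drains it at a net 50 tokens per second and gets cut off after 60,000 / 50 = 1,200 seconds, i.e. exactly 20 minutes.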
(Original post at our blog page)