Wednesday, September 10, 2025

How a Simulator Like MIMIC Simulator Helps nGenius Customers

NETSCOUT nGenius is a service assurance and performance management platform.
It ingests NetFlow/IPFIX records, packets, metadata, and application-level information.
Customers use it to monitor end-to-end service delivery, VoIP/UC quality, and application
performance.

Common problems that customers run into include:

  1. High Data Rates – Full packet capture plus flows can overwhelm storage and analysis systems.

  2. Service/Application Visibility Gaps – Correlating flows, packets, and user experience is complex.

  3. Scalability and Cost – Packet-based monitoring requires very powerful hardware and lots of storage.

  4. Multi-Vendor Complexity – Different devices export different flows/metadata.

  5. Training & Troubleshooting – Staff need to learn how to interpret flow + packet data for root cause analysis.

  6. Integration Challenges – Feeding nGenius data into ITSM/SIEM/SOC tools isn’t always straightforward.


MIMIC Simulator Suite virtualizes large network environments to help tackle some of these problems:

  1. Validate Scale – Generate realistic traffic (flows + emulated devices) to see how nGenius handles high loads before production (see the flow-generation sketch after this list).

  2. Application/Service Testing – Simulate voice, video, or application flows so teams can practice monitoring service quality.

  3. Multi-Vendor Assurance – Emulate devices from Cisco, Juniper, Palo Alto Networks, and other vendors to test interoperability.

  4. Training Lab – Give engineers real scenarios (DDoS, poor QoS, packet loss) without touching live users.

  5. Safer Testing – Use simulated instrumentation data (SNMP, NetFlow, sFlow) instead of actual sensitive user data, avoiding compliance risks.

  6. Integration Validation – Feed nGenius with reproducible test data to confirm workflows with SIEM, NMS, or service desks.
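To make item 1 concrete, here is a minimal sketch, assuming the scapy Python library, that hand-crafts NetFlow v5 export datagrams and sends them at a fixed rate toward a collector. MIMIC generates such flows at far larger scale from emulated devices, so treat this only as an illustration of what flow traffic looks like on the wire; the collector address and every flow field below are made-up placeholders.

# Minimal sketch: emit synthetic NetFlow v5 records toward a flow
# collector to exercise its ingest path. Assumes scapy is installed
# (pip install scapy) and the script runs with raw-socket privileges.
# The collector address and all flow fields are placeholders.
import time
from scapy.all import IP, UDP, send
from scapy.layers.netflow import NetflowHeader, NetflowHeaderV5, NetflowRecordV5

COLLECTOR = ("192.0.2.10", 2055)          # hypothetical flow collector

def flow_packet(seq):
    """Build one NetFlow v5 export datagram carrying a single flow record."""
    hdr = NetflowHeader(version=5) / NetflowHeaderV5(
        count=1, sysUptime=seq * 1000,
        unixSecs=int(time.time()), flowSequence=seq)
    rec = NetflowRecordV5(
        src="10.0.0.1", dst="10.0.0.2",   # synthetic flow endpoints
        input=1, output=2,                # ifIndexes
        dpkts=100, dOctets=64000,         # packet/byte counters
        srcport=5060, dstport=5060,       # a SIP-like UDP flow
        prot=17)                          # protocol 17 = UDP
    return IP(dst=COLLECTOR[0]) / UDP(sport=9996, dport=COLLECTOR[1]) / hdr / rec

for seq in range(1000):                   # ~100 exports per second
    send(flow_packet(seq), verbose=False)
    time.sleep(0.01)

Sustaining this at the rates nGenius is sized for is exactly where a purpose-built generator like MIMIC earns its keep.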


Tuesday, August 5, 2025

MIMIC Simulator and LiveNX

LiveNX customers can benefit from MIMIC Simulator to complement their in-house lab at a fraction of the cost of real equipment: 

 

Customize LiveNX

  • MIMIC enables rapid development of custom features by recreating the exact customer scenario with repeatable test data.

  • This makes development, troubleshooting, and support faster and more precise.


Large-Scale Test Environments

  • LiveNX is designed to monitor enterprise and service provider networks with thousands of devices and interfaces.

  • With MIMIC, LiveNX customers can simulate tens of thousands of routers, switches, firewalls, and endpoints before deploying in production.

  • This allows validation of LiveNX scalability without needing physical gear. Resources can be allocated well in advance to ensure smooth operations.


Figure 1 - Topology view with connections between sites


Training and Demos Without Real Hardware

  • MIMIC can generate realistic SNMP responses from a variety of vendor devices and pathological scenarios, as well as NetFlow, sFlow, and syslog traffic (see the sketch after this list).

  • LiveNX teams can train staff and demonstrate LiveNX features using fully controlled, reproducible simulated environments, with no need to access the live production network.

  • Disaster preparation can be done safely in the simulated lab.
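As a quick way to verify that a simulated agent answers like a real device, the sketch below polls sysDescr and sysUpTime over SNMPv2c using the pysnmp Python library (our choice here; any SNMP tool such as snmpget works equally well). The agent address and community string are placeholders for a MIMIC-simulated device.

# Minimal sketch: poll a simulated SNMP agent the way a manager would.
# Assumes pysnmp 4.x (pip install pysnmp); the address and community
# string are placeholders for one of the simulated devices.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

AGENT = ("192.0.2.101", 161)              # hypothetical simulated router

errInd, errStat, errIdx, varBinds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),   # SNMPv2c
    UdpTransportTarget(AGENT),
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysUpTime", 0))))

if errInd or errStat:
    print("SNMP error:", errInd or errStat.prettyPrint())
else:
    for name, value in varBinds:
        print(name.prettyPrint(), "=", value.prettyPrint())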

 

Figure 2 - Traffic breakdown for one of the connections

 

Network Change / Upgrade Validation

  • Before rolling out firmware upgrades, new device models, or topology changes, LiveNX users can simulate the new environment in MIMIC.

  • This ensures LiveNX’s discovery, topology visualization, and performance analytics work correctly ahead of time.


Proof-of-Concept (PoC) Acceleration

  • Customers evaluating LiveNX can set up large testbeds overnight with MIMIC instead of waiting for lab hardware.

  • This reduces time-to-value and makes the PoC process smoother.


Monday, June 2, 2025

MIMIC Simulator and Dynatrace

In our quest to support all possible network management platforms, we have
interoperated with Dynatrace by discovering large networks and drilling into
individual devices.


Thursday, December 1, 2022

MIMIC MQTT Lab: Test MQTT 5 support on AWS IoT Core

AWS recently announced MQTT 5 support for AWS IoT.

We tested it in less than 5 minutes with MIMIC MQTT Lab AWS. You can do the same to make sure your
AWS IoT application uses the latest MQTT 5 features, such as properties in PUBLISH messages.
Check the 2-minute YouTube video that shows the MQTT 5 CONNACK with the new-style reason code and properties:
CONNACK rc=0x00 Session Expiry Interval 0,Receive Maximum 100,Maximum QoS 1,Retain Available 1,Maximum Packet Size 149504,Topic Alias Maximum 8,Wildcard Subscription Available 1,Subscription Identifiers Available 0,Shared Subscription Available 1,Server Keep Alive 50



When we connect with the disallowed QoS 2, we get a new self-explanatory error code:
CONNACK rc=0x9b Reason String CONNACK:QOS 2 is not supported:861b3462-65d8-ba70-5472-63869294a5a1

and when we send a malformed PUBLISH (empty topic and topic alias):

INFO  12/02.10:53:07 - MQTT[AGT=3916] - sent CONNECT (51 bytes)
INFO  12/02.10:53:07 - MQTT[AGT=3916] - rcvd CONNACK rc=0x00 Session Expiry Interval 0,Receive Maximum 100,Maximum QoS 1,Retain Available 1,Maximum Packet Size 149504,Topic Alias Maximum 8,Wildcard Subscription Available 1,Subscription Identifiers Available 0,Shared Subscription Available 1,Server Keep Alive 50
INFO  12/02.10:53:08 - MQTT[AGT=3916] - sent PUBLISH (126 bytes)
INFO  12/02.10:53:08 - MQTT[AGT=3916] - rcvd DISCONNECT reason 0x82 (Reason String DISCONNECT:Data in packet does not conform to MQTT specification:19ec6dc1-0b50-888c-6c3e-3be26faee968)
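If you want to reproduce a similar check with a generic client instead of the Lab, here is a minimal sketch using the paho-mqtt Python library (version 1.6 assumed, which added MQTT 5 support); the endpoint and certificate file names are placeholders for your own AWS IoT account.

# Minimal sketch: connect to AWS IoT Core over MQTT 5 and print the
# CONNACK reason code and properties. Assumes paho-mqtt 1.6; the
# endpoint and certificate paths are placeholders.
import paho.mqtt.client as mqtt

ENDPOINT = "xxxxxxxx-ats.iot.us-east-1.amazonaws.com"   # your AWS IoT endpoint

def on_connect(client, userdata, flags, reason_code, properties):
    # With protocol=MQTTv5, paho delivers the CONNACK reason code and
    # properties (Maximum QoS, Topic Alias Maximum, ...) right here.
    print("CONNACK rc =", reason_code)
    print("CONNACK properties:", properties)
    client.disconnect()

client = mqtt.Client(client_id="mqtt5-check", protocol=mqtt.MQTTv5)
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device.pem.crt", keyfile="private.pem.key")
client.on_connect = on_connect
client.connect(ENDPOINT, 8883)
client.loop_forever()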



Monday, November 21, 2022

MQTT performance testing - Best Practices

The MIMIC Simulator performance testing methodology attempts to overcome
common problems with published performance benchmarks, especially in the
IoT arena. In this article we examine one recently published report and discuss
how it could be improved.
 
The main problem with any performance test is that the results apply only to the
specific test scenario. If the test scenario is carefully selected, the results will be
relevant for a wide variety of situations. If the test report is good, the exact
methodology is documented, so you can evaluate it and determine whether the
results are useful for you. For example, this report

https://www.researchgate.net/publication/354610718_Stress-Testing_MQTT_Brokers_A_Comparative_Analysis_of_Performance_Measurements

performed one test scenario for an uncommon situation: a small set (3) of high-frequency
publishers and 15 mosquitto_sub subscribers. Plain-text MQTT is used only
in trivial deployments, and there is no indication that TLS transport was measured.
Latency measurements also suffer from the clock synchronization problem across
separate systems.
 
Specifically, it says right at the beginning in the abstract
 
"The evaluation of the brokers is performed by a realistic test scenario"

 but then, in section 4.1.1. Evaluation Conditions:

"

Number of topics:                   3
                                    (via 3 publisher threads)
Number of publishers:               3
Number of subscribers:              15 (subscribing to all 3 topics)
Payload:                            64 bytes
Topic names used to publish large
number of messages:                 ‘topic/0’, ‘topic/1’, ‘topic/2’
Topic used to calculate latency:    ‘topic/latency’

"
 
so rather than testing a large-scale environment, a small set (3) of high-frequency
publishers and 15 mosquitto_sub subscribers was used. In our experience, no
recent broker has any problem with fewer than 1000 publishers.

Second, in section 4, the subscriber back end is detailed:

"The subscriber machine used the “mosquitto_sub” command line
subscribers, which is an MQTT client for sub- scribing to topics and
printing the received messages. During this empirical evaluation, the
“mosquitto_sub” output was redirected to the null device (/dev/null)
"

using the simple mosquitto_sub client, which is single-threaded. In addition,
the subscribers subscribe to all topics, probably via the wildcard topic #. So, of
the many code paths in the broker, the least commonly used one is tested. If your
application uses a topic hierarchy, with different subscribers subscribing to
different topic trees, then topic-matching performance needs to be exercised.

Third, while QoS 0, 1, and 2 seem to be tested, only a single payload size
was used, and there is no indication that TLS transport was measured.

Fourth, they attempt to measure latency correctly, i.e., in section 4.1.2:

"Latency is defined as the time taken by a system to transmit a message
from a publisher to a subscriber
"

but their methodology is flawed: it is almost impossible to synchronize the
clocks on two separate systems to millisecond accuracy, and in table 6 the latencies
are in the 1 ms range, so the measurements rely on synchronization of unknown quality.
For an example of the MIMIC latency testing methodology, see this blog post.
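One way to sidestep the synchronization problem entirely is to run the publisher and the subscriber on the same host, so a single monotonic clock timestamps both ends. The sketch below, assuming paho-mqtt 1.x and a local broker (address, topic, and rate are placeholders), measures latency this way; note it measures publish-to-receive round trips through the broker, not the one-way latency the paper defines.

# Minimal sketch: measure publish-to-receive latency through a broker
# with publisher and subscriber on one host, so one monotonic clock is
# used and no cross-machine clock synchronization is needed.
# Assumes paho-mqtt 1.x; broker address and topic are placeholders.
import time, struct
import paho.mqtt.client as mqtt

BROKER, TOPIC, COUNT = "127.0.0.1", "bench/latency", 100
latencies = []

def on_message(client, userdata, msg):
    sent = struct.unpack("!d", msg.payload)[0]     # sender's timestamp
    latencies.append(time.monotonic() - sent)

sub = mqtt.Client()
sub.on_message = on_message
sub.connect(BROKER)
sub.subscribe(TOPIC, qos=1)
sub.loop_start()

pub = mqtt.Client()
pub.connect(BROKER)
pub.loop_start()

for _ in range(COUNT):
    pub.publish(TOPIC, struct.pack("!d", time.monotonic()), qos=1)
    time.sleep(0.05)

time.sleep(1)                                      # drain in-flight messages
sub.loop_stop(); pub.loop_stop()
if latencies:
    median_ms = sorted(latencies)[len(latencies) // 2] * 1000
    print(len(latencies), "samples, median ~%.2f ms" % median_ms)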




Friday, November 18, 2022

Migrating from shuttered IBM Watson IoT platform

In a previous article, simulation was recognized as helping prevent the IoT project
failures that are so prevalent in the industry.
 
With the recent announcements that the Google IoT Core and IBM Watson IoT
platforms are shutting down, we suggest that MIMIC IoT Simulator can be used
to help migrate from the obsolete platforms to a new offering by:


1) running a facsimile of your environment in MIMIC

2) staging migration to the new platform

3) testing requirements at various scales to make it future-proof

before you impact your production network.

Thursday, November 17, 2022

How to scale your MQTT lab to 1000 sensors in minutes

TL;DR Money saved: $40,000. Time saved: immeasurable.

We needed to create an MQTT lab with 1000 sensors to test a subscriber client with
realistic telemetry. The open-source client tracks any key's value and alerts if an
arbitrarily pre-selected value exceeds a threshold (a minimal sketch of such a client appears below).
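The post does not name the client, so as a stand-in here is a minimal sketch of such a threshold-alerting subscriber using the paho-mqtt Python library; the broker address, topic, watched key, and threshold are all placeholder assumptions.

# Minimal sketch of a threshold-alerting MQTT subscriber, similar in
# spirit to the client under test (which is not named in the post).
# Assumes paho-mqtt 1.x; broker, topic, key, and threshold are placeholders.
import json
import paho.mqtt.client as mqtt

BROKER = "127.0.0.1"
TOPIC = "shellyplusht-demo/status/temperature:0"   # hypothetical topic
KEY, THRESHOLD = "tC", 30.0                        # watch the Celsius reading

def on_message(client, userdata, msg):
    try:
        value = json.loads(msg.payload)[KEY]
    except (ValueError, KeyError, TypeError):
        return                                     # ignore non-matching payloads
    if value > THRESHOLD:
        print("ALERT: %s=%s exceeds %s on %s" % (KEY, value, THRESHOLD, msg.topic))

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER)
client.subscribe(TOPIC)
client.loop_forever()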

We bought 1 real Shelly Plus H&T sensor for $40.

After you have configured it, it sends MQTT messages to the broker, but only
when the temperature or humidity changes. So, to test our application, we
would have had to run to the refrigerator quite often to change the
temperature.

As you can see from the screenshot


it sends JSON payloads, but very infrequently. In our case, the next message arrived after 6 minutes:


So, every time we wanted a message, we needed to change the temperature.

To accelerate development, we used MIMIC.

First we captured the messages with Wireshark and recorded them into MIMIC MQTT Simulator,
then generated messages whenever and however we wanted. Rather than waiting for minutes, we could
send any message with any value in seconds, speeding up development. Then we multiplied
the sensor 1000-fold, quickly reaching the required scale at no additional cost.

This video shows the process in 2 minutes:


Money saved: $40,000. Time saved: immeasurable.