A small, special-purpose network (Figure 1) connects three computers through a layer 3 switch. Each computer consists of a CPU connected to a node via a PCI bus. CPU1 is partitioned into two virtual machines. This model demonstrates several abstraction techniques. CPU actions and message transmissions are defined as resource costs, which are charged against quantity resources representing memories and buffers, and against server resources representing CPUs, buses, and cables. Shared memory makes information available to multiple blocks simultaneously. Hierarchical data structures carry messages between blocks (e.g., moving routing table entries), represent the flow of data messages through the protocol stack (with support for encapsulation, addressing, etc.), and collect statistical data for performance analysis.
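The resource-cost abstraction above can be illustrated with a minimal plain-Python sketch (this is not MLDesigner syntax; the class names, the buffer capacity, and the per-message cost of 2.0 time units are illustrative assumptions):

```python
class QuantityResource:
    """Fixed pool of units, e.g. buffer slots or memory blocks."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.available = capacity

    def acquire(self, units=1):
        if self.available < units:
            return False          # message would be queued or dropped
        self.available -= units
        return True

    def release(self, units=1):
        self.available = min(self.capacity, self.available + units)


class ServerResource:
    """Serializes jobs (a CPU, bus, or cable); each job charges a cost."""
    def __init__(self):
        self.free_at = 0.0        # time the server next becomes idle
        self.busy_time = 0.0

    def serve(self, arrival, cost):
        start = max(arrival, self.free_at)   # wait if the server is busy
        self.free_at = start + cost
        self.busy_time += cost
        return self.free_at                  # completion time


# Hypothetical setup: four buffer slots, CPU cost of 2.0 time units/message.
buffers = QuantityResource(capacity=4)
cpu = ServerResource()
done = []
for arrival in (0.0, 1.0, 1.5, 6.0):
    if buffers.acquire():                    # claim a buffer slot
        finish = cpu.serve(arrival, cost=2.0)
        buffers.release()                    # free the slot after service
        done.append(finish)
```

Here the second and third messages arrive while the CPU is still busy, so their completions queue up back-to-back, which is the behavior the server resource primitives capture.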
The network model is organized as follows. The top-level MLDesigner model (Figure 2) uses custom icons for the CPUs, Nodes, and Switch. Quantity and server resources (the small square blocks) model the CPUs, buses, buffers, and Ethernet cables. Statistical modules (the yellow/shaded blocks) collect and graph performance data.
Figure 3 shows CPU1. The top two ApplicationProcessData blocks generate messages (driven by periodic and Poisson arrival models) and pass them to NodeA. The bottom ApplicationProcessData block receives messages from NodeA. Three hierarchical server resources implement the two-layer server resource.
The expanded 2-Layer Server Resources block (lower left corner of Figure 3) shows how the server resource primitives are organized to split CPU1 into two virtual machines (Figure 4).
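One way to think about the two-layer split is that each virtual machine holds a fixed share of the physical CPU's capacity, so work costed against a VM takes proportionally longer than it would on the bare CPU. A minimal sketch, assuming a hypothetical 50/50 split of CPU1 (the class name and share values are not from the model):

```python
class VirtualMachineShare:
    """One layer of a two-layer server: a VM holding a fixed CPU share."""
    def __init__(self, name, share):
        self.name = name
        self.share = share        # fraction of the physical CPU, 0 < share <= 1
        self.free_at = 0.0

    def serve(self, arrival, cpu_cost):
        # The VM sees only `share` of the CPU, so service is stretched.
        effective = cpu_cost / self.share
        start = max(arrival, self.free_at)
        self.free_at = start + effective
        return self.free_at


vm1 = VirtualMachineShare("VM1", share=0.5)   # hypothetical equal split
vm2 = VirtualMachineShare("VM2", share=0.5)

# A 1.0-unit job on a half-share VM takes 2.0 units of wall time;
# the two VMs serve their jobs concurrently on their own shares.
t_done_vm1 = vm1.serve(arrival=0.0, cpu_cost=1.0)
t_done_vm2 = vm2.serve(arrival=0.0, cpu_cost=1.0)
```

The hierarchical block in Figure 4 plays the same role: the outer layer arbitrates the physical CPU, while the inner layers expose each VM's slice to the threads that run on it.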
The model generates both dynamic (during simulation) and summary statistical reports.
This summary graph splits the sample period (300 time units) into 50 batches and reports the mean Thread 1 to Thread 3 latency for each batch.
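The batch-means computation behind such a summary graph can be sketched in a few lines of Python (the function name and the sample data are illustrative, not taken from the model):

```python
def batch_means(samples, period=300.0, batches=50):
    """Split [0, period) into equal batches; return mean latency per batch.

    `samples` is a list of (completion_time, latency) pairs.
    Batches with no samples yield None.
    """
    width = period / batches          # 300 / 50 = 6 time units per batch
    sums = [0.0] * batches
    counts = [0] * batches
    for t, latency in samples:
        i = min(int(t // width), batches - 1)
        sums[i] += latency
        counts[i] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]


# Two samples fall in batch 0 (t < 6) and one in batch 1 (6 <= t < 12).
means = batch_means([(1.0, 0.4), (5.0, 0.6), (7.0, 0.9)])
```

Batching like this smooths out sample-to-sample noise while still showing how the latency drifts over the simulation run.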
Figure 7 shows the utilization of Virtual Machine 2 (refer to Figure 2). VM2 hosts Thread 3, which receives messages from Thread 1 in VM1; both virtual machines reside in CPU1.
Figure 8 shows switch buffer utilization.
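A buffer-utilization figure of this kind is typically a time-weighted average of the buffer's occupancy. A small sketch, assuming occupancy is recorded as (time, new_level) change events (the function name and data are hypothetical):

```python
def time_weighted_mean(changes, horizon):
    """Time-average occupancy from (time, new_level) change events.

    `changes` must be sorted by time and start at t = 0;
    `horizon` is the end of the observation window.
    """
    area = 0.0
    # Each level persists until the next change event.
    for (t0, level), (t1, _) in zip(changes, changes[1:]):
        area += level * (t1 - t0)
    last_t, last_level = changes[-1]
    area += last_level * (horizon - last_t)  # tail up to the horizon
    return area / horizon


# Example: the buffer holds 2 messages for t = 0..5 and 4 for t = 5..10.
util = time_weighted_mean([(0.0, 2), (5.0, 4)], horizon=10.0)
```

Weighting by time (rather than averaging the raw samples) matters because a level that persists longer should count proportionally more toward the reported utilization.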