Manifold 0.12 User Guide


Introduction

Manifold is a parallel discrete event simulation framework for simulating modern multicore computer architectures. The software package consists of a parallel simulation kernel, a number of component models, and a few ready-to-use simulator programs that use the component models to build and simulate system models. Users can also port third-party components to Manifold and build system models with them. This user guide describes how to obtain the Manifold source code, and how to build and run the simulator programs.



Overview

Manifold is designed for parallel simulation of multicore systems. The general simulation system is shown in Figure 1 below.

Figure 1 Manifold Simulation System. [image: sys.png]

At run-time, instruction streams are fed to the multicore system model for simulation. Example sources of instructions include PIN [1] trace files and the QSim [2] multicore emulator. Components of the system model can be assigned to different host machines for parallel simulation.

The following are the general steps that a simulator program needs to follow to create a system model for simulation (a minimal sketch follows the list):

  1. Initialize the simulation kernel.
  2. Create the components of the system model and assign them to logical processes (LPs).
  3. Connect the components' ports to form the system topology.
  4. Specify the simulation stop time and start the simulation.
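Below is a minimal sketch of such a simulator program. It is illustrative only: MyCore, MyCache, the port numbers, the handler, the clock, and the stop time are placeholders, and the exact kernel calls (for example, the signature of Manifold::Connect) should be checked against the kernel's Doxygen documentation.

#include "kernel/manifold.h"
#include "kernel/component.h"
#include "kernel/clock.h"

using namespace manifold::kernel;

int main(int argc, char** argv)
{
    // 1. Initialize the simulation kernel (ticked mode, default algorithm).
    Manifold::Init(argc, argv, Manifold::TICKED, SyncAlg::SA_CMB_OPT_TICK);

    Clock clock(1000);  // a clock for the ticked components

    // 2. Create the components, assigning each to a logical process (LP).
    //    MyCore and MyCache are placeholder component classes.
    CompId_t core  = Component::Create<MyCore>(0 /*LP*/);
    CompId_t cache = Component::Create<MyCache>(0 /*LP*/);

    // 3. Connect the components' ports with a link of a given latency.
    Manifold::Connect(core, 0 /*port*/, cache, 0 /*port*/,
                      &MyCache::handle_request, 1 /*latency in ticks*/);

    // 4. Set the stop time and run the simulation.
    Manifold::StopAt(1000000);
    Manifold::Run();

    Manifold::Finalize();
    return 0;
}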

The simulator programs that are part of the distribution package can serve as examples for how to write simulator programs with Manifold. The component models that are included in the package can be used in building system models. The user can also port third-party components to Manifold and build system models using such components.

Major features of Manifold include the following:



Current Release

The current release is Release 0.12. The software is distributed as a source code package that contains the following:

  - Zesto: a detailed cycle-level x86 processor model.
  - SPX: a superscalar processor timing model.
  - Simple-proc: a simple processor model.
  - Mcp-cache: a coherence cache model.
  - Simple-cache: a simple cache model.
  - Iris: a cycle-level interconnection network model.
  - CaffDRAM: a DRAM controller model.

Testing and Portability



Source Code Directory Structure

The Manifold source code is organized as follows:

  ROOT
  |... code
       |... doc
       |... kernel
       |... models
       |    |... cache
       |    |    |... mcp-cache
       |    |    |... simple-cache
       |    |... memory
       |    |    |... CaffDRAM
       |    |... network
       |    |    |... iris
       |    |... processor
       |         |... simple-proc
       |         |... zesto
       |... simulator
            |... smp
            |    |... common
            |    |... config
            |    |... QsimClient
            |    |... QsimLib
            |    |... QsimProxy
            |    |... TraceProc
            |... smp2
                 |... common
                 |... config
                 |... QsimClient
                 |... QsimLib
                 |... Trace

where ROOT represents the root of the source tree.

Under each of kernel, mcp-cache, iris, and zesto, there is a subdirectory doc/doxygen that contains a user guide in Doxygen format for the respective component.


The simulator directory

There are two sets of simulator programs, under smp and smp2, respectively. Programs under smp use a fixed set of components, while programs under smp2 allow one component to be replaced by another simply by changing a configuration file. For example, by changing one line of the configuration file, the user can replace Zesto with SPX.

Under each of smp and smp2, there are a total of six simulator programs, two each under QsimClient, QsimLib, and TraceProc. The difference is the source of the instruction streams: programs under QsimClient use a QSim server to get instructions; those under QsimLib are built with the QSim libraries; and those under TraceProc use PIN traces.

The common code of the simulator programs is located in common. The directory config contains configuration files for the simulator programs.



Build Process Overview

To build and run the simulator programs that are part of the software package, you will need to perform the following steps:

  1. Install the required packages.
  2. Download and build QSim.
  3. Download and build the Manifold libraries.
  4. Build the simulator programs.
  5. Start the simulators.

The simulators can take instructions from one of three sources: trace files, the QSim library, or a QSim server. Depending on which source you use, some of the steps above may be optional.

The following explains each step in detail.



Install Required Packages

Before you proceed, you need to install the required packages. At a minimum, these include a C++ compiler and GNU make, an MPI implementation (the simulators are started with mpirun), and libconfig++ (the simulators' configuration files use the libconfig format).



Download and Build QSim

If you choose to use QSim to get instructions, you need to build and install QSim first.

Download

We recommend using QSim version 0.1.5, which is available at the Manifold web site:

Build and Installation

Instructions for building and installing QSim can be found in the INSTALL file in the root directory of QSim source code.

In addition to the QSim libraries, you also need to do the following:

All the instructions are in the INSTALL file.

After you are finished, your installation directory should look like the following, assuming QSIM_INSTALL is the root of the installation directory.

$ ls <QSIM_INSTALL>/lib
libqemu-qsim.so  libqsim-client.so  libqsim.so

$ ls <QSIM_INSTALL>/include
mgzd.h  qsim-client.h  qsim.h  qsim-load.h  qsim-net.h  qsim-regs.h  qsim-vm.h




Download and Build Manifold Libraries

There are two ways to download the Manifold source code: from the Manifold website or through SVN checkout. The build process differs slightly depending on how the source code is obtained.

Download Manifold source package

The Manifold source package is available at the Manifold website:

After download, follow these steps to build the Manifold libraries:

  1. Untar the source package.
    $ tar xvfz manifold-0.12.tar.gz
    

  2. Go to the code subdirectory.
    $ cd manifold-0.12/code
    

  3. Run configure and make.
    $ ./configure [--prefix=INSTALL_PATH]
    $ make
    
    The default installation directory is /usr/local/lib. If you want to install in a different location, pass the path of that location to configure. In addition, if QSim is installed in a location other than the default, you need to tell configure where it is. The options that you can specify for configure are described below.

  4. Optionally, install the libraries.
    $ make install
    


Download Manifold source code through SVN checkout

Manifold source code is available through SVN checkout at the following address:

To build the un-packaged source code, you need to have autotools installed on your machine.

  1. From the code subdirectory, run autoreconf.
    $ cd code
    $ autoreconf -si
    
    This creates the configure script.

  2. Run configure and make.
    $ ./configure [--prefix=INSTALL_PATH]
    $ make
    

  3. Optionally, install the libraries.
    $ make install
    


Configure options

This section describes all of the options you can use when running the configure script.



Build the Simulator Programs

The simulator programs are located in ROOT/code/simulator/smp and ROOT/code/simulator/smp2. Programs under smp use a fixed set of components: Zesto, MCP-cache, Iris, and CaffDRAM. Those under smp2 are more flexible: they allow a component to be replaced by another by simply modifying the configuration file. For example, you can replace Zesto with SimpleProc, or CaffDRAM with DRAMSim2.

In the following we only discuss programs under smp.

There are four subdirectories of simulators, based on how they get instructions:

  - QsimClient: the simulators get instructions from a QSim server over TCP/IP.
  - QsimProxy: the simulators get instructions from a QSim server through proxy processes.
  - QsimLib: the simulators are built with the QSim libraries and get instructions from QSim directly.
  - TraceProc: the simulators get instructions from PIN trace files.

In addition there are two other subdirectories:

  - common: code shared by the simulator programs.
  - config: configuration files for the simulator programs.

To build the simulators, follow the steps below. Here we use the simulators under QsimClient as an example.

  1. Go to the simulator source directory.
    $ cd ROOT/code/simulator/smp/QsimClient
    

  2. Run make. You will likely need to modify the Makefile so that the header files and libraries can be found.
    $ make
    



Start the Simulators

In each of the subdirectories there is a program called smp_llp. This program simulates the following system model, where each core node has a processor core, a private L1 cache, and a shared L2 slice.

Figure 2 System Model Simulated by smp_llp. [image: manifold_example_sys1.png]

In addition, there is a program called smp_l1l2. It simulates a slightly different model in which the L2 slices reside in separate nodes, like the memory controllers.

Configuration parameters for the components, except the Zesto processor, are defined in a libconfig configuration file in the config subdirectory.
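
For illustration, a fragment of such a configuration file might look like the following. The group and parameter names here are hypothetical; the authoritative names are those used in the files shipped under config, such as conf2x2_torus_llp.cfg.

// hypothetical fragment in libconfig syntax
network:
{
    x_dimension = 2;
    y_dimension = 2;
};

llp_cache:
{
    size       = 32768;  // bytes
    assoc      = 4;
    block_size = 32;     // bytes
};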

In the following we describe how to start the simulators in each of the subdirectories.


Start the Simulators in QSimClient

These simulators require that the QSim server be started first.

To start the QSim server, run the following commands:

$ cd QSIM_ROOT/remote/server
$ make
$ ./qsim-server <port> <state_file> <benchmark>  

where

  - <port> is the TCP port on which the server listens.
  - <state_file> is the QSim state file to load.
  - <benchmark> is the tar file containing the benchmark to run.


After the QSim server has started, the simulator can be started.

If QSim is installed in /usr/local, do the following:

$ cd SIMULATOR_ROOT
$ mpirun -np <NP> <prog> <conf_file> <server> <port>

If QSim is not installed in /usr/local, do the following, assuming the QSim installation path is QSIM_INSTALL.

$ cd SIMULATOR_ROOT
$ QSIM_PREFIX=<QSIM_INSTALL> LD_LIBRARY_PATH=<QSIM_INSTALL>/lib  mpirun -np <NP> <prog> <conf_file> <server> <port>

where

  - <NP> is the number of MPI processes (LPs).
  - <prog> is the simulator program, e.g., smp_llp.
  - <conf_file> is the configuration file for the system model.
  - <server> is the host name of the machine running the QSim server.
  - <port> is the port number used by the QSim server.

For example:

$ mpirun -np 2 smp_llp ../config/conf2x2_torus_llp.cfg localhost 12345

The output of the simulation is stored in files named DBG_LOG<i>, where <i> is 0 to n-1, n being the number of LPs. The output files contain statistics collected by the components assigned to the corresponding LP.



Start the Simulators in QsimProxy

Simulators in QsimProxy use proxy processes placed between the QSim server and the back-end timing simulation to improve performance. The proxies act as clients to the QSim server and obtain instructions from it over TCP/IP. The proxies and the back-end simulation form a producer-consumer relationship over shared memory segments: the proxies put instructions into the shared memory segments, and the back-end simulation removes them. The proxies monitor the contents of the segments; once the contents fall below a threshold, they contact the server to obtain more instructions.
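
The following self-contained C++ sketch illustrates this producer-consumer relationship with threshold-triggered refills. It is conceptual only: an in-process deque and a stub batch generator stand in for the real shared memory segments and the TCP/IP connection to the QSim server, and the names and sizes are arbitrary. Compile with -pthread.

// Conceptual illustration of the proxy/back-end producer-consumer protocol.
#include <condition_variable>
#include <cstddef>
#include <cstdio>
#include <deque>
#include <mutex>
#include <thread>

static std::deque<int> buffer;            // stands in for a shared memory segment
static std::mutex mtx;
static std::condition_variable cv;
static const std::size_t THRESHOLD = 4;   // refill when contents fall below this
static const int TOTAL = 32;              // total instructions in this demo

int main()
{
    std::thread proxy([] {                // producer: the proxy
        int next = 0;
        while (next < TOTAL) {
            std::unique_lock<std::mutex> lk(mtx);
            cv.wait(lk, [] { return buffer.size() < THRESHOLD; });
            // Buffer is running low: "fetch" a batch of instructions
            // (the real proxy asks the QSim server over TCP/IP).
            while (buffer.size() < 2 * THRESHOLD && next < TOTAL)
                buffer.push_back(next++);
            cv.notify_all();
        }
    });

    std::thread backend([] {              // consumer: the back-end core model
        for (int consumed = 0; consumed < TOTAL; ++consumed) {
            std::unique_lock<std::mutex> lk(mtx);
            cv.wait(lk, [] { return !buffer.empty(); });
            std::printf("simulate instruction %d\n", buffer.front());
            buffer.pop_front();
            cv.notify_all();
        }
    });

    proxy.join();
    backend.join();
    return 0;
}

When the back-end drains the buffer below the threshold, the proxy wakes up and tops it up, so the core models rarely stall waiting for instructions.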

The processes should be started in the following order.

  1. Start the QSim server.
  2. Start the proxy processes.
  3. Start the simulator.

To start the QSim server, run the following commands:

$ cd QSIM_ROOT/remote/server
$ make
$ ./qsim-server <port> <state_file> <benchmark>  

where

  - <port> is the TCP port on which the server listens.
  - <state_file> is the QSim state file to load.
  - <benchmark> is the tar file containing the benchmark to run.


After the QSim server has started, start the proxies. Each proxy is a multithreaded process that can serve multiple core models in the back-end. Note that a proxy and the core models it serves must run on the same physical machine, because they communicate through shared memory segments.

$ cd ROOT/models/processor/zesto/proxy_mt
$ make
$ ./proxy <qsim_server> <port> <shared_mem_filename> <shared_mem_size> <core-proxy_map>

where

  - <qsim_server> is the host name of the machine running the QSim server.
  - <port> is the port number used by the QSim server.
  - <shared_mem_filename> is the name of the file used to identify the shared memory segments.
  - <shared_mem_size> is the size of the shared memory segments, in bytes.
  - <core-proxy_map> is a file that maps each core to the machine where its proxy runs.

An example core-proxy map file is as follows:

0  crankshaft
1  crankshaft
2  crankshaft
...
15 crankshaft

This file is for a simulation model that has 16 cores. All the cores are served by a single proxy running on the machine crankshaft.


After the QSim server and the proxies have started, the simulator can be started.

$ cd SIMULATOR_ROOT
$ mpirun -np <NP> <prog> <conf_file> <shared_mem_filename> <shared_mem_size>

where

  - <NP> is the number of MPI processes (LPs).
  - <prog> is the simulator program, e.g., smp_llp.
  - <conf_file> is the configuration file for the system model.
  - <shared_mem_filename> and <shared_mem_size> must match the values passed to the proxy.

For example:

First start the proxy:

$ ./proxy localhost 12345 ~/shm_file 262144 ./core_proxy_map

Then start the simulator:

$ mpirun -np 9 smp_llp ../config/conf4x5_torus_llp.cfg ~/shm_file 262144

Unlike the other simulator programs, with proxies we put two core models in a single MPI process (LP). Therefore, for a 16-core model we use 9 processes (8 for the cores and 1 for the network and memory controllers).

The output of the simulation is stored in files named DBG_LOG<i>, where <i> is 0 to n-1, n being the number of LPs. The output files contain statistics collected by the components assigned to the corresponding LP.



Start the Simulators in QSimLib

Simulators in this subdirectory can only be run with 1 LP, i.e., in sequential mode.

If QSim is installed in /usr/local, do the following.

$ mpirun -np 1 <prog> <conf_file> <state_file> <benchmark>

If QSim is not installed in /usr/local, do the following, assuming the QSim installation path is QSIM_INSTALL.

$ QSIM_PREFIX=<QSIM_INSTALL> LD_LIBRARY_PATH=<QSIM_INSTALL>/lib  mpirun -np 1 <prog> <conf_file> <state_file> <benchmark>

where

  - <prog> is the simulator program, e.g., smp_llp.
  - <conf_file> is the configuration file for the system model.
  - <state_file> is the QSim state file to load.
  - <benchmark> is the tar file containing the benchmark to run.

For example:

$ mpirun -np 1 smp_llp ../config/conf4x1_ring_llp.cfg myState_16 myBench.tar

The output of the simulation is stored in a file named DBG_LOG0, which contains statistics collected by all of the components.



Start the Simulators in TraceProc

These simulators use traces obtained with a PIN-based program.

$ mpirun -np <NP> <prog> <conf_file> <trace_file_basename>

where

  - <NP> is the number of MPI processes (LPs).
  - <prog> is the simulator program, e.g., smp_llp.
  - <conf_file> is the configuration file for the system model.
  - <trace_file_basename> is the base name of the trace files, e.g., myTrace.

For example:

$ mpirun -np 2 smp_llp ../config/conf2x2_torus_llp.cfg myTrace

The output of the simulation is stored in files named DBG_LOG<i>, where <i> is 0 to n-1, n being the number of LPs. The output files contain statistics collected by the components assigned to the corresponding LP.



Selecting Synchronization Algorithm

Manifold supports several synchronization algorithms, including the following:

  - SA_CMB: a conservative algorithm based on the Chandy-Misra-Bryant (CMB) null-message protocol.
  - SA_CMB_OPT_TICK: a CMB variant optimized for ticked simulations.
  - SA_QUANTUM: a quantum-based algorithm in which the LPs synchronize at fixed intervals (quanta).

The default algorithm is SA_CMB_OPT_TICK. The algorithm can be set in the simulator program when calling the Manifold::Init() function.

For example, to set the algorithm to SA_CMB, do the following:

Manifold::Init(argc, argv, Manifold::TICKED, SyncAlg::SA_CMB);


The Quantum algorithm is slightly different: after calling Manifold::Init(), you need to call another function to set the quantum value. For example:

Manifold::Init(argc, argv, Manifold::TICKED, SyncAlg::SA_QUANTUM);
Quantum_Scheduler* sch = dynamic_cast<Quantum_Scheduler*>(Manifold::get_scheduler());  //get the scheduler
assert(sch);
sch->init_quantum(10); //set the quantum to 10 cycles



Common Problems

The following is a list of commonly encountered problems and how to solve them.



References