PICMC installation instructions

Installation of the picmc package

The DSMC / PICMC package is usually shipped as a compressed tar archive, such as picmc_versionNumber.tar.bz2. It can be unpacked with the following command:

tar xjvf picmc_versionNumber.tar.bz2

This will create an installation folder picmc_versionNumber/ containing the required binaries, libraries and script files. The subfolders have the following meaning:

  • picmc_versionNumber/bin = the compiled C++ binaries, including the picmc worker task, the magnetic field computation module bem and the script interpreter rvm.
  • picmc_versionNumber/lib = compiled run-time libraries used by the script interpreter rvm.
  • picmc_versionNumber/scr = scripts used by rvm for scheduling the simulation run and managing the input and output data as well as internal model parameters (including species and cross sections).
  • picmc_versionNumber/sh = Linux shell scripts (bash) for submitting simulation runs.
  • picmc_versionNumber/data = tabulated cross-section data for some species.
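
A quick listing of the new folder can be used to verify that the archive was unpacked completely (output sketched here; the exact set of subfolders may differ between versions):

ls picmc_versionNumber
# expected to contain at least: bin  data  lib  scr  sh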

After unpacking, the folder picmc_versionNumber/ should be moved to an appropriate installation directory, e.g.

mv picmc_versionNumber /opt/picmc

Each user needs to add appropriate environment variable settings to their /home/user/.bashrc file. One possible way is to include the following lines:

export PICMCPATH=/opt/picmc/picmc_versionNumber
export LD_LIBRARY_PATH=$PICMCPATH/lib:$LD_LIBRARY_PATH
export RVMPATH=$PICMCPATH/rvmlib
export PATH=$PICMCPATH/bin:$PICMCPATH/sh:$PATH

The part after PICMCPATH= has to be replaced by the path of your actual installation directory. After changing .bashrc, users need to close their command prompt and open a new one for the changes to take effect.
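
To check that the new environment is active, the following commands can be used as a minimal sanity check (it assumes the binaries rvm, picmc and bem mentioned above are present in $PICMCPATH/bin):

source ~/.bashrc            # or open a new terminal
echo $PICMCPATH             # should print the installation path
which rvm picmc bem         # should resolve to $PICMCPATH/bin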

Simulation runs are invoked directly via the script picmc_versionNumber/sh/rvmmpi. For this to work correctly, the path of your OpenMPI installation directory has to be specified in this file. The respective line can be found in the upper part of the file and looks as follows:

MPI_PATH=/opt/openmpi-2.1.0

Please replace this path with the correct installation directory of OpenMPI for your Linux machine.
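
Instead of editing the file by hand, the line can also be adjusted with a one-liner like the following sketch (it assumes the assignment starts exactly with MPI_PATH= at the beginning of a line in rvmmpi):

sed -i 's|^MPI_PATH=.*|MPI_PATH=/opt/openmpi-2.1.0|' $PICMCPATH/sh/rvmmpi
grep ^MPI_PATH $PICMCPATH/sh/rvmmpi    # verify the change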

Job submission via the SLURM job scheduler

For the SLURM job scheduler, we provide a sample submission script in submit.zip. Unpack this ZIP archive and place the script submit into an appropriate folder for executable scripts, such as /home/user/bin or /usr/local/bin. Don't forget to make the script executable via chmod +x submit.
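
For example, installing the script into a personal bin directory could look as follows (assuming /home/user/bin exists and is included in your PATH):

unzip submit.zip
mv submit /home/user/bin/
chmod +x /home/user/bin/submit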

In the upper part of the script, there is an editable section which has to be adjusted to your actual SLURM installation:

# ======================
# Editable value section
# ======================
 
picmcpath=$PICMCPATH                           # PICMC installation path
partition="v3"                                 # Default partition
memory=1300                                    # Default memory in MB per CPU
walltime="336:00:00"                           # Default job duration in hhh:mm:ss (336 hours -> 14 days), 
                                               # specify -time <nn> of command line for different job duration
MPIEXEC=mpiexec                                # Name of mpiexec executable
MPIPATH=/opt/openmpi-2.1.0                     # Full path to OpenMPI installation
#MAILRECIPIENT=user@institution.com            # Uncomment this to get status emails from SLURM 
                                               # (Needs a running mail server installation such as postfix)
 
QSUB=sbatch
 
# =============================
# End of editable value section
# =============================

It is important to specify the correct OpenMPI installation path in MPIPATH as well as the default SLURM partition in partition. Additionally, the approximate maximum memory usage per task and the maximum run time of the job have to be specified in memory and walltime. With the settings shown above, up to 1.3 GB of memory is allocated per task and the job may run for at most 14 days. These settings can be overridden by command line switches of the submit script, which can be invoked in the following ways:

  • submit -bem 40 simcase: Perform a magnetic field simulation with 40 cores in the default partition.
  • submit -picmc 12 simcase: Perform a DSMC / PICMC simulation run with 12 cores in the default partition.
  • submit -partition special -picmc 60 simcase: Perform a DSMC / PICMC simulation run with 60 cores in a SLURM partition named special.
  • submit -partition special -N 3 -picmc 60 simcase: Perform a DSMC / PICMC simulation run with 60 cores, split evenly over 3 compute nodes.
  • submit -mem 4096 -picmc 8 simcase: Perform a DSMC / PICMC simulation run on 8 cores with a memory allocation of up to 4 GB per core.
  • submit -after NNNN -picmc 24 simcase: Perform a DSMC / PICMC simulation run on 24 cores, but do not start until SLURM job NNNN has completed. The previous job may, for example, be a magnetic field computation; this way a picmc simulation is started automatically once the required magnetic field computation has finished (see the sketch after this list).
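
A typical chained workflow is sketched below; it assumes that submit (or the underlying sbatch) reports the job ID of the magnetic field run, which is then passed to -after:

submit -bem 40 simcase                  # note the reported SLURM job ID, e.g. 12345
submit -after 12345 -picmc 24 simcase   # starts automatically once job 12345 has completed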

All of the above examples assume that your simulation case is given in the file simcase.par in the current directory. By submitting a SLURM job via submit, the following additional files will be created:

  • simcase-out.txt: The screen output (stdout); this file is typically refreshed every 15-30 seconds.
  • simcase-err.txt: Error messages (if any) can be found here. If the simulation crashes or behaves in an unexpected way, it is important to look into this file.
  • simcase.slurm: This is a native SLURM submission script generated by submit according to the settings specified on the command line. If the job needs to be re-submitted with the same settings, this can easily be done via
    sbatch simcase.slurm
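
While a job is running, its status and output can be monitored with standard SLURM and shell tools, for example:

squeue -u $USER             # list your pending and running SLURM jobs
tail -f simcase-out.txt     # follow the screen output of the simulation
scancel <jobid>             # cancel a job if necessary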