
NAME

       mdrun_mpi  -  performs  a  GROMACS  simulation  across multiple CPUs or
       systems VERSION 4.0_rc1

SYNOPSIS

       mdrun_mpi -s topol.tpr -o traj.trr  -x  traj.xtc  -cpi  state.cpt  -cpo
       state.cpt  -c  confout.gro  -e ener.edr -g md.log -dgdl dgdl.xvg -field
       field.xvg -table table.xvg -tablep tablep.xvg -tableb table.xvg  -rerun
       rerun.xtc  -tpi  tpi.xvg  -tpid  tpidist.xvg -ei sam.edi -eo sam.edo -j
       wham.gct  -jo  bam.gct  -ffout  gct.xvg  -devout  deviatie.xvg   -runav
       runaver.xvg  -px  pullx.xvg  -pf  pullf.xvg  -mtx nm.mtx -dn dipole.ndx
       -[no]h -nice int -deffnm string -[no]xvgr -[no]pd -dd vector -npme  int
       -ddorder  enum  -[no]ddcheck  -rdd  real -rcon real -dlb enum -dds real
       -[no]sum -[no]v -[no]compact -[no]seppot -pforce real -[no]reprod  -cpt
       real  -[no]append  -maxh  real  -multi  int  -replex  int  -reseed  int
       -[no]glas -[no]ionize

DESCRIPTION

       The mdrun program is the main  computational  chemistry  engine  within
       GROMACS.  Obviously, it performs Molecular Dynamics simulations, but it
       can  also  perform  Stochastic  Dynamics,  Energy  Minimization,   test
       particle   insertion  or  (re)calculation  of  energies.   Normal  mode
       analysis is another option.  In this case mdrun builds a Hessian
       matrix from a single conformation.  For usual Normal Mode-like
       calculations, make sure that the structure provided is properly
       energy-minimized.
       The generated matrix can be diagonalized by g_nmeig.

       This version of the program will only run when using the OpenMPI
       parallel computing library.  See mpirun(1).  Use the normal mdrun(1)
       program for conventional single-threaded operation.
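
       For example (the process count here is illustrative and the file
       names are the built-in defaults), a parallel run could be started
       through mpirun as:

              mpirun -np 8 mdrun_mpi -s topol.tpr -o traj.trr -g md.log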

       The  mdrun  program reads the run input file ( -s ) and distributes the
       topology over nodes if needed.  mdrun produces  at  least  four  output
       files.  A single log file ( -g ) is written, unless the option -seppot
       is used, in which case each node writes its own log file.  The
       trajectory file ( -o ) contains coordinates, velocities and optionally
       forces.  The structure file ( -c ) contains the coordinates and
       velocities of the last step.  The energy file ( -e ) contains
       energies, the temperature, pressure, etc.; many of these quantities
       are also printed in the log file.  Optionally coordinates can be
       written to a compressed trajectory file ( -x ).

       The  option  -dgdl is only used when free energy perturbation is turned
       on.

       When mdrun is started using MPI with more than 1 node,  parallelization
       is used.  By default domain decomposition is used, unless the -pd
       option is set, which selects particle decomposition.

       With  domain  decomposition,  the spatial decomposition can be set with
       option -dd . By default mdrun selects a good decomposition.   The  user
       only  needs  to  change  this  when  the  system is very inhomogeneous.
       Dynamic load balancing is set with the option -dlb , which can  give  a
       significant   performance  improvement,  especially  for  inhomogeneous
       systems. The only disadvantage of dynamic load balancing is  that  runs
       are  no  longer  binary  reproducible,  but  in  most cases this is not
       important.  By default the  dynamic  load  balancing  is  automatically
       turned  on  when the measured performance loss due to load imbalance is
       5% or more.  At  low  parallelization  these  are  the  only  important
       options  for domain decomposition.  At high parallelization the options
       in the  next  two  sections  could  be  important  for  increasing  the
       performance.
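
       For example (process counts are illustrative), dynamic load balancing
       can be forced on while letting mdrun choose the decomposition:

              mpirun -np 16 mdrun_mpi -s topol.tpr -dlb yes

       or the domain decomposition grid can be set explicitly:

              mpirun -np 8 mdrun_mpi -s topol.tpr -dd 4 2 1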

       When  PME  is  used  with  domain  decomposition, separate nodes can be
       assigned to do only the PME mesh calculation; this  is  computationally
       more efficient starting at about 12 nodes.  The number of PME nodes is
       set with option -npme ; this cannot be more than half of the nodes.
       By default mdrun makes a guess for the number of PME nodes when the
       number of nodes is larger than 11 or when it is, performance-wise, not
       compatible with the PME grid x dimension.  But the user should
       optimize npme.  Performance statistics on this issue are written at
       the end of the log file.  For good load balancing at high
       parallelization, the PME grid dimensions should be divisible by the
       number of PME nodes.
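
       For example (the numbers are illustrative), on 16 nodes 4 could be
       dedicated to the PME mesh calculation, leaving 12 nodes for the
       particle-particle work:

              mpirun -np 16 mdrun_mpi -s topol.tpr -npme 4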

       This section lists all options that affect the domain decomposition.

       Option -rdd can be used to set the required maximum distance for  inter
       charge-group  bonded  interactions.   Communication for two-body bonded
       interactions below the non-bonded cut-off  distance  always  comes  for
       free  with  the  non-bonded communication.  Atoms beyond the non-bonded
       cut-off  are  only  communicated  when   they   have   missing   bonded
       interactions;  this  means  that  the  extra  cost  is minor and nearly
       independent of the value of -rdd .  With dynamic load balancing,
       option -rdd also sets the lower limit for the domain decomposition
       cell sizes.  By
       default -rdd is determined by mdrun based on the  initial  coordinates.
       The  chosen  value  will  be  a  balance  between interaction range and
       communication cost.

       When inter charge-group bonded interactions are beyond the bonded  cut-
       off  distance,  mdrun  terminates  with  an  error  message.   For pair
       interactions and tabulated bonds that do not generate exclusions,  this
       check can be turned off with the option -noddcheck .

       When  constraints  are  present,  option -rcon influences the cell size
       limit as well.  Atoms connected by NC  constraints,  where  NC  is  the
       LINCS order plus 1, should not be beyond the smallest cell size.  An
       error message is generated when this happens, and the user should
       change the decomposition or decrease the LINCS order and increase the
       number of LINCS iterations.  By default mdrun estimates the minimum
       cell size required for P-LINCS in a conservative fashion.  For high
       parallelization it can be useful to set the distance required for
       P-LINCS with the option -rcon .

       The  -dds  option sets the minimum allowed x, y and/or z scaling of the
       cells with dynamic load balancing. mdrun will ensure that the cells can
       scale  down  by  at  least  this  factor.  This  option is used for the
       automated spatial decomposition (when not using -dd ) as  well  as  for
       determining  the  number of grid pulses, which in turn sets the minimum
       allowed cell size. Under certain circumstances the value of -dds  might
       need to be adjusted to account for high or low spatial inhomogeneity of
       the system.
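
       As an illustration, the domain decomposition limits described above
       could be set explicitly for a highly parallel run (the values are
       hypothetical and should be tuned for the system at hand):

              mpirun -np 64 mdrun_mpi -s topol.tpr -rdd 1.2 -rcon 1.0 -dds 0.7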

       The option -nosum can be  used  to  only  sum  the  energies  at  every
       neighbor  search  step  and  energy  output  step.   This  can  improve
       performance  for  highly  parallel  simulations   where   this   global
       communication  step  becomes  the  bottleneck.  For a global thermostat
       and/or barostat the temperature  and/or  pressure  will  also  only  be
       updated every nstlist steps.  With this option the energy file will not
       contain averages and fluctuations over all integration steps.

       With -rerun an input trajectory can  be  given  for  which  forces  and
       energies  will  be (re)calculated. Neighbor searching will be performed
       for every frame, unless nstlist is zero (see the .mdp file).
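
       For example (file names are the defaults and purely illustrative), the
       energies along an existing compressed trajectory could be recomputed
       with:

              mpirun -np 4 mdrun_mpi -s topol.tpr -rerun rerun.xtc -e ener.edr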

       ED (essential dynamics) sampling is switched on by using the -ei flag
       followed by an .edi file.  The .edi file can be produced using
       options  in  the  essdyn  menu of the WHAT IF program. mdrun produces a
       .edo file that contains projections of positions, velocities and forces
       onto selected eigenvectors.

       When user-defined potential functions have been selected in the .mdp
       file, the -table option is used to pass mdrun a formatted table
       with potential functions. The file is  read  from  either  the  current
       directory  or  from  the  GMXLIB  directory.  A number of pre-formatted
       tables are provided in the GMXLIB directory, for 6-8, 6-9, 6-10, 6-11
       and 6-12 Lennard-Jones potentials with normal Coulomb.  When pair
       interactions
       are present a separate table for pair  interaction  functions  is  read
       using the -tablep option.
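
       For example, assuming user-supplied table files with the default names
       shown in the FILES section below:

              mpirun -np 4 mdrun_mpi -s topol.tpr -table table.xvg \
                     -tablep tablep.xvg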

       When   tabulated   bonded   functions  are  present  in  the  topology,
       interaction functions are read using  the  -tableb  option.   For  each
       different tabulated interaction type the table file name is modified in
       a different way: before the file extension an underscore  is  appended,
       then  a  b  for bonds, an a for angles or a d for dihedrals and finally
       the table number of the interaction type.
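
       For example (the table numbers are hypothetical), with a tabulated
       bond of table number 0 and a tabulated dihedral of table number 1 in
       the topology, -tableb table.xvg would make mdrun read the files
       table_b0.xvg and table_d1.xvg.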

       The options -px and -pf are used for writing pull coordinates and
       forces in potential of mean force calculations and umbrella sampling.
       See manual.

       With  -multi multiple systems are simulated in parallel.  As many input
       files are required as the number of  systems.   The  system  number  is
       appended  to  the  run  input  and  each  output filename, for instance
       topol.tpr becomes topol0.tpr, topol1.tpr etc.  The number of nodes  per
       system  is  the total number of nodes divided by the number of systems.
       One use of  this  option  is  for  NMR  refinement:  when  distance  or
       orientation  restraints are present these can be ensemble averaged over
       all the systems.
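
       For example (the process count is illustrative), 4 systems could be
       run in parallel on 8 nodes, 2 nodes per system, provided topol0.tpr
       through topol3.tpr exist:

              mpirun -np 8 mdrun_mpi -multi 4 -s topol.tpr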

       With -replex replica exchange is attempted every given number of steps.
       The  number  of replicas is set with the -multi option, see above.  All
       run input files should use a different coupling temperature; the
       order of the files is not important.  The random seed is set with
       -reseed .  The velocities are scaled and neighbor searching is
       performed
       after every exchange.
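
       For example (the numbers are illustrative), a replica exchange run
       with 8 replicas, attempting an exchange every 1000 steps, could be
       started as:

              mpirun -np 8 mdrun_mpi -multi 8 -replex 1000 -s topol.tpr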

       Finally some experimental algorithms can be tested when the appropriate
       options   have   been   given.   Currently   under  investigation  are:
       polarizability, glass simulations and X-Ray bombardments.

       The option -pforce is useful when you suspect a simulation is crashing
       due to excessively large forces.  With this option, the coordinates
       and forces of atoms with a force larger than a given value will be
       printed to stderr.

       Checkpoints containing the complete state of the system are written  at
       regular  intervals (option -cpt ) to the file -cpo , unless option -cpt
       is set to -1.  A simulation can be continued by reading the full  state
       from file with option -cpi .  If no checkpoint file is found, GROMACS
       just assumes a normal run and starts from the first step of the tpr
       file.

       With checkpointing you can also use the option -append to just continue
       writing to the previous output files. This is not  enabled  by  default
       since  it  is  potentially dangerous if you move files, but if you just
       leave all your files in place and restart mdrun with exactly  the  same
       command (with options -cpi and -append ) the result will be the same as
       from a single run. The contents will be binary  identical  (unless  you
       use  dynamic  load balancing), but for technical reasons there might be
       some extra  energy  frames  when  using  checkpointing  (necessary  for
       restarts without appending).

       With  option  -maxh a simulation is terminated and a checkpoint file is
       written at the first neighbor search step where the  run  time  exceeds
       -maxh *0.99 hours.
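
       For example (file names are the defaults and the numbers are
       illustrative), a run limited to just under 24 hours of wall-clock time
       could be started, and later continued with identical output files, as:

              mpirun -np 8 mdrun_mpi -s topol.tpr -cpt 30 -maxh 24

              mpirun -np 8 mdrun_mpi -s topol.tpr -cpi state.cpt -append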

       When  mdrun  receives  a TERM signal, it will set nsteps to the current
       step plus one. When mdrun receives a USR1 signal, it  will  stop  after
       the  next  neighbor  search step (with nstlist=0 at the next step).  In
       both cases all the usual output will be written to file.  When  running
       with MPI, a signal to one of the mdrun processes is sufficient; this
       signal should not be sent to mpirun or to the mdrun process that is
       the parent of the others.
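
       For example (the process id is illustrative), a running parallel job
       can be told to stop cleanly at the next neighbor search step by
       sending USR1 to one of the mdrun_mpi processes:

              kill -USR1 12345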

FILES

       -s topol.tpr Input
        Run input file: tpr tpb tpa

       -o traj.trr Output
        Full precision trajectory: trr trj cpt

       -x traj.xtc Output, Opt.
        Compressed trajectory (portable xdr format)

       -cpi state.cpt Input, Opt.
        Checkpoint file

       -cpo state.cpt Output, Opt.
        Checkpoint file

       -c confout.gro Output
        Structure file: gro g96 pdb

       -e ener.edr Output
        Energy file: edr ene

       -g md.log Output
        Log file

       -dgdl dgdl.xvg Output, Opt.
        xvgr/xmgr file

       -field field.xvg Output, Opt.
        xvgr/xmgr file

       -table table.xvg Input, Opt.
        xvgr/xmgr file

       -tablep tablep.xvg Input, Opt.
        xvgr/xmgr file

       -tableb table.xvg Input, Opt.
        xvgr/xmgr file

       -rerun rerun.xtc Input, Opt.
        Trajectory: xtc trr trj gro g96 pdb cpt

       -tpi tpi.xvg Output, Opt.
        xvgr/xmgr file

       -tpid tpidist.xvg Output, Opt.
        xvgr/xmgr file

       -ei sam.edi Input, Opt.
        ED sampling input

       -eo sam.edo Output, Opt.
        ED sampling output

       -j wham.gct Input, Opt.
        General coupling stuff

       -jo bam.gct Output, Opt.
        General coupling stuff

       -ffout gct.xvg Output, Opt.
        xvgr/xmgr file

       -devout deviatie.xvg Output, Opt.
        xvgr/xmgr file

       -runav runaver.xvg Output, Opt.
        xvgr/xmgr file

       -px pullx.xvg Output, Opt.
        xvgr/xmgr file

       -pf pullf.xvg Output, Opt.
        xvgr/xmgr file

       -mtx nm.mtx Output, Opt.
        Hessian matrix

       -dn dipole.ndx Output, Opt.
        Index file

OTHER OPTIONS

       -[no]h no
        Print help info and quit

       -nice int 19
        Set the nicelevel

       -deffnm string
        Set the default filename for all file options

       -[no]xvgr yes
        Add  specific  codes  (legends  etc.)  in the output xvg files for the
       xmgrace program

       -[no]pd no
        Use particle decomposition

       -dd vector 0 0 0
        Domain decomposition grid, 0 is optimize

       -npme int -1
        Number of separate nodes to be used for PME, -1 is guess

       -ddorder enum interleave
        DD node order: interleave , pp_pme or cartesian

       -[no]ddcheck yes
        Check for all bonded interactions with DD

       -rdd real 0
        The maximum distance for  bonded  interactions  with  DD  (nm),  0  is
       determine from initial coordinates

       -rcon real 0
        Maximum distance for P-LINCS (nm), 0 is estimate

       -dlb enum auto
        Dynamic load balancing (with DD): auto , no or yes

       -dds real 0.8
        Minimum allowed dlb scaling of the DD cell size

       -[no]sum yes
        Sum the energies at every step

       -[no]v no
        Be loud and noisy

       -[no]compact yes
        Write a compact log file

       -[no]seppot no
        Write  separate V and dVdl terms for each interaction type and node to
       the log file(s)

       -pforce real -1
        Print all forces larger than this (kJ/mol nm)

       -[no]reprod no
        Try to avoid optimizations that affect binary reproducibility

       -cpt real 15
        Checkpoint interval (minutes)

       -[no]append no
        Append to previous output files when restarting from checkpoint

       -maxh real -1
        Terminate after 0.99 times this time (hours)

       -multi int 0
        Do multiple simulations in parallel

       -replex int 0
        Attempt replica exchange every given number of steps

       -reseed int -1
        Seed for replica exchange, -1 is generate a seed

       -[no]glas no
        Do glass simulation with special long range corrections

       -[no]ionize no
        Do a simulation including the effect of an X-Ray bombardment  on  your
       system

SEE ALSO

       gromacs(7)

       More   information   about   the   GROMACS   suite   is   available  in
       /usr/share/doc/gromacs or at <http://www.gromacs.org/>.

                                Mon 22 Sep 2008                   mdrun_mpi(1)