g_analyze - analyzes data sets
g_analyze -f graph.xvg -ac autocorr.xvg -msd msd.xvg -cc coscont.xvg
-dist distr.xvg -av average.xvg -ee errest.xvg -g fitlog.log -[no]h
-nice int -[no]w -[no]xvgr -[no]time -b real -e real -n int -[no]d -bw
real -errbar enum -[no]integrate -aver_start real -[no]xydy
-[no]regression -[no]luzar -temp real -fitstart real -smooth real
-filter real -[no]power -[no]subav -[no]oneacf -acflen int
-[no]normalize -P enum -fitfn enum -ncskip int -beginfit real -endfit real
g_analyze reads an ASCII file and analyzes data sets. A line in the
input file may start with a time (see option -time) and any number of
y values may follow. Multiple sets can also be read when they are
separated by & (option -n), in this case only one y value is read from
each line. All lines starting with # and @ are skipped. All analyses
can also be done for the derivative of a set (option -d).
All options, except for -av and -power, assume that the points are
equidistant in time.
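The input conventions above can be sketched as follows (an illustrative Python reading of the format, not the g_analyze source; the function name is made up for this example):

```python
# Minimal sketch of the input format: lines starting with # or @ are
# skipped, a leading time column is optional (option -time), and "&"
# separates multiple sets when only one y value appears per line (-n).
def read_sets(lines, with_time=True):
    sets, current, times = [], [], []
    for line in lines:
        line = line.strip()
        if not line or line[0] in "#@":
            continue                      # comment / xmgrace lines are skipped
        if line == "&":                   # set separator (option -n)
            sets.append(current)
            current = []
            continue
        fields = [float(x) for x in line.split()]
        if with_time:
            times.append(fields[0])       # first column is the time
            current.extend(fields[1:])
        else:
            current.extend(fields)
    if current:
        sets.append(current)
    return times, sets
```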
g_analyze always shows the average and standard deviation of each set.
For each set it also shows the relative deviation of the third and
fourth cumulant from those of a Gaussian distribution with the same
standard deviation.
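One plausible reading of this cumulant check, sketched in Python (not the g_analyze source; normalizing by sigma^3 and sigma^4 is an assumption, chosen so both quantities vanish for a Gaussian):

```python
import math

def cumulant_deviations(y):
    # For a Gaussian all cumulants above the second vanish, so the
    # normalized third and fourth cumulants measure non-Gaussianity.
    n = len(y)
    mean = sum(y) / n
    m2 = sum((v - mean) ** 2 for v in y) / n   # central moments
    m3 = sum((v - mean) ** 3 for v in y) / n
    m4 = sum((v - mean) ** 4 for v in y) / n
    sigma = math.sqrt(m2)
    k3 = m3                    # third cumulant
    k4 = m4 - 3 * m2 * m2      # fourth cumulant
    return k3 / sigma ** 3, k4 / sigma ** 4
```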
Option -ac produces the autocorrelation function(s).
Option -cc plots the resemblance of set i with a cosine of i/2
periods. The formula is: 2 (int0-T y(t) cos(i pi t) dt)^2 / int0-T y(t)
y(t) dt.
This is useful for principal components obtained from covariance
analysis, since the principal components of random diffusion are pure
cosines.
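The cosine content of a set can be sketched as below (a Python illustration using simple trapezoidal quadrature and unit time spacing; not the g_analyze source):

```python
import math

def cosine_content(y, i):
    # Resemblance of set y with a cosine of i/2 periods:
    # 2/T (int y(t) cos(i pi t/T) dt)^2 / int y(t)^2 dt,
    # evaluated with trapezoid weights on a unit-spaced grid.
    T = len(y) - 1                        # t runs over [0, T]
    num = den = 0.0
    for t in range(len(y)):
        w = 0.5 if t in (0, T) else 1.0   # trapezoid end-point weights
        c = math.cos(i * math.pi * t / T)
        num += w * y[t] * c
        den += w * y[t] * y[t]
    return 2.0 / T * num * num / den
```

A pure cosine of i/2 periods gives a cosine content of 1 for index i and (near) 0 for other indices, which is why this is a useful diagnostic for principal components of random diffusion.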
Option -msd produces the mean square displacement(s).
Option -dist produces distribution plot(s).
Option -av produces the average over the sets. Error bars can be
added with the option -errbar. The error bars can represent the
standard deviation, the error (assuming the points are independent) or
the interval containing 90% of the points, by discarding 5% of the
points at the top and the bottom.
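The three -errbar choices can be sketched as follows (an illustrative Python version; the function and mode names mirror the option values but are not the g_analyze source):

```python
import math

def error_bars(y, mode):
    # mode "stddev": standard deviation of the points
    # mode "error":  standard error, assuming independent points
    # mode "90":     interval holding the central 90% of the points
    n = len(y)
    mean = sum(y) / n
    if mode == "stddev":
        return math.sqrt(sum((v - mean) ** 2 for v in y) / n)
    if mode == "error":
        return math.sqrt(sum((v - mean) ** 2 for v in y) / n / n)
    if mode == "90":
        s = sorted(y)
        lo = s[int(0.05 * n)]        # discard 5% at the bottom...
        hi = s[int(0.95 * n) - 1]    # ...and 5% at the top
        return (lo, hi)
    raise ValueError(mode)
```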
Option -ee produces error estimates using block averaging. A set is
divided into a number of blocks and averages are calculated for each
block. The error for the total average is calculated from the variance
between averages of the m blocks B_i as follows: error^2 = Sum (B_i -
B)^2 / (m*(m-1)). These errors are plotted as a function of the block
size. Also an analytical block average curve is plotted, assuming that
the autocorrelation is a sum of two exponentials. The analytical curve
for the block average is:
f(t) = sigma sqrt(2/T ( a (tau1 ((exp(-t/tau1) - 1) tau1/t + 1)) +
(1-a) (tau2 ((exp(-t/tau2) - 1) tau2/t + 1)))),
where T is the total time. a, tau1 and tau2 are obtained by fitting
f^2(t) to error^2. When the actual block average is very close to the
analytical curve, the error is sigma*sqrt(2/T (a tau1 + (1-a) tau2)).
The complete derivation is given in B. Hess, J. Chem. Phys.
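The block-averaging error estimate itself is simple to compute; a Python sketch for one block size (g_analyze additionally sweeps the block size and fits the analytical two-exponential curve, which is omitted here):

```python
import math

def block_error(y, block_size):
    # Split the set into m full blocks, average each block, and return
    # error = sqrt( Sum (B_i - B)^2 / (m*(m-1)) ), the error estimate
    # for the total average from the variance between block averages.
    m = len(y) // block_size
    blocks = [sum(y[i * block_size:(i + 1) * block_size]) / block_size
              for i in range(m)]
    B = sum(blocks) / m
    return math.sqrt(sum((b - B) ** 2 for b in blocks) / (m * (m - 1)))
```

For strongly correlated data the estimate grows with the block size until the blocks become effectively independent; the plateau value is the error estimate.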
Option -filter prints the RMS high-frequency fluctuation of each set
and over all sets with respect to a filtered average. The filter is
proportional to cos(pi t/len) where t goes from -len/2 to len/2. len is
supplied with the option -filter. This filter reduces oscillations
with period len/2 and len by a factor of 0.79 and 0.33 respectively.
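The filtering step can be sketched as below (an illustrative Python version with a normalized cosine window; not the g_analyze source, and edge handling is a simplifying assumption):

```python
import math

def cosine_filter_rms(y, flen):
    # Weights proportional to cos(pi t/len) for t in [-len/2, len/2],
    # normalized to sum to 1. The RMS of y minus the filtered average
    # is the high-frequency fluctuation reported by -filter.
    half = flen // 2
    w = [math.cos(math.pi * t / flen) for t in range(-half, half + 1)]
    wsum = sum(w)
    resid = []
    for i in range(half, len(y) - half):   # skip edges for simplicity
        avg = sum(wk * y[i + t]
                  for wk, t in zip(w, range(-half, half + 1))) / wsum
        resid.append(y[i] - avg)
    return math.sqrt(sum(r * r for r in resid) / len(resid))
```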
Option -g fits the data to the function given with option -fitfn.
Option -power fits the data to b t^a, which is accomplished by fitting
to a t + b on log-log scale. All points after the first zero or
negative value are ignored.
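The log-log trick can be sketched in Python (an illustration of the idea, not the g_analyze source): a linear least-squares fit of log y against log t yields the exponent a as the slope and log b as the intercept.

```python
import math

def power_fit(t, y):
    # Fit y = b * t^a by linear regression on log-log scale.
    # All points from the first zero or negative y value on are ignored.
    xs, ys = [], []
    for ti, yi in zip(t, y):
        if yi <= 0:
            break                       # logs undefined; drop the tail
        xs.append(math.log(ti))
        ys.append(math.log(yi))
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (v - my) for x, v in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))   # slope = exponent a
    b = math.exp(my - a * mx)                # intercept gives prefactor b
    return a, b
```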
Option -luzar performs a Luzar & Chandler kinetics analysis on output
from g_hbond. The input file can be taken directly from g_hbond -ac,
and then the same result should be produced.
-f graph.xvg Input
-ac autocorr.xvg Output, Opt.
-msd msd.xvg Output, Opt.
-cc coscont.xvg Output, Opt.
-dist distr.xvg Output, Opt.
-av average.xvg Output, Opt.
-ee errest.xvg Output, Opt.
-g fitlog.log Output, Opt.
-[no]h bool no
Print help info and quit
-nice int 0
Set the nicelevel
-[no]w bool no
View output xvg, xpm, eps and pdb files
-[no]xvgr bool yes
Add specific codes (legends etc.) in the output xvg files for the
xmgrace program
-[no]time bool yes
Expect a time in the input
-b real -1
First time to read from set
-e real -1
Last time to read from set
-n int 1
Read sets separated by &
-[no]d bool no
Use the derivative
-bw real 0.1
Binwidth for the distribution
-errbar enum none
Error bars for -av: none, stddev, error or 90
-[no]integrate bool no
Integrate data function(s) numerically using trapezium rule
-aver_start real 0
Start averaging the integral from here
-[no]xydy bool no
Interpret second data set as error in the y values for integrating
-[no]regression bool no
Perform a linear regression analysis on the data
-[no]luzar bool no
Do a Luzar and Chandler analysis on a correlation function and related
as produced by g_hbond. When in addition the -xydy flag is given the
second and fourth column will be interpreted as errors in c(t) and n(t)
-temp real 298.15
Temperature for the Luzar hydrogen bonding kinetics analysis
-fitstart real 1
Time (ps) from which to start fitting the correlation functions in
order to obtain the forward and backward rate constants for HB breaking
and formation
-smooth real -1
If >= 0, the tail of the ACF will be smoothed by fitting it to an
exponential function: y = A exp(-x/tau)
-filter real 0
Print the high-frequency fluctuation after filtering with a cosine
filter of this length
-[no]power bool no
Fit data to: b t^a
-[no]subav bool yes
Subtract the average before autocorrelating
-[no]oneacf bool no
Calculate one ACF over all sets
-acflen int -1
Length of the ACF, default is half the number of frames
-[no]normalize bool yes
Normalize ACF
-P enum 0
Order of Legendre polynomial for ACF (0 indicates none): 0, 1, 2 or 3
-fitfn enum none
Fit function: none, exp, aexp, exp_exp, vac, exp5, exp7 or exp9
-ncskip int 0
Skip N points in the output file of correlation functions
-beginfit real 0
Time where to begin the exponential fit of the correlation function
-endfit real -1
Time where to end the exponential fit of the correlation function, -1
is till the end
More information about GROMACS is available at www.gromacs.org.
Thu 16 Oct 2008 g_analyze(1)