NAME
vw - Vowpal Wabbit -- fast online learning tool
DESCRIPTION
VW options:
-a [ --audit ]
Print weights of features
-b [ --bit_precision ] arg
Number of bits in the feature table
-c [ --cache ]
Use a cache. The default is <data>.cache
--cache_file arg
The location(s) of the cache file.
-d [ --data ] arg
Example set (the input data file)
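For illustration, a minimal training run reads examples from a data file and saves the final regressor; the file names below are hypothetical:
    # train.dat and model.vw are illustrative names
    vw -d train.dat -f model.vw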
--daemon
Read data from port 39523 (daemon mode; see --port)
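As a sketch, a prediction daemon can be started from a previously trained regressor (model.vw is a hypothetical file name); clients then send examples to the listening port:
    # model.vw is illustrative; 39523 is the default port noted above
    vw --daemon --port 39523 -i model.vw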
--decay_learning_rate arg (=0.707106769)
Set the decay factor for the learning rate between passes
-f [ --final_regressor ] arg
Final regressor (output model file)
-h [ --help ]
Print this list of arguments
--version
Version information
-i [ --initial_regressor ] arg
Initial regressor(s) to load
--initial_t arg (=1)
Initial t value
--min_prediction arg
Smallest prediction to output
--max_prediction arg
Largest prediction to output
--multisource arg
Multiple sources for daemon input
--noop
Do no learning
--port arg
Port to listen on
--power_t arg (=0)
Power on t in the learning rate decay schedule
--predictto arg
Host to send predictions to
-l [ --learning_rate ] arg (=0.100000001)
Set the learning rate
--passes arg (=1)
Number of training passes
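Since multiple passes re-read the data, they are normally combined with a cache; a sketch with illustrative file names:
    # the cache (-c) lets passes after the first re-read examples quickly
    vw -d train.dat -c --passes 10 -f model.vw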
-p [ --predictions ] arg
File to output predictions to
-q [ --quadratic ] arg
Create and use quadratic features
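A hedged example, assuming the argument to -q names two namespace first letters ('a' and 'b') declared in the data, whose features are crossed:
    # 'ab' crosses features from namespaces beginning with a and b
    vw -d train.dat -q ab -f model.vw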
--quiet
Don’t output diagnostics
-r [ --raw_predictions ] arg
File to output unnormalized predictions to
--sendto arg
Send examples to <hosts>
-t [ --testonly ]
Ignore label information and just test
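A typical test-only run loads a saved regressor and writes predictions (file names are illustrative):
    # -t ignores labels; -i loads the model; -p writes predictions
    vw -t -d test.dat -i model.vw -p predictions.txt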
--thread_bits arg (=0)
log_2 of the number of threads
--loss_function arg (=squared)
Specify the loss function to be used; the default is squared.
Currently available options are squared, hinge, logistic, and
quantile.
--quantile_tau arg (=0.5)
Parameter tau associated with the quantile loss. Defaults to 0.5
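For instance, to optimize the 0.9 quantile rather than the median (the data file name is illustrative):
    # quantile loss with tau = 0.9
    vw -d train.dat --loss_function quantile --quantile_tau 0.9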
--unique_id arg (=0)
Unique id used for cluster-parallel runs
--compressed
Use gzip format whenever appropriate. If a cache file is being
created, this option creates a compressed cache file. A mixture
of raw-text and compressed inputs is supported when this option
is on
--sort_features
Turn this on to disregard the order in which features have been
defined. This will lead to smaller cache sizes
--ngram arg
Generate N-grams
--skip_gram arg
Generate skip-grams. In conjunction with --ngram, this can be
used to generate generalized n-skip-k-grams.
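For example, assuming the argument to --skip_gram is the number of allowed skips, 2-grams with one skip could be generated as follows (file names are illustrative):
    # bigrams plus 1-skip bigrams over the input features
    vw -d train.dat --ngram 2 --skip_gram 1 -f model.vw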