
NAME

       wapiti - a web application vulnerability scanner

SYNOPSIS

       wapiti http://server.com/base/url/ [options]

DESCRIPTION

       Wapiti allows you to audit the security of your web applications.
       It performs "black-box" scans: it does not study the source code of
       the application, but scans the pages of the deployed webapp, looking
       for scripts and forms where it can inject data. Once it has this
       list, Wapiti acts like a fuzzer, injecting payloads to see if a
       script is vulnerable.
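
       For example, a simple scan can be launched by passing the base URL
       (the hostname here is illustrative):

            wapiti http://server.com/base/url/ -v 1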

OPTIONS

       -s, --start <url>
            specify a URL to start with.

       -x, --exclude <url>
            exclude a URL from the scan (for example, logout scripts). You
            can also use a wildcard (*).
            Example: -x "http://server/base/?page=*&module=test", or
            -x "http://server/base/admin/*" to exclude a directory.

       -p, --proxy <url_proxy>
            specify a proxy (-p http://proxy:port/)

       -c, --cookie <cookie_file>
            use cookies from the given cookie file
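            Example (file name is illustrative): -c cookies.txt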

       -t, --timeout <timeout>
            set the timeout (in seconds)

       -a, --auth <login%password>
            set credentials for HTTP authentication (does not work with
            Python 2.4)
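            Example (credentials are illustrative): -a admin%s3cret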

       -r, --remove <parameter_name>
            remove a parameter from URLs

       -m, --module <module>
            use a predefined set of scan/attack options:
                  GET_ALL: only use GET requests (no POST)
                 GET_XSS: only XSS attacks with HTTP GET method
                 POST_XSS: only XSS attacks with HTTP POST method
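            Example: -m GET_XSS launches only XSS attacks through GET requests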

       -u, --underline
            use color to highlight vulnerable parameters in output

       -v, --verbose <level>
            set the verbosity level:
                 0: quiet (default),
                 1: print each url,
                 2: print every attack

       -h, --help
            print help page

EFFICIENCY

       Wapiti is developed in Python and uses a library I made called
       lswww. This web spider library does most of the work. Unfortunately,
       the HTML parser modules within Python only work with well-formatted
       HTML pages, so lswww fails to extract information from badly coded
       webpages. Tidy can clean these webpages on the fly, so lswww will
       give pretty good results. In order to make Wapiti far more
       efficient, you should install:

                 apt-get install python-utidylib python-ctypes
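
       Once installed, a quick sanity check (not part of Wapiti itself)
       confirms that both modules can be imported:

                 python -c "import tidy, ctypes"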

AUTHOR

       Copyright (C) 2006-2007 Nicolas Surribas
       <nicolas.surribas@gmail.com>

       Manpage created by Thomas Bläsing <thomasbl@pool.math.tu-berlin.de>