There are 2 unix tools that let you download a website from the terminal, or do testing with servers: wget and curl. If you have Perl installed, you also have HEAD.
The major difference between wget and curl is that wget lets you download a site by crawling links, while curl downloads a specific URL or a list of URLs.
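As a quick illustration of that difference (using the same example.org placeholder URL as the rest of this page), curl prints the fetched page to the terminal by default, while wget saves it to a file:

# curl writes the page to stdout
curl http://example.org/

# wget saves the same page to a local file (index.html)
wget http://example.org/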
How to download a single file from a website?
# download a file
wget http://example.org/somedir/largeMovie.mov
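If you only have curl at hand, roughly the same thing can be done with its -O option, which saves the file under its remote name (same placeholder URL as above):

# download the same file with curl, keeping the remote file name
curl -O http://example.org/somedir/largeMovie.mov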
How to download an entire website?
# download website, 2 levels deep, wait 9 sec per page
wget --wait=9 --recursive --level=2 http://example.org/
Some sites check the user agent, so you may need to add this option:
--user-agent='Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)'.
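Put together, a full command might look something like this sketch (the URL, wait time, and depth are just the example values from above):

# crawl 2 levels deep, wait 9 sec per page, and send a browser user agent
wget --wait=9 --recursive --level=2 --user-agent='Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)' http://example.org/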
How to download a numbered file sequence from a website?
# download all jpg files named cat01.jpg to cat20.jpg
curl -O http://example.org/xyz/cat[01-20].jpg
# download all jpg files named cat1.jpg to cat20.jpg
curl -O http://example.org/xyz/cat[1-20].jpg
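curl can also rename the files as it saves them: with a lowercase -o, the token #1 in the output name stands for whatever the [01-20] range matched (the kitty_ prefix here is just an illustrative name):

# save cat01.jpg through cat20.jpg locally as kitty_01.jpg through kitty_20.jpg
curl -o 'kitty_#1.jpg' http://example.org/xyz/cat[01-20].jpg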
Other useful options are:
--referer http://example.org/ → set a referer (that is, the URL of the page you came from)
--user-agent "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322)" → set the user agent, in case the site requires one.
Note: curl cannot be used to download an entire website recursively the way wget can.