Linux: Download Website: wget, curl

By Xah Lee. Date: . Last updated: .

Here's how to download a website, either a single page or the entire site.


Download 1 Web Page

# download a file (the URL here is a placeholder example)
wget https://example.com/file.zip

Download Entire Website

# download website, 2 levels deep, wait 9 sec per page (placeholder URL)
wget --wait=9 --recursive --level=2 https://example.com/

Some sites check the user agent. (User agent basically means browser.) So you might need to add the option --user-agent, like this:

wget --user-agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.96 Safari/537.36' https://example.com/


# download a html page (placeholder URL)
curl -O https://example.com/page.html
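Note the difference between curl's -O and -o options: -O keeps the remote file name, -o lets you pick one. Here's a minimal sketch of both, using a local file:// URL (and a hypothetical file /tmp/page.html) so it runs without any network:

```shell
# create a sample file to "download" (hypothetical path, stands in for a remote page)
printf 'hello\n' > /tmp/page.html

# -o: save under a name you choose
curl -s -o copy.html "file:///tmp/page.html"

# -O: save under the remote name (here, page.html)
curl -s -O "file:///tmp/page.html"
```

Swap the file:// URL for an http:// or https:// one to download from an actual site.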

Download Image Sequence

# download all jpg files named cat01.jpg to cat20.jpg (placeholder URL)
curl -O "https://example.com/cat[01-20].jpg"
# download all jpg files named cat1.jpg to cat20.jpg
curl -O "https://example.com/cat[1-20].jpg"
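The [01-20] range syntax is curl's own URL globbing, not the shell's. If you want to see (or control) exactly which names it expands to, a shell loop with seq -w generates the same zero-padded sequence; this is a sketch where echo stands in for the actual curl call, with example.com as a placeholder host:

```shell
# print the names cat01.jpg .. cat20.jpg (what curl's [01-20] expands to);
# replace echo with: curl -O "https://example.com/cat$i.jpg" to download
for i in $(seq -w 1 20); do
  echo "cat$i.jpg"
done
```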

Other useful options are:

--referer 'https://example.com/'
Set a referer (that is, a link you came from).

--user-agent 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322)'
Set user agent, in case the site needs that.

Note: curl cannot be used to download an entire website recursively. Use wget for that.
