Linux: Download Website: wget, curl

By Xah Lee.

Here's how to download a single web page or an entire website.

wget

Download 1 Web Page

# download a file
wget http://example.org/somedir/largeMovie.mov
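wget also has flags for resuming and renaming downloads. A quick sketch; movie.mov is just an example filename:

# resume a partially downloaded file
wget -c http://example.org/somedir/largeMovie.mov

# save under a different local filename
wget -O movie.mov http://example.org/somedir/largeMovie.mov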

Download Entire Website

# download website, 2 levels deep, wait 9 seconds between requests
wget --wait=9 --recursive --level=2 http://example.org/
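If you want a copy you can browse offline, a few more standard wget options help. A sketch; adjust the depth and wait time to your needs:

# --convert-links : rewrite links in saved pages to point to the local copies
# --page-requisites : also fetch the images and CSS needed to display pages
# --no-parent : never ascend above the starting directory
wget --wait=9 --recursive --level=2 --convert-links --page-requisites --no-parent http://example.org/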

Some sites check the user agent. (The user agent string identifies your browser.) If a site blocks wget's default, add the --user-agent option:

wget http://example.org/ --user-agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.96 Safari/537.36'
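curl's equivalent is the -A (or --user-agent) option. For example (page.html is an arbitrary output name):

# set the user agent; save the page as page.html
curl -A 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.96 Safari/537.36' -o page.html http://example.org/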

What's My User Agent String

To find your own browser's user agent string, type navigator.userAgent in the browser's JavaScript console.

curl

Download Image Sequence

# download all jpg files named cat01.jpg to cat20.jpg
curl -O http://example.org/xyz/cat[01-20].jpg
# download all jpg files named cat1.jpg to cat20.jpg
curl -O http://example.org/xyz/cat[1-20].jpg
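Note: in zsh, square brackets are glob characters, so quote the URL, else zsh reports “no matches found”. Quoting is harmless in bash too:

# quote the URL so the shell passes the brackets through to curl
curl -O 'http://example.org/xyz/cat[01-20].jpg'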

Other useful curl options include -L (follow redirects), -o NAME (save under a chosen filename), and -C - (resume an interrupted download).
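For example (cat.jpg is an arbitrary name; URLs are the placeholders from above):

# follow redirects and save the result as cat.jpg
curl -L -o cat.jpg http://example.org/xyz/cat1.jpg

# resume an interrupted download
curl -C - -O http://example.org/somedir/largeMovie.mov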

Note: curl cannot download an entire website recursively. Use wget for that.


If you have a question, put $5 at patreon and message me.