An alternative to the wget command for Linux. How to use curl to download files.

Posted: August 15, 2017. At: 10:45 PM. Post ID: 6523

How the curl command works on Linux. A good alternative to wget.

This post shows how to use the curl command to download a file from the Internet: retrieving a file from a website so that it can then be viewed on the local machine.

ubuntu ~ $ curl <URL> > out.gif
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   211  100   211    0     0     67      0  0:00:03  0:00:03 --:--:--    67
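Redirecting with > works, but curl also has built-in options for saving downloads: -o names the output file explicitly, and -O keeps the remote filename. A minimal sketch of -o follows; the /tmp paths are arbitrary, and a file:// URL stands in for a real web address so the example runs without network access:

```shell
# Create a sample file to act as the "remote" resource for this sketch.
printf 'hello from curl\n' > /tmp/curl-demo.txt

# -s silences the progress meter; -o writes the response body to the named file.
curl -s -o /tmp/curl-out.txt file:///tmp/curl-demo.txt

cat /tmp/curl-out.txt
# prints: hello from curl
```

With a real URL such as https://example.com/picture.gif, `curl -O` would save the file as picture.gif in the current directory.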

The curl -i command includes the HTTP response headers in the output; this is useful for finding out what server software a website is running.

ubuntu ~ $ curl -i https://www.microsoft.com
HTTP/1.1 200 OK
Server: Apache
ETag: "6082151bd56ea922e1357f5896a90d0a:1425454794"
Last-Modified: Wed, 04 Mar 2015 07:39:54 GMT
Accept-Ranges: bytes
Content-Length: 1020
Content-Type: text/html
Date: Tue, 15 Aug 2017 05:02:52 GMT
Connection: keep-alive
<html><head><title>Microsoft Corporation</title><meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7"></meta><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></meta><meta name="SearchTitle" content="" scheme=""></meta><meta name="Description" content="Get product information, support, and news from Microsoft." scheme=""></meta><meta name="Title" content=" Home Page" scheme=""></meta><meta name="Keywords" content="Microsoft, product, support, help, training, Office, Windows, software, download, trial, preview, demo,  business, security, update, free, computer, PC, server, search, download, install, news" scheme=""></meta><meta name="SearchDescription" content=" Homepage" scheme=""></meta></head><body><p>Your current User-Agent string appears to be from an automated process, if this is incorrect, please click this link:<a href="">United States English Microsoft Homepage</a></p></body></html>
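If only the headers are of interest, curl -I (or --head) sends a HEAD request and prints the headers without the body, whereas -i prints the headers followed by the body. A minimal sketch; the file:// URL is a stand-in so the example works without network access:

```shell
# Create a small local file; the path is arbitrary for this sketch.
printf 'header demo\n' > /tmp/curl-head.txt

# -I (--head) asks for headers only; -s hides the progress meter.
# For file:// URLs curl synthesizes headers such as Content-Length.
curl -sI file:///tmp/curl-head.txt
```

Against a real web server, the same -I flag returns the Server, Content-Type, and other headers shown above, without downloading the page itself.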

Here is how to measure the time between requesting data from a web server and actually receiving it.

ubuntu ~ $ echo "`curl -s -o /dev/null -w '%{time_starttransfer}-%{time_pretransfer}' https://www.google.com`" | bc

I got a response back from Google in 0.003 seconds. That is pretty good.

Here is a more comprehensive version of this command.

ubuntu ~ $ curl -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -o /dev/null -s https://www.google.com
Lookup time:    0.509
Connect time:   0.512
PreXfer time:   0.512
StartXfer time: 0.654
Total time:     0.654
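Since these -w timing variables work with any protocol curl supports, the long one-liner can be wrapped in a small shell function and reused. The function name below is an illustrative helper, not a standard tool, and a file:// URL is used so the sketch runs without network access:

```shell
# curl_timing: print curl's timing breakdown for the URL given as $1.
# This is a hypothetical convenience wrapper around the -w one-liner above.
curl_timing() {
    curl -s -o /dev/null -w \
'Lookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' \
        "$1"
}

# A file:// URL avoids the network; in real use, pass an http(s) URL.
printf 'timing demo\n' > /tmp/curl-timing.txt
curl_timing file:///tmp/curl-timing.txt
```

For a local file the lookup and connect times are essentially zero; against a remote server the output looks like the breakdown shown above.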

This gives a detailed breakdown of how fast each stage of the request was. The curl command is very useful for downloading files from the web, and it has many other uses besides. Experiment and see what you can do with it.
