How the curl command works on Linux. A good alternative to wget.
This shows how to use the curl command to download a file from the Internet. Here I retrieve a file from a website so that I can then view it on my local machine.
ubuntu ~ $ curl http://s2.enemy.org/globe.gif > out.gif
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   211  100   211    0     0     67      0  0:00:03  0:00:03 --:--:--    67
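As a side note, the shell redirection above can also be written with curl's own output flags. Here is a minimal sketch that runs offline using the file:// protocol curl also speaks; the /tmp paths are just placeholders:

```shell
# Create a small local file to stand in for a remote resource:
printf 'hello\n' > /tmp/demo-src.txt

# -o writes the response body to the named file (same effect as "> out.gif"),
# and -s silences the progress meter shown above:
curl -s -o /tmp/demo-copy.txt file:///tmp/demo-src.txt

# -O (capital o) would instead keep the remote filename, e.g.
# "curl -O http://s2.enemy.org/globe.gif" saves the file as globe.gif.
cat /tmp/demo-copy.txt
```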
The curl -i command shows the HTTP response headers along with the body; this is useful for finding out what software a web server is running.
ubuntu ~ $ curl -i http://www.microsoft.com
HTTP/1.1 200 OK
Server: Apache
ETag: "6082151bd56ea922e1357f5896a90d0a:1425454794"
Last-Modified: Wed, 04 Mar 2015 07:39:54 GMT
Accept-Ranges: bytes
Content-Length: 1020
Content-Type: text/html
Date: Tue, 15 Aug 2017 05:02:52 GMT
Connection: keep-alive

<html><head><title>Microsoft Corporation</title><meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7"></meta><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></meta><meta name="SearchTitle" content="Microsoft.com" scheme=""></meta><meta name="Description" content="Get product information, support, and news from Microsoft." scheme=""></meta><meta name="Title" content="Microsoft.com Home Page" scheme=""></meta><meta name="Keywords" content="Microsoft, product, support, help, training, Office, Windows, software, download, trial, preview, demo, business, security, update, free, computer, PC, server, search, download, install, news" scheme=""></meta><meta name="SearchDescription" content="Microsoft.com Homepage" scheme=""></meta></head><body><p>Your current User-Agent string appears to be from an automated process, if this is incorrect, please click this link:<a href="http://www.microsoft.com/en/us/default.aspx?redir=true">United States English Microsoft Homepage</a></p></body></html>
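Related to -i: if you only want the headers and not the body, -I sends a HEAD request instead. A quick sketch; curl reports metadata even for a local file:// URL, so this runs without any network:

```shell
printf 'some page\n' > /tmp/page.txt

# -I fetches headers only (an HTTP HEAD request); the body is not printed.
# Even over file:// curl reports fields such as Content-Length:
curl -sI file:///tmp/page.txt

# Against a real site you would run e.g.:  curl -I http://www.microsoft.com
```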
Here is how to measure the time between requesting data from a web server and actually receiving it.
ubuntu ~ $ echo "`curl -s -o /dev/null -w '%{time_starttransfer}-%{time_pretransfer}' http://google.com/`" | bc
.003
I got a response back from Google in .003 seconds. That is pretty good.
Here is a more comprehensive version of this command.
ubuntu ~ $ curl -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -o /dev/null -s http://google.cn/

Lookup time:    0.509
Connect time:   0.512
PreXfer time:   0.512
StartXfer time: 0.654

Total time:     0.654
This shows a more detailed breakdown of how long each stage of the request took. The curl command is very useful for downloading files from the web, among many other uses. Experiment yourself and see what you can do with it.
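Two more long-standing curl options worth experimenting with are -L (follow redirects) and -C - (resume an interrupted download). A sketch reusing the URLs from above:

```shell
# Follow HTTP redirects; many sites redirect to www. or https:// first:
curl -sL -o page.html http://google.com/

# Resume a partial download where it left off instead of starting over:
curl -C - -o out.gif http://s2.enemy.org/globe.gif
```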
I have a fresh new download tool, also suitable for large downloads: a̅tea
The innovation in a̅tea is that it supports DANE; that is, the certificate check is secured via DNSSEC and a hash retrievable in a TLSA resource record. This is the most secure way to download on the web, provided the site already supports DANE.
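For context on the TLSA record mentioned above: DANE publishes the certificate hash in DNS under a name derived from the port and transport protocol. A sketch of how that name is constructed; the dig query at the end needs network access and the dig tool, so it is left commented out:

```shell
# A TLSA record lives at _<port>._tcp.<hostname>; for HTTPS the port is 443.
port=443
host=www.elstel.org
tlsa_name="_${port}._tcp.${host}"
echo "$tlsa_name"    # _443._tcp.www.elstel.org

# To inspect the record (usage, selector, matching type, certificate data):
# dig +short TLSA "$tlsa_name"
```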
Look here: https://www.elstel.org/atea/