Client URL, or simply cURL, is a library and command-line utility for transferring data between systems. It supports a myriad of different protocols and tends to be installed by default on many Unix-like operating systems. Because of its general availability, it is a great choice when you need to quickly download a file to your local system.
To follow along at home, you will need to have the curl utility installed. As mentioned, it’s pretty standard issue on Unix-like operating systems such as Linux and macOS.
If you don’t have the curl command available, please consult your favorite package manager. Even if it’s not installed, your package manager more than likely has it available to install.
The commands we will be issuing are pretty safe: they simply download files from the Internet and are non-destructive in nature. That said, downloading files off of the Internet can be sketchy, so be sure you are downloading from reputable sources.
Also, if you plan to run any scripts you have downloaded, it’s good practice to check their contents before making them executable and running them. A quick cat and a look over the code is often sufficient, depending on the size of the file and your familiarity with the code you’re reviewing.
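For example, a quick review workflow might look something like this (the URL and script name here are purely hypothetical):

$ curl -o setup.sh https://example.com/setup.sh
$ cat setup.sh        # look over the script before trusting it
$ chmod +x setup.sh   # only once you’re satisfied with what you read
$ ./setup.sh

Nothing fancy, but it beats piping a script you’ve never seen straight into your shell.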
Fetching remote files
Out of the box, without any command-line arguments, the curl command will fetch a file and display its contents on the standard output.
Let’s give it a try by downloading the robots.txt file from your favorite development blog:
$ curl https://alligator.io/robots.txt
User-agent: *
Sitemap: https://alligator.io/sitemap.xml
Not much to it! Give curl a URL and it will fetch the resource and display its contents.
Saving remote files
Fetching a file and displaying its contents is all well and good, but what if you want to actually save the file to your system?
To save the remote file to your local system, with the same filename as on the server you’re downloading from, add the --remote-name argument, or simply, -O:
$ curl -O https://alligator.io/robots.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    56  100    56    0     0    251      0 --:--:-- --:--:-- --:--:--   251
Instead of displaying the contents of the file, curl displays a nice little text-based progress meter and saves the file under the remote file’s name. You can check on things with the cat command:
$ cat robots.txt
User-agent: *
Sitemap: https://alligator.io/sitemap.xml
Saving remote files to a specific filename
What if you already had a local file with the same name as the file on the remote server?
Unless you are okay with overwriting your local file of the same name, you will want to add the --output argument (or simply -o) followed by the name of the local file you’d like to save the contents to:
$ curl -o gator-bots.txt https://alligator.io/robots.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    56  100    56    0     0    221      0 --:--:-- --:--:-- --:--:--   221
Which downloads the remote robots.txt file to the locally named gator-bots.txt file:
$ cat gator-bots.txt
User-agent: *
Sitemap: https://alligator.io/sitemap.xml
Thus far, all of the examples have included fully qualified URLs that include the https:// protocol. If you happened to try to fetch the robots.txt file and only specified alligator.io, you would be presented with an error about a redirect, since we redirect requests from http:// to https:// (as one should):
$ curl alligator.io/robots.txt
Redirecting to https://alligator.io/robots.txt
No big deal though, curl has a flag you can pass in. The -L argument tells curl to redo the request to the new location when a 3xx response code is encountered:
$ curl -L alligator.io/robots.txt
User-agent: *
Sitemap: https://alligator.io/sitemap.xml
Of course, if so desired, you can combine the -L argument with some of the aforementioned arguments to download the file to your local system.
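For example, combining -L with -O follows the redirect and saves the file under its remote name. Something like this should do the trick:

$ curl -LO alligator.io/robots.txt

The same goes for combining -L with -o if you would rather pick the local filename yourself.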
curl is a great utility for quickly and easily downloading files from a remote system. While it’s similar to wget in functionality, I find it a bit easier to work with since there’s less to remember in terms of arguments for some of the more common, basic tasks.
Like most of the simple yet powerful command-line utilities we discuss, this post really only covers the tip of the iceberg. With support for many different protocols and the added upload capabilities, curl has a ton to offer.
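As a small taste of the upload side, curl can send form data with a POST request via the -d argument, or upload a file with -T. A quick sketch, with purely hypothetical endpoints:

$ curl -d 'name=alligator' https://example.com/api/form   # POST URL-encoded form data
$ curl -T notes.txt ftp://example.com/uploads/            # upload a file over FTP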
Ready to learn more? Check out the manual page for curl by running:
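$ man curl

You can also run curl --help for a shorter summary of the available options right in your terminal.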