If, like me, you've got something on your network monitoring DNS requests, such as Pi-hole or AdGuard Home, and you have a computer or two or three running Linux, you've probably seen plenty of requests like the above. Odds are they hit your top domains, especially if you have multiple machines running the same distro.
I fiddle with a few different distros but I tend to stick to Ubuntu for my main one. I have three machines on my network running Ubuntu. As you can see, they frequently connect to the domain connectivity-check.ubuntu.com.
Every single OS on every single device – whether it's a computer, smartphone, or IoT device – does pretty much the same thing. For example, Windows uses the rather more cryptic domain msftncsi.com (newer versions use msftconnecttest.com), standard Android pings connectivitycheck.gstatic.com, and even my phone running GrapheneOS – a fork of Android focused heavily on security and privacy – checks in now and then with connectivitycheck.grapheneos.network.
The purpose of these domains is quite self-explanatory: it's simply a way for the device to check if it's connected to the internet. If not, it'll show an error letting you know there's a connection problem, or if it detects a captive portal (e.g. a login page for hotel WiFi) it'll redirect you to that.
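The mechanics are simple: the OS fetches a small HTTP resource from the check domain and looks at the response. A 204 No Content means the internet is reachable, a redirect usually means a captive portal intercepted the request, and anything else means no connectivity. Here's a rough sketch of that classification logic – the function name is my own, and the exact statuses each OS accepts vary:

```shell
#!/bin/sh
# Classify connectivity the way an OS-level check roughly does:
# 204 No Content  -> we reached the real check server, so we're online
# 3xx redirect    -> something (likely a captive portal) rewrote the request
# anything else   -> assume there's no working internet connection
classify_connectivity() {
  status="$1"  # HTTP status code returned by the check URL
  case "$status" in
    204) echo "online" ;;
    30[1278]) echo "captive-portal" ;;
    *) echo "offline" ;;
  esac
}

# In practice the status would come from something like:
#   status=$(curl -s -o /dev/null -w '%{http_code}' http://connectivity-check.ubuntu.com/)
classify_connectivity 204   # → online
classify_connectivity 302   # → captive-portal
classify_connectivity 000   # → offline
```

Note the check is made over plain HTTP rather than HTTPS – a captive portal can't cleanly intercept and redirect a TLS connection, so HTTP is used deliberately here.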
It's often the case that one server or VPS can easily run multiple sites, each with its own domain name. Even though it sacrifices some privacy, SNI is pretty much what we've collectively chosen to use to make this possible while still using TLS (HTTPS, SSL).
This is largely necessary because it's the norm for multiple sites to share the same IP address. When you run multiple sites from a VPS, this can mean anyone connecting to that IP without the right hostname gets shown a TLS certificate for one of the other domains you serve, and regardless, it's easy to run an IP address through a reverse-IP or passive-DNS lookup service to list the domains pointing to it, since DNS records are effectively public.
And this may just be me, but it feels a bit “hacky.” Whenever someone visits your site, the server can work out which vhost the browser wants thanks to SNI, but having mismatched or invalid TLS certs handed out from that same IP to anyone who connects without the right name is just not a “clean” setup.
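One way to tidy this up is a catch-all default server that closes the connection for any request that doesn't match one of your configured names, so scanners and reverse-IP visitors never see a real site. A minimal sketch – the certificate paths are illustrative, and you'd point them at a throwaway self-signed cert:

```nginx
# Catch-all vhost: any request whose SNI/Host header doesn't match a
# configured site lands here. "return 444" is Nginx's special code for
# closing the connection without sending any response at all.
server {
    listen 80 default_server;
    listen 443 ssl default_server;
    server_name _;

    # A dummy self-signed cert lets the TLS handshake complete before
    # Nginx drops the request. (Illustrative paths.)
    ssl_certificate     /etc/nginx/ssl/dummy.crt;
    ssl_certificate_key /etc/nginx/ssl/dummy.key;

    return 444;
}
```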
From the cheapest managed website hosting plan to the most high-end rented server, if you plan to serve content over the internet, HTTPS is no longer optional.
As a result, pretty much every host out there will chuck in a free SSL certificate. Quick pedantic sidenote: although they're still very commonly referred to as SSL certificates, even by experts, no one actually uses the SSL protocol anymore because it's no longer secure. Today what we use is TLS.
Specifically, TLS 1.3 is the most recent version but lacks support in older clients, while TLS 1.2 is older and has had more vulnerabilities, but remains far more widely supported than 1.3; by restricting it to a small list of good, secure ciphers, you can avoid the exploits that have plagued it. Ideally, using only the latest version is best. In fact, if you happen to be reading this a few years after publication, you'll probably be fine dropping TLS 1.2.
But I'll stick to the here and now: 2021. By default, many web servers still enable insecure protocols such as TLS 1.0 and TLS 1.1. These should not be used at all; they're inherently insecure.
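In Nginx this comes down to a couple of directives. A minimal sketch of what the relevant lines might look like – the cipher list below is one common modern choice, not the only correct one:

```nginx
# Only offer TLS 1.2 and 1.3; TLS 1.0 and 1.1 are never negotiated.
ssl_protocols TLSv1.2 TLSv1.3;

# Restrict TLS 1.2 to a short list of strong AEAD cipher suites.
# (TLS 1.3 suites are fixed and are not configured via ssl_ciphers.)
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;

# With only strong ciphers listed, letting the client pick is fine.
ssl_prefer_server_ciphers off;
```

You can sanity-check the result with something like `openssl s_client -connect example.com:443 -tls1_1`, which should fail to handshake once the old protocols are disabled.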
In this post I will assume you either have a server you're setting Nginx up on, or an existing Nginx server whose security you wish to upgrade. I won't go over other important steps in securing a server, such as firewall settings, SSH keys, and automatic updates – I'll cover those in a future article. In this one, I'll give you a few tips that can help prevent exploits just by changing your Nginx config.