On my personal VPS I host a handful of websites accessed from a variety of domains and sub-domains, as well as a few more involved webapps such as tt-rss. Historically, applications that span multiple programming languages and databases have been a terrible pain to deploy and keep running on a private server, but the arrival of containers has made this a lot easier.
On my server, I wanted to have a web server listening on the standard http/https ports, proxying traffic for a variety of sites and applications based on the domain/sub-domain in the request. Some of these applications would be hosted by containers running on other ports. The following post outlines how to do this with CentOS 7, Nginx, and Docker. I also wanted to be able to connect securely to these sites, so in the examples below you will see references to my LetsEncrypt certificates being used for various sub-domains.
Assumptions
You will need to install nginx and docker.
yum install -y docker nginx
systemctl enable docker && systemctl start docker
systemctl enable nginx && systemctl start nginx
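Depending on your base install you may also need a couple of extra steps: on CentOS 7, nginx comes from the EPEL repository, and the http/https ports may be closed by the default firewalld configuration. If that applies to you, something along these lines should cover it:
yum install -y epel-release                      # nginx is packaged in EPEL on CentOS 7
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload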
This post also assumes you have DNS for your domain(s) pointing to your server, and optionally that you have familiarized yourself with LetsEncrypt and generated the relevant certificates.
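LetsEncrypt itself is out of scope here, but as a rough sketch, the certbot client can request a certificate with webroot validation; the webroot path and domains below are simply the ones used in this post, so swap in your own:
certbot certonly --webroot -w /var/www/sites/rm-rf.ca -d rm-rf.ca -d www.rm-rf.ca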
Static Web Content
Several of the sites I host are just static web content, most notably this blog, which I recently started writing with Hugo (an immense relief after years of Drupal, PHP, and databases). For this kind of content we don't really need Docker; nginx is perfectly capable of hosting it on its own.
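Deploying a Hugo site is just a matter of copying the generated files into the web root nginx will serve from. A minimal sketch, assuming Hugo's default output directory and the web root used in the config below:
hugo                                                  # builds the site into ./public by default
rsync -av --delete public/ /var/www/sites/rm-rf.ca/   # destination matches the nginx root below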
Your main /etc/nginx/nginx.conf should look something like this:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}

http {
    default_type application/octet-stream;
    include      /etc/nginx/mime.types;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile          on;
    keepalive_timeout 65;

    include /etc/nginx/conf.d/*.conf;
}
After this you can drop new config files into /etc/nginx/conf.d/ for each new site/sub-domain you want to host.
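Whenever you add or change one of these files, it is worth validating the configuration and reloading nginx rather than restarting it:
nginx -t                  # checks syntax of the main config and everything under conf.d/
systemctl reload nginx    # picks up the new server block without dropping connections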
My configuration for this blog, including SSL using LetsEncrypt certs, looks like this:
server {
    listen 80;
    listen 443 default_server ssl;
    server_name rm-rf.ca www.rm-rf.ca;

    access_log /var/log/nginx/rm-rf-access.log;
    error_log  /var/log/nginx/rm-rf-error.log;

    ssl_certificate     /etc/letsencrypt/live/rm-rf.ca/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/rm-rf.ca/privkey.pem;

    root  /var/www/sites/rm-rf.ca;
    index index.html;

    location / {
        root /var/www/sites/rm-rf.ca;
    }
}
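With the config in place and nginx reloaded, a quick check from the command line should confirm the site is being served over https:
curl -I https://rm-rf.ca    # expect a 200 and the site's response headers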
Running Web Apps as Containers
For the most part, getting your applications running in containers will need to be an exercise for the reader. For tt-rss I simply used the Docker setup from the clue/ttrss image.
docker run -d --name ttrssdb nornagon/postgres
docker run -d --link ttrssdb:db -p 3001:80 clue/ttrss
You should now have tt-rss running on port 3001. However, you do not need to open this port to the world; nginx is just going to proxy to it over localhost.
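Before touching nginx, you can confirm the containers are up and that tt-rss is answering locally:
docker ps                       # both the ttrssdb and ttrss containers should show as Up
curl -I http://localhost:3001   # tt-rss responding on the host's loopback interface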
You can now add nginx config to forward traffic to it based on the domain in the request, in this example https://ttrss.rm-rf.ca.
server {
    listen 443 ssl;
    server_name ttrss.rm-rf.ca;

    access_log /var/log/nginx/ttrss-access.log;
    error_log  /var/log/nginx/ttrss-error.log;

    ssl_certificate     /etc/letsencrypt/live/rm-rf.ca/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/rm-rf.ca/privkey.pem;
    ssl_session_cache   shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass         http://localhost:3001;
        proxy_redirect     http://localhost:3001 /;
        proxy_read_timeout 60s;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Result
You can now run any open source webapp of your choosing on your VPS, and expose it securely over https without resorting to ports in your URLs. Just get your container running on a local port, and drop a new nginx config in place.