Authors: Aaron Rajala, Altti Paakkala & Otto Punkari
Data is the new oil. You’ve probably heard this saying far too many times. Looking at the market capitalization of the FAANG companies, one could conclude that the saying is true. How is this “new oil” extracted? By surveilling you, of course! Your habits, preferences, lifestyle, personality and financial decisions are extremely valuable data for companies. By using these services, you’re trading your deeply personal data for convenience and fun. So why worry?
Data is being continually centralized into the hands of a few companies. The devastating effects can be felt whenever one of these service providers is hacked and all the personal data it stored ends up on the black market. It doesn’t even take malevolent actors for it to be a bad idea to put every data egg in one data basket.
To combat this trend, the best choice is to host the required services yourself and offer them to friends and family. That is what we have done, and we will show you how you can do it too.
Before starting this project it is best to figure out whether you can fulfill the basic hard requirements. A minimum budget of 50€ is recommended, due to the cost of acquiring a domain name and a virtual machine from a public cloud. The rest of the requirements are “soft”: they aren’t absolutely necessary to succeed, but they will make the project a lot more manageable.
The hard ones
- Internet connection
- Server computer/Virtual machine with public IPv4 address
- Linux server distribution
- Domain name (FQDN)
The soft ones
- Basic/Advanced Linux knowledge
- Basic/Advanced understanding of Internet Protocol stack
- The ability to use search engines correctly
- To understand that the problem is likely between the keyboard and chair
- A habit of documenting your successes/failures
A hosted virtual machine is the easiest way to implement such a project. It is a compromise, however, since the machine is not physically your own. Even so, it is a far better situation than trusting everything to unaccountable companies. The minimum requirements for a hosted virtual machine:
- Processor: 2-4 vCPU cores
- System memory: 8GB RAM
- Storage: 80GB SSD
These should be considered the absolute minimum specification for the server/virtual machine, since multiple services will be running at the same time. Running an email server alone will take over 3 GB of RAM, which leaves almost nothing for anything else. Running out of system memory will slow the computer down so much that it becomes nigh unusable.
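If you are unsure what your machine actually has, the specs above can be checked from the shell on any Linux system with standard tools:

```shell
# Inspect the machine against the minimum specification.
nproc                        # number of available CPU cores
grep MemTotal /proc/meminfo  # total RAM in kB (8 GB is roughly 8,000,000 kB)
df -h /                      # size and free space of the root filesystem
```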
A domain name, such as example.com, is required for this project because SSL certificates can only be issued for known, owned domain names. Almost all of the services we’re using require an SSL certificate to function securely. There are multiple providers from which to buy a domain name.
Domain name providers don’t all serve every international market; domains.google, for example, doesn’t offer its services in Finland, so you might need to look around to find a provider. Most of the highly desirable domains are already taken, so you might need some creativity to find your own. Pricing is usually dictated by the top-level domain, for example .com. For the sake of consistency we’re using example.com as the example domain throughout this guide.
After acquiring a domain from the provider of your choice, this is where you’ll point the domain at your server’s public IP address. Unfortunately, every domain name provider’s tools are a little different, so your experience might not match ours exactly. Pointing example.com at the right IP address begins with selecting Add Record. Then type example.com into the Name input field and your server’s public IPv4 address into the Record input field. Pressing Save Record makes public DNS services direct all queries for example.com to your server.
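Whatever the provider’s interface looks like, the result is a DNS A record. Conceptually, in zone-file notation (the IP address here is a made-up documentation address), it looks like this:

```
example.com.    3600    IN    A    203.0.113.10
```

DNS changes can take anywhere from minutes to hours to propagate. A quick way to check what public DNS currently returns is `dig +short example.com A`, which should print your server’s IPv4 address once the record is live.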
The implementation: Ubuntu Server 20.04 LTS
We chose Ubuntu Server 20.04 LTS for its widespread usage and support. Debian-based distributions are generally more beginner friendly, so one of them should be chosen for a project like this. All public clouds offer a clean install image of Ubuntu Server, which is what we selected from our provider.
Create a new user (with sudo privileges) as root, then use the new user instead of root.
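Sketched as shell commands, the user-creation step looks roughly like this (the username alice is a placeholder; run these as root):

```shell
# Create the user, their home directory, and set a password (interactive).
adduser alice
# Grant sudo privileges by adding the user to the sudo group.
usermod -aG sudo alice
# Switch to the new user; from here on, log in as this user instead of root.
su - alice
```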
After creating the new user that will be used for the project, it is best to update and upgrade the system so it has the latest security patches. This is done by executing the following commands:
sudo apt update
sudo apt upgrade
Ubuntu by default has a software firewall, UFW, available. This should be used to block any unwanted connections from outside. Note that UFW denies all incoming connections by default and takes effect as soon as it is enabled, so allow the required basic connections, SSH and HTTP/S, first to avoid locking yourself out:
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
Then enable the firewall:
sudo ufw enable
This ensures that you can still log into your instance once the firewall is enabled. Finally, restart the server so the earlier upgrades take effect:
sudo reboot now
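After the reboot, UFW can report its own state, which is a quick way to confirm the rules stuck (requires sudo, so this is a sketch to run on your server):

```shell
# Show firewall state and the allow rules added above.
sudo ufw status verbose
# Expected: "Status: active" with 22/tcp, 80/tcp and 443/tcp allowed.
```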
Docker and Docker Compose
Docker is a necessary component for this project, as it is the only way to implement some of the services, so we chose to install it from the start. Docker is an OS-level virtualization platform for running software in packages called containers. These containers can be thought of as “pseudo virtual machines”: they are isolated from one another but share a single system kernel, so they use fewer resources than actual virtual machines. We base this guide on the official documentation.
Set up the repository:
sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
Add Docker’s GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
Set up the stable repository:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Install Docker engine with apt:
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
To list the available versions in your repository, run:
apt-cache madison docker-ce
If you want a specific version instead, specify it by filling in <VERSION_STRING> with the right version:
sudo apt-get install docker-ce=<VERSION_STRING> docker-ce-cli=<VERSION_STRING> containerd.io
To see if your Docker installation was successful, it’s good to run the good ol’ hello world:
sudo docker run hello-world
If the installation was successful, running “hello world” prints a greeting confirming that Docker is working. Congratulations!
It’s not over yet: time to move on to Docker Compose. Execute the following command to download the stable release of Docker Compose:
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
And give it permission to execute:
sudo chmod +x /usr/local/bin/docker-compose
Test the installation:
docker-compose --version
Now that you’ve done the basics, it’s time to spin up services. There is no right or wrong answer about what to do next, so try things and find out what works for you.
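To give a feel for what spinning up a service with Docker Compose looks like, here is a minimal, hypothetical docker-compose.yml that runs a plain nginx web server on port 80 (the service name and image tag are illustrative):

```yaml
version: "3"
services:
  web:
    image: nginx:stable          # official nginx image from Docker Hub
    ports:
      - "80:80"                  # expose container port 80 on the host
    restart: unless-stopped      # come back up automatically, e.g. after a reboot
```

Saved in a directory of its own, it is started with `sudo docker-compose up -d` and stopped with `sudo docker-compose down`.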
Mailcow
Mailcow is a collection of open-source software running on Docker that provides the end user with a modern mail server: SOGo, Postfix, Dovecot etc. A mail server is a prerequisite for services such as Bitwarden. Be warned: this is one of the harder services we implemented in our project. Another notable matter is that Mailcow is a resource-intensive set of programs, requiring at least 3 gigabytes of system memory (RAM) to run continuously.
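As a rough sketch of the installation, based on mailcow’s official documentation at the time of writing (check the current docs before running; these commands need root and network access), the setup boils down to:

```shell
# Clone the mailcow repository and enter it.
git clone https://github.com/mailcow/mailcow-dockerized
cd mailcow-dockerized
# Generate the configuration; this asks for your mail domain (FQDN).
./generate_config.sh
# Pull the container images and start the whole stack in the background.
sudo docker-compose pull
sudo docker-compose up -d
```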