Category Archives: Servers

Updates…

It’s been quiet here for a while, but things have been happening behind the scenes. In case you’re wondering, the site (and its surroundings) has seen a number of updates, which may eventually make it into separate posts.

  • I’m running on a DigitalOcean droplet. It was provisioned as an Ubuntu 12.04 LTS, which is dead by now (as in no more updates, including security updates). The server has now been upgraded in place to Ubuntu 16.04 LTS.
  • While messing around with the server, I’ve added IPv6 support.
  • The DNS has been updated to have full support for DNSSEC.
  • My Let’s Encrypt certificates are now renewed automatically, and I’ve added CAA support to the DNS (see the record sketch after this list).
  • The Webserver has been switched from Apache to NGINX.
  • PHP has been switched from the 5.6 series to a modern 7.0.
  • I’m adopting a full Git-backed backup of all server setup and configuration, hosted on Bitbucket.org. It’s not complete, but most config files have been added and are managed through Git.
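
As an illustration of the CAA part – with example.com as a placeholder for your own zone – the DNS record restricting certificate issuance to Let’s Encrypt looks like this:

example.com.  IN  CAA  0 issue "letsencrypt.org"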

That’s the majority of the changes to the site and server over the past few months. With these updates in place, I might get back to producing content for the site.

No more pets in IT

Remember the good old days, when IT got a new server? It was a special event – and naturally there was the naming game: finding that special name for the server which fitted neatly into the adopted naming scheme (be it Indian cities, Norse mythology or cartoon characters).

This ought to be a thing of the past, but the ceremony still happens regularly in many IT departments, where servers are treated with the same affection as pets – with all the bad side effects that may bring along.

Remember the “superman” server – it must not die, superman will live on forever. No matter how much patching, maintenance and expensive parts replacement it needs, we will care for that special pet server… and we would be wrong to do so.

Modern IT should not be that cozy farm from the 1950s, but find its reflection in modern farming.

From pets to cattle

On a modern farm the cattle aren’t named individually, and – harsh as it may seem – when one of the cows doesn’t perform, it is replaced. The performance of the farm as a whole matters much more than the care and nurturing of the individual animals… and rarely are the individual animals named – perhaps in recognition that they will be replaced, and that the herd will be adjusted to align with the requirements of the farm.

Modern IT has all the technology to adopt the modern farm metaphor, and should do so as we move to virtual servers, containers, microservices and cloud-based infrastructure.

All these technologies (along with others) have enabled us to care much less about a specific server or service, and instead focus on which “server templates” are needed to support the services provided by IT – and manage the number of instances needed to support the requirements posed to IT.

Hardware as Software – From care to control

As servers move from being a special gem to a commodity, and we may need 10, 50 or 100 small servers in the cloud instead of a single huge “enterprise” spaceship in the server room, a key challenge is our ability to manage and control them – and the tools to do that are also readily available.

Using tools like Chef, Puppet or Docker(files) – as the enabler for our server templates from above – developers are able to describe a specific server configuration and use this template to produce as many identical copies as may be needed. Furthermore, as we’re moving to manage a herd of servers, the server templates can easily be managed using the standard version control software you already use for your source code.
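
As a minimal sketch of such a server template – a hypothetical Dockerfile, not the exact setup of any particular service – every instance built from it is identical, and the file itself lives happily in version control:

# Hypothetical server template: build as many identical instances
# as needed; never configure a running instance by hand.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y nginx
# nginx.conf is assumed to sit next to this Dockerfile in the repository.
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]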

Using this template model, the developers take control (and responsibility) of making sure the complete stack needed to run the service is coherent, and operations can make sure to size (and resize) the resources available as needed.

Finally, as we move to a “cattle” perception of servers, no one should ever need to log in to a specific server and make changes – it all needs to go through the configuration management tools, tracking all changes to the production environment. If a server starts acting up, kill that server and spin a new one up in your infrastructure.

(This post originally appeared on LinkedIn)

Watching your Raspberry Pi

So I’ve installed a Raspberry Pi and it’s been running smoothly day in, day out. I’d like it to stay that way, but as the server runs, log files gather lint, databases grow – and I’d also like to know how CPU and memory are utilized over time. So I was looking for a tool which could help me keep an eye on all of this.

As fun as it might be to build your own solution, I’ve learned to appreciate ready-to-use solutions, and it seems a nice little tool is available called RPi-Monitor. Assuming you run Raspbian, RPi-Monitor is available as a package ready to install through the standard package manager (once you’ve added the package repository).
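
As a sketch – assuming the RPi-Monitor repository and signing key have been added as described on the project’s website, and that the package is still named rpimonitor – the installation boils down to:

sudo apt-get update
sudo apt-get install rpimonitor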

RPi-Monitor installs a web server on port 8888 and gives you a nice overview of key resources – CPU, memory, disk and more – and even historical graphs are available.

RPi-Monitor is free, but if you find it useful, do consider donating to the author on the website.

Using (Google) Calendar for domains

Here’s a little trick which has proven itself just as useful as it is easy. For most companies, handling domains is a critical task, as losing your domain name may have catastrophic consequences. Handling domains isn’t particularly hard, but there are some tasks that may be time-critical and must be handled in due time – luckily, Google Calendar provides an easy way to help make sure these tasks are handled.

(In this little tip, I’m using Google Calendar as the reference, but Outlook.com, Office365 or any other online calendaring system can probably do the same.)

Set up a new Google Calendar on an existing Google Account and call it “domains”.

Whenever a domain name is bought or renewed, make a new entry in the calendar on the expiry date of the domain. Put the domain name in the subject of the entry, and if you buy domains at various registrars, note any details needed (but nothing confidential) in the description field.

The next step is to remove the default pop-up notification and add email notifications instead. Choose which warning horizons you’d like – e.g. 1 month, 1 week and 48 hours – and Google will let you know when the renewal is coming up.

The final step is to invite anyone else who needs to be notified of the domain expiry to the appointment, and make sure their notifications are also set up with the warning horizons they’d like.

… also applicable to certificates

The calendar notifications can also be utilized for SSL/TLS certificates. When buying or renewing certificates, make an entry on their expiry date and set up notifications as described above. This way you should be able to ensure your users never see an expired certificate again.
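
If you need the exact expiry date for the calendar entry, one way is to read it off the live server with openssl (example.com is a placeholder for your own host):

echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null \
  | openssl x509 -noout -enddate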

Server setup: A user account

So, I’ve been moving the site to a VPS – a Virtual Private Server. A VPS is basically the same as a physical server to which you can’t have physical access. When you get your virtual server, most likely it will be set up with a basic disk image with an operating system and a root account. In my case at DigitalOcean I chose to set up an Ubuntu Linux image, and here are the first moves you should make after creating the VPS to get the basic security in place.

Setting up a user account

At DigitalOcean the server image is deployed, and once it’s ready you get a mail with the root password. Letting root log in over the internet is pretty bad practice, so the first thing you should do is log in (over SSH) and set up a new user. Create the new user with the adduser command and follow the instructions, then start visudo to grant your new user some special powers:

adduser newuser
visudo

In the visudo file you want to add a copy of an existing line. Find this line:

root    ALL=(ALL:ALL) ALL

… and make a copy of it, changing “root” to your newly created login name to grant the new user the right to become root. Save and exit the file.

Now check that you can become root from your new account: first switch to the new user with the command “su – newuser” (change newuser to your new username), then try to switch back to root by writing “sudo su -” and entering the password for your new user account (not the root password – and surely you didn’t use the same one, right?). If this succeeds, enter “exit” twice to get back to the initial root shell. The new account is set up and has the rights to become root.
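
Put together – with “newuser” as the placeholder account name from above – the added sudoers line and the verification steps look like this:

newuser    ALL=(ALL:ALL) ALL

su - newuser      # switch to the new account
sudo su -         # become root using the new account's password
exit              # leave the root shell
exit              # back to the initial root shell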

Setting up SSH

The next step is preventing root from logging in from remote locations (we only want the newly created account from above to be able to log in remotely, and then change to root if needed).

Set up the .ssh directory

Assuming you have an existing SSH key set, start by creating a “.ssh” directory in your new user’s home directory.
Add your public key to the directory (it’s probably called “id_rsa.pub”) and name the file “authorized_keys”.

Make sure…

  • the .ssh directory and the file in it are owned by your newuser account (not root).
  • the directory is set to 0700 and the file to 0600 (using the chmod command) – see the sketch below.
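
With “newuser” as the placeholder account from above, the whole sequence could look like this (run as root, with the public key at hand):

mkdir /home/newuser/.ssh
cp id_rsa.pub /home/newuser/.ssh/authorized_keys
chown -R newuser:newuser /home/newuser/.ssh
chmod 0700 /home/newuser/.ssh
chmod 0600 /home/newuser/.ssh/authorized_keys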

You should now be able to log in to the “newuser” account remotely using SSH.

Reconfiguring the SSH daemon

Assuming your new account is set up and able to log in from remote with SSH, the next step should be reconfiguring the SSH daemon to a more secure setup. Open the sshd configuration file with this command (as root):

vi /etc/ssh/sshd_config

The changes you should make are these two:

PasswordAuthentication no
PermitRootLogin no

The first ensures we only allow logins using public-key authentication – no password-only logins. The second denies root logins from remote. If we need root access, we must log in with the regular account and then change to root.

Once the changes are made, make sure they take effect by reloading the SSH daemon with this command (as root):

reload ssh
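
(That’s the Upstart command from the 12.04 era; on newer Ubuntu releases that use systemd – an assumption worth verifying on your own box – the equivalent should be:)

sudo systemctl reload ssh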

Once this is completed, please move on and set up a firewall.
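
A minimal firewall sketch uses ufw, which ships with Ubuntu – note that you must allow SSH before enabling it, or you’ll lock yourself out:

sudo ufw allow OpenSSH    # or: sudo ufw allow 22/tcp
sudo ufw enable
sudo ufw status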

The emergency hatch

Should you get into trouble and not be able to get back into your server using SSH, DigitalOcean offers an emergency hatch. If you log into the backend (where you created the VPS), there’s an option to get “console” access to your server. Using this console is as close as you can get to actually sitting at a console next to the machine, and it could be the access you need to fix any misconfiguration or problem preventing you from getting in through regular SSH.

Moving the site

This site (and my other site, in Danish) have been hosted on a cheap shared hosting site for a few years. As shared hosting platforms go, the service and features at GigaHost were quite reasonable, but their servers seemed continuously overloaded and the site had issues from time to time. I’ve now moved everything from the shared hosting platform to the smallest available VPS at DigitalOcean.

Why the move?

  • Performance on shared hosting platforms never seems to amaze.
  • A limited set of features – no shell access and a simplistic self-care interface; the features were reasonable, but limited.
  • It was dirt cheap when I moved in, but isn’t anymore – the VPS is actually priced lower.

How did I move the site?

The various parts of the move will probably be described in detail in further posts on the site in the foreseeable future, but basically the steps included:

  • setting up an account at DigitalOcean and creating a droplet.
  • setting up a user account, getting a firewall up and running, securing a few items.
  • installing a web server and MySQL.
  • moving the data from the shared hosting platform (databases and code) to the new web server.
  • testing everything works by hacking the local hosts file (sketched after this list).
  • redirecting DNS to point to the new site.
  • deleting all stuff from the shared hosting platform once everything has been verified to work as expected.
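
The hosts-file trick is a single line in /etc/hosts on your local machine, making your browser resolve the domain to the new server before DNS changes – here with 203.0.113.10 as a placeholder for the droplet’s IP and example.com for the domain:

203.0.113.10    example.com www.example.com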

What comes next…

Running my own server opens a lot of interesting new possibilities. I’m no longer running Apache (which was mandatory previously); now I’m running nginx, which seems much more lightweight. I’m also running New Relic, which seems to provide amazing insights into how the server resources are utilized.

My first experiments on this server have been focused on getting the old stuff up and running. You might notice that the site is running somewhat faster (and I’m still tweaking things).

I expect to be able to use this server to experiment with node.js, Ruby and other interesting stuff… and the community help pages at DigitalOcean seem quite amazing.


Caution: Here be dragons!

Running your own server (virtual or real) is slightly more complicated than being just another guest on a shared hosting platform. While I do feel reasonably fit on a Linux platform (and run it as my daily desktop), I’ve been blessed with hints and help from a friend throughout the process, which made the move considerably faster (and the settings far more secure from the outset).

I’m sure I’ll run into some trouble along the way – I even managed to – almost – shut myself out of my virtual server once, as I only allowed for SSH access, but seemed to have deleted all the public keys needed on the server to let myself back in.

ftp on OSX Lion

While it really isn’t secure by any measure, ftp is a very useful way of moving files around. Apple’s OS X has a built-in basic ftp server, but in Lion (version 10.7) the option seems to have disappeared from the user interface. The server is still available under the hood if you need it.

To enable the ftp-server (make it available), enter this command in a terminal window:

sudo launchctl load -w /System/Library/LaunchDaemons/ftp.plist

From then on, use this command to start the ftp-server:

sudo launchctl start com.apple.ftpd

and use this command to stop the ftp-server:

sudo launchctl stop com.apple.ftpd

To remove (the availability of) the ftp-server, issue this command:

sudo launchctl unload /System/Library/LaunchDaemons/ftp.plist

  • If you need the ftp-server from time to time, you should probably not remove it, but just stop it when it’s not being used.
  • If you often need an ftp-server, you should probably look at a more full-featured ftp-server (such as pure-ftpd).
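
If you go the pure-ftpd route, Homebrew is probably the easiest path on OS X (assuming you have Homebrew installed and that a pure-ftpd formula is available):

brew install pure-ftpd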

HTTPS, SSL, TLS – What it does

While surfing the net, you often come across web agencies who promote SSL certificates (or TLS security) on their products – or their ability to create “secure web applications” with SSL. Most users know HTTPS/SSL/TLS as the little lock that promises “security” when visiting a page – but what kind of security it actually provides is rarely explained, and far worse, often misunderstood.

While SSL is the popular name (and what it was once called), and HTTPS is usually how users see it (as part of a URL in the browser), the correct name is TLS, short for Transport Layer Security.

TLS provides point-to-point security between the browser and the server. It makes certain no one can see the traffic (/data) sent between the two parties. Simply put, it provides a secure tunnel/pipe which no one can listen in on. Almost everyone understands TLS to this point.

Many users, however, think it provides more than this: that TLS provides protection from malware infections, voids the dangers of cross-site scripting attacks and other dangers of the web – and TLS does not provide anything of the sort.

While the security it provides is good and solid, it is important to understand the scope and purpose of TLS/SSL: it’s often an important part of the security infrastructure of a web application, but only a part of it.