Category Archives: Platform(s)

On computing platforms – OS X, Linux, the Cloud and the mobile world.

No more pets in IT

Remember the good old days, when IT got a new server? It was a special event – and naturally came the naming game: finding that special name for the server which neatly fitted into the adopted naming scheme (be it Indian cities, Norse mythology or cartoon characters).

This ought to be a thing of the past, but the ceremony still happens regularly in many IT departments, where servers are treated with the same affection as pets – with all the bad side effects that may bring along.

Remember the “superman” server? It must not die – superman will live on forever – no matter how much patching, maintenance and expensive parts replacement it needs, we will care for that special pet server… and we would be wrong to do so.

Modern IT should not be that cozy farm from the 1950s, but find its reflection in modern farming.

From pets to cattle

On a modern farm the cattle aren’t named individually, and – harsh as it may seem – when one of the cows doesn’t perform, it is replaced. The performance of the farm as a whole matters much more than the care and nurture of any individual animal. Rarely are the animals named – perhaps in recognition that they will be replaced, and that the number of animals will be adjusted to align with the requirements of the farm.

Modern IT has all the technology needed to adopt the modern farm metaphor, and should do so as we move to virtual servers, containers, microservices and cloud-based infrastructure.

All these technologies (along with others) enable us to care much less about a specific server or service, and instead focus on which “server templates” are needed to support the services provided by IT – and manage the number of instances needed to meet the requirements posed to IT.

Hardware as Software – From care to control

As servers move from being special gems to commodities, and we may need 10, 50 or 100 small servers in the cloud instead of a single huge “enterprise” spaceship in the server room, a key challenge is our ability to manage and control them – and the tools to do that are also readily available.

Using tools like Chef, Puppet or Docker(files) – as the enabler for the server templates from above – developers can describe a specific server configuration and use this template to produce as many identical copies as needed. Furthermore, as we move to managing a herd of servers, the server templates can easily be managed with the same version control software you already use for your source code.
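As a sketch of the template idea, a hypothetical minimal Dockerfile (base image, package and paths are made up for illustration) describes a web server once; from that single description any number of identical instances can then be started:

```
# A hypothetical server template for a static web site
FROM ubuntu:14.04
# Install the web server into the image
RUN apt-get update && apt-get install -y nginx
# Add the site content from the (version-controlled) repository
COPY site/ /usr/share/nginx/html/
# Run nginx in the foreground as the container's main process
CMD ["nginx", "-g", "daemon off;"]
```

Because the template itself is a small text file, it lives naturally in the same version control system as the rest of the code.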

Using this template model, the developers take control of (and responsibility for) keeping the complete stack needed to run the service coherent, and operations can size (and resize) the resources available as needed.

Finally, as we move to a “cattle” perception of servers, no one should ever need to log in to a specific server and make changes – everything needs to go through the configuration management tools, tracking all changes to the production environment. If a server starts acting up, kill that server and spin up a new one in your infrastructure.

(This post originally appeared on LinkedIn)

Bulk conversion of webp files to png format

Google has come up with a nice new image format called WebP. Support for this format is currently fairly limited, so if you need to use WebP images elsewhere, it might be nice to convert them to a more widely supported format. To do the conversion, Google has made a small tool available called dwebp. The tool, however, only seems to support conversion of a single image, not a batch of images.

Using regular command line magic it’s easy, though. Download the tool, pair it with the find and xargs commands, and you should quickly be on your way. If all the webp files needing conversion to png are in a single directory, simply do this:

find . -name "*.webp" | xargs -I {} dwebp {} -o {}.png

It finds all webp files and converts them one by one. If the initial file name was image.webp, the resulting file will be called image.webp.png (as the command above doesn’t remove the .webp part but only appends .png at the end).

The command assumes the dwebp program is available in your search path. If this isn’t the case, you need to specify the complete path to the program.
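If you’d rather end up with image.png than image.webp.png, a small shell sketch can strip the .webp suffix before appending .png (still assuming dwebp is on your path):

```shell
#!/bin/sh
# Derive the output name by stripping the .webp suffix and appending .png
png_name() {
    printf '%s.png' "${1%.webp}"
}

# Convert every .webp file below the current directory, one by one
convert_all() {
    find . -name '*.webp' | while IFS= read -r f; do
        dwebp "$f" -o "$(png_name "$f")"
    done
}
```

Calling convert_all in the directory holding the images then does the whole batch.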

Watching your Raspberry Pi

So I’ve installed a Raspberry Pi and it’s been running smoothly day in, day out. I’d like it to stay that way, but as the server runs, log files gather lint and databases grow – and wanting to know how CPU and memory are utilized over time, I was looking for a tool which could help me keep an eye on it.

As fun as it might be to build your own solution, I’ve learned to appreciate ready-to-use solutions, and a nice little tool is available called RPi-Monitor. Assuming you run Raspbian, RPi-Monitor is available as a package ready to install through the standard package manager (once you’ve added the package repository).

RPi-Monitor installs a web server on port 8888 and gives you a nice overview of key resources – CPU, memory, disk and more – and even historical graphs are available.

RPi-Monitor is free, but if you find it useful, do consider donating to the author on the website.

Using (Google) Calendar for domains

Here’s a little trick which has proven itself just as useful as it is easy. To most companies, handling domains is a critical task, as losing your domain name may have catastrophic consequences. Handling domains isn’t particularly hard, but some tasks are time-critical and must be handled in due time – luckily Google Calendar provides an easy way to help make sure they are.

(In this little tip, I’m using Google Calendar as the reference, but, Office365 or any other online calendaring system can probably do the same.)

Set up a new Google Calendar on an existing Google Account and call it “domains”.

Whenever a domain name is bought or renewed, make a new entry in the calendar on the expiry date of the domain. Note the domain name in the subject of the calendar entry, and if you buy domains at various registrars, note any details needed (but nothing confidential) in the description field.

The next step is to remove the default pop-up notification and add email notifications instead. Choose which warning horizons you’d like – e.g. 1 month, 1 week and 48 hours – and Google will let you know when the renewal is coming up.

The final step is to invite anyone else who needs to be notified of the domain expiry to the appointment, and make sure their notifications are also set up with the warning horizons they like.

… also applicable to certificates

The calendar notifications can also be utilized for SSL/TLS certificates. When buying or renewing certificates, make an entry on the expiry date and set up notifications as described above. This way your users should never see an expired certificate again.

Beware of DNS

For some time, the server running this site had been acting up. Page loads were slow, access through SSH seemed laggy, and something was absolutely misbehaving.

I’d been trying to figure out what exactly was going on, but nothing really made sense. There was plenty of disk space, memory was reasonably utilized (no swapping), and the CPU load seemed to be less than 0.1 at any time – there was no good reason the server was “turtling” along at such a perceived slow pace.

Thanks to a tip from Henrik Schack, the server is now running at full speed again. It turned out that one of the DNS resolvers used by the machine was in a bad state, and slow, unreliable or dysfunctional DNS causes trouble in all sorts of places. The fix was quite easy: the file /etc/resolv.conf was updated to contain the IPs of the Google Public DNS servers, and once the file was saved, things were back to the rapid normal.
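For reference, a minimal /etc/resolv.conf pointing at the Google Public DNS servers looks like this:

```
nameserver 8.8.8.8
nameserver 8.8.4.4
```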

All computers really need solid, fast DNS servers these days – servers and workstations alike – as auto-updates and the utilization of “cloud resources” of various kinds must have working DNS to reach the destinations they need. If your system starts acting up without any reasonable explanation, DNS could be an easy place to start checking.

Updating Viscosity certificates (on Mac OS X)

When using Viscosity to connect to a corporate network or any other OpenVPN server, you’re probably using certificates with a reasonable lifetime, but at some point the certificate expires and needs to be updated. Replacing the certificate files through the Viscosity interface is quite easy – just edit the connection and replace the certificate files in the appropriate tab.

There is, however, another little trick which may need to be applied before the new certificates work. Viscosity offers to save the certificate password in the Keychain, and I chose to use this feature, which caused a bit of trouble when updating the certificate. While it ought to, Viscosity does not clear the stored password when the certificate is changed, so to get prompted for the new password you need to go into the Keychain Access tool and delete the stored one.

Look for an entry referencing your Viscosity connection and delete the occurrence.
(Screenshot: the relevant entry highlighted in Keychain Access)


Connection debugging tip

Viscosity provides a detailed log, which makes it much easier to debug connection issues. In the OS X menu bar, right-click the Viscosity icon, then choose “Details”. This opens a details window with a button bar; the button to the right shows a fairly detailed log of what Viscosity is doing and provides clues on what to fix. In my case it was a wrong certificate password (“private-key-password-failure”).


Sending mail from a droplet

As stated earlier, this site is now running on a DigitalOcean droplet. A droplet is basically the same as having a “real server”, and a bare-bones machine isn’t born with the ability to handle email – neither receiving nor sending. As a number of web apps require the ability to send mail, I had to set up facilities on the server (or droplet) to handle it.

The “default” way to do this would probably be to install sendmail or postfix, as they are full-featured mail servers, but configuring a mail server and keeping it secure and updated is a nightmare I’d like to avoid. Therefore it was time to look for another option.

Enter msmtp

msmtp is an open-source, light-weight solution which allows your server to send email, or as the project itself describes it:

In the default mode, it transmits a mail to an SMTP server (for example at a free mail provider) which takes care of further delivery.

msmtp project homepage

There are several ways msmtp can be set up, but in this post I’ll just cover the two basic scenarios.



If you have an SMTP server available. Your hosting provider or someone else may provide you with access to a full-featured SMTP server. If this is the case, you can configure msmtp to pass all mail on to that server like this:

# smtp server configuration
# (change smtp.example.com to the SMTP server you have access to)
account  smtp
host     smtp.example.com
port     25
# Default account to use
account default : smtp

As you’re talking to a “real” SMTP server, all options and features should (potentially) be available to you.

If you have a Google account – either a regular Gmail account or a Google Apps account will do just fine. To configure msmtp to use the Gmail SMTP server, use this configuration:

# Gmail/Google Apps
account  gmail
host     smtp.gmail.com
port     587
from     your-address@gmail.com
user     your-address@gmail.com
password enter-password-here!
auth     on
tls      on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Default account to use
account default : gmail

In the above example you need to change “your-address@gmail.com” to an actual Gmail account, and you need to change “enter-password-here!” to the password belonging to the specified Gmail address.

Using Gmail, all mail passed on from msmtp will be sent from the account used in the configuration, and there doesn’t seem to be a way to override this. You may therefore opt to create a specific mail account for this use. You can set a custom Reply-To header in the mails passed through the Gmail SMTP server, which in many cases may help ensure replies get to a proper recipient.

If your site has adopted DMARC, this may not be a suitable option (at least not on the free tier), as they don’t support signing and do not offer dedicated IP addresses for your SPF records.

Testing 1, 2, 3…

Once you’ve set up the msmtp configuration file, it’s time to do some testing. Create a text file called “testmail.txt” with this content:

Subject: Subject for test mail

This is the body content for the test mail.

Change you@example.com to your own actual email address. Then enter from the command line:

cat testmail.txt | msmtp you@example.com

You should receive your test mail shortly.

Setting up an alias

Many unix/linux tools and apps seem to assume that you have sendmail installed and that it is available at /usr/bin/sendmail or a few other locations in the file system. To handle these cases easily, you can create an alias pointing the sendmail name to the msmtp binary like this (the examples should cover most cases):

ln -s /usr/bin/msmtp /usr/sbin/sendmail
ln -s /usr/bin/msmtp /usr/bin/sendmail
ln -s /usr/bin/msmtp /usr/lib/sendmail

Depending on which package manager your installation uses, it may automatically set up these aliases, so do check whether they exist before trying to create them.
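As a sketch, this little loop reports which of the usual sendmail locations already exist, so you only create the symlinks that are actually missing:

```shell
#!/bin/sh
# Report whether a given sendmail path is already present
check_path() {
    if [ -e "$1" ]; then
        echo "exists: $1"
    else
        echo "missing: $1"
    fi
}

# The three locations mentioned above
for p in /usr/sbin/sendmail /usr/bin/sendmail /usr/lib/sendmail; do
    check_path "$p"
done
```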

Setting up with PHP

If you made the aliases as suggested above, it may already work, but you should make the following changes, just to keep things clean and transparent.
Find all applicable php.ini files (you probably have one for the web server and another for the command line):

Add or change the line:

sendmail_path = "/usr/bin/msmtp -t"

Now for some testing. Create a file with the following content (change the example address to your own):

<?php mail("you@example.com", "Test subject", "Test body"); ?>

Now call the file from the command line using the PHP CLI, and then call the file through the web server. In both cases you should receive an email shortly.

Another suggestion…

Apart from running sendmail or postfix, there also seems to be an application similar to msmtp called ssmtp, which offers many of the same features.

Server setup: Setting up a firewall

A firewall is a basic filter that can provide efficient protection for your server by only allowing traffic in and out as the firewall rules permit. Setting up a firewall on an Ubuntu Linux server does not need to be complicated – in fact, the one used in this example is called “Uncomplicated Firewall”.

To get the firewall up and running, make sure it’s installed through the package manager. Log in and switch to a root shell, then install the firewall with this command:

apt-get install ufw

If everything goes okay, the firewall is now installed, but neither configured nor enabled.

Firewall Configuration

I find the easiest way to manage the firewall is through a little script in the root home directory. The beginning of the script could look something like this:

#!/bin/sh
ufw reset
ufw allow from 192.0.2.1
#ufw allow ssh
ufw enable
ufw status

Line 2 resets any existing configuration rules in the firewall.

In line 3, change 192.0.2.1 (a placeholder) to your own fixed IP address, if you have one (you really ought to). This line allows any traffic from your IP address into the server (assuming there is something able to receive it, naturally).

If you don’t have a fixed IP address, remove line 3 and uncomment line 4 instead. It allows SSH connections from any outside IP address to knock on the door – we then rely on the SSH daemon (and its configuration) to reject any unwanted visitors.

Line 5 enables the firewall, and line 6 prints the current status and configuration of the firewall.

Depending on what you are using your server for, you’ll probably need a few more lines in the firewall script. If you’re running a web server, you should at least add a line (just above the “ufw enable” line) allowing web traffic to reach the server:

ufw allow www

Are you using HTTPS on your web server? Then you need to allow that too:

ufw allow https

The simple allow lines above are suitable for publicly accessible services – things the whole world should be able to use. If you need more restrictive or fine-grained rules, UFW allows for that too. The community documentation on UFW over at the Ubuntu site is quite helpful.

Server setup: A user account

So, I’ve been moving the site to a VPS – a Virtual Private Server. A VPS is basically the same as a physical server, just one you can’t have physical access to. When you get your virtual server, it will most likely be set up from a basic disk image with an operating system and a root account. In my case, at DigitalOcean, I chose an Ubuntu Linux image, and here are the first moves you should make after creating the VPS to get basic security in place.

Setting up a user account

At DigitalOcean the server image is deployed, and once it’s ready you get a mail with the root password. Letting root log in over the internet is pretty bad practice, so the first thing you should do is log in (over SSH) and set up a new user. Create the new user with the adduser command and follow the instructions, then start visudo to grant your new user some special powers:

adduser newuser

In the visudo file you want to add a copy of an existing line. Find this line:

root    ALL=(ALL:ALL) ALL

… and make a copy of the line. Change “root” to your newly created login name to grant your new user the right to become root. Save and exit the file.

Now check that you can become root from your new account: first switch to the new user with the command “su - newuser” (change newuser to your new username), then try to switch back to root by writing “sudo su -” and enter the password of your new user account (not the root password – and surely you didn’t use the same, right?). If this succeeds, enter “exit” twice to get back to the initial root shell. The new account is now set up and has the rights to become root.

Setting up SSH

The next step is preventing root from logging in from remote locations (we only want the newly created account from above to be able to log in remotely, and then change to root if needed).

Setup the .ssh directory

Assuming you have an existing SSH key set, start by creating a “.ssh” directory in your new user’s home directory.
Add your public key to the directory (it’s typically called “id_rsa.pub”) and name the copy “authorized_keys”.

Make sure…

  • the .ssh directory and the file in it are owned by your newuser account (not root).
  • the directory permissions are set to 0700 and the file to 0600 (using the chmod command).

You should now be able to log in to the “newuser” account remotely using SSH.
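The steps above can be sketched as a small shell function (the paths and the newuser name are placeholders):

```shell
#!/bin/sh
# Install a public key for a user with the permissions sshd requires.
# Arguments: the user's home directory and the public key file.
install_ssh_key() {
    home_dir=$1
    pubkey=$2
    mkdir -p "$home_dir/.ssh"
    cat "$pubkey" >> "$home_dir/.ssh/authorized_keys"
    chmod 0700 "$home_dir/.ssh"
    chmod 0600 "$home_dir/.ssh/authorized_keys"
    # When run as root on the user's behalf, hand over ownership:
    # chown -R newuser:newuser "$home_dir/.ssh"
}
```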

Reconfiguring the SSH daemon

Assuming your new account is set up and able to log in remotely with SSH, the next step is reconfiguring the SSH daemon to a more secure setup. Open the sshd configuration file with this command (as root):

vi /etc/ssh/sshd_config

The changes you should make are these two:

PasswordAuthentication no
PermitRootLogin no

The first ensures we only allow logins using public-key authentication – no password-only logins. The second denies root logins from remote. If we need root access, we must log in with the regular account and then change to root.

Once the changes are made, make sure they take effect by reloading the SSH daemon with this command (as root):

reload ssh

Once this is completed, please move on and set up a firewall.

The emergency hatch

Should you get into trouble and not be able to get back into your server using SSH, DigitalOcean offers an emergency hatch. If you log into the backend (where you created the VPS), there’s an option to get “console” access to your server. Using this console is as close as you can get to actually sitting at a console next to the machine, and it could be the access you need to fix any misconfiguration or problem preventing you from getting in through regular SSH.