All posts by Flemming Mahler

Are you ready for transparency?

Running a modern IT platform is rarely an easy or isolated task. Most platforms consist of a fairly large number of components, ranging from the OS level through third-party libraries to components added in the user-facing layers – and adding numerous integrations on top makes it an interesting challenge to quickly identify and correct bugs and errors.

While the system complexity does pose a challenge, it is surely not an impossible task, as several tools exist for most – if not all – platforms to instrument the platform and use that instrumentation to manage it and identify issues quickly.

Instrumentation provides insight…

Instrumentation tools are generic tools which allow you to identify errors in your production environment and provide sufficient context and debug information for developers to diagnose, understand and fix the issues identified. Examples of such tools include AppDynamics, New Relic, Stackify and many others. If you’re the do-it-yourself type, it’s not unfeasible to build a tool yourself by hooking into error handlers and other hooks exposed by the specific platform to be instrumented.

Having worked with various degrees of instrumentation for 10+ years – home-built and purchased tools alike – I can certainly confirm that such tools work and allow you to mature a complex IT platform much quicker, as the insights provided from a live production environment allow you to attack the most frequently occurring errors experienced by real users of the system.

Test suites are great for minimizing risk during development, but test suites are based on assumptions about how users and data act in your platform. While the errors identified over time certainly help minimize risks in new development, they are “theory”, as opposed to instrumentation, which is much more “practice”.


Transparency not needed

While the tools for instrumentation may be readily available for most platforms, their “natural use” – even in an enterprise setting – seems surprisingly low, and I suspect numerous reasons exist.

“We do not need it” is often the most common. As set procedures exist and seem to work, why introduce a new tool to provide data we already have? Error logs, end-user descriptions and screenshots have been used for decades – why should there be a better method?

“It introduces risk” is another often-cited concern. As instrumentation tools are not considered a needed part of the IT platform, operations may oppose adding them to an already complicated stack – especially if the value of the instrumentation is not known or recognized.

“It is expensive” is another misconception. Instrumentation often doesn’t provide any direct business value (assuming your IT platform isn’t burning and users aren’t leaving rapidly). Most of the value offered by instrumentation tools lies in fixing issues faster and keeping the scope of issues smaller, and as such it’s often hard to prove the value of issues not occurring.

Transparency not desired

Apparently many people firmly believe that issues neither seen nor reported are not real issues and do not exist. Gaining insight into one instrumented platform while running a black-box platform next to it may cause the false belief that the black-box system is running more stably and with fewer issues than the transparent system.

The reason is simply that on black-box systems (that is, systems without any instrumentation tools to monitor their actual behaviour) it is rare to proactively examine log files and the other places where the black box might emit issues. Only when an issue is reported are developers assigned to examine these sources to resolve it.

Gaining insight into an IT platform through instrumentation and being able to resolve the “real” issues experienced by your users should be a fantastic thing, but beware that many people implicitly seem to believe that if you don’t monitor for errors and issues, they probably don’t exist – however false that is.

Accidental Architecture

Most IT departments have the best intentions of providing quality, coherent solutions, yet market conditions, projects running in parallel and various constraints on budgets, resources or time often cause what might be defined as accidental architecture.

The easiest way to identify cases where you’ve been hit by accidental architecture is to describe your IT architecture and look for the words “except” or “but not”. Typical examples include: “We have a single sign-on system utilized everywhere except…”, “We update all systems to current versions, but not this old system…”.

Accidental architecture seems to be caused by a few main drivers:

  1. Lack of overview
  2. Lack of time
  3. Lack of resources

Lack of overview

When the root cause is lack of overview, the case is often that decisions and designs are implemented without understanding the entire scope of the problem – or the existing procedures and architecture in place. While a good, coherent architecture seems to have been designed, it turns out that existing complexities which weren’t known or addressed cause issues.

Lack of time

Deadlines often seem to be the cause of many issues – and no matter how much time you assign to a project, you’ll often need just a little more. As a deadline approaches, shortcuts are made to meet it – shortcuts which are assumed to be fixed in the next version, but are often forgotten and abandoned – until issues arise.

Lack of resources

The issues caused by lack of time and lack of resources may seem similar, but they are different. When lack of time causes the issue, the problem could have been solved; lack of resources often occurs when budget constraints or lack of knowledge cause an architecture to be chosen which may not be the right solution to the problem at hand.

The lack-of-resources issue often occurs when projects are expected to drive enterprise changes – merging billing systems, decommissioning legacy platforms and other things which should be done, but which a product development project may not be able to facilitate.

The first step is to realize there is a problem…

While many organizations fail to realize the existence and volume of these accidents, most actually seem to have a fair share of them – and, if not handled, probably a growing volume.

Once you’ve realized you have cases of accidental architecture, make sure you add them to your technical debt list and have a plan for what to do about them. While “pure” technical debt most often causes operational issues, accidental architecture usually causes customer-facing issues, yet it is not recognized as being as severe as the operational issues caused by technical debt.

The issues introduced by accidental architecture are typically complexity, slowly rising operational costs and increased user-support costs. To keep your IT domain alive and moving forward, time and resources must continuously be found to address and resolve the accidents.

Three points on the costs of COTS

It seems to be quite popular to move away from custom-built IT solutions to so-called COTS – commercial off-the-shelf solutions. The idea is that the software fulfils functionality which has long been commoditized and standardized to such an extent that it offers no “competitive edge” nor core value to the business.

For most companies and organizations, the office suite is a pretty safe bet for a piece of software magnificently suited to a COTS solution. Finding someone who develops a word processor in-house seems crazy, as so many fully capable solutions exist in the market.

As time passes, more software seems to fall into the category of what may be commoditized, and custom solutions are replaced by standard solutions which provide an adequate and capable alternative in areas previously served by custom code.

The drive to COTS software seems to be a hard challenge for many organizations, as the primary driver in most COTS adoption projects seems to be pressure from the accountants and a mistrust of the IT department’s ability to choose and deliver the best-fit solutions for the rest of the business.

The number of failed Microsoft Office implementations one hears about seems fairly small, yet the number of failed ERP projects seems endless. The scope of this post is not to address when or how to choose COTS solutions, but simply to make the point that the choice of COTS is often naive and not fully understood ahead of the decision itself.

  • When adopting COTS you’re tied to the options and customizations offered by the chosen COTS software. You should never expect to be able to force the solution to adapt to your organization and processes; instead, be prepared to adapt the organization and processes to fit within the options offered by the chosen software.
  • Choosing COTS is a strategic commitment to the vendor of the software within the scope where the COTS solution has been adopted. Once it is implemented, the adopting organization is committed to following the roadmap and direction the vendor chooses – as switching to another solution is often a large and challenging project.
  • When adopting COTS you’re committing to follow along. All versions of software have a limited life cycle, and as new versions are released you’re expected to follow along – at a pace suitable for your organization and within the roadmap offered by the vendor (in terms of support and upgrade paths).

While COTS software seems like a cheap and easy solution for many areas within an organization, the three points above are forgotten too often and cause problems with standard COTS solutions again and again.

Coming back to Microsoft Office, it seems all organizations are more than capable of staying within the possibilities offered by Word, Excel and “friends”. As Office documents seem to be the standard exchange format, there is an implicit drive to move the organization to current versions of the software and the new options they offer.

When COTS implementations fail, it often seems that the organization was unwilling to adapt to the options offered by the chosen COTS software – thus breaking the core idea of COTS as a commoditized solution.

Many organizations also seem to forget the commitment to follow the COTS vendor, and often end up using dangerously outdated software versions, as no budget exists to update, or too many customizations have been made (see the points above) for an easy upgrade to current versions.

While COTS may offer solutions for many areas in the organization, be warned – there is no free lunch. COTS does not only come with an initial implementation price; it also comes with commitment.

Bulk conversion of webp files to png format

Google has come up with a nice new image format called webp. Currently support for this format is fairly limited, so if you need to use webp images elsewhere it might be nice to convert them to a more widely supported format. For the conversion, Google has made a small tool available called dwebp. The tool, however, only seems to support conversion of a single image, not a batch of images.

Using regular command-line magic it’s easy though. Download the tool, pair it with the find and xargs commands, and you should quickly be on your way. If all the webp files needing conversion to png are in a single directory, simply do this:

find . -name "*.webp" | xargs -I {} dwebp {} -o {}.png

It finds all webp files and converts them one by one. If the initial file was named image.webp, the resulting file will be called image.webp.png (as the command above doesn’t remove the .webp extension but only appends .png at the end).

The command assumes the dwebp program is available in your path. If this isn’t the case, you need to specify the complete path to the program.
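If you’d rather end up with image.png instead of image.webp.png, a small shell loop can strip the extension before handing the name to dwebp. This is just a sketch, under the same assumption that dwebp is on your path:

```shell
# Convert every .webp file in the current directory to .png,
# dropping the .webp part from the output name.
for f in *.webp; do
  [ -e "$f" ] || continue         # skip if no .webp files match
  dwebp "$f" -o "${f%.webp}.png"  # image.webp -> image.png
done
```

The `${f%.webp}` parameter expansion removes the trailing .webp, so the output name comes out clean.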

Watching your Raspberry Pi

So I’ve installed a Raspberry Pi and it’s been running smoothly day in, day out. I’d like it to stay that way, but as the server runs it gathers lint in log files and databases grow – and wanting to know how the CPU and memory load develops over time, I was looking for a tool which could help me solve this problem.

As fun as it might be to build your own solution, I’ve learned to appreciate ready-to-use solutions, and it seems a nice little tool is available called RPi-Monitor. Assuming you run Raspbian, RPi-Monitor is available as a package ready to install through the standard package manager (once you’ve added the package repository).

RPi-Monitor installs a web server on port 8888 and gives you a nice overview of key resources – CPU, memory, disk and more – and even historical graphs are available.

RPi-Monitor is free, but if you find it useful, do consider donating to the author on the website.

Using (Google) Calendar for domains

Here’s a little trick which has proven itself just as useful as it is easy. To most companies, handling domains is a critical task, as losing your domain name may have catastrophic consequences. Handling domains isn’t particularly hard, but there are some time-critical tasks that must be handled in due time – luckily Google Calendar provides an easy way to help make sure these tasks are handled.

(In this little tip I’m using Google Calendar as the reference, but Office 365 or any other online calendaring system can probably do the same.)

Set up a new Google Calendar on an existing Google Account and call it “domains”.

Whenever a domain name is bought or renewed, make a new entry in the calendar on the expiry date of the domain. Note the domain name in the subject of the entry, and if you buy domains at various registrars, note any details needed (but not confidential) in the description field.

The next step is to remove the default pop-up notification and add email notifications instead. Choose which warning horizons you’d like – e.g. 1 month, 1 week and 48 hours – and Google will let you know when the renewal is coming up.

The final step is to invite anyone else who needs to be notified of the domain expiry to the appointment, and make sure that their notifications are also set up with the warning horizons they like.

… also applicable to certificates

The calendar notifications can also be utilized for SSL/TLS certificates. When buying or renewing certificates, make an entry on their expiry date and set up notifications as described above. This way you should be able to ensure your users never see an expired certificate again.

Beware of DNS

For some time, the server running this site had been acting up. Page loads were slow, access through SSH seemed laggy, and something was absolutely misbehaving.

I’d been trying to figure out what exactly was going on, but nothing really made sense. There was plenty of disk space, memory was reasonably utilized (no swapping) and the CPU load seemed to be less than 0.1 at any time – there was no good reason the server was “turtling” along at such a perceived slow pace.

Thanks to a tip from Henrik Schack, the server is now running at full speed again. It turned out that one of the DNS resolvers used by the machine was in a bad state – and slow, unreliable or dysfunctional DNS causes trouble in all sorts of places. The fix was quite easy: the file /etc/resolv.conf was updated to contain the IPs of the Google Public DNS servers, and once the file was saved things were back to the rapid normal.
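For reference, the updated /etc/resolv.conf ends up looking something like this (8.8.8.8 and 8.8.4.4 are the Google Public DNS addresses):

```
nameserver 8.8.8.8
nameserver 8.8.4.4
```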

All computers really need solid, fast DNS servers these days – be they servers or workstations – as auto-updates and “cloud resources” of various kinds must have working DNS to reach the destinations they need. If your system starts acting up without any reasonable explanation, DNS could be an easy place to start checking.

Viewing EML files

As mails bounce around, some email programs (I’m looking at you, Microsoft) seem to package forwarded mails in attachments with the extension .eml.

On Linux…

While Mozilla Thunderbird should be able to read them (as should Evolution), that requires you to have the mail application available on your machine – and I haven’t; I’m doing just fine with GMail in the browser. So far the best solution I’ve found – assuming the files are trivial and non-sensitive – is an online viewer. My preferred one is the free viewer from Encryptomatic. It handles the mails quite nicely: it restores the formatting to something quite readable and even handles embedded images and attachments within the .eml file.

On Windows…

If you’re using Windows Live Mail or any other mail application running on Windows, it can probably handle the .eml files. Another option is to look for an app, as there seem to be several apps on Windows which render .eml files with no issues.

A little trick (with a browser)

When using Windows – even in a VirtualBox – there’s an easy little trick you can use: save the file and simply rename the file extension from “.eml” to “.mht”, then open the file with Internet Explorer. It should render perfectly.

Once the .eml file is renamed to .mht, Google Chrome and Firefox seem able to render the contents too – though their handling of images and attachments seems much less graceful.
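The rename itself is of course a one-liner from a shell. A small sketch – “message.eml” is just a placeholder name, created here so the example is self-contained:

```shell
# Create a stand-in for a saved .eml attachment, then rename it to .mht
# so Internet Explorer (or another browser) will open it.
printf 'From: someone@example.com\n\nHello\n' > message.eml
mv message.eml message.mht
```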

Updating Viscosity certificates (on Mac OS X)

When using Viscosity to connect to a corporate network or any other OpenVPN server, you’re probably using certificates with a reasonable lifetime, but at some point the certificate expires and needs to be updated. Replacing the certificate files through the Viscosity interface is quite easy – just edit the connection and replace the certificate files in the appropriate tab.

There is, however, another little trick which may need to be applied before the new certificates work. Viscosity offers to save the certificate password in the Keychain, and I chose to use this feature, which caused a bit of trouble when updating the certificate. While it ought to, Viscosity does not clear the stored password when the certificate is changed, so to get prompted again you need to go into the Keychain Access tool and delete the stored password.

Look for the Viscosity password entry in Keychain Access and delete the occurrence.


Connection debugging tip

Viscosity provides a detailed log, which makes it much easier to debug connection issues. In the OS X menu bar, right-click the Viscosity icon and choose “Details”. This opens a details window with a button bar; the button to the right shows a fairly detailed log of what Viscosity is doing and provides clues on what to fix. In my case, it was a wrong certificate password issue (“private-key-password-failure”).


Sending mail from a droplet

As stated earlier, this site is now running on a DigitalOcean droplet. A droplet is basically the same as having a “real server”, and a bare-bones machine isn’t born with the ability to handle email – receiving or sending. As a number of web apps require the ability to send mail, I had to set up facilities on the server (or droplet) to handle mail.

The “default” way to do this would probably be to install sendmail or postfix, as they are full-featured mail servers, but configuring a mail server and keeping it secure and updated is a nightmare I’d like to avoid. Therefore it was time to look for another option.

Enter msmtp

msmtp is an open-source, lightweight solution which allows your server to send email – or as the project itself describes it:

In the default mode, it transmits a mail to an SMTP server (for example at a free mail provider) which takes care of further delivery.

msmtp project homepage

There are several ways msmtp can be set up, but in this post I’ll just cover the two basic scenarios.



If you have an SMTP server available. Your hosting provider or someone else may provide you with access to a full-featured SMTP server. If this is the case, you can configure msmtp to pass all mail on to that server like this:

# smtp server configuration
account smtp
# the host below is a placeholder - use your provider's SMTP server name
host smtp.example.com
port 25
# Default account to use
account default : smtp

As you’re talking to a “real” SMTP server, all options and features should (potentially) be available to you.

If you have a Google account – either a regular Gmail account or a Google Apps account will do just fine. To configure msmtp to use the Gmail SMTP server, use this configuration:

# Gmail/Google Apps
account gmail
host smtp.gmail.com
port 587
# user/from are placeholders - use your actual Gmail address
user username@gmail.com
from username@gmail.com
password enter-password-here!
auth on
tls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Default account to use
account default : gmail

In the above example you need to change “username@gmail.com” to an actual Gmail account, and you need to change “enter-password-here!” to the password belonging to the specified Gmail address.

Using Gmail, all mail passed on from msmtp will be sent from the account used in the configuration, and there doesn’t seem to be a way to override this. You may therefore opt to create a specific mail account for this use. You can set a custom Reply-To header in the mails passed through the Gmail SMTP server, which in many cases may help ensure replies get to the proper recipient.

If your site has adopted DMARC, this may not be a suitable option (at least not on the free tier), as they don’t support signing and do not offer dedicated IP addresses for your SPF records.

Testing 1, 2, 3…

Once you’ve set up the msmtp configuration file, it’s time to do some testing. Create a text file called “testmail.txt” with this content:

To: you@example.com
Subject: Subject for test mail

This is the body content for the test mail.

Change the addresses to your own actual email address. Then enter this from the command line:

cat testmail.txt | msmtp you@example.com

You should receive your test mail shortly.

Setting up an alias

Many Unix/Linux tools and apps seem to assume that you have sendmail installed and that it is available at /usr/bin/sendmail or a few other locations in the file system. To handle these cases easily, you can create an alias pointing the sendmail name to the msmtp binary like this (the examples should cover most cases):

ln -s /usr/bin/msmtp /usr/sbin/sendmail
ln -s /usr/bin/msmtp /usr/bin/sendmail
ln -s /usr/bin/msmtp /usr/lib/sendmail

Depending on which package manager your installation uses, it may automatically set up these aliases, so do check whether they exist before trying to create them.
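A quick way to check is to loop over the usual locations before creating anything – a small sketch:

```shell
# Report which of the common sendmail locations are already taken.
for p in /usr/sbin/sendmail /usr/bin/sendmail /usr/lib/sendmail; do
  if [ -e "$p" ]; then
    echo "$p already exists - skip the symlink"
  else
    echo "$p is free"
  fi
done
```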

Setting up with PHP

If you made the aliases as suggested above, it may already work, but you should make the following changes just to keep things clean and transparent.
Find all applicable php.ini files (you probably have one for the web server and another for the command line).

Add or change the line:

sendmail_path = "/usr/bin/msmtp -t"

Now for some testing. Add a file with the following content (change the example address to your own):

<?php mail("you@example.com", "test", "test"); ?>

Now call the file from the command line using the PHP CLI, and then call the file through the web server. In both cases you should receive an email shortly.

Another suggestion…

Apart from running sendmail or postfix, there also seems to be an application similar to msmtp called sSMTP, which offers many of the same features as msmtp.