Category Archives: Software Engineering

Development flows, version management, test-driven development and other topics supporting the software development process

Have your IT systems joined Social Media?

No, your servers should (probably) not have a Facebook profile, nor should your service bus have a Twitter account, but as the tools we work with change and evolve, you should probably consider moving the stream of status mails to the more modern “social media” used at work.

When you’re in DevOps you probably get a steady stream of emails from various systems checking in. It may be alert emails, health checks or “backup completed” notices. Getting these mails has become more “fun” with the rise of unlimited mail storage and powerful email search tools – should you ever need to find something in the endless stream of server-generated mail.

As we’ve adopted new tools, the automatic messaging from the servers has more or less stayed the same – until recently. We’ve been drinking the Kool-Aid with Slack, Jira and other fancy tools, and recently we considered moving the IT systems along… and so we did.

Slack has a very nice API, and even on the free tier you can enable a “robot” (a robot.sh shell script, that is) to emit messages to the various Slack channels. We’ve integrated the slackbot into the various automated workflows in our Continuous Integration / Continuous Deployment pipeline, so that a release moving from one environment to the next – and finally into production – emits a message to a #devops channel. We’ve also made an #operations channel, and when our monitoring registers “alert events”, it emits messages onto that channel. With this setup anyone on the team can easily and effectively subscribe to push messages.
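
Our bot is a plain shell script, but as a rough illustration here is a minimal PHP sketch of the same idea – posting a release notice to a channel through a Slack incoming webhook. The webhook URL and message are placeholders, not our actual setup.

 <?php
 // Minimal sketch: post a deployment notice to a Slack channel through
 // an incoming webhook. The URL and message below are placeholders.
 $webhookUrl = 'https://hooks.slack.com/services/XXX/YYY/ZZZ';
 $payload = json_encode(array(
     'channel' => '#devops',
     'text'    => 'Release 1.2.3 deployed to production',
 ));

 $ch = curl_init($webhookUrl);
 curl_setopt($ch, CURLOPT_POST, true);
 curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
 curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
 curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
 curl_exec($ch);
 curl_close($ch);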

As a sanity measure – and to avoid “yet another mailbox” – we’ve tried to preach that everything on Slack should be considered ephemeral (that is, short-lived), and if anything worth remembering is said, it must be captured elsewhere.
Like many other companies we use Jira to manage our software development. A task is tracked from idea to delivery and then closed and archived. As a Jira task is long-lived, we’ve also integrated the same CI/CD pipeline into Jira. Once the release train starts rolling, a ticket is created in Jira (by the scripting tools through the Jira API), updated as the release passes through the environments, and closed automatically when the solution has been deployed to production.
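
The Jira side works along the same lines. A rough sketch of what creating the release ticket through the Jira REST API might look like – the base URL, credentials, project key and summary are all placeholders:

 <?php
 // Sketch: create the release-train ticket through the Jira REST API.
 // Base URL, credentials, project key and summary are placeholders; the
 // same API is later used to update, transition and close the issue.
 $issue = json_encode(array(
     'fields' => array(
         'project'   => array('key' => 'OPS'),
         'summary'   => 'Release train started for build 1.2.3',
         'issuetype' => array('name' => 'Task'),
     ),
 ));

 $ch = curl_init('https://jira.example.com/rest/api/2/issue');
 curl_setopt($ch, CURLOPT_USERPWD, 'releasebot:secret');
 curl_setopt($ch, CURLOPT_POST, true);
 curl_setopt($ch, CURLOPT_POSTFIELDS, $issue);
 curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
 curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
 $response = curl_exec($ch);
 curl_close($ch);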

The ticket created in Jira contains a changelog generated by pulling the commits included in the pull request from Git and, if possible (assuming the commit comments are formatted correctly), linking them to the Jira issues contained in the release built from the pull request.
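
Assuming the commit comments contain an issue key such as “WEB-123”, collecting the references for the changelog is little more than a regular expression over the git log. A small sketch, with made-up tag names:

 <?php
 // Sketch: collect Jira issue keys from the commits between two releases.
 // Tag names are made up; assumes commit messages contain keys like "WEB-123".
 $log = shell_exec('git log --pretty=%s release-1.2.2..release-1.2.3');
 preg_match_all('/\b[A-Z][A-Z0-9]+-\d+\b/', $log, $matches);
 $issueKeys = array_unique($matches[0]);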

The Jira tickets created by the release train are collected on a kanban board where each environment/stage has a separate column, giving a quick overview of the complete state of releases (what is where right now).

A future move we’ve considered is whether we should have the servers blogging. Assuming agile developers are able to write reasonable commit comments which are readable (and comprehensible) by non-developers, it might be interesting to use a blogging tool such as WordPress to provide a historical log of releases.

As you adopt new tools for communication, remember to also think of automated communication, which may have a useful place in the new tools. New platforms often have readily available APIs which allow your IT platforms to provide – or receive – information much more efficiently than the pagers, email or whatever else was available when you set things up in times gone by.

(This post originally appeared on LinkedIn)

No more pets in IT

Remember the good old days when IT got a new server? It was a special event – and naturally there was the naming game: finding that special name for the server which neatly fitted into the adopted naming scheme (be it Indian cities, Nordic mythology or cartoon characters).

That ought to be a thing of the past, but the ceremony still happens regularly in many IT departments, where servers are treated with the same affection as pets – with all the bad side effects that may bring along.

Remember the “superman” server – it must not die, superman will live on forever – no matter how much patching, maintenance and expensive parts replacement it needs, we will care for that special pet server… and we would be wrong to do so.

Modern IT should not be that cozy farm from the 1950s, but find its reflection in modern farming.

From pets to cattle

On a modern farm the cattle aren’t named individually, and – harsh as it may seem – when one of the cows doesn’t perform, it is replaced, as the performance of the farm as a whole matters much more than the care and nurture of the individual animals… and the individual animals are rarely named, perhaps in recognition that they will be replaced and that the number of animals will be adjusted to align with the requirements of the farm.

Modern IT does have all the technology to adopt the modern farm metaphor and should do so as we move to virtual servers, containers, microservices and cloud-based infrastructure.

All these technologies (along with others) have enabled us to care much less about a specific server or service, and instead focus on what “server templates” are needed to support the services provided by IT – and manage the number of instances needed to meet the requirements posed to IT.

Hardware as Software – From care to control

As servers move from being a special gem to a commodity and we may need 10, 50 or 100 small servers in the cloud instead of a single huge “enterprise” spaceship in the server room, a key challenge is our ability to manage and control them – and the tools to do that are also readily available.

Using tools like Chef, Puppet or Docker(files) – the enablers for the server templates mentioned above – developers are able to describe a specific server configuration and use this template to produce as many identical copies as may be needed. Furthermore, as we move to managing a herd of servers, the server templates can easily be managed using the standard version control software already used for your source code.

Using this template model, the developers take control of (and responsibility for) making sure the complete stack needed to run the service is coherent, and operations can make sure to size (and resize) the available resources as needed.

Finally, as we move to a “cattle” perception of servers, no one should ever need to log in to a specific server and make changes – everything needs to go through the configuration management tools, tracking all changes to the production environment. If a server starts acting up, kill that server and spin up a new one in your infrastructure.

(This post originally appeared on LinkedIn)

Ephemeral Feature Toggles

At its simplest a feature toggle is a small if-statement, which may be toggled without deploying code, and is used to enable features (new or changed functionality) in your software.
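
A minimal sketch of what that if-statement looks like – here the toggle state is just an array, standing in for whatever config store or admin backend actually holds the toggles:

 <?php
 // Minimal feature toggle sketch. The $toggles array stands in for the
 // config store / admin backend that holds the real toggle state.
 $toggles = array('new-checkout-flow' => true);

 if (!empty($toggles['new-checkout-flow'])) {
     echo "Rendering the new checkout flow\n";    // new behaviour behind the toggle
 } else {
     echo "Rendering the legacy checkout flow\n"; // old behaviour, deleted when the toggle is retired
 }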

In an agile setup most developers love having feature toggles available, as they often allow for a continuous delivery setup with very little friction and few obstacles. While this is true, developers often seem to forget to think of feature toggles as ephemeral, and don’t realize what a terrible mess this can cause if they don’t remove the toggles once the feature is launched and part of the product.

Feature toggles are often an extremely lean way to introduce public features in software – short-circuiting the normal release ceremony, as it has already happened before the launch of the feature, which often literally is a button press in an admin/backend interface.

Feature toggles must be ephemeral when used to support Continuous Delivery.

Introducing a feature toggle in source code is often just adding an if-statement and toggling between two states. While each toggle ought to occur in a single place in the source code, it may often appear in views, models, libraries and/or templates – and thus (potentially) leads to a lot of clutter in the source code.

If feature toggles are allowed to live on, the next issue is often that they become nested or unintentionally crossed, as various functions and components in the code may depend on other features.

Feature toggles are a wonderful thing, but no toggle should have a lifespan beyond a couple of sprints – allowing for the wins of feature toggles, yet avoiding long-term clutter and complications in the source code.

Other uses for feature toggles…

Feature toggles may exist as hooks for operations or to define various class-of-service for your users.

When used for operations, they may be used as hooks to disable especially heavy functionality during times of heavy load – Black Friday, Christmas or whenever it may be applicable.

When used for class-of-service, feature toggles can be used to determine which features a basic and a premium user – or any other classes you may choose to have – have access to.
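
A small, hypothetical sketch of a class-of-service check – the classes and feature names are made up:

 <?php
 // Hypothetical class-of-service toggles: which features each user class
 // has access to. These toggles are long-lived and should be marked as such.
 $featuresByClass = array(
     'basic'   => array('search'),
     'premium' => array('search', 'export', 'api-access'),
 );

 function hasFeature(array $featuresByClass, $userClass, $feature)
 {
     $features = isset($featuresByClass[$userClass]) ? $featuresByClass[$userClass] : array();
     return in_array($feature, $features, true);
 }

 if (hasFeature($featuresByClass, 'premium', 'export')) {
     echo "Show the export button\n";
 }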

If you’re applying feature toggles as operations hooks or for class-of-service, don’t make them ephemeral, but have your developers mark them clearly in the source code.

Why DevOps works…

I’m digging through a backlog of podcasts, and the gem of the day goes to the SE-Radio podcast. In episode #247 they talk about DevOps, and while I’ve preached and practiced DevOps for years – mainly as common sense – the podcast did present a more reasoned argument for why it works.

Developers are praised and appreciated for short time to market; the number of new features they introduce and changes they make to the running system.

Operations are praised for stability and uptime, and thus change is bad; many changes are really bad.

DevOps fuses the two roles and responsibilities: the developers need to weigh the business value new development may create against the risks it introduces, and balance what is released into production and when.

If you’re into software engineering and curious about DevOps, give it a listen at SE-Radio #247.

(This post originally appeared on LinkedIn)

How not to become the maintenance developer

As a developer, you always seem to strive towards producing ever more complicated code. Utilizing new frameworks, adopting an ever-evolving “convention over configuration”, pushing object-oriented programming – maybe Domain-Driven Design – are practices introduced, refined and explored in the quest to prove yourself a steadily better developer with rising skills.


Yet to what point?

While the intricate complications may impress fellow developers, they often dig a hole which may be pretty hard to get out of. Every complication – no matter if it is in the design, the architecture or the structure of the code – often provides the opposite of the desired outcome. The whim of your current self to impress developers with the latest fashionable technique is a short-term reward for long-term pain.

I accept that some fashions and whims do change and solidify into better practice, but the urge to overuse the latest and greatest frameworks, techniques and paradigms often only leads to painful maintenance in years to come – and as you’ve complicated things beyond comprehension, you’ll probably be trapped in that maintenance for the lifetime of the code.

Keep It Simple

As new development challenges often are much more fun than maintaining legacy code, there are a few basic and straightforward things you can do to keep yourself from ending up as the eternal maintenance developer:
Keep it simple
When solving a problem, write as simple and as little code as needed. Don’t wrap everything in object hierarchies or split configuration, views, models and controllers into individual files – do it only when it provides clear and apparent value.

Names matter

Choose short but descriptive names for variables, functions, classes and objects in your code. An object containing the content of an invoice should be named $invoice, not just $x, $y or $z. Naming the artifacts in the code so they provide clues to content and functionality makes it much easier for anyone – including your future self – to comprehend and understand the code when doing maintenance.

Don’t fear comments

Comments do not slow down your code. All code is compiled at the latest at run-time, and a few well-placed comments may often be very valuable for the eventual maintainer. Don’t comment obvious code constructs (“don’t state the obvious”), but do reference business rules or explain why something tricky is going on.

Be consistent

Find a consistent way to solve the same task/pattern every time, as it will help you focus on what the code is doing instead of the syntax it is expressed in. If you find a better way to do something during development, remember to go back and fix the existing instances where applicable.

Move on…

Every developer will eventually be stuck with some maintenance, but by making your code accessible and easy to understand, the odds of being stuck forever on maintenance duty are much lower. Great code moves on from its parent and has a life of its own in the care of other developers – if you’ve done your job right.

Are you ready for transparency?

Running a modern IT platform is rarely an easy or isolated task. Most platforms consist of a fairly large number of components, ranging from the OS level to third-party libraries and components added in the user-facing layers – and adding numerous integrations makes it an interesting challenge to quickly identify and correct bugs and errors.

While the system complexity does pose a challenge, it is surely not an impossible task, as several tools exist for most – if not all – platforms to instrument the platform and use that instrumentation to manage it and identify issues quickly.

Instrumentation provides insight…

Instrumentation tools are generic tools which allow you to identify errors in your production environment and provide sufficient context and debug information for developers to diagnose, understand and fix the issues identified. Examples of such tools include AppDynamics, New Relic, Stackify and many others. If you’re the do-it-yourself type, it’s not unfeasible to build a tool yourself by hooking into error handlers and other hooks exposed by the specific platform to be instrumented.
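
In PHP, the do-it-yourself version can start as little more than an error handler that ships each error – with a bit of context – off to a collector. A rough sketch; the collector endpoint is a placeholder:

 <?php
 // Sketch of a home-built instrumentation hook: every PHP error is captured
 // with some context and shipped to a (placeholder) collector endpoint.
 set_error_handler(function ($severity, $message, $file, $line) {
     $event = json_encode(array(
         'severity' => $severity,
         'message'  => $message,
         'file'     => $file,
         'line'     => $line,
         'url'      => isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '',
         'time'     => date('c'),
     ));

     $ch = curl_init('https://instrumentation.example.com/collect');
     curl_setopt($ch, CURLOPT_POST, true);
     curl_setopt($ch, CURLOPT_POSTFIELDS, $event);
     curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
     curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
     curl_exec($ch);
     curl_close($ch);

     return false; // let PHP's normal error handling run as well
 });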

Having worked with various degrees of instrumentation for 10+ years – both home-built and purchased tools – I can certainly confirm that such tools work and allow you to mature a complex IT platform much quicker, as the insights provided from a live production environment allow you to attack the most frequently occurring errors experienced by real users of the system.

Test suites are great for minimizing risk during development, but test suites are based on assumptions about how users and data act in your platform, and while the errors identified over time certainly help minimize risk in new development, they are “theory” as opposed to instrumentation, which is much more “practice”.


Transparency not needed

While instrumentation tools may be readily available for most platforms, their “natural use” – even in an enterprise setting – seems surprisingly low, and I suspect numerous reasons exist.

“We do not need it” is often the most common. As set procedures exist and seem to work, why would we need to introduce a new tool to provide data we already have? Error logs, end-user descriptions and screenshots have been used for decades, so why should there be a better method?

“It introduces risk” is another often-cited concern. As instrumentation tools are not considered a needed part of the IT platform, operations may oppose adding them to the already complicated stack – especially if the value of the instrumentation is not known or recognized.

“It is expensive” is another misconception. Instrumentation often doesn’t provide any direct business value (assuming your IT platform isn’t burning and users aren’t leaving rapidly). Most of the value offered by instrumentation tools lies in fixing issues faster and keeping the scope of issues smaller, and as such it’s often hard to prove the value of issues that never occur.

Transparency not desired

Apparently many people firmly believe that issues neither seen nor reported are not real issues and do not exist. Gaining insight into one instrumented platform while running a black-box platform next to it may cause the false belief that the black-box system is running more stably and with fewer issues than the transparent system.

The reason is simply that on black-box systems (that is, systems without any instrumentation tools to monitor their actual behaviour) it is rare to proactively examine log files and other places where the black box might emit issues. Only when an issue is reported are developers assigned to examine these sources to resolve it.

Gaining insight into an IT platform through instrumentation and being able to resolve “real” issues as experienced by your users should be a fantastic thing, but beware that many people implicitly seem to believe that if you don’t monitor for errors and issues, they probably don’t exist – however false that is.

Function names as signaling

In most web applications there’s a host of functions (or methods, if speaking in object-oriented terms). It’s widely recognized that it’s probably a good idea to name them something which suggests the purpose or functionality of the function, but developers often fail at keeping a stringent naming convention. Before starting on your next big development adventure, here are three suggested rules for naming functions.

1. It’s more important to have a suggestive name, than a short one.

Never call a function something short but meaningless. Instead use CamelCase and make a sentence suggesting what the function does.

  • Bad examples found in live code: “process”, “fixit”, “cleanup”.
  • Good examples: “saveToDatabase”, “convertIpnumberToDomainName”, “calculateTotalPricing”.

2. Use prefixes on functions

Reserve common names (more if you like) for specific types of functions. Here are a few suggested rules, with a small sketch following the list:

  • “get”-functions should always retrieve and return data – never print data.
  • “print”-functions should always print data to “standard out”.
  • “set”-functions should always set data on an object (and choose whether “set” also saves the data or not).
  • “save”-functions (if set-functions don’t save properties) save all current properties to the persistent storage (usually a database).
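
Put together, a small hypothetical Invoice class following the prefix rules could look like this:

 <?php
 // Hypothetical example of the prefix rules above.
 class Invoice
 {
     private $total = 0;

     // "get" retrieves and returns data - it never prints.
     public function getTotal()
     {
         return $this->total;
     }

     // "set" only assigns data to the object.
     public function setTotal($total)
     {
         $this->total = $total;
     }

     // "print" writes data to standard out.
     public function printTotal()
     {
         echo $this->getTotal() . "\n";
     }

     // "save" persists the current properties (the actual storage call is omitted).
     public function saveToDatabase()
     {
         // e.g. an INSERT/UPDATE against the invoice table
     }
 }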

3. Reuse data model objects in function names

If your data model (or object model) already contains names of entities, reuse them in function names. If a table is named “Travel”, call the function “saveTravelRecord”, not just “saveDataRecord”.
Make consistent use of the same names, field names, properties and other entities found in the application. Using the same name for the same object all across the application may seem obvious, but somehow developers find slightly different names for the same object again and again.

While the three tips above may seem simple, do check your code and see in how many places they are broken. I’ve seen it countless times; getting it right from the beginning would have cost a small effort, while refactoring the code years later is a much bigger one.

Bread crumbs in version control

I’m sorry, but sometimes I really don’t get why even seasoned developers don’t learn the art of the commit message in a version control system. All too often I’ve come across check-ins where the entire commit message just reads “bugfix”, “change”, “oops” or something just as mindless.

The effort of writing a useful message compared to the potential benefit seems to be one of the best ratios around – but of course the pay-back is usually some time away, too bad. Once you’ve worked on the same code for years – or even better, inherited code from others – you’ll quickly learn to appreciate anyone who spent more than 10 seconds composing a thoughtful message for the future.

Here are 3 rules you should always, always obey when committing to a version control system.

Always leave a reference to the issue/bug tracking system.

All professional development uses some sort of issue tracking system to keep track of bugs, new features and other changes to the system. The issue tracking system should always be able to tell who asked for a change, why it was asked for and what considerations were made before the code change. By leaving a reference to the issue tracker, it’s often much easier to get “the big picture” if the change needs to be changed again years later. To make sure you get it in, just write “Bug #number#: ” as the initial part of the commit message.

Don’t write what, write why

Don’t write that it’s a bug fix – most people will know that from looking either at the code or in the issue tracking system (see point 1 above); rather, write why it fixes the issue (“New check for missing parameters”, “Now handles empty search result from db correctly” – not just “bugfix”).

Keep it brief.

Log messages are not a place to store documentation, user guides or any other long-form information. You can assume it’s the future you (or another future fellow developer) who will look at the code and try to make sense of it. Think of this when writing the message – it’s not for the project manager, it’s not for the end-users – it’s for a developer doing maintenance work on the code in the future.

PHP best practice: Function Parameters

I’ve been developing web applications for some years now, and while I make no claim to being the world’s greatest developer, I do figure that I have some solid experience which may help others – or at least encourage thought and discussion. This is the first in a series of posts, and while it is written from a PHP developer’s point of view, it may be applicable to other programming languages, and maybe even beyond web applications. Here are my four tips on function parameters.

Always have a default value on all parameters

Function parameters are often used as input to SQL queries, web service calls or computations. Null or non-existing values tend to break these things and throw horrible messages at the end user.

While the parts of the program using your function should always provide proper values, laziness, oversights or unexpected scenarios may prove that not to be the case. If at all possible, always try to provide default values for all parameters of a function – and if you really can’t, make sure the missing value is handled gracefully.
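
A made-up example of what that can look like – every parameter has a sensible default, so the function still behaves reasonably if the caller passes nothing:

 <?php
 // Made-up example: both parameters have sensible defaults, so a lazy or
 // forgetful caller still gets a reasonable, bounded result.
 function listUsers($startingLetter = 'A', $limit = 20)
 {
     $allUsers = array('Alice', 'Anna', 'Bob', 'Bella', 'Carol');
     $matching = array_filter($allUsers, function ($name) use ($startingLetter) {
         return strpos($name, $startingLetter) === 0;
     });
     return array_slice($matching, 0, $limit);
 }

 print_r(listUsers());          // defaults: users starting with "A", at most 20 rows
 print_r(listUsers('B', 100));  // explicit values when the caller needs more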

Choose reasonable defaults

When providing default values, don’t choose extremes. If you’re browsing a list of usernames, don’t use a (pure) wildcard as the default value – use an “A” to list all users starting with the letter A. If your function allows a limit on the number of rows returned from a database query (say, for paging purposes), set the default to 10, 20 or some other low number; don’t go for worst-case scenarios like 9999 or 999999.

If the developer using the function needs plenty of rows, it’s easy to pass a specific value, and if the developer forgets to specify the number of rows expected, they get a reasonable result, which may prompt them to ask for more (if actually needed).

Always sanitize input

Even though a given function will naturally only be used with valid input and so on, every function should take steps to secure itself.
One of the most basic steps is to make sure all input is sanitized. Protecting against SQL injection and other security issues is not the responsibility of the places using the function, but the responsibility of the function itself, and thus it should take steps to make sure it doesn’t introduce security issues.

For basic validation of simple input parameters, look at the ctype functions in PHP. If you can, always try to validate against a whitelist (which characters are allowed) instead of a blacklist (which characters are not allowed), as it’s harder to catch everything which may introduce issues in a blacklist than to only allow what you expect to pass through.
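
A very small sketch of whitelist-style validation with the ctype functions – the function and parameter are made up for illustration:

 <?php
 // Sketch: whitelist validation of a simple input parameter.
 // Only letters and digits pass; anything else is rejected up front.
 function getUserByUsername($username = '')
 {
     if (!ctype_alnum($username)) {
         return null; // reject, rather than pass suspicious input on to the query
     }
     // $username is now safe to use below - ideally still through a
     // prepared statement when building the actual query.
     return array('username' => $username);
 }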

Accept an array of key value pairs

If a simple function accepts one or two parameters, they may be passed as regular parameters, but once the list of possible parameters starts to grow, changing it to a single parameter – an array of key/value-pairs – seems a much more solid long-term solution for keeping the interface developer-friendly. Sure, you can have endless lists of 10–15 parameters, but if you have default values on the first 14 and just need to change the last, the code is much cleaner when you can pass an array with only the single key/value-pair that needs to differ from the default.
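
A sketch of the key/value-pair approach, with made-up option names:

 <?php
 // Sketch: accept a single $options array instead of a long parameter list.
 // Defaults are merged in, so callers only pass what they want to change.
 function searchUsers(array $options = array())
 {
     $defaults = array(
         'startingLetter'  => 'A',
         'limit'           => 20,
         'sortBy'          => 'name',
         'includeInactive' => false,
     );
     $options = array_merge($defaults, $options);

     // ... build and run the actual query from $options ...
     return $options; // returned here only to show the merged result
 }

 // Only the single value that differs from the defaults is passed.
 print_r(searchUsers(array('limit' => 50)));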

That was the first four tips on function parameters. More on PHP and Security.

Built-in time bombs

I’ve been refactoring and refactoring some old code, and it’s kind of odd what shortcuts (or even time bombs) you’ll find in code which apparently wasn’t supposed to live on for years.
In a now-retired CMS, we had an issue every new year, when some kind of bug would reset all “schedules” for upcoming stories and content. No one ever got around to fixing it, as the system was soon to be decommissioned – but sadly the bug did survive a few years anyway.

These days I’m working in the system which came to replace the broken one from before. Here’s another odd thing – it was apparently hard-coded into the editor in the backend that stories need to have a “published date” between 1996 and 2011. As there really isn’t any documentation and the original developer has left the company, it’s hard to know why it was made so. While there probably was a reason, it’s lost by now.

 <select name="year" size="1" style="width:55px;" title="Year">
 <?php
   for ($i = 1996; $i < 2011; $i++) {
       printf("<option value=\"%1\$d\"%2\$s>%1\$d</option>",$i,($year == $i) ? " selected" : "");
   }
   ?>
</select>

We fixed it by making the “published date” range go from 1996 to the current year plus one.
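
In outline, the fix simply derives the upper bound from the current year instead of the hard-coded 2011 – roughly:

<select name="year" size="1" style="width:55px;" title="Year">
<?php
  // Roughly the fix: the upper bound now follows the current year plus one
  // instead of the hard-coded 2011. $year holds the currently selected
  // year, as in the original snippet.
  $lastYear = (int) date('Y') + 1;
  for ($i = 1996; $i <= $lastYear; $i++) {
      printf("<option value=\"%1\$d\"%2\$s>%1\$d</option>", $i, ($year == $i) ? " selected" : "");
  }
?>
</select>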

While it was an odd thing, I’m certainly glad that we from time to time do some sort of maintenance on old code – even when it seems to work perfectly. It’s far from the only bad thing we found, and while most people worry mainly about new code, sometimes you should also remember the old stuff.