Baking Audiobooks with m4baker

Building audiobooks in the m4b format on (Debian) Linux is actually possible and doesn’t have to be a pain. I’ve found numerous recipes based on shell instructions, but a nice, simple app to handle building the books seems much easier.

Most of the apps available for Linux seemed to be in a pre-alpha state, but after a few experiments I’ve settled on m4baker, which – while a bit rough – actually seems to do the job just fine.

Getting m4baker running on my Debian Testing took a few steps:

sudo apt-get install python-qt4
sudo apt-get install libcanberra-gtk-module
sudo apt-get install faac
sudo apt-get install libmp4v2-2
sudo apt-get install mp4v2-utils
sudo apt-get install sox
sudo apt-get install libsox-fmt-mp3

Once these steps have completed successfully, the final step is getting m4baker itself installed and running:

  • Download the source from https://github.com/crabmanX/m4baker/releases
  • Unpack the archive and, from the unpacked directory, run the install script:
    python setup.py install --optimize=1
    

This should install m4baker and all the required files and libraries to build m4b audiobooks (suitable for iTunes and other audio players supporting the format).

You can launch m4baker either through the (start) menu or simply with the m4Baker command from the shell.

m4Baker is an open source project available on GitHub.


Get your DMARC going

Get your company implementing DMARC now…

Over the past 5-6 years, email industry efforts have been pushing the DMARC standard along. It provides the best widely supported and reasonably efficient way for you – as a domain owner – to protect your domain from misuse and abuse in the form of spam and phishing attacks.

As sending email has often been a wild west, knowing who is a valid sender of email may prove a challenge for many companies – and as most IT developers don’t seem to care too much about the finer details of email (producing email headers just as bad as their HTML markup 🙂 ), implementing DMARC protection on your domain may actually be a challenge.

The DMARC standard provides you with three powerful tools:

  • Using DMARC you have the power, through DNS, to declare which mail servers are valid senders of email from your domain (see the example records just after this list).
  • DKIM signing of mails allows you to prove to recipients that a message was sent from a valid server.
  • Finally, DMARC provides a way for the email receiver to report back to the sender about messages that pass and/or fail DMARC evaluation.
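As a rough sketch (example.com, the IP address and the report mailbox are placeholders, and p=quarantine is just one possible policy), the SPF and DMARC records published in DNS could look along these lines:

example.com.         IN TXT  "v=spf1 mx ip4:192.0.2.25 -all"
_dmarc.example.com.  IN TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"

(The DKIM public key is published in a similar TXT record under selector._domainkey.example.com. A common approach is to start with p=none to simply collect reports, and only tighten the policy to quarantine or reject once the reports look clean.)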

In summary, you have the option to protect the credibility of your domain (by not letting it be exploited for spam and phishing), and you should care now, as Google, through Gmail, seems to be pushing ever harder to signal which email is “safe” (or at least legitimate).

This latter effort will not only remove fake emails pretending to be from your domain, but it will likely also promote your legitimate emails and make them more likely to reach their audience.

Plenty of good articles are available on how to get going with a DMARC implementation.


How not to become the maintenance developer

As a developer, it seems you always strive towards producing ever more complicated code. Utilizing new frameworks, adopting an ever-evolving “convention over configuration”, pushing object-oriented programming – maybe Domain-Driven Design – are practices introduced, refined and explored in the quest to prove yourself a steadily better developer with rising skills.


Yet to what end?

While the intricate complications may impress fellow developers, they often dig a hole which may be pretty hard to get out of. Every complication – no matter whether it is in the design, the architecture or the structure of the code – often has the opposite effect of the desired outcome. The whim of your current self to impress developers with the latest fashionable technique is a short-term reward for long-term pain.

I accept that some fashions and whims do change and solidify into better practice, but the urge to overuse the latest and greatest frameworks, techniques and paradigms often only leads to painful maintenance in the years to come – and as you’ve complicated things beyond comprehension, you’ll probably be the one trapped in that maintenance for the lifetime of the code.

Keep It Simple

As new development challenges are often much more fun than maintaining legacy code, there are a few basic things you can do to keep yourself from becoming the eventual eternal maintenance developer, and they are straightforward:

Keep it simple

When solving a problem, write as simple and as little code as needed. Don’t wrap everything in object hierarchies or split configuration, views, models and controllers into individual files – do it only when it provides clear and apparent value.

Names matter

Choose short but descriptive names for variables, functions, classes and objects in your code. An object containing the content of an invoice should be named $invoice, not just $x, $y or $z. When the naming of artifacts in the code gives clues to their content and functionality, it becomes much easier for anyone – including your future self – to comprehend and understand the code when doing maintenance.

Don’t fear comments

Comments do not slow down your code. All code is compiled at the latest at run-time, and a few well-placed comments may often be very valuable for the eventual maintainer. Don’t comment obvious code constructs (“don’t state the obvious”), but do reference business rules or explain why something tricky is going on.

Be consistent

Find a consistent way to solve the same task or pattern every time, as it will help you focus on what the code is doing instead of how it is expressed. If you find a better way to do something during development, remember to go back and fix the existing instances where applicable.

Move on…

Every developer will eventually be stuck with some maintenance, but by making your code accessible and easy to understand, the odds of being stuck forever on maintenance duty are much lower. Great code moves on from its parent and has a life of its own in the care of other developers – if you’ve done your job right.

Are you ready for transparency?

Running a modern IT platform is rarely an easy or isolated task. Most platforms consist of a fairly large number of components, ranging from the OS level to third-party libraries and components added in the user-facing layers – and adding numerous integrations makes it an interesting challenge to quickly identify and correct bugs and errors.

While the system complexity does pose a challenge, it is surely not an impossible task, as tools exist for most – if not all – platforms to instrument the platform and use that instrumentation to manage it and identify issues quickly.

Instrumentation provides insight…

Instrumentation tools are generic tools which allow you to identify errors in your production environment and provide sufficient context and debug information for developers to diagnose, understand and fix the issues identified. Examples of such tools include AppDynamics, New Relic, Stackify and many others. If you’re the do-it-yourself type, it’s not infeasible to build a tool yourself by hooking into error handlers and other hooks exposed by the specific platform to be instrumented.

Having worked with various degrees of instrumentation for 10+ years – both home-built and purchased tools – I can certainly confirm that such tools work and allow you to mature a complex IT platform much quicker, as the insights provided by a live production environment let you attack the most frequently occurring errors experienced by real users of the system.

Test suites are great for minimizing risk during development, but test suites are based on assumptions about how users and data act in your platform, and while the errors identified over time certainly help minimize risk in new development, they are “theory” as opposed to instrumentation, which is much more “practice”.


Transparency not needed

While instrumentation tools for most platforms may be readily available, their “natural use” – even in an enterprise setting – seems surprisingly rare, and I suspect numerous reasons exist.

“We do not need it” is often the most common. As set procedures exist and they seem to work, why introduce a new tool to provide data we already have? Error logs, end-user descriptions and screenshots have been used for decades – why should there be a better method?

“It introduces risk” is another often-cited concern. As instrumentation tools are not considered a necessary part of the IT platform, operations may oppose adding them to the already complicated stack – especially if the value of the instrumentation is not known or recognized.

“It is expensive” is another misconception. Instrumentation often doesn’t provide any direct business value (assuming your IT platform isn’t burning and the users aren’t leaving rapidly). Most of the value offered by instrumentation tools lies in fixing issues faster and keeping the scope of issues smaller, and as such it’s often hard to prove the value of issues that never occur.

Transparency not desired

Apparently many people firmly believe that issues neither seen nor reported are not real issues and do not exist. Gaining insight into one instrumented platform while running a black-box platform next to it may cause the false belief that the black-box system is running more stably and with fewer issues than the transparent system.

The reason is simply that on black-box systems (that is, systems without any instrumentation tools monitoring their actual behaviour) it is rare for anyone to proactively examine log files and the other places where the black box might emit issues. Only when an issue is reported are developers assigned to examine these sources and resolve it.

Gaining insight into an IT platform through instrumentation and being able to resolve the “real” issues experienced by your users should be a fantastic thing, but beware that many people implicitly seem to believe that if you don’t monitor for errors and issues, they probably don’t exist – however false that is.

Accidental Architecture

Most IT departments have the best intentions of providing high-quality, coherent solutions, yet market conditions, projects running in parallel and various constraints on budget, resources or time often cause what might be called Accidental Architecture.

The easiest way to identify cases where you’ve been hit by accidental architecture is to describe your IT architecture and listen for the words “except” or “but not”. Typical examples include “we have a single sign-on system utilized everywhere except…” and “we update all systems to current versions, but not this old system…”.

Accidental architecture seems to be caused by a few main drivers:

  1. Lack of overview
  2. Lack of time
  3. Lack of resources

Lack of overview

When the root cause is lack of overview, the case is often that decisions and designs are implemented without understanding the entire scope of the problem – or the existing procedures and architecture in place. While a good, coherent architecture seems to have been designed, it turns out that existing complexities which weren’t known or addressed cause issues.

Lack of time

Deadlines often seem to be the cause of many issues – no matter how much time you assign to a project, you’ll often need just a little more. As a deadline approaches, shortcuts are made to hit it, and the shortcuts – which are assumed to be fixed in the next version – are often forgotten and abandoned, until issues arise.

Lack of resources

The issues caused by lack of time and lack of resources may seem similar, but they are different. When lack of time is the cause, the problem could have been solved given a bit more of it, whereas lack of resources often means that budget constraints or lack of knowledge lead to an architecture being chosen which may not be the right solution to the problem at hand.

The lack-of-resources issue often occurs when projects are expected to drive enterprise changes – merging billing systems, decommissioning legacy platforms and other things which should be done, but which a product development project is often not able to facilitate.

The first step is to realize there is a problem…

While many organizations fail to realize the existence and the volume of these accidents, most actually seem to have a fair share of them – and, if not handled, probably a growing volume.

Once you’ve realized you have cases of accidental architecture, make sure you add them to your technical debt list and have a plan for what to do about the affected systems. While “pure” technical debt most often causes operational issues, accidental architecture usually causes customer-facing issues, yet it is not recognized as being as severe as the operational issues caused by technical debt.

The issues introduced by accidental architecture are often complexity, slowly rising operational costs and increased user-support costs. To keep your IT domain alive and moving forward, time and resources must be found continuously to address and resolve the accidents.

Three points on the costs of COTS

It seems to be quite popular to move away from custom-built IT solutions to so-called COTS – commercial off-the-shelf – solutions. The idea being that the software fulfils functionality which has long since been commoditized and standardized to such an extent that it offers no “competitive edge” nor core value to the business.

For most companies and organizations, the office suite would be a pretty safe bet for a piece of software magnificently suited to a COTS solution. Finding someone who develops an internal word processor in-house seems crazy, as so many fully capable solutions exist in the market.

As time passes, more and more software seems to fall into the category which may be commoditized, and custom solutions are replaced by standard ones providing an adequate and capable answer to areas previously served by custom development.

The drive towards COTS software seems to be a hard challenge for many organizations, as the primary driver in most COTS adoption projects seems to be pressure from the accountants and a mistrust of the IT department’s ability to choose and deliver best-fit solutions to the rest of the business.

When listening for failed Microsoft Office implementations, the number seems fairly small, yet the number of failed ERP projects seems endless. The scope of this post is not to address when or how to choose COTS solutions, but just to make the point that the choice of COTS is often naive and not fully understood ahead of the decision itself.

  • When adopting COTS you’re tied to the options and customizations offered by the chosen COTS software. You should never expect to be able to force the solution to adapt to your organization and processes, but instead be prepared to adapt the organization and processes to fit within the options offered by the chosen software.
  • Choosing COTS is a strategic commitment to the vendor of the software within the scope where the COTS solution is adapted to fit the organization. Once the solution is implemented, the adopting organization is often committed to following the roadmap and direction the vendor chooses – as switching to another solution is often a large and challenging project.
  • When adopting COTS you’re committing to follow along. All versions of software have a limited “life cycle”, and as new versions are released you’re expected to follow – at a pace that’s suitable for your organization and within the roadmap offered by the vendor (in terms of support and upgrade paths).

While COTS software seems like a cheap and easy solution for many areas within an organization, the three points above seem to be forgotten too often and cause problems with the standard COTS solutions again and again.

Coming back to Microsoft Office, it seems all organizations are more than capable of staying within the possibilities offered by Word, Excel and “friends”. As Office documents seem to be the standard exchange format, there is an implicit drive to move the organization to current versions of the software and the new options offered by those versions.

When COTS implementations fail, it often seems that organizations are unwilling to adapt to the options offered by the chosen COTS software – thus breaking the core idea of COTS as a commoditized solution.

Many organizations also seem to forget the commitment to follow the COTS vendor, and often end up running dangerously outdated software versions, as no budget exists to upgrade or too many customizations have been made (see the point above) for an upgrade to current versions to be easy.

While COTS may offer solutions to many areas in the organization, please be warned – there is no free lunch. COTS does not only come with an initial implementation price – it also comes with commitment.

Bulk conversion of webp files to png format

Google has come up with a nice new image format called webp. Currently support for this format is fairly limited, so if you need to use webp images elsewhere it might be nice to convert them to a more widely supported format. To do the conversion, Google has made a small tool available called dwebp. The tool, however, only seems to support conversion of a single image at a time, not a batch of images.

Using regular command line magic it’s easy though. Download the tool, pair it with the find and xargs commands, and you should quickly be on your way. If all the webp files needing conversion to png are in a single directory, simply do this:

find . -name "*.webp" | xargs -I {} dwebp {} -o {}.png

It finds all webp files and converts them one by one. If the initial file name was image.webp, the resulting file will be called image.webp.png (as the command above doesn’t remove the .webp but only appends .png at the end).
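If you’d rather end up with image.png instead of image.webp.png, a variation along these lines should work (a sketch using find’s -exec and shell suffix stripping instead of xargs; do test it on a copy of your files first):

find . -name "*.webp" -exec sh -c 'dwebp "$1" -o "${1%.webp}.png"' _ {} \;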

The command assumes the dwebp program is available in your PATH. If this isn’t the case, you need to specify the complete path to the program.

Watching your Raspberry Pi

So I’ve installed a Raspberry Pi and it’s been running smoothly day in, day out. I’d like it to stay that way, but as the server runs it gathers lint in log files and databases grow – and since I’d also like to know how CPU and memory are utilized over time, I was looking for a tool which could help me keep an eye on all of this.

As fun as it might be to build your own solution, I’ve learned to appreciate ready-to-use solutions, and it seems a nice little tool is available called RPi-Monitor. Assuming you run Raspbian, RPi-Monitor is available as a package ready to install through the standard package manager (once you’ve added the project’s package repository).
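As a rough sketch (the repository setup itself is described on the RPi-Monitor website, and the package name below is my assumption), the installation boils down to something like:

sudo apt-get update
sudo apt-get install rpimonitor

Once installed, point a browser at http://<address-of-your-pi>:8888/ to see the dashboard.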

RPi-Monitor runs a web server on port 8888 and gives you a nice overview of key resources – CPU, memory, disk and more – and even historical graphs are available.

RPi-Monitor is free, but if you find it useful, do consider donating to the author on the website.

Using (Google) Calendar for domains

Here’s a little trick which has proven itself just as useful as it is easy. To most companies, handling domains is a critical task, as losing your domain name may have catastrophic consequences. Handling domains isn’t particularly hard, but some tasks are time-critical and must be handled in due time – luckily Google Calendar provides an easy way to help make sure these tasks are handled.

(In this little tip, I’m using Google Calendar as the reference, but Outlook.com, Office365 or any other online calendaring system can probably do the same.)

Set up a new Google Calendar on an existing Google Account and call it “domains”.

Whenever a domain name is bought or renewed, make a new entry in the calendar on the expiry date of the domain. Put the domain name in the subject of the calendar entry, and if you buy domains at various registrars, note any needed (but not confidential) details in the description field.
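If you’re unsure of the exact expiry date, a quick whois lookup usually reveals it (example.com is just a placeholder here, and the exact field name varies between registries):

whois example.com | grep -i expir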

The next step is to remove the default pop-up notification and add email notifications instead. Choose which warning horizons you’d like – e.g. 1 month, 1 week and 48 hours – and Google will let you know when the renewal is coming up.

The final step is to invite anyone else who needs to be notified of the domain expiry to the appointment, and make sure their notifications are also set up with the warning horizons they like.

… also applicable to certificates

The calendar notifications can also be utilized for SSL/TLS certificates. When buying or renewing certificates, make an entry on their expiry date and set up notifications as described above. This way you should be able to ensure your users never see an expired certificate again.
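To find the expiry date of a certificate that is already deployed, a quick openssl check works (example.com is a placeholder for your own host):

echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -enddate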

Beware of DNS

For some time the server running this site had been acting up. Page loads were slow, access through SSH felt laggy and something was definitely misbehaving.

I’d been trying to figure out what exactly was going on, but nothing really made sense. There was plenty of disk space, memory was reasonably utilized (no swapping) and the CPU load seemed to be less than 0.1 at any time – there was no good reason for the server to be “turtling” along at such a perceived slow pace.

Thanks to a tip from Henrik Schack, the server is now running at full speed again. It turned out that one of the DNS resolvers used by the machine was in a bad state, and slow, unreliable or dysfunctional DNS causes trouble in all sorts of places. The fix was quite easy: the file /etc/resolv.conf was updated to contain the IPs of the Google Public DNS servers, and once the file was saved things were back to the rapid normal.
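For reference, the relevant part of /etc/resolv.conf ended up along these lines, 8.8.8.8 and 8.8.4.4 being the Google Public DNS servers:

nameserver 8.8.8.8
nameserver 8.8.4.4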

All computers really need solid, fast DNS these days – be they servers or workstations – as auto-updates and the various “cloud resources” they use must have working DNS to reach the destinations they need. If your system starts acting up without any reasonable explanation, DNS could be an easy place to start checking.

Pioneering the Internet….