Category Archives: eBusiness

eBusiness, Project Management and Getting Things Done

Accidental Architecture

Most IT departments have the best intentions of providing high-quality, coherent solutions, yet market conditions, projects running in parallel, and various constraints on budget, resources or time often cause what might be defined as Accidental Architecture.

The easiest way to identify cases where you’ve been hit by accidental architecture is to describe your IT architecture and listen for the words “except” or “but not”. Typical examples include: “We have a single sign-on system utilized everywhere except…”, “We update all systems to current versions, but not this old system…”.

Accidental architecture seems to be caused by a few main drivers:

  1. Lack of overview
  2. Lack of time
  3. Lack of resources

Lack of overview

When the root cause is lack of overview, decisions and designs are often implemented without understanding the full scope of the problem – or the existing procedures and architecture in place. While a coherent architecture seems to have been designed, it turns out that existing complexities which weren’t known or addressed cause issues.

Lack of time

Deadlines often seem to be the cause of many issues – and no matter how much time you assign to a project, you’ll often need just a little more. As a deadline approaches, shortcuts are often made to hit the date – shortcuts which are assumed to be fixed in the next version, but are often forgotten and abandoned until issues arise.

Lack of resources

The issues caused by lack of time and lack of resources may seem similar, but they are different. When lack of time causes the issue, the problem could have been solved; lack of resources often occurs when budget constraints or lack of knowledge cause an architecture to be chosen which may not be the right solution to the problem at hand.

The lack-of-resources issue often occurs when projects are expected to drive enterprise changes – merging billing systems, decommissioning legacy platforms and other changes which should be done, but which a product development project often cannot facilitate.

The first step is to realize there is a problem…

While many organizations fail to realize the existence and the volume of these accidents, most actually seem to have a fair share of them – and if not handled, probably a growing volume.

Once you’ve realized you have cases of accidental architecture, make sure you add them to your technical debt list and have a plan for what to do about each system. While “pure” technical debt most often causes operational issues, accidental architecture usually causes customer-facing issues, yet it is not recognized as being as severe as the operational issues caused by technical debt.

The issues introduced by accidental architecture are often complexity, slowly rising operational costs and increased user-support costs. To keep your IT domain alive and moving forward, time and resources must continuously be found to address and resolve the accidents.

Three points on the costs of COTS

It seems to be quite popular to move away from custom-built IT solutions to so-called COTS – commercial off-the-shelf solutions. The idea is that the software fulfils functionality which has long been commoditized and standardized to such an extent that it offers no “competitive edge” nor core value to the business.

For most companies and organizations the office suite would be a pretty safe bet for a piece of software which is magnificently suited for a COTS solution. Finding someone who develops a word processor in-house seems crazy, as so many fully capable solutions exist in the market.

As time passes, more software seems to fall into the category of what may be commoditized, and custom solutions are replaced by standard solutions that provide an adequate and capable alternative in areas once served by custom code.

The drive to COTS software seems to be a hard challenge for many organizations, as the primary driver in most COTS adoption projects seems to be pressure from the accountants and a mistrust of the IT department’s ability to choose and deliver the best-fit solutions for the rest of the business.

The number of failed Microsoft Office implementations seems fairly small, yet the number of failed ERP projects seems endless. The scope of this post is not to address when or how to choose COTS solutions, but simply to make the point that the choice of COTS is often naive and not fully understood ahead of the decision itself.

  • When adopting COTS you’re tied to the options and customizations offered by the chosen COTS software. You should never expect to be able to force the solution to adapt to your organization and processes; instead, be prepared to adapt the organization and processes to fit within the options offered by the chosen software.
  • Choosing COTS is a strategic commitment to the vendor of the software within the scope where the COTS solution is adapted to fit the organization. Once implemented, the adopting organization is often committed to following the roadmap and direction the vendor chooses – as switching to another solution is often a large and challenging project.
  • When adopting COTS you’re committing to follow along. All versions of software have a limited “life cycle”, and as new versions are released you’re expected to follow – at the pace that suits your organization and within the roadmap offered by the vendor (in terms of support and upgrade paths).

While COTS software seems like a cheap and easy solution for many areas within an organization, the three points above seem to be forgotten too often and cause problems with standard COTS solutions again and again.

Coming back to Microsoft Office, it seems all organizations are more than capable of staying within the possibilities offered by Word, Excel and “friends”. As Office documents seem to be the standard exchange format, there is an implicit drive to move the organization to current versions of the software and the new options they offer.

When COTS implementations fail, it often seems that organizations are unwilling to adapt to the options offered by the chosen COTS software – thus breaking the core idea of COTS as a commoditized solution.

It also seems many organizations forget the commitment to follow the COTS vendor, and often end up using dangerously outdated software versions, as no budget exists to upgrade, or too many customizations have been made (see the point above) to make upgrading to current versions easy.

While COTS may offer solutions to many areas in the organization, please be warned – there is no free lunch. COTS does not only come with an initial implementation price – it also comes with commitment.

Better but Broken

Working with application development – whether on the web, on the desktop or anywhere else – is often quite interesting. When making new releases, features are added, changed – or in rare cases removed.

As a developer – or “software product manager” – it must be an interesting challenge to keep up with the users and the market, capturing the features and changes which will make a product better from release to release.

There are probably many ways to try to keep up – doing research and listening to user feedback seem to be two obvious choices, but I’m sure there are many others. Some of it, I’m sure, is also just a gut feeling of what might be cool new features. If you’re good – and know the users, the market and the competitors – you’re making steady progress.

Yet sometimes you miss. The slow adoption rate of Windows Vista might be a sign of a very public miss.

It doesn’t have to be a big miss, to chase a user away.

This weekend it happened to one of my favorite iPod Touch games – Tap Defense was upgraded to version 2.0 – and while most of the updates are probably great, there’s one little detail which probably ensured I’ll rarely play it again (unless I find a way to fix it).

I used to play Tap Defense a lot while listening to Audiobooks and Podcasts. The new version has been updated with sound effects and music – and now the podcast or audiobook goes away (pauses) when the game is launched.

I’m sure TapJoy, developers of the Tap Defense game, are proud of their new sounds, but if I need to choose between the game and listening to my podcasts, the game loses. Please bring back the ability to keep listening to whatever the iPod plays in version 2.1.

What is twitter?

One of the hottest sites on the web for more than a year has been Twitter, but what is Twitter? I’ve tried a few times to explain it, and while it may be a fun task, it has often become quite a mess. This is an attempt to capture the most successful explanation of Twitter.

The core of Twitter is a combination of three different characteristics:

  • Twitter is like a blog – an author publishes content. It may be personal, it may be themed, it may be interesting – there are no set rules for the content except those set by the author.
  • Twitter is like an SMS – there is a 140-character limit on each piece of content. If you need more, you need to split the content into several “twits”.
  • Twitter is a network – it’s not just a website. Through built-in services and APIs you can connect with Twitter through SMS, desktop clients, instant messaging and many other channels. Besides being a technical network, it’s also a social network where you can follow other interesting users and communicate with them (privately or in public).

That’s pretty much the core.

There is a lot of other stuff you can do, and while many users have public feeds, you can even choose to keep your twittering private and only allow people you authorize to see your content on Twitter.

Another interesting use of Twitter is when applications start interacting with it. When I post content on this WordPress-based blog, a twit is automatically posted announcing it, and thus Twitter may to some extent be an alternative to my RSS feeds. See also iWantSandy for an interesting example of application interaction through Twitter.

If you want to follow my ramblings on Twitter, go ahead – they’re public.
If you have a better definition of what twitter is, please post a comment.

Website Traffic Tracking

Do you have a website? If so, please go to the place you store the access logs and check how much disk space they use. With a website a few years old, you’re probably looking at gigabytes – and what exactly is the value of that?

Sure, keeping track of traffic levels is sort of interesting, but sometimes you need to balance the value provided against the space and resources required, and I’ve slowly been changing the way I use the access logs on this site.

Step 1: Don’t track the images

Do you really need to track which images are downloaded from the site, or would it be enough to know which pages are loaded? For my part, page impressions are enough intelligence on the site traffic, and with Apache it’s easy to disable image tracking. The easy way to do it is by adding a parameter to your log configuration saying:

env=!object_is_image

Restart the webserver and the log file should be somewhat smaller from now on.
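For context, here is a sketch of what that might look like in an Apache configuration. It is only an illustration – the exact extension list and log path are assumptions and should be adjusted to the actual site:

```apache
# Mark requests for common image types with the "object_is_image" variable
SetEnvIf Request_URI "\.(gif|jpe?g|png|ico)$" object_is_image

# Log everything except requests carrying that variable
CustomLog /var/log/apache2/access.log combined env=!object_is_image
```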

Step 2: Use AWStats

My next step was to use AWStats. It parses the raw access log data into a database file which is significantly smaller than the raw files themselves. AWStats is a lot like other access log analyzers, but it seemed to be just a notch above the rest.
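AWStats builds that database by periodically parsing the fresh log lines, so a scheduled update keeps it current. A typical crontab entry might look like the following – the script path and config name are assumptions depending on where your distribution installed AWStats:

```shell
# Update the AWStats database for this site once an hour (crontab entry)
0 * * * * perl /usr/lib/cgi-bin/awstats.pl -config=netfactory.dk -update
```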

Step 3: Drop the access logs for long term intelligence

While access logs on the webserver may be the source for traffic intelligence, there are several options to track traffic through remote services.

Most of them are pretty good and if you’re interested in generic analytics, you should probably look at one of the many options available to do traffic tracking as a remote service.

Some of the options available include Google Analytics (which I use), StatCounter and several others. Isn’t it nice that someone else offers to keep all that historic data online – and in many cases absolutely free.

I still have access logs, but they’re used to (1) validate the data from Google Analytics and (2) keep an eye on what’s happening on the site “now”. Any data more than a week (or so) old only exists at Google Analytics…

Letting others feed the web for you

I follow a ton of sites on the web, but I don’t go for a morning surf through each and every one of them; I use an aggregator which checks the feeds from the websites and tells me where to go for news. I guess most people do this – using feeds to find updates and then visiting the site to check out the content.

This way of tracking sites has changed one important thing on this website – the most popular file on the site is no longer the front page, nor is it a particular page with a high Google ranking – it’s the feeds. Until recently, almost 25% of all inbound traffic was hits to the main feed URL.

While I do appreciate the traffic, serving a feed is more a necessity/convenience than something that adds value to the site itself – and wouldn’t it be quite nice if I could use the webserver resources for something better than letting aggregators know whether I’ve changed anything or not?

Well, guess what. I’m (almost) not wasting any server resources on feeds – FeedBurner handles that.

There really isn’t anything magical about doing this – FeedBurner is pushing more than a million feeds – but there are three reasons why you should let FeedBurner (or another feed service) push your feeds:

  • By using FeedBurner, I’ve moved a lot of traffic away from this server – less traffic and a lighter load on the server.
  • FeedBurner are presumably feed experts, and they probably ensure that the readers used by people tracking the site get the best possible feed.
  • Since FeedBurner Pro is free, I can even brand the feeds with my own domain name, so visitors don’t even know FeedBurner is serving them. My main feed lives at http://feeds.netfactory.dk/netfactory

There are a few other cool benefits – FeedBurner offers statistics on feed usage and widgets I can use on the website – but the three points above should be enough to get most blogs and small websites to at least consider using FeedBurner.
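One practical detail: visitors and aggregators may already have the local feed URL bookmarked, so the local feed should redirect to FeedBurner while FeedBurner’s own fetcher is still allowed to read the original. A hedged .htaccess sketch of that pattern – the `^feed` pattern assumes a WordPress-style feed path and may differ on other setups:

```apache
RewriteEngine On
# Let FeedBurner's own crawler fetch the original feed ...
RewriteCond %{HTTP_USER_AGENT} !FeedBurner [NC]
# ... and send everybody else to the branded FeedBurner feed
RewriteRule ^feed/?$ http://feeds.netfactory.dk/netfactory [R=302,L]
```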

Gmail filter feature wanted

I’ve been moving a fairly large part of my private mail to my Gmail account. Gmail does have some amazing features for searching, labeling and handling mail, and the virtually unlimited storage is also pretty cool – much cooler than keeping a huge mail archive on an IMAP server or on a local, fragile hard disk.

One of the more recent things I’ve started using Gmail for is backups of this site. With the wp-db-backup plugin for WordPress, a backup of the entire database is mailed to my Gmail account every 24 hours, and using filters I’m applying a ”backup” label and auto-archiving it (avoiding noise in the inbox).

While this works great, I’d like a filter that automatically deletes messages with a certain label once they’re older than, say, 7 days. This would let me remove old backups automatically – and also clear out mailing list messages (they usually have an online archive, so why waste Gmail space on a copy).

Date-based searching is already available in Gmail, so please, someone on the Gmail development staff, make it available in filters too…
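Until something like that appears in Gmail’s own filters, the clean-up can be scripted over IMAP, which Gmail exposes (labels show up as folders). A minimal sketch, assuming IMAP access is enabled on the account – the address, password and the “backup” label below are placeholders:

```python
import datetime
import imaplib

# Month names spelled out to stay independent of the system locale
MONTHS = ("Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")

def imap_cutoff(days, today=None):
    """Build the IMAP (RFC 3501) date string for 'older than <days> days'."""
    base = today or datetime.date.today()
    cutoff = base - datetime.timedelta(days=days)
    return "%d-%s-%d" % (cutoff.day, MONTHS[cutoff.month - 1], cutoff.year)

def delete_old_labelled(user, password, label, days):
    """Delete every message under a Gmail label older than <days> days."""
    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    imap.login(user, password)
    imap.select('"%s"' % label)  # Gmail labels appear as IMAP folders
    _, data = imap.search(None, "BEFORE", imap_cutoff(days))
    msgs = data[0].split()
    for num in msgs:
        imap.store(num, "+FLAGS", "\\Deleted")
    imap.expunge()
    imap.logout()
    return len(msgs)

# Usage (placeholder credentials):
#   delete_old_labelled("user@gmail.com", "password", "backup", 7)
```

Run from cron, this would keep the backup label trimmed to the last week – not as elegant as a native filter, but it gets the job done.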

GMail filter/spam buglike feature

Gmail from Google is usually great, but it does have some bug-like features.

One of the most annoying is when inbound mail that is tagged with a label and should have been auto-archived is instead caught by the spam filter and placed in the spam folder.
Discovering the mail in the spam folder is easy, but when you tell Gmail that the mail isn’t spam, it doesn’t pop into the archive (where the filter rule should have put it). It pops back into the inbox, and you can then archive it from there.

If Gmail can figure out that the mail ought to have been archived automatically, it should at least offer an easier way to run a filter on the inbox than having to edit the filter and reapply it.