18 March 2011
One of Einstein’s colleagues asked him for his telephone number one day. Einstein reached for a telephone directory and looked it up. “You don’t remember your own number?” the man asked, startled. “No,” Einstein answered. “Why should I memorize something I can so easily get from a book?”
In this day of Google, do you need to memorise anything?
19 January 2011
What else would you call the server that manages your 404 and other HTTP errors but the Sorry Server?
11 January 2011
Q. What is the height of optimism?
A. An Australian batsman putting on sunscreen.
Q. What is the main function of the Australian Coach?
A. To transport the Team from the hotel to the ground.
Q. Why don’t Australian fielders need pre-tour travel injections?
A. Because they never catch anything.
Q. What’s the Australian version of LBW?
A. Lost, Beaten and Walloped.
Q. What do you call an Australian with 100 runs against his name?
A. A bowler.
Q. What’s the most proficient form of footwork displayed by Ponting?
A. The walk back to the pavilion.
Q. Who has the easiest job in the Australian squad?
A. The guy who removes the red ball marks from the bats.
Q. What do Australian batsmen and drug addicts have in common?
A. Both spend most of their time wondering where their next score will come from.
Q. Why are Australian cricketers cleverer than Houdini?
A. Because they can get out without even trying.
Q. What does Ryan Harris put in his hands to make sure the next ball almost always takes a wicket?
A. A bat.
7 January 2011
Over the last couple of weeks I have put together some of my key learnings from the last nine months leading the architecture team on a migration project. There were a lot of firsts in this project for me, both in the problem space of the project and in leading a relatively large architecture team. It has been a good chance to reflect on the project and take some time to capture these learnings.
By three methods we may learn wisdom: First, by reflection, which is noblest; Second, by imitation, which is easiest; and third by experience, which is the bitterest.
Leading a team of architects presented several challenges, one being the simple problem of having multiple architects but only one architecture to document. A simple habit the team got into, which helped develop a culture of knowledge sharing, was the Architecture Review Meeting. At the core of the whole project we developed a Reference Architecture to tie everything back to. Last but not least, we used a technique from the agile world, the retrospective, to provide some continuous improvement as we moved from one part of the project to the next.
Doing a migration project was challenging. At the start it seemed that migration is just ETL, but as the project evolved we soon realised that a migration is much more than ETL. We had to tackle several problems, such as migrating a large volume of data, and the importance, but also the frustration, of having to define what would happen on the day. By looking at other migration projects we came across the technique of defaulting rather than excepting.
What a journey!
6 January 2011
As part of our fact-finding for the migration we were working on, we looked at several previous migration projects to learn from them. One idea that one particular migration had adopted was defaulting, rather than excepting out, any data that fails. Obviously the type of data you are migrating and the capabilities of the target system have a big bearing on how successful this strategy will be.
This isn’t a substitute for doing proper data profiling and associated data cleansing before the migration. When doing the trial runs you should also review their outcomes and do any data cleansing needed to fix data errors. By the time the actual migration occurs, the number of exceptions should be minimal.
The goal of this strategy is to get as much data as possible into the target system with each migration run. If you are doing a single migration, the chance of an exception that requires backing out the entire migration is greatly reduced. If you are doing multiple migrations, it reduces the cases where something fails, has to be backed out, and then has to be rescheduled into the next migration.
The first consideration is what type of data you are migrating and the types of errors that can occur during the migration. These errors can include:
- Data type errors
- Referential integrity violations
- Business rule violations
For each of the data entities you identify, you need to look at the errors that can occur and develop an appropriate “default” state that you can load failing records into. It could be as simple as creating a “draft” status for an order, in which no rules are enforced, and using that. It is best to make this status unique to the migration, as you will need to go over all the data placed in it and manually repair it.
The range of exceptions that can, and probably will, occur will be varied, but if you have done adequate profiling and given it enough thought, you should come up with appropriate scenarios. The key is to have a process in place to go through the entities that have been “defaulted” and make the necessary fixes to move them into a valid state.
The concept is relatively simple, but developing the business rules to apply will be the hard part. In effect it is just a mapping rule that says: if it doesn’t fit one of these, then shove it in here.
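As a minimal sketch of the idea (the validation rules, the status name and the field names here are all made up for illustration; the real ones depend entirely on your data and target system):

```python
# Sketch of "default rather than exception" during the load step.
# All rules, names and fields below are hypothetical examples.

DRAFT_MIGRATION = "DRAFT-MIGRATION"  # unique to the migration: easy to find and repair later

def validate_order(order):
    """Return a list of rule failures (hypothetical rules)."""
    errors = []
    if not isinstance(order.get("quantity"), int):
        errors.append("quantity must be an integer")      # data type rule
    if order.get("customer_id") is None:
        errors.append("order must reference a customer")  # referential integrity rule
    return errors

def load_order(order, target, repair_queue):
    """Load normally if valid; otherwise default to the draft status instead of rejecting."""
    errors = validate_order(order)
    if errors:
        order = dict(order, status=DRAFT_MIGRATION)   # no rules enforced in this status
        repair_queue.append((order["id"], errors))    # queue for manual repair after the run
    target.append(order)                              # either way, the record lands in the target

target, repairs = [], []
load_order({"id": 1, "quantity": 5, "customer_id": 42}, target, repairs)
load_order({"id": 2, "quantity": "five", "customer_id": None}, target, repairs)
# Both orders are loaded; only the second is queued for manual repair.
```

The point is the last line of `load_order`: nothing is excepted out, so the migration run keeps as much data as possible while the repair queue drives the manual fix-up process afterwards.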
5 January 2011
One problem with large projects is managing multiple architects working on the one architecture. You need to be able to decompose the problem space into chunks that you can assign to individual architects to focus on. Then there is the question of how to document the architecture: do you produce one architecture document or multiple architecture documents?
Unfortunately this seems to be one of those problems where there is no perfect solution. Whichever way you tackle it, you will get to the end not wanting to use that approach again because of the issues it introduced. From previous projects there seem to be three logical ways to tackle a problem like this:
- Have multiple architects editing the one architecture document
- Break the project into domains and write an architecture document for each domain
- Have an architect document a domain and then consolidate into a single architecture document
The first problem is defining the domains that people will work on. For the recent migration project I was on, we used a reference architecture to do this. Once you have the break-up of the problem space, you then have to deal with how to document it.
1. Have multiple architects editing the one architecture document
Pros:
- The architecture of the project is in one document
- No need to repeat content across multiple documents
- Easy to identify any gaps in the architecture
Cons:
- Complexity of multiple people editing a single document
- Domains don’t necessarily align to the document sections
- Different people have different writing styles
- Hard to track progress
2. Break the project into domains and write an architecture document for each domain
Pros:
- Architects are largely self-sufficient
- Easy to track progress
Cons:
- Have to read multiple documents to understand the architecture of the project
- Some content will need to be repeated across documents
- Hard to manage things like issues, risks and dependencies across documents
- Gaps in the architecture aren’t always apparent
3. Have an architect document a domain and then consolidate into a single architecture document
Pros:
- The architecture of the project is in one document
- Easy to track progress
- Architects are largely self-sufficient
- No need to repeat content across multiple documents
Cons:
- Different people have different writing styles, which shows when consolidating
For the recent migration project that I was on, we went with the third option: have an architect document a domain and then consolidate into a single document. We produced a lite version of the architecture document that we called a Solution Brief. This was just a cut-down version of the architecture document containing only some of the sections, but at the end of the process we were able to simply cut and paste the content of the Solution Briefs into the single architecture document.
Look, it wasn’t perfect, but I think this process was the best of the three. The big learning was to define the domains that people are working on well; if you don’t get this right up front, you are constantly addressing scope issues for the domains you have defined.
4 January 2011
Unlike a normal project, where you are delivering a technical solution that has a life of years or even decades, a migration can be a one-off event. As the migration project forms, there is a lot of interest in understanding what will happen “On the Day”. A migration is usually a critical event that will impact a large portion of the organisation, including people involved with the old systems as well as the new system being migrated to.
One of the key questions we needed to answer has to do with the expected duration of the migration. Although the actual amount of time needed is hard to define at this level, we can identify the dependencies that drive when things can occur. The first key point is to understand when you could start the migration: when does the business want to take the current systems offline, and in what state do the new systems need to be to receive the data? Is there some type of End of Day activity that has to occur before you can start? Once you have figured out when you can start, there is a standard set of technical steps: Extracting, Transforming and then Loading the data.
Then there is a step where you need to reconcile the data that has been loaded to confirm that everything occurred as it should have. This is a complex task to define early in the project, because the business will find it hard to define what they need to reconcile. The first stab will be to reconcile basically every field, or pretty close to it. It really isn’t until you are able to perform your first trial migration that you can narrow down what needs to be reconciled and at what level. I think there is an aspect of learning to trust something that hasn’t been built yet. It isn’t until you do a migration and there is a report that says “I migrated 100,000 orders and there are 100,000 orders in the new system” that you can delegate that responsibility to a piece of software.
Reconciliation will have a large bearing on how long the migration will take on the day. In the first trial migration you perform, the reconciliation step will take a long time: there will be no time pressure, as it is a trial, and there will be a natural tendency to be ultra cautious. Completing this initial trial migration is a good checkpoint at which to review the requirements for reconciliation. I know a waterfall methodology wouldn’t cope with this approach, but I think it is critical to ensuring the reconciliation is successful. It will also allow constructive conversations about how much information needs to be visible versus presented in summary, to make the reconciliation more efficient. At the end of this review you should be able to come up with a straw-man schedule for the reconciliation and its planned duration.
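That first coarse cut of reconciliation can be as simple as comparing record counts per entity between what was extracted and what landed in the target system. A rough sketch, with made-up entity names and counts:

```python
# Count-level reconciliation: the crudest useful check after a migration run.
# Entity names and counts are invented for illustration; later trial runs
# narrow this down to field-level checks only where they are actually needed.

def reconcile(extracted, loaded):
    """Return (entity, extracted_count, loaded_count) for every mismatch; empty list = clean run."""
    mismatches = []
    for entity, count in extracted.items():
        if loaded.get(entity, 0) != count:
            mismatches.append((entity, count, loaded.get(entity, 0)))
    return mismatches

extracted = {"orders": 100_000, "customers": 25_000}   # counts from the extract step
loaded = {"orders": 100_000, "customers": 24_998}      # counts queried from the target system
print(reconcile(extracted, loaded))  # [('customers', 25000, 24998)]
```

A summary report like this is what eventually lets the business delegate the “did everything arrive?” question to software, reserving manual effort for the entities that don’t balance.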
Laying out the migration, at a high level you will have the following activities: any end-of-day processing on the source systems, extract, transform, load, and then reconciliation.
At some stage through this process you will have to identify what we referred to as “The Point of No Return”: the point in the migration past which you cannot abort and simply re-enable the old system. The architecture of the migration will, to a large degree, influence where this point falls, and the goal is to make it as late as possible. In conjunction with the Point of No Return there will need to be a business Go or No Go decision on the migration. The system constraints that influence this are unique to each migration, so they will vary greatly. The key is to be aware that this point exists and to identify and communicate it early.
What will happen “On the Day” is important, but it is also something that many people will influence, so it will evolve. I have outlined a high-level process above; use it or create your own, and progressively define the details of each step as you go. The duration of the migration will be a key question and, as discussed in Migration is just ETL, you need to identify points in the architecture where you can tune for performance.
31 December 2010
On an agile project a few years back I got to experience the retrospective first-hand. The project ran one at the end of each iteration, and it was an easy-going meeting over a beer, last thing on the Friday. Mostly as an observer, I found it to be an effective meeting where concerns could be raised in a constructive manner. It was also good to see the effect these meetings had on behaviours on the project, with issues raised in one retrospective becoming positives in a future one.
On the migration project that I have been working on, we hit a junction going from Part A of the project to Part B. I wanted a way to check in with the team and get some feedback on how things were going, and a retrospective-format session seemed like the right way to do this. The first challenge I had was coming up with the right questions to ask; after much searching I settled on the following:
- What one aspect of this project would you keep unchanged?
- What one aspect of this project would you change if you could?
We ran the meeting by going around the room, with each person having to provide an answer to the question being asked. This was to ensure an equal contribution from everyone on the team. We went around the room twice for each question, to get a minimum of two answers from each team member. After the second time around, we opened it to the floor for other suggestions.
One rule I imposed was that we weren’t there to solve the problems in this forum, just to raise them. Unfortunately we all have a tendency to want to solve every problem. By keeping it to just raising issues, we were able to keep the session to 60 minutes.
Coming out of the meeting it wasn’t necessary to have a follow-up meeting and develop a 10-point plan to address all of the issues, blah blah blah. A retrospective is good in that it provides a checkpoint to look back (retrospectively) at part of the project and get different opinions on how it went. Just taking the time to hear everyone’s thoughts causes a change in behaviour. Doing it as a one-off probably hasn’t caused as big a change as doing it weekly or fortnightly would, but feedback has been positive.
Most waterfall projects I have been on do post-implementation reviews (PIRs); the problem with them is that they happen after the event. It would be good to find a way to incorporate the retrospective into a project while it is running. That may be a challenge for the next project I am on.
30 December 2010
One of the big problems when performing a migration is moving a large volume of data around the network. This may have to occur during normal business operation if you are dealing with an organisation that runs 24×7, or you may have exclusive use of the network out of hours. Either way, you could be moving a large amount of data across the network. Transferring 1 GB of data will take about 15 minutes over a 10 Mbps network.
Remember also that you will need to execute the migration multiple times throughout the project. In the early stages this will just be with test data, but you will need to run it several times with production data during mock runs.
The first scenario is that you just wear the time cost of moving the data off your production systems onto the migration platform. This may or may not be acceptable, depending on what outage the business will accept for the entire migration. One alternative here is to increase the bandwidth of the network: if you increase it from 10 Mbps to 100 Mbps, that same 1 GB of data would take only about 1 min 30 sec to transfer.
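The back-of-envelope arithmetic behind those figures is straightforward. This sketch assumes ideal throughput; real networks add protocol overhead, which is why the figures above allow a little headroom over the raw numbers:

```python
# Raw transfer time for a given volume of data over a given link speed.
# 1 GB is treated as 8000 megabits (decimal units); real-world transfers
# will be slower due to protocol overhead and contention.

def transfer_minutes(gigabytes, mbps):
    megabits = gigabytes * 8 * 1000   # bytes -> bits, GB -> Mb
    return megabits / mbps / 60       # seconds -> minutes

print(transfer_minutes(1, 10))    # ~13.3 min raw; call it 15 min with overhead
print(transfer_minutes(1, 100))   # ~1.3 min raw
```

Running the same calculation over your actual data volumes early in the project gives a quick sanity check on whether the planned outage window is even feasible.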
Another way of tackling this problem is to reduce the amount of data you have to transfer on the day of the migration by pre-migrating some of it. Some data can be categorised as read-only and so can be migrated days or weeks before the actual migration occurs. The key here is to identify this data: it could be reference data, orders that have already been delivered, and so on. Once you have identified the data that can be migrated early, the migration on the day only needs to move the data that hasn’t already been migrated. How significant the saving is depends on the nature of the data being migrated.
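The split itself amounts to a simple partitioning rule over the data. A sketch, where the read-only test (a status check on orders) is a made-up example of the kind of criterion you would identify:

```python
# Partition records into "safe to pre-migrate" vs "must move on the day".
# The is_read_only predicate here is a hypothetical status check; the real
# criteria depend on profiling the data (reference data, closed orders, etc.).

def partition(records, is_read_only):
    early, on_the_day = [], []
    for record in records:
        (early if is_read_only(record) else on_the_day).append(record)
    return early, on_the_day

orders = [
    {"id": 1, "status": "DELIVERED"},  # closed order: safe to migrate weeks early
    {"id": 2, "status": "OPEN"},       # still changing: must move on the day
]
early, on_the_day = partition(orders, lambda o: o["status"] == "DELIVERED")
```

Everything that lands in the early bucket comes off the critical path for the outage window.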
Change data capture (CDC) provides a pattern where a copy of the data is taken, usually from a backup or by other means, and then kept in sync by replicating changes to the copy. First, the copy of the data from the source system can be obtained in a non-invasive manner by restoring an offline backup. Then the tool will generally use the source system’s log to capture the changes to replicate to the copy. This puts a negligible performance load on the source system, so it should not impact the production running of the source system, although how the vendor has implemented CDC will determine the actual impact.
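The pattern can be illustrated with a toy in-memory version: seed the copy from a backup, then replay logged changes to keep it in sync. Real CDC tools read the database’s transaction log; everything below is simplified for illustration:

```python
# Toy change-data-capture loop. A real tool tails the source database's
# transaction log; here the "log" is just a list of (op, key, row) tuples.

def apply_change(copy, change):
    """Replay one logged change against the copy."""
    op, key, row = change
    if op in ("insert", "update"):
        copy[key] = row
    elif op == "delete":
        copy.pop(key, None)

source_backup = {1: {"name": "alpha"}, 2: {"name": "beta"}}
copy = dict(source_backup)            # step 1: restore an offline backup (non-invasive)

change_log = [                        # step 2: changes captured from the source's log
    ("update", 1, {"name": "alpha-v2"}),
    ("insert", 3, {"name": "gamma"}),
    ("delete", 2, None),
]
for change in change_log:             # step 3: replay to keep the copy in sync
    apply_change(copy, change)
# The copy now matches the live source without ever re-querying it in bulk.
```

The bulk transfer happens once, from the backup, off the production system; only the trickle of changes crosses the network after that.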
There are a few alternatives for dealing with the problem of moving a significant volume of data as part of a migration. As outlined previously, migration is much more than ETL, and the mapping rules that tell you what data to migrate and which systems to get it from won’t be known until relatively late in the project. So from an architecture perspective you may decide on an approach to move the data, but you may also need an alternative strategy in reserve if you find the timings are too long.