By Brad Egeland, originally published on BradEgeland.com
No matter how smoothly things have gone on your project up to this point, as you approach implementation you'll feel at least two things: relief and anxiety. Relief because it's nearly over and you're still moving forward. Anxiety because it's nearly over and there may be something (or several things) that you've overlooked. How you manage the project, including your team and your customer, can go a long way toward easing that common anxiety and the "have I done everything I need to do?" moment of truth. But there are things you know deep down that you must do in order to feel truly comfortable as you roll out the new tech solution to your anxiously awaiting customer and their end users.
While there are many official checklists you can track down on the internet, I'll give you my personal checklist of five key areas to focus on that will help ensure you are rolling out a fully prepared and effective solution to your project client.
Review the project schedule. Give the project schedule a solid review…with your team. Go over all tasks to ensure they are complete, check deliverables and milestones, and compare it one more time to the original statement of work to ensure you have covered everything that was discussed around project kickoff. You do not want to get to deployment only to find that you've omitted a key report or piece of functionality. That should have been fleshed out during user acceptance testing (UAT), but it isn't always; I've never met a customer who was a pro at UAT.
Check for all approvals. This may be more of a “cover all your bases” gesture, but make sure you have all official, signed approvals in place for all key project deliverables. They may come in handy post-implementation if there is any question at all as to your delivery team’s performance on the project.
Review UAT results. Check the UAT results one more time and make sure you have the proper UAT approval and official sign-off in place. That’s critical because it is your client’s statement that they have fully tested the solution against their requirements and test cases and have ensured that it is performing properly and producing the desired results. Never move toward deployment without this in place.
Performance test the whole solution. This one is key. Almost any solution can be made to work in the client's environment (there are exceptions, I guess, but I hope you spot those very early on), but not all will perform to the needs of your project client. I was called in to take over a large implementation for JP Morgan Chase because our solution couldn't get past this point: transaction times were still unacceptable and we were one step away from deployment. It took a two-week onsite gathering in a war room, with several expensive techs involved, to get through testing and improve processing times to more acceptable levels, and we still fell outside what the customer really wanted. They signed off only because they were tired of the process and didn't see an acceptable end result coming in the near future.
There are solid performance testing platforms out there to help you get through situations like this. Take Appvance, for example. Their platform tests the full beginning-to-end experience, from the presentation layer to the back end, rather than the simple protocol-level tests other tools run, and it integrates server stats into its reports alongside UX transaction times. After my experience with JP Morgan Chase, I researched these types of solutions and found that no other solution can instantly ramp from 100 to 10 million real browsers, on local or remote machines, to simulate actual users from beginning to end the way Appvance can. And testing exactly what users will experience, even when the app is loaded with users, is critical now that so much code is written on the client side.
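If you just want a rough first read on transaction times before bringing in a full testing platform, a basic protocol-level load test is easy to script yourself, with the caveat noted above that it won't capture client-side rendering time the way browser-based tools do. Here is a minimal sketch using only the Python standard library; the URL, request count, and concurrency values are placeholders you'd tune for your own environment:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def load_test(url, n_requests=50, concurrency=10):
    """Fire n_requests GETs at url with the given concurrency.
    Returns a sorted list of per-request latencies in seconds.
    This measures protocol-level response time only, not
    client-side rendering."""
    def one_request(_):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()  # drain the body so timing covers the full transfer
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return sorted(pool.map(one_request, range(n_requests)))

def percentile(latencies, pct):
    """Nearest-rank percentile of an already-sorted latency list."""
    index = min(len(latencies) - 1, int(len(latencies) * pct / 100))
    return latencies[index]
```

Reporting the 95th percentile alongside the median (for example, `percentile(results, 95)` vs. `percentile(results, 50)`) gives a far better picture of what users actually feel under load than an average does, since averages hide the slow tail that customers complain about.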
Conduct a one-on-one with the sponsor. Finally, conduct a one-on-one with the project sponsor. I'm not saying that I always do this, but I should, and I always wish I had. This is not a lessons learned session…you still need to conduct one of those near or post-implementation to gather key information on perceived good and bad performance and experiences on the engagement. This is more an informal "we are at the end, how do you feel about this?" type of discussion and a chance for you, as the project manager, to get ideas of potential future needs from the project client. I've often turned these discussions into lucrative next phases of consulting engagements…don't skip this opportunity, but don't make it feel like a sales situation.
No one action (or even all five) will guarantee project success and fully ensure that your solution is ready for the real world. I don't think I've rolled out one IT project that has not had a few hiccups shortly after implementation. That's why you usually keep the project team intact (or at least guarantee their availability) for 15 to 30 days post-implementation before a full handoff to tech support. Trust me, it helps keep customer satisfaction higher and ensures a smoother transition to the support group.