
Planning a Customer Product Roll-Out?

7 Jan

First, Walk in Their Shoes

(So you don’t have to swim in them)

Joseph Konrad
Senior Business Analyst


When the Titanic’s captain was asked how her maiden voyage would go, he reputedly said, “Swimmingly.” Her construction was considered the acme of civilian ship architecture, and her voyage in 1912 carried a Who’s Who of high-society Americans. (Many had been in Britain for a year experiencing the glamour of a coronation and, more importantly, marrying impoverished nobility willing to confer a title in return for refilling the empty family coffers.) This new maritime product’s roll-out was to be every bit as confident as its construction, expressing an ideal captured in the Japanese word ‘shibumi’: effortless perfection. Everything would perform well by its nature.

Instead, in the epitome of disaster, we got a Leonardo DiCaprio film.

Step One: Boring, Methodical Preparation

Success may be so uneventful as to look easy, even banal, but failure can be spectacular. Given the choice, I’ll take banality.

In creating a product and introducing it to its customer, a desire for effortless perfection can’t hide that we live in the world of Murphy’s Law, and supreme confidence is no legal defense. Reality demands another approach at every step of an endeavor: rigor. It may not produce even the appearance of ‘shibumi’, but it can keep us from going to the bottom. This form of QA may be doggedly dull, but it’s also essential to successfully creating and deploying a product for a customer.

This imperative was on display during the spring, summer and fall of 2012, when, at the request of one of our most important and growing relationships, Fortigent expanded its suite of reports to include more fixed income data, income projections for both bonds and equities, and the use of sector/industry data for equities.

When the new reports were being developed, numerous constituent parties had to be identified, their relationships understood, and their efforts organized. Within Fortigent, this endeavor included our consultants interacting with the outside customer. It needed people with industry and business-analysis knowledge (helping to design specifications) and developers (creating the new reports). Expansive integration testing meant growing the test team beyond its normal core of professionals to include consultants, Operations team members, developers and analysts. The final test was (and remains) the ultimate user acceptance test, by the customer.

Through it all, we worked to follow an orderly plan (tailored as events required) and to keep constituents positively engaged. We had to know who the customers were for each component. If the report developer needed improved report requirements, she was my customer and it was my job to provide them. To the suppliers of vital meta-data that feeds these report beasts, it’s we who are the customers. To the developer of the data integrity controls that help Operations shoulder responsibility for the data garden, Operations was the customer.

Step Two: Boring, Methodical Expansion

Our initial thrust was outward, to one customer. Now these reports will be made available to another of our largest customers. It’s become a redeployment process.

On this second cruise we start by identifying the prize: as trouble-free a roll-out of this new product to the customer as possible. A satisfying experience means keeping the customer engaged, educated, and happy with their initial and subsequent experiences of the new tools.

How do we ensure that last bit? By pretending to be the customer ourselves and living their experience through a dry-run, by taking a walk in their shoes. We identified 20 of the most sizable composite portfolios for which these reports will be run, created sample report packages for them, and started running them as of a recent date. We surveyed the quality of the meta-data associated with the securities. Were their ratings ready? Their effective durations? Was call schedule data up to date?

The first reports were surprisingly good, but had rough edges. Where we found gaps, we filled them, and simultaneously improved our internal QA tools to keep them filled. After upgrades, the second batch was better, and we started looking for the non-obvious at the granular level. Starting big, we worked our way down into the nitty-gritty of the customer experience. Anything we found benefited not only this second customer, but meant an incremental gain for the first as well. If you have to make mistakes, never make the same one twice. Make educational new ones.

Soon it will be time to bring the customer in more directly. We’ll prepare a real-world demonstration with real client data and engage in energetic interaction, fielding questions, capturing ideas, and creating expectations both realistic and positive. Finally, as confidence in data quality and its utilization rises, there will be more formal training and education: Here’s how this works. What do you think? What do you need? Can we make it better? The second customer, like the first, will be a partner, not a passenger.

In effect, we want the go-live date for our customer to be ho-hum. In fact, if we have done our part rigorously, they will have been effectively ‘live’, without knowing it, even before the roll-out date.

Step Three: Boring, Methodical Follow-through

Something unexpected will always happen. It probably won’t be a shipwreck, but internal redundancy expects – in fact, demands – that every effort be made to find problems with data quality or utilization long before an external customer might. And when snags are found, it means having a trained, experienced team to work the problem, inform the customer, gather feedback, and energize the solution at every step of their experience.

These are living products and long-term commitments. They don’t get sold, then forgotten. Each successive deployment, each ‘sale’, becomes easier until the new feature is part of the essential fabric of our product offering.

Methodical analysis. Thorough planning. Rigorous multi-disciplinary testing. Flexible thinking and resource allocation. Customer engagement. Committed, energetic follow-up.

These are all essentials in the rigorous alchemy of customer satisfaction. It may not be effortless perfection – these things take work – but it can be a comfortable and profitable non-event, free of icebergs.


Joseph Konrad started at Marine Midland Bank (extinct) as a trust assistant. From there he worked in portfolio accounting, performance measurement, trading and operations management before joining Fortigent as a business analyst.

Rapid Development at Fortigent

2 Jul

by Andriy Volkov

The key to Rapid Development at Fortigent is our streamlined Software Factory. By “software factory” I mean the whole set of mechanisms, tools and processes involved in taking ideas from inception to production. A streamlined software factory is one that creates no obstacles, first and foremost for developers.

The point is to make the software easy to improve. If this most fundamental of all qualities is present, all the other -ilities (usability, scalability, performance, resource utilization, functional richness) will catch up. Optimizing our Software Factory for iterative, incremental improvements has allowed Fortigent to reduce the cost of mistakes and minimize the risk associated with innovation.

Out of the endless variety of Best Practice memes floating on the Web, here are some of the main points we came to appreciate:

Continuous Integration

At the heart of Fortigent’s Software Factory is our continuous integration (CI) loop. Its purpose is to create the shortest possible feedback cycle for development to feed on. The traditional selling point of CI is its ability to catch conflicting changes early, but the real benefits of CI are far more fundamental than that.

At Fortigent, CI is not just a common development environment; it is a set of automated processes designed to break easily. Continuously rebuilding the entire software stack (including the database!) from source code creates constant pressure to keep our software in a known, functional state. The result is software that is always ready for release, allowing our team to react quickly to any new requirement or change in priorities. This is the very definition of Agile.
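
To make “designed to break easily” concrete, here is a minimal sketch of the kind of check that can run right after the stack is rebuilt. The database name and connection string are invented for illustration; the point is that any failure in the rebuild turns the build red within minutes.

```csharp
using System.Data.SqlClient;
using NUnit.Framework;

// A sketch of a "break easily" CI check. "FortigentCI" and the connection
// string are hypothetical; any rebuild failure fails this test immediately.
[TestFixture]
public class RebuiltStackSmokeTest
{
    private const string RebuiltDb =
        "Server=(local);Database=FortigentCI;Integrated Security=true";

    [Test]
    public void RebuiltDatabase_IsReachableAndHasTables()
    {
        using (var conn = new SqlConnection(RebuiltDb))
        {
            conn.Open(); // a broken rebuild fails right here: red build

            using (var cmd = new SqlCommand(
                "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES", conn))
            {
                Assert.Greater((int)cmd.ExecuteScalar(), 0,
                    "The rebuilt database contains no tables.");
            }
        }
    }
}
```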

Build Automation

Our TeamCity build automation server hosts more than 80 build configurations. About a third of those run on every code change. There are builds doing .NET compilation and unit testing, linting JavaScript, running QUnit and WatiN tests, creating MSI installers or NuGet packages, and deploying applications to staging environments.
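
As a rough illustration of what one such configuration does (the solution, test assembly, and package names are made up for this sketch, and the real steps live in TeamCity build configurations rather than C# code), a compile-test-package pipeline boils down to a few tool invocations:

```csharp
using System;
using System.Diagnostics;

// A sketch of one build configuration's steps as a small C# driver.
// All file and project names below are illustrative placeholders.
class BuildSteps
{
    static void Run(string tool, string args)
    {
        var process = Process.Start(new ProcessStartInfo(tool, args)
        {
            UseShellExecute = false
        });
        process.WaitForExit();
        if (process.ExitCode != 0)
            throw new Exception(tool + " " + args + " failed.");
    }

    static void Main()
    {
        Run("msbuild",       @"Fortigent.sln /p:Configuration=Release");
        Run("nunit-console", @"Fortigent.Tests\bin\Release\Fortigent.Tests.dll");
        Run("nuget",         @"pack Fortigent.Reports\Fortigent.Reports.nuspec");
    }
}
```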

Ad-Hoc Environment Allocation

We have 5 identical environments closely mimicking our production setup, named ENV1 through ENV5. Why are they identical? So we can install the exact same binary package to any of them! This saves a huge amount of energy otherwise spent managing all those installations and config files.

Different versions of the same application may be automatically deployed to multiple ENVs. For example, an actively developed application may have its trunk build pushed to ENV3 while its more stable production maintenance branch is deployed to ENV4 and ENV5 to facilitate integration testing of other projects.
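
Here is a minimal sketch of why identical environments pay off. The “TargetEnv” setting and the “sql-ENVn” host-naming convention are assumptions for illustration, not our actual configuration; the idea is that one configuration value selects the environment and everything else follows by convention:

```csharp
using System.Configuration;

// A sketch: one appSetting picks the target environment, and the same
// binary derives the rest by convention. Key name and host naming are
// invented for this example.
static class EnvironmentConfig
{
    // e.g. <appSettings><add key="TargetEnv" value="ENV3" /></appSettings>
    public static string TargetEnv
    {
        get { return ConfigurationManager.AppSettings["TargetEnv"]; }
    }

    public static string DbConnectionString
    {
        get
        {
            return "Server=sql-" + TargetEnv +
                   ";Database=Fortigent;Integrated Security=true";
        }
    }
}
```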

Database Change Management

Since our databases are always in flux, we had to devise a process that keeps all those changes under control without restricting the developers’ freedom to write arbitrary SQL scripts to solve their data and schema migration problems. With this in mind, you can appreciate our database change management process, which revolves around an idea I call “delta script queues”.

A delta script queue is really just a directory in our Subversion repository, but here’s what makes it a queue: every time a new database change is required, it’s coded up in SQL and the script is added at the end of the queue. Before the change appears in production, it is deployed to several development databases, and because those get reset on a weekly basis, all the SQL scripts run again and again, always in the same sequence. To avoid surprises, a script already in the queue is considered immutable, but its effects can always be offset by adding a subsequent script. When the changeset is finally ready to go, the queue is flushed by executing it on production, and a new queue is started for the next changeset.
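
Here is a sketch of what flushing a queue amounts to. The directory path and connection string are placeholders, and real tooling would also split scripts on “GO” batch separators, which SqlCommand cannot execute directly. Because scripts apply in filename order, the directory listing itself is the queue:

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

// A sketch of flushing a delta script queue against one database.
class DeltaScriptQueue
{
    static void Flush(string queueDir, string connectionString)
    {
        // Filename order defines queue order, so committed scripts stay
        // immutable and corrections go in as new scripts at the end.
        var scripts = Directory.GetFiles(queueDir, "*.sql").OrderBy(f => f);

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            foreach (var script in scripts)
            {
                Console.WriteLine("Applying " + Path.GetFileName(script));
                using (var cmd = new SqlCommand(File.ReadAllText(script), conn))
                    cmd.ExecuteNonQuery();
            }
        }
    }
}
```

Because an existing script is never edited, every database the queue has ever been applied to converges on the same state.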

One of our most important CI processes is “BoatSync/DbBuildBase”. The BoatSync process scripts the production database schema daily and checks any changes into source control. DbBuildBase rebuilds all 5 of our production databases from these scripts, then applies any dev-in-progress scripts from the delta script folders. The resulting database files are published to a network share. Test builds that rely on a database start by downloading the files and reattaching them to a local SQL Server instance. This allows every test to run against a fresh copy of the database, with all production and development changes synced up!
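
The “download and reattach” step looks roughly like the following sketch; the share path, file names and database name are placeholders, not the real ones:

```csharp
using System.Data.SqlClient;
using System.IO;

// A sketch of the fresh-database-per-test-run step: copy the published
// files from the network share, then attach them to the local instance.
class FreshDatabase
{
    static void AttachFromShare(string shareDir, string localDir)
    {
        foreach (var file in new[] { "Fortigent.mdf", "Fortigent_log.ldf" })
            File.Copy(Path.Combine(shareDir, file),
                      Path.Combine(localDir, file), true);

        using (var conn = new SqlConnection(
            "Server=(local);Database=master;Integrated Security=true"))
        {
            conn.Open();
            var attach = string.Format(
                "CREATE DATABASE Fortigent " +
                "ON (FILENAME = '{0}'), (FILENAME = '{1}') FOR ATTACH",
                Path.Combine(localDir, "Fortigent.mdf"),
                Path.Combine(localDir, "Fortigent_log.ldf"));
            using (var cmd = new SqlCommand(attach, conn))
                cmd.ExecuteNonQuery();
        }
    }
}
```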

Test Driven Development

Nested within the CI loop is the Red-Green-Refactor cycle of TDD, running on each individual developer’s workstation. We strive to keep both loops as short as possible, and to have each individual iteration contain as few changes as possible. Minimizing the number of “balls in the air” (i.e., broken code) at any given time helps our developers stay in control, avoiding the mental “stack overflows”, the panic, and the cowboy-coding episodes that follow.

Our software project mechanics are designed to encourage the “test-first” coding style. Whenever possible, the unit tests run against a local database, reconstructed from production schema, with recent changes applied. Every application being developed is configured to run locally, without having to push a build to a common “development environment”.
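
To make the rhythm concrete, here is a toy red-green example. The Portfolio and Holding classes are invented for this illustration, not actual Fortigent domain code; the test is written first and fails, then the minimal production code below it turns the bar green:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// A toy red-green-refactor illustration with invented domain classes.
[TestFixture]
public class PortfolioTests
{
    [Test] // written first: red until Portfolio exists and sums correctly
    public void MarketValue_SumsQuantityTimesPriceAcrossHoldings()
    {
        var portfolio = new Portfolio();
        portfolio.Add(new Holding(10m, 25.00m));  // 250
        portfolio.Add(new Holding(5m, 100.00m));  // 500

        Assert.AreEqual(750.00m, portfolio.MarketValue);
    }
}

// The minimal production code that turns the test green:
public class Holding
{
    public Holding(decimal quantity, decimal price)
    {
        Quantity = quantity;
        Price = price;
    }
    public decimal Quantity { get; private set; }
    public decimal Price { get; private set; }
}

public class Portfolio
{
    private readonly List<Holding> _holdings = new List<Holding>();

    public void Add(Holding holding) { _holdings.Add(holding); }

    public decimal MarketValue
    {
        get
        {
            decimal total = 0m;
            foreach (var h in _holdings)
                total += h.Quantity * h.Price;
            return total;
        }
    }
}
```

With the test as a safety net, refactoring the summation (say, into a LINQ expression) is a zero-risk follow-up step.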

Challenges and Next Steps

It took us years to get from where we were to where we are, but we are still far from where we’re heading.

Here are some of the challenges we’re thinking about now:

  • Our legacy projects still rely on project references for dependency management. We need to finish the process of NuGet-ization, switch everything to binary references, and stop committing binaries to source control.
  • Our Subversion repository is huge. Perhaps we should migrate to a “repository-per-project” model? Should we adopt Git or Mercurial for new projects?
  • Our RDBMS-centric architecture is reaching its limits. We are thinking along CQRS lines, with primed caches backing the reads and message-driven worker services handling the cache misses and the writes (see the sketch after this list). This should make our logic less query-heavy, which would eliminate the need for an ORM and reduce the number of unit tests requiring a live database connection.
  • Our backlog-management and work-initiation processes are still pretty immature, despite our partial success with Kanban.
  • We need a lot more automated regression and integration testing done at the UI level.
  • In general, we need to increase our test coverage and tighten up our TDD. While a few of us have experimented with behavior-driven development, that whole area lies largely unexplored.
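
For the CQRS item above, here is a minimal sketch of the shape we have in mind; every type in it is hypothetical. Reads come from a primed cache, while cache misses and writes become messages for a worker service:

```csharp
using System.Collections.Concurrent;

// A sketch of a CQRS-style read model: a primed cache answers reads,
// and misses turn into commands for a worker service to process.
// IMessageQueue and RebuildBalance are invented for this example.
public class AccountBalanceReads
{
    private readonly ConcurrentDictionary<int, decimal> _cache =
        new ConcurrentDictionary<int, decimal>();
    private readonly IMessageQueue _queue;

    public AccountBalanceReads(IMessageQueue queue) { _queue = queue; }

    public decimal? GetBalance(int accountId)
    {
        decimal balance;
        if (_cache.TryGetValue(accountId, out balance))
            return balance; // fast path: no query, no ORM

        // Miss: ask a worker to rebuild this entry; the caller retries later.
        _queue.Send(new RebuildBalance { AccountId = accountId });
        return null;
    }

    // Worker services call this once they have recomputed a balance.
    public void Prime(int accountId, decimal balance)
    {
        _cache[accountId] = balance;
    }
}

public interface IMessageQueue { void Send(object message); }
public class RebuildBalance { public int AccountId { get; set; } }
```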

The key to our success so far is a philosophy of continuous improvement. Our development process wasn’t handed down from a mountain top on stone tablets. Instead, it evolved over time as we saw opportunities for improvement. Our team is a true team of peers: architecture and process aren’t defined by a single team lead, and any team member can contribute to improving the process or suggest an architectural change. Proposed changes often lead to water-cooler discussions about the best approach, but a new process or technology that is accepted by the team will be adopted immediately and will often be in our production code within days.

One thing that unites our development team is a love for the craft of software development. Although at times we’ll have intense debates about the path forward, we have a shared goal of developing quality code that we can maintain for years to come.

Extreme flexibility to adopt new technology and processes allows us to ride a wave of continuous improvement. While that can be challenging at times, and we’re all learning new things every day, it’s allowed us to create value by using the best that modern software engineering has to offer.


Andriy Volkov is a Senior Developer at Fortigent and has been around since CGA snow. He’s got a degree in computers. He is a Continuous Improvement dude. Besides beating the CI drum, he likes sencha, LPs, backpacking, and meditation. You can follow him on twitter @zvolkov.

Work at Fortigent

26 Jun

Are you a passionate developer who believes in software craftsmanship?

If so, Fortigent is looking for developers like you.

We are always looking for top-notch software developers and testers.

Fortigent is a Rockville, Maryland-based financial services firm offering specialized investment management, portfolio advisory, and performance measurement services to high-net-worth individuals and advisor clients.

We offer a developer-friendly agile development process, healthy work/life balance, respect for industry-accepted best practices, and an opportunity to express your ideas and demonstrate your excellence in a small team environment where your voice will be heard.

If you would like to join the Fortigent Development Team, submit your resume, along with your answer to the question “What characterizes good code in your opinion? What characterizes bad code?”, to guru.rao@fortigent.com

Read more about the Fortigent Development Team here.

And don’t forget to follow us on twitter, @fortigentdevs