Archive | July, 2012

Getting There from Here

20 Jul

by Brett W Green

How do we get there from here?  That’s a question engineers ask themselves a lot. The sequence of questions that leads us to this important inquiry usually goes like this:

  1. What do we not like about our current solution?
  2. If we could do it all again, what would it look like?
  3. How do we get there from here?

In the process of getting to the answer, we usually find ourselves having to make a lot of compromises.  The “perfect” solution, if such a thing exists, is typically very difficult to achieve either due to cost, risk, hardware limitations, resources, or a host of other impediments.

A practical engineer is a good engineer, however, and the good ones will try to (a) do something that feels at least somewhat “elegant” and/or (b) do something that moves along the path toward the so-called perfect solution.

Often, in a time crunch, we are nonetheless forced into doing something we’ve taken to calling “hackalicious”… a completely inelegant change to code that solves the problem and lets you move on to something more important at the time.  Any engineer who says they’ve never hacked something is lying.  I wager that every software system on Earth (and in orbit, for that matter) has at least one piece of code its engineers would describe as a hack.  Hacks work, or we wouldn’t use them, but they are typically very difficult to work around or change and, if a system has too many, can render the entire system a “blower upper” rather than a “fixer upper”.

We’re constantly in the process of evaluating our software systems here at Fortigent to see what kinds of changes we’d like to make in both the short and long term across all of our various systems both internal and external.

Some of the changes are made to ensure that our systems can scale, both because of a growing client base and because the Earth revolves around the Sun.  Every month we add nearly 750,000 transactions to our system.  A system designed for the volume we had 3 years ago may not be adequate today.

Other changes are made to improve performance of the system both for end-users and internal processes. This can involve smaller tactical changes to improve the speed of one part of the system,  simple hardware upgrades, and sometimes larger architectural paradigm shifts.

Finally, we make changes to make it easier for us to make changes! An accumulation of small changes, a lack of automated testing, and poorly designed architecture can lead to parts of the system that are very resistant to change, and therefore cost us more money to maintain and make it slower for us to deliver new features to market.  These types of changes are typically tough decisions to make.  You’re often forgoing some short-term gain for a payoff that only arrives in the long term.  A team that never makes this trade will eventually arrive at the “blower upper” phase.  A team that makes it too often is probably misaligned with the business.

Here’s a “hit list” of the larger changes we are looking at through the end of this year and into 2013:

Infrastructure Changes:

  • Moving targeted disks onto faster RAID 10 Fibre Channel SAN disk
    • RAID 5 offers data protection and most of our disks use this.
    • RAID 10 maintains this protection while adding “striping”, which can improve performance (although this is a complicated subject).
    • Some of our peripheral disks are on SATA storage which just doesn’t have adequate I/O throughput.
  • Moving targeted disks to SSD storage
    • SSD compared to traditional “spinning media” hard disks is like iPods vs. Vinyl.  Anyone who thinks SSD isn’t the future still shops for old VHS tapes at the thrift store.  Although prices have fallen, it’s still relatively expensive so this will probably be targeted at critical areas.
  • SQL Server Enterprise Edition Features
    • Partitioning will allow us to cut large tables into separate physical “chunks”, which makes it far more efficient to access the data along typical business lines.
    • Compression allows us to squeeze more data onto disk “pages”, trading CPU time for I/O time which, with those spinning vinyl disks, is usually an excellent trade-off.
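The CPU-for-I/O trade behind compression can be illustrated with a general-purpose compressor. This is a rough sketch in Python; SQL Server’s row/page compression uses its own, lighter-weight scheme, and the sample row data is invented:

```python
import zlib

# A "page" of repetitive row data -- the kind of redundancy compression
# exploits. The rows here are made up purely for illustration.
page = b"2012-07-20|ACCT0001|BUY|100.00;" * 256

compressed = zlib.compress(page)

# Fewer bytes on disk means fewer pages to read: we spend CPU cycles
# compressing and decompressing in exchange for fewer I/O operations.
ratio = len(compressed) / len(page)
print(f"uncompressed: {len(page)} bytes")
print(f"compressed:   {len(compressed)} bytes ({ratio:.1%} of original)")
```

On repetitive data like this the savings are dramatic, which is why the trade is usually a winner on spinning disks where I/O, not CPU, is the bottleneck.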

Architectural Changes

  • Data Warehousing
    • If we can be said to be doing warehousing at all, it’s incomplete.
    • Our only true warehouse table stores data at the Account, Bucket (aka investment) and Date level.
    • This is not “granular” enough to do things like unrealized/realized gain reports and, since most of our performance reports are portfolio-based, is not aggregated enough to completely satisfy the report needs.
    • Although we’ve made a lot of progress, we’re still doing too much I/O at runtime.
    • Still very much in the design and thinking phase here, but hoping to make some changes in 2013.
  • File Storage redesign
    • Currently, all of our report and proposal files are stored in a large SQL Server database.  This is very difficult to maintain and causes I/O problems for our other database workloads.
    • We are looking to move all of our file storage off to a file-system based solution by end of Q3.
  • Persisted Positions storage and usage
    • Many of our systems calculate security-based positions ‘on the fly’ by aggregating the transactional changes on that security since its inception.  That is a lot of I/O and a lot of unnecessary work.  We are working on a system for storing and maintaining calculated positions that should improve performance and consistency within the system.  This is targeted for release at the end of Q3.
  • Segmented database servers
    • True scalability comes from being able to add hardware and servers as volume increases.
    • We’d like to be able to segment larger advisors into their own servers, or several medium-sized advisors onto one server, improving overall system performance.
    • This gives us effectively unlimited growth and the ability to take on an advisor of any size without worry.
    • It may also be an advantage for closing on potential clients with more strict security needs who want their data housed independently.
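The persisted-positions idea above comes down to paying a small write at posting time instead of a large rescan at read time. A toy sketch in Python (the sample transactions and names are hypothetical):

```python
from collections import defaultdict

# Hypothetical transaction stream: (security, share delta).
transactions = [
    ("AAPL", 100), ("AAPL", -25), ("MSFT", 50), ("AAPL", 10), ("MSFT", -20),
]

def position_on_the_fly(txns, security):
    """Today's approach: re-aggregate every transaction since inception.
    At scale, that is a lot of I/O repeated on every read."""
    return sum(delta for sec, delta in txns if sec == security)

# Persisted approach: one small incremental update per posted transaction,
# so reads become a single lookup instead of a full rescan.
persisted = defaultdict(int)
for sec, delta in transactions:
    persisted[sec] += delta

# Both must always agree -- consistency is the other half of the win.
assert persisted["AAPL"] == position_on_the_fly(transactions, "AAPL")
print(dict(persisted))  # {'AAPL': 85, 'MSFT': 30}
```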

We hope these and many other exciting features we’re adding will allow us to grow through 2013 and beyond!

Top Check-In Comments As Of July 2012

16 Jul

My favorite svn check-in comments from the last few months:

I like it when Zach refers to himself in the third person:
Author: zgirod, This is fixing a bug where zach forgot another where clause.. this one is causing performance issues

These were all within 10 minutes of each other:
Author: apodlyesnyy
one more fix
second fix
third fix

Some random favorites:
Author: kjones, KJ – HappyScript to keep your JavaScript happy.
Author: JBosse, JDB – *sad panda*
Author: JBosse, JDB – Fixed my POS code.
Author: JBosse, JDB – That’s all for today folks!
Author: kjones, KJ – Console Log removed [sigh].
Author: bgreen, Unused styles… I Haz them.
Author: JBosse, JDB – Silly radix, ints are for kids!
Author: bgreen, Geoff owes me a pint
Author: bgreen, Fix a couple tests… Geoff owes me another pint.
Author: bgreen, Fix QUnit Tests… Jimmy owes me a pint.
Author: dfiala, API PROD Pack: Just to give Guru something to review
Author: dfiala, API PROD Package- (*&^@
Author: bgreen, I suck…
Author: JBosse, JDB – Slap-happy.

Abhi having a bad day:
Author: asharma, Soft deletion sucks big time!!!!!!!!!!!!!!

And the number one SVN checkin of all time:
Author: JBosse:

                           d8888888888888b                        _,ad8ba,_
                          d888888888888888)                     ,d888888888b,
                          I8888888888888888 _________          ,8888888888888b
                __________`Y88888888888888P"""""""""""baaa,__ ,888888888888888,
            ,adP"""""""""""9888888888P""^                 ^""Y8888888888888888I
         ,a8"^           ,d888P"888P^                           ^"Y8888888888P'
       ,a8^            ,d8888'                                     ^Y8888888P'
      a88'           ,d8888P'                                        I88P"^
    ,d88'           d88888P'                                          "b,
   ,d88'           d888888'                                            `b,
  ,d88'           d888888I                                              `b,
  d88I           ,8888888'            ___                                `b,
,888'           d8888888          ,d88888b,              ____            `b,
d888           ,8888888I         d88888888b,           ,d8888b,           `b
,8888           I8888888I        d8888888888I          ,88888888b           8,
I8888           88888888b       d88888888888'          8888888888b          8I
d8886           888888888       Y888888888P'           Y8888888888,        ,8b
88888b          I88888888b      `Y8888888^             `Y888888888I        d88,
Y88888b         `888888888b,      `""""^                `Y8888888P'       d888I
`888888b         88888888888b,                           `Y8888P^        d88888
Y888888b       ,8888888888888ba,_          _______        `""^        ,d888888
I8888888b,    ,888888888888888888ba,_     d88888888b               ,ad8888888I
`888888888b,  I8888888888888888888888b,    ^"Y888P"^      ____.,ad88888888888I
  88888888888b,`888888888888888888888888b,     ""      ad888888888888888888888'
  88888888888888888888888888888888888888888b,`"""^ d8888888888888888888888888I
  I888888888888888888888888888888888888888888888P^  ^Y8888888888888888888888'
  `Y88888888888888888P88888888888888888888888888'     ^88888888888888888888I
   `Y8888888888888888 `8888888888888888888888888       8888888888888888888P'
    `Y888888888888888  `888888888888888888888888,     ,888888888888888888P'
     `Y88888888888888b  `88888888888888888888888I     I888888888888888888'
       "Y8888888888888b  `8888888888888888888888I     I88888888888888888'
         "Y88888888888P   `888888888888888888888b     d8888888888888888'
            ^""""""""^     `Y88888888888888888888,    888888888888888P'
                             "8888888888888888888b,   Y888888888888P^
                              `Y888888888888888888b   `Y8888888P"^
                                "Y8888888888888888P     `""""^

Rapid Development at Fortigent

2 Jul

by Andriy Volkov

The key to Rapid Development at Fortigent is our streamlined Software Factory. By “software factory” I mean the whole set of mechanisms, tools and processes involved in taking ideas from inception to production. A streamlined software factory is one that creates no obstacles, first and foremost for developers.

The point is to make the software easy to improve. If this most fundamental of all qualities is present, all the other -ilities (usability, scalability, performance, resource utilization, functional richness) will catch up. Optimizing our Software Factory for iterative, incremental improvements has allowed Fortigent to reduce the cost of mistakes and minimize the risk associated with innovation.

Out of the endless variety of Best Practice memes floating on the Web, here are some of the main points we came to appreciate:

Continuous Integration

At the heart of Fortigent’s Software Factory is our continuous integration (CI) loop. Its purpose is to create the shortest possible feedback cycle for development to feed on. The traditional selling point of CI is its ability to catch conflicting changes early, but the real benefits of CI are far more fundamental than that.

At Fortigent, CI is not just a common development environment, it is a set of automated processes designed to break easily. Continuously rebuilding the entire software stack (including the database!) from source code creates constant pressure to keep our software in the known, functional state. This results in software that is always ready for release, allowing our team to react quickly to any new requirement or change in priorities. This is the very definition of Agile.

Build Automation

Our TeamCity build automation server hosts more than 80 build configurations. About a third of those run on every code change. There are builds doing .NET compilation and unit testing, linting JavaScript, running QUnit and WatiN tests, creating MSI installers or NuGet packages, and deploying applications to staging environments.

Ad-Hoc Environment Allocation

We have 5 identical environments closely mimicking our production setup, named ENV1 through ENV5. Why are they identical?  So we can install the exact same binary package to any of them! This saves a huge amount of energy otherwise spent managing all those installations and config files.

Different versions of the same application may be automatically deployed to multiple ENVs. For example, an actively developed application may have its trunk build pushed to ENV3 while its more stable production maintenance branch will be deployed to ENV4 and 5, to facilitate integration testing of other projects.

Database Change Management

Since our databases are always in flux, we had to devise a process to keep all those changes under control without restricting the developers’ freedom to write arbitrary SQL scripts to solve their data and schema migration problems. With this in mind, you can appreciate our database change management process, which revolves around an idea I call “delta script queues”.

A delta script queue is really just a directory in our Subversion repository, but here’s what makes it a queue: every time a new database change is required, it’s coded up in SQL and the script is added at the end of the queue. Before the change appears in production, it will be deployed to several development databases, and because those get reset on a weekly basis, all the SQL scripts run again and again, always in the same sequence. To avoid surprises, a script already in the queue is considered immutable, but its effects can always be offset by adding a subsequent script. When the changeset is finally ready to go, the queue is flushed by executing it against production, and a new queue is started for the next changeset.
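The queue mechanics can be sketched in a few lines of Python. The filenames and SQL below are invented for illustration; the real queue is just a Subversion directory whose filename order defines the replay sequence:

```python
# A delta script queue: an ordered set of SQL files. Scripts already in
# the queue are immutable; to undo one, append a compensating script.
queue = {
    "001_add_region_column.sql": "ALTER TABLE Accounts ADD Region varchar(10);",
    "002_backfill_region.sql": "UPDATE Accounts SET Region = 'US';",
    # Oops, 10 chars was too narrow -- offset it with a NEW script; never edit 001.
    "003_widen_region_column.sql": "ALTER TABLE Accounts ALTER COLUMN Region varchar(50);",
}

def flush(queue, execute):
    """Replay every script in queue order, exactly as the weekly dev reset does."""
    for name in sorted(queue):  # filename prefix defines the order
        execute(name, queue[name])

applied = []
flush(queue, lambda name, sql: applied.append(name))
assert applied == sorted(queue)  # same sequence, every time
print(applied)
```

Because the same flush runs weekly against the dev databases, any script that breaks in sequence is caught long before the queue is flushed against production.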

One of our most important CI processes is “BoatSync/DbBuildBase”. The BoatSync process scripts the production database schema daily and checks any changes into source control. DbBuildBase then rebuilds all 5 of our production databases from these scripts and applies any dev-in-progress scripts from the delta script folders. The resulting database files are published to a network share. Test builds that rely on a database start by downloading the database and reattaching it to a local SQL Server instance. This allows every test to run against a fresh copy of the database, with all production and development changes synced up!

Test Driven Development

Nested within the CI loop is the Red-Green-Refactor cycle of TDD running on each individual developer’s workstation. We strive to keep both loops as short as possible, and each individual iteration as small as possible. Minimizing the number of “balls in the air” (i.e., broken code) at any given time helps our developers stay in control, avoiding the mental “stack overflows”, the panic, and the cowboy-coding episodes that follow.

Our software project mechanics are designed to encourage the “test-first” coding style. Whenever possible, the unit tests run against a local database, reconstructed from production schema, with recent changes applied. Every application being developed is configured to run locally, without having to push a build to a common “development environment”.
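A minimal red-green example, in Python for brevity (our stack is .NET, and the function here is invented): the test is written first and fails, then just enough code is added to turn it green.

```python
# Red: this test fails until parse_share_quantity exists and handles the case.
def test_leading_zero_is_not_octal():
    assert parse_share_quantity("010") == 10
    assert parse_share_quantity("-25") == -25

# Green: the minimal implementation that makes the test pass.
def parse_share_quantity(text):
    """Parse a share count as base-10, so '010' means ten, never octal eight."""
    return int(text, 10)

# Run the test; a refactor pass would follow, re-running it after each change.
test_leading_zero_is_not_octal()
print("green")
```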

Challenges and Next Steps

It took us years to get from where we were to where we are, but we are still far from where we’re heading.

Here are some of the challenges we’re thinking about now:

  • Our legacy projects still rely on project references for dependency management. We need to finish the process of NuGet-ization, switch everything to binary references, and stop committing binaries to source control.
  • Our Subversion repository is huge. Perhaps we should migrate to a “repository-per-project” model? Should we adopt Git or Mercurial for new projects?
  • Our RDBMS-centric architecture is reaching its limits. We are thinking along CQRS lines, with primed caches backing up the reads, and message-driven worker services handling the cache misses and the writes. This should make our logic less query-heavy, which would eliminate the need for ORM and reduce the number of unit-tests requiring a live database connection.
  • Our backlog-management and work-initiation processes are still pretty immature, despite our partial success with Kanban.
  • We need a lot more automated regression and integration testing done at the UI level.
  • In general, we need to increase our test coverage and tighten up our TDD. While a few of us have experimented with behavior-driven development, that whole area remains largely unexplored.

The key to our success so far is a philosophy of continuous improvement. Our development process wasn’t handed down from a mountain top on stone tablets. Instead, it evolved over time as we saw opportunities for improvement. Our team is a true team of peers. Architecture and process aren’t defined by a single team lead. Any team member can contribute to improving the process or suggest an architectural change. Proposed changes often lead to water cooler discussions about the best approach but a new process or technology that is accepted by the team will be adopted immediately and will often be in our production code within days.

One thing that unites our development team is a love for the craft of software development. Although at times we’ll have intense debates about the path forward, we have a shared goal of developing quality code that we can maintain for years to come.

Extreme flexibility to adopt new technology and processes allows us to ride a wave of continuous improvement. While that can be challenging at times, and we’re all learning new things every day, it’s allowed us to create value by using the best that modern software engineering has to offer.

Andriy Volkov is a Senior Developer at Fortigent and has been around since CGA snow. He’s got a degree in computers. He is a Continuous Improvement dude. Besides beating the CI drum, he likes sencha, LPs, backpacking, and meditation. You can follow him on Twitter @zvolkov.