Rockets before Rovers: The Agile Moon Landing Project

28 Jan

Brett W. Green's On the Contrary

When a pack of software engineers met in Utah in early 2001 to discuss a new way of building software, the summit did not produce a brand new SDLC, but rather a simple set of guidelines for software teams to follow in order to achieve better results with less friction. This ‘Agile Manifesto’ was as much a guiding force for future software development as it was an indictment of the processes that continue to plague the industry. As simple as the guidelines are, they can be profound when applied properly:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools

Working software over comprehensive documentation

Customer collaboration over contract negotiation

Responding to change over following a plan

That is, while there is value in the items on the right…


Coding Kata: Practice Time for Programmers

4 Sep

Before I came to Fortigent, I spent two decades as a musician and educator.  Training for a career in music often starts in elementary school.  I joined the school band, learned to read music and played an instrument.  I continued on through middle and high school.  I learned to practice, to perform in front of an audience, and to listen to music.  I continued through college, learning music history, music theory, and how to teach music.  I started reading music educators’ magazines and blogs.  In my senior year, I spent months student teaching in a classroom with a master teacher, practicing everything I’d learned.  By the time I got to my own classroom, I didn’t feel completely prepared, but I’d had real, hands-on experience in a professional setting.


So how did I get from teaching music to being a software engineer at Fortigent?  I started programming text-based adventures in BASIC on my Commodore 64 as soon as I was tall enough to reach the programming manuals on my father’s bookshelf.  For years, I programmed little games or small applications, but didn’t actually decide to get a formal technical education until I was on maternity leave from teaching and had some time.

Going from music to programming was a bit of a shock.  I took classes, talked to my professors, did programming projects, and read blogs about software engineering.  This college experience was really different from the first, however.  The two things that I really missed in the computer science degree were practice and that final student teaching-like internship.  I found a part-time, volunteer position with a videogame company as I entered my final stretch of college classes.  I learned a lot about teamwork, source control, and how to work and navigate in a large business application.  The other programmers were helpful, but I was still lacking the one-on-one experience that I’d gotten as a student teacher.  I also had not learned to practice.  At the time, I thought of the programming projects as practice, but those projects were nothing like the practice I had done as a musician.  There was no repetitive improvement or long-term reflection on a set of exercises.

Soon after I got to Fortigent, someone gave me Bob Martin’s (usually known as Uncle Bob) book The Clean Coder.  As I read through the book, the chapter on practicing really hit home.  Uncle Bob wrote about how many software engineers never really learn to practice.  How can someone be a good programmer if they don’t know how to practice and don’t have a master teacher working closely with them?  “Perhaps the best way to acquire ‘Design Sense’ is to find someone who has it, put your fingers on top of theirs, put your eyeballs right behind theirs, and follow along as they design something.”  While I’ve been doing a little bit of this at work (the programmers here at Fortigent don’t seem at all shy about me standing over their shoulder asking questions while they program), it is not the same as programming what they are programming.

About a decade ago, programmers started talking about practicing.  Dave Thomas mentions sitting at his son’s karate lesson watching the children perform kata.  The students followed the movements of a master teacher and would perform these same movements at home as practice.  While watching the students learn their kata, Dave Thomas had been experimenting with a small coding exercise, doing it several different ways, trying to make it more efficient.  It hit him that a small exercise like the one he was working on was very similar to the practice movements the students were performing.  Just as in Japanese martial arts kata, the purpose of a coding kata is to practice.  A coding kata is a small set of exercises that encourages trying out different combinations of coding techniques in a practice environment.

To me this feels like when I would go through a piece of music with my college trumpet teacher – he would play a line of music and I would play it back to him trying to get the same sound, inflections, and tone as he did.  As a software engineer, I have been going through the kata of the software engineering community, trying to get a feel for good programming.  Sometimes I will try an exercise on my own several times, seeing if I can improve upon my code each time.  Other times I will find a video of someone else doing the kata, so I can learn from how they program.

At The Rockville .NET Group (RockNug) meeting on September 11, 2013, I will be sharing part of one kata I’ve found particularly useful, Roy Osherove’s String Calculator Kata.  I found it to be a great introduction to Test Driven Development.  I’ve watched several people’s online videos of the kata being done in several languages, using several unit testing frameworks.  This kata imparted to me the idea of working incrementally and solving problems as simply as possible.  As I worked through it, I read comments left by other readers, which gave me a sense of what people were thinking while they did the kata.
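
For readers who haven’t tried it, the early increments of the String Calculator Kata fit in a few lines.  Here is a minimal sketch in Python (the kata itself is language-agnostic; at the meeting I’ll be working in .NET) covering the first steps: an empty string returns 0, comma- and newline-separated numbers are summed, and an optional “//” header line defines a custom delimiter.

```python
import re

def add(numbers: str) -> int:
    """String Calculator kata: the first few increments."""
    if not numbers:
        return 0                       # step 1: empty string returns 0
    delimiters = [",", "\n"]           # steps 2-3: commas and newlines separate numbers
    if numbers.startswith("//"):       # step 4: optional custom-delimiter header
        header, numbers = numbers.split("\n", 1)
        delimiters.append(header[2:])
    pattern = "|".join(map(re.escape, delimiters))
    return sum(int(n) for n in re.split(pattern, numbers))
```

The point of the kata is not this finished function but the rhythm used to get there: write one failing test, make it pass as simply as possible, refactor, and repeat for the next increment.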


I’ve been developing a sense of how to write tests before code, I’ve been learning to work incrementally, I’ve been learning to refactor my code, and I’ve been striving to think of simpler solutions to problems I encounter.  I’ve also been learning technology basics such as how to use a testing framework, keyboard shortcuts, and all the wonders of Resharper.  I have found several good resources and kata about refactoring legacy code, and this is where I plan to focus next.  I believe this will provide a good starting point for testing and refactoring legacy code that I will be working with shortly.

Crash (UA)Test Dummy: Part II

1 Aug

Part II: We’re Beyond the Looking (Windshield) Glass

The opening salvo of what was a watershed year for Fortigent was the launch of our online proposal system – followed by other incredible features released in 2012. Fortigent’s planning suite – the Monte Carlo tool and the efficient frontier/portfolio analysis toolkit (AA Presentation) – had always been Excel based, and this remained so even as the rest of the system went web based. The suite was also our method for maintaining Fortigent’s capital market assumptions and models, as well as our recommended portfolio risk/return characteristics for advisors to use in proposals.

Since they were Excel based, not only did the user experience leave much to be desired, but the maintenance from our side was extremely arduous. The game plan was to simply port over the existing functionality, thereby getting advisors off the Excel sheets quickly and allowing our internal resources a more efficient manner of maintaining this information. Once that took place, we would look to add more features.

Fast forward to the spring of 2013: my fellow “UAT dummy” Wade Fowler was entering Fortigent’s capital market assumptions into the newly created model management screen. At this point our development team had taken the framework that existed in Excel and created the tools on the web, with some input from Wade and my team. We were satisfied with how the Monte Carlo and AA Presentation tools were working online and were focused on getting model management up and running. Market assumptions are step one of model management, and it was here that we went through the windshield.

Fortigent has always maintained a three-tiered hierarchy for our allocation trees – super class, asset class, asset style – and each level has a one-to-one relationship with the next. We needed to be able to add model assumptions at all three levels, but were only able to add them at the lowest, the style level. Getting this accomplished meant revisiting how our hierarchy system worked, not an easy task.
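
Purely as an illustration (the names, levels, and numbers below are hypothetical, not Fortigent’s actual code or assumptions), the three-tiered tree might be represented so that an assumption can attach at any level rather than only at the style leaves:

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Node:
    """One node in a hypothetical three-tiered allocation tree."""
    name: str
    level: str                          # "super_class", "asset_class", or "style"
    assumption: Optional[float] = None  # e.g. an expected-return assumption
    children: List["Node"] = field(default_factory=list)

# Assumptions can now live at any of the three levels, not just the leaves.
equity = Node("Equity", "super_class", assumption=0.07, children=[
    Node("US Equity", "asset_class", children=[
        Node("Large Cap Growth", "style", assumption=0.065),
        Node("Small Cap Value", "style", assumption=0.075),
    ]),
])
```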

In addition to driving the Monte Carlo and AA Presentation tools, market assumptions also drive the models that advisors can create, which in turn will drive trading/rebalancing functionality in the near future. Getting this right was extremely important for that reason; it was the foundation for our major initiative to help advisors more efficiently trade and rebalance portfolios. Zach Girod and Jen Alpert, the product owners, formed a task force comprising Anuj Gupta, Wade, and me to vet the options we had and to come up with a game plan.

We decided it was time to get our advisors involved, and scheduled calls with half a dozen of our clients, with representation from both banks and RIAs. We wanted a detailed understanding of how they go about creating models, how they use those models to trade and rebalance, and what Fortigent could be doing to assist them in this process. The intel we received was very helpful, and we were able to validate our assumption that advisors need two types of models – optimized and implementable – and that the models will need to be multi-tiered, or hybrid.

Once the calls were finished, our development team coded our recommended changes. Wade and I then strapped in for what we hoped would be a round of testing that didn’t involve crashing through the windshield.

We entered the Fortigent market assumptions, created firm- and client-specific models, incorporated those models into a Monte Carlo analysis and AA Presentation, and included them in a proposal. The framework for entering a hybrid model is inherently complex, so we had a good amount of back and forth in our weekly demos to get that interface right. One big hurdle to overcome was the impact to the class and super class levels after adding a model weight to the style level, and how different levels could be selected in a model.
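
To make that rollup hurdle concrete, here is a hypothetical sketch (the category names and weights are invented for the example, and this is one possible approach, not Fortigent’s implementation) of how style-level model weights could be summed up into their parent class and super-class levels:

```python
# Each style weight is keyed by its full (super class, asset class, style) path.
style_weights = {
    ("Equity", "US Equity", "Large Cap Growth"): 0.30,
    ("Equity", "US Equity", "Small Cap Value"): 0.20,
    ("Fixed Income", "US Bonds", "Core Bond"): 0.50,
}

def roll_up(weights):
    """Sum style-level weights into class-level and super-class-level totals."""
    class_w, super_w = {}, {}
    for (super_cls, asset_cls, _style), w in weights.items():
        class_w[(super_cls, asset_cls)] = class_w.get((super_cls, asset_cls), 0.0) + w
        super_w[super_cls] = super_w.get(super_cls, 0.0) + w
    return class_w, super_w
```

Changing one style weight then automatically ripples up both parent levels, which is the behavior we needed the interface to reflect.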

Wade and I were able to provide feedback each week, and then see how that feedback looked in the application in a real-world setting. Without iterating through, I don’t believe we would have nailed the workflow to get these toolkits online. We’ve now moved past internal UAT, and are piloting this with the advisors we conducted the exploratory calls with.

I don’t believe we would be sitting here today with as much progress as we have achieved without the manner in which we gather requirements and test. We knew going in that the Monte Carlo and AA Presentation could be developed outside of model management. We broke this project into its core components, and this allowed us to get a working version of the Monte Carlo and AA Presentation mostly finished before crashing through the windshield with market assumptions. Once we determined the solution for market assumptions, we were able to get that nailed down with a few iterations, allowing us to incorporate that solution back into the Monte Carlo and AA Presentation in short order.

I’ll be sure to post a recap of what happens in the coming weeks as we move to launch model management and our planning suite to our entire client base.

Crash (UA)Test Dummy

31 Jul

Part I:    Crash Course

I’m very fond of the phrase “laws are like sausages — it is better not to see them being made”, as even someone without the experience of being on the working end of a meat grinder can relate to not wanting to know the details of certain things. In the case of technology applications, everyone loves the new functionality being introduced, but the work to bring the application to market is purposefully and happily ignored by the users. It is bittersweet that I say I used to be in that camp, ignorance was indeed bliss. Today, I’m one of our development team’s user acceptance testers; I’m not just watching the sausage get made, I’m helping create the recipe and taste the finished product.  Let me set the stage a bit.

Over the past few years, I have steadily become more involved with our software development process, specifically in the requirements and testing phases. Here at Fortigent we run what’s called an agile process for software development. Agile in the sense that a project is broken into smaller pieces, “sprints” as they are known, and the requirements gathering and testing are continuously performed. Agile arose as a critique of the mainstream approach to software development known as the “waterfall” method. In that process, the requirements for the entire project are collected upfront, and the business testing is performed after all the parts have been assembled. The waterfall method is premised on getting the requirements complete and correct at the beginning, which is exactly the premise agile critiques.

My role is to provide the voice, or persona, of our advisory clients and their end clients in the development phase for each sprint. After development finishes their work, I make sure what we thought was needed works in a way that improves our clients’ experience while delivering the intended solution. I’ve learned phrases like “iterate,” and the term “scope creep” has been leveled at me more times than I can count. For the sake of full disclosure, I do have some experience with technology prior to this: I was a computer science major for all of one semester during my freshman year at college, so I had a high-level knowledge of some aspects.

Because we work on the same big project and go through multiple iterations of requirements gathering and testing for each feature, I feel like I’m getting behind the wheel of the same car to test out the brakes in one sprint, then the driver’s side airbags in another, and so forth. You know with each sprint that the brakes won’t stop the car completely and will need some tweaks, or that the airbag deploys but not at the optimal time. What you want to avoid are the instances where you crash through the windshield. Part II will walk through what one of those crashes looked like, and how our agile process allowed us to quickly move on and keep me in the driver’s seat for the next crash test.

Fortigent Technology Infographic

25 Jul

Fortigent Technology Agile Transformation Intro

If a picture is worth a thousand words…why create a PowerPoint presentation with a bunch of bullet points?

We decided to create an infographic rather than a PowerPoint presentation after looking at a few examples online. An infographic is a visual representation of data, information, or knowledge.  Our goal was to create a presentation giving an overview of the Agile Transformation of Fortigent’s Technology team.  We believe that a visual display of information with cool imagery often reaches people where words alone fail. Infographics are simply interesting – they attract a lot of attention and are more fun to create than a PowerPoint with a bunch of bullet points!

Today, we’re drowning in data! Infographics provide a quick way to communicate data in an easy-to-understand format.  We believe that infographics are easy to digest, simple to understand, and aesthetically pleasing.  We are planning to share our technology infographic through our website, LinkedIn, and this blog.  We also use Prezi to present our infographics.  We have printed a couple of copies that are circulating around our office, and management has expressed interest in showcasing the infographic at conferences.

Our motto is to keep everything we develop as simple as possible, and the same principle applied to this infographic.  We filtered through our large amounts of data, gathered the main points, and organized them so the infographic didn’t boggle our audience.  The finished infographic, we believe, is easily read and understood. Needless to say, we spiced up a relatively boring topic (to non-technical people) by using appealing images to engage users’ attention.

The opinions voiced in this material are for general information only and are not intended to provide or be construed as providing specific investment advice or recommendations for your clients. Securities and Advisory services offered through LPL Financial, a Registered Investment Advisor. Member FINRA/SIPC.

Advisor Use Only. Not for Client Distribution.