
Crash (UA)Test Dummy: Part II

1 Aug

Part II: We’re Beyond the Looking (Windshield) Glass

The opening salvo of what was a watershed year for Fortigent was the launch of our online proposal system, followed by other incredible features released in 2012. Fortigent’s planning suite – the Monte Carlo tool and the efficient frontier/portfolio analysis toolkit (AA Presentation) – had always been Excel-based, and this remained so even as the rest of the system went web-based. The AA Presentation was also our method for maintaining Fortigent’s capital market assumptions and models, as well as our recommended portfolio risk/return characteristics for advisors to use in proposals.
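
For anyone who has never peeked inside a planning tool, here is a minimal sketch of the idea behind a Monte Carlo projection – drawing many random return paths from an assumed return/risk profile. This is a generic Python illustration, not Fortigent’s actual engine; the function name and the numbers are hypothetical.

```python
import random

def simulate_portfolio(start_value, mean_return, volatility, years, trials=1000):
    """Project ending portfolio values by drawing random annual returns
    from a normal distribution with the assumed mean and volatility --
    the essence of a Monte Carlo planning tool."""
    endings = []
    for _ in range(trials):
        value = start_value
        for _ in range(years):
            # One year's return, drawn from the capital market assumption.
            value *= 1 + random.gauss(mean_return, volatility)
        endings.append(value)
    return endings

# Hypothetical assumption: 7% expected return, 12% volatility, 20 years.
results = sorted(simulate_portfolio(1_000_000, 0.07, 0.12, years=20))
print(f"Median outcome: ${results[len(results) // 2]:,.0f}")
```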

Since they were Excel-based, not only did the user experience leave much to be desired, but the maintenance on our side was extremely arduous. The game plan was to simply port over the existing functionality, thereby getting advisors off the Excel sheets quickly and giving our internal resources a more efficient way of maintaining this information. Once that took place, we would look to add more features.

Fast forward to the spring of 2013: my fellow “UAT dummy” Wade Fowler was entering Fortigent’s capital market assumptions into the newly created model management screen. By this point our development team had taken the framework that existed in Excel and recreated the tools on the web, with input from Wade and my team. We were satisfied with how the Monte Carlo and AA Presentation tools were working online and were focused on getting model management up and running. Market assumptions are step one of model management, and it was here that we went through the windshield.

Fortigent has always maintained a three-tiered hierarchy for our allocation trees – super class, asset class, asset style – where each item at one level rolls up to exactly one item at the level above. We needed to be able to add model assumptions at all three levels, but were only able to add them at the lowest, the style level. Getting this accomplished meant revisiting how our hierarchy system worked, not an easy task.
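
To make the hierarchy concrete, here is a rough sketch of the structure in Python. The class and field names are hypothetical, not our actual code; the point is that assumptions needed a home at every level of the tree, not just the style leaves.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class AllocationNode:
    """One node in the super class -> asset class -> asset style tree;
    each node rolls up to exactly one parent."""
    name: str
    level: str                      # "super_class", "asset_class", or "asset_style"
    assumption: dict | None = None  # e.g. {"return": 0.085, "risk": 0.18}
    children: list[AllocationNode] = field(default_factory=list)

# Before the rework, assumptions could only live on the style leaves.
# Allowing them at any of the three levels meant revisiting the hierarchy:
large_growth = AllocationNode("Large Cap Growth", "asset_style",
                              assumption={"return": 0.085, "risk": 0.18})
us_equity = AllocationNode("US Equity", "asset_class",
                           assumption={"return": 0.08, "risk": 0.16})
equity = AllocationNode("Equity", "super_class",
                        assumption={"return": 0.075, "risk": 0.15})
us_equity.children.append(large_growth)
equity.children.append(us_equity)
```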

In addition to driving the Monte Carlo and AA Presentation tools, market assumptions also drive the models that advisors can create, which in turn will drive trading/rebalancing functionality in the near future. Getting this right was extremely important for that reason; it was the foundation for our major initiative to help advisors trade and rebalance portfolios more efficiently. Zach Girod and Jen Alpert, the product owners, formed a task force comprising Anuj Gupta, Wade, and me to vet the options we had and come up with a game plan.

We decided it was time to get our advisors involved, and scheduled calls with half a dozen of our clients, with representation from both banks and RIAs. We wanted a detailed understanding of how they go about creating models, how they use those models to trade and rebalance, and what Fortigent could be doing to assist them in this process. The intel we received was very helpful, and we were able to validate our assumption that advisors need two types of models – optimized and implementable – and that the models will need to be multi-tiered, or hybrid.

Once the calls were finished, our development team coded our recommended changes. Wade and I then strapped in for what we hoped would be a round of testing that didn’t involve crashing through the windshield.

We entered the Fortigent market assumptions, created firm- and client-specific models, incorporated those models into a Monte Carlo analysis and AA Presentation, and included them in a proposal. The framework for entering a hybrid model is inherently complex, so we had a good amount of back and forth in our weekly demos to get that interface right. One big hurdle to overcome was the impact to the class and super class levels after adding a model weight at the style level, and how different levels could be selected in a model.
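
The roll-up hurdle is easier to see in code. Below is a hedged sketch, again with hypothetical names rather than our actual implementation, of how a weight entered at the style level has to surface in the class and super class totals, and how a hybrid model can carry weights at different levels at once.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ModelNode:
    """Node in a hybrid model: a weight may be entered at any level."""
    name: str
    weight: float | None = None
    children: list[ModelNode] = field(default_factory=list)

def rolled_up_weight(node: ModelNode) -> float:
    """A node's effective weight is its own explicit weight (if any)
    plus everything rolled up from its descendants."""
    return (node.weight or 0.0) + sum(rolled_up_weight(c) for c in node.children)

# A hybrid model mixes levels: a style-level weight under Equity,
# plus a weight entered directly at the Fixed Income super class.
large_growth = ModelNode("Large Cap Growth", weight=0.10)
equity = ModelNode("Equity", children=[ModelNode("US Equity", children=[large_growth])])
fixed_income = ModelNode("Fixed Income", weight=0.40)
portfolio = ModelNode("Total", children=[equity, fixed_income])

# Entering 10% at the style level must be reflected in the class and
# super class totals the advisor sees on screen.
print(rolled_up_weight(equity))     # 0.1 -- rolled up from the style level
print(rolled_up_weight(portfolio))  # 0.5 -- remaining 50% still unallocated
```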

Wade and I were able to provide feedback each week, and then see how that feedback looked in the application in a real-world setting. Without iterating through, I don’t believe we would have nailed the workflow to get these toolkits online. We’ve now moved past internal UAT, and are piloting this with the advisors we conducted the exploratory calls with.

I don’t believe we would have made as much progress as we have without the manner in which we gather requirements and test. We knew going in that the Monte Carlo and AA Presentation could be developed outside of model management. We broke the project into its core components, which allowed us to get a working version of the Monte Carlo and AA Presentation mostly finished before crashing through the windshield with market assumptions. Once we determined the solution for market assumptions, we were able to nail it down within a few iterations and incorporate it back into the Monte Carlo and AA Presentation in short order.

I’ll be sure to post a recap of what happens in the coming weeks as we move to launch model management and our planning suite to our entire client base.

Crash (UA)Test Dummy

31 Jul

Part I: Crash Course

I’m very fond of the phrase “laws are like sausages – it is better not to see them being made,” as even someone without the experience of being on the working end of a meat grinder can relate to not wanting to know the details of certain things. In the case of technology applications, everyone loves the new functionality being introduced, but the work to bring the application to market is purposefully and happily ignored by the users. It is bittersweet to say I used to be in that camp; ignorance was indeed bliss. Today, I’m one of our development team’s user acceptance testers; I’m not just watching the sausage get made, I’m helping create the recipe and taste the finished product. Let me set the stage a bit.

Over the past few years, I have steadily become more involved with our software development process, specifically in the requirements and testing phases. Here at Fortigent we run what’s called an agile process for software development: a project is broken into smaller pieces, “sprints” as they are known, and requirements gathering and testing are performed continuously. Agile arose as a critique of the traditional approach to software development known as the “waterfall” method, in which the requirements for the entire project are collected upfront and the business testing is performed after all the parts have been assembled. The waterfall method is premised on getting the requirements complete and correct at the beginning – the very assumption agile challenges.

My role is to provide the voice, or persona, of our advisory clients and their end clients during the development phase of each sprint. After development finishes their work, I make sure what we thought was needed works in a way that improves our clients’ experience while delivering the intended solution. I’ve learned phrases like “iterate,” and the term “scope creep” has been leveled at me more times than I can count. For the sake of full disclosure, I did have some exposure to technology prior to this: I was a computer science major for all of one semester during my freshman year of college, so I had a high-level knowledge of some aspects.

Because we work on the same big project and go through multiple iterations of requirements gathering and testing for each feature, I feel like I’m getting behind the wheel of the same car to test out the brakes in one sprint, then the driver’s side airbags in another, and so forth. You know with each sprint that the brakes won’t stop the car completely and will need some tweaks, or that the airbag deploys but not at the optimal time. What you want to avoid are the instances where you crash through the windshield. Part II walks through what one of those crashes looked like, and how our agile process allowed us to quickly move on and keep me in the driver’s seat for the next crash test.