Archive | Software Development

Crash (UA)Test Dummy: Part II

1 Aug

Part II: We’re Beyond the Looking (Windshield) Glass

The opening salvo in what was a watershed year for Fortigent was the launch of our online proposal system – followed by other incredible features released in 2012. Fortigent’s planning suite – the Monte Carlo and the efficient frontier/portfolio analysis toolkit (AA Presentation) – had always been Excel-based, and it remained so even as the rest of the system went web-based. Those spreadsheets were also our method for maintaining Fortigent’s capital market assumptions and models, as well as our recommended portfolio risk/return characteristics for advisors to use in proposals.

Since they were Excel-based, not only did the user experience leave much to be desired, but the maintenance on our side was extremely arduous. The game plan was to simply port over the existing functionality, thereby getting advisors off the Excel sheets quickly and giving our internal resources a more efficient way of maintaining this information. Once that took place, we would look to add more features.

Fast forward to the spring of 2013: my fellow “UAT dummy” Wade Fowler was entering Fortigent’s capital market assumptions into the newly created model management screen. At this point our development team had taken the framework that existed in Excel and recreated the tools on the web, with some input from Wade and my team. We were satisfied with how the Monte Carlo and AA Presentation tools were working online and were focused on getting model management up and running. Market assumptions are step one of model management, and it was here that we went through the windshield.

Fortigent has always maintained a three-tiered hierarchy for our allocation trees – super class, asset class, asset style – and each level has a one-to-one relationship with the next. We needed to be able to add model assumptions at all three levels, but were only able to add them at the lowest, the style level. Getting this accomplished meant revisiting how our hierarchy system worked – not an easy task.
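To make the hierarchy concrete, here is a minimal sketch (in Python, with invented names – not Fortigent’s actual data model) of an allocation tree where a model weight can sit at any of the three levels and roll up to the levels above:

```python
# A minimal sketch of a three-tiered allocation tree. Names and structure
# are invented for illustration; this is not Fortigent's actual data model.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    name: str
    level: str                       # "super class", "asset class", or "asset style"
    weight: Optional[float] = None   # model weight, if entered at this level
    children: List["Node"] = field(default_factory=list)

def effective_weight(node: Node) -> float:
    """A node's weight is either entered directly or rolled up from its children."""
    if node.weight is not None:
        return node.weight
    return sum(effective_weight(child) for child in node.children)

# Weights entered at the style level roll up through the class and super class.
equity = Node("Equity", "super class", children=[
    Node("US Equity", "asset class", children=[
        Node("Large Cap", "asset style", weight=30.0),
        Node("Small Cap", "asset style", weight=10.0),
    ]),
])
```

Here `effective_weight(equity)` rolls the two style-level weights up to 40.0; the limitation we hit was that weights could only be entered at the leaf (style) level, never pinned at the class or super class level directly.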

In addition to driving the Monte Carlo and AA Presentation tools, market assumptions also drive the models that advisors can create, which in turn will drive trading/rebalancing functionality in the near future. Getting this right was extremely important for that reason; it was the foundation for our major initiative to help advisors trade and rebalance portfolios more efficiently. Zach Girod and Jen Alpert, the product owners, formed a task force comprising Anuj Gupta, Wade, and me to vet the options we had and to come up with a game plan.

We decided it was time to get our advisors involved, and scheduled calls with half a dozen of our clients, with representation from both banks and RIAs. We wanted a detailed understanding of how they go about creating models, how they use those models to trade and rebalance, and what Fortigent could be doing to assist them in this process. The intel we received was very helpful, and we were able to validate our assumption that advisors need two types of models – optimized and implementable – and that the models will need to be multi-tiered, or hybrid.

Once the calls were finished, our development team coded our recommended changes. Wade and I then strapped in for what we hoped would be a round of testing that didn’t involve crashing through the windshield.

We entered the Fortigent market assumptions, created firm- and client-specific models, incorporated those models into a Monte Carlo analysis and AA Presentation, and included them in a proposal. The framework for entering a hybrid model is inherently complex, so we had a good amount of back and forth in our weekly demos to get that interface right. One big hurdle was working out how adding a model weight at the style level should affect the class and super class levels, and how different levels could be selected in a model.
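One way to picture that hurdle: once a weight is pinned at the style level, its parent class and super class cannot carry independent weights for the same branch without double counting. A toy validation sketch (the rule and names are my illustration, not Fortigent’s implementation):

```python
# Toy check for a "hybrid" model: weights may be assigned at any level of the
# super class / asset class / asset style tree, but no node may carry a weight
# if one of its ancestors already does (that would double-count the branch).
def find_double_counted(tree: dict, ancestor_weighted: bool = False) -> list:
    """tree: {"name": str, "weight": float or None, "children": [subtrees]}"""
    conflicts = []
    has_weight = tree.get("weight") is not None
    if has_weight and ancestor_weighted:
        conflicts.append(tree["name"])
    for child in tree.get("children", []):
        conflicts += find_double_counted(child, ancestor_weighted or has_weight)
    return conflicts

model = {
    "name": "Fixed Income", "weight": 40.0, "children": [
        {"name": "US Bonds", "weight": None, "children": [
            {"name": "Core Bond", "weight": 25.0, "children": []},  # conflict!
        ]},
    ],
}
```

Here `find_double_counted(model)` flags "Core Bond", because its super class already carries a weight for the whole branch.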

Wade and I were able to provide feedback each week, and then see how that feedback looked in the application in a real-world setting. Without iterating through, I don’t believe we would have nailed the workflow to get these toolkits online. We’ve now moved past internal UAT, and are piloting this with the advisors we conducted the exploratory calls with.

I don’t believe we would have made as much progress as we have without the manner in which we gather requirements and test. We knew going in that the Monte Carlo and AA Presentation could be developed outside of model management. We broke the project into its core components, which allowed us to get a working version of the Monte Carlo and AA Presentation mostly finished before crashing through the windshield with market assumptions. Once we determined the solution for market assumptions, we nailed it down within a few iterations, allowing us to incorporate that solution back into the Monte Carlo and AA Presentation in short order.

I’ll be sure to post a recap of what happens in the coming weeks as we move to launch model management and our planning suite to our entire client base.

Crash (UA)Test Dummy

31 Jul

Part I:    Crash Course

I’m very fond of the phrase “laws are like sausages — it is better not to see them being made”, as even someone without the experience of being on the working end of a meat grinder can relate to not wanting to know the details of certain things. In the case of technology applications, everyone loves the new functionality being introduced, but the work to bring the application to market is purposefully and happily ignored by the users. It is bittersweet to say I used to be in that camp; ignorance was indeed bliss. Today, I’m one of our development team’s user acceptance testers; I’m not just watching the sausage get made, I’m helping create the recipe and taste the finished product. Let me set the stage a bit.

Over the past few years, I have steadily become more involved with our software development process, specifically in the requirements and testing phases. Here at Fortigent we run what’s called an agile process for software development – agile in the sense that a project is broken into smaller pieces, “sprints” as they are known, and requirements gathering and testing are performed continuously. Agile arose as a critique of the mainstream approach to software development known as the “waterfall” method, in which the requirements for the entire project are collected upfront and the business testing is performed after all the parts have been assembled. The waterfall method is premised on getting the requirements complete and correct at the beginning – exactly the assumption agile challenges.

My role is to provide the voice, or persona, of our advisory clients and their end clients during the development phase of each sprint. After development finishes their work, I make sure what we thought was needed works in a way that improves our clients’ experience while delivering the intended solution. I’ve learned phrases like “iterate”, and the term “scope creep” has been leveled at me more times than I can count. For the sake of full disclosure, I do have some experience with technology prior to this: I was a computer science major for all of one semester during my freshman year at college, so some aspects I knew at a high level.

Because we work on the same big project and go through multiple iterations of requirements gathering and testing for each feature, I feel like I’m getting behind the wheel of the same car to test out the brakes in one sprint, then the driver’s side airbags in another, and so forth. You know with each sprint that the brakes won’t stop the car completely and will need some tweaks, or that the airbag deploys but not at the optimal time. What you want to avoid are the instances where you crash through the windshield. Part II will walk through what one of those crashes looked like, and how our agile process allowed us to quickly move on and keep me in the driver’s seat for the next crash test.

Fortigent Technology Infographic

25 Jul

Fortigent Technology Agile Transformation Intro

If a picture is worth a thousand words…why create a PowerPoint presentation with a bunch of bullet points?

We decided to create an infographic rather than a PowerPoint presentation after looking at a few examples online. An infographic is a visual representation of data, information, or knowledge. Our goal was to create a presentation giving an overview of the Agile Transformation of Fortigent’s Technology team. We believe that a visual display of information with cool imagery often reaches people where words alone fail. Infographics are simply interesting – they attract a lot of attention and are more fun to create than a PowerPoint with a bunch of bullet points!

Today, we’re drowning in data! Infographics provide a quick way to communicate data in an easy-to-understand format. We believe that infographics are easy to digest, simple to understand, and aesthetically pleasing. We are planning to share our technology infographic through our website, LinkedIn, and this blog, and we use Prezi to present it. We have printed a couple of copies that are circulating around our office, and management has expressed interest in showcasing it at conferences.

Our motto is to keep everything we develop as simple as possible, and we applied the same principle to this infographic. We filtered through our large amounts of data, gathered the main points, and organized them so the infographic didn’t boggle our audience. The finished infographic, we believe, is easily read and understood. Needless to say, we spiced up a relatively boring topic (to non-technical people) by using appealing images to engage the audience’s attention.

The opinions voiced in this material are for general information only and are not intended to provide or be construed as providing specific investment advice or recommendations for your clients. Securities and Advisory services offered through LPL Financial, a Registered Investment Advisor. Member FINRA/SIPC.

Advisor Use Only. Not for Client Distribution.

“It All Seemed So Easy”

9 Jul

“The IRS will help us.” 

“Officer, the light was yellow.”

“Honey, I’m a bit pregnant.”

What trouble 5 little words can hint at.  Everything is cruising along, and then 5 little words come back at you from an unexpected angle, like some hawk swooping in and going for the eyes.

Here are 5 to think about (6 if you despise hyphens):  “We used a ‘pre-positioning’ strategy.”

In early June, feeling rather pleased, I wrote about ‘retiring a Zombie App’.  (We did too.  It’s dead, and staying that way.)  And as part of that blog post, I included this description of part of the larger approach:

“We used a ‘pre-positioning’ strategy. If you know you will need something later and can harmlessly incorporate it into the Production platform now, do it. In this case… mappings were loaded to Production and then started flowing out with the weekend DB refreshes to the cloud test environments, preventing the need for weekly reloading….”

The italicized sentence was that way in the original, but frankly it should have read “and if you really, really, KNOW you can harmlessly incorporate it into the Production platform”.  Knowing means having certitude, and in our business certitude comes from testing, not leaps of faith.

As part of another project, we recently employed the same philosophy of pre-positioning.  We even did it in the same arena, Transaction Translation mappings.  In this case, we pre-loaded transaction transformation instructions into Production we knew we would need later.

The catch was I didn’t ‘know’ they would start interacting with other existing mappings.  The effect was that about 41,000 dividend transactions were ‘anonymized’, retaining their proper value and effect, but losing the identity of the security making the payment.  Once discovered, the issue was quickly diagnosed, but it took several days to restore the affected data to complete accuracy, several days of developer time that could have been used elsewhere.

While this incident had no adverse client-facing effects, eventually it could have.  Our checks and balances are quite extensive, but they didn’t cover an error of this nature; instead, it was caught by an attentive analyst.

This incident had two main roots.  First, I didn’t sufficiently understand the inner workings of one aspect of our transaction capture application to see that the new, broad-based Transaction Translation instructions might affect all transactions, not just the ones we were targeting in the future.

Second, and far more importantly, regardless of my or anyone else’s level of understanding of the inner workings, I should have tested for potential fallout, rather than relied on my personal conviction that there would be no adverse consequences.  Testing helps form a safety net for one’s gaps in knowledge, known or unknown.

Designing such a test can be difficult.  It’s easy to test for planned failures, but how does one test for a Rumsfeldian ‘unknown unknown’?   It’s axiomatic that one can’t do so with total certainty.  We can, however, play the odds in a way that favors catching the most common failures.  A modest amount of parallel processing (say, a week’s worth of data) would probably not test for a rare event such as a return of capital on a short position, but the mass of ordinary transactions, shunted through a test environment and compared with the same transactions in Production, would have shined a spotlight on this error long before it struck.
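A sketch of that parallel-run check, assuming transactions can be matched by an ID and compared field by field (the field names here are invented, not our actual schema):

```python
# Sketch of a parallel-run comparison: push the same batch of transactions
# through a test environment and Production, then diff the results.
def diff_parallel_runs(prod_txns, test_txns, key="txn_id"):
    """Return (txn_id, field, prod_value, test_value) for every mismatch."""
    test_by_id = {t[key]: t for t in test_txns}
    mismatches = []
    for prod in prod_txns:
        test = test_by_id.get(prod[key])
        if test is None:
            mismatches.append((prod[key], "<missing in test>", None, None))
            continue
        for fld, prod_val in prod.items():
            if test.get(fld) != prod_val:
                mismatches.append((prod[key], fld, prod_val, test.get(fld)))
    return mismatches

# A dividend whose security identity was lost ("anonymized") in the test run
# would surface immediately, even though its value is unchanged.
prod = [{"txn_id": 1, "security": "XYZ Corp", "amount": 125.00}]
test = [{"txn_id": 1, "security": None, "amount": 125.00}]
```

Running `diff_parallel_runs(prod, test)` would flag the anonymized security field while leaving the matching amount alone – exactly the class of error described above.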

As a fan of the methodical, I am also a strong believer in avoiding the same mistake twice – instead, find exciting new mistakes to make.  It’s how we learn.  (The Romans used to say ‘We progress by fault’.)  This one won’t be repeated, but others will crop up.  That’s the nature of the beast.  Testing and amelioration are essential ingredients to proper risk mitigation of even seemingly mundane functions.  I believe that ‘pre-positioning’ remains a beneficial and powerful strategy, but (to paraphrase Spidey’s Uncle Ben) ‘great power means great responsibility’.

Five little words.

~~ Joseph Konrad

Dependency Injection pattern in database development

24 Jun

To familiarize yourself with the dependency injection pattern, see the Wikipedia article on dependency injection.

The primary focus of a database is to persist data and guarantee data integrity… well, at least we can say that about many databases. As a result, database development evolved in its own procedural way, and it is hard to argue against that way or to change it, because in most cases database designs have a tendency to harden, solidify, and resist change after they get built. Besides, who wants a change when, over the years, masses of applications have been built on that-old-table? Just try to change it and wait for something to fail…
Whatever reason you have for a change, try the Dependency Injection pattern. This pattern supplies the “depended-on” code to the destination code from the outside: the destination code declares what it requires, but does not choose the implementation, so the expected behavior can be altered without rewriting it. Let’s take the ever-popular BOO and FOO placeholders for our example:

   -- inside dbo.sp_BOO_manager
   IF dbo.FOO() < dbo.FOOThreshold()
      EXEC dbo.BOO 'Have to do BOO logic';
   ELSE
      EXEC dbo.MOO 'Have to do MOO logic, instead';

Here is how to introduce dependency injection into our code:

  • Create the following two schemas: INTERFACE and PROD_IMPLEMENTATION.
  • Move the actual functions and procedures with business logic into the PROD_IMPLEMENTATION schema, like this one:

       CREATE PROCEDURE PROD_IMPLEMENTATION.BOO
          @Message VARCHAR(50)
       AS
          PRINT 'Very complicated proprietary logic';

  • Create synonyms in the INTERFACE schema that point to the functions and stored procedures in the PROD_IMPLEMENTATION schema. Note that the synonyms can have the same names as the objects they represent, because they live in a different schema from the actual business logic. For instance:

       CREATE SYNONYM INTERFACE.BOO FOR PROD_IMPLEMENTATION.BOO;

  • Then change the dbo.sp_BOO_manager stored procedure to use the synonyms from the INTERFACE schema instead of the objects themselves. Here is what you will get:

       IF INTERFACE.FOO() < INTERFACE.FOOThreshold()
          EXEC INTERFACE.BOO 'Have to do BOO logic';
       ELSE
          EXEC INTERFACE.MOO 'Have to do MOO logic, instead';

Now our sample code is not bound to the actual business logic directly; instead, it calls an abstraction through the interface (which we can override when needed), allowing the calling process to inject the desired functionality.
Let’s do it…
Make a new object somewhere in another database:

   USE TestDB;  -- TestDB stands in for whatever test database you use
   GO
   CREATE PROCEDURE dbo.BOO
      @Message VARCHAR(50)
   AS
      DECLARE @OverrideName VARCHAR(100) = 'INTERFACE.BOO';
      PRINT 'Very simple log message';
      INSERT INTO dbo.TEST_MESSAGES
         (OverrideName, [Message], EventDate)
         VALUES (@OverrideName, @Message, GETDATE());

Then (on your development server) change the BOO synonym to point to the mocked object:

   DROP SYNONYM INTERFACE.BOO;
   -- point the synonym at the mocked object (here assumed to live in TestDB)
   CREATE SYNONYM INTERFACE.BOO FOR TestDB.dbo.BOO;

After this change, you can call the business logic and verify that the expected message got logged to a test table.

--call logic
EXEC dbo.sp_BOO_manager;
--assert result (the assert helper's name below is illustrative; use whatever
--table-checking assertion procedure your test framework provides)
EXEC dbo.AssertColumnHasValue
   @v_TableName = 'TEST_MESSAGES',
   @v_ColumnName = 'OverrideName',
   @v_ValueDataType = 'VARCHAR(100)',
   @v_UserMessage = 'BOO functionality should run';

There are a few ways to introduce the dependency injection pattern in databases; we chose SYNONYMS because of the following benefits:

  • Synonyms are very simple database objects;
  • A synonym does not change the actual business logic it represents;
  • A synonym provides a less invasive way to perform the injection;
  • When creating a synonym, the developer only needs the name of the synonym (already known from the consumer side) and the name of the object it represents (also known to the developer from the provider side);
  • Locating synonyms and business logic in different schemas allows a synonym to have the same name as the object it abstracts, which preserves the naming convention already used by the development team;
  • A synonym can be created on the following objects: SQL and CLR stored procedures, extended stored procedures, replication-filter procedures, scalar and aggregate functions, table-valued and inline table-valued functions, views, and user-defined tables (including local and global temporary tables).

Well, here you go, now you have one more tool in your development toolbox. Use injection wisely, make sure you have basic infrastructure tests to go with it, and don’t forget about security.