Archive | June, 2013

Dependency Injection pattern in database development

24 Jun

To familiarize yourself with the dependency injection
pattern, see the Wikipedia article:
http://en.wikipedia.org/wiki/Dependency_injection

The primary focus of a database is to persist data and guarantee data integrity… well, at least we can say that about many databases. As a result, database development has evolved in its own procedural way, and it is hard to argue against that way or to change it, because in most cases database designs tend to harden, solidify, and resist change after they get built. Besides, who wants a change when, over the years, masses of applications have been built on that-old-table? Just try to change it and wait for something to fail…
Whatever reason you have for a change, try the Dependency Injection pattern. This pattern automatically injects the “depended-on” code into the destination code: it knows what the destination code requires and alters the destination code’s expected outcome. Let’s take the very popular BOO and FOO templates for our example:

CREATE PROCEDURE dbo.sp_BOO_manager
AS
BEGIN
   IF dbo.FOO() < dbo.FOOThreshold()
      EXEC dbo.BOO 'Have to do BOO logic';
   ELSE
      EXEC dbo.MOO 'Have to do MOO logic, instead';
END
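
For reference, the FOO and FOOThreshold scalar functions that this procedure depends on are never shown in the post; a minimal sketch, with placeholder bodies and an assumed INT return type, might look like this:

CREATE FUNCTION dbo.FOO()
RETURNS INT
AS
BEGIN
   -- Placeholder: stands in for whatever value the real FOO computes
   RETURN 1;
END
GO

CREATE FUNCTION dbo.FOOThreshold()
RETURNS INT
AS
BEGIN
   -- Placeholder: stands in for the real configured threshold
   RETURN 10;
END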

Here is how to introduce dependency injection into our code:

  • Create the following two schemas, INTERFACE and PROD_IMPLEMENTATION:
CREATE SCHEMA INTERFACE;
CREATE SCHEMA PROD_IMPLEMENTATION;
  • Move the actual functions and procedures containing business logic into the PROD_IMPLEMENTATION schema, like this one:
CREATE PROCEDURE PROD_IMPLEMENTATION.BOO
@Message VARCHAR(50)
AS
BEGIN
    PRINT 'Very complicated proprietary logic';
END
  • Create synonyms in the INTERFACE schema that point to the functions and stored procedures in the PROD_IMPLEMENTATION schema. Note that the synonyms can have the same names as the objects they represent, because they live in a different schema than the actual business logic. For instance:
CREATE SYNONYM INTERFACE.BOO
   FOR PROD_IMPLEMENTATION.BOO;
-- MOO needs a synonym too, because dbo.sp_BOO_manager calls it
CREATE SYNONYM INTERFACE.MOO
   FOR PROD_IMPLEMENTATION.MOO;
CREATE SYNONYM INTERFACE.FOO
   FOR PROD_IMPLEMENTATION.FOO;
CREATE SYNONYM INTERFACE.FOOThreshold
   FOR PROD_IMPLEMENTATION.FOOThreshold;
  • Then change the dbo.sp_BOO_manager stored procedure to use the synonyms from the INTERFACE schema instead of the objects themselves. Here is what you will get:
ALTER PROCEDURE dbo.sp_BOO_manager
AS
BEGIN
   IF INTERFACE.FOO() < INTERFACE.FOOThreshold()
      EXEC INTERFACE.BOO 'Have to do BOO logic';
   ELSE
      EXEC INTERFACE.MOO 'Have to do MOO logic, instead';
END

Now our sample code is not bound to the actual business logic directly; instead, it calls abstracted logic through an interface (which we can override when needed), allowing the calling process to inject the desired functionality.
Let’s do it…
Make a new object somewhere in another database:

-- CREATE PROCEDURE cannot take a database-qualified name,
-- so create the mock while connected to the other database:
USE SOMEOTHERDB;
GO
CREATE PROCEDURE MOCK_OBJECTS.BOO
@Message VARCHAR(50)
AS
BEGIN
   DECLARE @OverrideName VARCHAR(100)
     = 'SOMEOTHERDB.MOCK_OBJECTS.BOO';
   PRINT 'Very simple log message';
   INSERT INTO TEST_MESSAGES
      (OverrideName, [Message], EventDate)
      VALUES (@OverrideName, @Message, GETDATE());
END
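
The mock logs to a TEST_MESSAGES table whose definition the post never shows; a minimal sketch, with assumed column names and types taken from the INSERT above, could be:

CREATE TABLE TEST_MESSAGES
(
   OverrideName VARCHAR(100) NOT NULL, -- which override wrote the row
   [Message]    VARCHAR(50)  NULL,     -- the message passed to the mock
   EventDate    DATETIME     NOT NULL  -- when the mock was invoked
);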

Then (on your development server) change the BOO synonym to point to the mocked object:

DROP SYNONYM INTERFACE.BOO;
CREATE SYNONYM INTERFACE.BOO
   FOR SOMEOTHERDB.MOCK_OBJECTS.BOO;

After this change, you can call the business logic and verify that the expected message was logged to the test table.

--call logic
EXEC dbo.sp_BOO_manager;
--assert result
EXEC DBTD_ASSERT_COLUMN_HAVE_VALUE
   @v_TableName = 'TEST_MESSAGES',
   @v_ColumnName = 'OverrideName',
   @v_Value = 'SOMEOTHERDB.MOCK_OBJECTS.BOO',
   @v_ValueDataType = 'VARCHAR(100)',
   @v_UserMessage = 'BOO functionality should run';
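
Once the assertion passes, repoint the synonym at the production implementation so the development server returns to its normal wiring:

DROP SYNONYM INTERFACE.BOO;
CREATE SYNONYM INTERFACE.BOO
   FOR PROD_IMPLEMENTATION.BOO;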

There are a few ways to introduce the dependency injection pattern in databases. We chose SYNONYMS because of the following benefits:

  • Synonyms are very simple database objects;
  • A synonym does not change the actual business logic it represents;
  • A synonym provides a less invasive way to perform the injection;
  • To create a synonym, a developer only needs the name of the synonym (already known from the consumer side) and the name of the object it represents (also known from the provider side);
  • Locating synonyms and business logic in different schemas lets a synonym keep the same name as the object it abstracts, which preserves the naming convention the development team already uses;
  • A synonym can be created on a wide range of objects: CLR stored procedures, CLR table-valued functions, CLR scalar functions, CLR aggregate functions, replication-filter procedures, extended stored procedures, SQL scalar functions, SQL table-valued functions, SQL inline table-valued functions, SQL stored procedures, views, and user-defined tables (including local and global temporary tables). An example follows this list.
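
To illustrate that last point, the same one-line mechanism covers views and tables as well; CustomerList here is an invented name, used purely for illustration:

CREATE SYNONYM INTERFACE.CustomerList
   FOR PROD_IMPLEMENTATION.CustomerList;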

Well, here you go: now you have one more tool in your development toolbox. Use injection wisely, make sure you have basic infrastructure tests to go with it, and don’t forget about security.

Testing Autofac Magic

6 Jun

Kolev Says...

Using an IoC framework like Autofac is a powerful way to implement SOLID principles. From easier unit testing to better code organization, the advantages need little explanation or justification.

In retrospect, the Autofac setup and functionality are great. If you tend to get giddy about clean, short code, it’s just magical. Autofac does a lot of the gluing for us behind the scenes, which in theory works 100% of the time. But what happens when a project grows over time? Do the old registrations still work? Do new registrations step on old ones? Are there classes that match some registration they really shouldn’t?

Let’s take the following as an example. It’s perfectly valid code, but it can make life a bit difficult.

Assume we have an MVC application and register all classes whose names end in “Provider” with AsImplementedInterfaces.

That’s a reasonable thing to do so we can inject providers into our…

View original post 406 more words

Retiring a Zombie App

5 Jun

Herewith a short offering on how to kill zombies.  Even in an advanced civilization that has given us penicillin and “Here Comes Honey Boo-Boo”, zombie apps, neither growing nor dying, can still wander in and out of our resource matrixes with impunity, consuming our brains, our time, our memories (both human and mechanical), and of course our enterprise’s money.  Such was the case with Custodian Data Interface I.

Introduced, in tech-time, several geological epochs ago (okay, it was 2007), CDI 1 was a solid workhorse essential to getting Fortigent’s indigenous back-end suite of data acquisition and core database apps off the ground.  We used to have Advent DataExchange feeding into Advent Axys.  Now instead we would be in a world of CDI 1 feeding into ARI.  Raise the new flag.

Then, in 2009, CDI 2 was born.  Eventually all our Evare feeds were cut over to CDI 2, and its evolution has continued — with our first direct custodial feed, expanded IDC links, and now the beginnings of hedge fund data.

What made CDI 1 a zombie was our tardiness in cutting over one of our last and smaller data providers, DSCS.  When an app neither lets you down nor distinguishes itself with its growth potential, it can simply linger in the twilight.  That’s what CDI 1 did for nigh on 4 years, until last month, when at last the nails started being pounded into the coffin.  Here’s how:

First, we realized that permanently retiring CDI 1 was a down payment on a larger process of cleanup.  Our complement of applications has grown radically in the last 5 years.  Rationalization and streamlining are inevitably needed.  Here was some low-hanging fruit (after all, the replacement app was already working).  What was needed was willpower.  Guru Rao supplied that.

Second, knowing that this was a one-time commitment which would reap permanent benefits meant the resource allocation could be justified.  I was given primary responsibility, joined by Sarada Aragonda for testing and Bob Lewicki for file re-routing.  As to timeframe, we had no hard deadline, but we had a mandate: move slow or move fast, but don’t stop moving.  Get it done.

Third, we had to have a plan that would be quick, comprehensive, non-invasive of applications or existing operations, and as close to paralleling the expected production experience as possible.

1. In preparation, about 1,500 transaction translations and security translation mappings had to be ported from CDI 1 to CDI 2.  This required integrating data from 4 tables in CDI 1 to reside in just 2 tables in CDI 2.

2. We used a ‘pre-positioning’ strategy.  If you know you will need something later and can harmlessly incorporate it into the Production platform now, do it.  In this case, the mappings were loaded to Production and then started flowing out with the weekend DB refreshes to the cloud test environments, preventing the need for weekly reloading.  Jan Corsetty stepped in here to help.  Any necessary tweaks found during testing would be done to Prod at the same time, and those fixes would then propagate outwards in perpetuity.  This contributes to a cycle of across-the-board quality improvement and makes Production and cloud into near-exact mirror images of each other (with the primary exception of data obfuscation), improving the realism of the test experience.

3. Then, for each of the 42 DSCS custodians, a test file was created whose purpose was to stress-test every single mapping for its expected outcome in the new system.  Jenny Zhang from Operations ran point on any questions.  This efficient move acted as both ‘outcome testing’ and a form of user-acceptance testing, with myself as proxy for the users.

[Image: the MSSDTEST test account]

4. Expected results were mapped out in advance:

[Image: expected results]

5. For each custodian being tested, a dedicated test account was used.  These were easily and efficiently created as ‘TBD’ accounts by the very files being used for testing.  For instance, for Morgan Stanley, whose files come in tagged with the letters “MSSD”, MSSDTEST was the artificial test account (above).  All Morgan Stanley test transactions would go through this account.  Once processed, they could simply be viewed through ARI and compared with expected results.  Note that each transaction was given a unique numerical tag that served as its quantity, its dollar amount, and its Comment field contents.  This made sorting and identifying results quick and easy.

With this method, we were able to advance from loading mappings on April 26, to file testing throughout the first week of May.  By its end, all but two anomalies localized to one custodian had been resolved, making cutover a reasonable decision.  Cutover began on May 14 with 40 custodians (2 more followed).  The data flow is now completely shunted to CDI 2, with the data capture component slated to be replaced within a few weeks.  Then the dirt goes on the coffin.

Few cutovers are flawless, and this one did experience minor hiccups, but after the go-live date all problems encountered were easily diagnosed and remediable.  The best sign of a well-done cutover experience is that it be an operational non-event.  With methodical planning and teamwork, the retirement of CDI 1 falls into that category.

Rest in peace, CDI 1.  And don’t come back.

Top Commits of May 2013

4 Jun

It’s that time again folks. All of the commits from May have been collected, tallied, analyzed, and sometimes giggled at.

And away we go!

Looks like some people are putting in extra effort to make the list. Kudos for this one:

This class is a stub. You can help Bruce by expanding it and adding citations.

Can’t pass up a chance to highlight the elevated language choice:

Removing misbegotten commit of duplicated classes

Someone gave this developer TOO much power:

no more time zonez.

Speaking of time:

build fix.. stupid datetime.now…

Thanks for keeping source control PG:

*&%$&^: Broke a JSLint Test – fixing

Got this check-in 3 times in a row. I’m guessing the answer was ‘no’:

Does this fix the EF mapping?

And our top commit of the month is a timeless adage:

Compiles fine on my machine.

Alright that wraps up this month’s edition. I can’t wait to see what wacky commits our dev team can come up with for next month.

Reflections on Custom Report Development

3 Jun

As a junior software developer in my first few months out of school, it’s exciting for me that a product that I worked on has reached the end client.  Creating a custom report page has been a long process, but a great experience.

A month ago, I was assigned the development of a new report page, the Snapshot Portfolio page.  It was to be a quick overview of a portfolio, showing the portfolio’s activity, rates of return, allocations, and growth information.  I was provided with a mockup and a meeting to review the requirements of the page.

[Image: the mockup]

For the next two weeks, I coded charts and tables to match the mockup as closely as I could.  I tested the new page on portfolios spanning several companies.  On the Friday of that week, I prepared several reports showing the new page and demonstrated my work to members of the development and consulting teams.  Reactions to the initial version of the new page were very positive, and helpful feedback, suggestions, and enhancement requests resulted from the demo.  Even though co-workers had hinted to me that a mockup is only a starting point for any new page, I admit to being surprised at how many changes were requested after I’d matched the mockup so closely.

[Image: the first draft]

One of the most challenging elements in the development of this page was the creation and fine-tuning of the growth chart.  Although it is a simple chart in Excel, coding the component (which included two charts, each on two axes, three sets of labels, and a legend) proved to be a challenge.  As the chart progressed, the color scheme was changed several times, the bar styles were changed, the axis labels were changed, and at one point the chart values were even displayed normalized!  Throughout this process, I had my fingers crossed that my next color combination would be the winning one.  Frustrating though it sometimes was to have my early color choices called psychedelic, in the end the final color choices looked great.

[Images: four iterations of the growth chart]

Over the course of the next week, I completed the requests for enhancements and changes I’d gotten at the demo and from later emails from the consulting department.  The page was ready to be deployed for Advisor testing.

Once deployed, we started getting instant feedback from advisors.  Although a “snapshot” page was originally requested, advisors saw the page’s potential to display more customized information to be consumed at a glance.  Requests started pouring in for more options.  As well as showing high-level investment segments, advisors wanted to be able to show more detailed segments.  They also wanted to control the number of benchmarks displayed.  One thing that upset me during this portion of the project was receiving a request that I just could not implement.  One firm wanted some very long asset names displayed, and I found I physically could not fit the long names alongside the other required page information.  I did, however, manage to squeeze in several extra characters.

When all of these changes were implemented, the final page looked like this:

[Image: the final page]

Although I am now working on other projects and the page has been available for several weeks, it continues to change and grow.  I recently implemented a parameter that allows an advisor to hide the allocation chart.  Since finishing this page, I have worked on several more page customizations.  Some of them have involved editing existing pages, which always carries the risk of breaking pages that currently work.  I think that if the practice of making customized and customizable pages is to continue, we will need to change how our pages are created.  It feels like there is a lot of risk in modifying a heavily used page, and to mitigate that risk I have needed to duplicate a lot of code.  I feel the recent parameterization of the pages is a great step toward keeping customizable pages a viable offering.  I imagine that as more innovations make our report engine even more flexible, I will soon be working on many more customizable pages and custom enhancements to existing pages.