Archive | October, 2012

Knockout Resources

26 Oct

Following on from Tim’s great post about UI development, I want to thank CapArea.NET and Scott Lock for hosting my presentation on Knockout on Tuesday evening. For many of our pages, Knockout is the tool we use to display data. Knockout is a JavaScript framework that lets you create a JavaScript view model that represents the business data used by a page. Once you have a view model, you can add data-bind attributes to HTML elements. The data-bind attributes instruct Knockout to pull data from the view model and display it within the HTML. 
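To make the idea concrete, here is a stripped-down sketch of the observable concept Knockout is built around. This is plain JavaScript for illustration only, not Knockout’s actual implementation; in real code you would use ko.observable, which presents the same read/write/subscribe surface.

```javascript
// A minimal illustration of the observable idea Knockout is built on.
// NOT Knockout's implementation -- just the core concept.
function observable(initialValue) {
  var value = initialValue;
  var subscribers = [];
  // Calling with no arguments reads the value; with one argument, writes it.
  function accessor(newValue) {
    if (arguments.length === 0) return value;
    value = newValue;
    subscribers.forEach(function (cb) { cb(value); });
  }
  accessor.subscribe = function (cb) { subscribers.push(cb); };
  return accessor;
}

// A tiny "view model" using it:
var firstName = observable("Scott");
var log = [];
firstName.subscribe(function (v) { log.push(v); }); // Knockout's bindings subscribe like this
firstName("Tim"); // log is now ["Tim"]; bound HTML would update automatically
```

When Knockout sees a data-bind attribute, it subscribes to the relevant observables in exactly this spirit, which is how the HTML stays in sync with the view model.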

As promised on Tuesday evening, here is a list of Knockout resources and links to all the samples I used.

Knockout Samples

Here’s a list of the Knockout samples I used. Each of these samples exists as both a web page and a JSFiddle. I’ve included both links below:

  • Hello World: Web Page, JSFiddle
  • Click Counter: Web Page, JSFiddle
  • Simple List (Observable Array): Web Page, JSFiddle
  • Rich List: Web Page, JSFiddle
  • Forms Controls: Web Page, JSFiddle
  • Collections (foreach): Web Page, JSFiddle
  • Animation & Bindings: Web Page, JSFiddle
  • Grid Custom Binding: Web Page, JSFiddle
  • Contact Editor: Web Page, JSFiddle
  • Grid Editor: Web Page, JSFiddle
  • Shopping Cart: Web Page, JSFiddle
  • Twitter Client: Web Page, JSFiddle

Other Knockout Resources

The main page for Knockout is knockoutjs.com. Hands-on tutorials are at learn.knockoutjs.com, and the documentation is at knockoutjs.com/documentation.

Steve Sanderson is the creator of Knockout.

Ryan Niemeyer’s blog includes some Knockout presentations.

Knockout Validation, which I didn’t cover, adds validation capabilities to Knockout.

Knockout Grid builds on the simple grid sample we reviewed to provide a more complete grid.

At the bar after the meeting, we had some discussion of Knockout competitors.  Steve Sanderson’s list of seven JavaScript frameworks is on his blog. Although Steve is the creator of Knockout, the comparison does a pretty good job of listing the pros and cons of each framework.

TodoMVC is an application that’s been implemented in multiple JS frameworks as a comparison tool.

AngularJS is Google’s competitor to Knockout.

UI Development at Fortigent

25 Oct


User interfaces are often the most visceral aspect of software development. Most desk-bound professionals use software for 8-10+ hours per day and as a consequence have opinions and expectations about UI. We’ve all lost our precious time and sanity to poorly designed apps. We’ve also been inspired and awe-struck by novel and engaging user interfaces in the past. Therefore, virtually every time we have the chance to build or improve a UI, almost everybody in the room will have something to say about it.

Couple this intensity of focus with the belief, held by many software developers, that design is for right-brained people only, and you can understand why building a UI can be very stressful, particularly when you are not on a team with dedicated UI design resources. The truth is that the UI is how most people external to your project will judge it, regardless of how great the underlying code is.

Building the user interface for a software product in a team can be a profoundly frustrating or enjoyable experience depending on your context and approach. At Fortigent, we’ve evolved a process which we think works great for our needs and wanted to share it here.

Agile Mindset

You’ll hear a lot about agile on this blog and for good reason. We embrace it because we know from experience that it works much better than waterfall (BDUF) in our context. There’s a lot of noise out there regarding agile and it can be hard to understand its essence.

For me, software development is a creative process; more art than science. From a business perspective, agile is risk mitigation. From a creative perspective, agile is a feedback loop.

Our team is always in the agile mindset: we crave feedback, embrace change and fail fast instead of trying to build the wrong thing perfectly the first time.

Clarity of Purpose

“Theirs not to reason why, theirs but to do and die.” - “The Charge of the Light Brigade” by Alfred, Lord Tennyson

If you’re a developer or designer, you’ve heard questions like this a thousand times from stakeholders: “how hard would it be to make the button open a pop-up window instead?” This is natural; the stakeholder has received or raised a concern, analyzed the problem and proposed a solution.

Building a software product based only on “how hard would it be…” questions is a recipe for failure. If you’re a developer and are ever asked this, resist all natural urges to go into problem-solving mode and simply ask “why?” Then ask it again.

Only when the team has a deep understanding of the whys should it begin to propose and test solutions. The first idea is almost never the best one, by the way, and that’s OK.

To achieve this, you should stop looking at the screen and talk about the real-world end-user or business problems at hand. You need an environment of trust where questions are encouraged and time spent by developers discussing issues and white-boarding solutions is not considered “wasted”.

If you can’t as a team achieve a deep understanding of the whys just by asking questions then use your feedback loop.

The how is the glue: it is arguably the most important part, gets the least amount of praise, and takes the most time and experience to do well. For example, if you hastily implement a UI change to get feedback without properly unit testing or modularizing it, then when (not if) you are asked to change it, it will likely take much longer than expected and/or introduce bugs along the way. On the other hand, if you over-engineer a solution to a feature which is eventually dropped from the product, you’ve wasted a lot of time for nothing (YAGNI).

The ability to know how much engineering to apply typically is correlated to how well you know your users and the product vision, not how many technologies you are proficient with.

Our Process

PowerPoint Mockups

Mockups will be created for all relevant UIs for a given feature (usually) long before development work has been prioritized. These are typically done using PowerPoint and iterated on several times. Once some consensus is reached among product owners, these mockups are posted in SharePoint and stuck to the wall as a way to provide a sort of passive hallway usability testing, to let the ideas percolate and to show off a little.

Define software interfaces/contracts

Developers reach rough consensus on an implementation path, then define the contracts between the UI layer and the business logic (in our case ViewModel and Command objects) so that they can be worked on and unit tested in isolation. These are not set in stone of course.
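As a sketch of what such a contract can look like on the client side (the service name and data shape here are invented for illustration, not our actual ViewModel or Command objects), the UI codes against an agreed-upon interface, and a fake implementation stands in until the real one is ready:

```javascript
// Hypothetical contract: anything with a getAccounts(callback) method.
// The UI layer codes against this shape; implementations are swappable.
var fakeAccountService = {
  getAccounts: function (callback) {
    // Semi-realistic canned data lets the UI be built before the server exists.
    callback([{ name: "Checking", balance: 1200.5 },
              { name: "Savings", balance: 98000 }]);
  }
};

// The UI layer only knows about the contract, not the implementation.
function buildAccountSummary(service, done) {
  service.getAccounts(function (accounts) {
    var total = accounts.reduce(function (sum, a) { return sum + a.balance; }, 0);
    done({ count: accounts.length, total: total });
  });
}

buildAccountSummary(fakeAccountService, function (summary) {
  // summary.count is 2; summary.total is 99200.5
});
```

Because both sides honor the contract, the fake and the real service can be developed and unit tested in isolation.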

Fully-functional UI with realistic states

When building the UI layer we use copious amounts of semi-realistic fake data. The goal here is to get feedback, so the more interaction that can be built for real, the better. Most of what we do involves financial data analysis and/or forms (CRUD), so we make sure our fake data covers these possible states:

  • Big Data: Can the table handle thousands of rows or does it need pagination/infinite scroll/etc.? Can the UI handle a very long name or very large dollar figure?
  • Bad Data: Have we accommodated all input and security validation scenarios? These are often overlooked and can become quite complex. Any QA professional will tell you it’s the first thing they think about so it should be yours too.
  • No Data: Have you designed for cases where there will be no available data at all?
  • Devices: Is the app usable on relevant browsers and mobile devices? At multiple pixel densities, screen sizes and orientations?
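A sketch of how such fake data might be parameterized to cover those states (the row shape and values here are hypothetical, not our actual data):

```javascript
// Generate fake table rows for different test states.
// "big" stresses volume and width, "bad" exercises validation paths,
// "none" covers the empty case.
function fakeRows(state) {
  switch (state) {
    case "big":
      var rows = [];
      for (var i = 0; i < 5000; i++) {
        rows.push({ name: "Account With A Very Long Descriptive Name #" + i,
                    value: 1234567890.12 });
      }
      return rows;
    case "bad":
      return [{ name: "<script>alert('xss')</script>", value: NaN },
              { name: "", value: -0.001 }];
    case "none":
      return [];
    default:
      throw new Error("unknown state: " + state);
  }
}
```

Keeping the generator in one place makes it easy to flip a whole screen between states during a demo.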

Real server-side implementation

While the UI is being built we’re also building out the server-side implementation in parallel, taking all the usual things into consideration like performance, adaptability and security. This is a whole other world of constraints and challenges. Check out Andriy Volkov’s post to learn more about some of the recent developments in this area.

Wire up the UI to real data

After some iterations on the UI and after the real server-side implementation is available we simply wire them up (usually it’s a one-line code change) and voilà.

It is now time for more demos, testing and feedback. After that, usually a few rounds of consistency checks and polish are necessary before we can release the app to our customers.

Separation of Concerns

Tools like ASP.NET MVC on the server side and Knockout.js on the client side enable us to provide a clean separation between our data, our logic and our user interface. We don’t stop there, however.

On the server side, we do things like avoid MVC pitfalls and apply various design and architectural patterns to further separate our concerns and achieve SRP. We use Action Filters for cross-cutting concerns like security and unit test like crazy.

On the client side, we work to separate our structure (HTML) from our data (JSON) from our styling (CSS) from our logic (JavaScript). We avoid mixing DOM-specific code into our JavaScript application logic so that it can be more easily unit tested using QUnit. We also try to keep our code DRY and build re-usable jQuery plugins or Knockout Binding Handlers at the first sign there is an opportunity to share the love.
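For example, a formatting routine kept free of DOM access can be unit tested headlessly in QUnit, while a thin binding layer does the DOM work. This particular function is a hypothetical illustration, not code from our application:

```javascript
// Pure logic: no DOM access, so QUnit can test it without a page.
function formatCurrency(amount) {
  var sign = amount < 0 ? "-" : "";
  var fixed = Math.abs(amount).toFixed(2);
  var parts = fixed.split(".");
  // Insert thousands separators into the whole-dollar part.
  var withCommas = parts[0].replace(/\B(?=(\d{3})+(?!\d))/g, ",");
  return sign + "$" + withCommas + "." + parts[1];
}

// The thin DOM layer lives separately (shown as a comment here):
// $(".balance").text(formatCurrency(viewModel.balance()));
```

Because the function touches no DOM, a QUnit test is just an equality assertion, with no fixture markup required.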

In short, we separate our concerns a LOT because it’s really what makes software… soft.

Feature Toggles

We use continuous integration and frequent releases so how do we manage to iterate on a feature over longer periods of time? Our answer has been feature toggles. Basically, we use configuration to hide work-in-progress features, enabling us to demo them and not worry about feature branches. This can get tricky with major data persistence changes but it’s manageable.
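In spirit, a feature toggle is just a configuration lookup guarding a work-in-progress code path. A minimal sketch, with invented feature names:

```javascript
// Feature toggles: configuration decides whether in-progress work is visible.
var featureConfig = { newDashboard: false, exportToPdf: true };

function isEnabled(feature) {
  return featureConfig[feature] === true; // unknown features default to off
}

// Hide unfinished work from users while still shipping the code:
if (isEnabled("newDashboard")) {
  // render the new dashboard (work in progress)
} else {
  // render the existing dashboard
}
```

Flipping the configuration in a demo environment lets us show the feature without branching the codebase.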

Always Learning

Where we are now has been the result of a learning process and will never be perfect. Here are a few ways I think we can improve.

Nothing is Intuitive, Only Familiar

We work at a rapid pace. We have a flat organization and many senior developers with strong opinions and no dedicated team of UI designers. Achieving 100% visual design, interaction design and information architecture consistency in our larger applications has been a challenge. We encourage innovation so if a developer finds or makes a cool new UI widget (say for an auto-complete drop-down menu with spell check and voice recognition) and implements it to make a great user experience, then great!

There may, however, eventually be a temptation to implement that shiny new menu across the application for the sake of consistency. So it’s important that we as developers recognize this tension, especially when under pressure: don’t get too attached to our creations, and keep the importance of consistency in mind.

Managing Expectations

“It’s not an iteration if you only do it once” - Jeff Patton

The one downside of building the UI with fake data first is that some people think it’s done! In fact, it is far from “finished” (software, like many things, is rarely ever finished). Sometimes we get so excited about a new UI that we wind up showing it off to external folks before it’s even been through a single iteration. We need to do a better job of explaining the creative process and inviting more people to contribute to it.

Managing Feedback and Design

“If everything is important, then nothing is” - Patrick Lencioni

We’re very skilled developers and super confident in our ability to build the thing right. We crave feedback to make sure we’re building the right thing.

However, just because we want this precious feedback doesn’t mean we’ll always use it. That goes back to the basics of design and product management. That’s a much bigger topic but the point is we must strive to distil the feedback into short-term and long-term plans. This sometimes means saying “no” which can be very awkward for people who excel at bending over backwards for customers.

Measure It!

“If you cannot measure it, you cannot improve it.” - Lord Kelvin

A core part of HCI and UX is to empirically understand what works for users and what doesn’t.

We can build a UI that we love internally, but if we never collect data on our users’ experiences then we are ultimately missing the mark. We could benefit greatly from spending more time with end users, performing usability tests and A/B testing to guide our decisions.

Nothing gets a developer more engaged with their work than watching a customer use their app.


Fortigent has evolved a process to build software products with rockstar ninja guru hipster awesomesauce user interfaces. The keys to this have been the feedback loop, clarity of purpose, separation of concerns and feature toggles. Thanks for reading!

– Tim Plourde

Seven Wisdoms From The Theater World

16 Oct

One way to think outside of the box is to actually get outside of the box, and see how other industries or disciplines solve problems.  This cross-pollination can spark some creative approaches to handling issues. It can also reinforce good practices as well as deliver reminders about nonsense that should be avoided. I recently had a chance to jump outside the box and to immerse myself in the production of a play. This was a process that was not only enjoyable in its own right, but also offered ample lessons to apply to my daily toil of creating software.

I had often thought that the process of developing an application had a lot in common with putting on a play. There is a group of people who come together to conjure a creative work out of an idea. My participation in the Rockville Little Theatre’s production of “A Flea in Her Ear” confirmed that the similarities are striking. Both start with an idea that is wrought into a workable design. Based on the design a team of people with the necessary talents is assembled and a schedule is fleshed out. As the team begins to work towards its goal, time and money constraints shape decisions, design flaws are uncovered, and various obstacles and mishaps slow the project’s progress and shift its direction.

I will not belabor the metaphor any further, but here are the seven wisdoms that I will take back with me to the world of software development.

Delivery Trumps All

A play has a hard deadline. The theater is rented and the publicity has started before the cast has even been assembled. Delivery is not only a feature, it is The Feature. There is a countdown calendar to opening night and, unlike software development, no way to slip the schedule.

This hard time constraint forces the director to get the play into a deliverable state as soon as possible.  It might not be pretty after three weeks, but it gets the job done. This restriction also leads to a ruthlessness with features that cannot be made ready in time – be it a piece of set decoration or the blocking of the scene.  If it doesn’t work, it is stripped down to its serviceable minimum to focus the limited time available on what can be made to work.

The possibility of slipping the schedule in software kills its most important feature – delivery.  Users can’t use what they can’t touch. Better to cut out features that won’t make the date and put them in the next release.

Be Iterative

The production of a play cannot be incremental — it must be iterative.  The director knows she must deliver three acts on opening night, so all three are worked on throughout the rehearsal window.  The acts are made workable, then the outline is filled in, adding the layers of design and features that take the play from functional to fabulous. The director will not wait until the final moment to “bolt on” the last act, because two kick ass acts followed by 35 minutes of junk is unusable.

My colleague, Tim Plourde, has a wonderful chart from Jeff Patton in his cube that illustrates the difference between incremental and iterative approaches…

Software development should be iterative: get the basics working and then add the depth, color and texture as time allows. Get it out there, get feedback from the users, change and repeat.

Banish the Creep

There is no scope creep with a play, no temptation to add a fourth act or change characters or transform a farce into a tragedy – because this will cause the end product to suffer.  The producer didn’t tell the director, “Hey, while you’re working on Act II, why don’t you add in a new character that moves the couch while singing ‘Back in Black’.”

We’ve all heard these phrases “well, it is only a few lines of code”, or “while you’re in there” too many times. And too many times, we have all been beaten on the head by the law of unintended consequences when these little creeps devour the timely completion of a project.

Do the Critical Bits First

There was one critical set piece that wasn’t finished until the day before the set was loaded into the theater. It was then discovered it would not work in its initial form unless one of the actresses could replicate the vertical leap of Michael Jordan while wearing high heels. It all came together at the end (as will happen when you have a talented team), but it caused a small cascade of other design changes that made for an angst-filled tech week, with work on the set continuing until just before the opening curtain.

The temptation in any endeavor is to do the easy bits first and get them “done”. But if one of the pillars of the software design turns out to be faulty, it can cause even the easy bits to be redone or morph into hard bits themselves.

Share the Vision

Everyone on the cast and crew has to be intimate with the vision for the whole work.  While the actors have to know their parts, they also have to know how their roles mesh with the entire production, so the decisions they make are aligned with the vision for the play.  This provides the cast the framework for forming their roles and making suggestions for how various interactions between characters could be more effective in making the vision a reality. Our director did an excellent job in communicating her vision and outlining how every part fit within the whole, and reminding us from time to time what we were aiming for.

When developers and business analysts know the why – the what, the how and the when follow much more easily.  Design and implementation decisions are better made with context than in a vacuum.  Often the shared vision is assumed, and this can be a faulty assumption, especially once developers dive into their particular bytes. It is  good for the product owner to reiterate the vision from time-to-time to keep the developers from getting lost in implementation silos.

Right People In Right Places

The play provided a clear reminder that placing people in roles they can succeed in makes for the best possible outcome of any endeavor.  We did not have a few roles filled when rehearsals began. The delay and extra auditions were worth it to find the right cast.  As every part is critical to the functioning of the play, it is better to fill a role with someone talented, even if they are too old, too young, too tall or the wrong gender than to weigh it down with someone who meets the role’s description, but can’t play the part.

Setting up the right team is often the difference between success and failure of a software project. As the agile manifesto states, the key is people over process.

No Room For Divas

Finally, the cast and crew did more than was asked of them. They found myriad ways to help the production succeed, and did not have tantrums, even when they might have been warranted. The overwhelming approach to problems was “we can figure that out, we’ll make it work”, and everyone pitched in to make that happen.

This generosity of effort is one of the things that I really enjoyed during the play, and one that we fortunately have in the culture at Fortigent. People take the time to make sure the whole is healthy by taking on the unglamorous tasks when necessary, even if that requires personal sacrifice.  Having this kind of environment makes it fun to come into work in the morning.

While I will miss working with the wonderful cast and crew from my sojourn into the theater world, at least I will be able to take some lessons back into my daily software wrangling. This adventure has already strengthened my belief that Delivery is The Feature, iteration rules and it’s the people that make it all happen. And it has already provided some creative ways of looking at problems, which should help me not only think outside the box, but get outside of it too.

Messaging middleware at Fortigent. Challenges and perspectives.

1 Oct


A large enterprise such as LPL has multiple applications that are being built independently. These applications often need to work together and exchange information.

One approach to integrating multiple applications is data integration. This is when the application workflows proceed independently of each other, and the applications exchange information implicitly, by reading and writing their data to and from the same database. An example of this is a Customer Relationship Management (CRM) application having access to the sales orders data created by the web store application.

Another major kind of application integration is process integration (Ross, 2006). This is when an event occurring in one application triggers a reaction in another application. An example of this would be a creation of a sales order by the web store triggering a shipment process in the delivery subsystem.

Evolution of application integration

In practice, process integration often begins its life as an aspect of data integration, and only later, as the architecture’s complexity grows, does it gradually acquire its distinct message-centric character. To build on the previous example, a sales order may initially be a simple insert into an Orders table in a Relational Database Management System (RDBMS). The CRM app simply reads from the Orders table. When the Shipment Subsystem appears on stage, it starts treating the Orders table as its work queue, in which an individual order represents a work request, or message. This marks the advent of process integration.

Initially, the process integration may be polling-based — the Shipment System would simply query the table at regular intervals. As data volumes grow, the scalability of the system becomes important. To avoid bottlenecks, some of the subsystems may have to exist in multiple instances in order to allow for increased throughput. For process integration this poses a problem of concurrency as a work item is handed off from one subsystem to another. In our order processing example, if a second instance of the Shipping System is added, the polling-based queue solution needs to be robust enough to guarantee that the order will not be shipped twice. In the RDBMS world, this can be solved by wrapping the access to the queue table in a database transaction, locking the record for the duration of the status update.
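The claim step at the heart of that solution can be sketched as follows. JavaScript stands in here for what is really a SQL UPDATE inside a transaction; a synchronous in-memory array plays the role of the Orders table, so the atomicity comes for free in this toy model:

```javascript
// Sketch of the claim pattern behind a polling-based queue table.
// In SQL this is an UPDATE ... WHERE Status = 'New' inside a transaction;
// here a synchronous in-memory array stands in for the Orders table.
var orders = [
  { id: 1, status: "New" },
  { id: 2, status: "New" }
];

// Atomically claim the next unprocessed order (returns null if none).
function claimNextOrder() {
  for (var i = 0; i < orders.length; i++) {
    if (orders[i].status === "New") {
      orders[i].status = "Shipping"; // the "locking" status update
      return orders[i];
    }
  }
  return null;
}

// Two shipping-system instances polling the same table:
var a = claimNextOrder(); // claims order 1
var b = claimNextOrder(); // claims order 2
var c = claimNextOrder(); // null -- nothing gets shipped twice
```

The status flip is what prevents double processing; in the real database version, the transaction and row lock make that flip atomic across competing instances.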

As the number of downstream subsystems grows (adding e.g. payment, service activation etc.) the database-driven solution gets progressively more complex. An order in the Orders table may need one or more status columns added, to indicate the order’s position in each of the downstream workflows. At some point, instead of having the multiple workflows feed off of the same Orders table, their state and transition history data gets spun off to a separate set of tables, representing multiple queues and event logs. Furthermore, as the business matures and the Service Level guarantees get tighter (Smith, 2006), the polling-based solution may hit its scalability limits. This warrants event-based process integration. At this point a mechanism (based e.g. on database triggers) needs to be devised to signal the Shipping Subsystem that a new order has been created.

Message Brokers

The above was meant to illustrate why, even though in principle both data- and process-integration solutions can be built on top of a generic DBMS, a more strategic approach is to base the process integration on something specifically built for the purpose.

This is because the message-centric approach requires operating at a higher level of abstraction than what a generic DBMS provides out of the box. Implementing a messaging solution on top of a generic DBMS requires lots of custom coding to translate lower-level database concepts (e.g. tables, queries, status columns) into higher-level messaging concepts (e.g. queues, subscriptions, message delivery).

From the above context, a message broker emerges as a specialized kind of database. Unlike generic databases, designed first and foremost for storage and retrieval of data, message brokers are designed from the ground up to allow applications to exchange packets of data “frequently, immediately, reliably, and asynchronously, using customizable formats” (Hohpe, 2003).

Before we get to choose a specific message broker implementation, we need to know how to compare them. Let us zoom in on the messaging problem and see what features make a difference.

Messaging concepts

While data-centric systems usually manipulate data in bulk (indeed, in SQL systems even a single-record update operation is but a special case of a multi-record update), message-centric systems work with one individual message at a time. One application sends messages to the broker, one at a time. Another application receives messages from the broker, again, one at a time.

How does the broker know which application should receive which message? This depends on the capabilities of the message broker. In simple brokers, each application has its own inbox, or queue. The sender specifies an exact destination address. If multiple copies of the same message need to be delivered to certain recipients, it is the sender’s responsibility to identify and target each recipient.

In more advanced brokers the senders do not target the messages at specific receivers. Instead, the new message is evaluated against a set of routing rules configured by a system administrator. Based on the rules, the message may be delivered to one or more applications. In even more advanced brokers, applications themselves express their interest in receiving certain messages. This capability serves to enable a popular messaging pattern known as the Publish-Subscribe, or pub/sub for short. In a pub/sub architecture subscribers receive messages without knowledge of what specific publishers there are (Gorton, 2006).

The most widespread kind of message routing is topic-based routing. In this approach, a message carries a special meta-data field, called a topic, which serves as the main routing factor. Alternatively, content-based routing allows arbitrary elements of the message to be examined by the routing rules, allowing more flexibility at the cost of reduced runtime performance.
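A simplified illustration of topic matching (this is a toy matcher supporting only a single-word `*` wildcard, not any particular broker’s algorithm):

```javascript
// Simplified topic matcher: patterns are dot-separated words,
// "*" matches exactly one word (AMQP-style "#" multi-word matching omitted).
function topicMatches(pattern, topic) {
  var p = pattern.split(".");
  var t = topic.split(".");
  if (p.length !== t.length) return false;
  for (var i = 0; i < p.length; i++) {
    if (p[i] !== "*" && p[i] !== t[i]) return false;
  }
  return true;
}

// A subscriber bound to "order.*" receives both of these...
topicMatches("order.*", "order.created");    // true
topicMatches("order.*", "order.shipped");    // true
// ...but not this one:
topicMatches("order.*", "payment.received"); // false
```

The broker evaluates every incoming message’s topic against each binding pattern and delivers copies to all matching queues.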

What if a message cannot be delivered, for example, due to the recipient application being offline? Different message brokers provide different guarantees. Most brokers will try to redeliver a message a configurable number of times before discarding it, or rerouting it to a dead-letter queue. Some brokers store their messages in memory and lose them when the broker itself is restarted. Some will persist messages to disk and recover them upon broker restart. Some message brokers allow the recipient to examine and conditionally reject a message. Most message brokers allow their messages to have a limited life span, such that an unpicked message will eventually expire instead of sitting in a queue indefinitely.
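That redeliver-then-dead-letter behavior can be sketched as a toy model (real brokers implement this internally, with configurable policies):

```javascript
// Toy model of redelivery: try a handler up to maxAttempts times,
// then route the message to a dead-letter queue instead of losing it.
function deliver(message, handler, maxAttempts, deadLetterQueue) {
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      handler(message);
      return true; // delivered successfully
    } catch (e) {
      // recipient rejected or failed; the broker retries
    }
  }
  deadLetterQueue.push(message); // give up, but keep the message for inspection
  return false;
}

var dlq = [];
var alwaysFails = function () { throw new Error("recipient offline"); };
deliver({ id: 42 }, alwaysFails, 3, dlq); // returns false; dlq now holds the message
```

The dead-letter queue matters operationally: an undeliverable message is evidence of a problem, and discarding it silently would destroy that evidence.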

Finally, depending on concrete implementation, message brokers differ in their performance characteristics (lag, throughput), ease of administration, and the level of integration with the underlying operating system.

Fortigent Messaging Story

Similar to the web store example above, our journey started with a simple data-centric integration solution. Our applications shared the same database, with consumers doing most of the work to aggregate our highly normalized data into presentable form at query time.

When this simple approach could no longer scale, several worker processes were introduced, such as ABP (Account Based Positions) and CBP (Composite Based Positions), that would pre-aggregate the data in bulk. As we continued to optimize away any redundant computations, the worker processes evolved from recalculating all data every time, to polling for changes, and finally, to SQL-trigger-based activation.

Since some of our computations were easier to express in C# than in SQL, with the passage of time certain operations (such as exploding our bucket inheritance hierarchy) were moved outside the database into managed code. A stored procedure, fed by a SQL trigger, would insert the work item into a queue table polled by a .NET application. This hybrid architecture, known as the “Master Queue”, still relied on SQL Server for persistence and concurrency management.

Our first attempts at employing a dedicated message broker used MSMQ. This choice seemed obvious given MSMQ’s status as a native Windows component, requiring virtually no installation and providing full integration with Active Directory, native Windows transactions (including DTC), and having its API built into the standard .NET Base Class Library. Before long we discovered some of the more painful limitations of MSMQ.

This may be a good point to pause and review all the issues we had with MSMQ to prepare the stage for an alternative message broker.

Problems with MSMQ

First, in MSMQ there is no concept of a central broker. Each application is supposed to have its own queue defined locally on the same machine. The sender is required to fully specify the destination address, including the remote machine(!) and queue name. This forces the applications to depend on stable network infrastructure, and is made worse by MSMQ’s lack of support for dynamic DNS. The impact of this limitation can be somewhat reduced by going against Microsoft recommendation and having all applications have their queues reside on a central server — which introduces a single point of failure, reduces the performance and nullifies the delivery guarantee. Also, because MSMQ disallows creating queues on remote machines, the central server approach taxes the efficiency of the Continuous Integration process. Because the queues cannot be created at application installation time, a separate manual step is required to define queues on the central server.

In addition, not only does MSMQ use 3 TCP and 2 UDP ports(!), it dynamically allocates ports at OS startup! This significantly complicates the firewall configuration needed to let MSMQ traffic through.

Second, MSMQ lacks any routing capability whatsoever. This makes implementing a Publish-Subscribe pattern rather difficult and leaves applications painfully aware of each other’s existence.

In short, MSMQ dates back to 1997, and it shows. Its distributed design combined with the lack of routing support makes it inadequate for the role of enterprise-grade middleware unless extra development effort is made to build the lacking facilities on top of it.

Options Considered

Before abandoning MSMQ altogether, we tried to squeeze the maximum out of it by implementing our own routing mechanism on top of MSMQ’s barebones queues. Our solution was inspired by one of the approaches described in (Hohpe, 2003) and was a hybrid of their Routing Slip and Process Manager patterns (see the book for more information about these patterns).

Eventually convinced of the inadequacy of MSMQ for our messaging needs, we decided to try a third-party message broker. Among the alternative message brokers we considered were ActiveMQ, 0MQ, and RabbitMQ. All three are relatively new products, incorporating the lessons learned by the industry in the 15 years since the inception of MSMQ.

  • ActiveMQ is a central server type broker, rich with features and very popular among the enterprise Java crowd. It is a mature solution with full support of flexible rule-based routing. It is Open Source software distributed under the Apache license and requires a Java VM to run.
  • 0MQ is a broker-less distributed solution (this aspect makes it similar to MSMQ), optimized for faster-than-TCP, low-latency, high-throughput local network operations. It is a native C++ app distributed under the LGPL license.
  • RabbitMQ is a central server type broker, commercially supported by VMWare and freely distributed under the Mozilla Public License. One of the most attractive features of RabbitMQ is its robust routing support with fully programmable subscription declaration.

After careful consideration and consultations with colleagues in other companies we have settled on RabbitMQ.

RabbitMQ advantages

Here are some of the advantages RabbitMQ offers over MSMQ:

  • RabbitMQ is a central server type solution, with full support for DNS and failover clustering. This simplifies configuration management while avoiding a single point of failure.
  • RabbitMQ fully supports topic-based routing. This makes implementing a full-blown Publish-Subscribe architecture very simple.
  • RabbitMQ traffic goes through a single TCP port — making it easier to manage firewall configuration.
  • AMQP, the standard RabbitMQ is based on, is a programmable protocol. This means the admin does not have to define queues and subscriptions manually — they are created as needed by the client application.
  • According to some tests, RabbitMQ has about 7-8 times higher message throughput than MSMQ.

Even with the above advantages, adoption of RabbitMQ was not completely painless. The biggest challenge that has surfaced so far is RabbitMQ’s lack of integration with Windows Authentication (aka trusted connection). This requires passwords to be stored in application configuration files, which triggered a knee-jerk reaction from our IT infrastructure team. In one case we were able to work around the issue by encrypting the application’s configuration file. In another case, we had to build a wrapper WCF service to avoid exposure of RabbitMQ credentials to application users.

Despite the difficulties, our experience with RabbitMQ was very pleasant. Most importantly, its support of the pub/sub model allows us to achieve a high degree of decoupling between the individual applications which will help to make our architecture more flexible and future-proof.


The need to sustain system responsiveness given the growing volume of data at Fortigent has led to a proliferation of worker processes whose job it is to aggregate the data through intermediate stages into its presentable form. As the final result depends on a multitude of configurable factors, the data needs to be recomputed in response to configuration changes. This puts pressure on integrating the computation across multiple processing agents, and led to the advent of an event-driven messaging architecture at Fortigent.

The recent adoption of RabbitMQ is a step in an ongoing process of sophistication of software architecture in response to challenges posed by the continuing business development.


Gorton, I. (2006). Essential Software Architecture. Springer.

Hohpe, G. (2003). Enterprise Integration Patterns. Addison-Wesley.

Ross, J. W. (2006). Enterprise Architecture As Strategy. Harvard Business Review Press.

Smith, G. (2006). Straight to the Top. Wiley.

Videla, A. (2012). RabbitMQ in Action. Manning Publication.