I was discussing a system process with a business client, and during the conversation it hit me that the client has no idea what the business process is. At some point during the development of the system, a business specification was written, perhaps even by this same person, but over time the system evolved and the process was forgotten. Now, my business client has no idea what the system is actually doing. To him, the system is a black box, one big scary unknown. He has a general idea of what it does - it takes data from some place, does some filtering and calculation, populates a few tables, does some more calculations and filtering, and eventually generates some results. The business analyst is then charged with verifying the results - the irony is that the business analyst isn't completely sure how the numbers were generated. He has a certain level of trust, but over time that trust erodes, and he is left feeling uneasy and unsure.
Later that day, I sat in a meeting where a discussion was taking place about a new business system. It was the usual meeting: many people spoke about many things, most of the conversations strayed far from the actual deliverable or even from reality, nothing was agreed upon, a few people tried to cock fight each other, and, overall, the meeting reversed progress - the usual meeting. Anyway, during the meeting, specifically during one of the cock fights, one of the analysts spoke up and said something very profound and interesting (it was followed by a cock block, but that's a different story). His statement was that quality assurance testing should not end when the system goes to production but should be an ongoing part of the system. He believed it was important to have a stable validation layer in the system in order to provide basic sanity checks that the system is performing as expected against an endless parade of changing data. My team members rose up in anger against him; some claimed he was a heretic, others threatened excommunication. I sat silently, listening and wondering.
Each system is basically a workflow. Once you remove some of the techy parts, you end up with a business process. In fact, at some point this system was a very clean Visio diagram. Each box was then blown up into a class diagram, and then some crack-smoking code monkey (developer) defecated all over it - an enterprise system is born. This workflow is then overlaid with data. The workflow reacts differently to different pieces of data, but it's still functionally a flow - actually, more of a graph. The graph is a mix of generalized technical aspects and business logic. The problem these days is that the business logic is sprinkled all over the system, making it very hard to re-create exactly what happened.
So, I wonder if it would be possible to overlay an actual system with a meta-system. Would it be possible to create a set of, let's say, annotations to add alongside the code, and possibly some additional hooks, to allow another system to walk the system code, generate the graph, and overlay the graph with the business documentation sprinkled throughout the code? The end result could be a self-documenting system. No, I am not talking about javadoc or an external specification. I am talking about a tool for the business user to verify what a given system is doing. Because the documentation and the code live side by side, perhaps even are the same thing, the business user can be confident in what they are seeing.
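To make that a bit more concrete, here is a minimal sketch in Java of what I have in mind. The @BusinessStep annotation, the PricingWorkflow class, and the step names are all made up for illustration - the point is only the shape of the idea: the business description lives next to the code, and a separate walker reflects over the code to reconstruct the business graph.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// Hypothetical annotation carrying the business meaning of a step.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface BusinessStep {
    String description();        // plain-English description for the business user
    String[] next() default {};  // names of the steps this one feeds into
}

// A made-up workflow to hang the annotations on.
class PricingWorkflow {

    @BusinessStep(description = "Filter out trades with missing counterparties",
                  next = {"applyFxRates"})
    void filterTrades() { /* ... */ }

    @BusinessStep(description = "Convert all amounts to USD using end-of-day FX rates")
    void applyFxRates() { /* ... */ }
}

// The 'meta-system': walks the code via reflection and prints the
// business graph described by the annotations.
public class WorkflowDoc {
    public static void main(String[] args) {
        for (Method m : PricingWorkflow.class.getDeclaredMethods()) {
            BusinessStep step = m.getAnnotation(BusinessStep.class);
            if (step == null) continue;
            String arrows = step.next().length > 0
                    ? " -> " + String.join(", ", step.next())
                    : "";
            System.out.println(m.getName() + ": " + step.description() + arrows);
        }
    }
}
```

A real version would need hooks for conditional branches and runtime data, but even this toy shows the appeal: the description can't drift away from the code, because it is attached to the code.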
The second part is that a lot of data-centric systems live and die by the data they are receiving. Garbage in, garbage out, they say. Well, I am not quite sure this statement needs to be true. After a long, deep thought, I agreed with the business analyst and took a stand to support him. I think he is right: QA should not end once the system is in production. Each system should be built to be in a constant state of testing itself. The point isn't to test the code, the point is to test the data. The data is the most important thing. As developers and architects, we treat data as a second-class citizen. What comes into the system should be checked, what happens in the system should be checked, and what comes out of the system should be checked. It would help if the checks were hypothesis tests. The analyst proposed having a parallel testing dataset. He figured that a constant check against a constant baseline may provide a basic sanity check, or at least raise some red flags if the data is too far from the norm. Of course, this type of test is context specific, but I think the basic principle has value. Data isn't just data; it's the most important thing. When the business analyst receives the end result, and the end result is wrong, the analyst spends hours trying to narrow down what went wrong. Sometimes the problem is the inputs, sometimes the problem is the business logic, and other times he just doesn't know.
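As a sketch of what such a built-in check might look like, here's a toy example in Java. The class name, the reference numbers, and the three-sigma threshold are my own assumptions, not anything the analyst actually specified; it simply treats a constant reference dataset as the norm and flags incoming values that stray too far from it.

```java
import java.util.Arrays;

// A minimal sketch of an in-production data sanity check: compare incoming
// values against a constant reference dataset and flag outliers.
public class DataSanityCheck {

    private final double referenceMean;
    private final double referenceStdDev;
    private final double maxZScore;  // how far from the norm is "too far"

    public DataSanityCheck(double[] referenceData, double maxZScore) {
        double mean = Arrays.stream(referenceData).average().orElse(0.0);
        double variance = Arrays.stream(referenceData)
                .map(v -> (v - mean) * (v - mean))
                .average().orElse(0.0);
        this.referenceMean = mean;
        this.referenceStdDev = Math.sqrt(variance);
        this.maxZScore = maxZScore;
    }

    // Returns true if the incoming value looks sane relative to the reference.
    public boolean isSane(double value) {
        if (referenceStdDev == 0.0) return value == referenceMean;
        double z = Math.abs(value - referenceMean) / referenceStdDev;
        return z <= maxZScore;
    }

    public static void main(String[] args) {
        double[] reference = {100.2, 99.8, 101.1, 100.5, 99.6};  // made-up baseline
        DataSanityCheck check = new DataSanityCheck(reference, 3.0);

        System.out.println(check.isSane(100.9));  // true: within the norm
        System.out.println(check.isSane(250.0));  // false: raise a red flag
    }
}
```

The same shape of check could sit at the inputs, between the internal stages, and at the outputs; the reference data and the threshold are the context-specific parts the business would have to supply.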
I wanted to get this post out, but overall I am still thinking through a lot of these concepts. I think there is something conceptually there, but it's a bit foggy.