I need to get around to writing this and, more generally, to finishing some of the shit I start.
A week later:
Of course, this entry is a lot more complicated than a simple statement such as "get shit done." The issue is a delicate balance of life and work, and the Venn diagram where they overlap.
Projects start as fun little side things. You play around with them for a few hours, put some things together, and call it a day. Then you get an email from the user saying, hey, this is pretty cool, but it needs to be a lot more to be useful. Sure, you say, I'll add a few more lines of code. Unfortunately, now it's not a small little project but a junior system. And not only that, but it's a junior system which is poorly tested. You try to maintain the same schedule, but you realize that you can't add the kind of functionality that's needed, or maintain the level of quality necessary for a production system. You make mistakes, take shortcuts. Before you know it, your users are pretty angry. They are starting to question the whole thing. And frankly, so are you. You want to finish it. You desperately need to finish it. You've come this close, dedicated this much, but you realize that finishing will require even more.
This is an interesting struggle. The lucky few of us actually enjoy building things, so this little side project may seem like work to some but is really more of a hobby. Unfortunately, some people are relying on your hobby, and that's when the pressure kicks in and the problems start. On the other hand, unless you had a user who wanted something, you probably wouldn't have chosen to build this particular thing as your hobby.
The other interesting observation is that you are starting to see this project as something more. Maybe this project is the way out of the rat race. If it works, it could be your ticket. But it's so much work, you say.
How do you maintain the delicate balance? Is it even possible to maintain the balance? You're working with fixed items. There is a fixed amount of time. That amount is then reduced by constants such as actual work hours, sleeping, eating, showering, spending time with the family.
A week has 168 hours: 45 are spent at work, 49 sleeping, 14 eating, and 4 on toiletries. What remains is 56 hours to spend time with the family, work on the side projects, wash the dishes, do laundry, go to the movies, sleep in, watch TV, do the bills, etc. What ends up happening is you can probably take maybe 9 hours for the week: 1 per workday and 2 per weekend day. Unfortunately, as everyone knows, spending 1 hour programming is like watching ballet dancers do hip hop (it's not right). You can't accomplish anything major in 1 hour or even 2 hours. So you may start, but you tend to aim lower and make a lot of mistakes in the process.
Wish me luck!
Tuesday, October 17, 2006
Thursday, August 24, 2006
Randomness
Here is an interesting question: if you know the past, can you guard against a similar event in the future? You know that the Great Depression took place. A lot of research has been done to understand what led to the Great Depression, and a lot of research has been done to understand how to get out of it. In fact, the current chairman of the Federal Reserve is a specialist on the Great Depression. So, after all that, do you think it can happen again? With all this acquired knowledge, would we see it coming and be able to guard against it?
This question has been occupying me lately, and I am leaning towards a no. I don't think we'll see it coming. We may know how to guard against that specific event, but I am starting to believe that history never repeats itself. Events may seem similar, but there are infinite combinations of how they are triggered, how we react to those triggers, consequences, possibilities, and, of course, conclusions. If history never repeats, then studying history may not provide much value other than protecting us from that exact event.
My other opinion is that the world is getting more interconnected and more complicated. By this I mean that connections are forming that we may not realize exist or are even correlated. The world of the past will never happen again, and if the event of the past happens in the current world, the consequences will be quite different than before. Unfortunately, some other event may take place that may bring us the same type of devastation. Basically, my theory is that history can never repeat itself because the world is continuously changing.
There is some kind of random undertone to the world. Some people call it luck, others misfortune. Let's say you trade stocks. You've read all there is about the company. You think you understand the Fed, the government, the currency, etc. You believe very strongly in the financials of this company. You buy the stock. First it goes up, but then it drops like a rock. It turns out that this company was dependent on the knowledge of a single engineer who was hit by a train. An unforeseen circumstance knocked you out of the market. What is that circumstance? Is it randomness? Can you foresee it? Can you calculate its probability of occurrence? Do you understand its impact? I don't know, but it doesn't seem likely, especially with our current understanding of probability. What is more likely is that we get a false sense of security from our acquired knowledge or perhaps our previous fortune, and this, if nothing else, will lead us to ruin.
The other item I wanted to cover was noise. This blog is noise. CNN is noise. In fact, a large part of the internet is noise. First off, the question is whether more information is noise or valuable artifacts. And if more information is noise, is it harmful? Does having more information actually increase your probability of making a wrong decision? Can you measure what information is valuable and what is noise? These statements seem very counterintuitive. What I am basically saying is that knowledge may actually be bad for you. Our brains seem to have adapted to this by reducing large amounts of knowledge into manageable chunks. A lot of knowledge is simply forgotten; other knowledge gets reduced into some basic concepts and understandings. Does learning everything there is to know about a company, such as all their news statements, their financial statements, statements made by their peers, etc., somehow take away from the bigger picture?
If anyone out there has an answer, please, do write a comment.
Friday, July 07, 2006
Reflexive Theory
The word reflexive means to direct back on itself. Don't confuse this with reflection, which means careful consideration or self reflection, which means careful consideration of oneself.
Reflexive Theory was originally created by the Russian mathematician Vladimir Lefebvre, who is now part of a US think tank dealing with terrorism. Reflexive Theory was born in Russia during the Cold War as a response to game theory, which was widely adopted by the West.
What brought this theory to my attention is an article by Jonathan David Farley in the San Francisco Chronicle, "The torturer's dilemma: the math on fire with fire," which was published on the Econophysics blog. I started trying to get a bit more information on Reflexive Theory. Wikipedia had nothing, and Google came up short; in fact, the only reference I could find is a link from a Russian site to a very old publication (go to page 86 for the relevant paper). And even there, the theory is never defined but is just applied in a simplified mathematical model of border protection from terrorism. I am wondering whether the fact that there is almost no mention of Reflexive Theory on the web has anything to do with the founder of the theory now being employed by the United States government. Of course, this thought pattern is better pursued on a big-brother paranoia blog.
Reflexive Theory tries to explain mathematically why individuals take certain actions and what the consequences of those actions are. The theory takes into consideration how individuals perceive themselves, whether good or evil, and whether those perceptions are valid or not.
The interesting thing about reflexivity is that it's derived from psychology. The term actually implies that "reality and identity are reflexive." One implies the other. What we perceive is how we view ourselves and what we believe is true. This is a very powerful statement. It means that our reality is based on what we know, which is derived from our perceptions, which are based on our reality. This is a bit tough to swallow, but stay with me a bit longer. The whole point is that our reality defines us and influences our actions. In order for us to get a better understanding of our actions and their consequences, and to make additional evolutionary leaps, we need to step outside of our reality and view our knowledge and actions from that vantage point. I wonder whether traveling across realities is simply an evolutionary step where we can let go of our reality and understand the possibility of another. Alright, this last sentence is something that belongs in a sci-fi book rather than a blog on technology.
OK, this blog is about technology, not philosophy, psychology, or even mathematical models of terrorism. I am still working on how to tie this to technology. It's doable, but a bit theoretical, so I'll leave it for future entries.
Tuesday, June 20, 2006
Scene 1
Scene 1:
The following conversation took place between two co-workers over an instant messaging product called Sparc. The setting is a corporate office with many cubicles. The two co-workers have been asked to design an enterprise, grid-enabled architecture. They received a single 8 by 11 Visio diagram of the architecture. They were also directed to leverage FpML (Financial products Markup Language) as the messaging protocol between enterprise systems.
Ben is an existentialist with a bent toward fatalism. Mike is generally an optimist, unless Ben gets too fatalistic.
[4:45 PM] Ben: Boss wants to discuss FpMl
[4:52 PM] Mike: As long as we don't have to use it internally ....
[5:00 PM] Ben: I love it when the architecture is dictated from above. It makes designing so much easier.
[5:04 PM] Mike: And simpler, too. The whole system fits into a picture with a few boxes in it
[5:05 PM] Ben: it's pretty cool.
[5:05 PM] Ben: What's wrong with using FpML internally? You suck!
[5:06 PM] Ben: It's a nice intermediary format that allows system to communicate in a well defined language. Honestly Mike, what kind of an architect do you call yourself?
[5:06 PM] Mike: Hey, we've got enough on our plate writing our own database, our own operating system and our own programming language. I just don't need to have to use FpML as well as all that. I think it could jeopardize the entire project!
[5:08 PM] Ben: come on, that's crap. We can introduce this conversion in our custom db level, or even add it natively into our custom language.
[5:08 PM] Ben: think about it seamless integration with FpML, beautiful. Too bad FpML only covers derivatives, haha
[5:09 PM] Ben: I am sure we can work through that. We just need to work with the FpML working group to add a few parts to their spec.
[5:10 PM] Ben: 10-4?
[5:11 PM] Mike: Yeah, OK, it's taking me a while to think of a witty reply. 10-4
[5:11 PM] Ben: sorry to rush you, take your time. I just thought you were ignoring me because you are working.
[5:13 PM] Mike: Does FpML support images? In case we need to attach screenshots when we're reporting errors in market data?
[5:14 PM] Ben: Not yet, but I think we should bring this up when we discuss with them about expanding their specification to support other products.
[5:15 PM] Mike: I think whatever solution we go with, it's vital that we can scavenge unused cycles from people's mobile phones
[5:15 PM] Ben: and PDA's
[5:16 PM] Mike: and pacemakers
[5:16 PM] Ben: and watches
[5:16 PM] Mike: and elevators
[5:16 PM] Ben: maybe we can work something out where we can use the employee’s home machines.
[5:17 PM] Mike: Or people with Bluetooth devices in their briefcases as they wander past the building
[5:17 PM] Ben: good thinking, what about fax machines?
[5:17 PM] Mike: I wouldn't like to see any proposal signed off until we've really considered all these factors
[5:18 PM] Ben: I am glad at least you and me are on the same page.
[5:18 PM] Ben: We need to write up a document, someone will sign off, and then we can proceed with the development.
[5:19 PM] Ben: I think this conversation is sufficient as design.
[5:19 PM] Mike: Especially once we've deleted the Sparc logs
[5:19 PM] Ben: man, if someone has a sniffer, we're doomed.
[5:20 PM] Mike: That would be sad - especially as you introduced this product into CompanyX
[5:21 PM] Ben:
[5:21 PM] Mike: Do I have to 10-4 the smileys ?
[5:21 PM] Ben: no worries, I believe encryption is on.
[5:22 PM] Ben: no, don't worry about the smileys
[5:24 PM] Mike: I don't think we should restrict ourselves to FpML, either. We should have a meta-markup framework where we can just plug-in any standard that comes along - in case we need to support fPml or FPml or fPmL later on
[5:24 PM] Ben: I love it. Consider it added to the spec.
[5:24 PM] Mike: The spec which I hope is written in specML ?
[5:25 PM] Ben: should we look into whether we can leverage specML along side FpML as the messaging protocol?
[5:25 PM] Mike: Absolutely
[5:26 PM] Ben: http://www.mozilla.org/rhino/
[5:26 PM] Ben: I think we should use rhino tool to build out entire framework
[5:26 PM] Ben: think about it, we can release partial code, no reason to compile.
[5:27 PM] Mike: I've used Rhino before (indirectly) - it's built into JWebUnit
[5:27 PM] Ben: so, what do you think of using it as our core language?
[5:30 PM] Mike: Might be a bit low-level. I want something high-level that maps 4 boxes on a diagram into a fully built-out, productionised, resilient, performant, scaleable, internationalised system that runs on everything from a supercomputer to a Beowulf cluster to a digital watch.
[5:32 PM] Ben: do you have a copy of our entire conversation; I think it would make a wonderful blog entry.
Saturday, June 10, 2006
Commodity, Speed of Light, EAB
This post was actually going to be about technology as a commodity. I actually even wrote part of it. I had all kinds of things in there: a definition of the word commodity from Wikipedia, a reference to Karl Marx, processing grids, service grids, data grids, etc. It was going to be a pretty good post before I erased it.
What the hell? Well, I had no point; I was just writing the obvious. Let me start over. Any enterprise architecture is going to be distributed, but that's not enough. Systems need to communicate and share data. Some systems may provide a service to other systems. Some systems may be in charge of routing messages. Other systems may be in charge of doing calculations, and others provide auxiliary services like calendars or caching. The point is that a whole lot of systems are going to be communicating. In fact, in some cases, that communication will be very heavy and may become a liability: an Enterprise Architecture Bottleneck (EAB). You gotta love acronyms. They make everything sound so much more impressive.
In order to reduce EAB, your system will need to reduce the amount of data being transferred, figure out a faster transfer method, go faster than the speed of light, or all of the above. For the sake of simplicity, let's assume the last point is currently not feasible. For the second item, you can buy a bigger pipe, but you are still stuck with a certain latency. The cost to transfer a bit from NY to London will always be bound by the speed of light. So, can the system reduce the amount transferred? I think it's possible if the system is aware of the data patterns and can remove any unnecessary or redundant information. For example, let's say you and I are carrying on a conversation. Certain things are obvious, other things can be deduced by you without me saying anything. For other things, I may only need to say a little for you to understand much more, and in other cases, you may already know certain things because I've already mentioned them. What does this all mean? The sending system will need to analyze the data stream and learn to reduce the load. Of course, this assumes that the sender and receiver have agreed on some transfer protocol.
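To make the idea concrete, here is a minimal sketch (not a real protocol design) of a sender that remembers what it has already told the receiver and ships only the fields that changed. The message format and field names are made up for illustration; the point is simply that shared context between sender and receiver shrinks what has to cross the wire.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch: the sender remembers what it already told the receiver and
// only transmits fields that changed; the receiver patches its last known copy.
// Both sides must agree on this "send only the diff" protocol up front.
public class DeltaChannel {

    private final Map<String, String> senderView = new HashMap<>();
    private final Map<String, String> receiverView = new HashMap<>();

    // Sender side: reduce a full message down to the fields that actually changed.
    public Map<String, String> encode(Map<String, String> fullMessage) {
        Map<String, String> delta = new HashMap<>();
        for (Map.Entry<String, String> e : fullMessage.entrySet()) {
            if (!e.getValue().equals(senderView.get(e.getKey()))) {
                delta.put(e.getKey(), e.getValue());
            }
        }
        senderView.putAll(delta);
        return delta;                      // this is all that crosses the wire
    }

    // Receiver side: apply the delta to the last known state.
    public Map<String, String> decode(Map<String, String> delta) {
        receiverView.putAll(delta);
        return new HashMap<>(receiverView);
    }

    public static void main(String[] args) {
        DeltaChannel channel = new DeltaChannel();
        Map<String, String> tick1 = Map.of("cusip", "123456789", "price", "101.25", "currency", "USD");
        Map<String, String> tick2 = Map.of("cusip", "123456789", "price", "101.30", "currency", "USD");
        System.out.println(channel.decode(channel.encode(tick1))); // full message the first time
        System.out.println(channel.encode(tick2));                 // only {price=101.30} the second time
    }
}
```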
Well, this post is still pretty obvious, but maybe a bit more interesting than another tirade on processing grids.
Thursday, June 08, 2006
Business Process Modeling
I was discussing a system process with a business client, and during the conversation it hit me that the client has no idea what the business process is. At some point during the development of the system, a business specification was written, perhaps even by this same person, but over time the system evolved and the process was forgotten. Now my business client has no idea what the system is actually doing. To him, the system is a black box, one big scary unknown. He has a general idea of what it does: it takes data from some place, does some filtering and calculation, populates a few tables, does some more calculations and filtering, and eventually generates some results. The business analyst is then charged with the verification of the results; the irony is that the business analyst isn't completely sure of how the number was generated. He has a certain level of trust, but over time that trust erodes and he is left feeling uneasy and unsure.
Later that day, I sat in a meeting where a discussion was taking place about a new business system. It was a usual meeting: many people spoke about many things, most of the conversations strayed far from the actual deliverable or even reality, nothing was agreed, a few people tried to cock fight each other, and, overall, the meeting reversed progress. A usual meeting. Anyway, during the meeting, specifically during one of the cock fights, one of the analysts spoke up and said something very profound and interesting (this was followed by a cock block, but that's a different story). The interesting statement was that he believed quality assurance testing should not end when the system goes to production but should be an ongoing part of the system. He believed it was important to have a stable validation layer in the system in order to provide basic sanity checks that the system is performing as expected against an endless parade of changing data. My teammates rose up in anger against him; some claimed he was a heretic, others threatened excommunication. I sat silent, listening and wondering.
Each system is basically a workflow. Once you remove some of the techy parts, you end up with a business process. In fact, at some point this system was a very clean Visio diagram. Each box was then blown up into a class diagram, and then some crack-smoking code monkey (developer) defecated all over it, and an enterprise system was born. This workflow is then overlaid with data. The workflow reacts differently to different pieces of data, but it's still functionally a flow, actually more of a graph. The graph is a mix of generalized technical aspects and business logic. The problem these days is that the business logic is sprinkled all over the system, making it very hard to re-create exactly what happened.
So, I wonder if it would be possible to overlay an actual system with a meta-system. Would it be possible to create a set of, let's say, annotations to add alongside the code, and possibly some additional hooks, to allow another system to walk the code, generate the graph, and overlay the graph with the business documentation sprinkled throughout the code? The end result could be a self-documenting system. No, I am not talking about javadoc or an external specification. I am talking about a tool for the business user to verify what a given system is doing. Because the documentation and the code are living side by side, perhaps are even the same thing, the business user can be confident in what they are seeing.
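For what it's worth, here is a rough sketch of what I have in mind, using plain Java annotations. The annotation name, the example steps, and the reflective walker are all invented for illustration; a real meta-system would need much richer hooks.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical annotation: each business-relevant step documents itself in plain English.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface BusinessStep {
    String description();
    int order();
}

class PricingRun {
    @BusinessStep(order = 1, description = "Load overnight market data for all books")
    void loadMarketData() { /* ... */ }

    @BusinessStep(order = 2, description = "Filter out stale or suspect prices")
    void filterPrices() { /* ... */ }

    @BusinessStep(order = 3, description = "Aggregate positions and compute P&L")
    void computePnl() { /* ... */ }
}

// A trivial "meta-system" that walks the annotated code and prints the business process.
public class ProcessReport {
    public static void main(String[] args) {
        Arrays.stream(PricingRun.class.getDeclaredMethods())
            .filter(m -> m.isAnnotationPresent(BusinessStep.class))
            .sorted(Comparator.comparingInt((Method m) -> m.getAnnotation(BusinessStep.class).order()))
            .forEach(m -> {
                BusinessStep step = m.getAnnotation(BusinessStep.class);
                System.out.println(step.order() + ". " + step.description() + "  (" + m.getName() + ")");
            });
    }
}
```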
The second part is that a lot of data-centric systems live and die by the data they are receiving. Garbage in, garbage out, they say. Well, I am not quite sure this statement needs to be true. After a long, deep thought, I agreed with the business analyst and took a stand to support him. I think he is right: QA should not end once the system is in production. Each system should be built to be in a constant state of testing itself. The point isn't to test the code, the point is to test the data. The data is the most important thing. As developers and architects we treat data as a second-class citizen. What comes into the system should be checked, what happens in the system should be checked, and what comes out of the system should be checked. It would help if the checks were a form of hypothesis test. The analyst proposed having a parallel testing dataset. He figured that a constant check against a known baseline may provide a basic sanity check, or at least raise some red flags if the data is too far from the norm. Of course, this type of test is context specific, but I think the basic principle has value. Data isn't just data; it's the most important thing. When the business analyst receives the end result, and the end result is wrong, the analyst spends hours trying to narrow down what went wrong. Sometimes the problem is the inputs, sometimes the problem is the business logic, and other times he just doesn't know.
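As a toy illustration of the analyst's idea, here is a sketch of a check that compares each incoming value against its own recent history and raises a red flag when something lands too far from the norm. The window size, threshold, and field values are arbitrary; any real system would need context-specific tests.

```java
// Minimal sketch of "QA that never stops": every inbound value is checked against
// the recent history of that field, and anything too many standard deviations away
// is flagged for the analyst instead of silently flowing downstream.
import java.util.ArrayDeque;
import java.util.Deque;

public class RunningSanityCheck {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private final double maxZScore;

    public RunningSanityCheck(int windowSize, double maxZScore) {
        this.windowSize = windowSize;
        this.maxZScore = maxZScore;
    }

    /** Returns true if the value looks sane relative to recent history. */
    public boolean accept(double value) {
        boolean ok = true;
        if (window.size() >= windowSize) {
            double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(value);
            double var = window.stream().mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0.0);
            double std = Math.sqrt(var);
            ok = std == 0.0 ? value == mean : Math.abs(value - mean) / std <= maxZScore;
        }
        window.addLast(value);
        if (window.size() > windowSize) window.removeFirst();
        return ok;
    }

    public static void main(String[] args) {
        RunningSanityCheck check = new RunningSanityCheck(5, 3.0);
        double[] prices = {101.2, 101.4, 101.1, 101.3, 101.2, 101.5, 845.0};
        for (double p : prices) {
            if (!check.accept(p)) System.out.println("Red flag: price " + p + " is far from the recent norm");
        }
    }
}
```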
I wanted to get this post out, but overall I am still thinking through a lot of these concepts. I think there is something conceptually there, but it's a bit foggy.
Tuesday, May 23, 2006
Robo Blogger
I recently had a discussion with an associate of mine, let's call him Philip. I made a wager with Philip. I bet him that if I wrote a robo-blogger, his blog would be more popular than mine. The robo-blogger is going to subscribe to all the popular blogs. He will summarize the blogs and then comment on them. The comments will range from outright bashing to a more supportive tone. The bashing will also occur in different regional dialects, such as Australian. For example, "this blog is just piss-farting around" (translation: this blog is just wasting time). Another example: "this program is so cactus" (translation: this program is just wrong). For more: http://en.wikipedia.org/wiki/Australian_words
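Just to show how little machinery this needs, here is a toy sketch of the robo-blogger's comment generator. The phrase lists are placeholders and the feed handling is waved away; none of this is the actual design.

```java
// A toy sketch of the robo-blogger's comment generator: take the title of the
// latest post from a subscribed blog and fire back a canned reply in a random tone.
import java.util.List;
import java.util.Random;

public class RoboBlogger {
    private static final List<String> BASHING = List.of(
            "This blog is just piss-farting around.",
            "Mate, this program is so cactus.");
    private static final List<String> SUPPORTIVE = List.of(
            "Interesting point, I hadn't thought of it that way.",
            "Good write-up, looking forward to the follow-up post.");

    public static String commentOn(String postTitle, Random rng) {
        List<String> tone = rng.nextBoolean() ? BASHING : SUPPORTIVE;
        return "Re: \"" + postTitle + "\" - " + tone.get(rng.nextInt(tone.size()));
    }

    public static void main(String[] args) {
        // In the real thing this title would come from an RSS subscription.
        System.out.println(commentOn("Why my framework beats yours", new Random()));
    }
}
```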
I believe the only way I can lose this bet is if I don't build it. Watch out, Philip, YOU ARE GOING DOWN!!!!
Sunday, April 23, 2006
Ontology
Ontology is "...a systematic arrangement of all of the important categories of objects or concepts which exist in some field of discourse, showing the relations between them. When complete, an ontology is a categorization of all of the concepts in some field of knowledge, including the objects and all of the properties, relations, and functions needed to define the objects and specify their actions."
So, an ontology is a language to represent all the objects in your field, along with their properties, and relationships. For example, a trade contains an instrument. A trade contains a price. An option instrument is a type of instrument. An option instrument contains an underlying instrument. An index instrument is a type of instrument. An index instrument consists of sub instruments. etc... I am thinking that in theory, you could represent an entire industry or a body of knowledge using an ontology. In theory, once you have a representational space, and given a new instance of an object, you could apply the object to the ontology; find your place, and then be able to walk the ontology to figure out all the possible causes and effects.
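As a rough sketch, the trade/instrument example above might look something like this using the Jena toolkit linked below. The namespace, class, and property names are mine, and the API calls are written from memory, so treat it as illustrative rather than gospel.

```java
// Rough sketch of the trade/instrument ontology using the Jena ontology API.
// Namespace and names are invented; API usage is from memory and may need adjusting.
import com.hp.hpl.jena.ontology.ObjectProperty;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class TradeOntology {
    public static void main(String[] args) {
        String ns = "http://example.com/trading#";
        OntModel m = ModelFactory.createOntologyModel();

        OntClass trade = m.createClass(ns + "Trade");
        OntClass instrument = m.createClass(ns + "Instrument");
        OntClass option = m.createClass(ns + "OptionInstrument");
        OntClass index = m.createClass(ns + "IndexInstrument");
        option.addSuperClass(instrument);           // an option is a type of instrument
        index.addSuperClass(instrument);            // so is an index

        ObjectProperty contains = m.createObjectProperty(ns + "containsInstrument");
        contains.addDomain(trade);
        contains.addRange(instrument);              // a trade contains an instrument

        ObjectProperty underlying = m.createObjectProperty(ns + "hasUnderlying");
        underlying.addDomain(option);
        underlying.addRange(instrument);            // an option points at its underlying

        m.write(System.out, "N3");                  // dump the ontology
    }
}
```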
There has been some work done in this area. The W3C came out with a language specification, OWL. Also, see the links below for more references.
Links
http://jena.sourceforge.net/
http://sweetrules.projects.semwebcentral.org
http://protege.stanford.edu/
Saturday, April 01, 2006
Friday, March 31, 2006
Metaverse continued ...
I've recently come upon this site: http://www.youos.com/
and this site: http://www.ajaxwrite.com/
and I've found these to be absolutely exciting. YouOS is absolutely great. They have redefined the concept of an operating system.
I think what needs to happen is for applications to break out of the browser. YouOS and AjaxWrite are the beginning of this. What is the point of the browser? It is a very heavy and limiting medium. What if you created a browser wrapper, something that lives neither here nor there? With Ajax taking over, it is possible to move a piece of a website to your desktop and work with that chunk as if it were a real application running on your PC. The whole browser is only confusing the matter. It's really not needed. A browser is nothing but an interpreter for an interpreted language. The only difference is that, right now, the browser also imposes a look and feel and constrains the interaction between the user and the service.
Imagine a powerful application like AjaxWrite. Now remove the browser, and create a link on your desktop that, instead of opening a browser, just runs the app. The app decides the look and feel, etc. I know there are security concerns, but for the sake of progress I would rather ignore them for the moment. Now you have a part of your desktop running a web-based application. The whole thing is running on some other server, and you are simply interacting with it. The difference is that you have a seamless integration with your environment.
At the moment, there is such a clear separation between desktop and web. In my opinion, they really are the same thing. What is the difference between running something locally and running something remotely and only bringing back the display? Both systems react to user events; the only difference is that the desktop system is bound to your machine.
Sunday, March 26, 2006
Rules and Monads
Word of the day, monad: I believe Leibniz first applied it to science, or perhaps philosophy. In any case, the word is defined as "...an indivisible, impenetrable unit of substance viewed as the basic constituent element of physical reality..."
I am currently contemplating a system that will allow the creation and validity testing of atomic units, monads. Small units build up to larger units, to even larger units, and larger still. At the end, by testing each indivisible, impenetrable unit of substance, I can find out whether the sum of sums of sums of sums, adding up to an even larger and more complicated entity, is valid.
Well, solution one: write lots of lines of code and hope to God that nothing changes in your infinite number of if-then-else statements, nested ever so many layers deep across ever so many classes, in your lovely object-oriented, patterned-out piece of shit.
Solution two requires a bit of imagination, a workflow, and rules on monads. Let's assume that you've built some wonderful GUI that allows you to create and modify rules.
Now, each monad can actually be represented in a two-dimensional space. Each monad is for a specific type of data: stock price, livestock, livestock weight, insurance policy, etc. Let's say that's the X-axis. Each monad is also for a given instance of that data. For example, a price is uniquely identified by CUSIP/date, each cow is uniquely identified by its ID, each transaction has a transaction ID, and so on. We can call this the Y-axis. So, given the X and the Y, we can uniquely identify every monad.
At this point, we can write a rule that tests that monad. So, we can test every indivisible unit, and then we can test the summation of these units using other rules that perhaps only group the indivisible monad rules. At the end, all this adds up to a single high-level rule that tells you whether your data is valid or not. Along the way, your rules may change data or take you down a different path, but that's another blog entry.
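Here is a stripped-down sketch of the flavor I am describing: leaf rules bound to a data type (the X-axis), evaluated against uniquely identified instances (the Y-axis), and composite rules that do nothing but group other rules. The types, names, and the single-instance simplification are all invented for illustration.

```java
// Minimal sketch of the monad/rule idea: every leaf rule is tied to a data type,
// is evaluated against one uniquely identified instance of that data, and
// composite rules simply AND their children together until a single top-level
// rule says whether the data is valid.
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class MonadRules {

    interface Rule {
        boolean validate(Map<String, Object> instance);   // one uniquely identified monad
    }

    /** A leaf rule bound to a single data type, e.g. "price". */
    record LeafRule(String dataType, Predicate<Map<String, Object>> check) implements Rule {
        public boolean validate(Map<String, Object> instance) {
            return dataType.equals(instance.get("type")) && check.test(instance);
        }
    }

    /** A composite rule that only groups other rules. */
    record AllOf(List<Rule> children) implements Rule {
        public boolean validate(Map<String, Object> instance) {
            return children.stream().allMatch(r -> r.validate(instance));
        }
    }

    public static void main(String[] args) {
        Rule positivePrice = new LeafRule("price", i -> (Double) i.get("value") > 0);
        Rule hasCusip = new LeafRule("price", i -> i.get("cusip") != null);
        Rule priceIsValid = new AllOf(List.of(positivePrice, hasCusip));

        Map<String, Object> monad = Map.of("type", "price", "cusip", "123456789",
                                           "date", "2006-03-26", "value", 101.25);
        System.out.println("valid = " + priceIsValid.validate(monad));
    }
}
```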
It's actually extremely elegant. I apologize if my description skewed the beauty or confused the matter. The more I think about this, the more I believe in it. Now you may be wondering: am I not just describing an expert system built with a rule engine? Somewhat true; in fact, I am describing an oversimplified rule engine. The big difference is that instead of allowing each rule to be a self-contained entity, this model assigns a rule to a particular type, and the rule outcome to the type and instance of the data object. So, instead of having a bunch of free-for-all rules, you have a controlled rule structure and controlled firing. This is one thing that I found to be very brittle in rule engines: the writer of the rules does not know when the rules will fire and has to be very careful in writing the rule filter. Now, some will argue that that's the whole point. Well, try selling a non-deterministic system to your business manager, who is wary of the whole thing to begin with.
Anyways, as you can see I am a big fan of externalizing the ever changing business rules from the underlying system framework. This is an interesting idea, and really should be developed further.
Sunday, February 19, 2006
Cromwell's rule
Cromwell's rule is derived from a letter Oliver Cromwell wrote to the synod of the Church of Scotland on August 5th, 1650 in which he said "I beseech you, in the bowels of Christ, consider it possible that you are mistaken." The rule is, very simply, do not take anything for absolute; totality is wrong.
I am not Christian, in fact, I am very much Jewish, and certain parts of my anatomy will certainly show that, but I have a very great respect for that statement. The world is entering a very dangerous period. Everyone, and I mean EVERYONE, Jewish, Christian, Muslim, Buddhist, Zoroastrian, must consider the possibility that they are mistaken.
Very recently a small Danish newspaper published a set of cartoons making fun of Mohammed. They did it to show that the media is censoring itself on criticizing Mohammed. So, what happened? Well, the Muslim world went berserk. They burned consulates, killed people, robbed banks (oh yes, robbed banks), marched, protested, burned flags, fired Uzis in the air, chanted death to Israel, etc. An Iranian newspaper even created a competition for the best Holocaust cartoon. Undoubtedly, numerous entries were submitted.
Everyone considers their position to be superior. No side understands or cares about the other. One side will tell you that the other does not understand or care. The other side will reverse the statement and tell you the same. Don't believe me? Here is an example. The greatest pain a person can feel is the passing of a loved one. That is the greatest fear and the greatest pain. A part of you dies as well. The more you love someone, the more you will understand this statement. On the other hand, one hundred Palestinian children being killed by a stray bomb will produce only a quickly passing feeling of regret for their unfortunate passing.
Every culture, at every point in history, that believed in totality brought about the greatest suffering up to that point in history. Consider Hitler, Stalin, Mussolini, Ferdinand and Isabella of Castile, Lenin, Mao, the Christian Crusades, and so forth. This could be a very long list, but a few of these names should prove my point.
Someone has to be correct right? Maybe. This question seems extremely complicated. I don't know.
What I do know is that any culture or any person that believes something in such an absolute is dangerous and no matter how well meaning is horribly wrong. Nothing is simple and clear cut. Everything has multiple sides, and therefore any totality is always wrong no matter how well meaning.
I think Queen Margrethe II of Denmark said it best:
"There is, as said, something moving about people, whom to this degree surrender to a faith. However there is also something frightening about such a totality, which also is a side of Islam. There must be shown counter-play [interplay of an alternative / sparring], and once in a while you have to run the risk of getting a less flattering label stuck upon you. Because there is certain things before which one should not be tolerant."
Sunday, February 12, 2006
Frustration
I am extremely frustrated with the keyboard and mouse, specifically the mouse. I find the mouse to be a very outdated device that simply doesn't keep up with me. Good hackers don't even use the mouse. A lot of times, I find myself simply memorizing all the possible shortcuts and dealing only with the keyboard. The reason is exactly that: shortcuts. It's faster to type out some cryptic combination like holding a key down while pressing a set of other keys than to navigate the mouse to a menu and perform a few clicks. I think the problem is that the mouse is not an extension of the hand. It's unnatural. The eyes and brain move infinitely faster. Perhaps it would be different if the hand was the mouse. It takes such a long time to move the mouse from one side of the screen to the other. The mouse is made slow on purpose because it is not a very precise device. Imagine how hard it would be to navigate if the pointer flew across the screen. The human hand and fingers are a lot more precise. In theory, I think the hand acting as a mouse could move much faster. There is also the possibility of interacting more naturally with the computer if your hands can be involved. For example, physically taking and moving windows rather than performing some sort of click-and-hold motion. There are also 2 hands and 10 fingers, providing the potential for even fuller interaction. There is no reason that the mouse has to be a single item. What about 10 different mice? Or perhaps there is no mouse at all, and your hands can physically interact with the computer in a virtual plane.
Perhaps part of my frustration stems from the fact that it takes a very long time to do anything on the computer. Imagine building a simple web-based program. There are a couple of screens, let's say 5 queries, some fancy interaction such as Ajax, perhaps some column sorting, expanding, maybe some data pop-ups, some drill-downs. A little program like this will probably take one person a few days, maybe even a week, to fully build out and test. That's extremely slow. The system might not even be what the business wants, causing further delays. There has to be a faster way. There is nothing even remotely complicated or interesting about these little programs. They are simply a test of how fast one can type and how easily one can define the architecture to minimize typing in the first place. We are developing stupid little programs at such a low level. It is extremely inefficient, not to mention boring.
The solution so far has been to build tools on top of tools. IDEs are attempting to simplify certain actions, and visual development tools are attempting to convert languages into workflow models. IDEs are not replacing development; they are simply making it faster. Workflow-based languages are creating another layer of complexity. Instead of writing an if statement, you move a picture of an if statement onto a plane and then configure it by filling out a form. This is an interesting idea, and it has the capacity to produce very powerful systems rather quickly, but I am not a true believer. I don't have a solution, I am just really frustrated.
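To make that layer of complexity concrete, here is a rough sketch in Python. The node format below is entirely made up by me for illustration; it is not modeled on any actual workflow product. It shows the same branching logic written directly and then as the kind of node-plus-configuration-form structure a visual tool asks you to build:

# The logic we actually want: one line of ordinary code.
def approve_directly(order_total):
    if order_total > 1000:
        return "needs manager approval"
    return "auto-approved"

# The same logic the way a hypothetical visual workflow tool might store it:
# a node you drop onto a plane, configured by filling out a form.
workflow = [
    {
        "node": "decision",                 # the picture of an if statement
        "form": {                           # the configuration form behind it
            "input": "order_total",
            "operator": "greater_than",
            "value": 1000,
            "true_branch": "needs manager approval",
            "false_branch": "auto-approved",
        },
    }
]

def run_workflow(nodes, context):
    """Tiny interpreter for the made-up node format above."""
    ops = {"greater_than": lambda a, b: a > b}
    for node in nodes:
        if node["node"] == "decision":
            form = node["form"]
            test = ops[form["operator"]](context[form["input"]], form["value"])
            return form["true_branch"] if test else form["false_branch"]

if __name__ == "__main__":
    print(approve_directly(1500))                         # needs manager approval
    print(run_workflow(workflow, {"order_total": 1500}))  # same answer, more machinery

Twenty-odd lines of machinery to recover what one line of ordinary code already said, which is roughly my complaint.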
Thursday, January 26, 2006
Soft Asset Management
Each organization struggles to harness and manage 3 simple assets: ideas, prior art, and knowledge.
Ideas
The rank and file engineers have a lot of very good ideas that never see the light of day. The reason for this tends to be a layer of dead managerial weight separating any developer from any person capable of and willing to make a decision. The middle management is also extremely terrified of anything that's perceived as rocking the boat or anything that causes them to stick out (such as championing an idea). The problem is that a lot of these people are simply afraid of showing how incompetent they really are. What needs to happen is a way to send ideas directly to senior management or some sort of governing committee that can review the ideas and has the power to fund them. I've seen one organization implement this very well. They set up a committee consisting of senior management to review all incoming ideas. An idea is emailed directly to your regional director. (There is one regional director per continent.) The director then takes your idea to the committee for review. At the end of the year, the committee chooses a handful of ideas to pursue, and the people behind them receive awards such as trips to Paris, trophies, envelopes, and recognition at a large assembly that you get to speak in front of.
Prior Art
"Prior Art" is a term used in patent law to describe inventions that someone has already patented. Each organization produces a lot of innovation: software, libraries, fixes, procedures, etc... The ironic thing is that each group in the organization tends to develop its own solution to a problem probably faced by every group. In a couple of places, I've seen organizations attempt to avoid some duplication, but never very successfully. At the core, the problem is that each group has no idea what the other is doing. Additionally, there is rarely a single architectural vision or a way to search for existing solutions. Basically, the proposal is to setup an internal open-source community. The community would know of all projects going on in the organization, and would manage certain projects submitted to it such as general projects like scheduling systems, or monitoring libraries. Everyone should be aware of what the other is doing. If someone in the organization is working on a security model, and I am about to start writing my own password management system, well, hopefully I know about the other initiative.
Knowledge
Each engineer accumulates a wealth of knowledge. The most relevant and long-lasting learning comes from your peers, not from a 2-day intensive class. Although it is great to miss 2 days of work and get a catered lunch. What organizations struggle to do is share individually acquired knowledge. In fact, a number of organizations, because of politics, discourage the practice or do it with a heavy managerial hand. For example, in one company I've worked for, the management decided that the way to share knowledge was to hold bi-weekly developer meetings. Their solution was that each manager or director would bring some of their people to the meeting. In some cases, the directors brought only managers. The end result was that the meeting consisted of 30% developers and 70% managers/directors. The funny thing is that the topics started technical and quickly moved to managerial matters such as billing. I spoke up in one of these meetings. I believe I said something to the effect that this was a mockery, and that if both my manager and my director were not sitting next to me, I wouldn't be here. Well, I am no longer with that particular company. Some ways to promote knowledge sharing are to get developers talking and debating. For example, the organization can create a developer community and hold weekly meetings. During each meeting, one of the developers can present something useful and interesting to discuss, such as programming patterns, algorithms, or standards. Additionally, within a team, discussion should be encouraged. Team leaders should promote a culture of learning and challenge, such as emailing logic problems, where the only reward is the recognition that the person is better than everyone else. The organization should also implement code review and design review. The reviews will enforce a level of competency which is sometimes missing during coding and design.
Monday, January 16, 2006
Metaverse
The word meta in Greek means "about" or "beyond", and in English is used as a prefix to "... indicate a concept which is an abstraction from another concept." The word verse is "... a single metrical line in a poetic composition; one line of poetry" (though in metaverse the "verse" is usually taken to be short for universe).
The word metaverse was coined by Neal Stephenson in his book Snow Crash in 1992. Neal uses the word to describe a shared virtual world that people connect into and physically participate in. People physically jack in. The world is part virtual reality, part internet, part story.
At the moment there are a few games that are starting to broach this subject of the metaverse. Games like EverQuest, for example, bring all players together into a single universe. People now hear of things like weddings occurring in cyberspace or real estate being sold for real money. There is also a commodity market developing for game items such as magical weapons. Virtual items are being sold and bought with real money. Of course, one can argue that money is as real as the magical cloak that can make the owner appear or disappear or protect the wearer from all sorts of attacks.
The major difference between the cyberworlds these games create and the metaverse is that the metaverse is not a game but real life. It has stores and restaurants, bars and clubs. It has sidewalks, bus lines, and a very sophisticated police infrastructure. There is a class system and a stable real estate market. To get there, we need a basic virtual reality system and a simple meta-language like HTML. It should be relatively simple to create stores, shelves, and chairs; the humans will act as humans. The imaging can be performed locally, with only the meta-language being passed around. You should be able to walk into a store on the internet and have the same or better experience as walking into a real store.
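To make the idea a little more concrete, here is a rough sketch in Python of the kind of description a store's server might pass around. The scene format is one I am inventing on the spot, not any real standard; the only point is that the wire carries a description of the scene, not the pixels, the same way HTML carries a description of a page:

# Hypothetical scene description: the only thing sent over the wire.
# The local client owns the geometry, textures, and rendering; the server
# just says what exists and where.
store_scene = {
    "place": "corner bookstore",
    "objects": [
        {"type": "shelf",   "position": (0, 0), "contains": ["novels", "history"]},
        {"type": "shelf",   "position": (2, 0), "contains": ["travel"]},
        {"type": "counter", "position": (5, 3)},
        {"type": "door",    "position": (5, 6), "leads_to": "sidewalk"},
    ],
}

def render_locally(scene):
    """Stand-in for the client-side renderer: turn the description into output."""
    print(f"Entering: {scene['place']}")
    for obj in scene["objects"]:
        extras = {k: v for k, v in obj.items() if k not in ("type", "position")}
        print(f"  {obj['type']} at {obj['position']} {extras or ''}")

if __name__ == "__main__":
    render_locally(store_scene)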
Check this out: http://sketchup.google.com/
Wednesday, January 11, 2006
Patent Madness
I am currently in the process of trying to patent an idea. The idea is relatively simple, but this post is not about that. Instead, it is about patent law. In my search, I got the opportunity to speak with a highly paid patent lawyer ($460 per hour) from an established firm. He gave me a breakdown of what is required to patent an idea.
For technology-related ideas there are 2 types of patents: a full-blown utility patent and a provisional patent. The provisional patent is good for a year, but requires that you file for a full patent within that year. It's relatively cheap and doesn't go through the rigor of a full patent. In fact, there is no rigor; whatever you file is accepted. The problem, as explained by the lawyer, is that if there is a dispute within that year, the language of the provisional application is closely scrutinized. Basically, if it's written by a layman (me), it won't stand up in court and is therefore useless. The only advantage is a marketing gimmick, because it lets you say "patent pending" for the duration of that year, and if it does stand up in court, your patent runs for 21 years rather than 20.
To file a full patent with this law firm, you need to do the following:
$400 = initial 1-hour consultation with a junior lawyer and some feedback from a senior partner
$1,500-$2,000 = a professional search to see if your idea is already patented
$5,000-$10,000 = writing up the application; the price varies with complexity
+ the cost of filing for a small entity (adds up to a couple of hundred dollars)
He also mentioned that almost everyone gets rejected on their first submission. Your patent lawyer and the US Patent Office then negotiate how broad your patent should be.
At the end of the day, you can have a patent for just under $20,000. The industry average, as explained by another lawyer, is $15,000 to get a patent. The cost fluctuates between $10,000 and $20,000 depending on who you get as the US patent representative and how general your patent is. The expensive lawyer mentioned that the Amazon single-click patent was very expensive. Another lawyer added that an international patent costs even more than a US one; the filing fee alone is $4,000.
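Just to sanity-check those numbers, here is the quoted fee list added up in a few lines of Python. The filing fee is my own rounding of "a couple of hundred", and the post-rejection negotiation is not itemized, so the gap between this total and the $15,000-$20,000 figures is presumably where it goes:

# Rough totals using only the figures quoted above (all USD).
# The $300 filing fee approximates "a couple of hundred"; prosecution after
# the near-certain first rejection is not itemized in the lawyer's quote.
low  = 400 + 1500 + 5000 + 300    # cheapest path through the list
high = 400 + 2000 + 10000 + 300   # most expensive itemized path

print(f"Itemized estimate: ${low:,} - ${high:,}")   # Itemized estimate: $7,200 - $12,700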
US Patent fees (Most of us are small entities)
World Intellectual Property Organization
The patent process also takes, on average, about 2 years.
If you do it yourself, you may get a patent for much less, probably around $2-3k, but without the lawyer babble, it's not worth the paper it's printed on. Basically, as the highly paid lawyer explained to me, patents are written by lawyers for lawyers. Patents are only useful in court. He also kindly explained to me that the patent process is not meant for the small guy. Even if you get a patent, you may not be able to afford to defend it. Another lawyer chimed in that if you do have a patent and a need to defend it, there are certain companies that will pay the defense fees for a share of the patent. This same lawyer also said that a patent is only necessary if you know how to make money from it: for example, licensing your idea, selling the patent, or protecting your idea so as to control the market. Patents are not required in a lot of cases; also, if you've already built the software and somehow released it into public knowledge, you do get some leniency in court during a patent dispute.