Autonomic Computing is a very popular buzzword at IBM. The term describes a system that has self-configuration, self-healing, self-optimization, and self-protection (the list comes from IBM).
A system that has all of these attributes would be remarkable:

- Self-configuration: the system does not require manual setup. Put it on a production machine and it automatically recognizes that it's in production and applies the appropriate production settings.
- Self-healing: in case of an error, the system identifies where the error originated, fixes the problem, and corrects side effects, for example by re-processing the affected business logic.
- Self-optimization: the system monitors and tunes itself. It might recognize that certain data structures tend to grow to specific sizes and initialize them at that size, rather than continuously performing costly resize operations (see the sketch below). It might also identify processing that can be done in parallel and automatically split the work.
- Self-protection: the system attempts to survive. If the production server is inadvertently stopped, the system migrates to a different server, re-configures itself, and continues. If the database fails, it switches to a different storage medium.
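To make the self-optimization idea concrete, here is a minimal Java sketch of a factory that observes how large its lists actually end up and pre-sizes new ones accordingly. All the names here (SelfTuningListFactory, recordFinalSize) are hypothetical; this is one way the idea might look, not an IBM API.

```java
import java.util.ArrayList;
import java.util.List;

public class SelfTuningListFactory {
    // Running totals used to compute the average observed final size.
    private long totalObservedSize = 0;
    private long observations = 0;

    /** Create a list pre-sized to the average size seen so far. */
    public <T> List<T> newList() {
        int capacity = observations == 0
                ? 10 // fall back to ArrayList's default initial capacity
                : (int) (totalObservedSize / observations);
        return new ArrayList<>(capacity);
    }

    /** Callers report how big the list ended up, so future lists start there. */
    public void recordFinalSize(List<?> list) {
        totalObservedSize += list.size();
        observations++;
    }
}
```

The tuning is invisible to callers: they ask for a list and report its final size, and the system quietly adjusts itself to avoid internal resize operations.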
Great, absolutely great; really hard to do. An interesting problem arises when building non-deterministic systems: they are very hard to test. More specifically, it is very hard to know exactly how a system will react in a scenario that hasn't been considered. For example, a system might rerun certain jobs numerous times, causing data corruption, or inadvertently switch servers, causing data fragmentation. The system might "fix" a data error by tweaking variables in a way that causes data problems without generating any errors. The bottom line is that for risk-critical systems, non-deterministic machines have the potential to cause more harm than good. This might be why the business community has been wary of AI-ish technologies.
I am a great believer in non-deterministic systems; I think there is great benefit to them. The problem is how to introduce them in a way that makes them more deterministic. One answer might be to build more complex systems. Another might be more descriptive languages: each function could carry attributes telling the system what may safely be done with it. If a function modifies data and is not idempotent, the system must not blindly re-run it, and so on (a sketch of this idea follows below). The system almost has to understand what its limits are and work within the given confines.
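As a rough illustration of the descriptive-attributes idea, here is a minimal Java sketch using an annotation to declare which functions the system may safely re-run during self-healing. The @Idempotent annotation, the Jobs class, and the retry helper are all hypothetical names, just one possible shape for this.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

/** Marks a function the system may safely re-run during self-healing. */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Idempotent {}

public class SelfHealingRunner {
    public static class Jobs {
        @Idempotent
        public void recalculateTotals() { /* pure re-computation, safe to retry */ }

        public void postPayment() { /* modifies data, must not be blindly re-run */ }
    }

    /** Re-run a job only if it declares itself idempotent. */
    public static void retry(Object target, String methodName) throws Exception {
        Method m = target.getClass().getMethod(methodName);
        if (m.isAnnotationPresent(Idempotent.class)) {
            m.invoke(target); // safe: running it again changes nothing
        } else {
            System.out.println(methodName + " is not idempotent; escalating instead of retrying");
        }
    }

    public static void main(String[] args) throws Exception {
        Jobs jobs = new Jobs();
        retry(jobs, "recalculateTotals"); // re-executed
        retry(jobs, "postPayment");       // skipped
    }
}
```

The point is that the healing logic never guesses: it only re-executes what the code explicitly declares to be safe, which pulls the non-deterministic behavior back within known limits.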