Tuesday, June 10, 2008

Technology and Intelligence

Nick Carr has an article in the Atlantic Monthly entitled "Is Google Making Us Stupid?" In addition to the points he made, here are a few comments:
  • Much of his argument is similar to Neil Postman's "Amusing Ourselves To Death" (one of my favorite books). Although Postman is dealing with print vs. video, it's a similar contrast.
  • I wonder if some of this parallels changes that come as we grow older. It seems to me that as we age, we generally accumulate a rich store of narrative fragments, frames, and models, and therefore are more likely to fit new knowledge into one of those frames than to thoughtfully consider the need to create a new frame. I realize that individual personality is probably a more significant factor, but I know that I'm not nearly as likely to work hard at following the nuances of a statement as I used to be. Of course, the sheer volume of info I skim each day only makes this worse (which is Nick's point).
  • Print knowledge does tend to be more monolithic...especially if it is recognized as fundamental to a field. In contrast, almost all of the Internet consists of fragmented information.
  • I wonder if this is yet another cultural factor that's degrading our ability to procure, define, design, build, and test large complex capabilities (e.g., in the defense arena). If so, this degradation could continue for years to come.
  • I wonder if Nick is perhaps over-reaching when he accuses Google of Taylorism. I see his point, and think it's valid, but just because you measure something doesn't mean that you've adopted a reductionist framework...or maybe I'm just very skeptical about how far Google can go with "build[ing] artificial intelligence."
  • I suspect that a broader cultural factor may be postmodernism's loss of belief in the possibility of organizing frameworks. Why try to discern a complex and nuanced perspective if it's all merely a reflection of past power structures?
  • And, I suspect that modernity's loss of belief in an organizing purpose may be a factor. Why try to gain deep understanding when "all is vanity?"
  • Finally, any speculation about the limits of the brain's plasticity (or lack thereof) must be taken as just that. It's unclear if we'll ever have a deep understanding of this area.
Regardless, Nick's always worth reading...I just wish he had mentioned Postman.

Is IA Becoming Mostly Complex?

Trend Micro's decision to stop seeking VB100 certification for its security product triggered these observations about IA in general:
  • Events that are part of repeatable cause-effect relationships can be anticipated. This implies that we can likely gather data, perform analysis, and make good decisions based on the results. In case you haven't recognized it, this is Knowable terrain (Cynefin taxonomy).
  • To the degree that threats and threat mitigation capabilities are Knowable, it seems reasonable that de facto (or even de jure) metrics could be created to assess both areas.
  • To the degree that threats and threat mitigation capabilities are Complex, it seems that traditional metrics are unlikely to be effective (though, as Dave Snowden has discussed, there are pattern-oriented approaches to traversing Complex space).
I wonder if Trend Micro has decided that the anti-malware domain has become largely Complex and that traditional measurement approaches are not only misleading, they're increasingly dangerous.

Or maybe I'm just projecting....

Miscellaneous Items

"Oriented"? Architectures

What's with "Oriented" architectures? Though lots of heat has been generated by discussions of SOA, REST, WOA, etc. over the past few years, I'm not sure I've heard anyone address the emergence of architectures that are more style than substance (as I explain below, I'm not using "style" or "substance" in the sense they normally take in this cliche).

And, the term "Oriented" seems to point to the ontological nature of an architecture, implying that our focus should be on analyzing this Knowable terrain (Cynefin taxonomy). If the terrain is more Complex than Knowable, perhaps we should call these architecture styles "Orienting" to emphasize the epistemological activity of probing a Complex space.

My initial reaction to the SOA/WOA/etc. swirling was that it indicated we were in the early stages of defining a new area of knowledge, and therefore were debating taxonomy (which is step #1 in creating formal knowledge...define the foundational distinctions). The latest summary I've seen on this topic asks if WOA is "An Acronym Too Far?"

On further reflection, it seems to me that perhaps traditional architectures are frameworks for creating capabilities in domains that are largely Knowable (Cynefin taxonomy). A frequently cited example is NOAA's NOSA.

"Oriented" architectures, on the other hand, seem to be more about creating capabilities in domains that have a significant Complex aspect (Cynefin taxonomy). Dave's observations about decision making in Complex domains would seem to point us toward tactics like the following:
  • Instead of static requirements, deploy capability creation attractors. These would include items that enable users to create their own capabilities (e.g., mash-up tools like JackBe, Yahoo Pipes, Popfly, etc.) and social linking tools (e.g., tagging, graphing, etc.) to catalyze coherent exploratory activities across an enterprise (see the toy sketch after this list).
  • Instead of well-defined processes, design clear boundaries for how these tools will be used. This area is less clear since the gating factors today are more related to a scarcity of attractors. However, as that changes, policy & governance capabilities that provide flexible boundaries will be required.
  • Perhaps most difficult of all, seed a culture that effectively balances the Complex & Known/Knowable aspects of all capability creation needs and contexts. This means that individuals and groups can quickly and instinctively understand where they need to take an "Orienting" approach, where they need to take an analytical approach, and how they move between the two.
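To make the "attractor" idea a bit more concrete, here's a toy sketch in the spirit of Yahoo Pipes-style mash-up tools (a minimal illustration; the feeds, titles, and tags are all hypothetical): the user composes fetch/filter/merge steps into a capability instead of waiting on a static requirements cycle.

    # A toy "pipe": users compose merge/filter steps over feeds themselves,
    # rather than requesting a new system feature. Feeds and tags are
    # hypothetical illustrations.
    feed_a = [{"title": "Outage report", "tags": ["ops", "network"]},
              {"title": "Quarterly recap", "tags": ["finance"]}]
    feed_b = [{"title": "Router patch", "tags": ["network", "security"]}]

    def merge(*feeds):
        for feed in feeds:
            yield from feed

    def tagged(items, tag):
        return (item for item in items if tag in item["tags"])

    # The "capability" is just a user-assembled composition of steps.
    for item in tagged(merge(feed_a, feed_b), "network"):
        print(item["title"])   # -> Outage report, Router patch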
Since this is all relatively new, especially for enterprises, significant challenges remain. For example (as Dave has noted), social media tools (at Internet scale) currently need more structure to become relevant to many Complex decision making contexts. And, most enterprises aren't large enough to effectively leverage many of these tools.

Most of all, almost no one within today's enterprises understands that "Oriented" architectures are qualitatively different. We have decades of experience in building traditional architectures, and that experience is built on top of centuries of experience with analytical (rational + empirical) frameworks.

For now, it may be that what's most needed is to (a) run lots of small/fast experiments to better understand how to effectively architect in an "Oriented" fashion, and (b) be very careful about creating "Oriented" capabilities in the image of a traditional architecture.

And, consider calling them "Orienting" architectures.

Sunday, June 8, 2008

User-Dominated "Technologies"

Gartner came out with a "Top 10 Disruptive Technologies" list. Although some of these were mostly technology (cloud computing, multicore and hybrid processors), I was struck (but not surprised) by how user-centric this list is.

As technology commoditizes and becomes broadly interoperable up to (and including) the application layer, most "technological" disruptions will occur above that layer...meaning that most technologists probably need to better understand how technology is coupled to cognition, sociology, anthropology, organizational behavior, etc.

An Accenture report that "95% of returned gadgets still work" does not bode well for the future...though I suppose that if user-centric technologies enable users to create & deploy more usable capabilities, users may eventually control significant portions of the technology-based value nets.

Bottom-up SOA

Here's an interesting take on building SOA from the bottom up.

And, a post (with references) on EA being a joke...an interesting conversation.

"Data" Dangers in SOA

Dennis Howlett has an interesting post on the pervasive nature of Excel errors.

The spreadsheet is the original killer app. It allows users to easily capture and automate small decision models for their contexts. Unfortunately, its ease of use means that errors can balloon into catastrophe before they are caught.

To the degree that SOA and Web 2.0 enable the same kind of user-governed modeling, the same risk emerges...but on a much larger scale since these models will by definition exist within a much larger modeling ecosystem (much of it "in the cloud") that is constantly churning.
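To make the failure mode concrete, here's a minimal sketch (entirely hypothetical cell names and numbers) of how one bad user-defined formula silently corrupts everything downstream of it, spreadsheet-style:

    # Toy model of user-built "spreadsheet" logic: each cell is a formula
    # over other cells. One wrong formula silently corrupts every
    # downstream result; nothing crashes, the numbers are just wrong.
    cells = {
        "unit_price": lambda c: 25.00,
        "units":      lambda c: 1200,
        # Bug: the author meant a 7% discount but typed 0.7, not 0.07.
        "discount":   lambda c: 0.7,
        "gross":      lambda c: c("unit_price") * c("units"),
        "net":        lambda c: c("gross") * (1 - c("discount")),
        "commission": lambda c: c("net") * 0.05,
    }

    def value(name):
        return cells[name](value)

    for name in ("gross", "net", "commission"):
        print(name, value(name))
    # net and commission come out roughly 3x too low, yet everything "works".

Scale that dependency graph up to a churning, cloud-hosted ecosystem of user models feeding user models, and the blast radius of one such typo grows accordingly.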

In such an environment, risk avoidance (the typical "build quality in" approach to SW) may not be feasible. Ongoing, robust, and agile risk management (i.e., defining risks, accepting risks, tracking risks, mitigating risks, etc.) may be the only way to obtain the promised agility and adaptability.
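As a minimal sketch of what "ongoing, robust" risk management might look like in code (all names and numbers are mine, purely illustrative):

    # Minimal risk register: risks are defined, scored, tracked, and
    # re-scored over time rather than prevented once up front.
    from dataclasses import dataclass, field

    @dataclass
    class Risk:
        description: str
        likelihood: float            # 0..1, current best estimate
        impact: float                # cost if realized
        status: str = "open"         # open | accepted | mitigated
        history: list = field(default_factory=list)

        def exposure(self):
            return self.likelihood * self.impact

        def update(self, note, likelihood=None):
            # Re-score as the surrounding model ecosystem churns.
            if likelihood is not None:
                self.likelihood = likelihood
            self.history.append(note)

    register = [Risk("Unreviewed mash-up feeds the pricing model", 0.3, 50_000)]
    register.sort(key=Risk.exposure, reverse=True)  # work highest exposure first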

Data vs. Function

A pervasive compare/contrast in computing is data vs. function. You see it in such fundamental areas as language types (procedural, object, functional) and the associated design processes.

In the Web arena, the conversation I usually think of when this comes up is REST/WOA vs. SOA. However, I'm increasingly thinking of event processing, prompted by a few recent items.
So much of the SOA conversation has focused on the "function" aspect since services potentially enable process agility and adaptability. However, the "data" aspect (events/rules) will be equally important (and probably more difficult) if the past is any indicator.
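As a sketch of what the "data" side looks like in code (a minimal event-condition-action loop; the domain, event shapes, and thresholds are hypothetical):

    # Minimal event-condition-action sketch: rules react to an event
    # stream instead of being invoked as service operations.
    rules = []  # (condition, action) pairs

    def when(condition):
        def register(action):
            rules.append((condition, action))
            return action
        return register

    @when(lambda e: e["type"] == "order" and e["amount"] > 10_000)
    def flag_large_order(e):
        print("review:", e)

    @when(lambda e: e["type"] == "inventory" and e["on_hand"] == 0)
    def reorder(e):
        print("reorder:", e["sku"])

    def dispatch(event):
        for condition, action in rules:
            if condition(event):
                action(event)

    dispatch({"type": "order", "amount": 25_000, "sku": "A1"})
    dispatch({"type": "inventory", "on_hand": 0, "sku": "A1"})

Note how the interesting logic lives in the rules and the event data, not in a process definition; that inversion is part of what makes the "data" half the harder one.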

Cloud Costs

We're still very early in defining computing clouds, and the landscape is changing rapidly. So, it would be impossible to make any mature statement about the economics of the cloud, especially considering that any such statement would have to address economics across a wide range of needs.

However, this post by Phil Wainewright is interesting. It reminded me of a study I read about long ago (and have been unable to find since) that asserted that mass switching to a new technology did not occur until it was a 10x improvement over the current technology. The basic point was that switching costs (capital, process changes, personnel training/re-orientation, organizational changes, etc.) are far higher than most people intuitively would believe.
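Some back-of-the-envelope math (my numbers, not Phil's) shows why the threshold is so high:

    # Hypothetical switching math: even a 10x per-year improvement takes
    # years to pay back one-time switching costs.
    current_cost   = 100_000   # per year, running it in-house
    cloud_cost     =  10_000   # per year, the hypothetical 10x improvement
    switching_cost = 250_000   # migration, retraining, process/org change

    annual_saving = current_cost - cloud_cost
    print(switching_cost / annual_saving)   # ~2.8 years to break even
    # At a merely 2x improvement (cloud_cost = 50_000), it's 5 years,
    # which is long enough that many organizations never switch.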

If the numbers Phil cites are representative, cloud-based architectures may emerge and mature far faster than most of us would have guessed. This is the sort of hard data that may eventually justify the hype associated with SOA, Web 2.0, etc.

The Limits of Monolithic Knowledge

I suspect the following observations are more about how long I've lived than anything really new. Regardless, I've become increasingly interested in the factors that limit the size/complexity of a capability.

I suppose some of these factors are ontological...a capability (by definition) has a constrained scope of use/applicability across a limited range of contexts. Since the capability (in effect) models both its scope of use and its range of contexts (implicitly and explicitly), the amount and complexity of that modeling is generally a key predictor of the size/complexity of the tightly-coupled aspects of the capability.

Bottom line: as the capability increases in scope (and therefore, size/complexity), so do its implicit/explicit models, and, so does the difficulty of creating it.

These ontological factors alone would seem to point toward a capability creation boundary where additional knowledge inputs yield less and less incremental capability...more pithily alluded to in sayings like "large complex systems that work evolve from small simple systems that work" ("evolve" being used in an engineering sense, since there are no small simple biological systems).
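One toy way to picture that boundary (my own illustration, not anything rigorous): suppose capability grows roughly like the log of knowledge inputs.

    import math

    def capability(knowledge):
        # Purely illustrative diminishing-returns model.
        return math.log(1 + knowledge)

    for k in (10, 100, 1000):
        gain = capability(k * 10) - capability(k)
        print(f"10x more knowledge beyond {k}: +{gain:.2f} capability")
    # Each 10x step in inputs buys about the same absolute gain while
    # costing 10x more, which is one way to read "start small and simple".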

Since we can't really do much about the ontological aspect, we tend to focus on the factors we can influence: personnel, process, culture, etc.

Which brings me to a few items that triggered these thoughts this week:
  • Various government leaders have expressed concern about spending overruns in the defense area over the past few years. This article in the New York Times is a good summary of the latest declaration of a "crisis." Critics in the past have pointed to factors like unconstrained requirements growth, loss of systems engineering expertise, poor engineering process & execution, poor management process & execution, poor acquisition process & execution, lack of discipline (across the board), etc. I'll limit my comments to observing that needs with radically constrained resources (e.g., time, budget) tend to get addressed more efficiently. While I recognize the dangers of faster, better, cheaper for large/complex capabilities like the Space Shuttle, I also suspect that the ultimate needs can often be addressed with simpler capabilities. Constraining resources focuses the mind.
  • A related LM story here.
  • And, an Augustine-centric article from Defense Acquisition Review.