Sunday, December 21, 2008

Why Social Media is Counter-Enterprise

I admit it's impossible for me to really put myself in the shoes of an engineer or manager who is grounded in traditional frameworks and processes. Having been trained traditionally (a CompSci undergrad and an MBA with an emphasis in Accounting), I understand the traditional perspectives.

However, since I've spent 20+ years exploring non-traditional perspectives to knowledge creation and management, I tend to see things as much (or more) from a Complex perspective as a Complicated perspective.

So, I often find I've grossly overestimated how much someone "gets" Complex aspects of a decision space. This post by David Wilcox is a nice discussion of the challenge. Enterprises are fundamentally grounded in what David Gurteen calls World 1.0. Their structures, processes, tools, and personnel are 1.0 in virtually all their formal aspects, and in many of their informal aspects.

The challenge is not, as some have implied, to change from 1.0 to 2.0. The challenge is to recognize how 1.0 and 2.0 interact in a specific context, and act accordingly (hence the unique value of Cynefin).

These kinds of discussions always make me wonder...if Hyperconnectivity is changing things this much in its early infancy, what sort of changes will we see a decade or two from now? Will those changes be, on balance, positive? And just how plastic are individuals and organizations in weaving an ever-evolving tapestry of 1.0 warp and 2.0 woof?

Saturday, December 13, 2008

Stories: Probing vs. Analysis

I've blogged previously about storytelling (though not nearly as much as you'd expect of someone who's interested in naturalizing approaches to complex decision contexts).

As social media begins to enter the enterprise, I better understand why those who emphasize the naturalizing aspects of KM also see stories as key sensemaking artifacts.

The intersection of the analytical culture that dominates large organizations with the probing nature of storytelling is fascinating...especially when experts try to synthesize analysis and stories. If you're a technically-oriented analyst who's grappling with the storytelling aspect of social media, here's a post from Anecdote that helps convey the complex probing nature of stories.

Sunday, December 7, 2008

Micro-watts per Function Point?

Nick Carr's latest posting (along with a conversation with an IT facilities colleague) prompted me to wonder: Will Moore's Law (along with the associated standardization & interoperability at all levels, including semantic) eventually result in power being the primary way we measure SOFTWARE productivity?
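
To make the metric concrete, here's a back-of-the-envelope sketch in Python. Every figure in it is invented (the server count, the per-server draw, the function point total); the point is only to show what a power-per-function-point measure might look like, not to suggest real values.

```python
# Illustrative only: all figures below are invented, not measurements.

def watts_per_function_point(avg_power_watts: float, function_points: float) -> float:
    """Average electrical power consumed per delivered function point."""
    return avg_power_watts / function_points

# Hypothetical data center slice: 40 servers drawing ~350 W each,
# supporting an application portfolio sized at 12,000 function points.
cluster_power_w = 40 * 350        # 14,000 W
portfolio_size_fp = 12_000

w_per_fp = watts_per_function_point(cluster_power_w, portfolio_size_fp)
print(f"{w_per_fp:.2f} W per function point")
print(f"{w_per_fp * 1e6:,.0f} micro-watts per function point")
```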

Snowden on Social Media

Several items on social media appeared this week on Cognitive Edge:
  • A new podcast and set of slides on the use of social media in what some are calling Enterprise 2.0. Well worth reviewing:
    http://www.cognitive-edge.com/blogs/dave/2008/11/km_asia_keynote_on_social_comp.php
  • A subsequent conversation about IT's tendency toward centralized Complicated systems in a domain that's inherently Complex:
    http://www.cognitive-edge.com/blogs/dave/2008/11/the_major_obstacle_to_the_adop.php
  • And, an interesting discussion about social computing and IT by the current guest blogger (Keith Fortowsky):
    http://www.cognitive-edge.com/blogs/guest/2008/11/serious_play_in_a_complex_part_1.php#more

Here's my summary:

IT is about Simple & Complicated stuff (Cynefin), and social computing is about Complex (Cynefin) stuff.

So, IT tends to inadvertently stifle social computing by over-constraining it. As a result, it does not catalyze the kind of exploratory activities that characterize social computing.

The case may be overstated, but I think the basic concern has some merit.

This discussion seems to hint at a wider concern: large organizations are inherently bureaucratic and (mostly unintentionally) uneasy about relatively unconstrained activities. If you can't measure it, how do you know if it's worth it? How do you compute & monitor ROI?

This implies that social computing in a large organization is a bit oil-and-water...perhaps more of a challenge than it would seem at first glance. If social media is as revolutionary as some think it is, this aspect of it points toward a level of internal turmoil in large organizations that is unprecedented since their widespread emergence over 100 years ago....along with a change in structure so fundamental that it calls forth a new label. Identity crisis is just the jumping-off point.

Miscellaneous Complexity

I ran across a couple of interesting Cynefin-related items this week:
  • 10 Principles of the New Business Intelligence (Tom Davenport, HBS) - Tom shows a pyramid that looks a lot like Cynefin's Simple, Complicated, Complex. It's titled "The Relationship Between Decisions and Information: Three Options", and shows the options as Automated, Structured Human, and Loosely-Coupled. It definitely highlights key issues: the contextual aspect of decision support, the linking of information to decisions, and the emerging pattern of loose coupling. Several of the principles seem a bit slanted toward the Complicated/Analytical perspective (e.g., the claim that loosely-coupled is efficient to provision but often not effective strikes me as a bit simplistic), but the piece will definitely catalyze the kinds of conversations about information and decisions that remain all too uncommon.
  • Trust and coherent group action ("When Should We Collaborate", Shawn Callahan at Anecdote) - Shawn describes three levels of interaction (Coordination, Cooperation, and Collaboration) that are characterized by increasing trust, increasing informal interaction, and decreasing formal interaction. He maps these directly to Cynefin's Simple, Complicated, and Complex. I like the Coordination, Cooperation, Collaboration taxonomy since these terms tend to get used a bit sloppily, and I really like the linkages to trust levels, formality, and Cynefin. This kind of thinking helps clarify some of the issues associated with group movement among the three types of activities (I remain optimistic, perhaps naively, that relatively small assemblages (~5 distinctive chunks at all levels above the individual) can move with some agility among all three domains...though the identity-shifting may turn out to be just too difficult for a multi-layer assemblage (i.e., more than 5 people)).

Thursday, November 27, 2008

Design Thinking

I ran across a recent article by Tim Brown of IDEO on design thinking (June 2008 HBR). A statement in the introductory paragraph caught my eye:


  • "...Edison understood that the [light] bulb was little more than a parlor trick without a system of electric power generation and transmission to make it truly useful."

Brown highlights the need to fit technological innovation into a larger ecosystem. Big-picture engineers are probably better at this than puzzle-oriented engineers. Brown describes the personality profile of a design thinker:

  • Empathy
  • Integrative Thinking
  • Optimism
  • Experimentation
  • Collaboration

He also describes a design thinking dynamic with 3 nodes: Inspiration, Ideation, and Implementation. Inspiration and Ideation would seem to be more divergent (though Ideation is a mixture of divergent and convergent), and Implementation more convergent.

Brown's article is a pointed reminder that truly disruptive innovation often spans multiple levels (system, enterprise, ecosystem), domains (technology, people, process), and perspectives (strategy, operations, consumption, complements, etc.).

Divergent-Convergent Zones

I've come to think of techies as falling into one of two camps: puzzle (likes well-defined puzzles) or big picture (likes understanding how well-defined puzzles fit into a larger context). Although I only have first-hand experience with engineers, I suspect other technically-oriented disciplines (e.g., accounting, medicine) have a similar division.

I mention this in the context of a model I stumbled across recently. It's in Viv McWaters' blog and describes how groups work through a specific issue. The phases shown are (a) a new topic emerges, (b) the topic is potentially closed via "business as usual", (c) the topic diverges into a Complex exploratory space, (d) a "groan zone" is entered where the group struggles to create a frame that will move the issue into a Complicated space, (e) the topic moves into a convergent Complicated space, and (f) the topic is resolved.

Traditional engineering is largely in the convergent zone. Non-engineers (e.g., business analysts, marketing) usually inhabit the divergent zone. They are responsible for exploring topics in the divergent zone and organizing them so that they can be handed over to engineers for creating specific capabilities. Engineers use traditional processes and tools to converge on an implemented capability. And, engineering education is largely devoted to training engineers to use a range of tools that transform an abstract description into a concrete implementation.

This works well enough when capabilities are relatively decoupled (from each other and various use contexts) chunks of knowledge (e.g., systems, machines, applications, etc.). However, it becomes rigid, stovepiped, and slow when small chunks of composable knowledge (along with interoperable data) begin to dominate a topic area.

Technology seems to be in the early stages of such a shift today in the area of information technology. Traditional IT has a clear divergent-convergent divide...divergent in the need/requirement exploration phase, and convergent in the analysis/design/implementation phase. New IT (SOA, cloud computing, and similar composable technologies) is beginning to blur the distinction between the divergent and convergent zones.

In topic areas where capabilities are information-intensive (and most capabilities are increasing in the amount of information they store and process), this shift means that the divergent and convergent zones will overlap in an increasingly fractal fashion.

This trend seems to point toward small teams of big picture-oriented explorers, puzzle-oriented implementors, topic experts, and a few part-time cognitive and social domain experts...first in IT-intensive domains, then in all information-intensive domains where agile decision making is important.

On the other hand, information storage, processing, and communication infrastructure would seem to be headed toward being a convergent commodity that requires deep and narrow technical expertise (similar to the production and dissemination of electricity)...except when new technologies disrupt (then displace) existing infrastructure technologies.

Monday, November 17, 2008

Structuring Innovation

The need (and opportunity) to innovate is an ongoing challenge/risk in all competitive contexts. Since the modern world has (for good reasons) an analytical bias, it's not surprising that there's a tension between incremental innovation (which can be produced using analytical approaches but yields only modest improvements) and disruptive innovation (which seems to be largely the result of serendipity but yields dramatic improvements).

This presentation by Robert Austin (HBS) is a nice discussion of this contrast/tension. My impression is that very few organizations have even a basic awareness of many of the key questions surrounding the full scope of innovation possibilities. This is especially true of large organizations whose DNA usually contains a "process maturity" gene that suppresses the expression of what Austin calls "artful making."

Here's a similar Austin presentation with an IT emphasis.

And, a NYT article on the fact that very few organizations manage the radical changes required to survive more than a few decades.

Data-Centricity

I suppose that software's malleability is one reason I first heard about the contrast between data/state and process/sequence in the SW engineering domain. It seems like this tension continues to evolve as technology changes...imperative vs. functional programming, SOA vs. REST, etc.

EMC's latest technology, Maui, is an interesting take on an object-centric world where processing related to storage policy is highly decentralized. A short article describing it appeared recently in The Register.

Beyond Analysis

Since the emergence of the unique blend of empiricism and rationalism called "science", many critiques of its power and limits have been made. I've discussed some of Dave Snowden's thoughts on the topic. Here are two other perspectives that also highlight the value of non-analytical tools:

I suppose it's no coincidence that these two perspectives (along with Cynefin) emphasize the epistemological limits of individuals and groups. Although there's a risk that such an emphasis may overstate the limits of analysis, that risk may be justified for those whose training and experience has tended to emphasize its power.

Tuesday, November 4, 2008

Innovation and Sensemaking

As someone who finds Dave Snowden's writing generally thought-provoking, I was glad to see that he recently discussed innovation with an Australian government network that focuses on continuous improvement.

Although Dave seems to be more focused on disruptive improvement in the Complex domain, and continuous improvement seems to be more consonant with the Complicated domain, Dave's talk was an intriguing critique of the dangers of using analytical and/or best practice approaches to innovation. Although I found some of the rationale for his conclusions unconvincing, the overall message was a refreshing change from the Complicated/Analytical approaches to innovation usually seen in large organizations.

A nice summary is here. It contains a link to the (partial) podcast. And, here's Dave's summary: http://www.cognitive-edge.com/blogs/dave/2008/10/to_distinguish_the_ordinary.php#more

Dis-Integrating Architectures

Although SOA gets most of the attention when the topic turns to composable business capabilities, the discussion is often either superficial or trite. A couple of recent items are welcome exceptions.

The first is a series of articles in the Economist on corporate IT. Although the overall theme is cloud computing, the articles discuss a range of trends that are driving increasing composability. Despite the lack of depth, the collection provides a nice summary of how IT is dis-integrating.

The second, however, provides some real depth. Carliss Baldwin and Jason Woodard recently published "The Architecture of Platforms: A Unified View." This fascinating HBS Working Paper (09-034) reviews three waves of research on business platforms (product-oriented, technological system-oriented, and transaction-oriented), considers various aspects of platform architectures, and describes three ways of representing platforms and their architectures (network graphs, design structure matrices (a long-time favorite of mine), and layer maps). This is the sort of article that is essential for understanding how dis-integrating IT intersects the business. Unfortunately, it is also the sort of article that most technology-centric architects are unlikely to ever stumble across since there remains a largely unbridged gap between IT-centric engineers (who read IT publications) and business analysts (who read HBR and The Sloan Management Review).

Finally, Nick Carr created a bit of a fuss recently in his discussion of cloud computing. His post describing how the emergence of the electric grid triggered a wave of products that had a standard way to plug into that grid was interesting, as was his "typology of network strategies."

These articles/posts provide valuable insight into how dis-integrating technologies are changing the business environment, especially for those who think of WS-* or ESBs when someone says "SOA". As IT shifts from being an expense to being a source of competitive advantage, the need for those who can bridge the IT-business gap will continue to grow.

Sunday, October 19, 2008

Governance and Strange Loops

I remember a retrospective on the 90's asserting that the two least-read books of the 80's were Bloom's "The Closing of the American Mind" and Hofstadter's "Gödel, Escher, Bach" (Hawking's "A Brief History of Time" is a third candidate).

Although I disagree with some of the basic assumptions in each, both books are interesting explorations of key issues. Bloom helped catalyze my interest in various understandings of basic questions in philosophy. And, Hofstadter remains the only person I've run across who explores the epistemological implications of recursion.

Since I've never finished GEB (unlike Bloom's book), I'm not familiar with all the nuances Hofstadter explores. However, the basic theme of the intertwining of what might be called "sensemaking of a context" and "the governance of that sensemaking" is a profound one.

It recognizes an aspect of sensemaking that we constantly juggle, but rarely think about. Both Boyd's OODA loop and Klein's Data-Frame model recognize that sensemaking is perhaps most distinguished by the pervasiveness of a GEB-style "strange loop" of Orientation (Boyd) or Questioning/Reframing (Klein).

Since engineers are grounded in the "sensemaking execution" thread that is basically analytical, the orientation challenge of "governance" is often ignored or assumed to be static...which may be why I've found few engineers who really latch on to frameworks that place equal (or more) emphasis on the "strange loop" that makes all sensemaking adaptable, agile, and real-world.

And, it may be yet another reason why engineers seem to have a difficult time grasping why sensemaking governance must be distributed/decentralized, limited in scope to a specific type of context, as informal as possible, and an 80/20 solution (i.e., not optimized).

Monday, October 6, 2008

My Virtual Crews

Will it become feasible to assemble a virtual crew from composable IT chunks (local and “in the cloud”) to automate significant aspects of a decision making context? The following comments explore this question.

The decision maker faces several simultaneous challenges. Among them are (a) managing goals, strategies, and tactics by constantly seeking alignment across all three levels, (b) understanding an evolving decision context, and (c) constantly making adjustments to goals, strategies and tactics to more effectively address the decision context. Sensemaking frameworks are intended to provide insight into the dynamics of these challenges.

The sensemaking frameworks I’m most familiar with seem to assume a “single CPU” decision maker. John Boyd’s OODA loop, Mica Endsley’s Situation Awareness model, Gary Klein’s Data-Frame model, and Karl Weick’s Enact-Select-Retain framework all highlight key aspects of sensemaking, but only Weick’s E-S-R seems to have been intended to apply to both individual and group sensemaking.

In addition to individual and group sensemaking, there seems to be a middle ground emerging where a single decision maker “outsources” contextually-oriented chunks to IT that is partially autonomous. These chunks are “smarter” than traditional IT, but are also more coupled to the decision maker’s sensemaking process than traditional IT.

Most discussions of IT in this context involve systems (or “intelligent” agents) which are largely decoupled from the decision maker. However, technologies/styles that are loosely coupled raise an interesting question about how decision makers interact with technology…will the increasing fragmentation of IT eventually result in a complex interleaving of decision maker and technology that allows an individual to achieve the kind of sensemaking agility currently only achievable by a group…specifically the kind of group often referred to as a crew?

The use of traditional IT by a decision maker looks more like a carpenter with a hammer than a human with a hand. IT is generally not much “smarter” than a hammer…it does what it’s told to do, and nothing more. Although a hand also does what it’s told (usually), it provides complex feedback that allows it to take on several roles simultaneously.

The following is a proposal for four fundamental roles (and one meta-role) that IT can take on to allow a composed “IT Crew” to act more like a hand.
  • Watcher – IT that observes a specific context (physical, logical, semantic, etc.) for events of interest, and issues reports about these events. This includes rules about how events are linked (causally or by correlation) and about what events to report. Depending on how complex the watched context is, it might include meta-rules that allow the Watcher to “shift focus.” And, it probably involves an ability to do some predictive modeling of the context.
  • Miner – IT that “digs” deeply into a specific context when a Watcher detects events of interest. The context must be constrained enough to allow mining, but ambiguous enough that further information is needed before a decision can be made.
  • Actor – IT that initiates actions in a specific context to achieve goals.
  • Decider – IT that consists primarily of rules-based models that tie Watchers, Miners, and Actors together. As with the Watcher, this probably includes meta-rules that allow the Decider to exhibit a limited amount of adaptability. It almost certainly includes some predictive modeling of the decision context.
  • Executive – forms goals, composes Watchers, Miners, Actors, and Deciders (WMAD) patterns to pursue goals, creates and deploys WMAD packages to achieve the desired effects, and monitors and modifies WMAD patterns and packages as needed to cope with environmental changes. Includes predictive modeling of how contexts affect progress toward a goal. While some of this involves IT, complete automation would seem to be feasible only for constrained and predictable contexts/goals.
This sort of taxonomy is similar to that seen in discussions of autonomous software agents (see, for example, IBM’s ABLE toolkit). Although the IT Crew taxonomy resembles an AI (techno-centric) taxonomy, it differs in several ways. First, it uses a sociological construct (a crew) instead of a brain/mind construct. Second, it focuses on dynamic crew composition and deployment by the decision maker for a specific decision context. And, as noted above, autonomous agents usually do not involve the complex human-tool coupling that is seen between a person and their hand.

In an “IT Crew” framework, IT is built to fill one of the roles listed above. It is also built to be composable (loosely-coupled). Each composable IT chunk is explicitly designed for a specific range of contexts. A designer defines key aspects of a decision context, brings up a palette of IT crew members for each role listed above, and selects and assembles crew members into a WMADE package to be tested and deployed. Note that a package may have multiple members of a given type (e.g., Miners), or it may have only a single member of a type (e.g., Actor).
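
To make the WMADE idea a bit more concrete, here's a minimal Python sketch of the four composable roles plus a composed package. It's a thought experiment, not a reference to any real product or agent framework; all class and method names are invented, and the Executive role is left to the human designer, consistent with the caveat above about how little of it can be automated.

```python
# Thought-experiment sketch of the Watcher/Miner/Actor/Decider roles and a
# composed WMADE package. All names are invented for illustration.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Event:
    kind: str
    detail: str

class Watcher:
    """Observes a context and reports events of interest."""
    def __init__(self, source: Callable[[], List[Event]],
                 interesting: Callable[[Event], bool]):
        self.source = source
        self.interesting = interesting

    def observe(self) -> List[Event]:
        return [e for e in self.source() if self.interesting(e)]

class Miner:
    """Digs deeper into the context when a Watcher reports something."""
    def dig(self, event: Event) -> str:
        return f"details gathered for {event.kind}: {event.detail}"

class Actor:
    """Initiates actions in the context to pursue a goal."""
    def act(self, instruction: str) -> None:
        print(f"ACTION: {instruction}")

class Decider:
    """Rule-based glue tying Watchers, Miners, and Actors together."""
    def decide(self, event: Event, finding: str) -> str:
        return f"respond to {event.kind} using ({finding})"

@dataclass
class WMADEPackage:
    """A composed 'IT crew' assembled for one decision context."""
    watchers: List[Watcher]
    miners: List[Miner]
    actors: List[Actor]
    decider: Decider

    def run_once(self) -> None:
        for watcher in self.watchers:
            for event in watcher.observe():
                findings = [miner.dig(event) for miner in self.miners]
                instruction = self.decider.decide(event, findings[0] if findings else "")
                for actor in self.actors:
                    actor.act(instruction)

# Example composition: the Executive role is played here by the human designer.
feed = lambda: [Event("price-drop", "item 42 fell below its threshold")]
package = WMADEPackage(
    watchers=[Watcher(feed, lambda e: e.kind == "price-drop")],
    miners=[Miner()],
    actors=[Actor()],
    decider=Decider(),
)
package.run_once()
```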

While much of the research on crews appears to have been focused on high-reliability contexts that require group agility and adaptability (e.g., operating room, aircraft carrier, airplane, etc.), it seems to me that a similar pattern could also describe an individual decision maker using a wide range of local and remote Information Technologies to cope with a complex sensemaking challenge.

I suspect that variants of this sort of thing have been discussed in association with artificial intelligence, complex adaptive systems, agent technologies, etc. If someone knows of proposals or research where the organizational concepts associated with crews have been combined with recent types of composable IT, I’d be very interested in hearing about it.

Bits and pieces of today’s IT that seem to fit into this framework continue to appear in the Web domain: eBay sniping software, wiring frameworks like Yahoo Pipes and Ubiquity, etc. However, I can’t remember seeing a taxonomy (like that mentioned above) that would provide a crew-style plug-n-play design framework.

Bottom line: I’ve been a bit frustrated by the lack of architectural structure associated with most composable IT technologies/styles (e.g., SOA). Perhaps connecting these concepts to the organizational concept of crews would catalyze some useful design activity by (a) focusing on types of decision contexts, (b) parsing a context type into a small number of decoupled roles, and (c) providing enough structure to enable relatively simple, but robust, interoperability.

Monday, September 29, 2008

Context & Connection

At KMWorld 2008, Dave Pollard presented an updated slideshare of a previous presentation. It highlights the themes of "Context" and "Connectivity." These two themes characterize an emerging group of capabilities that address Complex decision-making contexts, an area where traditional, analytically-oriented KM tools, concepts, and frameworks fall short.

The title of his presentation is “From Content to Context and from Collection to Connection.”
  • Content-to-Context – traditional KM focuses on the capture, creation, and provisioning of content as a formal artifact. Pollard’s summary is “acquire > add value (& store) > disseminate”, and “Know-what, Collection, Content, Just-in-case.” This implies that the key challenge is the identification, specification, design, capture, and management of relatively generic chunks of content for future consumption by a decision maker. These chunks are assumed to be relevant to a wide range of decision contexts, thereby justifying the investment in formally capturing them and provisioning them for future consumption. And, the (often tacit) assumption is that there is a low barrier to entry for most consumers of these chunks. Anyone who’s tried to design content for future consumption knows that (a) creating content that is really used across a wide range of contexts is surprisingly difficult, and (b) locating, filtering, and fitting pre-provisioned content to a decision context is a lot more work for the decision maker than the casual observer might think. The new focus is on the consumption context. This implies that the key challenge is primarily how the decision maker locates and incorporates relevant content into a decision context. A key emerging aspect of this is to (a) provision content for findability and mashability, and (b) provide services that allow the decision maker to easily match content to context (and vice versa); a toy sketch of such matching follows this list.
  • Collection-to-Connection – as mentioned above, traditional KM focused on the collection of content. Shifting the focus to “Connection” emphasizes connecting the decision maker to resources that are relevant to the decision context. Pollard’s summary is “scan > make sense (& connect/canvas) > publish”, and “Know-who, Connection, Context, Just-in-time.” It’s not a bad summary, but his diagrams of scanning and canvassing have a strangely Content/Collection flavor to them…or maybe Dave Snowden has just made me allergic to anything that looks like categorization/analysis in a Complex context…:-)
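
As promised above, here's a toy Python sketch of matching provisioned content to a decision context via simple tags. It's an invented illustration of the findability/matching idea, not anything from Pollard's presentation.

```python
# Invented illustration: tag-based matching of provisioned content to a
# decision context. Not from Pollard's presentation; just a sketch of the idea.
content_store = [
    {"title": "Vendor risk checklist", "tags": {"procurement", "risk"}},
    {"title": "Cloud SLA primer",      "tags": {"cloud", "contracts", "risk"}},
    {"title": "Onboarding guide",      "tags": {"hr", "process"}},
]

def match_content_to_context(context_tags, store):
    """Rank stored content by tag overlap with the decision maker's context."""
    scored = [(len(item["tags"] & context_tags), item) for item in store]
    return [item for score, item in sorted(scored, key=lambda s: -s[0]) if score > 0]

# A decision maker evaluating a cloud vendor:
for item in match_content_to_context({"cloud", "risk"}, content_store):
    print(item["title"])
```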

Anyway, if you’re looking for a perspective on Ordered vs. Complex contexts that’s less academic than Snowden, this slideshare’s worth a look.

Distrust 2.0

Maybe I’m projecting, but there seem to be some parallels between the current liquidity crisis and Web 2.0. The financial engineering associated with bundling millions of mortgages and securitizing them seems similar in some ways to the Web 2.0 vision of a composable CONOPs and an enterprise IT capability that’s 90% in the cloud. Here are a few points of commonality:
  • In both cases, entities (loans, services, content) are being engineered for interoperability. Although the details differ, the basic purpose is to allow something heterogeneous (specific to a context) to be made homogeneous so that it can be combined/recombined to create new value.
  • There is a dramatic reduction in transparency in both areas. This is seen in part by the discussions of governance in both areas. For mortgages, the talk is about increasing governance (and reducing some existing governance that incentivized risk taking). For Web 2.0, the talk is about the governance needed to provide predictability (e.g., SLA’s), and to enable the required agility and adaptability.
  • Both areas highlight an issue that is often overlooked: trust. The fundamental importance of this issue cannot be overemphasized. In a stovepiped world, mortgages and IT depend on the stovepipe to ensure trust. Superficially, stovepipes enforce trust via the fortifications surrounding specific stovepipe entry/exit points. At a more fundamental level, stovepipes are trustworthy because they’re transparent and static. Composability in a cloud dramatically reduces transparency, and in doing so, dramatically increases the need for more robust trust mechanisms.

    The exploding alphabet soup of security related services, protocols, and frameworks in Web 2.0 indicates that the need for formal management of trust is clearly understood. However, I’m not sure I’m seeing a clear appreciation for the fact that formal mechanisms may severely constrain the very adaptability and agility that Web 2.0 promises. Informal mechanisms (e.g., emergent trust networks) are being explored, but I suspect that both the IT and financial industries will eventually have to come to grips with the tension between increasing mashability and maintaining a desired level of trust.
  • Distrust is discontinuous; trust is continuous. Trust is built slowly and incrementally over time. Distrust often emerges instantaneously when a single incident reveals that fundamental predictions about how someone or something will act are dangerously mistaken. We’ve seen the fallout in the credit markets; a similar fallout in Web 2.0 would seem inevitable.

Finally, since this is not a sociological or anthropological blog, I’ll just mention that the type of cultural ecosystem your enterprise inhabits (i.e., low-trust vs. high-trust) may be the most fundamental driver of all.

Sunday, September 21, 2008

Process & People

A traditional approach to organizational engineering is to document business processes and functions. Although this approach can be traced (in modern times) back to Taylor, it received renewed attention in the 80's & 90's with the popularization of Michael Porter's Value Chain concept.

As an engineer there's a lot about this I find appealing. It neatly bounds the problem and supports the kind of analytical slicing and dicing that we techies live for.

However, as a framework, I suspect it may be a better fit for incremental innovation than for disruptive innovation.

These thoughts were triggered by a discussion of Value Nets on a journalism blog. Two diagrams (showing value nets for news) focus on roles and the value that each role provides to other roles.

My snap reaction was that the focus on People and the relationships among them would probably catalyze more innovation than a focus on Process. At the very least, it's a nice complement to the traditional Process-centric approach.

Monday, September 15, 2008

Complex System Engineering

As a systems engineer with almost 20 years of sporadic reading in the areas of chaos theory and complexity theory, I tend to overestimate how much the typical system engineer understands about these topics.

When the topic comes up, I usually get a response that amounts to "I've heard about that." Among older engineers, I occasionally hear a reference to some variation of cybernetics or system dynamics. And, in reviewing more recent system engineering literature, I'm most likely to see a reference to "wicked problems."

There are a number of fields of study and concepts that are related to these topics. My favorite "big picture" is this diagram from the International Institute for General Systems Studies.

Regardless, most system engineers don't appear to see these topics as anything more than intellectually intriguing. Since very few of us build systems that actually exhibit complex or chaotic behavior, I suppose that's understandable.

However, as decentralizing & hyperconnecting technologies become commoditized, the capabilities we create are increasingly being tightly coupled to complex decision contexts. So, I'm starting to see a few references to system engineering for contexts that are primarily complex.

This presentation by Christof Fetzer discusses how complexity comes to dominate many Systems of Systems as they grow in size, and how system engineering must change to effectively address a complex context. And, George Rebovich of Mitre gave an interesting presentation on "Systems Thinking for the Enterprise" at the 2006 International Conference on Complex Systems. In it, he covers much of the same terrain as Fetzer, although from a different perspective. A more in-depth discussion by Rebovich is found here.

And, there's been some recent activity by INCOSE in this area. However, after reviewing a bit of what's being written recently about complexity by system engineers, I better understand why I got such blank looks when I raised the topic (from the perspective of social psychology) with some INCOSE leaders at a local chapter meeting about a decade ago. Even today, most of what's written tends toward one of two extremes: (a) a deterministic approach wrapped in a complex sheepskin, or (b) some kind of "emergence magic."

Much of the recent material is helpful for giving traditional system engineers a better understanding of some of the key issues. And, some of it draws some important distinctions. But, I'll continue to look to Snowden's Cynefin, Klein's data-frame, and Weick's social psychology when I'm pondering basic concepts that illuminate how to design deterministic capabilities that mesh cleanly with complex decision contexts.

Postscript: Here's a critique (by George McConnel) from a traditional engineering perspective that has some good points. And, a summary (by Sarah Sheard) that covers much of the waterfront (also from a traditional perspective). Both are from a recent Symposium on Complex Systems Engineering.

Thursday, September 11, 2008

Is the Edge a Separate Organization?

In a fascinating paper given at the 13th ICCRTS, Frank Barrett and Mark Nissen assert that the answer to this question is “yes.” In this paper, they discuss the agile and adaptable organizational pattern sometimes called an “edge organization.”

The authors assert that the key barriers to creating this organization are found in two basic features of hierarchical organizations:
  • They are grounded in a “rational-cognitive framework” that is dominated by analytically- oriented processes focused on planning, organizing, and controlling.
  • They are guided by “teleological action” that assumes clear and relatively static purposes and goals.
A purist might argue from an epistemological perspective that there’s no escaping some amount of reason-cognition and telos. However, I think the authors are using these terms to refer to what dominates the organizational culture, not to an organization that is either all reason-cognition/telos or no reason-cognition/telos.

The edge organization breaks both assumptions:
  • Its decision context is too ambiguous and dynamic to be analyzed and planned.
  • Its decision context is too ambiguous and dynamic to support the creation and/or sharing of a formal and detailed description of purposes and goals.
If you’re familiar with Cynefin, this sounds like the contrast between Unordered and Ordered domains. Or, Tushman’s Exploration-Exploitation contrast.

Regardless, the authors focus on how organizational identity is formed and how that identity constrains organizational sensemaking. Their conclusion is that hierarchical and edge identities are so different that it’s not feasible to try to morph a hierarchy into an edge.

This question is an important one in NCW. I've assumed over the past few years that selected individuals could form up an edge organization within an existing hierarchy, and depicted this as an edge overlay on a standard hierarchy. The overlay resembles the informal social networks that allow any structured organization with formal processes to adapt to an inherently messy world.

And, I've assumed that the decentralizing technologies (SOA, Web 2.0, etc.) that are beginning to emerge would tend to catalyze this sort of transition from the bottom up since these technologies seem to have a strong edge orientation.

None of this is trivial; the agile management of roles/responsibilities/rights is a significant challenge, but I’ve always assumed that as the technology matured, edge-like communities (e.g., COIs) would be chartered and/or emerge. These authors seem to be implying that my assumptions are both naive and dangerous.

They cite several theoretical bases for their assertion. Among those are:
  • Giddens’ structuration theory - this proposes that structure is created and recreated by action, and action is constrained or enabled by structure.
  • Situated action theories - these focus on the dynamic interplay between the subject and the context.
  • Pragmatic theories - these emphasize the interdependent nature of means and ends.
  • The phenomenological philosophy of Merleau-Ponty - among other things, this asserts that “embodiment is constitutive of perception and cognition.”
Merleau-Ponty’s perspective is especially provocative in that it highlights not only how our actions shape our perception of an environment and how our perception shapes our actions, but also how this recursive dynamic depends on the sensemaking locus of a body.

My summary of this dynamic would be “context elicits available embodied skills, which frame the context for action.” Or, in data-frame (Klein) terms, our repertoire of frames is largely the result of acquired embodied skills. The authors summarize this discussion by saying “Most of the time, we act spontaneously and pre-reflectively in accord with embodied skill.” For the individual this may seem obvious, but for a group, it raises interesting questions.

For example, what exactly is a group’s “perception” (or frame set), and how does that “perception” (frame set) interact with individual members’ “perceptions” (frame sets)? A group’s “actions” are perhaps a bit clearer, even if the nature of group embodiment is not. In both cases, emergent behavior and understandings come from the interplay between individual actions and each individual’s perception of their own actions and others’ actions. A reductionist might approach these questions with some sort of modeling framework, but a much more appropriate and common locus seems to be that of "identity", which is perhaps the closest concept we have to individual and group "embodiment."

The authors assert that the various theoretical frameworks imply that individual and group identity is so strongly shaped by either a hierarchical culture or an edge culture that it’s not possible for a group of individuals to morph between the two cultures/identities. Instead, one must grow an edge organization outside of a hierarchical context, and the edge organization must be kept separate from the hierarchy to maintain its effectiveness.

In the second half of the paper, the authors describe five levels of competency using the perspective described above. They then assert three maxims for practice (emphasis added):
  • “The doing, learning and on-the-job experience required to develop edge-like behaviors must take place in an environment that encourages and reinforces such edge-like behaviors.”
  • “Edge organizations can emerge [only] from the activities, dialogs and interactions of people working together in an environment that encourages and reinforces edge-like behaviors.”
  • “The people working together in an environment that encourages and reinforces edge-like behaviors must learn the kinds of activities, dialogs and interactions required for Edge organizations.”
Finally, they propose a three phase approach to building an edge organization. This approach focuses on increasing levels of organizational competence and involves (1) selecting and developing edge-oriented personnel, (2) creating edge-oriented conditions, and (3) engaging individuals and the group in edge-oriented activities.

Bottom line: In a sea of papers that are often techno-centric rehashes of existing sensemaking/NCW frameworks, this paper is a much needed reminder of the centrality of individual and group identity/skill formation and development in creating an ability to thrive at the edge.

Monday, September 1, 2008

Why Blogging Will be Rare In Enterprise 2.0

Merlin Mann has an interesting list of criteria for “What Makes a Good Blog?” I suppose you could say his criteria don’t apply to blogging within an enterprise, but I suspect that any blog that doesn’t meet most of them won’t be widely read.

Anyway, the item that really popped out for me was “Good blogs reflect obsessions.” Within even a large enterprise, I suspect there are very few people who (a) are obsessed about a work-related topic, (b) have the time and ability to create something distinctive and engaging, and (c) are willing to actually invest the required effort.

I think I meet the obsession criterion on the topic of sensemaking and its relationship to information technologies and organizational behavior. However, since workplace blogging is on my own time, I definitely struggle with investing the required time/effort. There are only so many minutes in a lifetime, and we all have multiple roles/responsibilities to juggle.

Even if an enterprise had a process for identifying employees with work-related obsessions and believed the ROI was good enough to justify funding their blogging, I suspect good blogs would remain rare within the enterprise…they’re rare enough even in the much wider domain of the Internet.

Sunday, August 31, 2008

The Death of EBO?

General Mattis's guidance that USJFCOM cease the use of Effects Based Operations (EBO) concepts does not bode well for its future. Here are a few extracts I found interesting:

  • "The joint force must act in uncertainty and thrive in chaos, sensing opportunity therein and not retreating into a need for more information."

  • "EBO ... assumes an unachievable level of predictability"

  • "... any planning construct that mechanistically attempts to provide certainty and predictability in an inherently uncertain environment is fundamentally at odds with the nature of war."

  • "The use of 'effects' has confused what previously was a well-designed and straightforward process for determining 'ends.'"

  • "The underlying principles associated with EBO, ONA [Operational Net Assessment], and SoSA [System of Systems Analysis] are fundamentally flawed and must be removed from our lexicon, training, and operations. EBO thinking, as the Israelis found, is an intellectual 'Maginot Line' around which the enemy can maneuver. Effective immediately, USJFCOM will no longer use, sponsor, or export the terms and concepts related to EBO, ONA, and SoSA in our training, doctrine development, and support of JPME."

My limited understanding of EBO was acquired incidentally when I became interested in Network Centric Warfare a few years back. So, I don't know enough about the details of EBO to have strong opinions about it.

However, General Mattis's statement did prompt a few snap reactions:

  • EBO, like NCW, seems to have come to prominence in part because of the emergence of hyperconnectivity in communications and information technology. This revolution prompted visions of coordinating effects with unprecedented precision across a battlespace. As I've said before, I think the basic concepts of NCW (as depicted in Figures 5 and 11 of "The Implementation of Network Centric Warfare") are exactly right in their emphasis on the social and cognitive aspects of turning information into actions. However, I've also noted that the U.S. implementation of these concepts seems to have overemphasized information sharing, to the detriment of the social and cognitive domains. General Mattis's critique of EBO recognizes that robust information may actually degrade decision making if the cognitive and social domains are not also considered.

  • There seems to be a widespread feeling among John Boyd's followers that the intellectual leaders of NCW have promoted concepts that are flawed understandings of Boyd's ideas (e.g., the Observe, Orient, Decide, Act (OODA) loop). I may have misunderstood these critiques, or it may be that the NCW theorists were on a track parallel to Boyd (a common situation when a fundamental new idea is on the horizon). Regardless, I think it's true that NCW does not emphasize "staying inside the opponent's decision making loop" the way OODA does. General Mattis's critique seems to imply that EBO reflects this deficiency.

  • Users of a process must own it. According to Mattis, EBO is "staff, not command, led." All of us who live in large organizations understand that creating robust and mature processes is relatively easy (though expensive). The hard part is getting those processes into the heads of those who use them, and to do so in such a way that robust coherent decisions and actions are established and maintained.

  • There seems to have been a fundamental misunderstanding of the nature of war. This is why I think Cynefin is such a useful framework. It helps analytically oriented individuals and organizations understand the limitations of analysis and the need for agile probing in Complex contexts (which is what most battlespaces are).

  • Trying to attack a Complex problem using analysis, complicated organizations, and thick processes is a recipe for failure. What's needed are relatively simple organizations and processes that (a) provide "just enough" structure to maintain coherent action and (b) are able to run OODA loops fast enough to maintain relevance. Again, see Cynefin.

  • Finally, I can't help wondering if there's a bit of "emergence magic" mixed into EBO. As with NCW, I may be misunderstanding what I've seen, but there's an "information for free" mentality that seems to occasionally pop up in discussions of pervasive hyperconnectivity. As someone who has been entranced by a number of books describing complex adaptive systems and emergence, I understand the temptation to think that mixing the right ingredients with the right incantation might result in the emergence of an unexpected synergy. However, our ability to design emergent behavior to achieve a specific goal seems to be very limited at this point.

Anyway, if you're the least bit interested in asymmetric and irregular warfare, General Mattis's article is must reading.

Saturday, August 30, 2008

The Decontextualization of IT

Seems like every few weeks there's new swirling about SOA vs. WOA/REST/ROA.

I doubt I can add anything new to the discussion, but it occurs to me that a part of what we're seeing may be the gradual and ongoing decontextualization of IT. Here's the progression:
  1. Mainframe era - an entire enterprise of related contexts is modeled on a single box. User interaction is limited to block-mode terminals & printouts. My first job after leaving college was programming on such a box.
  2. Microcomputer era - similar to mainframe; for smaller organizations
  3. PC era - a small set of a specific users' or group's contexts are modeled. New tools allow users to do their own modeling (Excel, Access). IT spends a decade or more figuring out how to govern the resulting dis-integration.
  4. Web 1.0 - the entire enterprise of contexts is exposed via a standard presentation layer (i.e., a browser); in some ways this looks like the return of the mainframe, albeit a distributed one.
  5. Web 2.0 - coarse-grained business-oriented subsets of specific contexts are exposed. Although the integration of those services remains largely an IT job, the services carry much less context with them than the apps/systems they replace. As a result, the SOA style is promoted as enabling increased business agility and innovation.
  6. Resource-oriented web - decontextualized resources are exposed. IT builds apps, services, workflows, etc. that contextualize resources. Users string resources together to contextualize them "at the speed of need" (e.g., Ubiquity); a toy sketch of this kind of stringing-together follows the list.
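
As mentioned in the last item above, here's a small invented Python sketch in the spirit of pipe-style wiring tools. The URLs are placeholders and the filtering step is made up; the point is only that the resources themselves carry no context until the user supplies one.

```python
# Invented sketch of pipe-style resource composition: fetch generic resources,
# then contextualize them locally "at the speed of need". URLs are placeholders.
import json
from urllib.request import urlopen

def fetch_json(url):
    """A decontextualized resource: just structured data, no built-in meaning."""
    with urlopen(url) as response:
        return json.load(response)

def contextualize(items, keyword):
    """The user-supplied step that gives the generic resource local meaning."""
    return [item for item in items if keyword.lower() in item.get("title", "").lower()]

# Hypothetical usage (placeholder URLs, so left commented out):
# feeds = fetch_json("https://example.com/feed-a.json") + \
#         fetch_json("https://example.com/feed-b.json")
# watchlist = contextualize(feeds, "complexity")
```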

If you emphasize the contextual aspect of knowledge (as I do), you may find this trend slightly puzzling. Ever since the birth of computing & IT, there's been a focus on increasing the amount of context that's automated. There was a concerted effort, now seen as largely failed, to create what became known as "strong AI." And, recently, there's been lots of speculation about a Web 3.0 or Semantic Web.

Even Dion Hinchcliffe, in an interesting discussion about how the Web is increasingly about user-driven contextualization of resources, has a diagram that speculates that the next step may be Semantic.

He may be right...but, if so, it will be a dramatic swing back toward contextualization. Since it's unclear how resources/services can become much more decontextualized, maybe it's time to move back a little. If so, a key issue will be how to do so while maintaining the adaptability that comes with decontextualization.

Regardless, increasingly decontextualized IT, along with standards/provisioning/tools that make it easy to recontextualize "at the speed of need", would seem to be exactly what's needed to support more adaptable and agile sensemaking and decision making.

If you're unaware of the SOA, WOA/ROA/REST, etc. swirling, here's a few recent items: Burton Group blogger, here, here, and here. This shows no sign of clearing up any time soon...which, given how fundamental it is, may be a healthy sign.

Thursday, August 28, 2008

Drawing Distinctions

Two recent posts got me to thinking about the drawing of distinctions.

Dave Snowden discussed various types of stories, and referenced a Patrick Lambe discussion of the dangers of simple, closed typologies. If you're unfamiliar with this topic, both are well worth reading.

Anyway, my reaction was that there are two extremes that folks tend to react against. At one end is the logical positivist dream of "one taxonomy to rule them all." At the other end is the postmodern "taxonomies are a tool used by the dominant culture to oppress those who are not of it." In this latter perspective, the only appropriate action is to deconstruct the taxonomy to uncover the underlying assumptions and power structures.

Both extremes highlight real concerns. Drawing distinctions is an inescapable activity in creating meaning and taking action. However, the distinctions drawn do reflect historical and environmental considerations that are often invisible to those using them. And, it seems likely that the creation and use of distinctions forms a sort of "strange loop" that is inextricably entwined with identity formation.

Practically speaking, Known/Knowable (Cynefin) domains tend to be Exploitation/Execution-oriented, and generally have relatively stable taxonomies/typologies that are relatively invariant across those who use them (though an expert's will be much richer than a novice's). If you want to push a Known/Knowable context into the Complex domain, deconstruct its taxonomies.

Complex domains, on the other hand, are likely to take a bite out of any taxonomy you bring to the table. This is one of the challenges of probing these domains...you're constantly shuffling a morphing deck of distinctions to keep the useful ones in play...not the sort of thing taught in the average engineering or business curriculum.

Teaching Programming To Accountants

When I was in college in the late 70's, I had the opportunity to attend the national meeting of DEC minicomputer users. At lunch one day I sat at the same table as the president of a small accounting software vendor. When I asked him about what kind of background he looked for in a developer, he said "I hire accountants; it's easier to teach programming to accountants than to teach accounting to programmers."

I was reminded of that while reading a Forrester report ("Complex Event Processing in a Quant World"). In it, Charles Brett interviews Robert Almgren, a pioneer in using Complex Event Processing to build algorithmic trading strategies. Regarding the skills he looked for, Almgren says "I wanted quants — not IT or technology people. This was because creating the algorithms determines the event processing. If you do not know what you want to do with the events available, no amount of CEP will help."
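
To make "the algorithms determine the event processing" slightly more concrete, here's a toy Python sketch of a CEP-style rule over a price stream. It's an invented illustration, not a description of how Almgren's systems (or any real CEP engine) work.

```python
# Toy illustration of a CEP-style rule: watch a stream of price events and
# fire when a simple pattern appears. Invented example, not a trading system.
from collections import deque

def detect_rapid_drop(prices, window=3, threshold=0.02):
    """Yield an alert whenever price falls more than `threshold` within `window` ticks."""
    recent = deque(maxlen=window)
    for tick, price in enumerate(prices):
        recent.append(price)
        if len(recent) == window and (recent[0] - price) / recent[0] > threshold:
            yield {"tick": tick, "drop": (recent[0] - price) / recent[0]}

stream = [100.0, 99.8, 99.9, 97.5, 97.4, 95.0]
for alert in detect_rapid_drop(stream):
    print(f"tick {alert['tick']}: drop of {alert['drop']:.1%}")
```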

It seems like IT is splitting into two pieces: commoditized & standardized (but complex) data processing, and domain-specific business-focused capabilities that are becoming deeply woven into the non-IT fabric of the business.

It's not clear to me that IT-trained personnel will ultimately perform much of the work in the latter category...at least where the related business knowledge is largely Knowable (the domain of experts) or Complex (the domain of pattern management). Then again, maybe it's always been that way...

Saturday, August 16, 2008

Dis-Integrating E-Mail

As asynchronous electronic connectivity became commoditized in the 80's and 90's, two types of tools became pervasive: e-mail for one-to-one (1:1) or one-to-few (1:F) communications, and bulletin boards (then Internet forums) for many-to-many (M:M) communications. Since I'm mostly interested in group sensemaking, I'm going to ignore 1:M communications for this post.

Enterprise 2.0 has begun to explore how social media can be used on a scale that is much smaller and more formal than the Internet. That exploration is mostly of 1:1 & 1:F capabilities, since most group conversations inside an enterprise are on that scale.

Given that e-mail is basically snail mail in an electronic form, it would seem that e-mail might disintegrate into multiple capabilities that are crafted for the various types of small group activities.

Matt Moore (engineerswithoutfears.blogspot.com) has an interesting post & SlideShare of how he sees e-mail dis-integrating...check it out.

Monday, August 11, 2008

Mashup Wiki Apps

I suppose that sounds like a bad translation from some foreign language, but it seems to be what MindTouch Deki enables. In this blog post, Dennis Howlett discusses the latest release.

I heard an interview over the weekend with the inventor of the wiki and thought it was interesting that he invented it to replace e-mail-based collaboration on a project. What little I know about wikis indicates that, within the enterprise, they still work best in the context of a group that needs to collaborate to complete a task.

I suspect it has to do with the fact that individual identity within an enterprise is usually tightly coupled to specific goals and tasks. As a result, most individuals are a bit at sea when asked to start or contribute to a wiki outside the context of some specific goal/task.

I'm not sure anyone can keep track of all the innovation going on in the social media arena, and I don't even try. So, there may be a number of products that look like MindTouch Deki. Regardless, it's an intriguing concept: combine the artifact-oriented wiki with the ability to mashup application inputs/outputs and you get something like a collaborative workflow exploration tool. Something about this sounds catalytic...

Saturday, August 9, 2008

Will IT Be Run by Social Scientists?

As a big fan of Karl Weick & Gary Klein, I'm sympathetic to Gartner researcher Tom Austin's answer of "yes" to this question in this interview.

However, I'd probably answer "no, but." I do think that systems engineers and architects are going to have to become much more literate about social and organizational behavior, and about how information technologies affect individual and group sensemaking and decision making.

And, managers at all levels are also probably going to traverse a similar learning curve.

IT might even have a few social scientists on staff who are techno-literate. But, I don't think you'll have to have a degree in the field to effectively use the required social science knowledge.

Conversation Types

This blog entry by Jennifer Legio references a "Conversation Prism" as a tool in enterprise social media strategy formulation.

It's an intriguing taxonomy that raises an interesting question...when will the rate of change in the taxonomy/structure of social media begin to flatten out? I certainly don't have a clue, but do have a couple of snap reactions:
  • The variety of social media tools will probably continue to increase for 5-10 years
  • So far there's very little of what I'd call "user governance" tools for managing your social net. I can't imagine social media being considered mature without mature governance capabilities across a range of contexts (e.g., individual, internal, external, ad hoc, group, enterprise, etc.).
  • Given that this ultimately will result in a much more complex mixture of Exploration and Exploitation across multiple organizations, along with the likelihood of more fluid individual and group identities, I lean toward 30+ years before this begins to shake out.

Friday, August 8, 2008

Fitting Organizations

I suppose we all struggle some with separating ends from means. But it is still frustrating to see individuals and groups pursue tools like processes and organizational structures as ends in themselves.

To a certain degree, focusing on a tool is useful. Human-intensive tools are always evolving, and the environment in which a tool is being applied also evolves. So, there's a certain amount of tool focus that's required to engage in the ongoing task of molding the tool to the context.

However, it seems that we're more likely to shift our focus from a goal to a tool than vice-versa. And, finding a good balance between ends and means is often difficult.

This task is considered in an article by David Tucker (from Homeland Security Affairs). It's an interesting discussion of fitting organizational structure to the context of fighting terrorism.

Hyperconnectivity in the Enterprise

Posts by Andrew McAfee about micro-blogging and messaging in the enterprise ignited a small firestorm about whether a company should set up or constrain how these tools are used.

Seems like this is classic Complex (Cynefin) territory, so boundaries and attractors make lots of sense.

Furthermore, enterprises have goals and engage in a mixture of Exploration and Exploitation activities to reach those goals, so integrating exploratory tools like micro-blogging and messaging into ongoing Exploitation activities makes a lot of sense (e.g., many web retailers now have chat integrated into their sales workflow).

Regardless, a related item may be more interesting in the long run. Mozilla has released a Firefox plugin called Snowl that aggregates messaging and micro-blogging. While Flock is more established, Snowl's focus on aggregating messages raises some interesting questions. The one I find most intriguing is how this tool may evolve to allow users to manage hundreds to thousands of message sources (and tap into the associated social network). I don't have an answer, but this may be the cutting edge of hyperconnectivity in the enterprise.
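For what it's worth, here's a toy sketch of the aggregation idea: pull items from many sources into one time-ordered stream, tagging each with its origin so a user can later filter hundreds of feeds. The class and method names are my own, not Snowl's actual API.

    import java.time.Instant;
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Toy "message aggregator" in the spirit of Snowl: merge items from many
    // sources (mail, feeds, micro-blogs) into one time-ordered stream.
    public class MessageAggregator {

        static class Message {
            final String source;  // where the item came from (inbox, feed, ...)
            final Instant when;   // when it was posted
            final String text;
            Message(String source, Instant when, String text) {
                this.source = source; this.when = when; this.text = text;
            }
        }

        interface MessageSource {
            String name();
            List<Message> poll();  // fetch whatever is new since the last poll
        }

        // Merge every source into one stream, newest first; the per-message
        // source tag is what lets a user filter hundreds of feeds later.
        public static List<Message> aggregate(List<MessageSource> sources) {
            List<Message> stream = new ArrayList<>();
            for (MessageSource s : sources) {
                stream.addAll(s.poll());
            }
            stream.sort(Comparator.comparing((Message m) -> m.when).reversed());
            return stream;
        }
    }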

Tuesday, August 5, 2008

Complex Organizations

There's been a lot written about social media and modern organizations, but publicly available research in this fast-changing area is relatively scarce. Which makes this HBS Working Paper interesting.

The fact that Michael Tushman is one of the authors makes it interesting to me...he's one of the main popularizers of the Exploitation-Exploration contrast.

The focus is on coordination across the organization, and the paper has some interesting observations about the single large company studied. Here's one that caught my eye: "...the category spanners in the firm are women concentrated in the upper-middle management ranks and in a few functions, most notably sales, marketing, and general executive management."

Focus and Resource Scarcity

We tend to focus intently on key resources that are relatively scarce.

This truism seems so pervasive as to be trite...so why mention it? Where resource supply and demand change relatively slowly (or oscillate in a bounded range) I suspect there's no compelling reason to think much about this topic.

However, the world of IT continues to change quickly, and Moore's Law remains in force. One implication of this is being discussed by Joe McKendrick in his coverage of the emergence of a Service Science curriculum.

How do engineering and IT change when processing, communications, and storage become commoditized (i.e., cheap, pervasive, and standardized/interoperable)? Since the telegraph was invented, an enormous amount of effort has been focused on dealing with resource scarcity in these areas. As that focus is freed up, it's beginning to shift to the business application of hyperconnected and commoditized IT.

It's a very different kind of problem demanding a different set of skills...and, IBM is recognizing this.

Finally, Dion Hinchcliffe has a nice summary of one current aspect of this transition...cloud computing.

Twitter in the Enterprise

SAP is rolling out a Twitter-like capability for the enterprise, discussed here by Oliver Marks.

I suspect no one really knows whether/where this might work. Seems like it might be useful for individuals and groups that are in a sensemaking/orientation role, especially if constant conversation is needed (e.g., marketing).

Like all social media, using metrics to track usage and infer value may be difficult since gaming and unintended consequences may become pervasive (i.e., just knowing something is being measured can turn it into a goal).

Andrew McAfee also has a couple of posts on the topic (here and here).

Sunday, August 3, 2008

Knowledge Binding

Few (if any) contexts are pure Known/Knowable. If a human is involved, there's at least a few Complex threads. Which means that there's usually a trade space involving when to bind Knowledge to Context.

I was reminded of this late last year when I heard an NPR piece on reducing catheter-related infections in the ICU. The traditional approach was to create a more sophisticated (and expensive) technology...that seemed to be more "idiot-proof." The non-traditional approach was to create a process and roles that made the existing technology less risky.

Here are a few observations:

Technology solution - antibiotic-coated catheter
  • Complicated point technology
  • Intended to reduce risk in a range of contexts
  • Expensive; one-size-fits-all "silver bullet"
  • Knowledge is statically bound to all potential contexts at the time the technology is designed and created
  • Unanticipated risks are not mitigated
  • Infection rate remains unacceptably high

Process/roles solution - checklist to control infection sources, non-traditional roles/responsibilities to increase organizational reliability

  • Simple process
  • Intended to ensure the Context is low-risk
  • Cheap; humans ensure fit between context and technology
  • Knowledge is dynamically bound to a specific context at the time of need
  • Unanticipated risks are addressed when the catheter is inserted
  • Infection rate drops to near-zero
As technology becomes more sophisticated, agile, and adaptable, the temptation to create sophisticated "silver bullets" increases. And, it gets easier for a designer to be seduced by the illusion that smarter technology can move the Complex into the Known/Knowable.

When you add in the fact that financial incentives tend to be biased toward Technology (i.e., a product or system that's easily monetized) and away from Process/People/Organization, it's not surprising that the doctor in the NPR story had a difficult time getting hospitals to adopt his very successful solution.

Bottom line: Complex threads require late knowledge binding (driven by Context)...which often means humans in/on the loop. And, short-term financial incentives often point away from the most effective solution.
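If it helps to see the binding metaphor in programming terms, here's a deliberately tiny sketch (the checklist idea is illustrative only, not medical guidance): the static version bakes the mitigation in at design time, while the late-bound version applies the knowledge to the specific context at the time of need, with a human in the loop.

    import java.util.List;

    // Analogy only: "static" vs. "late" knowledge binding expressed as code.
    public class KnowledgeBinding {

        interface ContextCheck {
            boolean satisfied(String checklistItem);  // asks the current context
        }

        // Static binding: the mitigation is fixed at design time (the coated
        // catheter) and applied identically in every context.
        static boolean staticallyBoundMitigation() {
            return true;  // one-size-fits-all; unanticipated context risks remain
        }

        // Late binding: the knowledge (a checklist) is applied to the specific
        // context at the time of need, by a human in the loop.
        static boolean lateBoundMitigation(List<String> checklist, ContextCheck check) {
            for (String item : checklist) {
                if (!check.satisfied(item)) {
                    return false;  // stop and fix the context before proceeding
                }
            }
            return true;
        }
    }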

See also the original New Yorker article.

Monday, July 28, 2008

SOA Generations?

In the commentary section of a recent post on causes of IT complexity, Roger Sessions (ObjectWatch) described 3 eras of Enterprise Architecting:
  • Need to align IT and business - Zachman
  • Need a process - FEA, TOGAF
  • Need something thinner - VPEC-T, AEA, SIP
This sounds a lot like the SOA discussions I've seen recently...with most of the current focus being on the pre-process phase (e.g., how to carve up the objects in "SOA-space" so that we can begin to talk about a SOA-centric process). Or maybe I'm just seeing connections where there really are none... :-)

Does NCW Include Social-Cognitive?

After 3+ years of reading NCW literature, I'm starting to wonder if I just imagined that "The Implementation of Network Centric Warfare" has diagrams delineating four domains: Information, Social, Cognitive, and Physical.

Much of what is written either ignores the Social and Cognitive, or decries their absence. The latest item I've run across ("Unintended Consequences of the Network Centric Decision Making Model") falls into the latter category. It's actually a couple of years old, but it's a good critique of why NCW must include the Social and Cognitive domains. If you're still unconvinced, go read the various online books by Garstka and/or Alberts, which provide a more abstract rationale (e.g., "Understanding Command and Control").

Communities of Exploration

Dion Hinchcliffe has a new post discussing "online customer communities", which triggered these thoughts:

  • It seems like there are two broad online community types: top-down and bottom-up.
  • The top-down types are often associated with formal organizations that have a specific mission and associated goals. The two types I think of (often associated with Knowledge Management) are Communities of Practice (cut across multiple organizations, but the Practice tends to be a formal organizational function and usually has a formal body of knowledge) and Communities of Interest (see, for example, how the US DoD has defined this).
  • The bottom-up types are less well-defined, but Dion describes 3 broad categories: Social Networks, Grassroots, and Customer. I realize that Dion calls Customer "top-down", but I'm using "top-down" in the governance sense of the word. Since Customers are by definition not governable in a traditional sense, I'm calling them bottom-up...but they could eventually evolve into something that's a mixture of both.
  • Finally, it would seem that both types (bottom-up and top-down) are struggling with structure.
  • The top-down communities have too much structure to achieve the adaptability and agility organizations desire. So, they're introducing bottom-up concepts (e.g., tagging) and hoping that will do it. I personally think these concepts have some value, but that the real issues revolve around agile/adaptable governance and what might be called composable community objects (process fragments, data/info objects, etc.), not some "emergence magic."
  • The bottom-up communities have too little structure to reliably allow coherent action toward a specific goal. Dion's post offers a number of helpful insights into this issue with some interesting suggestions. But, this feels like a topic that's still in the forming stage.
  • I'm a little uneasy with the Deloitte SlideShare that proposes a "tribalism" metaphor. I like the implication of adaptability and shared purpose. But, "tribalism" also seems to imply a structure (governance and community objects) that is too static for most online communities.
  • How soft issues like trust and identity interact with structural issues is still very unclear...but since both areas are important, how they interact/relate seems like an important question that should yield valuable insights.
I like the Exploitation-Exploration contrast, so I wonder if the right balance of top-down and bottom-up might be called a "Community of Exploration." Regardless, I don't expect to see a mature understanding of this concept in the near future.

Saturday, July 26, 2008

Whither ESBs?

I've used ESBs only as a workflow/messaging infrastructure for small-scale prototypes, and haven't thought much about how ESBs fit into the rapidly evolving web-oriented terrain.

So, I was intrigued by Joe McKendrick's latest post "Enterprise Service Busted", in which he summarizes a recent dust-up about the place & value of ESBs. I really learn a lot from these periodic spats since they seem to occur when concepts surrounding a key technology have matured to the point that a robust understanding is emerging (the last one of these I saw was around REST/WOA a few months back).

In reviewing the genealogy of Joe's post, I came across a blogger I had not read before. Todd Biske, in this post from October 2006, has a list of 18 functions that should be external to a web service. He then maps those functions into 3 overlapping spaces (software infrastructure, networking & security, and system management) and places various vendors/products on the map. One of the most interesting aspects of the emerging hyperconnectivity is why/how IT should be partitioned and the implications the resulting structure has for the developer, operator, and user. This is one of the better commentaries I've seen. Highly recommended.
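Here's a minimal sketch of the "externalize it from the service" idea that Todd's list points toward: the business service stays thin, and cross-cutting functions (I've picked routing/failover as a stand-in for the full list) live in a mediation layer wrapped around it. The names and the specific functions are my own assumptions, not Todd's mapping or any vendor's product.

    // The business service stays thin; cross-cutting concerns are wrapped
    // around it rather than coded inside it. Whether the wrapper is an ESB,
    // a gateway, or library code is an infrastructure decision.
    public class MediationSketch {

        interface QuoteService {
            double quote(String symbol);  // the actual business capability
        }

        // Cross-cutting concerns handled outside the service itself.
        static class MediatedQuoteService implements QuoteService {
            private final QuoteService primary;
            private final QuoteService fallback;

            MediatedQuoteService(QuoteService primary, QuoteService fallback) {
                this.primary = primary;
                this.fallback = fallback;
            }

            @Override
            public double quote(String symbol) {
                try {
                    return primary.quote(symbol);   // route to the usual endpoint
                } catch (RuntimeException e) {
                    return fallback.quote(symbol);  // failover handled outside the service
                }
            }
        }
    }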


Constructing Meaning

I try to spend a few hours at the library every 3-4 months to skim several periodicals I'm interested in. Since I've let my Harvard Business Review subscription lapse, it's on the list.

A couple of HBR articles I ran across last week in my latest skimming prompted this post (these articles are from over a year ago, since I've been fairly busy the past 18-24 months).

The first is from May 2007 ("Inner Work Life", Amabile & Kramer). The authors divide the "inner life" into Perceptions, Emotions, and Motivation. Most of my interest has been in the arena of perceptions. This article reminded me that I probably tend to underappreciate the centrality of emotions and motivation. I suppose I'm no different from most folks in that I assume most people are more like me than they really are. Those differences may be more difficult to understand and bridge when the differences are grounded mostly in emotions and/or motivation...perhaps because emotions and motivation tend to be deeply entwined with key personal narratives and the mystical/mythical terrain of innate personality structure.

I suppose that I need to be more aware of the potential danger and difficulty of treading on emotive/motivational issues. We've all seen managers and organizations that were amazingly tone-deaf on some issue. However, this landscape also offers tremendous opportunity (e.g., the intense devotion of some Apple users). See the article for an in-depth discussion.

The second article is "Promise-Based Management" (Sull & Spinosa) from April 2007. It has a box entitled "A Primer on Speech Act Theory" which discusses how speech and actions are related. My initial reaction was to recall Weick's "how can I know what I mean until I see what I say?"...a question that calls attention to the degree to which we talk/act ourselves into meaning. This topic has a way of generating lots of heat w/ little light, so I'll limit myself to a few comments:
  • We probably expect (and often demand) more agreement than is required for coherent decisions and action. Since this blog is not about philosophy/religion, I'll just note in passing that we seem to have an innate need for this that often results in unnecessary conflict...though separating this from some folks' apparent love of conflict may be difficult... :-)
  • The descriptive-active spectrum of statements is mentioned. I'm not sure what to make of this. While descriptive statements may be purely aesthetic, even they seem to often have strong undercurrents of action. I guess I'm wondering if even the purest of descriptive statements aren't usually grounded in some sort of telos...ok, enough philosophy.
  • Over the past few centuries there's been a whipsawing between the philosophical poles of "it's all objective & knowable" and "it's all subjective and unknowable" that continues to reverberate. Most people tend to act (whether they admit it) as if certain things are objective, knowable, and true. However, most of us probably spend more time arguing about "what's real" than is really needed (again, see the first bullet).

In a late modern (or postmodern) age, we perhaps underappreciate the power of words to create action. This article helps remind us of that linkage.

Sunday, July 13, 2008

Why SOA is Orienting

On a morning run this week, I listened to a recent Dave Snowden podcast of a presentation given to IT professionals at an Agile Conference in Limerick. Some of Dave's points about sensemaking triggered a few thoughts about how SOA is more aligned with sensemaking than traditional IT.

Before I get into those, I think it's worth noting that architecting frameworks/styles serve "orienting" or sensemaking needs. Whether it's Zachman, MODAF, TOGAF, or another framework, a useful one highlights certain fundamental distinctions and relationships that an architect uses as a map to organize the architecting territory for architecting decisions.

Although the SOA "style" is obviously different from frameworks that involve an architecting process, there are similarities in that both styles and processes fundamentally shape how architecting is performed.

The process of applying an architecting framework/style to a real world context involves at least three sensemaking/orienting activities:
(a) the architect orienting to the context (in light of the framework/style),
(b) the architect orienting to the framework/style (in light of the context), and
(c) the architect orienting to the (never completely clear and continually evolving) relationship between the context and the framework/style.

In traditional IT architecting, we tend to complete most of this orienting work by late in the requirements phase or early in the design phase, and the bulk of the work is usually associated with some mixture of (a) and (c).

In SOA, significant chunks of this work are potentially not completed until the user "composes a CONOPS" by discovering and weaving together previously deployed services. Allowing key architectural binding decisions to be delayed until the user interacts with developed capabilities is a significant departure from traditional systems which "give you any color you want as long as it's black."
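To make that contrast concrete, here's a toy sketch (my own naming, not any particular SOA stack): the binding of capabilities is deferred until someone looks services up by capability in a registry and chains them at run time, rather than having the wiring fixed at design time.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Function;

    // Toy sketch of late, user-driven composition: capabilities are looked up
    // in a registry and chained at run time rather than wired at design time.
    public class LateComposition {

        // A toy registry mapping a capability name to a deployed service.
        static final Map<String, Function<String, String>> registry = new HashMap<>();

        static void deploy(String capability, Function<String, String> service) {
            registry.put(capability, service);
        }

        // Compose a pipeline at run time from whatever is currently deployed.
        static String compose(String input, String... capabilities) {
            String result = input;
            for (String capability : capabilities) {
                Function<String, String> service = registry.get(capability);
                if (service == null) {
                    throw new IllegalStateException("No service offers: " + capability);
                }
                result = service.apply(result);
            }
            return result;
        }

        public static void main(String[] args) {
            deploy("translate", text -> "[translated] " + text);
            deploy("summarize", text -> "[summary] " + text);
            // The user decides, after deployment, which capabilities to weave together.
            System.out.println(compose("field report", "translate", "summarize"));
        }
    }

The point isn't the mechanics of the registry; it's that the composition decision is made by the user, in context, after deployment.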

Since SOA is a relatively immature style, IT architects have just begun to understand its strengths, weaknesses, basic patterns, etc...especially when compared to traditional architecting frameworks/styles.

And, it would also seem that this difference in how sensemaking occurs in SOAs will result in major changes in how IT architects orient to an architecting context and, finally, to the interplay between context and framework.

Hence my assertion that SOA is as much about orienting as Oriented.

Back to the thoughts triggered by Dave...

As my tagline ("the intersection of decisions and technology") implies, there are decision making influences on technology and technological influences on decision making. Here are some of the human aspects that Dave highlights, along with my observations about traditional IT and SOA:
  • Humans filter out all but 2-3% of the data they're exposed to; IT has very constrained input capabilities that usually do very little filtering.
  • Humans orient using (a) fragmented patterns that were (b) recently activated; IT "orients" by exhaustively searching all data (often by using sophisticated, but static, pre-built indices/patterns).
  • Human memory morphs every time it is activated, and tends to adapt/evolve to a context; IT data is static.
  • Humans detect "weak signals" by noticing anomalies that trigger patterns. Weak signal detection is improved by keeping inputs ambiguous and "priming" memory by activating memories of high-risk/low-probability narrative fragments. IT has nothing like this, and tends to "fight against" human sensemaking by flooding a decision maker with large amounts of data.

Although SOA is still more IT than human in these dimensions, SOA does seem to be significantly more amenable to filtering, fragmentation, and morphing than traditional IT. To the degree that's true, it implies that architecting, design, development, deployment, and use of SOA will involve much more orienting throughout all phases than traditional IT.

Bottom line: Traditional IT is a typical tool in the sense that we don't interact much with it to solve a problem. As with a hammer/saw/etc., our interactions tend to be in a prescribed (and proscribed) manner, with most of the adaptation occurring outside the tool. SOA is more like a rope than a hammer in that we can use the tool to explore our understanding of needed capabilities and then use the tool to develop/evolve those capabilities.