Sunday, April 20, 2008

On Tools and Goals

Dennis Howlett takes a refreshingly skeptical swipe at Enterprise 2.0 and social media in this post.

My initial reaction was that new information tools are likely to cause a certain amount of organizational confusion...especially when the tools are potentially as powerful and disruptive as those associated with electronic collaboration.

Typically, there will be early adopters who see tremendous potential...and spend lots of energy creating "visions" and trying to sell them. Often these visions have little concrete connection to a real-world need; they instead tend to talk about the amazing attributes of the tools.

Leaders and users in the real world predictably react with some skepticism. While they may agree that the claimed attributes are desirable (e.g., speeds knowledge transfer), they're compensated based on (a) how they do an existing job better, and (b) whether they develop new (profitable) jobs.

Unless the visionaries can provide a detailed plan for how the new tool will address one or both of these needs in a specific business context, the tool is not likely to attract much support.

Howlett's take is that organizations (which pursue specific goals like making a profit) are still trying to figure out whether/how social media fits into their day-to-day business activities. It's a phase I've come to think of as "playing around" with a new tool to try to figure out what it's good for (and not good for). Most organizations don't have much experience with "playing around" with new IT collaboration tools...but I wonder if this may be an expertise they need to develop.

From Clusters to Coping

A few years ago, a co-worker was nice enough to buy me a copy of Steven Johnson's book "Emergence." It provided a brief, readable update of the self-organizing systems literature I had read 5-10 years before.

I recently ran across an article by Johnson discussing two types of emergent dynamics: clustering and coping. Although his discussion is grounded in his involvement in the Howard Dean campaign, the broader points he makes are potentially relevant to any emergent goal-directed system.

He asserts that Clustering is a simple form of emergent behavior in which a shared common interest (possibly a goal) and the following of a few simple rules & signals result in complex group behavior. Examples most folks are familiar with are the boids simulation and slime mold. However, this behavior is not very adaptable.
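
The boids flavor of Clustering fits in a few lines of Python. This is just a toy sketch of the three classic steering rules (cohesion, separation, alignment); the weights and flock size are invented, and it's nothing like a faithful reproduction of Reynolds' model:

```python
import math
import random

def step(boids, cohesion=0.01, separation=0.05, alignment=0.125, min_dist=1.0):
    """One tick of a toy boids update; each boid is (x, y, vx, vy).
    Three local rules -- steer toward the neighbors' center (cohesion),
    away from crowding (separation), and toward the average heading
    (alignment) -- are enough to produce flock-like group behavior."""
    n = len(boids)
    out = []
    for i, (x, y, vx, vy) in enumerate(boids):
        others = [b for j, b in enumerate(boids) if j != i]
        cx = sum(b[0] for b in others) / (n - 1)   # center of the others
        cy = sum(b[1] for b in others) / (n - 1)
        avx = sum(b[2] for b in others) / (n - 1)  # their average velocity
        avy = sum(b[3] for b in others) / (n - 1)
        sx = sy = 0.0
        for b in others:                           # push away from crowders
            if math.hypot(b[0] - x, b[1] - y) < min_dist:
                sx -= b[0] - x
                sy -= b[1] - y
        vx += cohesion * (cx - x) + separation * sx + alignment * (avx - vx)
        vy += cohesion * (cy - y) + separation * sy + alignment * (avy - vy)
        out.append((x + vx, y + vy, vx, vy))
    return out

random.seed(0)
flock = [(random.uniform(0, 20), random.uniform(0, 20), 0.0, 0.0)
         for _ in range(12)]
for _ in range(50):
    flock = step(flock)
```

Note that no agent has a concept of "the flock" and there's no leader...the group behavior emerges entirely from the local rules, which is the point of the Clustering category.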

Coping, by contrast, is characterized by (a) a much more sophisticated semantic code, and (b) the availability of "truly local" meta-information about the group's state (often communicated indirectly). He cites E. O. Wilson's estimate that an ant's pheromone signaling "vocabulary" contains as many as two dozen "words." And, the complexity of the individual agent's rules is correspondingly greater than the simpler Clustering agents.

My initial reaction (trying to bend this compare/contrast to the frames I know) was that CoPs cluster, while CoIs cope. I'm not so sure his taxonomy fits CoPs/CoIs very well, but I found it interesting that his discussion of social insects (vs., say, slime mold) did not extend to whether Clustering might be characterized by a single decision role/responsibility/right (i.e., homogeneous agents), and whether Coping might require multiple decision roles/responsibilities/rights (i.e., heterogeneous agents...possibly with the ability to "morph", as with the 200+ types of cells that emerge from the union of two cells).

Regardless, there seems to be an implication that designing independent agents to exhibit emergent behavior that is both (a) goal-directed and (b) adaptable may require a taxonomy of roles/responsibilities/rights, a non-trivial signaling code, and the communication of state related to the group and its goal. I don't have a clue about how hard this might be to design, but I think I'm safe in saying it's a different way of designing than most of us are familiar with.

Thursday, April 17, 2008

Chaos: The End of the Beginning

You may have heard that Edward Lorenz died. James Gleick's description of Lorenz's work in "Chaos" sent me on an almost ten-year odyssey in the '90s of reading a wide range of books and research on chaos theory (& mathematics), complex adaptive systems, emergence, and other related topics.

It is one of the few things that has fundamentally shaped the way I view the world. We live in a culture that is saturated with the legacy of scientific naturalism and the reductionism that empowers it, with the knowledge discovered using the scientific method continuing to astound us.

All ways of knowing have been influenced by this revolution, including how organizations and individuals know what they know, and create new capabilities. Both engineering and managing have become professions in a reductionist matrix, and have a core concept of using deterministic models to drive decision making. As with the scientific revolution, the resulting explosion of knowledge and capabilities has been amazing.

However, a richer understanding has begun to emerge over the past hundred years. This understanding includes the chaos of Poincaré & Lorenz, the death of positivism (as the result of discoveries like quantum theory and Gödel's Incompleteness Theorem), and the need for new ways of exploring domains that are more Complex than Known/Knowable.

The death of Edward Lorenz is a reminder that perhaps we are near the end of the beginning of our journey toward understanding how to effectively engage complexity.

More REST

If you've followed Web 2.0 at all, you've at least heard of REST. In the wake of emerging concerns about the thickness of SOA, there have been some very interesting discussions of REST. Here are three:

This is not fast/easy reading, but these concepts address a fundamental issue of the structure of information.
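
To make the contrast with service-style interfaces concrete, here's a toy sketch of REST's uniform interface. The resources and URIs are made up and no real HTTP is involved; the point is that the verbs stay fixed while the nouns (resources) multiply:

```python
# A toy in-memory "server": resources are nouns addressed by URI,
# manipulated only through a small, uniform set of HTTP-style verbs.
resources = {"/orders/42": {"status": "shipped"}}

def handle(verb, uri, body=None):
    if verb == "GET":        # fetch a representation of the resource
        return resources.get(uri)
    if verb == "PUT":        # create or replace the resource at a known URI
        resources[uri] = body
        return body
    if verb == "DELETE":     # remove it
        return resources.pop(uri, None)
    raise ValueError("uniform interface only")

# Contrast with the RPC/SOA style, where the *verbs* multiply instead:
#   getOrderStatus(42), markOrderShipped(42), cancelOrder(42), ...
handle("PUT", "/orders/43", {"status": "new"})
```

Structuring information as addressable resources (rather than as an ever-growing vocabulary of procedure calls) is the fundamental structural issue these discussions keep circling.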

Is SOA Too Hard?

An observer I heard over a year ago noted that new Technology enables change, and new Processes are built on top of that technology, but that new Organizational structures (roles, responsibilities, rights) and changed People (the mental models and narrative fragments used to turn information into action) are much more difficult challenges and require significantly more time and effort.

Over the past couple of months, a number of well-informed observers have expressed worry about whether SOA is just too hard/costly to justify its benefits. This is discussed at some length in two recent posts:
  • Joe McKendrick has a good summary of a recent dust-up sparked by a ZapThink missive decrying the fact that techies are simply putting new SOA lipstick on the same old EAI pig.

  • And, Dion Hinchcliffe appears to be returning to more frequent blogging. He has for some time emphasized WOA as an Exploration-oriented capability and SOA as more of an Execution-oriented capability. His latest on the topic has some interesting thoughts on the desired balance between the two...and provides more evidence that the massive task of integrating stovepipes will be more bottom-up & viral than top-down & architected.

How an enterprise can create a vibrant WOA culture internally is unclear...this is the bleeding edge of Exploration with lots of fast/small failures. One enabler is making all enterprise data as widely available and mashable as possible...but that's only an enabler...it's not a catalyst. (though it's interesting that the data-centricity of WOA is consistent with previous observations I've made about the apparent trend of IT security toward relatively fine-grained wrapping of data)

Enterprises look more like the Sahara with a few vibrant oases (stovepiped functions) than the tropical rainforest of Web 2.0.

How you begin to make the desert bloom is very much an open question...you must address Technology, Process, Organization, and People simultaneously. SOA seems to be kind of like creating mini-oases, while WOA is more like installing a massive sprinkler system.

The complexity of the coupling among the POPT layers (along with the fact that Organization and People are especially tough) would point toward keeping Technology & Process as thin as possible...which is exactly what WOA does.

Clouds, Disruption, and the Enterprise

Most folks think of two things when Google is mentioned: search and ads.

However, Google is grounded in a culture of continuous innovation, so they're always doing something new. Although this apparently chaotic creative activity (Exploration) would not seem to fit the needs of an enterprise (Execution), Google continues to deploy capabilities that target the enterprise.

Several recent postings (perhaps triggered by the release of Google App Engine) have interesting takes on how Google may (or may not) fit into the enterprise apps domain.

This report discusses Gartner's take on how disruptive Google is, and seems to imply that Google (in the near/mid-term) is best suited for infrastructure (including the possible hosting of some mission-critical capabilities), Exploratory capabilities, non-critical Execution capabilities, and capabilities that are not key to competitive advantage.

This report has a few warnings about using Google for enterprise apps.

Dion Hinchcliffe has a nice compare/contrast between Amazon & Google's PaaS capabilities using a stack consisting of Client Capabilities, Cloud Computing Services, and General Purpose Support Services. The barrier to entry (of using PaaS) continues to drop, but Dion notes that governance, security, privacy, and control are major hurdles to enterprise-class Execution support.

And, Phil Wainewright notes that the integration of Salesforce and Google Apps is a significant shot across Microsoft's bow. I tend to agree with Phil that most folks seem to under-estimate the potential impact of what I've been calling a "composable CONOPS" for the past 3 years.

Dan Farber also discusses this topic and has an interesting slide of "Google's Vision". It is basically a taxonomy: Search/Apps/Ads layered on top of Platform/Scale and Footprint/Trust...my snap reaction is "sounds about right."

Finally, Dana Gardner discusses how all this will require a fundamental rethinking of data.

Thursday, April 10, 2008

Contextual Fallacies

I don't think I'd speculate that fallacies are always contextual, but the degree to which a fallacy is central and critical may depend on the nature of the context.

This thought was triggered by a reference to the importance of the Eight Fallacies of Distributed Computing.

It seems to me that the risks highlighted by these fallacies are much more important to Execution/Transaction-oriented activities than to Exploration/Discovery-oriented ones...or maybe I'm just being too reactionary... :-)

Monoliths

I don't remember hearing the "scale-up"/"scale-out" contrast before, but this assertion that Windows is too monolithic (to do either, I suppose) caught my eye because of a Gartner diagram portraying the contrast.

Wednesday, April 9, 2008

Enterprise Mashups

Dion Hinchcliffe blogs much less frequently than he did a year ago, but continues to be one of the more thoughtful observers of all things 2.0.

His discussion of enterprise mashups is typical...enough structure to provoke interesting questions, but not so much as to unnecessarily narrow the conversation...hitting a catalytic bull's eye, as he usually does.

I'd been wondering what had happened to QEDwiki; now I know. The key trend here is the emergence of standards that enable interoperability, along with more robust security.

Mashups are a qualitatively different kind of IT in the way they shift the focus to non-IT people creating mission-centric solutions. This is truly revolutionary stuff (and a vanishingly small percentage of what we see in our lifetimes is).

Security remains the tiger in the middle of the room, so it's nice to see Dion focus on it.

Identity & Trust

IT is tearing down walls faster than risk management capabilities can be created. So, Information Assurance is an increasingly urgent concern.

Nothing new there, but two recent blog entries provide a fresh look at two key concerns:
  • Dennis Howlett's discussion of the need for secure coding is a good summary for those who don't follow the topic. I don't recall hearing much about it until a few months back...now SANS is pushing it hard, and it appears the large commercial SW vendors are climbing on board.
  • Microsoft is the 800-pound IT gorilla of this generation (as IBM was when I was young), so whenever they say they're supporting an open standard, skepticism is a natural response. They've been promoting a mixture of proprietary and open standards (e.g., CardSpace & OpenID) in the security arena, and pitched an End-to-End Trust vision at RSA this week. I suspect even Microsoft can't predict exactly where this will go. Dignan & McFeters' discussion of the proposal & the associated challenges of Identity and Trust is a good one.
Both of these discussions help highlight the centrality of Identity & Trust in the emerging hyperconnected world. Since the beginning of the Industrial Revolution, and the later flowering of Enlightenment skepticism into nihilism, existentialism, and relativism, these issues have become central to individual, family, community, and societal health.

I've referenced "Fighting Identity" before as a good discussion of how this is playing out in one arena, but it seems that Web 2.0's emphasis on group conversations and action is pushing the discussion of Identity & Trust to the center of the IT circle.

I'll leave it as an exercise to the reader to think about how these themes play out in the non-IT-based groups they're a part of...though I can't resist one observation: it is extremely difficult to change Identity and to build Trust; one should not undertake such efforts without carefully weighing the costs/risks against the potential benefits.

Tuesday, April 8, 2008

Business Risks 2.0

Hyped technologies/concepts/tools are rarely subjected to thoughtful criticism (partly because there's no market for it)...hence Gartner's 'peak of inflated expectations' followed by 'trough of disillusionment.'

Here's an interesting discussion of some risks associated with what might be called Hype 2.0. In this blog entry, Dennis Howlett identifies the following risks:
  • Being overwhelmed by Hypergrowth. He cites the case of a South African winery that was warned about this risk and still succumbed.
  • How to marry Exploration (discovery) and Exploitation (transactions). I've talked about this one repeatedly, so it's nice to see it referenced here.
  • How to marry Exploration with Hypergrowth. I suppose it's relatively easy to at least talk about throwing more servers online...however, when Hypergrowth starts affecting less scalable resources (personnel, processes, organizational structure, supply/distribution chains/channels, etc.) you're facing a much tougher challenge.
I'm sure this list is not exhaustive, but it's not a bad start.

Group Conversations: A Taxonomy

Over the weekend, I listened to a talk given by Clay Shirky about his new book "Here Comes Everybody."

As a sucker for taxonomies, I found his taxonomy of group conversations interesting:
  • Sharing - post stuff you find interesting
  • Conversations - sounded like CoPs to me...no roles/responsibilities & revolves around a shared interest
  • Collaboration - sounded like CoIs to me...roles/responsibilities & has a specific goal
  • Collective Action - focused on near-term action...roles/responsibilities are often fluid and the goal is very specific and often time-sensitive
As you move from Sharing toward Collective Action, there's an increasing level of synchronization (shades of Figure 5 of "The Implementation of Network Centric Warfare").

I liked his "every URL is a latent community" (my paraphrase)...very RESTful.

Here's a good summary.

Storytelling

The KM community seems to see "storytelling" as already past the hype phase. Some folks seem to view the whole topic with some cynicism/suspicion...apparently having seen too many attempts to use storytelling-based tactics to exercise control or impose an agenda.

Regardless, there seems to be broad agreement that much of human sensemaking is based in the use of narrative fragments (with the more recently activated ones being most likely to be recalled) in a bricolage fashion to assemble a storyline that fits the current decision context.

I recently ran across an interesting resource that had a couple of links that help illustrate the storytelling perspective:

Transforming NCW

For reasons that are not entirely clear to me, there's been a lot of compare/contrast among NCW, "irregular warfare", 4GW, and the ideas of military thinkers like John Boyd. I have no training in this area, so I can't really judge, but it seems to me that there's more overlap than difference among the key ideas of these frameworks...at least at the model/conceptual level.

As I've mentioned before, Figures 5 and 11 of "The Implementation of Network Centric Warfare" seem to be conceptually applicable across the entire diplomatic, information, military, economic, and socio-cultural development (DIMES) spectrum, and mostly consistent with the contrasting frameworks mentioned above.

These thoughts were triggered (yet again) by an article on irregular warfare in today's Washington Post. Just because the US DoD has emphasized certain aspects of NCW and focused on breaking things doesn't mean that the NCW model is limited to those aspects and to breaking things.

For a more in-depth consideration of irregular warfare, this paper is a solid, yet brief, introduction.

As an aside, it's interesting to note how prophetic William Gibson was in his Sprawl trilogy and subsequent writing about the kinds of memes that emerge from a hyperconnected multi-dimensional effects-oriented information-based culture. I read Neuromancer when it first came out in the mid-80's, and his description of the Panther Moderns' terrorist disruption to cover Molly while she's stealing the Dixie Flatline remains a vivid example of what I'd call NCW today (i.e., entirely consistent with Figures 5 and 11, but nothing like what the DoD calls NCW).

Sunday, April 6, 2008

Modeling Chaos

Snowden's Cynefin framework says that modeling is primarily of value in the Complicated/Knowable domain, and that the Unordered domains (Complex & Chaotic) are not effectively modeled.

While I think the basic point is right, a superficial reading of Cynefin might overlook the fact that most decision contexts have a mixture of simple/known, complicated/knowable, complex, and chaotic elements. And, Snowden has a lot to say about tactics for moving unordered aspects of a context into an ordered domain to enable better decision-making.

This was prompted by an article in Network World about a new capability that IBM is touting for managing a crisis (which usually has a large chaotic component). The basic approach is stochastic programming.

My initial reaction was negative...based on perspectives like Cynefin, but also in reaction to statements like "The model allows all unforseen challenges to be solved, mostly within an hour..." I suspect even the most ardent proponent of stochastic programming would find this statement a bit extreme.
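
For those who haven't run across it, the two-stage flavor of stochastic programming can be sketched in a few lines. All the numbers below are invented, and real solvers do far more than brute-force search; the point is just the shape of the problem: commit now, then score the commitment against an explicit distribution of scenarios:

```python
# Stage 1: pick a stock level before demand is known.
# Stage 2: demand arrives per scenario; profit is realized.
scenarios = [(0.3, 50), (0.5, 100), (0.2, 150)]  # (probability, demand)
cost, price = 4.0, 10.0                          # invented unit economics

def expected_profit(stock):
    """Probability-weighted profit of committing to `stock` units."""
    return sum(p * (price * min(stock, demand) - cost * stock)
               for p, demand in scenarios)

# Brute-force the first-stage decision over a plausible range.
best = max(range(201), key=expected_profit)
```

Nothing about this "solves all unforeseen challenges"...it only optimizes against the scenarios and probabilities someone thought to write down, which is exactly where Cynefin-style skepticism bites.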

I suppose Moore's Law makes stochastic programming increasingly attractive...and, as with all IT-centric tools, it may, in a relatively short time, become relevant to a wide range of needs. However, as with all "better mouse-traps", there are significant switching/transition costs/investments associated with changing existing processes, policies, and (most of all) the world inside the heads of those affected by the change.

More "Nature of Governance"

A suggestion that markets ultimately provide needed governance is exactly right...depending on what kind of governing you're trying to do. Which prompted me to spend a few minutes thinking about what kinds of governance there are...which of course led to questions about "why governance", etc.

So, here are a few random thoughts:
  • Governance seems to imply some sort of control (implies centralized), or influence (implies decentralized)...though I may be mixing apples/oranges here...
  • Control seems to have at least a whiff of telos (goals/purpose) to it.
  • So, one key question would seem to be "what goals are you pursuing & why do you need governance to reach them?"
  • Once you understand that, then a consideration of the relevant cause-effect relationships, and the degree to which you can know/control/influence them, would seem to follow.
  • If you have to achieve a specific effect in space & time, then you're probably going to lean toward control-oriented governance (e.g., I need to ensure that credit card transactions are finalized within x hours of initiation).
  • If you're trying to influence behavior across all space/time, you'll probably use a more decentralized governance approach.
  • Centralized governance probably tends to dominate the Execution/Exploitation domain; decentralized governance probably tends to dominate the Exploration/Innovation/Discovery domain.
  • The emerging hyperconnectivity would seem to be dramatically increasing the ruggedness of the information-decision/action landscape...if that's so, incentives to more efficiently explore it are ultimately an essential factor (demand-side), but we also need to find better ways to "control" (guide?) decentralized exploration/adaptation/agility (supply side).
This article on Internet governance is a "first-page" Google response to "governance taxonomy". It posits five "baskets" of governance: (a) infrastructure and standardization, (b) legal, (c) economic, (d) development, (e) socio-cultural.

Sounds like a reasonable jumping-off point to me...for those inclined to work the problem top-down... :-)

Send the Computing to the Data

Over the past few decades, it has become clear that technology is creating an irresistible pull for valuable new mission capabilities that tear down security barriers faster than we can put in needed risk mitigation/mgt. capabilities.

A few years back, I extrapolated that the logical end point was to wrap data (and associated metadata) in one or more layers of IA using some sort of PKI. If you know anything about IA, you'll recognize that this endpoint is not in the near future.

Whether we'll end up at that extreme (the current backlash against DRM would vote "no"), it does put the focus on data (vs. processing).

This reflection was triggered by two articles that caught my eye today (both from Jon Udell's blog):
  • A discussion by John Montgomery asserting that mashups involve data that's (a) simple to access programmatically, (b) interesting, and (c) available under terms that enable users to work with it...and you can pick two of the three.

  • A discussion by Jon Udell about a Microsoft HPC initiative that provides a basic cluster approach to wrapping large (multi-tera/peta/exabyte) data stores (e.g., climate data) with an HPC cluster.

Both articles have a bit of the "send the computing to the data" flavor.

Wikis as a Tool

As with any collaborative effort to create knowledge, the underlying assumptions of the collaborators are a key factor in determining the success of the effort.

Clashing assumptions may sink the collaboration, especially if the difference involves a key aspect of the goal being pursued.

I realize these observations are trite, but there seems to be a lot of hype about wikis as a collaborative tool these days.

Here are two interesting discussions of some collaborative challenges associated with wikis:
This sort of thing always reminds me that knowledge exists on a spectrum from "difficult to deny" (e.g., most of physics) to "speculative & subject to complex discussion with no hope of agreement" (e.g., most of philosophy & theology). (my definition of knowledge is "that which is used to turn data/info into decisions/actions").

To the degree that a collaborative goal involves knowledge at the speculative end of the spectrum, you'll probably need agreement on fundamental assumptions if you're going to be successful.

If you can't get agreement on fundamental assumptions, you're likely to spend all your energy on that issue...leaving none to pursue the initial goal.

Review: Everything Is Miscellaneous

A few comments on David Weinberger's "Everything is Miscellaneous."
  • If you've followed the web very closely, most of the book will be familiar. However, the author's discussion is still worth reading.

  • If you haven't followed the web closely, this is a good (yet short) intro to a fundamental aspect of its structure.

  • The author explores some of the activity around (and implications of) the emergence of what he calls "third order" structures for organizing/accessing information.

    • "First order" is the physical world; there's a single order for everyone, and users have to adapt to it (e.g., the goods in a retail store). Order is determined sometime between store design & stocking the shelves.

    • "Second order" is an index of the physical world (e.g., a card catalog). Order is determined by the type of index (e.g., alphabetical, Dewey Decimal, etc.).

    • "Third order" is an index that is created for a specific context (e.g., Amazon's "people who bought this book also bought..."). Order is determined when the "index" is created, and changes over time.

  • The book is very readable and will probably provoke some new ideas, even if the territory is familiar.

  • The author's discussion of philosophy is occasionally distracting. Having read a bit about modernism, postmodernism, and the historical roots of both, I thought his description of modernism and postmodernism was a bit on the shallow side. As a result, I got the impression he was dismissive toward modernism, and overly enthusiastic about postmodernism. From an epistemological perspective, I just didn't see much awareness of a middle ground (e.g., critical realism). If I see something presented as a dichotomy that I've come to understand as more of a spectrum, I start wondering about the author's purpose in presenting it that way. Of course, it may be that the author oversimplified in an effort to hit a mainstream target.

  • However, if you've not thought/read much about how we know what we know, this is a very readable summary of the fuzziness of human understanding. And, the research cited is much more supportive of the middle ground than the philosophical extremes.

  • I wish he had covered how knowledge is turned into action. I suppose the minimal discussion of this topic is understandable, given that he's a philosopher, he's trying to hit one topic (how we've traditionally organized information & how the web is changing that), and he's trying to keep it short & simple.

  • Finally, there's not much exploration of what this means for groups with a specific purpose (e.g., a work group, company, etc.). The exploding access to a "world of miscellany" is raising very interesting questions about how a group creates/maintains enough coherence to act sensibly in moving toward a goal.
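
The second/third order contrast is easy enough to caricature in code. The books and co-purchase counts below are invented:

```python
books = [
    {"title": "Emergence", "bought_with": {"Chaos": 12, "Linked": 3}},
    {"title": "Chaos",     "bought_with": {"Emergence": 12, "Linked": 7}},
    {"title": "Linked",    "bought_with": {"Emergence": 3, "Chaos": 7}},
]

# Second order: one fixed index, the same for every user (a card catalog).
card_catalog = sorted(b["title"] for b in books)

# Third order: an ordering computed on demand from the user's context
# ("people who bought this book also bought...") -- it changes as the
# underlying co-purchase data changes.
def also_bought(title):
    (book,) = [b for b in books if b["title"] == title]
    return sorted(book["bought_with"], key=book["bought_with"].get,
                  reverse=True)
```

The card catalog is built once and imposed on everyone; the third-order "index" doesn't exist until someone asks, and no two contexts need agree on it.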

Of course, whether something is miscellaneous depends on the context...the author's point is that all info/capabilities tend to be miscellaneous WHEN DECOUPLED FROM A CONTEXT.

No info/capabilities relevant to a specific context are miscellaneous for that context.