December 16, 2020

Control, agency and complexity — Phil Windley and Philip Sheldrake in conversation

Re-posting here in the hope that such discussions continue and a shared language and understanding is co-created around these topics.

A rich discussion around the topics of sovereignty, agency and complexity arose on Medium.com between Phil Windley and Philip Sheldrake in follow-up to a September 2019 post by Sheldrake: Generative identity — beyond self-sovereignty (first published to the AKASHA Foundation blog here).


Thread 1 — on control and agency

In response to Sheldrake's assertion:

agency (in co-evolving structure) entails a negotiation in and with the world that the word “control” denies.

Windley

There is no agency without control of something. Agency implies I control things. Else how can I be autonomous?

Sheldrake

I believe you may be wielding the concept of agency slightly differently to sociologists — by my reading at least — perhaps evidenced most clearly by your question “Else how can I be autonomous?” To varying extents, you are not.

“Agency refers not to the intentions people have in doing things but to their capability of doing those things in the first place.” (Giddens, 1986)

One can say that the greater one’s capabilities of control, the greater one’s agency. But while control is perhaps the desired object, agency does not presume control.

Having control over a social outcome entails quite different dynamics to, say, controlling the light switch in one’s bedroom. The latter is entirely dumb and will never disagree with you. As and when social interaction encompasses disagreement, one person’s “control” may come at the cost of someone else’s. (Perhaps you want the light off but your partner still wants to read.) Quite naturally then, it is impossible for everyone to maintain control.

In complex systems, I would talk in terms of influence rather than control (with the likes of power theory and actor-network theory linking the two). One may be negotiating in a social system in one context where one’s will prevails, and then in another where, despite no difference on one’s own part, one’s will does not. That’s life. That’s relationships (the pathways for organizing). That’s interpersonal data (the medium of organizing). That’s identity (the sense-making capacity of organizing).

Windley

Sovereignty isn’t about complete control. It’s about borders. When we say a country is sovereign, we don’t mean it can do anything it wants vis-à-vis other countries, only that it controls what happens within its border. Outside its border, it must interact with others as peers.

So too with SSI. The big idea of SSI is that it gives the individual a place to stand as a peer instead of having to be within someone else’s administrative identity system where they set all the rules. SSI allows people to interact with other people and organizations as peers.

See https://www.windley.com/archives/2020/08/authentic_digital_relationships.shtml for more on why I think this leads to better relationships and a few — dare I say it?? — generative use cases that arise from those better relationships.

Sheldrake

A nation state is an artificial construct defined by artificial borders. Nation state sovereignty most definitely invokes those borders.

But humans are natural not artificial. While I argue that SSI currently relates to legal or more generally noun-like identity — rather than the verb-like, more natural conceptualizations of identity — it is being designed to bleed into every micro-interaction in our quotidian lived experiences. And this is deeply worrying.

There are absolutely no hard borders in natural living systems. Perceptions to the contrary may be expected but remain entirely subjective, and I address this very point at greater length in the original post (heading: "An ecology").

Drawing hard borders is a personal choice driven by your own goals, context, values, and cultural lens. Drawing them makes no difference in the ecology of the whole ... as Gregory Bateson noted, they just form part of the very ecology they were supposed to delimit.

But drawing hard borders does make a difference when concreted in technical code from which Alice has no escape. Alice's context, values, and cultural situation differ from yours, and while you may sincerely hope for her to have "self-sovereignty" as you see it, your architectural imposition actually denies her it.

Windley

Sorry, gotta disagree. As Gabriel García Márquez said: “All human beings have three lives: public, private, and secret.” There are very much borders in human relationships and while we may not identify them the same way a nation-state does, there are borders.

You’re assuming these borders are hard. I think SSI lets us make them extremely flexible and move them all the time. You’re creating things to argue with based on the assumption that the technology is driving the architecture.

Sheldrake

Having read just one of his books to date, I have some appreciation for Márquez's magic realism. A pithy quote cannot be relied upon, however, when presented in opposition to deep domain expertise in psychology, sociology, and ecology (which I am not claiming for myself, for the avoidance of any doubt! ... just asserting that such expertise be brought in here).

A border is the edge or boundary of something. When you intimate that the borders you perceive in relation to the human condition are not hard, what do you mean? (I would underline that references to their flexibility and mobility do not stop a border being a border.)

While borders feature in your sense-making of the world, they are in fact, I would argue, a product of the very distributed cognition they deny. Having spoken (via interpreters) with Amazonian tribal leaders last year, I can vouch that they do not perceive a border between themselves and the trees, let alone each other.

What purpose does your reference to borders serve outside legal identity other than to support your perceptions and make it look easier to code?

Any technical architecture for a complex living system premised on false assumptions that manages still to secure traction will leave deep deep marks.

Márquez writes at the top of his memoir ("Vivir para contarla" / "Living to tell the tale"):

"La vida no es la que uno vivió, sino la que uno recuerda y cómo la recuerda para contarla."
"What matters in life is not what happens to you but what you remember and how you remember it."

How certain are you that this will endure in an SSI age?

Thread 2 — on complexity

In response to Sheldrake quoting Matthew Schutte in the context of the 2016 SSI principle: Control. Users must control their identity. ...

This assumes that 1) “identities” are a static referent, 2) identities are maintained at a system wide scale.

Windley

Neither of these is a necessary assumption for the principle. I agree that identities are more than a static referent and it's better to speak of relationships, but control is still required. And there's simply nothing about control that implies centralization and maintenance at a "system wide scale".

Sheldrake

Thanks Phil. I think I can try to put my finger on the ambiguity underpinning the difference in your interpretation of this principle and Matthew’s. With the emphasis on try! …

Christopher Allen’s principle is “Control. Users must control their identities.” I refer to the principle here as “Control. Users must control their identity”, singular. This is accurate (less ambiguous) because, as Allen’s accompanying explanation confirms, his use of the plural corresponds to the multiple users rather than to each user having multiple identities. Each user has one identity, and “it” is used to refer to that one identity.

If Matthew reads this similarly, then the one singular identity (referent) must, by definition, endure, for how else might it be referred to? It is not dynamic but static. It is an “it” among others, which together constitute a social system. Others refer to my “it” and I refer to theirs. They are then categorically system-wide static referents.

If you interpret this principle differently to the point where you assert that it does not make either assumption, then perhaps you are reading “identity” as a mass noun. This is not an easy interpretation if only because Allen presents a binary choice (“They must be able to choose celebrity or privacy as they prefer”) that would not be required if one could — if this schema ever reflected reality — choose differently for one’s different identities.

Windley

I like to point out that there’s no artifact called an “identity” in the identity metasystem created by the protocols supported by the Indy and Aries projects. People see and manage relationships (connections), credentials, and messages. I view “identity” as what emerges from all those and is only seen, in total, by each person.
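As a minimal sketch of that point (purely illustrative, with hypothetical names rather than the actual Indy/Aries data model), one might picture a wallet that stores only connections, credentials, and messages, with no stored "identity" artifact; any sense of identity is only a view computed over what the wallet holds:

```python
# Purely illustrative sketch: hypothetical names, not the Indy/Aries data model.
from dataclasses import dataclass, field

@dataclass
class Connection:
    label: str          # a peer-to-peer relationship

@dataclass
class Credential:
    issuer: str         # who attested the claims, over some connection
    claims: dict

@dataclass
class Message:
    sender: str         # an exchange that happened over a connection
    body: str

@dataclass
class Wallet:
    # Note: nothing stored here is called an "identity".
    connections: list = field(default_factory=list)
    credentials: list = field(default_factory=list)
    messages: list = field(default_factory=list)

    def identity_view(self) -> dict:
        """Any 'identity' emerges only as a view over what is held and exchanged."""
        return {
            "relationships": [c.label for c in self.connections],
            "attested_claims": [cr.claims for cr in self.credentials],
            "interactions": len(self.messages),
        }
```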

I’ve said before that trying to parse Christopher’s words from 2016 to understand what is happening today is just setting up strawman arguments that are easy to knock down. Christopher made an important contribution, but his words are not scripture and he’s not a prophet. You’d do well to move on from arguments based on finely parsed readings of his words, as if they were the Apocrypha.

Sheldrake

To your point about Allen's 2016 principles, you will note that the most recent SSI critique makes no mention of them. But my 2019 critique does, because such 'laws' get ingrained in minds and encoded in software, endure longer than you might otherwise think appropriate, and so remain worthy of criticism.

I wonder if you agree ... I really do think there's not much that distinguishes our perspectives / models. We really do see things very similarly indeed. Reassuringly.

To paraphrase the concluding sentence to your first paragraph here, if only to underline that alignment but also to signal the very subtle difference ... Alice's identities are the co-emergent consequence of her relationships, perceived imperfectly and only in parts by each person (including herself).

In fact, Georg Simmel put it far more eloquently than either of us more than a century ago.

"We are all fragments, not only of humanity in general but also of ourselves. We are amalgamations not only of the human type in general, not only of types of good and evil and the like, but we are also amalgamations of our own individuality and uniqueness — no longer distinguishable in principle — which envelops our visible reality as if drawn with ideal lines. However, the view of the other broadens these fragments into what we never actually are purely and wholly. The fragments that are actually there can scarcely not be seen only juxtaposed, but as we fill in the blind spot in our field of vision, completely unconsciously of course, we construct the fullness of individuality from these fragments.
"... the procedures play out in the individual soul as well. In every moment these processes are of so complex a kind, harboring such an abundance of manifold and contradictory vicissitudes, that identifying them with one of our psychological concepts is always incomplete and actually falsifying: even the life moments of the individual soul are never connected by just one thread."

Windley

I like the quote and agree we are more aligned than not.

My problem with your response is its fine parsing of words to justify the assertion that SSI has static, noun-like identities instead of merely looking at the architecture and judging it. It’s like sitting around for years trying to determine from first principles how many teeth a horse has instead of just going out to the pasture and counting.

In fact, I think SSI is extremely dynamic and flexible. Indeed, after reading your posts on generative identity, I’m wondering what you’re going to change in the architecture to come up with something more generative than the SSI architecture I’m familiar with.

So, I await the design. Until then, it’s just words without substance to talk of dystopia and generativity because there’s nothing to support the argument.

In the end, we’re talking about how to support life-like relationships and interactions in the digital realm. That requires principles, architecture, and ultimately code.

Sheldrake

Love the horse metaphor! :-) ... and I am, first and foremost, a professional engineer. I embrace thinking and doing in equal measure; each is better off for the other. And when thinking alerts you to significantly negative consequences of your actions, you pause for reflection and entreat others to do likewise.

You are not the first to reject the critique of SSI with the challenge that I come up with a more fitting design (or worse ... what some refer to as a "solution"). This is illogical. The current lack of a design that does not suffer the same dangerous consequences relieves neither the current design of its flaws nor its champions of the ethical responsibility to pause and reflect.

I'm sorry that I haven't been able, yet, to articulate the malignant emergent consequences of SSI in a way that more people working in this space grasp readily. Blame the writer, not the reader. Fortunately, the analysis is gaining traction here in Europe, slowly but surely, again not because of the erudition of the critique but because of the workings of different social norms and corresponding perspectives, I'm sure.

You write that I should be "merely looking at the architecture". I should make it very clear that I would never merely do that. I will come back to my two main themes in response to your comment here: emergence and sociotechnology ...

One can look at the architecture of SSI all one likes, just as one can study the architecture of the automobile. Such study is necessary but woefully insufficient, taking no account for example of traffic jams, pollution, or any other systemic consequences.

As I write: "Viewed atomistically, technologically, SSI looks quite sensible. At scale, as sociotechnology, the emergent consequences are malignant."

You write that your approach "requires principles, architecture, and ultimately code". I would retort that the current principles are unclear and must be revisited in inter-disciplinary fora and tested for their facility to operate at a layer beneath the society with which you are most familiar to enable very different operationalizations.

And when you write "code" you are thinking I suspect of technical code. I have no doubt in my mind that technical code alone will be woefully insufficient in heading off distressing outcomes per my references in the critiques to friction.

Windley

You write “Viewed atomistically, technologically, SSI looks quite sensible. At scale, as sociotechnology, the emergent consequences are malignant.” But I’ve never been able to understand, from your writing, those consequences or why you think they follow from SSI.

We disagree most pointedly, I think, on the importance of architecture and its ability to help with this. You say, for example:

“One can look at the architecture of SSI all one likes, just as one can study the architecture of the automobile. Such study is necessary but woefully insufficient, taking no account for example of traffic jams, pollution, or any other systemic consequences.”

You’re mistaking component architecture for system architecture and then criticizing the ability of architecture to elucidate the problems because component architecture doesn’t answer all the questions. I contend that if you look at the architecture of the automobile transportation system holistically, then you can better analyze malignant consequences and their causes.

SSI is a system and there is a system architecture. The principles are very much a part of the architecture. The malignant consequences you worry about ought to follow from the system architecture and become something to study and learn from. So, please point them out. Otherwise, it’s all just handwaving.

I suggest that part of the problem is that you’re trying to critique SSI which is a big tent that does not have a consistent set of principles or even agreement on a system architecture. As such, it’s an easy target since you can undoubtedly find some system somewhere that purports to be SSI and is “doing it wrong.” :)

A better approach might be to select one or two specific SSI ecosystems like Sovrin, for example, and examine their principles (which are clearly laid out and evolved from Allen’s) and system architecture (which is very developed) and point out the specific malignancies that follow. I’m sure there are some and the community welcomes sincere criticism. Sovrin’s architecture has changed a great deal over the years as problems have been discovered because the participants are sincere in their desire to develop an identity metasystem that empowers people to live fully engaged digital lives.

Sheldrake

Thanks as always Phil for taking the time to respond. Your response largely pivots on:

You’re mistaking component architecture for system architecture and then criticizing the ability of architecture to elucidate the problems because component architecture doesn’t answer all the questions.

We are contemplating quite different scales of system. The system architecture to which I refer encompasses human behaviour, human community and society, and the natural living world. I have not seen any system analysis from the SSI community at such scales, nor any anticipation then of the inevitable emergence.

The system architecture to which you refer is technical. Period. There is no way, contrary to your assertion, that the distressing consequences with which I’m concerned ought to follow from this system architecture. There may be some, but my focus is SSI in the real world.

(You also write that the principles are very much a part of the architecture, but then note that there aren’t any consistent principles??)

I, and increasingly others, will keep on highlighting the distressing outcomes here, and I thank you again for treating me here as a critical friend. Thanks. Much appreciated.

Windley

There are consistent principles for specific systems. You have to critique a specific system and its principles, not the general notion. One might, for example, argue for or against democracy to the extent people generally understand what democracy means, or if you appeal to some widely accepted definition. Otherwise, you will be better understood if you argue against democracy as it operates in [pick a country]. So too for SSI.

You say you’re highlighting distressing outcomes, but you’re not doing it in a way that people in, say, the Sovrin community recognize as an issue with their system because you so often misconstrue what that community believes or is doing.

Until you can say “Distressing outcome X will occur in system Y because Y does Z” your critique will continue to miss the mark and not have the impact it could.

It’s easy to scare people by saying “this complex thing you don’t want to spend time understanding will ruin your life and cause cancer”. Much harder to say “we need a way to live fully engaged digital lives safely. Here’s how we can do that.” So, yeah, it doesn’t surprise me that you can get people to listen and be scared.

The tragedy is that the result will be a continuation of the status quo, a dystopia of surveillance capitalism that is destroying not just digital life, but real life as well. I think you have a moral duty to either offer specific critiques that can be acted on, or to stop scaremongering.

Sheldrake

Until you can say “Distressing outcome X will occur in system Y because Y does Z” your critique will continue to miss the mark and not have the impact it could.

This has been a productive exchange Phil in terms of improving my understanding of the communication and collaboration challenge.

I regard you as one of the most sophisticated leaders in the ‘digital identity’ space. You have experience and knowledge I lack, and I have experience and knowledge you lack. While that is a truism for any two people of course, I make that observation if only to humbly suggest that you appear to have a blind spot that I think typifies ‘the community’ here. Complexity.

I’m very aware of the dangers of characterizing a group of people because undoubtedly there is always variation, there are always exceptions. But when I’m called — on more than a few occasions now — to identify direct causal effects (your X, Y, Z above), it adds to the suspicion that current principles, architectures, and designs have been developed in isolation from an appreciation of complex systems, and it conveys a reluctance to engage with complexity even when prompted by critique.

This might signal an ignorance of complex adaptive systems, of natural living systems, of sociology, of ecology. This would explain why The dystopia of self-sovereign identity isn’t hitting home as reliably as hoped with ‘the community’, yet, and indeed why the critique is warranted in the first place. It also corroborates a primary recommendation — to press PAUSE until inter-disciplinary expertise has been brought to this endeavor.

...

Our two-threaded conversation here on Medium these past nine days is of course entirely in the public domain. We’d like then to copy and paste it unedited and in its entirety to generative-identity.org. Please do let me know this week if you have any objection, or indeed would like to add a concluding remark of course. Thank you again.


Image by Robert Zunikoff.