Programmable Money and Identity

Since my time at Microsoft (almost 20 years ago!), I have been infected by the digital identity virus. At one point I was even part of the WEF Personal Identity workgroup. Since then, I have followed the identity space in some depth, some years more than others. See also my post on The Cambrian Explosion of Identity from 2019, which also features David Birch.

Earlier this week, that same David Birch published a very interesting post about digital identity: the bottom line of his insight is that we should be less interested in solving pre-digital conceptions of identity and more interested in (certified) credentials.

I recently came to a similar conclusion, but from a completely different angle.

First, I bumped into The Block Whitepaper on CBDCs (Central Bank Digital Currencies) from August 2020, and was intrigued by the schema on page 13:

The authors do a great job of explaining the difference between Claim-based (or account-based) money and Object-based (or token-based) money.

In other words, money is an asset, and it can be represented as a Claim (Account) or as an Object (Token).

Then I ran into this May 2021 post by Andrew Hong on The Composability of Identity across Web 2.0 and Web 3.0. It’s quite a technical paper, and I probably only understand 5% of it, but my attention was again drawn to a diagram on the composable and non-composable aspects of identity:

Andrew Hong writes: “The second layer (and onwards) highlights categories and products that allow us to represent that transaction data and/or social graph as tokens. Since tokens have the qualities of existence, flexibility, and reusability – then by the transitive property – our digital identity now does as well. I can move around these tokens at will to different accounts and in different combinations.”

I suddenly realized that the difference between Account-based money and Token-based money also applied to identity.

In other words, just like money, identity is an asset (it always was), and it can be represented as a Claim (Account) or as an Object (Token).

During my 2003 Microsoft project, I was lucky enough to be exposed to more advanced identity thinking by wise people like Kim Cameron and the other folks from The Internet Identity Workshop gang, and I got quite familiar with their thinking on certified identity claims, or claims-based identity.

But only now have I realized that both Money and Identity can be account- or token-based, and that token-based is probably what’s going to help us make progress, because it makes identity and money programmable.

In other words:

For Account-based identity, you need to be sure of the identity of the account holder (the User ID / Password of your Facebook account, your company network, etc.). For Token-based identity (a certified claim about your age, for example) you need a certified claim about an attribute of that identity.

  • For a paper/plastic ID Card/passport, it is the signature of yourself and the signature of the issuing authority, and plenty of other technical ways of ensuring the integrity of the card or passport (holograms, etc.). But it is a certified claim that the ID Card/passport is real, authentic, not tampered with.
  • In the case of an electronic ID Card (like the one we have in Belgium), the certified “token” for authentication or for digital signature is stored on the microchip of the ID card, and can be enabled by the PIN code of the user (a bit like User ID / Password).
  • For a certified (identity) claim (like proving that you are older than 18, for example), you basically need a signed token, a signed attribute. And because it is done digitally, (identity) attributes become programmable: you can assign them access and usage rights (see the sketch after this list).
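
To make the idea of a signed, programmable identity attribute a bit more concrete, here is a minimal sketch in Python. It assumes the open-source “cryptography” package for the signature; the claim fields (subject, attribute, usage) and the over-18 example are my own illustrative choices, not any particular credential standard.

```python
# Minimal sketch of a certified identity claim ("token-based identity").
# Assumes the 'cryptography' package; the claim fields are illustrative, not a standard.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issuer_key = Ed25519PrivateKey.generate()       # e.g. a government or bank acting as issuer
issuer_pub = issuer_key.public_key()

claim = {"subject": "did:example:petervan",     # who the attribute is about (illustrative id)
         "attribute": "over_18", "value": True, # the certified attribute itself
         "usage": ["age_check"]}                # programmable usage rights attached to the claim
payload = json.dumps(claim, sort_keys=True).encode()
signature = issuer_key.sign(payload)            # the issuer "certifies" the claim

def verify(payload: bytes, signature: bytes) -> bool:
    """A relying party checks the issuer's signature and learns only the attribute."""
    try:
        issuer_pub.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

print(verify(payload, signature))               # True: the claim is accepted without knowing "who I am"
```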

For Account-based money, you need to be sure of the identity of the account owner (the User ID / Password or other mechanism to access your account). For Token-based money (a 100€ bill, an NFT, an ETH token, etc.) you need a certified claim about an attribute of that money.

  • For a 100€ bill, it is the signature of the President of the ECB (European Central Bank) and plenty of other technical ways of ensuring the integrity of the paper bill (holograms, etc.). But it is a certified claim that the 100€ bill is real, authentic, not tampered with.
  • For digital money, it is a signed and encrypted token representing one or more certified aspects of that money. And because it is done digitally, money becomes programmable: you can assign it access and usage rights, just like you could for identity attributes (a toy sketch follows below).
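
In the same spirit, a digital money token can carry its own usage rights. The sketch below is a toy illustration only (the field names and the spending rule are my own assumptions, not a CBDC design): the token itself decides what it can be spent on.

```python
# A toy "programmable money" token: the money itself carries usage rules.
# Field names and the policy check are illustrative assumptions, not a CBDC spec.
from dataclasses import dataclass, field

@dataclass
class MoneyToken:
    amount_eur: float
    issuer: str                                            # e.g. "ECB" for a digital euro
    allowed_categories: set = field(default_factory=set)   # usage rights attached to the money

    def can_spend(self, merchant_category: str, price: float) -> bool:
        """The token, not the account, decides whether this payment is allowed."""
        return price <= self.amount_eur and merchant_category in self.allowed_categories

lunch_money = MoneyToken(amount_eur=100.0, issuer="ECB",
                         allowed_categories={"food", "coffee"})
print(lunch_money.can_spend("coffee", 4.50))       # True: "pay for Starbucks"
print(lunch_money.can_spend("electronics", 99.0))  # False: not what this money is entitled to do
```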

So, I come – although from a different angle – to the same (or at least a similar) conclusion as David Birch in his latest post about identity: that (certified) credentials are the way forward.

“These credentials would attest to my ability to do something: they would prove that I am entitled to do something (see a doctor, drink in the pub, read about people who are richer than me), not who I am.” (David Birch)

I am just adding the money dimension to it, and using the same sentence, I can now say:

“These credentials would attest to the money’s ability to do something: they would prove that the money is entitled to do something (pay for Starbucks, pay for food, only to be used if there is enough money on my account, etc.), not whose money it is.” (Petervan)

Post Scriptum:

You could also consider NFTs as certified claims of something (in today’s hype, they are certified claims of the authenticity of an artwork, but it could be anything, including identity or money). Amber Case, for example, referred to NFTs in this context when mentioning the Unlock Protocol in her ongoing overview of micropayments and web monetization:

Unlock Protocol has a particularly inventive approach to NFTs — using them as customizable membership keys for certain sites. This allows people to set a length of time for the membership, or access to certain features like private Discord channels.

Content creators can place paywalls and membership zones in the form of “locks” on their sites, which are essentially access lists keeping track of who can view the content. The locks are owned by the content owners, while the membership keys are owned by site visitors.

In that sense, we can really look at certified credentials as keys to open something (a door, a website, a resource) or to enable something (a certain action, a certain right, a certain flow, etc.).
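
Here is a tiny sketch of that “credential as key” idea, loosely inspired by the lock/key description above. The class and method names are mine, invented for illustration; they are not the actual Unlock Protocol API.

```python
# A toy "lock" as an access list: holders of a valid, time-limited key get in.
# Class and method names are illustrative, not the actual Unlock Protocol API.
import time

class Lock:
    def __init__(self, owner: str, duration_s: int):
        self.owner = owner          # the content creator owns the lock
        self.duration_s = duration_s
        self.keys = {}              # visitor -> expiry timestamp (the access list)

    def grant_key(self, visitor: str) -> None:
        """Sell or grant a time-limited membership key to a visitor."""
        self.keys[visitor] = time.time() + self.duration_s

    def opens_for(self, visitor: str) -> bool:
        """The credential proves what the visitor may do, not who they are."""
        return self.keys.get(visitor, 0) > time.time()

discord_lock = Lock(owner="creator", duration_s=30 * 24 * 3600)  # 30-day membership
discord_lock.grant_key("visitor-42")
print(discord_lock.opens_for("visitor-42"))   # True while the key has not expired
print(discord_lock.opens_for("stranger"))     # False: no key, no access
```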

Rabbit hole? Curious to read your thoughts.

Warmest,

Immoral Machines of Loving Greed

The theme for Techonomy 2019 in Half Moon Bay, California was “Reset and Restore: Governing Tech, Retrieving Ethics, and Acting on Climate.”


In the opening session, Founder and Host David Kirkpatrick prompted: “These are serious times”, and the interview of David by Andrew Keen that followed was really interesting. Keen rightfully asked what needs to be reset, and – if we have to restore something – whether this is a nostalgic going back to the good old times, or what is meant here.

To make a long story short, it seemed the answer could be distilled to a resetting and restoring back to/towards more humanity.

Konstantinos Karachalios, Managing Director of IEEE’s Digital Ethics department, referred to the German Jewish Viennese philosopher Günther Anders, who in 1956 wrote “The Outdatedness of the Human Species”.

Konstantinos also shared some strong opinions about the Power (in)equation – the asymmetry in power between big tech and us – and summarized his thinking as “The Time of (Engineering) Innocence is Over”.

Colin Parris @colin_j_paris did a session titled “Why AI has to be humble” about GE’s use of self-learning AI in the building of GE jet engines. A super-slick and professional presentation, almost too clinical. The last slide was about “Intimidation by Immortal Machines”.


My head was spinning, and it got me thinking of John Markoff’s 2015 book “Machines of Loving Grace – The Quest for Common Ground between Humans and Robots”.


In itself, the book’s title is a spin on Richard Brautigan’s “All Watched Over by Machines of Loving Grace” from 1967, and of course on Adam Curtis’s fantastic 2011 documentary “All Watched Over by Machines of Loving Grace”.

 

I like to think (it has to be!) of a cybernetic ecology

where we are free of our labors

and joined back to nature,

returned to our mammal brothers and sisters,

and all watched over by machines of loving grace. 

Richard Brautigan, “All Watched Over by Machines of Loving Grace” © 1967

Let me put all this behind the backdrop of what I saw and experienced a couple of days earlier in the San Francisco Museum of Modern Art (SFMOMA).


Richard Mosse, "INCOMING" - Picture by Petervan

On the 7th floor, there is an amazing video installation by Richard Mosse called “INCOMING”. It is about the horrible conditions in another Western export product – refugee camps – and the related issues of sovereignty, warfare, and surveillance. The installation forces us to confront our own complicity. Strongly recommended. Still running at SFMOMA until 17 Feb 2020. Warning: you won’t come out smiling from this installation!

https://vimeo.com/234290984

See also the interview with the artist in Forensic Architecture.

The entrance of the installation also includes a picture of Berlin’s Tempelhof, a symbolically loaded site to house asylum seekers.


“…, and the airfield has been transformed into a popular public park. Some of its adjacent buildings and territory were designated as an emergency refugee shelter in 2015”

What misery! What a shame for a “modern” society! This installation made me rethink my opinion about refugees. For me, it questions the whole semantic discussion about “asylum seekers” vs. “economic” refugees. There is no difference. When people become so desperate that they flee their homes, take these incredible risks, and withstand these inhumane circumstances, those semantics become irrelevant.

This injustice is going to explode in our face, sooner or later. A toxic mix with climate change, inequality and the 1% owning 99% of the wealth. I can only hope I will not be treated this way when I or my children have to find refuge from climate change or other disasters in the future.

All the big problems of today are crying for more compassion, more morality, less greed. The root cause is a lack of morals combined with an abundance of greed.

Putting it all together, “Immortal Machines of Loving Grace” may be better replaced by “Immoral Machines of Loving Greed”. Just replacing two words probably describes our Zeitgeist more adequately.

In that sense, some of the discussions at Techonomy 2019 should have included the refugee crisis, rather than safe conversations about the attention economy, tech supremacy or immortal machines of loving grace in a five-star luxury hotel.

See also my separate post on the key memes of Techonomy 2019.


 

Big data and surveillance in cities: convenience vs tracking

Sidewalk Labs is Google’s / Alphabet Inc.’s urban innovation organisation. Its goal is to improve urban infrastructure through technological solutions, and tackle issues such as cost of living, efficient transportation and energy usage (Wikipedia). Looking at their website, it’s full of ambitious goals: reimagining cities to improve quality of life, people-centered design, street safety, affordable housing, sustainability, people first, etc. In other words: do the right thing.

The Intercept takes a deeper look and presents a more skeptical perspective: instead of the usual statistical data, the project uses real-time location data, but it’s unclear where that data comes from. The Intercept also raises questions about how Sidewalk Labs sets limits with regard to the type and quality of consent, and – more worryingly – about its potential for corporate and government surveillance.

“If Sidewalk Labs has access to people’s unique paths of movement prior to making its synthetic models, wouldn’t it be possible to figure out who they are, based on where they go to sleep or work?”

“Replica is a perfect example of surveillance capitalism, profiting from information collected from and about us as we use the products that have become a part of our lives. We need to start asking, as a society, if we are going to continue to allow business models that are built around exploiting our information without meaningful consent.”

Initially posted at Futurist Gerd

The Selfish Data

Some days ago, a Google video “The Selfish Ledger” leaked: a futuristic thought experiment on how total data collection could reshape society. I believe it is a very interesting perspective on data collection that can lead to as many utopian as dystopian scenarios as you want.

There was excellent coverage in The Verge, well done, so maybe read that one first. The same Verge article also includes a good context video here.

What I would like to offer here is a somewhat broader perspective on the whole issue.

The use of the word “Ledger” reminds me of course of the 2012 Digital Asset Grid project – in essence a collection of distributed ledgers of all sorts of data (not only personal data), a blockchain without blocks and without chains – that was already incorporating concepts like the intention economy of Doc Searls. With some goodwill one could interpret the “Resolution” concept in the Google video as some sort of intention.

In 2012 there was maybe a time window in which Personal Data Stores could offer an alternative to the almighty GAFAs of this world, but that window has long since closed. The Google video also shows how outdated the GDPR legislation is. Today it is no longer about users giving consent, but about data having its own life and will. I could paraphrase Kevin Kelly’s “What does technology want?” into “What do my data want?”. Not that I believe that my data want anything at all, but it gives you a taste of Google’s thought experiment.


The key snippet from the video is where the human becomes the custodian – not the owner – of the data ledger, and can pass it on to the next generations. The video suggests that data has its own intention, an intention to survive and to pass on information to the next generations. Like the Selfish Gene of Richard Dawkins (a book from 1976! that is also referred to in the Google video). The Selfish Gene was published more than 40 years ago, and since then the ideas of Dawkins have been quite criticized.

The Google film also has a bit of the same alienating atmosphere, the uncanny-valley feel of Adam Curtis documentaries. Of course the documentary “The Century of the Self” is the most relevant in this context.

It’s a series of 4 videos, together more than 3 hours of footage, but I strongly encourage you to watch it with the Google video as a reference point.

Curtis depicts “how those in power have used Freud’s theories to try and control the dangerous crowd in an age of mass democracy”, and refers a lot to the PR techniques developed at the time by Edward Bernays – corporate PR techniques now used by governments wishing to influence the behaviour of their citizens.

Curtis also cites the words of Paul Mazur, a leading Wall Street banker working for Lehman Brothers in 1927:

“We must shift America from a needs- to a desires-culture. People must be trained to desire, to want new things, even before the old have been entirely consumed. […] Man’s desires must overshadow his needs”

The Google video seems inspired by that desire to train people to desire, whether that is buying stuff or realising resolutions. It still very much looks at the user as a consumer, which is an insult IMO. It also starts feeling very much like the Sesame Credit score – the Chinese social rating system, a private credit scoring system developed by Ant Financial Services Group, an affiliate of the Chinese Alibaba Group – where in essence behaviour in line with the party line is rewarded, and behaviour not in line with that norm is punished. The critical question is of course who sets the norms and what the intentions are of those issuing these norms.

Also, what many discussions about personal data seem to omit is that the data that are intentionally or unintentionally shared by users are only a very small snapshot of somebody’s data “ledger”. A lot is not shared at all: I would refer to these data as “The Unspoken”. The ideas, thoughts, concepts, models, desires, fears, etc. that are unspoken, because they embarrass you, or because they have not yet been integrated into your personal narrative of who you are.

The Unspoken data are related to unspoken dreams, frustrations, fantasies, weird thoughts, shadows, memories, etc. In many cases personal secrets that you are too afraid to share as they expose your vulnerabilities. I have started making a list of The Unspoken that you can find here, and I kindly invite you to complement this list if you feel so. Who said again that “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”?

On another dimension, I have been reading quite recently a couple of books that at first sight may seem unrelated to the subject at heart here.

  • Nora Bateson’s “Small Arcs of Larger Circles: Framing through other patterns”. A book about how thoughts, ideas, concepts and patterns are inter-relational and are passed from one generation to another.
  • Michael Singer’s “The Untethered Soul”: about the timeless philosophical question “Who am I?” and more importantly, which “I” are we talking about here. The “I” of our thoughts and emotions, or the “I” that is witnessing them?
  • Keith Johnstone’s “Impro: Improvisation and the Theatre”: highlighting how people try much too hard not to be obvious, and how many people think they are only interesting if they have something different to show, share, say.
  • Venkatesh Rao’s “Tempo: Timing, Tactics and Strategy in narrative-driven decision making”, describing with virtuosity how “tempo” is an ever-present but rarely articulated aspect of our relationships between people, corporations, etc.
  • Han Kang’s “The White Book”, with an essay about swaddling white bands around a newborn baby: “The womb will have been such a snug fit, so the nurse binds the body tight, to mitigate the shock of its abrupt projection into limitlessness. Person who begins only now to breathe, a first filling-up of the lungs. Person who does not know who they are, where they are, what has just begun. The most helpless of all young animals, more defenceless even than a newborn chick.”

The Google video is also steeped in Silicon Valley’s solutionism delusion: the idea that if there is a problem to be solved, there is an app or an algorithm for it. This is finite game thinking as compared to infinite game thinking, as well described by James Carse.

I like Nora Bateson’s quote here:

The problem with problem-solving is the idea that a solution is an endpoint.

 And further in her book:

I see a great deal of misunderstanding—a great deal of information floating around, and even more being generated in the form of big data, little data, medium data. But not much in the forms of the warm data of interrelationality.

“Warm Data” is information about the interrelationships that integrate elements of a complex system. Information without interrelationality is likely to lead us toward actions that are misinformed, thereby creating further destructive patterns.

“Warm data”, I like that. I prefer it way more than selfish data.

 


The Illusion of Agency

At this year’s Innotribe Sibos, we have a session about digital ethics. Part of a full day on man-machine convergence.

Some of that conversation will be about the use and control of data. With this post, I would like to add my perspective to that conversation, based on some recent thinking on human agency.

At the recent MyData2016 event in Helsinki, I was surprised how little the thinking about personal data stores has evolved since 2012, when I was myself deep in the trenches of distributed data sharing.

It was a really great conference, well organized, cool audience, etc., but like many conferences, it was the tribe talking to the tribe, believers talking to believers, all thinking that their lens for looking at things was the right one, with little or no contrarian view.

I wanted to be that contrarian, and challenge a bit the assumptions.

At the event there was a lot of talk about “PIMS”: Personal Information Management Systems, or personal data stores, or personal data “clouds”. I don’t want to have a discussion about the subtle semantics here.

At one point, Jamie Smith from Ctrl-Shift – whom I respect a lot – said something along the lines of “PIMS are all about giving people agency”.

I think that is a big illusion, and that was what my talk was about. The illusion that the problem is about taking back ownership and control of your data. And that a PIMS is the solution. I believe we are discussing the wrong problem and the wrong solution when talking about managing our own personal data at our terms and conditions.

Owning your own agency is more important than owning your data. That in essence is what my talk was about.

My presentation at #MyData2016 conference

UPDATE: here is the link to the Prezi of this presentation. Because there is so much video in this Prezi, it takes 2-3 min to load. Be patient 😉

The talk is part of a longer story of more than one hour, wandering through a whole bunch of philosophical, ethical and artistic considerations. At this event, I got only 20 minutes, and I told the moderator he could cut me off, which he did most elegantly (no pun intended) at the end of my presentation.

My agency vs. my data is a pretty big deal.

  • It is not about buying but creating
  • It is not about my data but my agency
  • It is not about privacy but about shelter
  • It is not about power asymmetries but relationship symmetries
  • It is not about MyData but about OurData

In that sense, the GDPR (General Data Protection Regulation) is shooting at the wrong problem. In that sense, our politicians and leaders in general are once again excelling at solving the problems of the past.

I got some good reactions after this talk, from Doc Searls saying “you gave the talk that I always wanted to give”, to somebody else sending me a tweet and a mail saying “your presentation has changed my life, I decided to leave Facebook after more than 10 years”.

There is such a strong tension between our actual reality and the desired reality that we are currently moving in some form of virtual or surreality. But as Magritte said:

“Surrealism is the immediate knowledge of reality”

And we feel lost. We escape and try to reconnect nostalgically to what was, and are afraid of what is going to be. People focus on the surreality of their phones instead of real life.

People believe that what is on their phones and PIMS is reality, and that it is able to represent us as human beings. But as Markus Sabadello said at this event: “Technology will not be able to represent the full complexity of human beings.”

Our devices and apps make us believe we are in control, because we now can “manage” our data and lives. But we are focused on managing life, rather than living it. That is our big illusion.

To summarise, I believe our plan and ambition towards our desired reality must at least have following components:

  • This space needs to be regulated. Regulation means setting ethical and moral norms, AND policing them
  • These norms must be ethical and moral
  • We must decide who sets these norms, who polices them, and who penalises/rewards good behaviour.

For that, we must bring in “Society-in-the-loop”, and not let this be decided by governments, corporations, or – god forbid – algorithms.

 


Society-in-the-loop by Iyad Rahwan

We must expand ourselves from a problem (efficiency) orientation to a creative (value-creating) orientation, because the future is not about solving the past but about knowing what you want and using mastery to make that happen.

Last but not least, we must be very much aware of the shallowness of the actual reality, and strive for high-quality work with high-quality attention, presence and meaning, also called “Deep Work”.

Maybe next year, they should call the conference #MyAgency2016.

When public becomes private


Windows at Brussels Airport after the suicide bombings on 22 March 2016. Credit: Pool photo by Frederic Sierakowski, in NYT article “Je Suis Sick of This”

In the aftermath of the terrible Brussels terrorist attacks, I encourage you to watch the full 1h50m livestream of “A Conversation on Privacy” from just a couple of days ago.


The conversation was positioned/framed as “The balance between national security and government intrusion on the rights of private citizens” and featured renowned linguist and MIT professor Noam Chomsky, NSA whistleblower Edward Snowden, and Intercept co-founding editor Glenn Greenwald. Nuala O’Connor, president and CEO of the Center for Democracy and Technology, was the moderator.

It is clear from the reactions of the public in a full-house Centennial Hall of the University of Arizona College of Social and Behavioural Sciences in Tucson, Arizona, that Chomsky, Greenwald and Snowden were playing a home match, but that should not underplay some of the key points they were making.

There are basically four big chapters in this conversation:

  • What is privacy, and the effects of mass surveillance (nobody in their right mind is questioning targeted surveillance)
  • The Brussels and other attacks and the (in)-efficacy of mass surveillance
  • The FBI – Apple case
  • The role of journalism

I am looking forward to a full transcript of this conversation, in the meantime I made the following bulleted notes:

  • On privacy
    • When discussing privacy and security, are we discussing security of State, Corporations, or Citizens?
    • The statement “if you don’t have anything to hide, you have nothing to fear” does not cut it at all:
    • Everybody needs to be able to think and explore in a space where you are not subject to other people’s judgment, where you can make decisions as result of your own agency
    • People are starting to self-censor, curtailing their own speech
    • Privacy is the right to enjoy the products of our own intellect
    • Privacy is the fountainhead of all other rights
    • Privacy is the right to a free mind, without having your ideas being pre-judged before they are fully formed
    • Without privacy, you live as a collective, in a state of reaction to your environment
    • “I don’t care about privacy because I have nothing to hide, is about the same as saying I don’t care about freedom of speech because I have nothing to say”
    • Rights are designed for those who are vulnerable. “Not caring about a right (because it does not apply to you) is probably the most anti-social thing I can imagine.”
    • Rights exist to protect the minority against the majority. Even if the majority does not care about privacy (or any other right), that majority view is irrelevant
    • Silicon Valley companies still don’t care about your privacy. What they fear is users would give their data to somebody else
    • The “Digital Self” is unhealthy, creates a sense of intimacy that is fraudulent, leads to very superficial interactions amongst people
    • Should there be state secrets at all? Governments classify EVERYTHING as Secret or Top Secret, because of their unwillingness to be transparent
    • The elites decide on our behalf.
    • The elites change the conversation as quickly as possible to the theoretical risk of having a free press
    • Almost NOTHING is concerned with the security of the population; the population is the enemy, and they are not supposed to know what the government or corporates are doing
    • The US does not want you to know that the real battle is about world domination by the US doctrine
    • The trade-off between security and privacy is a false dichotomy
    • It leads to the illusion of democracy
  • On the European attacks:
    • Mass surveillance does not have ANY concrete results against terrorism
    • “When you collect everything, you understand nothing”, “you are blinded by the noise”
    • But if mass surveillance does not work for terrorism, it must work for something… What is it good for then? It is about setting and policing our policies and marking anything that is not conforming as suspicious
    • The resources are misallocated to mass surveillance instead of addressing the root causes
  • On the Apple – FBI case
    • The FBI “wants it all” – all communications between human beings – in other words “wants to kill privacy”. They want access to everything, even your private conversations in between the safe four walls of your home.
    • The common Orwell interpretation is “if you live in a society where you are always being watched, you lose freedom”. But that was an interpretation. What Orwell really wrote was “… a world where we COULD be watched at any moment…”
    • In such a world, you have to act AS IF you were being watched all the time, not knowing whether the surveillance device is operating, watching you, or whether somebody on the other side is doing something with the information collected
    • Who should be permitted to hold secrets: The citizens ? The governments? The corporations?
    • The content of the San Bernardino calls HAD already been given to the authorities (through the service providers)
    • By unlocking the phone, they would now also have access to the metadata
    • “Private citizens” should have full transparency on “Public officials”
    • The emerging culture is the opposite: Public officials’ activities are becoming more and more opaque, while Private citizens’ activities are becoming more public
  • On Journalism
    • “What is non-objective is significant” with respect to journalism and framing
    • A lot of journalistic framing follows from their own obedience to the framework of conformity that they learned at our best schools in the world (Oxford, Cambridge, etc.)
    • We have to continue to reveal things that should never have been concealed in the first place

On the same day of the Conversation on Privacy in Tucson, there was an interview with US Secretary of State John Kerry on Canvas (Flemish Television).


The theme of that interview was “the need for an integrated system of information exchange to increase security”, and the fact that some countries have reservations about such systems – specifically referring to Edward Snowden.

Some extracts of what John Kerry said (I tried hard not to put things out of context):

“It is fair to say that in a number of countries, partly because of mister Edward Snowden, and the history, people had a reservation about doing some of these things, because they felt that might be an invasion of privacy”.

“I don’t worry about my privacy. The fact that I am getting on an airplane – if I were not flying in a military airplane now, but if I am flying in a civilian airplane which I was doing as a senator – I don’t care if they know if I am on that plane; because I am obeying the law.”

“So I think people have to relax a little bit and understand that there are plenty of ways to protect your privacy without creating greater danger in society at large.”

“I do know that you (Belgium) have a federal system, I know you have a fairly decentralized system,…. And I remember the difficulties we had in the US between federal authority, state authority, and local authority and the movement of information. So, we’ve streamlined much of that now.”

“It is up to Belgium to decide what it should do, but I would urge Belgium and all European countries to create a more integrated flow of information so that we can protect ourselves more effectively”

“And I would say to every citizen that there is a way to do that and still protect people’s legitimate privacy. There is absolutely a way to do that, and we’ve proven it and we’ve lived with it.”

To be honest, I could not believe my ears when watching this interview. If you have done even a little bit of homework on the topic of privacy, you too would revolt against some of these platitudes, which are in the same category as “If you have nothing to hide, you have nothing to fear”. The journalist in question missed the opportunity to push back on Kerry and to offer a more comprehensive framing of the issues on the table.

It seems to me that the underlying theme in all of this is a cultural tipping point from “when public controls private” to “when private controls public”.

Which of course stands in stark contrast with the idealistic visions of a fully distributed society: that, too, is a big illusion, because in any system where there is power to be re-distributed, some bigger players like governments and corporations will try to take advantage and create monopolies.

One could discuss what “control” means in this context, and I believe it is related to setting, dictating, manipulating and policing our set of norms and behaviours.

Although the conversation in Tucson mainly addresses the way western (read US) politics are run, the whole reasoning is applicable to any other belief system that evolves towards totalitarianism.

Evgeny Morozov was razor-sharp in this week’s “The state has lost control: tech firms now run western politics“:

The only solution that seems plausible is by having our political leaders transfer even more responsibility for problem-solving, from matters of welfare to matters of warfare, to Silicon Valley.

This might produce immense gains in efficiency but would this also not aggravate the democratic deficit that already plagues our public institutions? Sure, it would – but the crisis of democratic capitalism seems so acute that it has dropped any pretension to being democratic; hence the proliferation of euphemisms to describe the new normal (with Angela Merkel’s “market-conformed democracy” probably being the most popular one).

The “need for an integrated system of information exchange to increase security” leads to a corporate and government surveillance state. Artificial intelligence tech firms and powerhouses start penetrating every segment of industry, also financial services.

@suitpossum was spot on with his great post this week on “The dark side of digital finance: On financial machines, financial robots & financial AI”, about machines controlling the “body” of the organization. @suitpossum has a great way of articulating how AI and robots are gradually robbing us of our personal agency.

The issue is whether they collectively imprison people in digital infrastructures that increasingly undermine personal agency and replace it with coded, inflexible bureaucracy; or whether they truly offer forms of ‘democratisation’.

I start calling this “The Illusion of Agency” and it will be the topic of one of my upcoming talks and associated blog posts.

There are several ways our policy makers can react to the attacks:

  • One way is to choose confrontation: to step up reaction and retaliation, thereby reinforcing the agenda set by the attackers to undermine our way of living. Hitting back includes these “integrated systems” and access to encrypted data as suggested in the British Investigatory Powers Bill. See also this great NYT article on the topic
  • Another way is to use our resources to address the root causes of all this: the disrespect and outright military attacks by the western powers on non-western cultures and economies, not in the interest of the security of their populations but in an attempt to protect the economic and power interests of an elite.

But as public becomes more and more private, and private becomes public, and knowing who is in power, I am rather pessimistic and afraid that they – not we – will choose confrontation.

In the meantime – as I said in the beginning of this post – I invite you to listen to the full conversation on privacy, so you get some other perspectives than the obvious and populist ones you can pick up in the mainstream press and television news programs.

The Myth of Collaboration

Rogier Noort just published a post on his site, in large part based on an interview he did with me during the Enterprise 2.0 Summit in Paris in February of this year. Rogier’s original title for the post was “Collaboration: Salvation or Myth”. It’s a great post, and Rogier clearly took pains to reflect on our conversation. I would label it “The Myth of Collaboration”. Some people call my point of view blasphemy in a period where everything has to be “social”, “working together” and “collaboration and hacking spaces”. So be it. I just felt there was something deeply wrong about it, and Rogier did an awesome job of articulating my thoughts. I have copied the text in its entirety, and just added the usual colour emphasis.

+++ Start Rogier’s post +++

Collaboration is an important part of productivity. It’s a highly desired commodity, but seemingly more elusive than you might think.., and it cannot be forced.

The other day my wife saw a message from an old colleague.., they’re moving her to a flex desk. “Now, I’m no longer allowed to place a photo of my grandchildren on my desk”, is what she said.

Her work is routine, she’s not allowed to work from home, needs no collaboration, won’t hop from desk to desk, and nobody will wander in looking for a place to work.., in other words.., that particular department does not need flexible workspaces. What they need is a working environment where an employee feels comfortable, secure and relaxed. A place where it’s okay to have a picture of your grandchildren on your desk.

This message reminded me of a conversation I had with Peter Vander Auwera about this very topic. I didn’t know quite how to put this in a post, until now.

The Key to Success

There is a wide variety of approaches to SocBiz, or Enterprise 2.0, some say the business goals have to be aligned to social, or we need to measure everything first, or we have to have a Digital Village first… others take a more tangible approach. A more non-virtual one. They reshuffle the physical space people work in.., the office floor.

Collaboration is the key to success.., so.., we create a (physical) working environment where collaboration is as easy as raising your hand and asking a question. Serendipity is guaranteed because people have no fixed desk, so you never know who you’re going to sit next to.

The Myth

According to Peter “[the office space] has been designed to enhance collaboration… working with each other across departments.”

The myth is, you have to collaborate all the time.

But, not everybody operates that way. As far as I’m concerned, I like my work area quiet. I need focus to concentrate, and more often than not, my work needs to be accurate and creative. Two things I can (or need to) do alone, no collaboration is needed.

For Peter it’s the same; “I don’t function that way… I need time on my own to think.”

Collaboration is Not Happening

Peter explains his view further; “When you sit with other colleagues around a “collaboration” table.., I hardly see any collaboration. Everybody still works in their own zone, because they have work to do. It just doesn’t happen.”

This happens when culture and progressive ideas clash. You can’t force people into a collaborative state of mind. Reshuffling desks, open up the floor, and taking away personal offices does not guarantee collaboration.., it just doesn’t.

I’m sure at some companies, for some departments this approach can do wonders. But, we should judge the merit of such huge changes on any specific floor/office/department/company.

You could simply ask employees their stand on such a high impact change.

Personal Space

“The other aspect has to do with physical space and emotional space. When working in a collaborative space I have the feeling my privacy is disturbed. At any time somebody can come up behind you and look over your shoulder.., it feels like a sort of surveillance.”, Peter says.

“It’s difficult to articulate, because I have nothing to hide, in fact, I have a lot of things to share. The idea of collaboration has the opposite effect, it doesn’t invite me to collaborate with the people who look over my shoulder. Because I feel they are intruding in my privacy zone, my creativity zone.”

The idea that anybody can criticise your work at any time can be a great hindrance. This is not just in the physical space, but can also occur in a collaborative on-line space. When I’m working on something, a blogpost for instance, I like to write a great deal, preferably all the way to the end with a revision or two, before I let anybody read it.

This is my process, the way I want to work.., I do not want any input, suggestions or comments until I’m good and well ready for them.

More about working in peace can be read in “Silence, I’m Painting“, an article by Peter on his personal blog.

Inspiration

… or lack thereof. Most people in the office have nothing or very little to do with your work. The chance of having exactly that person that you need come sit next to you in an open floor space is quite slim.

The odds of serendipity (fortuitous happenstance or pleasant surprise) are against you, against us. Even if you plan and scheme everything to enhance those chances.

Inspiration therefore is one of those things we seek out. We connect with those people who can help us move beyond a certain point.., everything else is just noise.

Controversial

Peter worries about this attitude sounding arrogant. Knowing Peter.., this is far from what is happening.

What’s really happening is that, at times, we should stop and think, reflect on the changes we’re trying to make, and the goals we want to achieve. Despite the fact there are a lot of talented people out there with a great number of good ideas, we cannot, and should not, just apply them. This goes for collaboration, but also hierarchy, job titles, software.., you name it.

Social business, The New Way of Working.., or whatever you want to call it.., is NOT generic. There is no One-Size-Fits-All. Not only does this apply to every company, but also to each department and each individual. To generalise, automate, or standardise this idea works as well as trying to fit every person into exactly the same suit.

Balance

Like any other undertaking, regardless of what it is, for it to have long term success, there has to be balance.

An office should provide spaces for all sorts of productivity styles. Employees should be involved in the design, their opinions should drive the change. After all, it is they who do the work.

 Thank you Peter for the insights and challenging us to think.

Peter is a creative thinker, creator and sensemaker. Co-initiator of Corporate Rebels United, a movement to unite corporate rebels worldwide to ensure that true change happens virally. Charter Member of Change Agents Worldwide.


Edit: Richard Martin (@IndaleGenesis) pointed me to this wonderful video made by Dave Coplin (@DCoplin). It really adds to the points made in the post. It’s only 9 minutes, I encourage you to watch it.

+++ End Rogier’s post +++

My Ultimate Cyborg Makeover

I have been away for a while. Many of you thought I was on sabbatical leave, but that was just a smokescreen for a much more dramatic makeover and re-invention of myself. I decided to become a true cyborg.


Zuck was onto something when he decided to acquire Oculus for $1.9B earlier this month: blurring the virtual world with the physical world to tap into the enormous opportunity of virtual experiences. But I believe he did not follow his thinking through to the end. You see, the Oculus is “only” one-directional, giving you the input of virtual worlds. What if you could also give back and share back into the virtual world? The ultimate sharing economy?

That’s why I recently decided to become an angel investor in a small start-up from Ukraine called “The Fishery”. We are really in stealth mode, so I can’t say too much about it. But we are applying the lean startup methodology and we now have our first MVP (Minimal Viable Product) that we are starting to iterate with our celebrity customers. I hope you will understand I can’t share names at this stage.


Whereas products such as FitBit, Jawbone and others focus on QS (Quantified Self), we believe that with the Fishery we are entering the space of the Qualified Self – it’s about depth and quality, not quantity. We are still hesitating about the name of the product: something between “Fishbit” and “iFish”. Indeed, what we are doing is starting to fish in the deep oceans of the subconscious and the unconscious, where data and the human species become integrally one and holistic.

For quite some time, I was a big believer in so called “Personal Data Stores”: tools for the user that allow us to decide ourselves which pieces of our data we share with what vendor in what particular transaction context. But I realized that this only covers the data that we share intentionally. It does not cover data that we share non-intentionally (like the signals from our SIM cards), or data that are collected in surveillance and co-veillance scenarios.

So why not bite the bullet, accept that privacy is dead, and move into the realm of extreme transparency? And what if we could just plainly connect our own human brain to the internet, create a distributed peer-to-peer exchange of human brainpower, and start to keep a human ledger that is cryptographically secured and trusted? This goes way beyond the Minority Report scenarios (after all, a film more than a decade old). In this case, you only have to start thinking about something you would do, and hop! It would be immediately shared and algorithmically processed by the hive of connected brains. Of course, we’d have to make some major changes to legislation and regulation, but that can be overcome; it has been done before.

Anyway, last week I was back in our labs in Ukraine, and I volunteered to become the first test case for the latest beta version of our Fishbit.

Petervan with Fishbit

What you see in the picture is me on the lab bed, right after the 3-hour operation. The little brick on my chest is the prototype of the Fishbit. About 35 wires are connected to different sensors on my brain, my heart, my blood pressure, my lungs, skin, my legs, arms, etc.: it’s a true virtual and “brick”-and-mortar tricorder of all my physical and mental sensations and experiences, not only at the cognitive level, but more importantly also tracking and tracing the sub- and unconscious activities of my brain and body.

The Fishbit of course has a number of well-documented open APIs, as this is clearly a platform play where developers can let their creativity explode on thousands of apps tapping into my body, mind, and soul. And to fully bite the bullet of transparency and surveillance, we have added a couple of more secret “dark” APIs to give direct access to governments and other trustworthy organizations looking after the greater good of society at large. But I am deviating.

The mask and the tube are there to add extra oxygen and creative gases, because the sensations are so strong that I need to breathe much more consciously to let my heart pump more oxygen into the bloodstream. I can tune the tube, for example per season or month: in April I get an extra dose of laughing gas, and in May some smell of spring blossoms to bring me back to my ’60s hippie memories.

One of the earlier versions had an API with Twitter that made it much easier for me to tweet. I just had to think “tweet”, and hop, there were 140 characters describing what I had spotted in the 2,500 RSS feeds that I follow on a daily basis.

But now we can go a lot further

Jung Man and his Symbols

Many of you know that I am a deep expert in the works of Carl Jung, especially his Book of Dreams, Man and His Symbols, and his work on the Self, the Archetypes, and the personal and the collective unconscious.


Illustration from the book: “Jung, a very short introduction” by Anthony Stevens

What we discovered with Fishbit is that sharing as we know it on Facebook, Twitter, etc. is so… well, outdated. If we reflect on Jung, this sort of FB-sharing only addresses the outer shell of who we are, the ego. In many cases that ego is made up and self-created, and by no means reflects our deeper selves and motivations. Now, with Fishbit, we can tap into that power.

Now, I can share my dreams as they happen. The Fishbit sensors sense when I am entering my REM sleep and can capture my dreams, and in the preferences I can set whether I want my dream to be shared as a literal transcript, as a film scenario or as a piece of poetry.

Now, I can connect my collective and personal unconscious to the grid, and share with vendors my really true subconscious needs, so they can shoot better ads at me, the target. Finally! Indeed, as my hero Frank Zappa used to say: “without deviation from the norm, progress is not possible.”


And is it not progress when now, for the first time, data, dualism, humanism and the deep unconscious merge into an exciting melting pot with unseen business opportunities in the medium and long term? I hope you share my enthusiasm for this wonderful new world. Welcome to the world of Fishbit. Welcome to my ultimate cyborg make-over.

UPDATE: obviously this post was related to its fishy publication date. Thanks for your reactions of concern about my health, I am doing 100% fine 😉

The future rarely arrives when planned

The title for this blog post comes from a 2010 talk by Mark Pesce. He adds to it:

it rarely arrives in the form that we expect

it is too hard to grasp, a bridge too far

the seeds of the future are always with us in the present

I have referred to Mark Pesce many times already in my previous posts.

He keeps inspiring me, with his challenging content and his oratory skills. And yes, I am trying very hard to get Mark to one or more of our main Innotribe events as a core anchor/igniter of some of our conversations.

I also recommend my readers have a look at some of his recent work, especially on “hypereconomics”, Flexible Futures, and last but not least his upcoming book “The Next Billion Seconds”. The chapters of the book are being released on an almost weekly basis, and here are some of the catchy titles:

  • Initiation
  • Introduction
  • Articulation
  • Replication
  • Duration
  • Revelation
  • Revolution
  • Origin

It reads like an “Origin of Species”, looking back and projecting us into the future of the Next Billion Seconds, aka roughly the next thirty years. A fascinating read indeed.

But I wanted to use his 2010 talk as guidance for some of the work our Innotribe team is doing in our incubation project called the “Digital Asset Grid” (DAG).

In this talk, Mark Pesce addresses a group of Human Services folks and health officials. Although it is about health, I encourage you to listen with holistic ears, as everything he says is applicable to any vertical.

The talk is titled “When I am 64” and is looking forward 17 years from 2010. The “64” is a wordplay on the famous Beatles song.

Here is the link to the first part of the talk. The talk was split into 3 separate videos.

I will avoid the temptation to do a verbatim transcript, and will just use a couple of quotes to illustrate my own points.

Highlights first video

Somewhere half-way through, Mark Pesce mentions how his team went open source with their 3D markup language and how surprised they were by the amazing ideas people came up with on what they could do with it.

  • He mentions an Austrian project that made a 3D encyclopedia, like a tree of knowledge, and
  • a 3D visualization of NYSE stock data.

The latter makes it possible to see 5,000 times more information than one could see with the standard flatlanders’ Bloomberg terminal. Mind you, this was in 1997, which is now 15 years ago.

My lessons learned for DAG:

  • The DAG story is a story of value propositions. That is what the prototype we are building will focus on. It is NOT a technology showcase.
  • We play with the idea of an open source DAG server. There is some hesitation. We should not hesitate. We should look at it like IBM looked at Apache Server at the time. Our core competence is to operate a highly available, secure and resilient infrastructure, probably less so building server software. We know more than me.
  • There is so much innovation in the ecosystem. Our current thinking is to open up the APIs of the infrastructure in a controlled way, so that banks and other 3rd parties can be on the bleeding edge of innovation.
  • In the longer term, this whole concept of stream-servers makes me think a lot about the Metacurrency.org software project of Art Brock and Eric Harris-Braun. The idea is to build a basic communication layer to be able to deal with stream-scapes.

I can assure you that “streams” and “scapes” will be commonplace in a few years’ time. Another very cool initiative in this space is Nova Spivack’s latest start-up BottleNose.

Highlights second video

It really gets interesting when Mark Pesce starts unfolding how the power of our communities shapes our behavior. Somewhere around minute 09:10, Mark develops an extremely interesting banking scenario:

  • Imagine someone steals your identity, walks into a bank, and takes out a loan in your name (if they are able to present the proper documentation)
  • The problem is that once you present stolen proof documents at the entry of the process, the process usually kicks off perfectly and delivers the programmed results
  • Better would be to be vouched for by others, by your community. “An identity that is confined and constrained by those you are connected to”, by your on-line context
  • At minute 10:35, Mark suggests

that you should be able to hand the bank your social graph!

You really would expect your bank to be able to write some piece of software which could confirm your identity

A bank validating your identity strength based on who vouched for you!!!

This really comes very-very close to some of the use cases we have in mind for DAG.

This would result in a system with greater resilience, much harder to fool (see the toy sketch after this list), because:

  • Identity is a function of community
  • And not just identity: even TALENT is a function of, and a value recognized by, a community
  • The social graph is the foundation of identity
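
As a back-of-the-envelope illustration of “identity strength based on who vouched for you”, here is a toy sketch. The vouch list, the trust weights and the acceptance threshold are all invented for illustration; they are not anything Mark Pesce or the DAG project actually specified.

```python
# Toy "identity as a function of community": score an applicant by who vouches for them.
# The vouches, weights and threshold are invented for illustration only.
vouches = [
    # (voucher, applicant, trust the bank already places in the voucher)
    ("alice",   "applicant", 0.9),   # long-standing, well-known customer
    ("bob",     "applicant", 0.6),
    ("mallory", "applicant", 0.1),   # barely known account, barely weighted
]

def identity_strength(applicant: str, vouches: list) -> float:
    """Sum the trust carried by everyone vouching for this applicant."""
    return sum(weight for _voucher, subject, weight in vouches if subject == applicant)

score = identity_strength("applicant", vouches)
print(round(score, 2), "identity accepted" if score >= 1.5 else "ask for more proof")
```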

In my opinion, all this is leading towards “interest based connections”.

The relationship economy – the reason why REXpedition is so important – is the next battlefield of competition, now that most organizations have squeezed all the juice out of SixSigma, Lean, and similar programs for increasing productivity and efficiency.

  • The focus of these programs was on doing better what we already did (sometimes doing bad things better)
  • Now it’s about doing new things, the right things. And those right things have everything to do with better managing our trustful relationships

Therefore, Mark’s thesis that “a group of well connected highly empowered individuals is a force to be reckoned with” describes one of the biggest forces in place. It always has been, but it is now returning in force thanks to our hyper-connectivity and information abundance.

Highlights third video

This part, entitled “Senior Concessions”, really got my attention when Mark Pesce started talking about “Personal Broadcasting”, networks of trust and the sharing of social graphs.

Sharing social graphs will enable us to identify who brings real value, who brings insight, who brings wisdom. And also those who seek to confuse, who are confused, or who are self-seeking.

This smells very much like reputation and influence like:

  • the reputation score in eBay
  • the thinking of Andreas Weigend from the Stanford Social Data Lab
  • Doc Searls VRM (Vendor Relationship Management) thinking
  • Drummond Reed’s Social Vouching start-up connect.me with its underlying Respect Trust Framework.

Mark continues with how the boundaries of expertise are becoming more and more fuzzy. The patient now often knows more than the specialist. The student knows more than the teacher. It reminded me of one of the first books I read about fuzzy logic, by Bart Kosko in 1994: “The New Science of Fuzzy Logic”.

Reading that book so very early in my career was probably meant to be part of my life and my purpose.

Anyway, Pesce puts the patient at the center, just like Doc Searls puts the user at the center of his user-centric intention economy.

In my opinion, banks have a similarly huge opportunity to put the customer back at the center and offer unprecedented, high-quality data services.

And Mark Pesce goes on:

  • This is about a user-centric “social” graph
  • Knowledge will pass from one user to another (similar to John Hagel’s knowledge flows)
  • As knowledge is passed on to the community, the community empowers itself
  • The person acts as the agent of his own data, deciding who gets access
  • Privacy of medical data is about making these data freely available to those who need them in context, while keeping them secret from those who do not
  • This only works if the person has agency over his data, authorizes access to his (medical) records, and has tools to track that access and to grant or release it (see the sketch after this list)
  • Without those tools we will lose track of who owns what, and it becomes easier for those who shouldn’t to take a look
  • As our medical records spread through our networks of medical expertise, we will feel less fear and be more willing to surrender our privacy
  • There is power in releasing our privacy, because we gain connections
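As a thought experiment (my own, not something Pesce describes), that “agency plus tracking” idea could be as simple as a small registry in which the person grants, revokes, and audits access to each record. Every name below is made up.

```python
# Hypothetical sketch of "person as agent of his own data":
# the owner grants and revokes access, and every lookup is logged.
from datetime import datetime, timezone

class PersonalRecordStore:
    def __init__(self, owner: str):
        self.owner = owner
        self._grants: dict = {}    # record_id -> set of parties with access
        self.audit_log: list = []  # (action, record_id, party, timestamp)

    def grant(self, record_id: str, party: str) -> None:
        self._grants.setdefault(record_id, set()).add(party)
        self._log("grant", record_id, party)

    def revoke(self, record_id: str, party: str) -> None:
        self._grants.get(record_id, set()).discard(party)
        self._log("revoke", record_id, party)

    def read(self, record_id: str, party: str) -> bool:
        allowed = party in self._grants.get(record_id, set())
        self._log("read-allowed" if allowed else "read-denied", record_id, party)
        return allowed

    def _log(self, action: str, record_id: str, party: str) -> None:
        self.audit_log.append((action, record_id, party, datetime.now(timezone.utc)))

# The patient decides who sees the medical record, and can see who looked:
store = PersonalRecordStore(owner="patient-123")
store.grant("mri-2011-09", "dr-jones")
store.read("mri-2011-09", "dr-jones")      # allowed, and logged
store.read("mri-2011-09", "insurer-xyz")   # denied, and logged as well
store.revoke("mri-2011-09", "dr-jones")
```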

It’s almost going back to the 1999 Cluetrain Manifesto of Doc Searls and others, where the authors declare in one of their 95 theses that “markets are conversations”.

It’s also going back to Buckminster Fuller’s geodesic domes, where each individual element is weak, but the combined structure is stable.

As a matter of fact, the 3D space of the geodesic dome perfectly illustrates what the DAG is all about. Look at it as a certified map of where the data are located, together with their associated usage rights. Sharing as a utility. P2P sharing on a certified pointing infrastructure. It moves us from a Flatlander’s 2D view of the physical world to the 3D thinking of the graph. That is what the DAG is really all about.

I put this blog together during one of my weeks off, weeks that are completely un-planned and un-structured. For me these are weeks in which I refresh my brain; new ideas pop up during moments of organized boredom. You could call them my Boredom Weeks.

It can therefore not be a coincidence that Mark Pesce ends with a reference to Genevieve Bell, Intel Fellow and director of the Interaction and Experience Research Group within Intel Labs. On that very same day, I received a tweet from one of my followers referring to Genevieve Bell’s TEDxSydney 2011 talk on boredom.

The video basically illustrates that ideas come at moments you don’t expect, when you are not focused, when you have these blissful moments of boredom. It’s back to the start and the title of this blog post: “The future rarely arrives when planned and it rarely arrives in the form that we expect”.

I can already see how DAG will take off in unexpected directions. And we are just at the start of the prototype phase. Exciting times!

@petervan from the #innotribe team

Digital Asset Grid: Let’s meet at the SWIFT Dance Hall

This post is the fifth in a series on personal digital identity. Part 1, “The unpolished diamond”, was published here in August 2010, and Part 2, “The Digital Identity Tuner”, was published here in September 2010. Part 3, “Personal Data Something”, was published here in December 2010. And Part 4, “Austin-Munich-Toronto”, was published here in February 2011.

Drawing by Hugh MacLeod (@gapingvoid) during the Innotribe Deep Dive on Digital Identity, Sibos Toronto, September 2011.

That was February 2011. Since then a lot has happened. I have had so many rich discussions, met so many fascinating new people, and been inspired by a deluge of new ideas on digital identity. And my employer SWIFT gave the go-ahead for an incubation project on digital identity that is now called the “Digital Asset Grid”.

As I mentioned in my Innotribe Sibos report, the Digital Asset Grid (DAG) is important because:

  • We are moving from the money bank to the digital (asset) bank
  • The DAG is an infrastructure play for SWIFT: a certified pointer system that points at the location of digital assets and their associated usage rights (a toy sketch of such a pointer follows this list)
  • It’s an economic imperative for SWIFT to expose its core competence via APIs
  • The DAG is a huge opportunity for SWIFT to be a key infrastructure player, offering an end-to-end hardened infrastructure and end-points that enable the seamless exchange of any sort of digital asset between any number of entities
  • This is also a huge opportunity for financial institutions to plug in to this infrastructure and offer a new set of services in the data-leverage space, in un-regulated data marketplaces
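To picture what such a certified pointer might contain, here is a rough sketch. It assumes the pointer is simply a signed record stating where an asset lives and what the bearer may do with it; the field names are my own, and the HMAC below is a stand-in for whatever PKI-based certification the real grid would use.

```python
# Hypothetical sketch of a certified pointer: a signed statement of where a
# digital asset lives and what the bearer may do with it. Field names and the
# HMAC-based "certification" are illustrative only (a real grid would use PKI).
import hashlib
import hmac
import json

NETWORK_SIGNING_KEY = b"demo-key-held-by-the-grid-operator"

def certify_pointer(asset_location: str, owner: str, usage_rights: list) -> dict:
    """Build a pointer record and attach the grid operator's signature."""
    record = {
        "asset_location": asset_location,  # where the data stays; it is never moved
        "owner": owner,
        "usage_rights": usage_rights,      # e.g. ["read", "audit"]
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(NETWORK_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_pointer(record: dict) -> bool:
    """Anyone on the grid can check that the pointer has not been tampered with."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(NETWORK_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

pointer = certify_pointer("https://bank-a.example/assets/kyc-dossier-42",
                          owner="bank-a", usage_rights=["read", "audit"])
assert verify_pointer(pointer)
```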

For me, digital identity is so much more than your log-in, or an account number backed by a Know-Your-Customer (KYC) process, or yet another user ID/password or security token.

I look at it as a spectrum. Just as the spectrum analysis of a star uniquely identifies it, you can imagine a spectrum for the digital identity of a person:

The digital identity spectrum is everything from PKI, account numbers, and log-ins to addresses, attributes, history, preferences, biometrics, reputation, risk profile, intentions, signals, and so on, all of it in transaction and time context.
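Purely as an illustration (this is not a proposed schema), such a spectrum could be pictured as a single record that bundles these attribute families together with the transaction and time context in which they were observed.

```python
# Illustrative only: a rough data model for one "identity spectrum" snapshot,
# anchored to a transaction and a point in time. Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IdentitySpectrum:
    subject_id: str                  # PKI handle, account number, log-in, ...
    observed_at: datetime            # the time context
    transaction_context: str         # e.g. "loan-application", "payment"
    attributes: dict = field(default_factory=dict)   # address, age bracket, ...
    history: list = field(default_factory=list)      # prior interactions
    preferences: dict = field(default_factory=dict)
    biometrics: dict = field(default_factory=dict)
    reputation: float = 0.0
    risk_profile: str = "unknown"
    signals: list = field(default_factory=list)      # intentions, recent signals

snapshot = IdentitySpectrum(
    subject_id="account-0042",
    observed_at=datetime(2011, 11, 1, 9, 0),
    transaction_context="loan-application",
    attributes={"country": "BE", "age_over_18": True},
    reputation=0.72,
)
```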

It’s no coincidence that Facebook recently announced Timeline. Identity in time context leads to an identity spectrum that is unique at any given moment. And yes, you will be able to play it backwards like a movie, but also forwards, for trend analysis and forecasting.

VRM (Vendor Relationship Management) is about sharing specific parts of my spectrum with specific vendor(s) in specific transaction context(s). In the Digital Asset Grid project we asked ourselves:

“What if we could apply the VRM principles not only to personal data but to any content, to any piece of information, to ANY digital asset?”

You could then start thinking about sharing specific parts of any digital assets with specific vendor(s) in specific transaction context(s).

In essence, what we are doing is “weaving” digital content together with its associated digital rights and with who holds the rights to that content.

It’s a map of digital weavings

of digital fabrics

This is how the Digital Asset Grid was born.

Is this not too consumer-oriented for an organization like SWIFT? I believe this is the wrong question. The “consumer vs. enterprise” discussion has kept us blind. The same goes for all sorts of other customer segmentations like “small-medium-large”. In the identity ubiquity game, all this segmentation is irrelevant.

We have to start thinking in terms of the different sorts of entities that participate in the identity dance. Those entities can be:

  • Person (humans)
  • Loose groups of persons (for example Google Circles) that have no legal construct
  • Commercial companies
  • Non-Profit companies
  • Governments
  • Educational institutions
  • Programs (code)

The last one, programs, is quite fundamental. We are witnessing the blurring of the line between humans and computers. It smells like early singularity. And in this debate we should not only be concerned with how programs augment humans, but also with how humans augment programs. But that is another, more philosophical discussion; some good reading on it can be found in the book “The Most Human Human” by Brian Christian. (Amazon Affiliate link)

Back to our Digital Asset Grid…

The vision of the Digital Asset Grid

is to move the SWIFT network and SWIFT services

from a closed, single-purpose, and messaging-based system

to an open, general-purpose, API-based system

It’s a natural evolution. That’s it. No disruption. No “next big thing”.

Just apply our core competency, out of band, to the modern age of connectivity. Instead of destabilizing the market with disruptive innovations, provide the basic infrastructure that is missing for a global transaction-based platform on the Internet.

Of course, the vision is grand, with plenty of innovative elements and thinking. Here are some examples of how we move from the traditional identity “space” to the new “Digital Identity Grid”

I would like to zoom in on one of the bullet points above: from one-way request-response to a full-duplex dance.

The web – a collection of pages – is based on some simple request-response mechanisms. I request a page and the server responds and gives me the page. End of that transaction.

With the dataweb – a collection of digital assets with associated usage rights – we will need something that lets the exchanging entities perform a dance around and with the digital assets. And we want to be sure that they are who they say they are, and that they hold the right usage rights to those digital assets. So we move from a two-dimensional view of the world (in computer terms, a “table”) to a multi-dimensional view (in computer terms, a “graph”).
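A toy contrast, under obvious simplifications of my own, may help: the same “who may do what with which asset” fact expressed as a flat table row versus as typed edges in a graph that can keep growing in new dimensions.

```python
# Toy illustration of "table vs. graph" thinking; everything here is made up.

# The flat, two-dimensional view: one fixed row shape per fact.
table = [
    # (requesting_party, asset, right)
    ("bank-b", "payment-instructions-17", "read"),
]

# The multi-dimensional view: entities and assets are nodes, while usage
# rights, vouches, certifications, ... are typed edges that can be added
# without redesigning the schema.
right = ("bank-b", "may-read", "payment-instructions-17")
graph = {
    "nodes": {"bank-a", "bank-b", "payment-instructions-17", "swift-grid"},
    "edges": [
        ("bank-a", "owns", "payment-instructions-17"),
        right,
        ("swift-grid", "certifies", right),  # a statement about another statement
    ],
}

# Walking the graph answers questions the flat table cannot express directly,
# e.g. "is this usage right itself certified by the grid?"
print(("swift-grid", "certifies", right) in graph["edges"])  # True
```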

The Digital Asset Data Web is the next phase in the evolution of important internet stuff. It’s probably what comes next in the following series:

To continue the dance metaphor, the SWIFT infrastructure is the Dance Hall where entities meet to perform certain specific dances.

One of the many use cases for the Digital Asset Grid would be to solve compliance. Instead of moving messages from A to B, we keep the data where they are and “point” to them with SWIFT-certified pointers that state where the data are located and what the associated usage rights are.

The full-duplex dance protocol for this use case, from the opening of the dance (a “webhook” in technical terms), to the actual pick-up of the content, to the closing of the dance and everything in between, could look something like this (a toy simulation of the exchange follows the list):

  • PartyA: “hey, I am sending a signal that I wanna dance the tango (slang for payment instructions) with any party in the SWIFT dance hall at 9pm”
  • PartyB: “yep, I wanna dance with you, let’s meet in the SWIFT dance hall at the bar”
  • PartyA: “ok, here we are, cool place ;-)”
  • PartyA: “Let’s get to business”
  • PartyA: “I just gave you the following rights to my payment instructions at this XRI: you have XDI pick-up rights”
  • PartyB: “ok, gotcha. Will pick them up right away”
  • PartyB: “knock knock, I am coming to fetch those payment instructions”
  • PartyA: “let’s check if you have the usage rights….”
  • PartyA: “everything looks fine, go ahead”
  • PartyB: “loading, loading, loading…”
  • PartyB: “Ok I am done”
  • PartyA: “So am I”
  • PartyB: “tomorrow, same place, same time to dance?”
  • PartyA: “would love to 😉 9pm again?”
  • PartyB: “sure, bye bye”
  • PartyA: “bye bye”
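Here is a toy, in-memory simulation of that exchange. It is only meant to show the full-duplex, two-sided nature of the dance: the message names, the Python queues, and the XRI string are all hypothetical stand-ins for real webhooks and XDI endpoints.

```python
# Toy simulation of the full-duplex "dance" above; illustration only.
# Message names, queues, and the XRI string are hypothetical stand-ins.
from queue import Queue

class Party:
    def __init__(self, name: str):
        self.name = name
        self.inbox: Queue = Queue()  # delivered messages pile up here

    def send(self, other: "Party", msg_type: str, **payload) -> None:
        print(f"{self.name} -> {other.name}: {msg_type} {payload}")
        other.inbox.put((self.name, msg_type, payload))

party_a, party_b = Party("PartyA"), Party("PartyB")

# Opening the dance: A signals intent, B accepts; both sides can talk at any time.
party_a.send(party_b, "dance-invite", dance="payment-instructions", when="21:00")
party_b.send(party_a, "dance-accept", meeting_point="SWIFT dance hall")

# A grants pick-up rights on an asset that never leaves A's own store.
party_a.send(party_b, "grant-rights", xri="@party-a*payments!17", rights=["xdi-pickup"])

# B knocks, A checks the usage rights, B fetches, and both close the dance.
party_b.send(party_a, "pickup-request", xri="@party-a*payments!17")
party_a.send(party_b, "pickup-approved")
party_b.send(party_a, "pickup-done")
party_a.send(party_b, "dance-closed")

# Every message above could be signed and logged, which is what would make
# the whole dance auditable end-to-end.
```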

And what’s really cool about it: it’s fully auditable, end-to-end.

When telling this story to one of my colleagues, I got the following reaction: “Hey, but you are changing the basic messaging paradigm of SWIFT… I am not sure that I want to support an innovation like this… one that is cutting off the branch from the tree I am sitting on…”

Here is something essential for innovation. Any innovation team in any company should not only look at nitty-gritty, small, incremental innovations, but should also

dare to be great and re-think

the cash cows of our companies

As Guy Kawasaki used to say: “the best way to innovate is to set up a company that is trying to kill your cash cow”.

All of the above is about the infrastructure role that SWIFT could play, and in that sense it is a bit of navel-gazing. The biggest opportunity in all this, however, is probably for banks, financial institutions, and new, up-and-coming, innovative financial service providers.

This is a HUGE opportunity to offer new digital services in non-regulated markets

Many examples and use cases here:

  • Personal Data Lockers, Digital Asset Lockers, Digital Asset Services (aka the Digital Bank), “who-touched-my-data” services, Personal Data Trading Platforms, Digital Asset Trading Platforms, Corporate and Bank Klout Services, audit services, tracking services, Big Data and analytics services, EBAM, Corporate Actions, etc.
  • Also e-wallets of all kinds. Not only “wallets” for money, but wallets for all sorts of digital assets. An e-wallet is nothing more than a browser on a personal money store. What if we start thinking of a browser for a personal data (asset) store?
  • And I spoke recently to one of our managers in the Securities business: there are plenty of examples there too, even in trading assets.

So far, the Digital Asset Grid has just been the result of a research project at SWIFT. Today, I am very pleased to announce that the SWIFT Incubation Team has given the green light to move this project into the prototype stage.

It means that during Q1 2012 we’ll have a working prototype targeted at a specific use case, but we will also expose the APIs of the infrastructure, put them in the hands of developers, and challenge them to come up with some cool apps that can be built on top of it.

A lot of the thinking in this blog is the condensation of the teamwork of the many, many people who participated in this Digital Asset Grid project. At the risk of leaving somebody out, I’d like to send a digital invitation signal to those people for a thank-you dance in the SWIFT Dance Hall: Mary Hodder, Kaliya Hamlin, Doc Searls, Drummond Reed, Craig Burton, Andreas Weigend, Gary Thompson, Tony Fish, and, also lurking in, Don Thibeau, Scott David, and Peter Hinssen.

I would like to say Thank you! Maybe with David Bowie’s 1983 hit “Let’s Dance”? http://www.youtube.com/watch?v=N4d7Wp9kKjA

Let’s dance put on your red shoes and dance the blues
Let’s dance to the song they’re playin’ on the radio

Let’s sway while colour lights up your face
Let’s sway sway through the crowd to an empty space

If you say run, I’ll run with you
If you say hide, we’ll hide
Because my love for you
Would break my heart in two
If you should fall
Into my arms
And tremble like a flower