Identity Rights System 3.0

Next week, SWIFT Innotribe will be hosting the European eID Interoperability Conference 2010.

It’s a great agenda, with presentations by European experts on eID and also some of the smartest SWIFT folks on identity. For example, we’ll have Jacques Hagelstein, our Chief Architect, and we’ll also run an Innotribe Lab on day 2. Check out and download the PDF agenda here.

Hosting this sort of event is an interesting win-win model, where we at SWIFT can share our great meeting and auditorium facilities and at the same time dovetail with important topics that are relevant to our industry.

Acting like this beyond our traditional boundaries fits nicely with The Medici Effect that I described in my previous post, although I am not sure we at SWIFT always apply this principle with full consciousness and intent. It does not matter; the key thing is that it happens, and I feel confident that at this intersection of worlds some new ideas will emerge naturally.

Thinking through how we deal with company and personal identities in an online world, and being able to deliver this in a worldwide, predictable, resilient and secure way, is one of the key value propositions of SWIFT in the financial services ecosystem. SWIFT has the advantage – it’s a deliberate choice – that we are a community-based venture, and a lot of the services we offer adhere to standards and rulebooks that have been subscribed to by our membership. Even then, delivering this is no easy task.

But in this post, I’d like to take you on a journey beyond SWIFT’s ecosystem and edges, and look at what is happening in terms of identity and privacy outside our safe community walls.

My first contacts with privacy-related matters date back to my Microsoft period, when I was quite involved in the Belgian eID project.


Microsoft saw Belgium as a good test ground to see what happens when a country rolls out, in a mandatory way, 8 million electronic identity cards to its citizens, what applications get developed, and what needed to be done at the level of Windows, Office, MSN Chat, etc. to support an identity card issued by a third party, in this case a government. At the time, I experienced the Belgian Privacy Commission more as a pain in the neck, limiting us in doing “really cool things” with online identity. But they surely planted in my head the first seeds of a “culture” of privacy. It’s only now that I start to fully appreciate the importance of privacy, and the role of privacy commissions and the like.

Now that the Belgian eID cards are rolled out, and we are even looking at a second and third generation, the number of applications that really leverage the eID on a day-to-day basis is disappointingly low.

Already when the first eID cards were rolled out, it appeared to me that the card was a dated, old-fashioned way of dealing with identities. It makes no difference whether we are talking about a smart card, a USB token, or any other hardware device.

The point I am trying to make is that

the model of an identity “card”

no longer matches

the online realities of today

The “card” is an artifact of the physical world, and we try – in vain – to squeeze all sorts of online concepts into an offline model.

The next occasion on which I felt something was wrong with our model was when I saw the demo of Intelius Date Checker. See also my post on “privacy is dead” for more details on this application. I was shocked that nobody in the audience made any remark about the huge privacy issues at stake here. It must have been American culture?

Then, a couple of months ago, there was the famous debate launched by Mark Zuckerberg of Facebook, in which he basically suggested turning the paradigm around by 180°: instead of considering “private” as the default setting for personal data and letting users decide what data they release to whom, he suggested “public” as the default setting, forcing users to “un-public” the data they did not want to make public and wanted to keep private. See also the ReadWriteWeb coverage here. Unfortunately for Zuckerberg, around the same period there was an article about a Facebook employee revealing how much private data they have access to, for example via super-admin passwords and the like.

And even ex-colleague Paul Shetler took the trouble to scream out his frustration about why public as a default really does not make sense.

It all makes me feel very uncomfortable about how much I should believe from Mark Zuckerberg or Eric Schmidt when they behave like the white knights of privacy.

It looks to me that

privacy is out of control

and that they would like to make the death of privacy official by declaring “public” the new norm. It looks to me as if privacy has become

too complex to fix

Via Facebook, Google Buzz, Twitter, etc., there is already too much data out there. Fixing this while taking into account regional and national laws and regulations must be a real nightmare for the Facebooks of this world.

It’s an interesting debate what the default should be: privacy or publicy. And Stowe Boyd rightly adds the dimension of “sociality”, because you release some info about yourself consciously (when participating in social media, you really want people to know about you and your preferences) or passively (by blindly accepting the privacy notices on Facebook and the like). Some related info on sociality here.

This aspect of passive privacy is really well explained by David Birch. He recently wrote a whitepaper, “Who do you want to be today?”, and “Kissing Phones”. Check them out here. And just a couple of weeks ago, David wrote this fantastic post about Moving to Privacy 3.0.

And the big boys are feeling the pressure. A couple of years ago, the audience at the Gartner IT Symposium in Cannes was still having fun with “The Great Google Hack” scenario. This session was part of an “Unconventional Thinking” set of sessions with the following disclaimer from Gartner: “This research doesn’t have the full Gartner seal of approval (we call them Mavericks internally).” Today this is no longer just a scenario; it is getting very real. I am just picking one of the thousands of articles that have been written on the Google China hack, described as the privacy breach of the year.

Let’s throw in some additional dimensions, so that you, as a novice reader on this subject, really start feeling the pain.

  • What have you browsed? Interesting reflections by Microsoft’s Chief Architect of Identity on “browser fingerprints”. By the way, Kim is a confirmed speaker at the eID Interoperability Conference next week.
  • Where have you been? How your iPhone becomes a spy-phone: here and here.
  • What have you bought recently? How you can let a service like Blippy stream your purchases online.
  • Who have you slept with? Given some people’s willingness to post all their data online, and the increasingly casual nature of some behavior, this isn’t so far out of reach as to be completely ridiculous.
  • Add to this things like MIT’s Facesense, about mind-reading.
  • Body scanners, about being “sniffed out” by chemical noses.
  • Did you take your pill, and when? In essence about “body-surfing” and RFID-like tracking inside your body.
  • Please Rob Me, in essence about real-time location tracking.

Some suggested solutions for all this go in the direction of

“gatekeepers”

Trusted entities that act as a safe harbor for keeping these personal data, or even distributed models of “gatekeeper” certification.


The recent announcement at the March 2010 RSA Conference of the Open Identity Exchange (OIX) goes in this direction. Please note that this initiative is backed by industry leaders Google, PayPal, Equifax, VeriSign, Verizon, CA, and Booz Allen Hamilton.

However, I don’t think it will work, and I am not alone, although others come at it from a different perspective (see below on PETs). I think it won’t work because, in the open online world, it will not be acceptable that somebody or some company sits in the middle of all this identity hocus-pocus and controls our world. The internet has simply become way too distributed to accept this sort of model. Maybe this works in a closed community (vertical or otherwise) where users subscribe to a common set of standards and rules, but not on the open internet.

One possible route is PETs (privacy-enhancing technologies). For example, Stephan Engberg, one of the speakers at the European Commission’s December 2009 workshop, talks about security (and privacy) “in context” and seems to be a big advocate of PETs. Check out an interesting debate here.

The word “context” is very important here.

To come back to the beginning of this blog post, I believe we have to change the old eID model into a model that acknowledges that personal data is highly distributed on the net today and is dealt with “in context”.

Personal data sits everywhere, and you can really start imagining “data weavers” or “identity weavers” that combine these individual sets of personal data into new sets of relevant information, based on the context of usage.

The concept of data-weavers was already introduced in my guest blog “Digital Identity Weavers” by Gary Thompson from CLOUD, Inc.
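
To make this a bit more concrete, here is a minimal sketch of what such an “identity weaver” could look like in code. Everything in it – the attribute sources, the context policy, the class and function names – is my own illustrative assumption, not the actual CLOUD specification; the only point is that the same distributed personal data gets recombined differently depending on the context of usage.

```python
# Hypothetical sketch only: names and structure are illustrative, not the CLOUD, Inc. spec.
from dataclasses import dataclass

@dataclass
class Attribute:
    source: str   # where this piece of personal data lives (a bank, a social network, a registry)
    name: str     # e.g. "date_of_birth", "shipping_address"
    value: str

class IdentityWeaver:
    """Combines distributed personal attributes into a context-specific view."""

    # Which attributes each usage context is allowed to see (an invented, illustrative policy).
    CONTEXT_POLICY = {
        "age_verification": {"date_of_birth"},
        "parcel_delivery": {"shipping_address"},
    }

    def __init__(self, attributes):
        self.attributes = attributes

    def weave(self, context):
        allowed = self.CONTEXT_POLICY.get(context, set())
        return {a.name: a.value for a in self.attributes if a.name in allowed}

# The same person, two different contexts, two different "woven" identities.
me = IdentityWeaver([
    Attribute("government_registry", "date_of_birth", "1970-01-01"),
    Attribute("webshop_profile", "shipping_address", "1 Main Street, Brussels"),
])
print(me.weave("age_verification"))  # {'date_of_birth': '1970-01-01'}
print(me.weave("parcel_delivery"))   # {'shipping_address': '1 Main Street, Brussels'}
```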


At the risk of repeating myself: this CLOUD vision goes way beyond the web of pages, and way beyond the early thinking on the Semantic Web. It is in essence proposing an identity architecture for the internet. Because the internet is broken: it was never designed with identity in mind.

It’s about user control of personal data.

It’s about context awareness.

It’s about who I am, how I am, and

what I do and intend to do in an online world.

But we all have problems imagining how such a standard and supporting system might work.

What would it look like?

And then suddenly, last night, the pieces seemed to fall into place. What if we start thinking about this in a way similar to “Information Rights Management” (probably called something else today), something that Microsoft built as a feature into Microsoft Office, and which basically puts the user in control of what somebody else can do with his documents? Mind you, this is about “usage” rights, not access rights.

In Microsoft Office this was visualized by the “do not pass” sign.

By clicking on that icon, you – as the user – can control whether somebody can cut-and-paste from your document, whether they can print it, forward it, etc.

We need a standard that makes it possible to control and manage the usage rights of the different pieces of our personal data that are distributed over the internet. And then we need to let competition play out on how this standard gets implemented in our day-to-day tools. Maybe via a clickable icon, maybe something else. It would be great to let heads of user experience have a go at this.
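
As a thought experiment, here is a minimal sketch of what such a usage-rights description could look like. The rights vocabulary (“view”, “share”, “aggregate”, the retention period) and the default-deny check are my own assumptions, invented purely for illustration – a real standard would obviously have to be negotiated and enforced very differently.

```python
# Illustrative sketch of per-attribute usage rights for personal data.
# The vocabulary below is an assumption, not an existing standard.

USAGE_RIGHTS = {
    "email_address":    {"view": True,  "share": False, "aggregate": False, "retain_days": 30},
    "purchase_history": {"view": True,  "share": False, "aggregate": True,  "retain_days": 90},
    "location":         {"view": False, "share": False, "aggregate": False, "retain_days": 0},
}

def is_allowed(attribute: str, action: str) -> bool:
    """Check whether a given action on a piece of personal data is permitted by its owner."""
    policy = USAGE_RIGHTS.get(attribute)
    if policy is None:
        return False  # default-deny for anything the owner has not described
    return bool(policy.get(action, False))

print(is_allowed("purchase_history", "aggregate"))  # True
print(is_allowed("location", "view"))               # False
```

The design choice that matters here is the same as with the “do not pass” icon in Office: the owner of the data expresses the rights once, and every tool that touches the data is expected to honour them.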

But maybe it is too late. Maybe there is already so much data out there that there is no way to 1) find where it all is and 2) give control back to the user/owner of the data. The breach has already happened.

To conclude, get inspired by this NYT article, “Redrawing the Route to Online Privacy”:

So if the current model is broken, how can it be fixed? There are two broad answers: rules and tools.

“Getting this balance right is critical to the future of the Web, to foster innovation and economic growth,” Mr. Weitzner said.

Whatever the future of regulation, better digital tools are needed. Enhancing online privacy is a daunting research challenge that involves not only computing, but also human behavior and perception. So researchers nationwide are tackling the issue in new ways.

At Carnegie Mellon University, a group is working on what it calls “privacy nudges.” This approach taps computer science techniques like machine learning, natural language processing and text analysis, as well as disciplines like behavioral economics.

How would all this be relevant to our financial services industry? One example would be to apply semantic web technologies to Corporate Actions. For folks at SWIFT it’s pretty obvious that we can apply our semantic knowledge to the data in the “messages” that are exchanged between the parties to a Corporate Action.

What seems less obvious is to apply the same semantic tagging techniques to the personal data and attributes of the persons who participate in a Corporate Action transaction.
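
Purely as an illustration of what semantic tagging of those personal attributes could mean in practice, here is a toy sketch using subject-predicate-object triples. The vocabulary and identifiers are invented for the example (ISO 20022 Corporate Actions messages define their own fields); the point is only that the participant’s attributes become machine-readable and can be pulled selectively, in context.

```python
# Toy example: semantic tagging of a Corporate Action event and one participant,
# expressed as subject-predicate-object triples. All vocabulary and identifiers are invented.

triples = [
    # the corporate action event itself
    ("ca:event/2010-0042", "rdf:type",     "ca:DividendAnnouncement"),
    ("ca:event/2010-0042", "ca:security",  "isin:EXAMPLE000001"),
    # the participant, tagged with the same technique
    ("person:jane-doe",    "rdf:type",     "ex:BeneficialOwner"),
    ("person:jane-doe",    "ex:holds",     "isin:EXAMPLE000001"),
    ("person:jane-doe",    "ex:residesIn", "country:BE"),           # drives e.g. withholding-tax context
    ("person:jane-doe",    "ex:taxStatus", "ex:ResidentTaxpayer"),
]

def attributes_for(subject, graph):
    """Return everything the graph asserts about one subject."""
    return [(predicate, obj) for subj, predicate, obj in graph if subj == subject]

# In context: a tax-reporting service would only pull the participant attributes it needs.
for predicate, obj in attributes_for("person:jane-doe", triples):
    print(predicate, obj)
```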

In essence, this is about applying the CLOUD concepts. It’s about setting new standards and rules in this space. And are standards not one of the cornerstones of SWIFT?

It would be great to build an innovation prototype to educate our community on the power of the semantic web.

I call this the “Identity Rights System 3.0”.

UPDATE: apparently the subject is red-hot at SXSW in Austin this week. Check out Danah Boyd at SXSW: “Privacy is not dead”.

7 thoughts on “Identity Rights System 3.0”

  1. Great to see you care.

    However, you should keep Einstein’s words in mind here.

    “We cannot solve our problems with the same thinking we used when we created them.”

    Gatekeepers and interests in controlling people, transactions and devices for power, profits or both created the problem – more of this will only make the problem worse.

    As one of the main gatekeepers and the monopoly arm of a strong cartel structure, SWIFT is part of the problem, creating problems for everybody else by forcing identification and market-unbalancing gatekeeper controls into transactions – more gatekeeper power will only make it harder to restore market balance.

    IRM, DRM, or whatever the latest PR term is, cannot do anything but severely worsen the problem. It has been tried and will continue to fail – Sticky Policies, DRM for PRM, and P3P are just some of the many forms that have failed and will continue to fail.

    Why? At least three reasons. First, the media market has failed to protect large data structures with DRM, so small data sets will fail even worse. Second, nobody can or will spend the energy to meaningfully express their out-of-context consent ex ante, predicting all the kinds of possible abuse. And third, DRM involves extremely strong gatekeepers that will abuse the structures.

    My personal favorite example is the illegal order sent with DRM – the sender can prove you got the order and fire you if you fail to comply, but you cannot prove that you got the order, and thereby you take the fall if the unlawfulness is uncovered. Only non-democracies need that kind of structure.

    Nothing can remedy this problem except opening the gatekeeper controls / eliminating the restrictions.

    Our challenge is moving to preventive security instead of surveillance-style security. You start by solving the problem of how market transactions can pay in a context-isolated way, i.e. without ANY Trusted Third Party.

    We know the technologies to do so – digital cash has been invented – the problem is the non-legitimate interests in power and control. We can solve the technical aspects, but not the fact that we have strong cartel structures (not only SWIFT but also EMV) eliminating competition and innovation in the payment systems.

    Our problem is that we are suffering from Command & Control Economics – not much different from the problems of former Eastern Europe, China before 1980 etc.

    The “community” is a committee of interests forcing gatekeepers (themselves) upon the market to favour themselves.

    “You can choose which Lord you want, but not whether you are going to have a Lord, nor whether this Lord will expose you to maximise the value of his Gatekeeper position over you.”

    So, back to your problem of restoring privacy and markets: your question is not how to solve the problems elsewhere, but how to stop creating problems elsewhere.

    How do you redesign the SWIFT structure so secure legitimate payments can run through the network?

    The requirement is very simple – the system, even with all servicing entities collaborating, may not be able to distinguish between two transactions with the same stakeholder and two transactions with two different stakeholders (belonging to the same large group).

    That is, UNLESS some conditional event occurs – such as someone trying to spend the same money twice, failing to pay the next installment on a loan, or … Preventive security is about dealing with these problems WITHOUT violating the primary constraint.

    Trust is “accepting risk”, and we do not want to accept risk – not in financial terms and not in other kinds of security. I.e. if we want to build trust, we should reduce risk (make the deal better).

    It is not a question of whether there need to be services like SWIFT and others in the infrastructure. Security by Design means, among other things, changing service providers from “trusted” third parties into “trustworthy” third parties.

    But the thought of an “Information Rights System 3.0” will worsen the problem without solving anything. You will have to trust some people (e.g. your doctor) but not the System (e.g. all doctors or any server). But that is OK – because isn’t that exactly how it should be?

  2. Peter

    One additional comment

    You write:
    the model of an identity “card”
    no longer matches
    the online realities of today

    Agree entirely

    The purpose of a National Id system and Id Card has changed. As I stated in my presentation at the EU DG Justice Conference:

    Purpose of National Id 2.0:
    Ensure citizens can establish and maintain
    a new pseudonymous context
    and negotiate & adapt to the specific application
    independently of previous or future transactions

    Click to access ENGBERG_Stephan.pdf

    In other words – the purpose of an Identity Card is to AVOID identification entirely, by establishing the structural platform for maintaining multiple contextual identities that are related, but not necessarily referable, to the same citizen, unless an optional and conditional accountability has been negotiated as part of the identity creation and interaction.

    As a simple example, consider your interaction with your doctor or your lawyer – you need to be able to create a pseudonymous context in which you can always forward an encrypted message to your advisor, signed with a PKI digital signature.

    The lesson is simple – server-side identification erodes security for all, because the servers integrate and cannot be trusted.

    We ALSO need to strengthen client-side key control against malware etc. in order to be trustworthy to counterparts. And that is exactly the purpose of National Id 2.0 and client-side – never server-side – single sign-on.
