Identity in the Age of AI

Although Artificial Intelligence has been a growing field for decades, its possibilities for both good and evil have become more apparent in the last few years, culminating with the launch of ChatGPT in November of 2022. It was this event that truly brought AI to the mainstream and made it a must-have topic of conversation at every dinner table and social gathering. AI applications can converse with us (e.g. ChatGPT), allow us to generate incredible images from text (e.g. Midjourney and DALL-E 2), and even let us create deep-fakes (e.g. DeepSwap) in a matter of minutes (in “A quick and sobering guide to cloning yourself”, Ethan Mollick, an Associate Professor at The Wharton School, demonstrates this). This is the case even with the mitigations and processes that GPT-4 has put in place, as detailed in the document titled “GPT-4 System Card”, that limit its effectiveness at tasks such as phishing attacks or identifying key vulnerabilities. 

With so many ways to quickly create high-quality content that could seem to come from any given person (e.g. ChatGPT can produce content in the style of Matthew McConaughey or Seinfeld), it has become more important than ever to protect yourself across all your online interactions.

There is no silver bullet to solve this issue, but in the next few paragraphs we talk about the multi-pronged approach that can move us in the right direction, and the key role that we believe IDPartner will play in it. 

A viable solution needs to address the following three key areas.

#1 Make sure that I only interact with reputable businesses and people, the ones that I intend to interact with, so that I can avoid:

  1. Phishing and social engineering attacks
  2. Deepfakes
  3. Automated social media profiles (bots) that spread misinformation

According to a survey conducted by Statista in December of 2020, 80% of US adults have consumed fake news, with about 38% of them having accidentally shared it. As AI gets better and more capable of aiding in the generation of fake news, these numbers are only likely to increase, unless we take a multi-pronged approach to help people better discern truth from lies. These prongs could include:

  • Tools to Fact-Check News: Organizations such as Newtral (the largest EU fact-checking organization) and NewsGuard (which provides tools for companies, and a browser extension for consumers, to protect themselves against misinformation) do a great job at scanning the news landscape and differentiating between stories that are fake, misleading (half-truths), and real.

    In fact, Newtral has been in the news lately as they launched ClaimHunter, a multilingual AI language model they started developing in 2020, that accelerates fact-checking by 70 to 80% versus a human-only model. It also holds the promise to make fact-checking a fully automated process. This is one of the many examples of AI being used for good.

  • Tools to check Content Provenance: Knowing how and by whom a picture or a video was generated, and if / how it has been manipulated afterwards, would be very helpful when trying to identify computer-generated images and deep-fakes.
    Here we would like to highlight the Content Authenticity Initiative, initially led by Adobe and now backed by many well-respected organizations. Their “open-source tools allow you to integrate secure provenance signals into your products.” They empower users “to share and consume tamper-evident context about changes to content over time, including identity info, types of edits used, and more.”  

  • Verified Profiles: Knowing who published the data or opinion in the first place, and also who has been part of the chain sharing it, will help me decide how credible it is, and therefore what I want to do with it.

    For example, one could expect companies and organizations to fully disclose their identity (to be fully verified) as they share content, and I may choose not to share anything coming from a company that is not fully transparent. But privacy concerns may prevent individuals from fully disclosing their identity. In this case, it may be enough to know that a profile belongs to a human (this alone could solve most of the problems on Twitter), or to have some very basic information about the person (such as country of residence). 

Human critical thinking is the last step before believing or sharing a piece of information, and we need to aid people as they make that decision. There is no silver bullet, with AI being both part of the problem and part of the solution, but this is something we can combat with better (and friendlier) tools and more awareness.

#2 Help me to always stay in control of my data - What I call the “last-inch problem”

  1. Make sure it is me that is accessing my information: Tools such as biometrics and behavioral analysis, routinely used by mobile operating systems (iOS and Android), along with many security-minded apps, are key in addressing this “last-inch problem”. They help confirm that it is really me holding the device and logging into my account, not someone who merely has my credentials or a deep-fake video playing in front of the device camera.

    Banks are a great example of companies that put much effort into this area - they are custodians of our assets, and have a great track record in protecting them. Facephi is one of the companies that banks such as Santander and HSBC use in this endeavor.

  2. Help me recover my credentials when necessary: Humans are fallible, and we lose our credentials. We need a way to safely recover them, that is, a way to make sure that it is really me requesting them, and not an impostor. This is not always easy to do, particularly in an online environment. 

#3 Protect my privacy

There are many situations where I need to share my information or authenticate myself; the most common are signing up for a service or opening an account, and signing in to a service. 

The most common ways to do this are:

  • Using unverified information with “Sign-up/in with Google” or “Sign-up/in with Facebook”. Many apps allow you to do this, but the main drawbacks are that: 
    • Bots have Google and Facebook accounts, so bots may create profiles in these apps. Certainly not ideal.
    • The user does not control the information shared between Google / Facebook and the app, at the time of sign-up and then on an ongoing basis. This potential continuous leak of information can be dangerous and, at the very least, leaves the person more exposed than necessary.

  • Using verified information by uploading documents and taking selfies. This process is used by companies such as Airbnb and eBay to identify their users and make their marketplaces safe environments for people to interact, but it has some drawbacks:
    • Even with very advanced technologies, this process can be frustrating (when the document you are uploading is not recognized) and time consuming. 
    • In addition, the user often shares more information than is strictly necessary. For example, when I upload my driver’s license, maybe the marketplace just needs to know that I am a resident of the US and over 18 years of age, not my DOB and my full address. This again is an unnecessary leak of information that can put users at risk.

  • Using shared secrets. These are pieces of information known only to the parties involved in a communication or transaction, which are used to establish trust and secure connections. Common implementations are traditional passwords, pre-shared keys and one-time passwords. 
    • Although these methods generally, and traditional passwords particularly, can feel cumbersome and not very secure on their own (many of us are guilty of forgetting our passwords, reusing them on many different sites, or even writing them on post-it notes), they can be strengthened by including additional steps in the process, such as multi-factor authentication.
    • An additional limitation is that they do not themselves allow for the exchange of information, although that could be part of the back-end handshake included in the process. 
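One-time passwords are among the most common ways to strengthen a shared secret with a second factor. As an illustration, here is a minimal sketch of the standard HOTP/TOTP algorithms (RFC 4226 and RFC 6238) using only the Python standard library; it is a simplified sketch, not a production implementation:

```python
import base64
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password (SHA-1, the common default)."""
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: HOTP over the current 30-second window."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // interval, digits)
```

Because the code is derived from the shared key and the current time window, it expires within seconds, which makes it far harder to reuse than a static password.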

There are several better ways to share information that allow it to be verified by a trusted entity, and to be shared in a privacy preserving way, such as verifiable credentials, zero knowledge proofs, or OpenID Connect for Identity Assurance. For example, I could just share that I am a resident of California, and over 18 years of age, instead of my full address and DOB. 
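To make the idea concrete, here is a toy sketch of how a wallet could derive minimal claims locally and share only those. The function and claim names are our own illustrations, not part of any specific standard:

```python
from datetime import date

def minimal_claims(dob: date, us_state: str, today: date) -> dict:
    """Derive privacy-preserving claims locally; the raw date of birth and
    full address never need to leave the user's side."""
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return {"over_18": age >= 18, "us_state": us_state}
```

The relying party learns “over 18, resident of California” and nothing more; the inputs used to derive those claims stay with the user.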

IDPartner has developed a User-Controlled Identity Marketplace that can address several of these areas

#1 Interacting with reputable businesses

IDPartner makes sure that every business joining the marketplace is fully vetted, and end-users can check that the websites they are interacting with belong to reputable businesses that are who they say they are. 

One of the elements that goes into this process leverages the work done by the Bank for International Settlements and the Global Legal Entity Identifier Foundation around LEIs and vLEIs (Legal Entity Identifiers and the corresponding verifiable LEIs). There will be a strong preference for businesses to have requested their own LEI (and vLEI as they become more ubiquitous) as a prerequisite to participate in IDPartner’s Marketplace. An LEI enables a clear and unique identification of legal entities participating in financial transactions, and only businesses that go through a thorough KYB process via a Qualified LEI / vLEI Issuer (think of the likes of Bloomberg) can be part of the IDPartner Marketplace.
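The structure of an LEI is itself machine-verifiable: ISO 17442 defines a 20-character identifier whose last two digits are ISO 7064 MOD 97-10 check digits, the same scheme IBANs use. As a sketch, here is that check in Python (the sample LEI in the usage note is Apple Inc.’s publicly listed identifier):

```python
def lei_is_valid(lei: str) -> bool:
    """Validate an ISO 17442 LEI: 20 alphanumeric characters whose
    ISO 7064 MOD 97-10 checksum (letters mapped A=10..Z=35) equals 1."""
    lei = lei.strip().upper()
    if len(lei) != 20:
        return False
    if not all("0" <= c <= "9" or "A" <= c <= "Z" for c in lei):
        return False
    # Expand letters to two-digit numbers, then take the whole value mod 97.
    as_digits = "".join(str(int(ch, 36)) for ch in lei)
    return int(as_digits) % 97 == 1
```

For example, `lei_is_valid("HWUPKR0MPOU8FGXBT394")` passes, while any single-character change to that identifier fails the checksum. Note that this only proves the identifier is well-formed; the KYB vetting behind it is what makes it trustworthy.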

But the checks put in place to verify businesses by IDPartner go further, including DNS checks and other security measures during onboarding, along with the generation of real-time security signals at transaction time designed to detect any unusual and unexpected behavior that could result from a compromised entity. 

This gives users the peace of mind that when they interact with IDPartner and its network of businesses, they are interacting with reputable and trusted entities. 

#2 Interacting with reputable people

Having an open and easy to access Identity Marketplace allows us all to decide who we will interact with across our online experiences. We may choose to only interact with those that have identified themselves as Humans on Twitter, and this may be enough for that environment. But we may choose to only interact with a fully verified identity when we are about to sign a high-value legal contract via DocuSign or Adobe. Or maybe somewhere in between when we are setting up an in-person meeting to sell something on Craigslist. 

Going from clunky and cumbersome traditional methods to verify your identity, such as uploading your ID and taking a selfie, which can result in considerable drop-off, to IDPartner’s solution that allows the process to be completed with a couple of clicks, has the potential to be a game-changer. 

#3 Last inch problem

Because we are prioritizing our work with financial institutions, users’ identities, their most precious assets, are stored and protected by the same regulated providers that protect many of their most valuable assets. Whether in the physical or digital world, they make sure it is you, and only you, that has access to your assets. 

#4 Protect my privacy

IDPartner protects end-user privacy by ensuring that verified information is only shared with reputable businesses in a privacy-preserving way - whether via verifiable credentials, zero-knowledge proofs, OpenID Connect for Identity Assurance, or the SD-JWT standard - working to limit the information shared to the needs of each specific use case, and giving the person the final say with regards to the data that is actually exchanged. This allows users to share the minimal information required to achieve their end goal.
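To give a flavor of how selective disclosure works under the hood, here is a simplified sketch of the disclosure/digest mechanism described in the SD-JWT draft. It is deliberately reduced: a real implementation signs the digests inside a JWT, while this sketch only shows the hashing step:

```python
import base64
import hashlib
import json
import secrets

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_disclosure(name: str, value) -> tuple:
    """Build an SD-JWT-style disclosure ([salt, name, value], base64url-encoded)
    and the SHA-256 digest the issuer would sign in place of the value."""
    salt = _b64url(secrets.token_bytes(16))
    disclosure = _b64url(json.dumps([salt, name, value]).encode())
    digest = _b64url(hashlib.sha256(disclosure.encode()).digest())
    return disclosure, digest

def verify_disclosure(disclosure: str, signed_digest: str) -> bool:
    """The verifier recomputes the digest from a revealed disclosure."""
    return _b64url(hashlib.sha256(disclosure.encode()).digest()) == signed_digest
```

Because the issuer signs only the digests, the holder can later reveal “over 18” to one verifier without exposing any of the other, undisclosed claims in the same credential.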

Identity is the foundation for all other activities, and IDPartner provides an extensible Identity Platform

There are many services that could be built on top of the Custodial Wallets that we enable, and more generally, on top of the IDPartner Marketplace. A couple (of the many) that we are particularly excited about are:

#1 Payments

As the saying goes “If you solve identity, everything else is just accounting.” This often-used quote suggests that once identity is effectively managed, the remaining aspects of online transactions, such as payments, become (comparatively) simple matters of accounting. And we could not agree more.

Per Visa’s “2023 Global Ecommerce Payments and Fraud Report”, globally, merchants continue to spend about one-tenth of their annual ecommerce revenue to manage payment fraud, and even after all the spend, they lose 3% of digital revenue to fraud. A 13% tax on revenue is a very heavy burden. 

With the proliferation of peer-to-peer (P2P) payment methods, and the rampant fraud associated with them, this number may very well increase. Visa+ - Visa’s new solution that aims to become the connecting infrastructure layer in the world of digital wallets and P2P apps - has the potential to exponentially increase the utility of these wallets by enabling the interoperability between them. It also has the potential to exponentially increase the fraud, unless it adds some layer of protection, or even better, privacy-protecting identity enabled directly at the infrastructure level. 

#2 AI-Powered Agents

Identity and Payment experts such as David Birch have been talking about Economic Agents for a very long time. It is great for Government Agencies, Businesses and Merchants to have AI-powered bots to interact with their customers, but the true revolution will come when customers also have their own AI-powered bots to interact with those organizations on their behalf. 

These days are not just fast approaching; they seem to have already arrived, at least as experimental models. AutoGPT is an open-source application that uses OpenAI’s API to automate the execution of multi-step projects. Its GitHub repo page describes it by saying that it “chains together LLM ‘thoughts’, to autonomously achieve whatever goal you set”. 
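The core loop behind tools like AutoGPT is conceptually simple. Here is a deliberately toy sketch of that “chain of thoughts” idea; the `plan_step` callback stands in for a real LLM API call, and none of these names come from AutoGPT itself:

```python
def run_agent(goal: str, plan_step, max_steps: int = 10) -> list:
    """Chain steps toward a goal: each step sees the history so far and
    proposes the next action, until the planner decides it is DONE."""
    history = []
    for _ in range(max_steps):
        action = plan_step(goal, history)  # in AutoGPT, an LLM call
        history.append(action)
        if action == "DONE":
            break
    return history
```

With a mock planner such as `lambda g, h: "DONE" if len(h) >= 2 else f"step {len(h)}"`, the loop produces two intermediate steps and then stops, which is the whole pattern in miniature: plan, act, observe, repeat.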

In a world where humans will be represented by AI - I honestly cannot wait to try Trish.SocialMediaPro, Trish.TrafficTicketNegotiator and Trish.TaxExpert, alongside the good-old Trish.Human - having a way to legitimize my bots as my representatives, and to distinguish them from random non-human-related bots, becomes a pressing matter. Would embedding my Human identity into them be the right path?