
Layers of Trust

🎧 Creating data provenance standards to combat misinformation and human rights crises (45 minutes with transcript).
Published on Sep 19, 2022

In this podcast episode, Jacobo Castellanos, the Technology Threats and Opportunities Associate at Witness, talks with me about tackling misinformation using digital specifications that add layers of information and trust to images and videos. We met at RightsCon, where he led a session about “Tackling misinformation with authenticity and provenance infrastructure that works for all.” Read the description of the session:

With consolidated efforts like Adobe's Content Authenticity Initiative (CAI) or the Coalition for Content Provenance and Authenticity (C2PA) leading towards a more widespread and potentially systematic use of provenance and authenticity infrastructure, efforts to understand their impact, to avert and mitigate potential harms, and to bolster their use to enable rather than disempower critical voices are required. As these frameworks and tools are designed, developed and deployed, it is all the more important to include in these processes voices from a broad range of different lived, practical and technical experiences coming from all parts of the world and acting across areas that include civic media, human rights, mis/disinformation, activism, technology advocacy and accountability, and digital rights.

To promote participation from a diverse range of people in this session but also in broader discussions around provenance and authenticity infrastructure, this session will first offer a brief introduction of key provenance and authenticity initiatives and scenarios, and then host an open discussion on stakeholder-centric concerns and opportunities (e.g. content provenance and authenticity in social media and its impact on community, civic and/or independent media).

After the session, I wanted to know more about how the specifications were designed and implemented, so I reached out to Jacobo. Here we talk about the design principles, accessibility, pitfalls and limitations, and the hope of using technology, even deep fakes, to address issues of misinformation and human rights crises.

SUMMARY KEYWORDS: provenance, trust, images, tools, information, misused, system, communities, and metadata.


Sarah Kearns 00:18

We met briefly at RightsCon earlier this year, when you hosted a discussion on tackling misinformation via metadata provenance in images. But maybe before we jump into all of that, you could start with an introduction to yourself, and then the organizations that you're working with, the C2PA and Witness.

Jacobo Castellanos 00:38

Great. Yes. So my name is Jacobo Castellanos, and I work with Witness. Witness is a human rights organization using video and technology, working with communities globally, to protect and defend human rights. I am specifically part of the technology threats and opportunities program. One of the issues that we cover in this program is that of provenance. By provenance, we understand the source and the history of digital media: images, videos. And as part of these efforts, one of the spaces that we are working in is the C2PA, the Coalition for Content Provenance and Authenticity.

Witness is a member of the C2PA, and as such, I participate in the technical working group developing these specifications, and I'll get to them in a second, I suppose. One of the areas of concern that we have is how images and videos are used to enhance processes of human rights. They could be within legal processes, for example, as evidence of human rights violations, but also within broader processes of awareness, accountability, and narrative building. And so we've found in working with a lot of partners that video, images, and technology in general are a huge part of these efforts. It's not just a risk; it's been happening to a certain extent that a lot of these images and videos, the information that's coming out of communities, is undermined or dismissed for a variety of reasons. Witness has been thinking about the different ways in which we could add a layer of certainty, perhaps, or more authenticity, so that this information is taken into consideration in a variety of these processes. I think another area where Witness is coming at this issue from is just recognizing that misinformation and disinformation are a growing challenge online, and how this plays into these mechanisms for communities and the content that they create.

Read more about the partnership here:

Sarah Kearns 03:18

So it kind of sounds like you're saying people don't believe information that's coming out of areas that are undergoing human rights crises, because it's just labeled as disinformation when it's real.

And probably the other way, too, where fake things that are not real are treated as real, because people can't really discern whether or not a video or an image is real. At least, I feel like every time I log on to Twitter, there's something like, “oh, this video was fabricated,” or “no, this video is from another point in time and isn't current.” Is that sort of what you're speaking to?

Jacobo Castellanos 04:00

For sure. And then also, you know, I think that with the growing use of synthetic media, this is even more challenging, right? Because now it's not just a challenge of dealing with cheap fakes or shallow fakes, which are images and videos that have been slightly edited or put out of context. Now it's also dealing with content that is synthetically created and manipulated, and that is often not easy to recognize, and less and less easy to recognize, by the naked eye and even by computers and different algorithms.

So it is in this ecosystem, and already recognizing the enormous challenge of misinformation and the challenges that especially marginalized communities face in putting their content out, that we've been part of these efforts, as one of the mechanisms by which the truth from these partners and these communities is taken for what it is.

The C2PA is this coalition of companies mostly, and a couple of organizations, news media organizations as well. It's led by a group of eight companies: Adobe, Twitter, Microsoft, Intel, Arm, BBC, Sony, and I may be missing one or two more. These companies are leading a process that includes other member organizations, one of which is Witness, creating standards, the specifications, by which you can add metadata of sorts to the images and videos that we see. But not metadata in the same sense that we know it today. It's metadata that has features that, in a certain way, secure the information that is added, as opposed to the metadata that we have now, which we can change and alter quite easily.

Generating Trust through Technology

From the perspective of a creator of content and the perspective of a viewer, we can't have a lot of trust in the metadata that we see, just recognizing that it's so easy to change. What this system does is create specifications so that you could add a layer of trust, if we could put it in those terms, where if this information is changed, it leaves a trace of sorts. So it's a tamper-evident system. This is at the specifications level, which means that companies and organizations could implement it into their tools and their services.

And the idea is that this could be used across a whole workflow. From the moment that somebody takes a picture, it could be included in the hardware of a camera, or it could be added as a third-party service. Then it could go into editing software, and it could go into publishing software, so that when it gets to the viewer, there could be an icon, for example, that says click here for more provenance information, and you get the history of what has happened to it. And the idea is that, based on this information, viewers could have a better sense of whether they want to trust something or not, and those content creators, perhaps, are able to add a layer of trust to the content that they create.

Sarah Kearns 07:56

So I guess I'm curious as to how that layer of trust is authenticated, or made tamper-proof, because that sounds very challenging.

Jacobo Castellanos 08:10

So I could speak to some degree about the technical elements behind that. What it does is create a cryptographic connection. It adds a hash that connects, first of all, the image with this claim, with this provenance information that you're adding, so that you know that this provenance, this history that you're reading, is in fact connected to this particular image or this particular video. And the second element of security is that it's signed. So there has to be a signer, right? The whole trust model of the C2PA is based on whether or not you trust whoever is signing it. I think that's important to mention because, you know, we don't believe in technological or technical solutions exclusively. When it comes to trusting, or having a higher degree of certainty about what you're seeing or what you're interpreting, it's still a question of trust. And trust is still, essentially, I think, a question of human relationships. How that is translated onto the web is a big question, and the C2PA, I would say, is not trying to create trust, but is just leveraging existing relationships of trust.
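To make those two elements concrete, here is a minimal sketch in Python. This is not the actual C2PA implementation or data format, which is far more involved; the structure and names here are illustrative assumptions. It hashes the image bytes, binds that hash into a provenance claim, and signs the claim, so that any later change to either the image or the claim becomes detectable:

```python
# Minimal illustration of hash-binding plus signing (NOT the real C2PA format).
# Requires: pip install cryptography
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signer_key = Ed25519PrivateKey.generate()  # stands in for a signer's credential

def make_claim(image_bytes: bytes, provenance: dict) -> dict:
    """Bind provenance info to the exact image bytes, then sign the bundle."""
    claim = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),  # cryptographic link
        "provenance": provenance,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": signer_key.sign(payload).hex()}

def verify_claim(image_bytes: bytes, signed: dict, public_key) -> bool:
    """Tamper-evidence: fails if the image OR the claim was altered."""
    claim = signed["claim"]
    if hashlib.sha256(image_bytes).hexdigest() != claim["image_sha256"]:
        return False  # image no longer matches the claim it travels with
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(signed["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # claim was altered after signing

image = b"...raw image bytes..."
signed = make_claim(image, {"creator": "jane@example.org", "action": "captured"})
print(verify_claim(image, signed, signer_key.public_key()))         # True
print(verify_claim(image + b"x", signed, signer_key.public_key()))  # False
```

The point of the last two lines is exactly what Jacobo describes: tampering is not prevented, it is made evident, and believing an intact claim still comes down to trusting its signer.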

Sarah Kearns 09:34

That makes sense. You're using the technology of cryptography and hashes. And I'm not necessarily a technical expert on this myself either, but it sounds like you're using that technology to at least highlight, to create that tamper-evident data, to be like: this image was taken by this person in this location, and do with that what you will?

Jacobo Castellanos 10:01

When I mentioned that there's a cryptographic link between the image and the information, I don't necessarily mean that this includes blockchain technology. What this does is just give you information that could be tampered with, but you trust whoever is signing this piece of information, so then you'd perhaps be more inclined to trust it.

The information that you're sharing, this metadata that you're sharing, is defined by the signer. So it doesn't necessarily need to be written automatically. You could, depending on how you implement the specifications, have the creator change all of the information and say, “Look, I'm going to change what I said here and put something else, but then I'll sign it. And if you trust me, then you might be inclined to believe it. And if you don't, then you could question it as much as you'd question any other information that you see online.”
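Continuing the hypothetical sketch above, this is what signer-defined metadata means in practice: the creator is free to rewrite the provenance fields, but the rewrite only stands once it is re-signed, and whether a verifier believes it reduces to whether they trust that signer's key:

```python
# The creator rewrites the provenance, then re-signs (hypothetical helpers from above).
edited = {"creator": "jane@example.org", "action": "captured", "location": "withheld"}
resigned = make_claim(image, edited)    # a new, validly signed statement
print(verify_claim(image, resigned, signer_key.public_key()))  # True: trust the signer?

# Tampering WITHOUT re-signing is what the system makes evident:
resigned["claim"]["provenance"]["creator"] = "someone-else"
print(verify_claim(image, resigned, signer_key.public_key()))  # False: signature broken
```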

C2PA architecture and workflow. Source.

Sarah Kearns 11:29

Well, so it sounds like that track-change history would follow that image. Say you have an image, and then the signer would do something to it, and then sign it again?

Jacobo Castellanos 11:47

A little bit like that. So I could walk through a use case example.

Sarah Kearns 11:52

Yeah, maybe that'd be helpful!

Jacobo Castellanos 11:54

It could be a human rights defender, it could be a civic journalist, it could be anybody that has somehow enabled this system on their phone. When, in a protest or in whatever place, they take a video of police violence, then, depending on how their system is implemented, this journalist could have the power to say, “I've taken this video, but because there's sensitive information, I don't necessarily want to share it all with the public.” So the journalist can go into this metadata box and change certain items, perhaps hiding the location just because it could be sensitive to them as the journalist, or to whoever appears in that video. Then the journalist could have that claim signed. As part of the editing process, it could go to the news site, where the news organization might add or remove more information, again with a C2PA-enabled system, and then publish that. By the time that it gets to the viewers, they'd see whatever information, at every stage of this workflow, the signer wanted to share.
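Here is a rough sketch of that workflow, continuing in the same hypothetical Python. The real C2PA data model chains manifests and "ingredients" in a standardized format, so treat this shape as an assumption: each stage signs its own entry and links back to the previous one by hash, which is what lets a viewer replay the history:

```python
# Illustrative capture -> edit -> publish provenance chain (hypothetical shape).
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def add_stage(chain, signer_name, key, info):
    """Append a signed entry that points back at the previous entry's hash."""
    prev = (hashlib.sha256(json.dumps(chain[-1], sort_keys=True).encode()).hexdigest()
            if chain else None)
    entry = {"signer": signer_name, "info": info, "prev": prev}
    sig = key.sign(json.dumps(entry, sort_keys=True).encode()).hex()
    return chain + [{"entry": entry, "signature": sig}]

journalist_key = Ed25519PrivateKey.generate()
newsroom_key = Ed25519PrivateKey.generate()

chain = add_stage([], "journalist", journalist_key,
                  {"action": "captured", "location": "REDACTED"})  # sensitive field withheld
chain = add_stage(chain, "newsroom", newsroom_key,
                  {"action": "edited", "detail": "trimmed and color-corrected"})

for link in chain:  # roughly what a viewer's "click for provenance" could surface
    print(link["entry"]["signer"], "->", link["entry"]["info"])
```

Note that the redaction happens by the journalist's choice at signing time: what the viewer eventually sees is exactly what each signer decided to disclose at their stage.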

Sarah Kearns 13:07

We've thrown the word provenance around. That's also a museum studies term that describes how the data about an artifact is tracked as it's moved across different museums. At least, that's how I'm more familiar with the term provenance, in that sort of context. So it almost sounds like that: once a news consumer sees the video, they can sort of see those track changes, like this signer made this information available, and the news organization made this information available.

Jacobo Castellanos 13:55

I hadn't heard it from that museum context, but I think it'll be good to look at; it might help explain it. But yes, that's essentially it.

Sarah Kearns 14:04

Something else I'm curious about, because it sounds like you're saying that this technology would have to be implemented or embedded before the journalist goes out into the field. So with these types of systems and standards (and we should probably go back and talk about what some of the standards are, but before we do that), I'm curious whether these standards would work on older images. What if there's something that doesn't have that metadata? Is that something that you'd be able to still track in a meaningful way?

Jacobo Castellanos 14:40

Yeah, so the way that it's designed, it would work, especially considering this trust model that I mentioned before. You could add this information to older images and videos and just sign it, and it doesn't change the trust model. If you trust the person, the entity, the system that signs it, then you could just decide whether to believe it or not. So it is definitely accessible and could be used for images that predate the creation of this system.

I think what's also important to highlight is that these are open specifications, which means that if you go to their site, you could actually read the specs. The idea is that anybody that has the resources, and the capacity really, and a tool or service to implement it into, could do so, right?

The specs have been designed to cater to a lot of different industries, a lot of different stakeholders, and a lot of different types of use cases. The idea is that anybody that is interested in adding this provenance and authenticity system into their tools will be able to do so by incorporating the specs. One of the cases where this could be used is definitely adding provenance information to older images and videos.
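For a sense of what implementing the specs means on the consuming side, here is the verifier's core decision, sketched in the same hypothetical Python as the chain example above. The real specification involves X.509 certificates, trust lists, timestamps, and much more; this only shows the shape of the logic: recompute and compare, check the signature, then make a trust decision about the signer:

```python
# Sketch of a verifier's decision (hypothetical; the real spec is far richer).
import json
from cryptography.exceptions import InvalidSignature

TRUSTED_SIGNERS = {"journalist", "newsroom"}  # stand-in for a real trust list

def viewer_accepts(link, public_keys):
    entry = link["entry"]
    payload = json.dumps(entry, sort_keys=True).encode()
    try:
        public_keys[entry["signer"]].verify(bytes.fromhex(link["signature"]), payload)
    except (InvalidSignature, KeyError):
        return False  # tampered with, or signer unknown to us
    # Cryptography only proves WHO said it and that it wasn't altered;
    # whether to believe them is still a human trust decision.
    return entry["signer"] in TRUSTED_SIGNERS
```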

Check out the specs here:

Sarah Kearns 16:12

I guess, how easy are those specs to implement? You've described places where there are human rights violations, and I might imagine that there isn't easy access to Adobe there, for example. I feel like that software is fairly expensive and cost-prohibitive to most people. But you also said that they are open standards, so maybe you could describe that a bit.

Jacobo Castellanos 16:47

Yeah, they're open standards. And the question of accessibility is really one that we have focused on. Just to give more background to our work within the specs: Witness has been covering several strands of work, and one of them is carrying out a harms assessment. The idea behind this harms assessment is to try to identify the ways in which these specifications could be misused, abused, or even used in their intended way but cause harm by that.

There are various ways to look at the question of accessibility. One of them is the one you mentioned: how accessible is it to implement these specifications into tools and services? The fact is, right now, if you do go to their website and open up the specs, you'll find that they are incredibly technical. Even for developers that have a lot of experience, implementing these systems into their tools will be very complicated. So there's a question about the extent to which they'd have the capacity or the resources to do it. It should be said that Adobe, for example, has put out an open-source toolkit to facilitate this process.

Witness has been thinking about these questions too: how accessible is it to implement in the spaces where we think that this could be useful? If we think that this is useful for human rights defenders, how do we make sure that there are tools accessible to human rights defenders that incorporate these specs? And we are aware of the privacy considerations, and other considerations, that need to be borne in mind as these tools are created. There are other questions about accessibility as well. For example, how accessible is the system from a user experience perspective? How much are you able to effectively control the information that you share? Could you use these systems on older operating systems? Could they be used in places where there's not necessarily enough access to the internet? So part of our work is definitely to think about how we make it accessible to those that could use it.

Sarah Kearns 19:59

What are some cases where it's been used successfully?

Jacobo Castellanos 20:09

I'm not sure, because this is very much at an exploratory stage right now, at least as a widespread system. Within the human rights ecosystem, it should be said, it is my understanding that that is where this idea of provenance started originally. There are tools that have been created within this ecosystem looking to cater to the human rights community, such as ProofMode from The Guardian Project. Then there are parallel efforts, such as eyeWitness and other tools that are out there.

What is happening now is that we're seeing a push away from these niche tools, and more towards a systemic use of provenance and authenticity systems that are interoperable and that operate across workflows. So I'm not sure that we could say there are successful examples of the implementation of this, because it's just such an early stage. It's important to think about how we could push it towards these successful cases. But also, just to be wary that before we continue to push for a systemic use of provenance and authenticity, there are a lot of hurdles to cross in terms of, as we talked about before, accessibility, privacy, and justice, amongst other issues.

Standards and Principles

Sarah Kearns 21:49

So I feel like now would be a good time to go back and talk about some of the standards that you are implementing in these systems.

Jacobo Castellanos 22:47

It'd be better to start off with some of the principles around the standard specifications.

So one of the principles is that this be opt-in. From our perspective, and also from the C2PA's as it's now been published, it's important that it be opt-in, because there are many cases we could imagine where using provenance and authenticity tools not only may not be possible due to accessibility issues, but may also not be a good idea, right? If you live in a country where there are no checks on government surveillance and control, then, you know, using these could be risky. And so the standards start with the idea that they should be opt-in.

Another of the guiding principles is that we keep an eye out for privacy across this workflow of capturing provenance information. We are concerned with the idea of inadvertent disclosure of information. We are concerned with ways in which this could be misused by malicious actors to capture more information, or just to stifle freedom of expression, amongst others. So those are some of the guiding principles. Now, speaking more about the specs themselves: I mentioned before a little bit about how it works. The idea is that this system works from the capture stage all the way to the publishing and sharing stage. At each stage, you're able to add this information that is cryptographically verified, that is signed, and that transfers across to the next stage of the pipeline of this image and video.

Sarah Kearns 25:12

So when you're making these standards and specs, how were they agreed upon within the C2PA?

Jacobo Castellanos 25:22

That's an interesting question, because it is, at the end of the day, a standards development body, at least in how it operates and how decisions are made. The first thing to say about these bodies is that it is often the case that they're not entirely diverse and representative of different geographic locations, experiences, cultures, languages, and genders. I think it's important to note that the C2PA is not necessarily an exception.

One thing to note about these bodies in general is that it is necessary to think about how we incorporate input from a diverse set of actors, even if they're not directly involved. To a certain extent, it's difficult to get direct involvement, because they are very technical bodies. So Witness is focused on going to the communities and having them be part of this technical working group's process indirectly: connecting back with communities on these issues and being able to capture some of their concerns so they are reflected in the specs.

Now, within the C2PA itself, Witness is part of the technical working group, but there are also other working groups. There's a steering committee, and there's a communications committee that looks at other questions, but the one where Witness participates is really the technical working group. There are a number of representatives from different member organizations that participate in weekly meetings, where we discuss pending issues and come to decisions based on consensus.

Working with Communities to Build Trust

Sarah Kearns 27:48

Which communities are you reaching out to, as Witness, to get that feedback for the working group? And what has some of that particular feedback been?

Jacobo Castellanos 27:58

Even before joining the C2PA, Witness had already led a couple of regional convenings around the Prepare, Don't Panic campaign on deep fakes and synthetic media. Though deep fakes already do pose certain risks, especially around sexual and gender-based violence, we're not at that stage where they are at the top of the list of priorities of the communities that we seek to serve and support. But we did have a couple of convenings to discuss how deep fakes could impact them, what threats need to be considered, what potential solutions exist, and what opportunities could be enabled for these communities. And as part of that, the idea of provenance was already incorporated into these discussions.

And then once we did join the C2PA, we also led other regional convenings in Asia, in Africa, and in Latin America, and another with a focus on Brazil. We've also hosted thematic meetings focused on digital rights, human rights, and disinformation. The idea was to have these conversations with the partners that we work with and for, and present them with the work that is happening, in order to share what's happening but also to capture some of the concerns that they've had.

Government misuse came up a lot in these conversations. There are many places where we could foresee that, for all the good intentions that transparent data provenance may have, it will be misused by governments and used as a tool to stifle freedom of speech. There are laws that have already been enacted in certain places that seem to be pushing this idea that you could require, for example, journalists to sign the information that they post online with individual identifiers, so that everything posted online by certain journalists is directly connected to a specific person. That could certainly be problematic.

Another concern is one that was described in our harms assessment, epistemic injustice, which essentially addresses how this could strengthen existing relationships of power. We can imagine that this sort of system could potentially be implemented by social media for algorithmic ranking. So if you do include a C2PA manifest in an image on Twitter, then it is potentially possible that this image would have a higher ranking algorithmically, which makes it more accessible, so that more people can see it. But in that case, what happens to those that don't have a C2PA manifest? Are they going to be less seen, or are they going to be less credible? And what happens if these tools are not necessarily available to everyone? Or available but not desirable? So those are just two examples of how it could strengthen existing relationships of power, and how it could be misused by governments.
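That ranking worry can be stated very compactly. The sketch below is purely hypothetical platform behavior, not anything the C2PA specifies, but it shows how even a well-intentioned boost creates a two-tier system:

```python
# Hypothetical ranking rule; NOT actual platform or C2PA behavior.
def rank_score(post: dict) -> float:
    base = post["engagement"]
    # A boost for provenance-bearing content implicitly penalizes everyone
    # without access to (or without a safe way to use) C2PA-enabled tools.
    return base * 1.5 if post.get("has_c2pa_manifest") else base

posts = [
    {"author": "verified newsroom", "engagement": 100, "has_c2pa_manifest": True},
    {"author": "grassroots witness", "engagement": 100},  # same reach, no manifest
]
for p in sorted(posts, key=rank_score, reverse=True):
    print(p["author"], rank_score(p))
```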

Sarah Kearns 31:45

Yeah, you mentioned this earlier when we talked a little bit about creating infrastructures of trust. I feel like what you're saying there ties back into that: how to build a technology that people feel comfortable and safe using for the benefit of creating truth. That just sounds so challenging.

I guess I'm just curious what you think about the media landscape now, and how you see these tools counteracting the current challenges. I think it's good to acknowledge the limitations, and I think it's good to be aware of, or abreast of, how the technology can be used or misused. So how are you building around those loopholes? Or how are you still building community trust, despite all those fears?

Jacobo Castellanos 33:01

So yeah, the current landscape of information online is very challenging, and the future doesn't look so bright, at least from where I'm standing. It seems to get more and more complicated. One way that I think I could perhaps answer your question is by looking at the example of synthetic media and deep fakes. These tools that have emerged have really blown up, especially around the conversation about AI-created images with tools like Midjourney, for example.

So how this is affecting the online landscape is that it is further blurring the lines between what is real and what is synthetic, and also, perhaps, making it harder to trust anything that we see online. This is not to say that synthetic media and deep fakes are all bad. There are definitely a lot of good use cases, and I'd love to speak a little bit about that, too. But the fact is that it does complicate matters if we don't think about the potential impact they could have, and how we could look to strengthen or leverage relationships of trust and fortify truth.

So to answer your question, the way that provenance fits into this, we're thinking, is that it could be one of the mechanisms we use so that when we see something online that has been synthetically created, we're able to recognize it by accessing its provenance. And this is different to other ways by which we could tackle synthetic media, such as, for example, labeling. When you label synthetic media, it is something much more conspicuous. For example, if you've played around with DALL·E 2, they put that bar of colors at the bottom right that tells you, okay, this has been synthetically created. You could also use a watermark, you could use icons, you could use an explicit message that this has been labeled. But some of the questions we have when thinking about this more explicit way of communicating that something is synthetically created or manipulated is that it raises questions about freedom of expression, and about the impact this could have on artistic, creative, and satirical creations.

And so we've been thinking about the possibility of provenance tools being used as another form of disclosing that something is synthetically created or manipulated, without necessarily affecting the experience of an artistic expression, for example. It's something where you'd have to dig into a second level of analysis to see it, which comes with its own risks and its own questions, but it's definitely another mechanism that could be put into place when thinking about synthetic media.
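As a sketch of what that less conspicuous disclosure could look like: C2PA manifests carry assertions, and synthetic origin can be disclosed through fields like IPTC's digitalSourceType vocabulary. The exact shape below is a loose approximation of those conventions, not the normative format:

```python
# Approximate shape of a provenance assertion disclosing synthetic origin
# (modeled loosely on C2PA/IPTC conventions; treat field names as assumptions).
synthetic_disclosure = {
    "label": "c2pa.actions",
    "data": {
        "actions": [{
            "action": "c2pa.created",
            "digitalSourceType":
                "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
        }]
    },
}
# Unlike a watermark or a colored bar, this rides in the metadata layer:
# the image itself is untouched, and the disclosure appears only when a
# viewer digs into that "second level" of provenance information.
```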

Sarah Kearns 36:14

I guess you mentioned that there are good uses of synthetic media. I'm kind of curious as to what examples you have.

Jacobo Castellanos 36:20

Yeah, I think that's just important to mention, because there's a narrative that tends to be shared a lot online about all the potential risks and all the harm that is already happening due to synthetic media. Especially around sexual and gender-based violence, deep fakes have been a big issue, and there's been a lot of coverage around that, and around the use of deep fakes in elections. I think more and more, there are examples in which this could be misused. There have been, for example, cases of phishing, and of deep fakes being used in hiring processes to infiltrate certain organizations.

What this narrative is missing is that synthetic media also has a lot of potentially good uses, right? Just one that comes to the top of my head: there's this video where David Beckham, the football/soccer player, is speaking, I think, nine languages to bring attention to the issues around malaria. So it could be used to raise awareness for social causes. There's an example here from Mexico about a journalist that was murdered; a deep fake was created of him, raising awareness of the insecurity that journalists face in Mexico. So it definitely could be used for a lot of good cases.

David Beckham speaks nine languages to launch Malaria Must Die Voice Petition

See the full playlist of deep fakes for social justice HERE.

When we think about dealing with synthetic media, when we think about misinformation online more generally, and when we think about fortifying the truth for marginalized communities and human rights defenders, we're definitely thinking about ways in which this could be enhanced in different ways.

Sarah Kearns 38:50

I like what you said about reusing information and turning it into art or comedy, and how that's not necessarily bad, to reuse something in a different way.

What future are you envisioning if these tools and standards that you're making are implemented correctly?

Jacobo Castellanos 39:48

So, okay, it's a long way to that. First of all, I'd say that, you know, we don't know if this is going to get a foothold and take off. It's certainly starting to be used a lot, and I think it's important to recognize that there are companies with a lot of power and a lot of influence that are part of this. So, you know, definitely, tools that are already used by millions of people will start using provenance and authenticity. What we're doing now is getting an early start, getting a foot in the door early, to make sure that some of the concerns are raised from the start. Right.

Sarah Kearns 40:30

Yeah, right. You mentioned that you're working with Twitter and the BBC.

Jacobo Castellanos 40:36

Yes, and as far as I understand, they've already been testing some of these technologies in their systems. And Adobe already has a beta version that is out for Photoshop, I believe. So it's certainly already going to be used by a lot of people, and we can imagine that this may very well take off. If it does, then, you know, part of our efforts is just to make sure that from the get-go there are these design principles, these parameters that shape the conversation, that there's a human rights framework surrounding this conversation.

The future that we hope for is just that the truth is fortified. That is an ambiguous way to put it, because what is truth? Certainly, thinking about the communities that we seek to serve and support, thinking about human rights defenders, thinking about journalists, and about grassroots activists and marginalized communities in general, whose voices are already forgotten or dismissed: how do we fortify their message? How do we fortify their truth?

If there's a vision, we hope that provenance tools could serve to put these voices front and center, so that they cannot be undermined and dismissed as they are now. On the flip side, this could also help tackle some of the misinformation and some of the malicious information that we're seeing online.

Even though this is very much at an early stage, I think that one area Witness is looking at is talking to as many people as possible that are usually not part of these conversations, so that they could be part of it. Insofar as we can talk to more people, from more places, with more experiences and more backgrounds, it could better shape our work, and it could better inform what we do and what happens within these spaces as this ecosystem evolves.

Sarah Kearns 43:10

How could more people get involved in that process?

Jacobo Castellanos 43:14

Witness will be organizing regional convenings throughout the next year in communities globally: we'll have one in Brazil, another in Asia, and potentially another in Africa. So it could be through a regional lens: if you're focusing on work within a regional space and this is something that interests you, perhaps from a privacy perspective, a digital rights perspective, a journalist's perspective, whatever perspective you work from that you think is relevant, reach out to us at Witness, and we'd be happy to keep those conversations going. But otherwise, just see all the work that we're sharing online, and what's happening within the C2PA space at the c2pa.org website.

Sarah Kearns 44:14

Cool. Well, that's really exciting. And I understand that there's still a lot to come and be built, but I appreciate you taking the time to talk with me about it. I think this is a really interesting and important project. So I wish you the best of luck and I'm excited to see what comes of it.

Jacobo Castellanos 44:35

Thanks, Sarah. Thank you.
