Natalia Domagala on fighting for transparent AI, the power of algorithms, climate change and more

At Mozilla, we know we can’t create a better future alone. That is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Natalia Domagala, an advocate and global digital policy specialist fighting to make technology work for people and societies. We talked with Natalia about the power of algorithms, her favorite work projects, climate change, fighting misinformation and more.

The first question that I kind of wanted to ask you about was algorithms. I know you do a lot of work in that space. What do you think people overlook the most when it comes to knowing about algorithms on the internet?

Domagala: I think it’s perhaps the fact that these algorithms actually exist, because those of us working in this field know about them, but most people don’t. I think an average internet user never actually questions: what happens at the back end? How does the internet actually function? I don’t think many people ask themselves those questions. And then they browse for a new item that they want to buy, and when they go online the next day, they see a list of similar items suggested within their browser, or they open their social media account and suddenly see all the relevant ads. I think a lot of people just think that this is some sort of magic, that suddenly the computer knows what they need and what they want. I think it’s key to educate people about how algorithms are being used, and to actually tell them that they are being used, and that what you see online doesn’t just magically appear there. It’s there because there are systems that are scraping your data, then analyzing it and feeding it back to you in a way that, for the most part, encourages you to buy something or give up more of your data as well.

What do you think are some easy ways that people can become more knowledgeable and become a little bit smarter about algorithms, and also data?

So I think the first thing is using a secure browser, one that doesn’t necessarily store your data. I think the same goes for the tools and apps that we’re using to communicate. So using apps that have higher standards of privacy, apps that don’t actually store your data or use it for anything. Another big one is not linking your accounts. I know this is quite inconvenient, because the way that the internet has developed is that now you can just log into so many services using one login — one social media portal, one website for everything — and that again creates a kind of feedback loop with our data that’s not very privacy-friendly. I think also using incognito mode, that could be a quick solution. The one that I think is a bit annoying to people, but is good, is actually reading all the privacy policies. And if you go on a website and you’re prompted about cookies, instead of clicking accept all — which is the easiest way because that’s how the user experience is structured — actually going through it, unchecking all those cookies and saying “no, I don’t want you to store this data. I do not want you to collect this information.” There is something really important to be said about how our online experience is structured for convenience. But overall, I think just getting into the habit of not just closing those windows, but actually saying, “reject all.”

You’ve done a lot of work with algorithms and this type of privacy protection. What has been your favorite project to work on in the work that you do?

I think my favorite project was the algorithmic transparency standard, which I worked on when I was at the U.K. government. It was all about creating a standardized way for government agencies to share information about their use of AI. It’s all about making sure that this information is easily accessible and that you as a member of the public can actually go on a website and find out how the government uses algorithms about you, how this could affect you, or what kind of decisions, what kind of policy areas, what kind of contexts those algorithms are being used in. At the time when we were working on it, that was something that hadn’t actually been done on a national scale, so it was a very interesting, very exciting project because we got to create something for the first time. It was very much public facing. It was all about actually asking the people: what kind of information would you like to see from the government? How would you like this information to be presented? Is there anything else you would like to know? What kind of feedback loops should be put in place as well? So to me, that was really a way to educate the people about how and why government uses AI, but also a fantastic way for government departments to compare how they’re using this technology, and if there are any similarities, any areas for improvement, any kind of ways to actually involve external researchers into their work as well. I think it’s a win-win sort of project which I would love to see in other countries as well, but also in the private sector, because algorithms are everywhere, but we don’t actually know about this. We don’t have enough transparency when it comes to that.

Natalia Domagala at Mozilla’s Rise25 award ceremony in October 2023.

What do you think is the biggest challenge that we face in the world this year on and offline, and how do we combat it?  

I think many of the challenges that we are facing, not just this year, but in the years to come, are intertwined. For example, for me personally, one of the most pressing challenges that we’re facing is climate change. And this is something that we can actually see unfolding in front of us. Already we see all the wildfires, we see the floods, we see the droughts, hurricanes and all that. You might ask, how is this connected with the challenges we’re facing in the digital world? Well, I actually think in many ways, because there is an immense environmental impact of AI, especially training and running AI systems or any advanced computing systems on the internet. They all require a great deal of power and electricity, and this intensifies greenhouse gas emissions. This leads to an increase in energy consumption as well, and eventually, that also requires more natural resources. I think as the world gets more digitized, but at the same time our resources are becoming more scarce, this is something that we will absolutely have to address. Also, in the digital world right now, there’s so much AI-powered misinformation and disinformation. To continue with this climate example, I think there is a lot of content out there, a lot of lobbying from groups and parties that actually have no interest in reducing emissions, no interest at all in taking environmental action, and thanks to AI, it’s actually really easy for them. It’s possible to produce and spread misinformation and disinformation at a scale that we hadn’t really seen before — scale and speed as well. AI makes it very easy, and this is not just related to climate: we can take that pattern and look at it in every aspect of our lives, including politics, with things like election-related misinformation and current affairs reporting. What we see on the internet shapes human behaviors on a large and global scale, so it’s powerful and can be of interest as well.
I think the second issue related to AI is deepfakes. Generative AI creates a whole range of new challenges that we need to address, and we need to address quickly, because this technology is growing and being developed at an unprecedented scale. Things like how to distinguish fake content from authentic content, or challenges related to intellectual property protection. There are challenges to consent. There are challenges related to things like using someone’s voice or someone’s image or someone’s creative outputs to train or develop AI without their knowledge. There are so many stories in the media about writers whose work has been used without their consent, or musicians that had their voices taken to create songs that are actually not theirs. I think this is partially due to the insufficient governance of AI and the lack of appropriate regulations to manage the digital sphere overall. In terms of how to combat these challenges, I think they are too complex, I’m afraid, to have an easy solution. One step would be to start introducing regulation of AI and regulation of digital markets that’s actually fit for purpose, that has a specific emphasis on fighting misinformation and disinformation, and that has specific areas that talk about creative outputs, intellectual property, deepfakes and how to deal with them as well. Another step is education and raising public awareness, especially when it comes to AI and how it can be used, how it can be misused, how it can be manipulated. A very simple thing is raising public awareness of what we are seeing online, trying to build critical thinking and the ability to challenge what we’re seeing and question the content that we’ve been given. I think this is really important, especially in an era when it’s so easy to put out anything online that looks really credible.

Where do you draw inspiration from in the work that you are currently doing? 

I think the world around me and just understanding what’s going on, in terms of what are some of the bigger trends that are happening globally. I think AI was something that I got into relatively early in the policy sphere, just because I found it through research and from talking to a lot of people. The same with transparency. Transparency has always been there, but I think now it’s more appreciated because people are understanding the risks and the mistakes being made. For me, personally as well, I read a lot. I read fiction, nonfiction. Everything really. A lot of the inspiration for my work and for my life comes from just going to a library or bookshop and walking around, seeing what draws my attention, and then thinking how I can relate that to my life or my work. Also, big conferences and gatherings, but especially the ones that bring in people from different areas. I think that’s where a lot of creativity and a lot of productive collaborations can actually happen — if you just take a group of people who are passionate about something but come from completely different areas and put them in one room. Those kinds of meetups or conferences were something that I definitely benefited from in terms of shaping my ideas, or even bouncing ideas off other people. Traveling and looking at different parts of the world, too, especially in the AI policy space. It’s really interesting to see how different countries are approaching that, but also just from a cultural perspective: what’s the approach to data and privacy? What’s the approach to sharing your information? What’s your level of trust in the government? What’s your level of trust in corporations? I think a lot of that you can really observe when you travel.
I did anthropology as my first degree, my bachelor’s, so I have a lot of curiosity in terms of exploring other parts of the world, exploring other cultures and trying to understand how people live, and what it is that we can learn from them as well.

We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope people are celebrating in the next 25 years?

I hope that we are celebrating an internet that’s democratic and serves the interests of people and communities rather than big corporations. I also hope we are celebrating the existence of the kind of AI that makes our lives easier by eliminating the burdensome, repetitive, time-consuming tasks that no one wants to do — but the kind of AI that’s actually safe, well regulated, transparent, and built and deployed with the highest ethical principles in mind. The kind that’s actually a positive part of our lives, that improves our everyday experience and frees our time to do the things that we actually want to do, without compromising our data, privacy or cybersecurity.

What gives you hope about the future of our world?

Mainly people. I feel like as the challenges that we are facing in the world are getting worse, the grassroots solutions that come from the people are getting more radical or getting more innovative and effective, and that gives me a lot of hope. Initiatives like Rise25 as well give me a lot of hope. You can see all of those wonderful people making things happen against all odds, really driving positive change in the kind of conditions that are not actually set up for them to succeed and people that are challenging the status quo in the work that they’re doing, even if it’s unpopular. That gives me a lot of hope. I’m also very impressed by the younger generation and their activism, the way they refuse to submit, and the way they unapologetically decide to fight for what they believe is right. I think that’s definitely something that millennials didn’t have the courage to do, and it’s incredible to see that now the generations that come after us are a little bit more ready to change the world the way that perhaps we didn’t. That gives me a lot of hope as well, the way that they just go for it and take action instead of waiting for governments or corporations or anyone else to fix it. They just believe that they can fix it themselves, and that’s really optimistic and really reassuring.
