“During her Reputation tour, Taylor Swift hired a security company to set up a kiosk that played her concert and exclusive rehearsal footage. They covertly fitted the kiosk with cameras and facial recognition technology, designed to lure in any of Swift’s hundreds of known stalkers so they could be identified and apprehended. That example raised a lot of questions about the ethical use of technology for me.”
This example was shared by Rebekah Tweed, program director for All Tech Is Human, during “The Human Side of AI”, a live panel discussion held last week. In this edition, Nesma Bensalem, founder of WeCare Impact, moderated the discussion and helped the panel explore the importance of Ethics and Diversity in AI. Alongside Rebekah were Kara Howard, founder of The KI, Bridget Greenwood, founder of The Bigger Pie, and Ruben Daniels, founder of Memri. Together the five of them explored why Ethics and Diversity in AI matter, and what impact they have on the technology that is being built. Rebekah captured it perfectly when she asked: “Do we as a society even have a chance to have these conversations and to have a say in the technology that is affecting our daily lives?”
“Is it okay to capture biometric data about thousands of people without their consent or even their knowledge?” asked Rebekah. “Is it relevant that it’s for the safety of a single person? How much of our privacy are we willing to trade for safety?” Those questions are rarely answered; most of the time they aren’t even asked. We need to be asking the right questions about the technology we are building. Rebekah reminded us of the deeper impact of these issues, especially regarding bias and fairness in automated decision-making tools.
The road to AI is being paved by corporations that are doing live research on humans. There is little transparency and even less explainability about how the algorithms work and what goal they are trying to achieve. In fact, how they work is often not even known to the engineers who build them, making it hard to say who is responsible for their impact. This can become problematic very quickly when your self-driving car decides to hit the brakes for no reason, or when the model that identified eligible candidates for kidney transplants turned out to have a racial bias. It can have a huge impact on people’s lives. But how do you make sure these things don’t happen?
Bridget Greenwood offered some more insights on the topic. “The speed at which these things are developing is making it really hard for regulators to keep up. There are concerns that upcoming EU regulations will potentially be a bit too stifling for innovation here. But then again, if you look towards China, where the government just decides what the rules are, they still find ways to be innovative. It’s just a different way of dealing with regulations. The same goes for the US, where it’s organized differently again.”
Bridget highlighted what she thinks will be vital for any solution to be successful: “There needs to be a profitable angle for businesses to make sure that their AI is ethical. That could be in terms of the longevity of the user, because the experience actually makes them better, as opposed to, for example, TikTok, which is creating a generation of teenagers with terrible mental health issues. I keep asking myself, is that really the best profitable business model that we can come up with? Can’t we do better?”
Ruben Daniels, founder of Memri, has been asking similar questions and believes it should be possible to do better. “Our journey really started with realizing that data privacy was an issue, and it took us a while to figure out that it’s actually more about control of your own data, because that controls what you see and what you spend your attention on. In that way it has a major impact on how you spend your life. What you take in, what emotions come out of that, the relationships you build and your view of the world… being able to have some control over that is so important.” While looking for ways to give their users that agency, Memri realized they needed to change things at a fundamental level just as much as at a technological level.
Ruben: “We studied how others in the world have organized themselves by giving all stakeholders a voice, how to do that on a community level, and how to be a leader in that context. For me personally that has been a real growth journey: figuring out how to be present, provide leadership within certain roles, and allow others to fill in their space themselves. We spent a lot of time on that, and we ended up creating a multi-stakeholder cooperative legal structure, getting all those voices involved as we’re thinking about building out this AI ecosystem. We don’t want to be the ones that are essentially specifying how the world should work. Instead, we’re inviting communities that want to shape that world to imagine it together.”
According to Kara Howard, actively putting in the effort to hear alternative and diverse voices is a must. “I remember reading a piece in Vanity Fair that was an interview with Elon Musk. He had just initiated his non-profit around the ethics of AI, and there was this visual image of all the key futurists in the AI and technology space and their perspectives on the future of AI, and whether they think it will be something scary or not. It really stuck out to me because there were close to 15 images and there wasn’t one woman listed.”
Realizing the potential dangers of this, Kara saw a great opportunity. It was enough to make her initiate a community called the Feminine Intelligence, which connects futurists, thought leaders and many others to broaden the perspectives and voices out there on a meta level, making sure that the under-represented can find their route to influencing what the future of technology will look like. “Organizations, such as Memri, that want to tap into communities and be a part of the conversations happening will probably start forming ecosystems where experts and thought leaders provide guidance for founders by suggesting tooling or methods and helping them tailor it to their needs.”
Plenty more subjects were touched on, many more insights were provided, and many more questions from the audience were answered, but it was clear to all that “AI, Ethics and Diversity require much more discussion.”
If you’re interested in watching the entire video, you can have a look here.