Humans of AI

– 11 min read

Choose your own AI adventure

Insights from the first season of Humans of AI

Alaura Weaver

As the host of Humans of AI, I’ve spent the past six months having conversations with business leaders and technology innovators about their visions for how generative AI will impact our lives as leaders, workers, and consumers. Some of these conversations have left me feeling cautiously optimistic. Some have left me feeling just plain cautious. But all have left me with a sense of gratitude to be living in this moment.

We find ourselves standing at a critical juncture with artificial intelligence. We’re at a historical crossroads, one of those moments where the path you choose really matters. On one side, there’s a route that could perpetuate the challenges we’re already grappling with — issues like bias, discrimination, and social divisions. These aren’t just abstract problems; they’re the kind of systemic injustices that echo, loudly, in the world around us. They shape lives in real, tangible ways.

But then, there’s another path. It’s the one that leads to potentially significant, positive change. This isn’t about blind optimism. It’s about the thoughtful, human-centered application of technology. It’s about using AI not just because we can, but because we’re guided by a clear sense of purpose, a commitment to do good, to improve our corner of the world — whether it’s at work or in our communities. It’s about ensuring that as we move forward, we’re bringing everyone along, not leaving anyone behind because of outdated biases or oversight.

So here we are, at this juncture, deciding not just the future of technology, but really, the future of how we want to interact with each other as a society. What we decide here — it matters. It matters a lot.

I think I can safely speak for all the guests we’ve had on Humans of AI this season when I say we’d like to take that second route.

For our final episode of season one of Humans of AI, we’re switching things up a little bit. I’m joined by our producer, Ilana Nevins, to look back on the season, dive into some key themes, and discuss what’s next.

Summarized by Writer

  • Past episodes highlight the ongoing discussions about the ethical development of AI and the importance of addressing and mitigating biases that AI systems may perpetuate.
  • It’s crucial to maintain human oversight and involvement in AI development to make sure that technology enhances rather than detracts from human capabilities and societal needs.
  • AI has the potential to empower individuals and transform sectors, like healthcare, showcasing its ability to close data gaps and offer new insights.
  • AI holds dual potential: it can either exacerbate challenges like bias and exclusion or drive significant positive change through thoughtful, human-centered application.

Confronting the ethical challenges in AI development

It’s important that we confront the ethical challenges of AI development head-on. The last thing we want to do is perpetuate existing biases through our advancements in technology. In my conversation with Hilke Schellmann, award-winning journalist and author of The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now, we talked about biases in AI in hiring processes.

Hilke illuminated the critical issue of causation versus correlation in AI systems, where AI might amplify existing prejudices instead of challenging them. Seemingly neutral data — like the presence of the word “baseball” on resumes — could be misconstrued by AI as a predictor of job success, potentially due to underlying gender biases, as baseball is predominantly associated with men.

This tangible example validated much of the way I think about the systems of the world in general, and how bias and injustice are baked into certain systems. Through their training data, these AI models offer a clear illustration of the issues we’ve been discussing for decades. If we don’t take the right steps to address these biases and focus only on what the technology is capable of, the cycle of harm isn’t going to end.

Choosing the people-first path of AI innovation

But as David Ryan Polgar, Founder and President of All Tech Is Human, points out, human agency has a profound impact on shaping the future of AI, which means we can take the right steps. David highlights that technology’s march isn’t linear but subject to the zigs and zags of human action.

“It’s what we do today. Or don’t do today. It’s how we create laws. It’s how we collaborate. That’s how the future happens,” David says. “It’s about our involvement, and it’s about human agency. What I always find fascinating is that technology is moving at such an accelerated pace, yet our ability to understand its social impacts, to come together about norms of behavior and the legal aspects we need to set in motion, is moving painfully slow. Everything comes down to the fact that the gulf between the speed of innovation and the slowness of our consideration, that delta, is far too large.”

David’s perspective is a call to action for all of us involved in this field to not only advance technologically, but to do so with a keen awareness of the ethical and social implications. Our choices and actions today will define the trajectory of AI development, which is a heavy — but necessary — burden to carry.

Human agency starts with how we build AI systems. The complexity of AI shouldn’t obscure our understanding of how it works. Patrick Hall reminds us that transparency in AI systems is crucial for managing risks and maintaining public trust. He warns that when AI systems are too complex to be understood, they become nearly impossible to manage or trust, especially when issues arise.

“When you have a system that’s so complex that it’s unexplainable, no one can know if it’s been altered,” Patrick says. “And if a system is so complex that no one knows how it works and it’s causing problems — say it has some non-discrimination issue, or it has some data privacy issue — if it’s so complex that no one understands how it works, you have to go back to the drawing board.”

All of these conversations about ethical AI development revolve around human accountability and understanding. So many of us in the field are standing on the right side of AI, but certain people — even when given evidence of these unjust patterns — choose to look away. They chase the bigger promise of profit or the bigger promise of power at the cost of others. We have to make sure that this next generation coming into the AI workplace is making “people first” part of their generational culture. When people are at the forefront of AI development and considerations, we can create positive change.

Amplifying human potential and empowerment through AI

One of the ways AI can transform the workforce is through upskilling and reskilling. Neeti Mehta Shukla emphasizes that as AI automates routine and repetitive tasks, it opens opportunities for workers to engage in more complex and fulfilling roles. This shift not only enhances individual careers but also benefits organizations by fostering a more skilled and adaptable workforce.

“If certain jobs are a little bit more manual and repetitive, they’re probably going to transition out, but the humans have the domain expertise,” Neeti says. “Humans have the subject matter expertise. So leverage that, help upskill and reskill them and you are better off as an organization. It gives you a competitive advantage and brings your communities up again. So to me it’s a win-win if you can help transition right.”

Joanna Taylor speaks to how AI is being used to bridge gaps in healthcare data and improve treatment options. Those systemic biases we were just talking about? When we approach AI from a human-centric perspective, we can start to unravel those decades of hurt. Joanna discusses how AI can generate synthetic data sets to compensate for historical data deficiencies, particularly in clinical trials that have traditionally underrepresented certain groups.

“The new treatments that are coming and are available, AI is being used to look for new indications associated with existing drugs,” says Joanna. “We have this data gap now from 30 years. We’re missing data from the clinical trials that have been conducted, what do we do to address that? So I’m starting to see some really great examples of using AI to create synthetic data sets, for example, which could hopefully expedite or close some of that gap.”

There was something profoundly stirring about my conversation with Joanna. You see, as a woman diagnosed with a neurodevelopmental disorder in adulthood, I’ve navigated systems that seemed not just indifferent, but almost oblivious to my existence. It’s a poignant reminder of a broader, historical oversight: the datasets on neurological conditions like autism and ADHD, traditionally gathered from studies focusing predominantly on white male children. This skewed focus has led to a troubling gap in understanding, often resulting in the underdiagnosis and undertreatment of women and girls. It’s a narrative too common, yet each time I encounter it, it resonates deeply, reminding me of the work still needed to forge paths that acknowledge and support everyone.

AI and the “curb-cut” effect

Beyond the empowerment we can find through upskilling and through synthetic data sets, generative AI also offers a new level of accessibility and accommodation for people at work, regardless of their relationship to disability.

There’s a phenomenon called the curb-cut effect: an accessible accommodation designed for one group of people ends up empowering much broader groups. When you include a curb cut at an intersection to accommodate wheelchair users, it makes crossing the street easier for everyone.

I think it’s the same idea with AI. I often use generative AI at work to help offset my cognitive limitations, and I can hand that tool to somebody who is neurotypical and they’ll find value in it as well. It means that I’m able to deliver on the very ambitious timelines that we have here at Writer and it helps me as a person with a disability maintain a leadership role.

AI as a bridge to opportunity 

Our conversations over the past season have made it clear how deeply technology intertwines with humanity’s most pressing issues. Neeti Mehta Shukla’s story about the work she does with an aid agency in Ukraine doesn’t just highlight an advancement in technology; it underscores a profound shift in how we approach humanitarian aid.

“We helped automate a process of taking in that aid request and we were able to increase by 400% a day the number of aid requests they could take in,” Neeti says. “We were able to serve 32,000 more people in that same timeframe than we could have otherwise.”

By automating the process of handling aid requests, they didn’t just improve efficiency; they expanded their capacity to help, dramatically increasing the number of people they could serve. This isn’t just about numbers; it’s about the real, tangible impacts on lives — 32,000 more lives, to be exact, within the same timeframe.

Then there’s the personal narrative of May Habib, co-founder and CEO of Writer. Her story brings a human face to these discussions. Born in a small village on the Syrian border, her family’s migration journey, facilitated by a simple yet life-changing stamp of approval, encapsulates the hope and transformation that can arise from such technological interventions. Her journey from a refugee to a CEO is not just inspiring; it’s a testament to what’s possible when we provide safety, stability, and opportunities to those who need them most.

This conversation, woven through personal and technological narratives, isn’t just about the power of AI. It’s about the potential for technology to serve as a catalyst for change, offering new pathways out of crisis and into safety, much like the path May’s family took years ago. It’s a reminder of the ripple effects that innovation can have, reaching far beyond the immediate application, reshaping lives and futures in profound ways.

More to come from Humans of AI

This is the end of season one of Humans of AI, but we’re not finished having these important conversations. In season two, we’re getting into the nitty-gritty with power users of AI: how they’re seeing their careers change, and how their view of the world and vision of the future are shifting because of their day-to-day interactions with AI.

Thank you for joining us for our first season of Humans of AI. Thank you to the guests who shared so much of their time, stories, and expertise, and to our listeners. If you’re in data science or machine learning, leading AI innovation at your organization, or an early power user of generative AI, we really want to hear from you.

Want to hear more stories from the humans working at the crossroads of business and generative AI? Subscribe to Humans of AI wherever you listen to podcasts.