Humans of AI
– 8 min read
Navigating the ethical implications of generative AI with All Tech Is Human’s David Polgar
There’s never been a point in human history when people stopped innovating because they feared the social implications of a piece of technology. The explosion of generative AI is no exception, which is why we need people across disciplines leading the conversation about ethical development and implementation.
In this episode of Humans of AI, we have the privilege of exploring the profound insights of David Ryan Polgar, the Founder and President of All Tech Is Human, an organization dedicated to promoting responsible tech development.
David’s journey into the world of tech responsibility began with a poignant personal experience that highlighted the unanticipated consequences of unchecked tech development. This encounter served as a catalyst for his deep commitment to exploring the ethical implications of technology and its impact on the human condition.
As a thought leader and advocate for responsible AI development, David brings a unique perspective to the table. He challenges the prevailing notion that technology is on an endless march forward, emphasizing that it zigs and zags, and that it’s our collective responsibility to shape its future.
Join us as we uncover the key takeaways from this thought-provoking conversation, gaining valuable insights into the ethical implications of generative AI and the importance of responsible tech development. David Polgar’s expertise and dedication to promoting ethical AI practices serve as a guiding light in navigating the complex landscape of AI ethics.
Please be advised that this piece contains discussion of self-harm and suicide.
- David explains how technology doesn’t always progress linearly. It often zigs and zags, and human actions determine its future.
- The pace of technological innovation often outstrips our ability to regulate it, leading to legal and ethical dilemmas.
- Generative AI can be beneficial if used correctly, but David also warns it can lead to ‘thoughtless communication’ if misused.
- The future of generative AI needs to be aligned with our values, and this requires the collective effort of various stakeholders.
Unforeseen digital experiences highlight the need for ethical tech development
In the early days of Facebook, when friend requests were pouring in from random acquaintances and old schoolmates, David had an experience that would ultimately shape his perspective on AI governance. Among the flurry of friend requests, he received one from a high school friend. He let it linger, and before he could accept it, he found out that this friend had recently died by suicide.
“It was a situation like that really brought everything into focus, just how significant this issue is,” David says. “At the same time, that friend request didn’t go away. It stayed there, right? It stayed there as if this individual was living and breathing and could converse with me.”
Reflecting on this experience, David emphasizes the need for ethical tech development and the recognition that the future of technology is intricately linked to the future of humanity and democracy.
“I got heavily involved in the space of how technology was really upending how we live, love, learn, even die,” David shares. This deep dive into the world of technology opened his eyes to the unforeseen consequences that can arise from unchecked tech development. It became clear to him that the future of tech isn’t a linear march forward, but a complex web of interconnectedness with the human condition and the very fabric of democracy.
Recognizing the significance of this intersection, David founded All Tech Is Human, an organization dedicated to promoting responsible tech development. He firmly believes that the choices we make today will shape the future we want to see. “We’re recognizing that the future of tech is intertwined with the future of the human condition and the future of democracy, and that is a big deal. Full stop,” David says.
Tech innovation often outpaces regulation, creating ethical and legal dilemmas
The speed at which technology evolves often exceeds our capacity to fully understand and govern its social impacts. David stresses that while innovation moves at an accelerated pace, our ability to grasp its social effects and put legal safeguards in place moves painfully slowly.
“Everything comes down to the gulf between the speed of innovation and the slowness of our consideration; that delta is far too large,” David states. This disconnect poses significant challenges, as the consequences of technology can unfold before we have the chance to fully comprehend and address them.
“What we’re finding a lot with tech is things are happening that are clearly, in our consensus, unethical, but ethics doesn’t always coincide perfectly with law,” David explains. This misalignment between ethical considerations and legal frameworks creates a complex landscape where actions can be deemed unethical but not necessarily illegal.
Generative AI presents promise and peril in the realm of communication
As our world becomes increasingly interconnected, it’s crucial to preserve the authenticity and intention behind our interactions. While generative AI can enhance efficiency and communication processes, it should never replace the genuine human connection that lies at the heart of meaningful conversations.
“The issue that’s going to be more pronounced in the coming years with generative AI is you always have people who only think of numbers. Here’s the thing: People are not numbers,” David emphasizes. He likens thoughtless communication to a dystopian Ray Bradbury story in which two humanoid doppelgangers end up speaking to each other.
Generative AI can be a valuable tool for efficiency, particularly in business contexts. However, David cautions against losing sight of the purpose behind our communication. “Generative AI can be great for efficiency of how we communicate, but we always want to make sure we don’t lose sight of why we are communicating,” David says. When individuals try to scale intimacy by leaning on generative AI, they might be communicating with more people, but the aggregate value of those relationships is much lower.
Use generative AI to scale informational communication
David points to All Tech Is Human’s Slack community as a place where he sees the massive promise of generative AI. With 8,000 members, the community fields a lot of frequently asked questions.
Generative AI can scale informational communication, providing quick and accurate responses to queries.
“There’s sometimes the need for that thoughtful communication,” David explains. “But a lot of communication is actually just, ‘I need information. Where is this information? How do I understand this?’” Generative AI can provide not only the requested information, but also additional context and insights, enhancing the overall communication experience.
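To make the informational use case concrete, here is a minimal sketch of the kind of FAQ-answering assistant it implies, written in Python against the OpenAI SDK. The model name, the COMMUNITY_FAQ text, and the answer_common_question helper are illustrative assumptions for the sketch, not a description of anything All Tech Is Human actually runs.

```python
# Hypothetical sketch: an assistant that handles routine "where do I find X?"
# questions for a community Slack, in the spirit of the use case described above.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Placeholder knowledge base of frequently asked questions and their answers.
COMMUNITY_FAQ = """
Q: How do I join a working group?
A: Post in the #working-groups channel and a coordinator will follow up.

Q: Where can I find upcoming events?
A: Events are listed in the #announcements channel every Monday.
"""

def answer_common_question(question: str) -> str:
    """Answer an informational question using only the curated FAQ as context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You answer community questions using only the FAQ below. "
                    "If the answer isn't there, say you don't know and suggest "
                    "asking a human moderator.\n\n" + COMMUNITY_FAQ
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_common_question("Where do I find the schedule of events?"))
```

Constraining the model to a curated FAQ, and telling it to defer to a human moderator when the answer isn’t there, keeps the tool on the informational side of the line David draws: it answers routine questions without standing in for thoughtful, human-to-human communication.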
How All Tech Is Human is tackling AI governance issues
All Tech Is Human recognizes the need to strike a delicate balance when it comes to fostering creativity and innovation, while also protecting the rights of creators such as artists and writers.
“We need to incentivize and maximize creativity, and that’s a little bit of a Goldilocks zone. You need to offer protection for creators to promote enough incentive that they become creative and that they have this as their livelihood,” David explains.
All Tech Is Human is working towards the responsible use of AI. The organization recently hosted a virtual hackathon on evaluating responsible AI governance maturity and is holding in-person events on strengthening the information ecosystem.
AI is at a point where it holds both significant promise and significant peril. David’s goal is to co-create the best course of action, one that maximizes its positive aspects.
Promote responsible AI development by bringing all relevant stakeholders to the table
All Tech Is Human understands that responsible AI development requires a multi-stakeholder approach. David concludes by stressing that All Tech Is Human isn’t just a few thought leaders, but that everyone is part of the process.
“You’re all part of the process. No matter what your livelihood or your perspective is, you have something to offer,” David says. “Your lived experience is crucial because we need to create a tech future when we think about generative AI that’s aligned with our values.”
As individuals working in tech, we can either sit on the sidelines and watch everything fall apart, or we can stand on the frontlines and fight for a tech future that keeps its promises. Let’s be on the right side of history. By following the example set by individuals like David Polgar and organizations like All Tech Is Human, we can shape a future where generative AI is harnessed ethically for positive change.
Want to hear more stories from the humans working at the crossroads of business and generative AI? Subscribe to Humans of AI wherever you listen to podcasts.