Kay Firth-Butterfield leads the responsible AI movement.

By Stacey Ingram Kaleh

Kay Firth-Butterfield | Photo Courtesy: Felicia Reed

Visionary, CEO, mom, breast cancer survivor, advocate, breaker of glass ceilings—any and all of these terms could be used to describe Kay Firth-Butterfield. Yet no combination fully reflects the transformative impact she’s made and continues to make on how we, as a global community, approach our relationship with technologies like artificial intelligence.

“I’ve always been interested in helping people,” says Firth-Butterfield of the value that has driven her work, from practicing law to teaching research and writing, to standing up ethics-centered strategies for technology companies.

With roots in Texas—her mother-in-law is Texan and her daughter is based in San Antonio as a pilot in the U.S. Air Force—and a commitment to responsible AI development, Firth-Butterfield is glad to call Austin home. “I think that lots of great future developments in tech will come out of Austin,” she says. “Austin’s the perfect place to really think deeply about some of these issues of fairness and equality [in tech] because there’s such a significant Hispanic and burgeoning immigrant community. And I think people are very open to sharing and helping here.”

As CEO of Good Tech Advisory LLC, Firth-Butterfield helps companies, organizations and governments consider how their work will affect stakeholders, including impacts on fairness, equity, health and more. “I advise companies and countries on deployment of AI and business transformation with AI and the future of work, always with a responsible AI lens.” What does it mean to be responsible with AI? Firth-Butterfield explains that a human-centered approach, working to ensure that all people benefit from the technology—including women and people of color who are often underrepresented in design and development processes as well as in data training sets—is key to acting responsibly. Her work has often involved bringing diverse perspectives together to develop practices that protect people from harm and go above and beyond legal compliance. Firth-Butterfield can also be found on the UT Austin campus where she serves as a senior research fellow in the College of Liberal Arts.

Firth-Butterfield’s career path is one she forged herself. After starting as a barrister-at-law in England, with a stint as a judge, she moved to the U.S. to become a professor and focus on her passion area of human rights, within which she considered the relationship between humans and intelligent machines. “I started to think about how AI could be a problem for us or could be fantastic for us,” she says. Then, as she was writing a human rights book with a chapter on human–AI interaction, a chance meeting on an airplane brought a new opportunity. “There was this serendipitous meeting with the CEO of Lucid [AI] on a plane. He ended up offering me a job that became what was the world’s first chief AI ethics officer.”

Pioneering such a role did not come without its share of challenges. Firth-Butterfield recalls continually having to make a case for her work among business leaders and board members and being excluded from meetings and media opportunities less than a decade ago. Yet, not only did she model the crucial role of ethics in AI development, she actively shared her insights with future lawmakers, teaching one of the first-ever classes on “Law and Policy of AI” with Derek Jinks, J.D., at UT Austin.

The drive to help others and share knowledge quickly propelled Firth-Butterfield to scale her efforts globally. She was recruited by the World Economic Forum (WEF) in 2017 to serve as their Head of AI and Machine Learning and led cross-sector collaborations to develop guidelines and resources that advance responsible and inclusive culture and practices. Her experience in law, business and academia helped her serve as a bridge between sectors and become an influential advisor on AI governance. In 2023, Firth-Butterfield stepped down from the WEF and started Good Tech Advisory. Her professional journey has been one that’s grown and branched in an almost exponential way to benefit more and more people. For her global leadership in advancing responsibility and accountability in AI, Firth-Butterfield received a prestigious 2024 TIME 100 Impact Award.

Not resting on her laurels, Firth-Butterfield continues to speak up about the risks AI technologies could pose if we do not progress thoughtfully, and believes that the more we know and understand, the better we can make decisions about how we use technology. “I have a 29-year-old daughter who has her life ahead of her, and there are some potentially very good things that will come from AI that will make her life better.” As a breast cancer survivor, Firth-Butterfield is passionate about opportunities to advance medicine. For example, AI tools can analyze vast amounts of medical data, matching patterns and testing compounds far more quickly, which could lead to improvements in radiology, drug discovery and more. “But,” she cautions, “if we don’t deal with AI responsibly, then there will be many people who don’t receive good outcomes from AI, whether it’s because AI didn’t hire them because of a bias, or AI didn’t give them a mortgage because of bias, or whether it’s because they are one of 3 billion people who are not even connected to the Internet, so AI knows absolutely nothing about them.”

Firth-Butterfield points to gaps in the Internet-based data sets that AI tools are trained on as a major issue with the potential to exacerbate inequities. “If we do not do something about the data on the Internet,” she urges, explaining that there’s more data input by men about men than by women about women, “what’s going to happen is that AI is going to keep drawing on the way that the world’s been set up in the past as opposed to the way that we as women might have hoped the world would move toward being more equitable and equal.” She sees this moment as a critical juncture for society. “Do we want the thinking and politics of the past, or do we want the thinking and politics of a more equal society?” If we want the latter, she says we need more carefully considered, diverse and problem-specific data as well as more women at the table in the AI design process.

How can we help move AI technologies in the direction we want? “I would start by doing a lot of reading and, if you work in a company, ask a lot of questions about how the company is using AI and what it means for stakeholders and employees,” Firth-Butterfield encourages. “And, for the moms who are reading, if you are thinking about buying an AI-enabled ‘smart toy’ for your child, make sure you know where the data is going.”

Kay Firth-Butterfield leads by shaping the future she wishes to see, and she invites us to take action to do the same.

Learn more about Good Tech Advisory LLC at goodtechadvisory.com and read Kay Firth-Butterfield’s column on responsible AI for The Innovator at theinnovator.news.

