At the CTA Insight Series, Anne Lifton gave a keynote talk on Ethical AI, followed by a panel tackling some of today's hard questions around AI.
“Humans are required never to treat others merely as a means to an end, but always as ends in themselves.” Anne Lifton, RevGen’s Principal Architect of Data Science and Machine Learning, closed her talk on Ethical AI with this quote from Immanuel Kant.
RevGen was proud to sponsor the Colorado Technology Association’s Insight Series this month, with Anne giving a keynote speech on a topic that has grabbed attention recently as ChatGPT and other artificial intelligence tools make headlines. Later, she was joined by Liz Harding, Vice Chair of Technology Transactions and Data Privacy at Polsinelli, and Matt Fornito, Chief Data Officer & Evangelist at Revenue.io, for a panel discussion that dug into some of the major concerns.
There are several ways artificial intelligence can go wrong if bias remains unaddressed, Anne warned, including tarnishing brand reputation, lawsuits, or even regulatory consequences. “However, if we flip that around, we can frame it more positively as: the elimination of bias in artificial intelligence opens up opportunities we otherwise wouldn’t have seen.”
One great example of this, she said, was in hiring. AI is increasingly used to scan the résumés of job applicants; however, as pointed out on John Oliver’s Last Week Tonight, some of these programs can re-learn bias, even when initially programmed to treat all applicants fairly.
“By mitigating that bias, we can uncover a whole new pool of well-qualified candidates,” she concluded.
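One lightweight way to make that kind of mitigation concrete is to measure how often a screening model advances candidates from each group and flag large gaps. The sketch below is a minimal, hypothetical illustration of such a check; the column names, data, and the "four-fifths" threshold are assumptions for illustration, not a description of any specific tool discussed at the event.

```python
import pandas as pd

def selection_rates(results: pd.DataFrame) -> pd.Series:
    """Share of candidates the model advanced, broken out by group."""
    return results.groupby("group")["advanced"].mean()

def flag_disparate_impact(results: pd.DataFrame, threshold: float = 0.8) -> bool:
    """Flag the model if any group's selection rate falls below `threshold`
    times the highest group's rate (the common 'four-fifths' heuristic)."""
    rates = selection_rates(results)
    return (rates.min() / rates.max()) < threshold

# A small synthetic batch of screened applications.
batch = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "advanced": [1,   1,   0,   1,   0,   0],
})
print(selection_rates(batch))        # per-group pass-through rates: A ~0.67, B ~0.33
print(flag_disparate_impact(batch))  # True -> review the model for re-learned bias
```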
Mitigation and empowerment were key themes in the push toward ethical AI usage.
“We are going to have to create a culture of monitoring and auditing AI, just like we do any other software. It should be monitored, recorded, and audited, just like the rest of the software we use. And the more complex it is, the more important this is.”
She continued, “And when something goes wrong, response – simply acknowledging the error – is not enough. Mitigation has to happen, so that the same error doesn’t come back around again. We see this a lot in IT and cybersecurity, where we update software to close security loopholes, but this isn’t happening often when there are issues with AI.”
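As a hedged sketch of what "monitor, record, and audit" can look like in practice, the snippet below logs each prediction with a timestamp, model version, and inputs so an error can later be traced and mitigated. The function, field names, and file-based log are illustrative assumptions rather than a prescribed approach.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit trail; JSON Lines so each prediction is one reviewable record.
audit_log = logging.getLogger("model_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("predictions_audit.jsonl"))

def predict_and_record(model, features: dict, model_version: str):
    """Score one record and write an auditable trail of what was decided, when, and by which model."""
    prediction = model.predict([list(features.values())])[0]
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # ties the decision back to a specific model build
        "features": features,             # inputs, so a flagged decision can be reproduced
        "prediction": str(prediction),    # stringified so any label type serializes cleanly
    }))
    return prediction
```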
[Read More: Is Machine Learning Right for Your Project?]
Another key piece? “Involve your leadership early to build accountability.”
However, she noted that having a culture of ethics – independent of the use of AI – makes building a culture of empowerment around artificial intelligence an easier task. And it can bolster the company in other ways as well. She dropped a figure that raised several eyebrows in the room. “The companies with the strongest ethical cultures outperform their peers by 40% across all measures of business performance.”
“It’s important to empower people to raise objections; however, we also need to be solution oriented. We have to allow ourselves to be nimble and iterative,” she said. Especially because much of the landscape around AI, both from a legal and technological perspective, is still evolving.
“I think it’s interesting to see the way AI issues are being built into the legislation,” Liz Harding said. “From a legal perspective, the keywords are transparency and choice. And that is difficult to achieve. If you’re training an algorithm using an individual’s data, most people don’t know that, so the first step is being able to give [those individuals] notice. And if they choose to remove their data, how do you achieve that?”
Matt Fornito agreed that there is a significant grey area. The important question to him is, “What is being built and why? We have to get into explicit versus implicit bias. Data is data. And companies aren’t intentionally building biased models.”
However, he acknowledged that intention wasn’t all that mattered. “We’re getting to a place where we need data scientists who aren’t just coming from computer science and mathematics. We need more diverse backgrounds – psychology, social sciences – to really understand where bias can come from.”
[Business Insight: Operationalizing Data Science in the Real World]
Frannie Matthews, CTA CEO and the panel moderator, followed that up with a question that had the whole panel thinking: Where is the push to build ethical AI coming from?
In all her research, Anne found just one Fortune 500 company with a published approach to ethical AI. However, they “didn’t have a systematic way to assess bias, so it was very subjective, and rather cumbersome because of that subjectivity.” She noted that the recent launch of Microsoft’s interpretability and bias assessment modules does show that there is likely a demand for better systems.
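Anne did not name those modules in detail; one plausible reading is Microsoft-backed open-source tooling such as Fairlearn, which supports exactly this kind of systematic, per-group bias assessment. The sketch below, using synthetic data, shows how a disparity in selection rates can be surfaced with that library.

```python
# A hedged illustration of systematic bias assessment with Fairlearn.
# All data here is synthetic and the group labels are placeholders.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]                  # ground-truth outcomes
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                  # model decisions
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]  # sensitive attribute

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(mf.by_group)      # per-group accuracy and selection rate
print(mf.difference())  # largest between-group gap for each metric
```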
“Going through the data journey,” Matt added, “data analysts often aren’t directly generating revenue. So, companies then hire data scientists. But what a data scientist can do is based on the questions you’re trying to answer. The reality is that none of these organizations start talking about ethics until they’re generating enough money [from data science efforts] to be concerned about it.”
Liz nodded along. “Starting July 1st, we will have five comprehensive state privacy laws [that will affect the usage of AI]. And these laws don’t prohibit the use of these technologies, but you have to take an impact assessment first. They’re saying, if you’re going to [use AI], at least take the time to see what the impact is. The downside is that these laws are often seen as ‘check the box’ exercises, where the organization has already made the decision to use these technologies, and the assessment isn’t really given the space to have a purpose.”
As for the future of AI, there wasn’t a consensus.
“As much as I try to think about what the next 10 years are going to look like,” Matt said, “I can’t even fathom it. There are so many things people haven’t thought through, so adaptability is going to be so important. AI should be an enablement tool. Radiologists used to have to look at hundreds of MRIs, whereas now we can use AI modeling to scan MRIs to help find anomalies quicker. We should be using AI to better the world.”
For Anne, responsibly using AI in any industry came back to the key point of her presentation.
“The most important thing,” she concluded, “is just to listen to your customers where they sit. Ask them if this tool is doing right by them. Make sure your customers are happy with their recommendations, and that they feel seen by the personalization. Treat your customer as more than a mere means.”
Anne Lifton is a Principal Architect of Data Science and Machine Learning at RevGen. She has over 10 years of experience in building, deploying and managing the lifecycle of data science models across several industries and all three major cloud platforms.