Are You Technically Optimistic About AI?

Technology and Innovation | Technology | Artificial Intelligence | Technology, Data, and Digital Officers

Interview
Katie Nivard
October 22, 2024
6 min
Executive Summary
Raffi Krikorian discusses AI's impact on leadership with Katie Nivard, emphasizing ethical governance, AI models, and tech for social good.

A Q&A with Emerson Collective’s Raffi Krikorian

Raffi Krikorian has been at the forefront of Silicon Valley’s global impact for more than 20 years, starting as Head of Engineering & Infrastructure at Twitter, leading Uber’s self-driving car division and its early applications of AI, then joining the Democratic National Committee as its first CTO in the wake of the 2016 Russian hack. In 2019, he was hired as CTO and Managing Director at Emerson Collective. His podcast, “Technically Optimistic,” delves into transformational technology, asking how it can be developed and deployed with society’s best interests at heart.

Raffi sat down for a fireside chat with Katie Nivard, leader of RRA’s Tech x Impact practice, at RRA’s Board and Consultant gathering in San Francisco to discuss where we are on the AI journey and what leaders need to know to be successful in an AI-enabled future.


Katie Nivard: What questions are you currently asking yourself about artificial intelligence?

Raffi Krikorian: Let me start with an anecdote. When Uber first started testing self-driving cars, we placed a fleet of data-collection vehicles in Tempe, AZ and drove them around for months. This process essentially defined what the cars needed to know and recognize to functionally drive. But when we finished, the cars couldn't recognize a person of color, because they hadn't encountered enough people of color in Tempe.

It's so easy to introduce bias, even with the best of intentions. When organizations create and implement AI systems, they must ask:

  • How do we design AI systems that actively promote fairness and transparency while minimizing bias?
  • What structures can we create to ensure accountability and quickly address issues as they emerge?
  • How do we create trustworthy systems that people can meaningfully engage with?

There’s already so much bias in so many of our structures. Think of the US medical system—historically, the overwhelming majority of studies were only conducted on men, meaning we didn’t gather insight on how diseases and their potential treatments impact women. We already live with these impacts; we don’t want to perpetuate them.


KN: What do leaders need to understand about AI today?

RK: Leading in the age of AI means talking about where we want the world to go. We can't slam on the brakes – the AI revolution has already begun. But we also can't go full steam ahead. Next-generation leaders have to hold both of these truths simultaneously. The future of work is as much about managing massive cultural transformation as it is about AI enablement.


KN: What do leaders need to be successful in an AI-driven world?

RK: To succeed in an AI-driven world, leaders need a strong grasp of ethics and how governments and systems work. They should be technically skilled and able to think about how rapid changes in technology impact the bigger picture. Since things are moving faster than ever, leaders should also rely on teams with the right knowledge to tackle these challenges. Most importantly, they need a clear set of values to guide their decisions, ensuring they stay grounded as technology reshapes the world.

Leaders, ask yourselves: Do you have a clear view of your non-negotiable foundations? What do you want to teach your kids about the world? How do you think through these questions, communicate your views, and team with others to act on them?

Power has shifted over time: from the strongest person, to the most powerful, to the richest. Now, we need the most ethical leader. Yes, data needs to be in order, but at the end of the day, what we really need are leaders who think clearly, systemically, and holistically.


KN: What's the right AI model?

RK: There is no right or wrong model. It all depends on what you want to get done in the world, what values you have, and what you want people to see as you work to get that done. We run into issues when there's a lack of clarity around an organization’s goals and values.


KN: Related, you’re on the board of the Mozilla Foundation. What are Mozilla’s views on AI governance?

RK: Mozilla fights for a free and open internet. We want to better define how to be a citizen on the internet.

We all think Big Tech knows everything about us (and, let’s be honest, they probably do). But this is what Mozilla fights against. We believe that internet users have a right to privacy, as well as a right to options. We don't want a world with only three AI platforms that are generating results from data that we aren’t given the option to interrogate. And that doesn’t happen without governance.

At Mozilla, we employ a unique governance structure that keeps the mission as the guiding principle. Mozilla Foundation, the umbrella organization, comprises a handful of for-profit subsidiaries, including Mozilla Corporation, which develops the Firefox web browser, as well as Mozilla Ventures and Mozilla.ai. Mozilla Foundation sets the policies that govern development, operates key infrastructure, and controls Mozilla trademarks and copyrights.


KN: Let's talk about power consolidation in the AI space and why open source AI models matter.

RK: We need AI tools to keep up with innovation. But, in a closed model, we don't get to see how they work. Given how much information we're prepared to give these tools, we have a right to know how they work. Effectively, was this car trained in Tempe?

With open source AI models, curious people can dive behind the scenes and understand exactly how data is being gathered and redeployed.

AI is neither good nor bad—it's good and bad. And it’s making decisions for us at the deepest levels. We should be allowed to understand how it does that.


KN: Any advice for the next US president on AI?

RK: It's easy to focus on all the things that could go wrong when it comes to tech innovation. And while it’s good to react to what’s in front of us, it’s also important to make accelerating investments. The government could return to the 1960s approach—by which I mean, we could make significant investments in tech again.

We know that speaking openly about tech innovations—both their risks and their opportunities—improves them in the long term. Consider The Social Dilemma documentary, in which numerous tech founders spoke about the dangers of social media. Now, Instagram has teen accounts and better filters. Impactful content plus helpful commentary can lead to meaningful action.

Don’t just mitigate. Invest in the areas where you want the country to grow. That includes AI, but isn't limited to it. We need new approaches to educating the next generation about home economics and digital literacy. The way we earn and spend will change, and so much of that will happen online. We need to better equip everyone, especially kids, for that.


KN: Given your podcast, what are you Technically Optimistic about?

RK: I’m optimistic about the potential of technology to drive positive social change when it’s guided by strong ethics and thoughtful leadership. People think about society a lot more than we give them credit for. When I talk with leaders about the bigger picture, the conversation changes when we discuss what kind of world they want for their loved ones, their kids. I believe that tech can help us solve big, societal issues—if we engage the right people.