Responsible AI

RRA’s Responsible AI Principles

We are committed to using AI ethically. Our RAI Principles are grounded in a People-First approach—ensuring human oversight to advance trust and to avoid unintended consequences.
27% of leaders believe that their organization has provided the right level of guidance to harness generative AI ethically and safely.

24% of leaders believe that their organization has the processes in place to protect itself against AI misuse and mishaps.

74% of leaders are concerned about the misinformation AI can produce.

 

 


Artificial Intelligence (AI) presents incredible opportunities, but without proper guardrails it can also pose significant risks. As we adopt AI, we have a responsibility to act ethically and in accordance with developing legal principles.

 

Our Responsible AI (RAI) Principles are designed to guide our use of AI ethically, in alignment with our values, and in compliance with laws and regulations. Over time, we will revisit our principles to ensure that they remain fit for purpose, and we will publish examples of our policies and practices so that our stakeholders can better understand our efforts and work with us on our shared journey.

 

Our RAI Principles: A People-First Approach

Our RAI Principles are grounded in a People-First approach: keeping human oversight in AI development and looking out for the impacts of AI adoption on people. We aim to use AI to help people and organizations build great relationships, advance exceptional leadership, and achieve their highest potential. All other RAI Principles support this value.

 

RRA applies responsible AI practices throughout the company
Human Responsibility

Take full accountability for the technology we develop and deploy to ensure that our use of AI serves the needs and interests of people, teams, and organizations. Keep humans in ultimate control of quality, work product, and final decisions.

Privacy

Protect the fundamental right to privacy of our clients, candidates, colleagues, and communities when developing and using AI. Inform individuals whether and how their information is used in AI systems and what choices they have about our use of their personal data.

Transparency & Explainability

Document how and where AI systems are at work in our business. Ensure that we can explain how algorithms operate and make choices.

Fairness

Develop and deploy AI systems that treat people fairly. Represent diverse points of view when planning AI solutions to augment our work, including systems for discovering, assessing, and developing leaders.

Safety & Security

Protect the wellbeing of our clients, candidates, colleagues, and communities by aligning with AI best practices for cybersecurity, sustainability, and resilience.

Accuracy & Reliability

Evaluate all AI systems so that we can use them with confidence and maintain trustworthiness in all our relationships.

 

 

“Our RAI Principles remind us that every algorithm has the potential to profoundly affect humans—both positively and negatively. By embedding these principles into our work, we are committed to a future where AI enhances human potential, while protecting our clients, candidates, and colleagues.”

Harpreet Khurana
Chief Digital and Data Analytics Officer, Russell Reynolds Associates

 

 

“Our RAI Principles are closely connected to our commitments to sustainability and diversity, equity, and inclusion. By focusing our innovation on issues like fairness and privacy, we unlock an opportunity to harness this powerful technology to contribute to a more sustainable and equitable future for all.”

Pam Fitzpatrick
Global Head of Sustainability, Russell Reynolds Associates

 

 

Resources: Getting Started on Responsible AI

We encourage every organization to develop a Responsible AI framework to guide the ethical adoption of AI, including principles and good governance practices. These resources can help you start your Responsible AI journey.

 

NIST AI Risk Management Framework

The US National Institute of Standards and Technology (NIST) developed a voluntary framework for managing the risks that AI poses to individuals, organizations, and society. The site offers resources to help organizations plan and adopt an AI risk management framework.

 

OECD.AI Policy Observatory

OECD.AI is a forum for countries and stakeholder groups to shape trustworthy AI. On this site, you’ll find resources for policymakers and practitioners, such as a catalogue of tools and metrics for trustworthy AI.

 

World Economic Forum AI Governance Alliance

This cross-industry membership organization promotes the responsible development and use of AI. The alliance convenes members and produces guidance on the technical governance of AI systems.

 

The EU AI Act

Finalized in 2024, the EU AI Act is the first comprehensive AI regulation from a major regulator. The site includes the full text of the law, as well as tools for quickly assessing your compliance requirements.

Want to know more about our Responsible AI Principles?