Artificial intelligence (AI) is rapidly maturing into a powerful technology with seemingly limitless applications. It has demonstrated its ability to automate routine tasks, such as driving our daily commute, while also augmenting human capacity with new insights.
Combining human creativity and ingenuity with the scalability of machine learning is advancing our knowledge base and understanding at a remarkable pace. However, with great power comes great responsibility.
Specifically, AI raises concerns on many fronts due to its potentially disruptive impact. These fears include workforce displacement, loss of privacy, bias in decision-making and a lack of control over automated systems and robots.
While these issues are significant, they are also addressable through what we call ‘responsible AI’: a framework built on the right planning, oversight and governance.
Responsible AI brings many of these critical practices together. It focuses on ensuring the ethical, transparent and accountable use of AI technologies in a manner consistent with user expectations, organisational values and societal laws and norms.
Responsible AI can guard against the use of biased data or algorithms, ensure that automated decisions are justified and explainable, and help maintain user trust and individual privacy.
By providing clear rules of engagement, responsible AI allows organisations to innovate and realise AI’s transformative potential in a way that is both compelling and accountable.
Accountability means establishing governance frameworks to evaluate, deploy and monitor AI, creating new opportunities for better citizen and mission services. It means architecting and implementing solutions that put people at the centre.
Elements of responsible AI
By using design-led thinking, organisations examine core ethical questions in context, evaluate the adequacy of policies and programs, and create a set of value-driven requirements governing AI solutions. There are four foundational elements of responsible AI.
You must create the right framework to enable AI to flourish – one that is anchored to your organisation’s core values, ethical guardrails, and regulatory constraints.
Standards bodies such as IEEE are providing guidance for global organisations to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritise ethical considerations.
Any new solution should be architected and deployed with trust built into the design. This means that requirements for privacy, transparency, and security have equal weight with new product features.
The resulting systems should also be able to explain the rationale behind their decisions.
Capital One, for example, is researching ways to make AI more explainable so it can be used to review credit card applications; banking regulations require that financial companies furnish customers with an explanation when their applications are denied.
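To illustrate the idea, here is a minimal sketch of one common explainability approach: surfacing the largest negative contributions of a linear scoring model as ‘reason codes’ for a denial. The model, feature names, weights and threshold below are all hypothetical, chosen for illustration only; they do not describe Capital One’s actual system.

```python
# Hypothetical linear credit-scoring model: score = bias + sum(w_i * x_i).
# Reason codes are the features whose contributions pushed the score down most.

WEIGHTS = {
    "payment_history": 2.0,      # higher is better
    "credit_utilisation": -1.5,  # higher utilisation lowers the score
    "account_age_years": 0.3,
    "recent_inquiries": -0.8,
}
BIAS = 1.0
APPROVAL_THRESHOLD = 2.5  # hypothetical cut-off

def score(applicant: dict) -> float:
    """Weighted sum of the applicant's features plus a bias term."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def reason_codes(applicant: dict, top_n: int = 2) -> list:
    """Return the features with the most negative score contributions."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    negative = sorted((c, f) for f, c in contributions.items() if c < 0)
    return [f for _, f in negative[:top_n]]

applicant = {
    "payment_history": 0.4,
    "credit_utilisation": 0.9,
    "account_age_years": 2.0,
    "recent_inquiries": 3,
}
s = score(applicant)
decision = "approved" if s >= APPROVAL_THRESHOLD else "denied"
print(decision, reason_codes(applicant))
```

Because each feature’s contribution to the score is additive, the same arithmetic that produces the decision also produces a human-readable explanation, which is what regulators require in the credit context.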
AI needs close supervision: ongoing human monitoring and auditing of algorithm performance against key value-driven metrics such as accountability, bias and cybersecurity.
Automakers Volvo and Audi are addressing accountability with announcements that they will assume liability for any accidents that happen when automated driving technology is in use.
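The monitoring described above can be sketched as a simple audit job. The example below computes one widely used bias metric, the demographic parity difference (the gap in approval rates between two groups), over a hypothetical log of automated decisions; the groups, log entries and alert threshold are illustrative assumptions, not values from any real deployment.

```python
# Minimal bias-monitoring sketch over a hypothetical decision log.
# Demographic parity difference = |approval rate of group A - group B|.

def approval_rate(decisions: list, group: str) -> float:
    """Fraction of decisions for `group` that were approvals."""
    members = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in members) / len(members)

def parity_difference(decisions: list, group_a: str, group_b: str) -> float:
    """Absolute gap in approval rates between the two groups."""
    return abs(approval_rate(decisions, group_a) - approval_rate(decisions, group_b))

ALERT_THRESHOLD = 0.2  # hypothetical tolerance set by governance policy

log = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "A", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
gap = parity_difference(log, "A", "B")
if gap > ALERT_THRESHOLD:
    print(f"ALERT: approval-rate gap {gap:.2f} exceeds policy threshold")
```

In practice such checks run continuously against production decisions, with alerts routed to the human reviewers responsible for the system.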
Organisations also need an integrated approach to training people to understand how AI systems operate. This includes:
- educating employees on how AI will be integrated into operations, and why;
- asking employees where and how AI might improve their day-to-day roles;
- engaging employees in co-creation to determine how people, processes and AI technology come together; and
- developing the skills employees need to take advantage of the insight offered by AI and achieve better, more consistent outcomes.
Simply put, AI represents a new way of working. It will bring about profound changes within organisations and society that we can’t fully understand or predict today.
In this context, responsible AI is a critical component of an organisational change model that focuses on rapid learning and adapting.
By embedding responsible AI into your approach for organisational change, you ensure that the critical element of trust is cultivated and maintained among key stakeholders.