Responsible and ethical AI: process automation has a part to play

Hennie Colyn | Direct Sales Executive | Process Automation | Schneider Electric


To some it might feel we’re living in an “endless summer” of artificial intelligence (AI), with a new breakthrough announced almost monthly. And whilst AI offers important benefits to a myriad of industries, it should also serve as a cautionary tale, one that demands the relevant cybersecurity, legal compliance and data protection measures be put in place.

In the case of process automation, AI undoubtedly has its part to play, adding an important layer of intelligence. By using machine learning (ML) and complex algorithms to analyse structured and unstructured data, businesses can use AI’s decision-making engine to develop a knowledge base and formulate predictions based on that data.

Where process automation works with data, AI interprets it, whether historical or current, to uncover trends, make predictions or suggest optimal courses of action. All this offers intelligent decision support to businesses, helping them deliver fail-proof and future-focused strategies that propel business growth.

An intelligent yet cautionary partnership

Together process automation and AI offer proactive problem detection, predictive maintenance, self-healing systems, and intelligent automation. These capabilities enable organisations to minimise downtime, reduce operational costs and increase productivity levels.

AI-powered analytics tools can process and analyse large datasets, identify patterns, and uncover valuable business insights. This, as mentioned, empowers organisations to make informed decisions, optimise processes, and drive innovation.

From the above, it’s clear that process automation and AI are an exciting fit. However, from a risk and bias point of view, AI must be managed carefully and stringently.

AI experts and data scientists are often at the forefront of ethical decision-making: detecting bias, building feedback loops, running anomaly detection to avoid data poisoning – in applications that may have far-reaching consequences for humans. They should not be left alone in these critical endeavours.

To select a valuable use case, choose and clean the data, test the model, and control its behaviour, you need both data scientists and domain experts.

For example, take the task of predicting the weekly energy consumption of an office building. Here the combined expertise of data scientists and field experts enables the selection of key features when designing relevant algorithms, such as the impact of outside temperatures on different days of the week. This approach ensures a more accurate forecasting model and provides explanations for consumption patterns. Therefore, if unusual conditions occur, user-validated suggestions for relearning can be incorporated to improve system behaviour and avoid models biased by overrepresented data.
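To make the idea concrete, here is a minimal, illustrative sketch of such a forecast – not Schneider Electric’s actual model. It assumes two domain-informed features (mean outside temperature and a weekend indicator) and fits an ordinary least-squares model on synthetic daily readings; the feature choices and numbers are invented for illustration.

```python
# Illustrative sketch: forecasting weekly energy consumption from two
# hand-picked features -- mean outside temperature and day of week.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 8 weeks of daily readings.
days = np.tile(np.arange(7), 8)                # day of week, 0 = Monday
temps = rng.uniform(5.0, 30.0, size=days.size) # mean outside temperature (deg C)

# Assumed ground truth: cooling load rises with temperature,
# and weekends (days 5-6) use far less energy.
weekend = (days >= 5).astype(float)
energy = 200.0 + 12.0 * temps - 150.0 * weekend + rng.normal(0.0, 5.0, days.size)

# Design matrix: intercept, temperature, weekend indicator.
X = np.column_stack([np.ones_like(temps), temps, weekend])
coef, *_ = np.linalg.lstsq(X, energy, rcond=None)

def predict_day(temp_c: float, day: int) -> float:
    """Predict one day's consumption (kWh) from temperature and weekday."""
    return float(coef @ np.array([1.0, temp_c, float(day >= 5)]))

# Weekly forecast: sum the daily predictions for a forecast week.
week_forecast = sum(
    predict_day(t, d)
    for t, d in zip([18, 19, 17, 20, 21, 16, 15], range(7))
)
```

Because the weekend indicator comes from domain knowledge rather than the data alone, the fitted coefficients remain interpretable – a field expert can check that the weekend effect has the expected sign and magnitude, which is exactly the kind of validation the article describes.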

Responsible AI

Creating AI solutions follows the same process as creating other digital products – the foundation is to manage risks, ensure cybersecurity, assure legal compliance and data protection.

Keeping this in mind, we have taken a three-pronged approach to the way we develop AI solutions: 

  • Compliance with laws and standards, like our Vulnerability Handling & Coordinated Disclosure Policy which addresses cybersecurity vulnerabilities and targets compliance with ISO/IEC 29147 and ISO/IEC 30111. At the same time, as new responsible AI standards are still under development, we actively contribute to their definition, and we commit to comply fully with them.
  • Our ethical code of conduct as outlined in our Trust Charter. Our strong focus and commitment to sustainability translates into AI-enabled solutions accelerating decarbonisation and optimising energy usage. We also adopt frugal AI – we strive to lower the carbon footprint of ML by designing AI models that require less energy.
  • Our internal governance policies and processes. For instance, we have appointed a Digital Risk Leader & Data Officer, dedicated to our AI projects. We also launched a Responsible AI (RAI) workgroup focused on frameworks and legislation in the field, such as the European Commission’s AI Act or the American Algorithmic Accountability Act, and we deliberately choose not to launch projects raising the highest ethical concerns.

 


