Freedom comes with responsibility. To be free (to have the power and right to act, speak and think as we wish) is to be responsible for the freedom of others. Freedom means respect for those who are also living free. It asks us to stand up and be accountable. But can Artificial Intelligence (AI) be accountable when even we, as humans, can find this challenging?
A Google search leads to a plethora of positive and negative accounts of what AI can do, but does it drive or defy our freedom? The jury is still out.
I examined a number of AI use cases and the answers were not what I was expecting. For every case where AI could drive our freedom, there was a risk it could defy it.
A person may be free to do as they wish (to an extent) but many are unable. AI is opening up a new world for people living with disability.
Examples range from AI-powered apps that narrate the surroundings for the visually impaired, to closed-captioning technology that transcribes speech in real time for the hearing impaired, to tools that translate sign language into text for those who cannot understand it. These types of AI allow anyone to experience and interact with their world, removing obstacles to connection.
AI is not only opening up the world for those with disabilities but also for those wishing to engage across languages, whether by translating speech in real time or by translating and transcribing websites in foreign languages, so that our access to the world is not limited.
Yes, AI is opening up our world. However, AI is proving to be vulnerable to bias and stereotyping and this is problematic.
If the descriptions of the world, or the transcriptions and translations of text or speech, are biased, then the result is a warped accessibility, if any accessibility at all.
For example, algorithms can develop sexist or racist traits. Google Translate was accused of gender stereotyping for assuming that all doctors were male and all nurses were female. It would be problematic if AI advised a visually impaired person that their doctor was male when in fact they were not. That person would not be accessing a correctly represented world.
In 2020, South Africa experienced 621,282 contact crimes (murder, attempted murder, sexual offences, assault and robbery). Yet we give off non-verbal cues, called micro-expressions, that signal what we plan to do. AI algorithms integrated with CCTV cameras can detect these expressions among pedestrians and anticipate potential criminal behaviour before it occurs. This is being piloted in India and supports the freedom of safety alongside the adherence to responsibility.
Read the full article by Kim Furman, Marketing Manager, Synthesis, as well as a host of other topical management articles written by professionals, consultants and academics in the June/July 2021 edition of BusinessBrief.