“Experience is the mother of wisdom” is an idiom that encourages people to accept and learn from whatever happens in their lives. Experience is the best teacher, and the experience of making mistakes most of all. Mistakes can happen in every field of work, even among people who research and develop AI, like my team. To control and limit those mistakes, we have to take AI/ML model risk management seriously at all times.
AI is an innovative technology that people may be tempted to rely on and trust too much. So, as Benjamin Franklin said, “If you fail to plan, you are planning to fail”: AI researchers and developers need to plan for failure in order to create better technologies that meet people’s expectations. At the same time, users and the general public should be aware of AI’s limitations and of what can happen when AI goes wrong.
In this article, I would like to summarize the reasons for AI failures and the lessons learned from them. According to MITRE, a non-profit organization that works with the United States government, there are six reasons for AI failures and five lessons to learn from them.
Six false human mindsets that cause AI failures
1. Humans sometimes perceive that AI has more potential than it really does, or that it can do things perfectly compared with human ability: for example, that it predicts more accurately, can think or analyze by itself, and doesn’t need human supervision.
2. The real world contains many environmental elements and variables. AI does best at narrow tasks, such as recognizing objects or translating text, using specific data. Out in the real world, however, conditions differ from situation to situation: images may be too blurry, the light may be too dim, or sentences may be phrased in ways the AI cannot process or has never seen before. These environmental variables degrade AI’s performance. On top of that, insufficient security can invite cyberattacks, which are one of the main causes of AI failures.
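To illustrate the environmental-variable problem, one rough approach is to measure a model’s accuracy on perturbed copies of its test set (blurred, darkened). The sketch below is a minimal illustration: `model.predict` and the list of labeled PIL images are assumed interfaces, not part of any real framework.

```python
from PIL import Image, ImageFilter, ImageEnhance

# Perturbations that mimic the real-world conditions mentioned above:
# blurry images and low light.
def blur(img: Image.Image) -> Image.Image:
    return img.filter(ImageFilter.GaussianBlur(radius=3))

def darken(img: Image.Image) -> Image.Image:
    return ImageEnhance.Brightness(img).enhance(0.3)  # 30% brightness

def accuracy_under(perturb, model, samples):
    """Accuracy of `model` when every input is perturbed first.

    `model.predict` and `samples` (a list of (PIL image, label) pairs)
    are hypothetical interfaces used only for this sketch.
    """
    correct = sum(model.predict(perturb(img)) == label
                  for img, label in samples)
    return correct / len(samples)

# Comparing clean vs. perturbed accuracy shows how much the model
# depends on ideal lab conditions:
# for name, p in [("clean", lambda x: x), ("blur", blur), ("dark", darken)]:
#     print(name, accuracy_under(p, model, test_samples))
```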
3. The data fed into an AI system may be too narrow and fail to cover different contexts, which leaves the AI unable to understand context precisely. For example, when a facial recognition system is trained mostly on images of lighter skin tones, it may not recognize darker skin tones reliably. Moreover, in this era of AI competition between countries, people may be in such a hurry to develop AI systems that they forget to establish careful quality control.
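A simple check that helps here is counting how many training examples each group contributes before training begins. The group tags and numbers below are illustrative assumptions, not a real dataset.

```python
from collections import Counter

def audit_balance(labels, threshold=0.5):
    """Flag groups that are badly under-represented in training data.

    `labels` is a list of group tags (e.g. skin-tone categories) attached
    to each training image; the annotation scheme is assumed for this sketch.
    """
    counts = Counter(labels)
    expected = len(labels) / len(counts)  # count if perfectly balanced
    return {group: n for group, n in counts.items()
            if n < threshold * expected}

# Example: a face dataset skewed toward lighter skin tones.
tags = ["light"] * 900 + ["medium"] * 80 + ["dark"] * 20
print(audit_balance(tags))  # {'medium': 80, 'dark': 20}
```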
4. Some make the mistake of thinking that once the AI development process is finished, the work is complete. In reality, AI development is never finished. Developers need to keep improving their AI to strengthen security and to handle issues that will inevitably come up in the future.
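One concrete piece of that ongoing work is monitoring whether live inputs still resemble the training data. Below is a minimal sketch using SciPy’s two-sample Kolmogorov-Smirnov test; the data is synthetic and purely illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted(train_feature: np.ndarray, live_feature: np.ndarray,
            alpha: float = 0.01) -> bool:
    """Has the live distribution of a feature drifted away from the
    distribution the model was trained on?"""
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha  # small p-value: distributions likely differ

# Illustrative data only: production inputs shifted upward over time.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5000)
live = rng.normal(loc=0.5, scale=1.0, size=5000)
print(drifted(train, live))  # True: time to retrain or investigate
```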
5. AI mistakes may also happen because people depend on and trust the technology too much. Humans are inclined to trust and be pleased with unfamiliar technologies more than they should be, a form of cognitive bias.
6. Some AI mistakes happen when the AI lacks an extensive support system. Several elements are necessary to build a secure AI system. Beyond data and algorithms, development needs guidance from other perspectives, such as management, rules, and social effects. These elements form an ecosystem around AI development that covers diverse dimensions and lowers the future risks to society. And because we can never supply an AI with truly exhaustive data, managing the risks of its effects is essential.
Five lessons learned from AI failures
Multidisciplinary: AI development requires multidisciplinary, thorough thinking, covering both technical and behavioral perspectives.
Multistakeholder: AI applications affect more than just their users. As a result, data from everyone affected needs to be collected so that the AI is developed more appropriately and unexpected effects are kept to a minimum.
Bias: Nothing in the world is free of bias. Data is collected by humans, who all carry some degree of bias. Human assumptions influence AI development, so bias may hide in the data and affect AI/ML model development (see the sketch after this list).
Documentation: Documenting AI development is an effective way to reduce system errors. Recording the purposes, the pros and cons of the design, past decisions, and the assumptions behind the system helps greatly with inspection when problems happen.
Accountability: The level of accountability depends on the scale of the AI’s effects. The more influence an AI has, whether on property, mental well-being, or physical safety, the more accountable and careful the development process needs to be.
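To make the bias lesson above concrete, here is a minimal sketch of a per-group accuracy check. The `records` format and the numbers are illustrative assumptions, not real benchmark results; a large gap between groups is a signal to audit the training data.

```python
def group_accuracy(records):
    """Per-group accuracy for a classifier's predictions.

    `records` is an assumed list of (group, prediction, true_label)
    tuples; the grouping scheme is hypothetical for this sketch.
    """
    totals, hits = {}, {}
    for group, pred, label in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (pred == label)
    return {g: hits[g] / totals[g] for g in totals}

# Illustrative numbers only, not real evaluation results.
results = [("light", 1, 1)] * 95 + [("light", 0, 1)] * 5 \
        + [("dark", 1, 1)] * 70 + [("dark", 0, 1)] * 30
print(group_accuracy(results))  # {'light': 0.95, 'dark': 0.7}
```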
In the next article, I will dive into more detail about how to reduce and avoid AI mistakes. Stay tuned!
References:
https://sites.mitre.org/aifails/