Digital Humanist | Valentina Rossi

SEPTEMBER 2024

WHAT DOES HUMAN-CENTRED AI MEAN IN CONCRETE TERMS?

SUMMARY.

Human-centred AI (HCAI) emphasises aligning AI development with human needs, values, and experiences, yet many systems remain focused on technological efficiency over ethical considerations. Current AI designs often overlook user involvement and social impact, prioritising accuracy and productivity. To be truly human-centred, AI should enhance human capabilities, ensure transparency, and consider fairness, privacy, and social responsibility from the outset.

WHY ISN'T AI HUMAN-CENTRED BY DEFAULT?

Since artificial intelligence is created by humans for humans, it’s natural to wonder why it isn’t inherently aligned with human needs, rights, and values. The answer lies in the history and design priorities of these technologies: developers have traditionally prioritised technical functionality and efficiency (Bingley et al., 2023).

AI systems were originally designed to tackle narrowly defined technical problems. Machine learning, for example, is inherently data-driven, aiming to optimise outcomes based on input data rather than prioritising ethical considerations or human well-being.
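To make this concrete, here is a minimal sketch, written purely for illustration rather than drawn from any cited system, of the objective a typical supervised classifier is trained to minimise. Nothing about fairness, privacy, or well-being appears in it unless developers add such terms deliberately.

```python
import math

def cross_entropy(predicted_prob: float, label: int) -> float:
    """Standard per-example loss: it rewards fitting the data, nothing else."""
    eps = 1e-12
    p = min(max(predicted_prob, eps), 1 - eps)
    return -math.log(p) if label == 1 else -math.log(1 - p)

def training_objective(predictions, labels):
    """What the optimiser actually minimises: average loss over a dataset.
    Fairness, privacy, and human well-being are simply absent from this
    objective unless a developer adds them as explicit terms or constraints."""
    return sum(cross_entropy(p, y) for p, y in zip(predictions, labels)) / len(labels)

# The objective is indifferent to *who* the errors fall on.
print(training_objective([0.9, 0.2, 0.8], [1, 0, 1]))
```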

Moreover, most AI development is funded and carried out by private companies whose primary incentives are profitability and productivity, not human-centred outcomes.

Creating AI that genuinely centres on human needs involves understanding complex human behaviours, needs, and expectations — far more challenging and time-consuming than optimising straightforward metrics like accuracy within a specific dataset.

Despite a growing body of research in human-centred AI (HCAI), it remains common for AI-driven projects to overlook human agency and user involvement or only consider them in the later stages of development (Capel and Brereton, 2023).

PILLARS OF HUMAN-CENTRED AI DESIGN.

Human-centred AI means prioritising human needs, values, and experience throughout the entire process of designing, developing, and implementing AI systems.

Rather than focusing solely on technological advancement or improving algorithmic performance, human-centred AI aims to create systems that integrate with, empower, and enhance human abilities, while also ensuring human wellbeing, autonomy, and control (Riedl, 2019).

SOCIETAL IMPACT

Human-centred AI recognises that AI systems operate within a broader social context and strives to create a positive social impact. This involves considering factors such as fairness, justice, social responsibility, and the potential for unintended consequences. The social impact of AI shapes the user experience, even when developers do not treat it as a priority (Geyer et al., 2022).

USER EXPERIENCE

Prioritising the experience of people using AI systems is essential for human-centred AI. This involves ensuring that AI systems are usable, understandable, reliable, and meet users’ needs. For instance, users may prioritise the AI’s ability to understand them, while developers might focus on enhancing users’ ability to understand the AI (Bingley et al., 2023).

HUMAN VALUES

AI systems should be designed to align with human values, such as fairness, transparency, privacy, and accountability. This requires careful examination of potential biases and ethical implications during AI development, as well as mechanisms to ensure that AI systems operate responsibly and ethically (Capel and Brereton, 2023).
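One way to turn the “examination of potential biases” into a routine engineering step is to measure model outcomes across demographic groups. The sketch below is a hypothetical audit helper computing a simple demographic parity gap; it is one of many possible fairness metrics, and a real assessment would combine several of them with domain expertise.

```python
# Hypothetical bias check: demographic parity difference.
# Real audits would also look at equalised odds, calibration, and more.

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rate between any two groups.

    decisions: list of 0/1 model outcomes
    groups:    list of group labels, aligned with decisions
    """
    rates = {}
    for d, g in zip(decisions, groups):
        total, positives = rates.get(g, (0, 0))
        rates[g] = (total + 1, positives + d)
    positive_rates = [p / t for t, p in rates.values()]
    return max(positive_rates) - min(positive_rates)

decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)
print(f"parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50; flag if above a threshold
```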

AUGMENTED INTELLIGENCE

Human-centred AI focuses on using AI to complement and enhance human abilities, rather than replace humans, within a collaborative paradigm of augmented intelligence. This involves designing AI systems that enable people to perform tasks more effectively, make better decisions, and solve complex problems collaboratively (Capel and Brereton, 2023).

HUMAN OVERSIGHT

As AI systems grow more complex and capable, it is essential to maintain human control over their operation and decision-making processes. This involves designing AI systems that are transparent, explainable, and open to human intervention and oversight when necessary (Schmager, Pappas, and Vassilakopoulou, 2023).
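In engineering terms, oversight is often implemented as a human-in-the-loop gate: predictions the model is unsure about are routed to a person, and every decision is logged for later audit. The sketch below illustrates this pattern; the threshold, field names, and logging format are assumptions for illustration, not a prescribed design.

```python
# Hypothetical human-in-the-loop gate; names and threshold are illustrative.
import json
import time

CONFIDENCE_THRESHOLD = 0.90  # below this, a person decides, not the model

def decide(case_id: str, prediction: str, confidence: float, audit_log: list) -> str:
    """Return the system's action; defer to a human when confidence is low."""
    action = "auto" if confidence >= CONFIDENCE_THRESHOLD else "escalate_to_human"
    # Every decision is recorded so that humans can inspect and override it later.
    audit_log.append({
        "time": time.time(),
        "case": case_id,
        "prediction": prediction,
        "confidence": confidence,
        "action": action,
    })
    return action

log: list = []
print(decide("case-001", "anomaly", 0.97, log))  # auto
print(decide("case-002", "anomaly", 0.62, log))  # escalate_to_human
print(json.dumps(log, indent=2))
```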

EXAMPLES OF HUMAN-CENTRED AI SYSTEMS.

Although the literature discusses the principles and challenges of HCAI at length, it offers few concrete examples of AI systems that fully embody these ideals. We can, however, sketch some application areas where these principles would apply.

For instance, in healthcare, AI is being used as a decision-support tool in medical imaging and diagnosis systems. In the context of HCAI, this would involve engaging patients and healthcare providers in the design and development of these systems, from the earliest stages. Such involvement ensures that the systems are usable, understandable, and tailored to meet users’ specific needs. Moreover, AI-based healthcare systems should be designed to augment healthcare providers’ capabilities, not replace them. An AI system might analyse medical images to detect anomalies, but the final decision regarding diagnosis and treatment would always rest with a human doctor (Capel and Brereton, 2023).

As another example, designing an AI-powered chatbot like ChatGPT with a human-centred approach would mean considering inclusivity and cultural sensitivity across diverse languages and dialects (Bommasani et al., 2021). Currently, ChatGPT performs poorly in non-Western languages, struggling with nuanced expressions and local context, which can lead to misunderstandings and unintentional biases (Zhu et al., 2023). Minimising “hallucinations” — where the chatbot generates inaccurate or fabricated information — is likewise essential for building trust and reliability (Jones, 2024). Continuous user feedback loops would further enhance its contextual understanding and adaptability. Privacy in data handling must also be a core priority, ensuring user information remains secure and confidential. Finally, reducing overreliance on the system through transparency about its limitations would set realistic expectations, promoting a fairer and more balanced user experience.
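To illustrate two of these ideas, the sketch below shows hypothetical guardrails a chatbot deployment might add: an explicit limitation notice attached to every answer (transparency about limits) and redaction of obvious personal data before user feedback is stored (privacy by default). The wording and patterns are illustrative assumptions only, not a description of how any existing chatbot works.

```python
# Hypothetical chatbot guardrails; regexes and notice text are illustrative.
import re

LIMITATION_NOTICE = ("Note: I can make mistakes, especially outside "
                     "well-represented languages and topics. Please verify "
                     "important information.")

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def present_answer(answer: str) -> str:
    """Set realistic expectations by disclosing limitations with every reply."""
    return f"{answer}\n\n{LIMITATION_NOTICE}"

def store_feedback(feedback: str, store: list) -> None:
    """Keep the feedback loop while treating privacy as a default:
    strip obvious personal identifiers before anything is persisted."""
    redacted = EMAIL.sub("[email]", PHONE.sub("[phone]", feedback))
    store.append(redacted)

feedback_store: list = []
print(present_answer("Paris is the capital of France."))
store_feedback("Great answer! Reach me at jane@example.com or +44 20 7946 0958.",
               feedback_store)
print(feedback_store)  # stored without the email address or phone number
```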

HOW WELL ARE WE DOING?

The current situation is complex. On one hand, there is growing awareness of the importance of HCAI, with substantial efforts in research and development. Numerous guidelines, frameworks, and principles have been introduced by governments, organisations, and researchers, such as the European Union’s AI Act or its Ethics Guidelines for Trustworthy AI.

However, many challenges remain, and AI is not yet fully human-centred in practice. While developers are increasingly aligning with HCAI guidelines by focusing on ethics and safety, they often place less emphasis on the social impact of AI — a critical factor in ensuring positive user experiences.

Moreover, translating HCAI principles into practice has proven difficult. Existing guidelines are frequently criticised for being too broad and lacking the detailed guidance necessary for practical development (Bingley et al., 2023).

Additionally, many users may not clearly understand what AI truly entails, which can affect their ability to assess and interact with AI systems effectively. For this reason, a foundational level of literacy — similar to the rise of computer literacy in the 1990s — is essential, enabling people to engage with these technologies responsibly and effectively (Markauskaite et al., 2022).

In conclusion, while the increasing focus on HCAI is encouraging, there is still a long way to go to make AI genuinely human-centred.

The key to success lies in prioritising human needs, values, and experiences and developing AI systems that enhance human capabilities, promote social wellbeing, and contribute to a better future for all, including the environment in which we live.

Bingley, W. J., Curtis, C., Lockey, S., Bialkowski, A., Gillespie, N., Haslam, S. A., … & Worthy, P. (2023). Where is the human in human-centered AI? Insights from developer priorities and user experiences. Computers in Human Behavior, 141, 107617.

Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., … & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.

Capel, T., & Brereton, M. (2023, April). What is human-centered about human-centered AI? A map of the research landscape. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-23).

Geyer, W., Weisz, J., Pinhanez, C. S., & Daly, E. (2022). What is human-centered AI. IBM Research Blog, 31.

Jones, N. (2024). Bigger AI chatbots more inclined to spew nonsense – and people don’t always realize. Nature.

Markauskaite, L., Marrone, R., Poquet, O., Knight, S., Martinez-Maldonado, R., Howard, S., … & Siemens, G. (2022). Rethinking the entwinement between artificial intelligence and human learning: What capabilities do learners need for a world with AI?. Computers and Education: Artificial Intelligence, 3, 100056.

Riedl, M. O. (2019). Human-centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies, 1(1), 33-36.

Schmager, S., Pappas, I., & Vassilakopoulou, P. (2023). Defining human-centered AI: a comprehensive review of HCAI literature.

Zhu, W., Lv, Y., Dong, Q., Yuan, F., Xu, J., Huang, S., … & Li, L. (2023). Extrapolating large language models to non-English by aligning languages. arXiv preprint arXiv:2308.04948.