Artificial intelligence (AI) is all the rage in the public eye lately. Yet despite its rapid development, how AI can be harnessed to the advantage of our everyday lives remains an elusive question that deserves scientists' attention. While in theory AI can replace, or even displace, human beings from their positions, the challenge for industries and institutions is how to take advantage of this technological advancement rather than drown in it.
Recently, a team of researchers at the Hong Kong University of Science and Technology (HKUST) conducted an ambitious study of AI applications in education, examining how AI could enhance grading while observing how human participants behaved in the presence of a computerized companion. They found that teachers were generally receptive to AI's input, until the two sides came to an argument over who should have the final say. This closely resembles how human beings interact with one another when a newcomer ventures into established territory.
The research was conducted by HKUST Department of Computer Science and Engineering Ph.D. candidate Chengbo Zheng and four of his teammates under the supervision of Associate Professor Xiaojuan Ma. They developed an AI group member named AESER (Automated Essay ScorER) and divided twenty English teachers into ten groups to investigate AESER's impact in a group discussion setting, where the AI would contribute to deliberation, ask and answer questions, and even vote on the final decision. In this study, designed along the lines of the controlled “Wizard of Oz” research method, a deep learning model and a human researcher jointly produced AESER's input, and AESER then exchanged views and held discussions with the other participants in an online meeting room.
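The team's actual system is not reproduced here, but the Wizard-of-Oz arrangement can be pictured with a short sketch: a model drafts a score and rationale, and a hidden human researcher approves or rewrites the draft before AESER posts it to the group. Everything below, from the function names to the length-based scoring heuristic, is a hypothetical illustration rather than the researchers' implementation.

```python
# Minimal Wizard-of-Oz sketch: a model drafts AESER's contribution and a
# human "wizard" vets it before it reaches the meeting room. All names and
# the toy scoring heuristic are hypothetical, not taken from the HKUST study.
from dataclasses import dataclass

@dataclass
class Draft:
    score: int       # proposed essay score on an assumed 1-6 scale
    rationale: str   # justification the teachers will see

def model_propose(essay: str) -> Draft:
    """Stand-in for the deep learning scorer; here, a trivial length heuristic."""
    score = min(6, max(1, len(essay.split()) // 100))
    return Draft(score, f"I propose {score}/6: the essay's development supports it.")

def wizard_review(draft: Draft) -> Draft:
    """The hidden researcher approves the draft or types a replacement."""
    answer = input(f'Post "{draft.rationale}"? [y to approve, or type a rewrite] ')
    return draft if answer.strip().lower() == "y" else Draft(draft.score, answer)

def post_to_meeting(message: str) -> None:
    """Stand-in for sending a message to the online meeting room."""
    print(f"AESER: {message}")

if __name__ == "__main__":
    sample_essay = "In this essay I will argue that technology reshapes learning. " * 40
    post_to_meeting(wizard_review(model_propose(sample_essay)).rationale)
```

The point of the human-in-the-loop step is methodological: participants believe they are debating with an autonomous AI, while the researchers keep its behavior controlled and consistent across groups.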
While the team expected AESER to promote objectivity and provide novel perspectives that might otherwise be overlooked, potential challenges soon emerged. First, there was the risk of conformity: once the AI weighed in, a majority could quickly form and shut down further discussion. Second, the views AESER offered were found to be rigid, even stubborn, which frustrated participants once they realized an argument could never be “won.” Many also felt that AI's input should not be given equal weight, and that AI is better suited to the role of an assistant to actual human work.
“At this stage, AI is deemed somewhat ‘stubborn’ by human collaborators, for better and for worse,” noted Prof. Ma. “On the one hand, because AI is stubborn, it does not fear to express its opinions frankly and openly. On the other hand, human collaborators feel disengaged when they cannot meaningfully persuade the AI to change its view. Humans also hold varying attitudes towards AI: some consider it a single intelligent entity, while others regard it as the voice of a collective intelligence that emerges from big data. Discussions about issues such as authority and bias thus arise.”
The immediate next step for the team involves expanding the study's scope to gather more quantitative data, which will provide more measurable and precise insights into how AI affects group decision-making. The team is also looking to incorporate large language models (LLMs) such as ChatGPT into the study, which could bring new insights and perspectives to group discussions.
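As a rough illustration of what that extension might look like, the sketch below drops an LLM into the discussion loop in AESER's place. It assumes the openai Python client (version 1.x) and an OPENAI_API_KEY environment variable; the model name, system prompt, and sample exchange are illustrative stand-ins, not details from the study.

```python
# Illustrative sketch of an LLM standing in for AESER in a group discussion.
# Assumes the openai Python client (v1.x) with OPENAI_API_KEY set in the
# environment; the prompt and model choice are stand-ins, not from the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [
    {"role": "system",
     "content": ("You are AESER, an automated essay scorer on a panel of "
                 "English teachers. Propose a score from 1 to 6, justify it "
                 "briefly, and engage with objections rather than simply "
                 "repeating your position.")},
    {"role": "user",
     "content": ("Teacher A: I'd give this essay a 5. The thesis is clear "
                 "and the evidence is well chosen.")},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print("AESER:", reply.choices[0].message.content)
```

Unlike the scripted Wizard-of-Oz setup, an LLM participant can follow the thread of a conversation and soften or defend its stance turn by turn, which is precisely the flexibility whose effect on group dynamics the team now wants to measure.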