Colloquium explores ethical dilemmas arising from the use of AI and Big Data in research

- Wits University

Philosophical, legal, moral, teaching and other angles unpacked at Wits SARIMA Carnegie 4th annual Global Ethics Day online event on 6 August.

Image from: https://www.nature.com/articles/d41586-024-02420-7 and included in presentation by Professor Kevin Behrens, Steve Biko Centre for Bioethics, titled “Generative AI in Research: The Need for Transparency, Caution & Regulation”, to demonstrate ‘opacity’ and presented at Wits SARIMA Carnegie World Ethics Day online colloquium on 6 August 2024.

Hosted in partnership with the Southern African Research and Innovation Management Association (SARIMA) and the Carnegie Council for Ethics in International Affairs, the online colloquium used legal, philosophical, moral and practical arguments for and against the integration of AI technology into the research environment.

Each of the guest speakers had cautionary words around the potential for the misuse and misapplication of a technology which has become a part of our everyday lives, including in academia.

Professor Lynn Morris, Wits Deputy Vice-Chancellor: Research and Innovation, opened the colloquium, which Eleni Flack-Davison, Head of the Wits Research Integrity Office, chaired.

As AI and Big Data re-shape our approach to modern research, Flack-Davison spoke about the importance that Wits, in particular, places on the ethical application of this rapidly evolving technology.

Wits has no fewer than five ethics committees which monitor and research issues around the use of AI, GenAI and Big Data in order to guide and advise its community.

“This is a very exciting era in which to be undertaking research, whether in the field of science or the humanities,” says Flack-Davison. “But with the extensive capabilities of AI and Big Data also come ethical dilemmas for researchers.”

AI, bias and ‘WEIRD’ science

One of the key reservations expressed by most of the speakers and eloquently outlined by Rosemarie Bernabe, Professor of Research Ethics and Research Integrity at Oslo University’s Centre for Medical Ethics, was the issue around bias and the source of AI’s data.

Bernabe also spoke about the digital divide perpetuated by expensive virtual, augmented and mixed reality technology designed by, and for, white males. Certain population groups are excluded from being part of the AI revolution based on discrimination by wealth, gender and even physical ability.

Dr Sahba Besharati, Senior Lecturer in Cognitive Neuroscience in the Wits School of Human and Community Development, echoed this concern and spoke about so-called ‘WEIRD’ science – Western, Educated, Industrialised, Rich and Democratic – which describes a lack of diversity in research.

Dr Sahba Besharati was named a CIFAR Azrieli Global Scholar in May 2021.

“Medical and behavioural science is objective, but the introduction of AI brings in biases,” says Besharati.

The datasets used by AI are not representative and lack sample diversity. In efforts to level the playing field, Besharati herself, working from the Wits NeuRL, is driving local neuroscience research where even the sensor caps for monitoring neural responses have been adapted to accommodate different types of hair.

“If research is going to have a scientific as well as social impact, we must build trust and include diverse communities in our R&D,” she says.

AI creates its own truth

Professor Kevin Behrens, Director of the Steve Biko Centre for Bioethics at Wits, spoke about AI’s opacity [‘cloudiness’ or lack of transparency] and how it creates its own truth. Unlike conventional computer programs, which respond predictably to given inputs, we have no idea how AI generates its output.

“There is such a thing as automation bias, and putting too much trust in technology,” warns Behrens. “It is essential to apply human oversight in the use of AI, and to be transparent when it comes to disclosing how and when we have used it.”

Sidney Engelbrecht, Senior Research Compliance Specialist at the King Abdullah University of Science and Technology, Saudi Arabia, gave specific examples of the misuse of AI and GenAI, and of sources cited incorrectly, and spoke about the ethics of the scholars using the technology – not just of the technology itself.

AI and academic integrity

Dr Lorraine ‘Lois’ Doherty, a Complementary Lecturer in the School of Mechanical, Industrial and Aeronautical Engineering at Wits, with expertise in applied ethics and bioethics, and Susa van Dyk, Prosecutor and Legal Advisor at the University of the Western Cape, took a philosophical and legal look at AI and Big Data technology respectively.

Doherty’s presentation, titled A little about a lot, questioned how far AI, robotics and associated technology can go without a heart and soul.

Van Dyk spoke about preserving academic integrity and the need for detection tools to be developed at the same pace as the technology, so that the academic and research process is not compromised.

“AI challenges the notion of authorship, originality and trust,” says Van Dyk. “Expertise and judgement are missing from AI technology. It takes knowledge and insight and real engagement with subject matter for researchers and academics to spot errors in work produced via GenAI.”

Guidelines for teaching and learning AI and Big Data

Dr Greig Krull, Senior Lecturer and Academic Director for Digital Learning, and Marike Kluyts, Postgraduate Writing Specialist, both in the Teaching and Learning Unit in the Faculty of Commerce, Law and Management, have come up with a series of draft guidelines to assist users of AI and Big Data.

“It’s important within a team of researchers to have discussions about what is and what isn’t permissible,” says Krull.

Other themes that came through the presentations related to the ethical use of AI and privacy, data security and protection, agency and identity, liability in the event of AI linked errors, and copyright.

With an online audience of over 200, this annual colloquium is an important date for many students, teaching staff and researchers. The ethical dilemmas that arise when choosing to use technology that can simultaneously benefit and endanger humanity were laid bare and vigorously debated.