[Eeglablist] [Newsletter] rTAIM Seminar #13 | Ethical Agent or Mere Tool?: A Cautioning Argument for “Moral” Artificial Intelligence in Healthcare | 30 October 2024, 15h00, Online
steven gouveia
stevensequeira92 at hotmail.com
Mon Oct 28 09:03:15 PDT 2024
rTAIM (Rebuilding Trust in AI Medicine)
Monthly Seminars
Seminar #13
Ethical Agent or Mere Tool?: A Cautioning Argument for “Moral” Artificial Intelligence in Healthcare
Jordan Joseph Wadden (Unity Health Toronto - Canada)
Following the 12th rTAIM Online Seminar (by Francesco Prinzi, University of Palermo), we are happy to announce the first seminar of the 2024/2025 academic year, the 13th rTAIM Online Seminar, with the participation of Jordan Joseph Wadden (Unity Health Toronto, Canada), on 30 October 2024, 15h00-16h30 (Lisbon time), via Zoom.
ONLINE | Zoom link: https://videoconf-colibri.zoom.us/j/95294324802?pwd=Lt2o4boR0iHYWFtXvu6KdrDoiMk51c.1
ID: 952 9432 4802 | Password: 506705
Seminar #13 Abstract: To talk of moral machines presupposes that researchers understand enough about how humans arrive at their own conceptions of the good. Moral epistemology and moral psychology have struggled to describe these facts and processes, and we still lack a generalizable account. Nonetheless, there appears to be an assumption that ethics can be instilled directly into artificial intelligence. This is of special concern in healthcare, because algorithms and systems are being designed to recommend treatments and care pathways for patients. But finding the right treatment for a patient requires more than an analysis of the medical facts. Healthcare teams must also weigh preferences, goals, psycho-social needs, religious and spiritual beliefs, quality of life, and so on. Each of these elements has a unique, and often individualized, moral dimension. This paper presents an argument that healthcare artificial intelligence ought to be viewed as a mere tool rather than as a (quasi-)moral agent that could act alone in the clinical encounter. I argue that some researchers have been distracted by the AI and robot rights debates, leading them to think it is possible to code ethics into a system even though humans still do not understand how we come to hold our own moral beliefs.
Short Bio: Jordan Joseph Wadden, MA, PhD, HEC-C, is a clinical ethicist at Unity Health Toronto in Canada. He is an assistant professor (status only) in the Department of Family and Community Medicine at the University of Toronto. His research focuses on the ethical implementation of artificial intelligence and machine learning in healthcare. He was the winner of the 2024 Ethics of AI Award at the 3rd International Conference on the Ethics of Artificial Intelligence.
rTAIM Seminars: https://ifilosofia.up.pt/activities/rtaim-seminars
https://trustaimedicine.weebly.com/rtaim-seminars.html
Organisation:
Steven S. Gouveia (MLAG/IF)
Mind, Language and Action Group (MLAG)
Instituto de Filosofia da Universidade do Porto – UIDB/00502/2020
Fundação para a Ciência e a Tecnologia (FCT)
____________________________________________
Instituto de Filosofia (UI&D 502)
Faculdade de Letras da Universidade do Porto
Via Panorâmica s/n
4150-564 Porto
Tel. 22 607 71 80
E-mail: ifilosofia at letras.up.pt
http://ifilosofia.up.pt/