
Dave Alongi's Reflections on Affective-Cognitive Learning and Decision Making in Robotics

In this document, Dave Alongi shares his thoughts on two academic papers related to affective learning and decision making in robotics. He expresses skepticism toward the idea of creating emotionally intelligent robots, arguing that their resulting lack of precision and determinism could create ethical dilemmas and add unnecessary burdens for their creators. Alongi also discusses the Single-Self concept and its implications for personal responsibility and accountability.


Dave Alongi (dalongi2)
CS 598kgk

Affective-Cognitive Learning and Decision Making: A Motivational Reward Framework for Affective Agents
By: Hyungil Ahn and Rosalind W. Picard

I just read 8 pages and have just about no idea what I read! The paper was heavy on math, code, and big words rather than clearly explaining the authors' motivations and goals. It probably didn't help that I waited until I was done reading to look up the word "affective." I think their main goal was to create a computer/robot that uses emotions and desires in its decision-making process, not just pure facts and rewards from the outside world (I've put a toy sketch of my guess at what that might mean at the end of these notes). This is an interesting idea for making computers/robots seem more "human," and therefore more acceptable to build and, eventually, sell.

However, this is an area of HCI and robotics that I've never really understood or liked. One of the main benefits of computers is their ability to be very precise and quick in decision making and computation, helping humans make better decisions. It seems that this field is trying to make them less precise in order to "fit in" with humans. In reality, they don't really need to fit in, because we shouldn't want to make friends with a machine; machines are built by humans to serve a purpose – to advance humanity and reduce the burden on people. Giving machines emotions seems like a step backward: it could further burden their creators, and it adds uncertainty to an area where there are already ethical issues to be addressed.

Again, I don't have too much to say on this paper because I didn't get too much out of it. I think it's a cool idea, but one headed in a direction the field doesn't need to go just yet. Maybe in the future, when robots become ubiquitous enough that their lack of emotion in decision making starts to affect humans emotionally, this work will matter; but for now it doesn't seem like an area that is needed.

Dave Alongi (dalongi2)
CS 598kgk

The Emotion Machine
By: Marvin Minsky

This was an interesting look into the different ways people have thought about the human mind throughout recent history. One of the things Minsky criticizes about some of psychology's ways of looking at the brain is that they are too simplistic and cover up the details. I find this very odd coming from someone with a computer science background, since one of the main things we learn is to abstract away the details. Granted, psychology can get a little too abstract sometimes, but a little abstraction is a good thing. For instance, the paper critiqued above could have used a little more abstraction and less math/code.

One of the parts of this chapter that I found interesting was the debate over the Single-Self concept. I think one of the reasons people like this model is that the alternative, with a lot of detail about how the mind works, is pretty scary. One path it might go down is that everything is deterministic – that is, we don't actually have free will and are just along for the ride. Something like this has huge ramifications: if someone commits a crime but couldn't have changed their actions, should they really be punished? This is very similar to the problems in robotics ethics: if our actions are predetermined by someone or something else, then why should we be punished?
I much prefer the Single-Self concept for its abstraction, plus the fact that it preserves the idea that everyone is an individual – is unique. This metaphor also lends itself to personal responsibility and accountability rather than the lack of it, which I believe is needed. It is all too common for people to use litigation and blame so that nonexistent entities (corporations, government) take responsibility where they shouldn't have to. I understand that this metaphor is not suitable in robotics, because there we want to be able to blame someone else: if a robot with emotion were to commit a crime, we would want a person to be punished, since we would not feel content with just punishing a robot. So the more deterministic model is more practical in this field. However, I really hope it doesn't turn out to be true of the human mind.
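
Postscript on the Ahn and Picard paper: here is my own toy guess, in Python, at what a "motivational reward framework" might boil down to. To be clear, this is not their actual model – it's just a plain Q-learning update whose reward blends the external payoff with an internal, emotion-like "surprise" signal, and every name and constant below is made up for illustration.

# Toy sketch (my guess, not Ahn and Picard's actual framework): a
# standard Q-learning update where the reward mixes the external
# payoff with an internal, emotion-like signal.

ALPHA = 0.1          # learning rate
GAMMA = 0.9          # discount factor
AFFECT_WEIGHT = 0.5  # made-up weight: how much the internal signal counts

q_table = {}         # (state, action) -> estimated value

def affective_signal(expected, actual):
    # Crude stand-in for "emotion": doing better than expected feels
    # good (positive signal), doing worse feels bad (negative).
    return actual - expected

def update(state, action, ext_reward, next_state, actions):
    old = q_table.get((state, action), 0.0)
    best_next = max((q_table.get((next_state, a), 0.0) for a in actions),
                    default=0.0)
    # Blend the outside world's reward with the internal affective one,
    # then apply the usual Q-learning update rule.
    reward = ext_reward + AFFECT_WEIGHT * affective_signal(old, ext_reward)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Example use (hypothetical states/actions):
# update("s0", "left", ext_reward=1.0, next_state="s1",
#        actions=["left", "right"])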