Title: What Happens When a Robot Lies to You? Investigating Aspects of Prosocial Intelligent Agent Deception Towards Humans

 

Date: November 12th, 2024 

Time: 4:30 PM – 6:00 PM EST (in-person refreshments start at 4:00 PM)

Location:

In Person: Tech Square Research Building (TSRB) Auditorium (Room 118)

Zoom link: https://gatech.zoom.us/j/94384708502?pwd=1NyaiLqtNF6SYzxyo1g93QxKgDNm6q.1 

Meeting ID: 943 8470 8502

Passcode: 361788

 

Kantwon Rogers 

Ph.D. student in Computer Science 

School of Interactive Computing 

Georgia Institute of Technology 

https://www.kantwon.com/ 


Committee:
Dr. Sonia Chernova – (co-advisor) College of Computing, Georgia Institute of Technology 

Dr. Ayanna Howard – (co-advisor) College of Computing, Georgia Institute of Technology / College of Engineering, Ohio State University

Dr. Ashok Goel – College of Computing, Georgia Institute of Technology 

Dr. Harish Ravichandar – College of Computing, Georgia Institute of Technology 

Dr. Selma Šabanović – School of Informatics and Computing, Indiana University Bloomington 

Dr. Marynel Vázquez – Department of Computer Science, Yale University 

 

Abstract 

People across many societies are explicitly taught some form of the adage “honesty is the best policy”, but is that a lie? Telling the truth is not always helpful, and lying is not always harmful. In truth, everyone lies. We lie to help ourselves, and we lie to help others. We lie in both serious and inconsequential situations. Lying is a foundational part of how people interact with each other, and accepted members of society are successfully able to navigate the highly nuanced norms of social deception. 

Robots and artificially intelligent systems are increasingly being placed within our societies, and in some contexts, they are expected to interact with humans socially. People must trust that robots are functionally competent to complete tasks while also being socially competent at understanding the social conventions that may favor particular strategies over others. If people often successfully choose lying as the best policy in certain situations, it then follows that an intelligent agent that is designed to learn from humans and exhibit social competency may replicate expected lying behavior as it becomes fully integrated into social settings. 

In this thesis, I explore robots that lie to benefit others and how deception influences people’s interactions with and perceptions of robots. My work examines how expectation management, agent design and presence, and the aftermath of deception shape human responses, while also exploring how people interact with autonomous deceptive agents over time.