77652 Offenburg, Germany
18059 Rostock, Germany
University of Heidelberg
69117 Heidelberg, Germany
Dominik L. Michels
KAUST / Stanford University
Thuwal, Saudi Arabia /
Stanford, California, USA
Submissions to the workshop can be made from December 24 until March 2 via the EasyChair submission system. All accepted workshop papers will be published in the ACM Digital Library. This workshop is part of PETRA 2018, the 11th ACM Conference on Pervasive Technologies Related to Assistive Environments. The workshop is a project of the Arab-German Young Academy of Sciences and Humanities (AGYA).
Social robots not only work with humans in collaborative workspaces but also follow us into much more personal settings like home and health care. Does this imply that social robots should be able to interpret and adequately respond to human emotions? Should they simulate emotions? Will science fiction scenarios become true and humans start to fall in love with robots?
The workshop aims to cover the phenomenon of social robots from its historical roots through today’s best practices to future perspectives. Thus, it is interdisciplinary: we welcome contributions not only from computer scientists, but also from researchers in disciplines like psychology, medicine, law, history, and the arts and humanities.
For developments like robots in health care and domestic areas, we will discuss both the technological and the ethical challenges that arise. The aim is to create a guideline: an intended design space for future developments in social robotics.
• Human-centered computing~Empirical studies in HCI • Human-centered computing~Collaborative and social computing devices • Human-centered computing~User studies • Human-centered computing~Empirical studies in interaction design • Human-centered computing~Accessibility theory, concepts and paradigms • Human-centered computing~Accessibility systems and tools • Social and professional topics~History of hardware • Social and professional topics~Codes of ethics • Social and professional topics~Assistive technologies • Computing methodologies~Cognitive robotics • Computing methodologies~Robotic planning • Applied computing~Consumer health
Social Robots; Robotics; Companions; Technology Acceptance.
The link for submission will be published soon.
In contrast to chatbots or avatars, social robots are physically embodied. They interact and communicate with humans or other autonomous physical agents by following social behaviors and rules.
However, will social robots always follow these rules? In the broadest understanding of the term, “social robots” represent the universal longing to create a model of humans matching personal desires and necessities. Nevertheless, already in Karel Čapek’s play R.U.R. (Rossum’s Universal Robots) from the 1920s, where the term robot was probably used for the first time, the machines revolt against their creators.
Obviously, the concept of automated human-like machines has always incited both fear and fascination. The concept is much older than the modern term. Ancient “robots” did not start as a human creation – in ancient stories, gods create the predecessors of today’s machines. The giant bronze automaton Talos, a mythological guardian protecting the island of Crete and Zeus’s beloved Europa, came to life only by divine authority: it was built by Hephaestus, the god of craftsmanship. The 1920 artistic illustration by Sybil Tawse (Figure 1) shows that Talos was envisioned as a kind of robot.
Today’s robots are created by humans – so humanity has once again usurped divine powers. It is not surprising that this continuation of the original sin creates the described mixture of fear and fascination. The emotional potential rises with the proximity between humans and robots. The latter have long left the cage of industrial settings and now work together with humans in collaborative workspaces. Moreover, social robots are starting to follow us into much more personal settings like home and health care.
In such personal or intimate settings, “social robots” are required. They have to look harmless and friendly (Figure 2) to reduce fear – and they should be able to respond adequately to human behavior and ideally also to human emotions: a depressed patient needs a different form of address than an athlete recovering from a bone fracture. However, should social robots also simulate emotions? Should they smile to ease nonverbal communication? As robots do not feel emotions as humans do (lacking the biological substrate of a brain made of neurons and a nervous system spreading throughout a body), showing or simulating feelings could be considered lying.
In the paper and video “Hello World”, shown at the CHI conference 2015 in Seoul, Kyle Overton reflects on this ambiguity of human expectations: while machines are expected to learn much about humans and augment them in numerous ways, they “understand nothing” because “they can’t feel the loss of a loved one”. Overton is right about the contradictory nature of humans’ desires regarding machines and artificial intelligence. However, is it even desirable that social robots have the ability to feel? Do we really need “ethically correct robots”?
Whether we should integrate social norms and emotions into robotic behavior schemes is more an ethical than a technological question. The design space for social robots is not the engineers’ playground.
2. WORKSHOP SCOPE
Although the workshop has a background in computer science, it is interdisciplinary: psychologists, lawyers, economists, medical doctors, historians or researchers from the arts and humanities are welcome to contribute.
An example of the importance of a historical and societal perspective on the development of social robotics is the success of the “Mechanical Turk” (Figure 3). This famous automaton was a fake, operated by a human chess master hiding inside. However, even in the late 18th century, a time when the natural sciences and engineering were already well established, society was surprisingly willing to accept automated machinery with unlikely abilities. The Turk’s creator Wolfgang von Kempelen toured with it throughout Europe and the United States, showing it to nobility and political leaders like Napoleon Bonaparte and Benjamin Franklin.
How could he fool everybody? We think that even today many people look at robots and automata with a special kind of fascination and a willingness to attribute to the machines physical and mental skills far beyond their actual capabilities. Looking at the misconceptions of the past might help to avoid similar patterns of behavior in the present and the future.
Maybe human society has already developed a critical awareness, and a phenomenon like Kempelen’s Turk is only a historical anecdote? Let us consider a recent example: the female humanoid robot called “Sophia”, developed by Hong Kong-based Hanson Robotics (Figure 4). The company claims that the Sophia robot learns and adapts to human behavior using artificial intelligence. Indeed, there are multiple videos of Sophia showing authentic responses to questions. However, these responses are not generated at runtime but pre-programmed – the AI only selects the best-fitting one to create an illusion of understanding. Thus, the robot resembles common chatbots. Its most intriguing feature is its authentic facial expressions, which match the conversation. So just like the Mechanical Turk, Sophia fascinates the audience more by a clever illusion than by an actual ability.
Accordingly, in a widely viewed CNBC interview with its creator David Hanson, the Sophia robot claims that it “hopes to do things such as go to school, study, make art, start a business, even have my own home and family”. In this light, it is not surprising that in October 2017, Sophia became the first robot to receive citizenship of a country: both Saudi Arabia and Hanson Robotics surely appreciated the echo created by this media scoop.
One of many examples for the importance of ethical aspects in the development of social robots is the Paro robot (Figure 5). In health therapy, we could consider robots just a new tool, improving current instruments. While this may be true for robots simply providing physical support, therapeutic robots interacting with patients are no mere tools.
The social robot Paro has supported therapy and care since 2009. As an artificial harp seal, it was deliberately designed to interact with elderly people and with patients in hospitals and nursing homes. It responds to petting with body movements, by opening and closing its eyes, and with sounds. Paro even squeals if handled too roughly. Studies show positive effects on older adults’ activity [7, 9]. However, Paro is not a living being and thus has no “real” feelings. Is it legitimate to make a patient believe that it has? Moreover, what weighs more: the dignity and security of a patient, for example one suffering from Alzheimer’s disease, or animal rights?
If you think that ethical topics regarding social robotics are limited to persons with impairments, like patients in health care or elderly persons, you are mistaken. Scheutz & Arnold provocatively title their 2016 IEEE paper: “Are We Ready for Sex Robots?” So what changes if the “petting” does not apply to a robotic harp seal but to a sex robot like “Synthea Amatus” (Figure 6)?
Even a robot offering sexual services is just a machine – or is it more? What kind of behavior patterns and simulated emotions are acceptable in this use case? How would an artificial intelligence have to be adapted to serve customer needs without violating societal standards? In a potentially problematic way, such robots combine very diverse topic areas: robotics, human rights and dignity – and even women’s rights and gender studies.
The examples above show that several advanced social robots have already been developed, and some are even on the market. Current and future developments will further increase the range of applications. Accordingly, fantastic new possibilities and strange, ethically debatable applications will emerge at the same time.
For discussing such complex topics on a common ground, we recommend using a shared model of acceptability. There are several models to assess the acceptance of new technological developments. A common one is the Technology Acceptance Model (TAM). It was developed by Davis in 1989 and posits that the individual adoption and use of information technology are determined by perceived usefulness and perceived ease of use. Eleven years later, the model was extended by Venkatesh and Davis (TAM2, Figure 7) in an attempt to further decompose acceptance into societal, cognitive and psychological factors.
As Hornbæk & Hertzum explain in a recent review of developments in technology acceptance and user experience, TAM has long since outgrown its original field of research in computer science, and its key constructs have been refined for different disciplines. Additional constructs like perceived enjoyment supplement perceived usefulness and perceived ease of use, adding experiential and hedonic aspects to TAM.
This flexibility of the TAM-model, in our opinion, makes it suitable as a common ground and reference for the workshop contributions and discussions.
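TAM itself is a conceptual questionnaire-based model rather than an algorithm, but its core idea – behavioral intention driven by perceived usefulness and perceived ease of use – can be illustrated with a minimal scoring sketch. The Likert items and weights below are hypothetical assumptions for illustration, not empirically fitted TAM coefficients:

```python
# Illustrative sketch of TAM's core idea: behavioral intention as a
# weighted combination of perceived usefulness (PU) and perceived
# ease of use (PEOU). Items are hypothetical 7-point Likert ratings;
# the weights are assumed values, not fitted model parameters.

def mean(scores):
    return sum(scores) / len(scores)

def behavioral_intention(pu_items, peou_items, w_pu=0.6, w_peou=0.4):
    """Aggregate Likert items (1-7) per construct, then combine them."""
    pu = mean(pu_items)      # perceived usefulness construct score
    peou = mean(peou_items)  # perceived ease of use construct score
    return w_pu * pu + w_peou * peou

# Example: a care robot rated as very useful but only moderately easy
# to use still yields a fairly high intention score.
score = behavioral_intention(pu_items=[6, 7, 6], peou_items=[4, 5, 4])
```

Extensions like TAM2 or perceived enjoyment would, in such a sketch, simply add further weighted constructs to the combination.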
The vast spectrum of social robots in the previous section shows that we are on the edge of something new: artificial intelligence and engineering allow robots with social behaviors to become an everyday phenomenon.
Society has to decide in which way technology should develop. Social robots can surely support activities like cleaning or serving dinner. However, more personal services are subject to debate, for example health care or emotional companionship. Even if the ethical problems in professional service areas could be solved: if human society starts substituting social robots for service and knowledge work, could “artificial intelligence create an unemployment crisis”, as Ford postulates? If a robot “starts a business”, like the Sophia robot claims: will the law allow humans to work for robots? Who is entitled to the money that business generates – and what happens if a robot becomes rich? If a human destroys such a social robot – what is the right punishment?
If we extrapolate the development of service robots for personal use, the substitution of humans can potentially spread to intimate relationships. Relationships with social robots, which simulate love and understanding authentically, even under the most unlikely circumstances, will be much easier than relationships with human partners. This can have a very damaging societal impact.
If one day there is a reliable platform for modular and configurable social robots, criminals might develop criminal robots as artificial assistants. These might harm humans physically or emotionally and even act in groups. Military support robots are already members of fighting teams. Eventually, supporting and saving the life of a fellow human soldier implies killing on the side of the enemy. This is a dangerous direction: killing a human defines the opposite of a social robot: an antisocial robot.
Finally, if social robots are getting smarter and more social throughout the years: do we still have the right to treat them like commodities or slaves – or will there be robot rights, like animal rights and human rights?
In this article, we presented the outline for the workshop “Social Robots. A Workshop on the Past, the Present and the Future of Digital Companions”.
We highlighted the workshop’s interdisciplinary nature and illustrated it by linking examples from history (Mechanical Turk) with phenomena of today (Sophia). We also highlighted the ethical component in the future development of social robots, briefly discussing the use of the Paro robot in elderly care and the sex robot Synthea Amatus.
In a brief discussion, we highlighted some of the most interesting questions arising from the technological advancement of social robots, like the impact on employment, on relationships and on the system of law.
In conclusion, the workshop will include – but is not limited to – the following topics.
- Social Robots in various Use Cases (e.g. domestic, health)
- User Studies with Social Robots
- Societal Acceptance and Rejection of Social Robots
- Artificial Intelligence and Embodied Agents
- Data Management and Data Privacy Issues
- Autonomous Navigation and Locomotion
- Robotic Simulation, Artificial Emotion Simulation Techniques
- Mathematical Modeling and Simulation of Human Affective Behavior
- Natural Language Processing for Human-Robot-Interaction
- Multi-source Data Fusion Methods for Interaction
- History of Robots and Automata
- Best (and Worst) Practices of Today’s Social Robots
- Legal Perspectives on Social Robots
- Design Space for the Development of Social Robots
- Ethical Guidelines for the Development of Social Robots
- Lessons Learned from National or International Projects on Social Robots
5. LIMITATIONS AND FUTURE WORK
We are well aware that the workshop will provide only a first glimpse into the future of social robots. We aim to develop the findings further and gather representative data on how social robots are perceived in different groups of society.
The workshop “Social Robots” is funded by the Arab-German Young Academy of Sciences and Humanities (AGYA). The workshop is initiated and organized by AGYA members Oliver Korn, Christian Fron, and Dominik L. Michels, together with Gerald Bieber. It is part of the AGYA research project “Perspectives on Social Robots”.
The Arab-German Young Academy of Sciences and Humanities (AGYA), at the Berlin-Brandenburg Academy of Sciences and Humanities (BBAW) and at the Academy of Scientific Research & Technology (ASRT) in Egypt, was founded in 2013 as the first bilateral young academy worldwide. AGYA promotes research cooperation among outstanding early-career researchers from all disciplines who are affiliated with a research institution in Germany or any Arab country. AGYA is funded by the Federal Ministry of Education and Research (BMBF).