We conducted an experiment to investigate whether computers can motivate users to change their behavior. By leveraging a social dynamic called the "rule of reciprocity," this experiment demonstrated that users provided more helping behavior to a computer that had helped them previously than to a different computer. These users also worked longer, performed higher-quality work, and reported feeling happier. Conversely, the data provide evidence of a retaliation effect.
Keywords: Reciprocity, retaliation, agents, persuasion, influence, social dynamics, computers are social actors, media equation, experiments, empirical studies
© 1997 Copyright on this material is held by the authors.
As more people use interactive technologies for self-improvement and self-regulation, our need to understand how computers might elicit constructive behavior change in users becomes ever more critical. For example, how can a health promotion application help users comply with a regimen prescribed by their doctors? How can educational software persuade students to work longer or harder? How can a financial management program motivate users to manage their money more wisely?
One unexplored angle for promoting behavior change through HCI is by leveraging social dynamics. Many social dynamics in human-human interactions bring about behavior change [1]. Why can't these human-human social dynamics be transferred to HCI?
Evidence that this approach might work comes from experiments showing that people follow certain social rules and dynamics when interacting with computers [5]. For example, people respond to flattery from computers much like they respond to flattery from other humans [2], and people react to limited computer "personalities" much like they react to human personalities [4]. Although these studies show significant and provocative attitudinal differences, to this point no research on using social dynamics in HCI has clearly demonstrated the ability to change user behavior.
To investigate whether computers can motivate users to change their behavior, we designed a controlled experiment of human-computer interaction that leverages a powerful social dynamic: the rule of reciprocity. The rule of reciprocity states that "people should help those who help them" [3]. According to anthropologists and social psychologists, the rule of reciprocity exists in all cultures and carries tremendous power [1,3]. However, no one has investigated whether the rule of reciprocity also applies in human-computer interactions.
We randomly assigned 76 subjects to one of four conditions in a 2 × 2 balanced, between-subjects design (see Table 1).
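The balanced assignment described above can be sketched in code. The factor levels and the subject count (76, which divides evenly into four cells of 19) come from the paper; the shuffling scheme itself is an assumption for illustration.

```python
import random

# Factor levels from the 2 x 2 between-subjects design.
HELP_LEVELS = ("high help", "low help")              # help quality on task 1
COMPUTERS = ("same computer", "different computer")  # machine used on task 2

def assign_conditions(n_subjects, seed=None):
    """Return a shuffled list of (help, computer) condition tuples,
    balanced as evenly as n_subjects allows across the four cells."""
    cells = [(h, c) for h in HELP_LEVELS for c in COMPUTERS]
    # Cycle through the four cells so each is used equally often,
    # then shuffle so assignment order is random.
    assignments = [cells[i % len(cells)] for i in range(n_subjects)]
    random.Random(seed).shuffle(assignments)
    return assignments

conditions = assign_conditions(76, seed=0)
```

With 76 subjects, each of the four cells receives exactly 19 subjects.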
The experiment consisted of two ostensibly unrelated tasks. The first task created a situation in which the computer could help the subject. In this case, the computer performed an ostensible web search and provided information the subject needed. Half the subjects received very useful information ("high help") from the computer, and half received mediocre information ("low help").
The second task created a situation in which the subject could help the computer. In this case, the subjects could provide information to the computer on how they perceived colors on the screen, in an ostensible effort to help the computer create a color palette to match human perception. Half the subjects worked with the same computer as before, and half worked with a different, but identical, computer. This second task allowed subjects to choose how much, or how little, help to provide the computer.
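The second task yields the study's three behavioral measures: comparisons completed, time on task, and number correct. A minimal sketch of how such a trial log could be kept follows; the class and method names are hypothetical, since the paper does not describe its software.

```python
import time

class ColorTaskLog:
    """Hypothetical logger for the color comparison task's three
    behavioral measures: count completed, time on task, number correct."""

    def __init__(self):
        self.completed = 0
        self.correct = 0
        self._start = time.monotonic()

    def record(self, subject_choice, reference_match):
        """Record one color comparison the subject chose to complete."""
        self.completed += 1
        if subject_choice == reference_match:
            self.correct += 1

    def seconds_on_task(self):
        """Elapsed time since the task began, in seconds."""
        return time.monotonic() - self._start

log = ColorTaskLog()
log.record("teal", "teal")  # a correct comparison
log.record("navy", "teal")  # an incorrect comparison
```

Because subjects decide when to stop, all three measures vary freely, which is what makes them usable as dependent variables.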
Table 1: Variable values and the resulting four conditions

|  | Same computer for both tasks | Different computer for each task |
|---|---|---|
| High help from computer on first task | Reciprocity condition | Reciprocity-control condition |
| Low help from computer on first task | Retaliation condition | Retaliation-control condition |
We predicted that subjects in the reciprocity condition would provide more help to a computer than subjects in the reciprocity-control condition. We also predicted that subjects in the retaliation condition would provide less help than subjects in the retaliation-control condition.
Table 2 shows the results of t-tests comparing the reciprocity and reciprocity-control conditions. The only difference between these two conditions was that reciprocity subjects worked with the same computer on both tasks, while control subjects worked with different, though identical, computers.
Table 2: Behavioral measures of reciprocity

| Measure | Reciprocity condition mean | Reciprocity-control condition mean | Reciprocity effect (Reciprocity vs. Control), t(36) |
|---|---|---|---|
| Number of color comparisons subject completed for computer | 11.0 | 5.9 | 14.60*** |
| Time subject spent on color comparison task (in seconds) | 104.4 | 60.3 | 12.15*** |
| Quality of performance on color task, measured by number correct | 4.47 | 3.19 | 6.83** |

(** p < .01; *** p < .001)
Table 2 shows three behavioral effects that support our hypothesis on reciprocity. First, reciprocity subjects performed significantly more work for the computer than did control subjects, with means of 11.0 versus 5.9. Next, reciprocity subjects spent significantly more time working on the task than did control subjects. Finally, reciprocity subjects made significantly fewer mistakes in helping the computer than did control subjects.
In addition to the above behavioral findings, subjects in the reciprocity condition reported more positive affect than did subjects in the reciprocity-control condition, t(36)=7.78, p<.01. In other words, reciprocity subjects felt happier.
Our experiment to test reciprocity effects also provided a means for detecting retaliation effects. A t-test comparing the retaliation and the retaliation-control conditions shows that retaliation subjects performed lower quality work than did retaliation-control subjects, t(36)=-3.80, p<.01. In other words, subjects who received "low help" on the first task and then worked with the same computer on the second task made significantly more mistakes. In addition, retaliation subjects reported lower positive affect than did retaliation control subjects, t(36)=-6.33, p<.01.
We designed this experiment to control for two alternative explanations: (1) the effect is due to the number of computers used and (2) the effect is due to the quality of information received in task 1. The significant interactions from ANOVAs rule out these two alternative explanations:
| Measure | Interaction F(2,74) | p |
|---|---|---|
| Number of comparisons | 8.48 | < .01 |
| Time spent on task | 5.04 | < .05 |
| Quality of performance | 10.42 | < .001 |
| Positive affect | 14.11 | < .001 |
In other words, the data show that the reciprocity and retaliation effects reported above are not due simply to either the number of computers used or to the quality of information received in task 1.
This experiment provides empirical evidence that using a social dynamic, the rule of reciprocity, in human-computer interactions can motivate users to change their behavior. Users behaved in more helpful ways toward a computer that had helped them on a previous task. Specifically, users reciprocated to the computer in terms of amount of help given, time on task, and quality of work. Conversely, users were less helpful to a computer that had failed to help them previously.
Although this experiment was conducted as basic research, the results may suggest implications for designing agents, interactive systems, and computer applications.
The broadest implication of this research is that we can use social rules and dynamics to design agents and interactive systems that can change user behavior. Exactly when and where such persuasion is beneficial and ethical should be the topic of further research and debate.
In creating a system to promote behavior change, system and agent designers may well be able to leverage the rule of reciprocity to motivate users. Users may be more likely to agree with, comply with, or help out an agent that has previously helped them. This "give-and-take" may also lead to enhanced user performance and increased positive affect. In some cases, the agent or system may not need to perform any new work; rather, the work might simply be framed in a way that evokes the power of the rule of reciprocity.
This study also suggests that retaliation effects may occur when users perceive that an agent or system has performed poorly. To avoid such effects, designers could introduce a different agent or application for subsequent activities.
1. Cialdini, R. B. (1993). Influence: Science and practice (3rd ed.). New York: HarperCollins.
2. Fogg, B.J. & Nass, C.I. (in press). Silicon Sycophants: The effects of computers that flatter. International Journal of Human-Computer Studies.
3. Gouldner, A. W. (1960). The norm of reciprocity: A preliminary statement. American Sociological Review, 25, 161-178.
4. Nass, C.I., Moon, Y., Fogg, B.J., Reeves, B. & Dryer, D.C. (1995). Can computer personalities be human personalities? International Journal of Human-Computer Studies, 43, 223-239.
5. Reeves, B. & Nass, C. (1996). The Media Equation: How people treat computers, television, and new media like real people and places. New York: Cambridge University Press.