Another live-blogging experience for you … will clean up later.
Jeff Hancock is an Assistant Professor in the Department of Communication and in the Faculty of Computing and Information Science at Cornell University. His work concerns how information technologies – such as email, instant messaging, and videoconferencing – affect the way we talk to and understand one another. His research is supported by the National Science Foundation, and his work on lying online has been featured in the New York Times Magazine, CNN, ABC News, NPR, and the BBC. Dr. Hancock earned his Ph.D. in cognitive psychology at Dalhousie University, Canada, and joined Cornell in 2002.
Abstract: Deception is one of the most significant and pervasive social phenomena of our age. At the same time, technologies have pervaded almost all aspects of human communication. The intersection between deception and information technology gives rise to an important set of questions about deception in the digital age. Do people use different media to lie about different types of things, or to different types of people? Are we worse at detecting a lie online than we are face-to-face? Can linguistic patterns that reflect deception be automatically identified and used to assist online deception detection? This talk will discuss these questions and describe some recent research that may shed some light on the answers.
NSF Human & Social Dynamics Grant (w/ Claire Cardie and Mats Rooth)
lying is pervasive – Diogenes looking for an honest man
technology is ubiquitous
* SoundCover – record background sound for use on phone
* Phishing – IU is a leading resource for phishing research
* Alibi Network –
* Fake Your Space
* Post Secret – hyper-disclosive
digital deception
* intentional control of information
* in a technologically mediated message
* to create a false belief in the receiver of the message
Production
Detection
* Human
* Automated
Lying rates across media
different lies for different media
online dating
Think of a lie (most people lie 1–2 times a day)
* “I’m asking you to be honest about your lying.”
Where do we lie most often? … FtF … Phone … IM … Email
* people think email is the most likely place to lie -> because you can craft/plan the lie, there are fewer leakage cues, and there’s a social disconnect (“I’m lying to/through a machine”); more small lies
* FtF -> more big lies; liars use the listener’s cues to adapt their tactics mid-lie
1300 studies -> eyes aren’t reliable (but that’s what people believe is the source of tells)
* none of us have real tells
* deception detection accuracy is about 54% (barely better than chance)
[Figure: predicted frequency of lies per interaction, from HIGH to LOW, by medium]
FtF < Phone < Instant Message < Email (Social Distance Theory – DePaulo, 1996)
FtF > Phone > Instant Message > Email (Media Richness Theory – Daft & Lengel, 1984; 1986)
* each theory assumes a single underlying difference between media, but there are many more
Synchronous – FtF, Phone … IM (not email)
Recordless – FtF, Phone … IM (not email)
Distributed – Phone, IM, Email … (not FtF)
30 people – record all social interactions for 7 days
* what medium?
* did you lie?
lies/day: 1.60 (vs. 1.96 in DePaulo, 1996)
proportion of interactions involving a lie: 25% (vs. 30%)
FtF < Phone < IM < Email (Social Distance Theory)
FtF > Phone > IM > Email (Media Richness Theory)
FtF (27%) – Phone (37%) – IM (21%) – Email (14%)
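As a rough illustration of how those per-medium ratios fall out of the diary data, here’s a minimal sketch of the tabulation (the entries below are invented; the study itself used diaries, not code):

```python
# Sketch of the diary-study tabulation: each social interaction is
# logged with its medium and whether the person lied, then the lie
# ratio is computed per medium. The entries here are made up.
from collections import Counter

interactions = [  # (medium, lied?)
    ("ftf", False), ("ftf", True), ("phone", True),
    ("phone", True), ("phone", False), ("im", False),
    ("im", True), ("email", False),
]

totals = Counter(medium for medium, _ in interactions)
lies = Counter(medium for medium, lied in interactions if lied)

for medium, total in totals.items():
    print(f"{medium}: {lies[medium] / total:.0%} of interactions involved a lie")
```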
later versions incorporate group behavior
many factors
* not all lies are the same
* lie to different people
* students are “unique”
* meta-lying (about lying behavior)
Replication study – 75 students … same kinds of results
on phone –
email – explanation (from students to professor)
FtF – feelings (similar to IM)
$30 per participant (n=80) … 40M/40F
– compared participants’ real attributes to those listed on dating services
– cross-validation with self-disclosure … self-reported accuracy of profile (people can be honest about their lying)
– participants had to have accessed their profile page twice recently (to filter out stale profiles)
– height: men lied (said they were taller)
– weight: men (heavier men said lighter, lighter men said heavier); women (said they were lighter)
– age: very truthful (a non-malleable piece of information)
– photos: not included (looking more at community perception of deception)
Goffman, Baumeister, Walther -> tensions of self-presentation (appearing attractive vs. appearing honest)
* Frequent but subtle lies are created to balance these tensions (maximize benefit, minimize cost)
Is it harder to tell if someone is lying FtF or in CMC (computer-mediated communication)?
* cues: verbal … non-verbal … physiological (verbal is important)
* motivation: highly motivated liars are detected more readily (motivational impairment effect)
– in text-based communication, more time to plan
37 dyads, FtF and CMC
* lie on two topics, half highly motivated
FtF – low motivation: 50%; high motivation: 54.9% (motivational impairment effect)
CMC – motivational ENHANCEMENT effect (highly motivated liars were detected less often)
* no main effect of medium … a motivation × medium interaction effect
Can automated techniques be used to detect deceptive language?
* does truthful language differ from deceptive language? … linguistic style matching
* LIWC (Linguistic Inquiry and Word Count)
Word count: deception uses more words (28% increase)
Pronouns: slightly fewer first-person singular pronouns in deception … potentially powerful cue
* the person being lied to changes their own linguistic behavior, but doesn’t realize (beyond chance) that they’re being lied to
* correlation in linguistic characteristics (style matching) goes up during lies
* person being lied to produced fewer questions than when hearing the truth
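To make the word-count and pronoun findings concrete, here’s a rough sketch of that style of feature extraction. LIWC itself is a proprietary dictionary, so the tiny pronoun list below is a stand-in for the real first-person-singular category, not the actual lexicon:

```python
# Rough sketch of the word-level cues described above: total words
# (liars tend to use more) and the rate of first-person singular
# pronouns (liars tend to use fewer).
FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def deception_cues(message: str) -> dict:
    """Return simple counts that prior work links to deception."""
    words = [w.strip(".,!?\"'") for w in message.lower().split()]
    n = len(words)
    fps = sum(1 for w in words if w in FIRST_PERSON_SINGULAR)
    return {"word_count": n,
            "first_person_rate": fps / n if n else 0.0}

print(deception_cues("I was stuck at the office all night, I swear."))
```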
Lie-brary
* rate IM message as deceptive or not
* 10,000 messages -> 6% were deceptive
* trying to understand the difference between those two groups
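The talk didn’t describe Lie-brary’s modeling pipeline, but a minimal sketch of training a lie/truth classifier on rated messages like these might look as follows (the messages and labels are invented):

```python
# Minimal sketch of a lie/truth classifier over rated IM messages.
# This is illustrative only; it is not the Lie-brary system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "i was at the library all night studying",
    "sorry, my phone died right when you called",
    "sounds good, see you at seven",
    "just finished the report, sending it now",
]
labels = [1, 1, 0, 0]  # 1 = rated deceptive, 0 = truthful

# class_weight="balanced" matters because only ~6% of the real corpus
# was deceptive; otherwise the model can win by always saying "truth".
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(class_weight="balanced"),
)
model.fit(messages, labels)
print(model.predict(["stuck at the office, cannot make it tonight"]))
```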
1 reply on “Web of Deceit”
A couple of reactions to the talk.
1) Spooky
What happens *if* he can get a natural language processing system to work that identifies lies in our text? Very Orwellian.
2) Application to Games
Many online worlds (SL, WoW) depend on role playing as part of the user/player experience. How can one reasonably detect deception in a realm where everything is in fact deception?
3) Application to Security and Games
In tension with the above: detecting deception in virtual worlds that involve readily monetized virtual property has obvious benefits.