Machination of Modern Life: Hallucination or Illumination

Tech companies are increasingly finding new and interesting ways to apply machine learning tools to existing problems across many industries. From predicting strokes and seizures to generating realistic, human-like sound effects, machine learning is yielding some interesting results.

One of the more unusual and amazing applications of machine learning is in the area of affective computing, in which various systems and devices can be programmed to recognize, interpret, process, and simulate human emotions. For example, researchers have created algorithms that can detect when a person is lying. Users of online dating sites might call that a most welcome development, but the applications of such an algorithm go well beyond dating: curbing terrorism, screening the information provided on a visa application, protecting against fraud, and so on.

How do these algorithms work? It varies. For computer-based communication, the algorithm created by researchers at City University London and the University of Westminster analyzes word use, structure, and context to determine whether, for instance, a person is lying in an email. The immediate benefit is obvious: it can help people detect the kind of scam or phishing emails that have caused untold hardship in the form of identity theft and financial loss. To create the algorithm, the researchers compared the text of tens of thousands of emails containing both deceptive and truthful content. They found that liars are less likely to use personal pronouns, are more likely to use adjectives such as “brilliant”, and tend to mirror the sentence structure of the person they’re communicating with.
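The researchers’ actual model has not been published here, but a minimal sketch can show what extracting those kinds of features from an email might look like. Everything below is illustrative: the word lists, the function names, and the crude “mirroring” measure are assumptions, not the algorithm described above.

```python
# Hypothetical sketch: extract the three kinds of cues mentioned in the article
# (personal-pronoun rate, flattering-adjective rate, and how closely a reply
# mirrors the message it answers). Word lists and weights are placeholders.

import re
from collections import Counter

PERSONAL_PRONOUNS = {"i", "me", "my", "mine", "we", "us", "our", "ours"}
FLATTERING_ADJECTIVES = {"brilliant", "amazing", "wonderful", "fantastic"}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def deception_features(reply: str, original: str) -> dict[str, float]:
    tokens = tokenize(reply)
    total = max(len(tokens), 1)

    # Fewer personal pronouns and more flattering adjectives were reported
    # as weak signals of deception.
    pronoun_rate = sum(t in PERSONAL_PRONOUNS for t in tokens) / total
    adjective_rate = sum(t in FLATTERING_ADJECTIVES for t in tokens) / total

    # Crude proxy for "mirroring": word overlap between the reply and the
    # message it responds to.
    reply_counts = Counter(tokens)
    original_counts = Counter(tokenize(original))
    mirroring = sum((reply_counts & original_counts).values()) / total

    return {
        "pronoun_rate": pronoun_rate,
        "flattering_adjective_rate": adjective_rate,
        "structure_mirroring": mirroring,
    }

if __name__ == "__main__":
    original = "Could you tell me more about this brilliant investment opportunity?"
    reply = "This brilliant opportunity is guaranteed and the returns are fantastic."
    print(deception_features(reply, original))
```

In a real system, features like these would be fed into a trained classifier rather than read off directly; the point of the sketch is only to make the kinds of signals concrete.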

Then there are the algorithms that rely on emotional information to detect when a person is lying. Some rely on data about facial expressions, body posture, gestures, and speech, gathered using video cameras and microphones. Others detect emotional cues by measuring physiological data, such as skin temperature or changes in skin color as blood flows to regions like the cheeks and forehead. These are the same cues we humans use to read emotions in each other; blushing, for example, which is caused by a rush of blood to the cheeks, can indicate embarrassment.
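As a rough illustration of how such physiological cues might be combined, here is a small, purely hypothetical sketch. The signal names, thresholds, and weights are all assumptions rather than anything from a published system; the idea is simply that several weak cues get normalized and merged into a single score that a downstream classifier could use.

```python
# Illustrative only: combine a few assumed physiological readings into a rough
# "emotional arousal" score of the kind an affective-computing pipeline might use.

from dataclasses import dataclass

@dataclass
class PhysiologicalSample:
    skin_temp_delta_c: float      # change in skin temperature (degrees Celsius)
    cheek_redness_delta: float    # change in cheek color intensity (0 to 1)
    speech_pitch_delta_hz: float  # change in vocal pitch (Hz)

def arousal_score(sample: PhysiologicalSample) -> float:
    """Weighted sum of normalized cues; weights and scales are arbitrary placeholders."""
    return (
        0.4 * min(abs(sample.skin_temp_delta_c) / 2.0, 1.0)
        + 0.4 * min(max(sample.cheek_redness_delta, 0.0) / 0.3, 1.0)
        + 0.2 * min(abs(sample.speech_pitch_delta_hz) / 50.0, 1.0)
    )

sample = PhysiologicalSample(
    skin_temp_delta_c=0.8,
    cheek_redness_delta=0.15,
    speech_pitch_delta_hz=20.0,
)
print(f"arousal score: {arousal_score(sample):.2f}")  # higher = stronger emotional cues
```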

Based on their analysis of such data, these algorithms correctly detect lies about 70 per cent of the time. That is roughly on par with traditional lie detectors, but still impressive considering that humans, on average, spot lies only about 54 per cent of the time.

As the popular quote attributed to Ronald H. Coase goes: “If you torture the data long enough, it will confess.”