A computer watched the debates. It thought Clinton was happy and Trump was angry and quite sad

We’re only human. We see the world through the filter of our prejudices. This is why American voters can watch Hillary Clinton and Donald Trump duke it out for the presidency during the debates and construct entirely divergent narratives. He won! She won! You’re wrong! No, you!

Exhausting, isn’t it? We’ve got opinions. Our brains, after all, are not computers. But what if computers could watch the debates?

Four graduate students from Columbia University’s Data Science Institute took a step toward making this possible. They built an application called Debate in (E)motion that ‘watches’ the debate, captures a video frame every five seconds, and spits out a score for how confident it is that it recognizes each of a set of emotions.

Part of the impetus for the app, according to Amirhossein Imani, one of the students on the team, was to create a pundit-as-public-service, one that could offer an alternative, unbiased account. To get a sense of what it saw, take a look at the data visualization Quartz created, based on results from the first debate:

In the first debate the computer recognized a lot of happiness in Clinton’s face, which makes sense. Despite his attacks, she kept a smile on her face, as if she found him amusing. The computer also perceived some surprise and sadness, but not much else except for a flicker of contempt. For Trump, it detected substantially less happiness than for Clinton, plus a lot of sadness, some surprise and contempt, and considerable anger.

To be sure, because this type of analysis is in its infancy, there is a wide margin of error. The application was developed in a rush, during a hackathon at DevFest. The team used Microsoft Cognitive Services programming tools to recognize faces and detect sentiment. But even with those robust algorithms, the way the US television networks film events introduces mistakes. According to a program manager at Microsoft, “To identify micro-expressions, one normally needs a very high frame-rate camera, which the debates are not shot in.”
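The team’s code isn’t shown here, but the pipeline the article describes is simple enough to sketch. Below is a minimal, hypothetical Python version: it samples one frame every five seconds with OpenCV and posts each frame to the Microsoft Cognitive Services Face API with the emotion attribute enabled, which returns a confidence score per emotion for each detected face. The video path, the region in the endpoint URL, and the subscription key are placeholders, and the team’s actual implementation may differ.

```python
# Hypothetical sketch of the pipeline described above: sample one frame
# every five seconds from a debate recording, send it to the Microsoft
# Cognitive Services Face API, and collect per-emotion confidence scores.
import cv2       # OpenCV, assumed here for frame sampling
import requests

# Placeholder endpoint/key; the region and key depend on your subscription.
FACE_API_URL = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
API_KEY = "YOUR_SUBSCRIPTION_KEY"


def emotions_for_frame(jpeg_bytes):
    """Ask the Face API for emotion confidence scores on one frame."""
    resp = requests.post(
        FACE_API_URL,
        params={"returnFaceAttributes": "emotion"},
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=jpeg_bytes,
    )
    resp.raise_for_status()
    # The API returns one entry per detected face, each with a dict of
    # scores (anger, contempt, happiness, sadness, surprise, ...).
    return [face["faceAttributes"]["emotion"] for face in resp.json()]


def sample_debate(path="debate.mp4", interval_s=5):
    """Yield (timestamp, emotion scores) for one frame every interval_s seconds."""
    video = cv2.VideoCapture(path)
    fps = video.get(cv2.CAP_PROP_FPS)
    step = int(fps * interval_s)
    frame_idx = 0
    while True:
        video.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
        ok, frame = video.read()
        if not ok:
            break
        ok, jpeg = cv2.imencode(".jpg", frame)
        if ok:
            yield frame_idx / fps, emotions_for_frame(jpeg.tobytes())
        frame_idx += step
    video.release()


if __name__ == "__main__":
    for timestamp, faces in sample_debate():
        print(f"{timestamp:7.1f}s {faces}")
```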

Additionally, because the application only reads faces, it misses the nuance and context of physical gesture and tone of voice. Below you can see it has trouble reading ‘surprise’, flat-out misinterpreting many raised eyebrows.

That said, what the computer does capture, along with its lack of political and emotional bias, makes for interesting patterns.

For instance, what are those happiness gaps that Hillary experiences? And what is going on in that moment when the computer thinks the Donald is most visibly happy?

During some of the happiness gaps, Trump is complaining that the Fed is political, that “political hacks” are negotiating the trade deficit, and that Clinton needs to release her deleted emails. These are followed by gaps that include the moment when Clinton aggressively lays out the reasons she believes Trump is not releasing his tax returns.

Quartz also worked with the team to compare the first debate to the final debate on Oct. 19. We skipped the second debate because of its different town hall format. Before you look at the results, take a minute to consider your own opinion. Did you think she was stronger in the first debate or the third? Was he more in control during the third debate? What do you expect to see?

(^-^)(-_-)(^-^)

Here’s how they compare:

If you look at the patterns closely, you can see that in the third debate the computer thinks Trump did a better job of keeping his feelings in check, especially anger and contempt.

The data show that both candidates displayed roughly the same emotional patterns during the debates. But who won and who lost ends up a matter of opinion. Which brings us back to the pundits and our prejudices … and an election day that is not too far away, something we can all look forward to being visibly happy about.

[Source: Quartz]