A digital and marketing pro.

Into the Deep – or, 61% wig.

Since the advent of sci-fi, we’ve been trying to create artificial brains. We’ve been trying to understand our own for even longer. The clever folk working on Deep Learning are attempting to replicate human thought processes by building ‘neural nets’: pieces of software made of connected layers of ‘neurons’ that can, in a manner of speaking, think. That is, they look at data and teach themselves to form observations.
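To make the idea concrete, here is a minimal sketch of the ‘layers of neurons’ idea: a tiny network with one hidden layer teaching itself the XOR pattern by guessing, checking, and nudging its connections. Everything here (the layer sizes, learning rate, and number of rounds) is invented for illustration; real deep-learning systems stack many more layers and train on vastly more data.

```python
import numpy as np

# A toy 'neural net': an input layer, one hidden layer of 'neurons', and an
# output, teaching itself the XOR pattern by trial and error. A minimal
# sketch of the idea only -- not how any production system is built.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # correct answers

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # input -> 8 hidden 'neurons'
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # hidden -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(20000):               # look at the data, guess, adjust, repeat
    hidden = sigmoid(X @ W1 + b1)    # each neuron combines its inputs
    out = sigmoid(hidden @ W2 + b2)
    d_out = (y - out) * out * (1 - out)            # how wrong, and which way?
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ d_out                         # nudge the connections
    b2 += d_out.sum(axis=0)                        # toward the right answers
    W1 += X.T @ d_hid
    b1 += d_hid.sum(axis=0)

print(out.ravel())   # after training, should end up close to [0, 1, 1, 0]
```

Nobody tells the network the rule for XOR; it ‘teaches itself to form an observation’ by repeatedly adjusting the strength of the connections between its neurons until its guesses match the data.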

This can take the form of hearing sounds and understanding what the noises mean – voice recognition, like Siri and Google Now. Or looking at a bundle of pixels and determining what this abstract group of colours could be – is it a bike? Or a jellyfish?

This website has a great demo tool where you upload an image; it works well for simple imagery, like our jellyfish friend:

Deep Belief - Jellyfish

Obviously, narcissism being what it is, I wanted to find out what it thought I looked like:

Deep Belief - Me

61% wig, 35% feather boa. A pretty darned accurate summary, I would say…
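Those percentages are how image classifiers typically express an answer: the network produces a raw score for every label it knows, and a final step converts the scores into percentages that sum to 100. A small sketch of that step, with the labels and raw scores invented to echo the wig result above (a real network scores thousands of labels):

```python
import numpy as np

# Turning raw classifier scores into percentages like '61% wig'.
# Labels and scores are made up for illustration.
labels = ["wig", "feather boa", "jellyfish"]
scores = np.array([2.0, 1.444, -0.725])   # raw network outputs ('logits')

# softmax: exponentiate, then normalise so the values sum to 1
probs = np.exp(scores) / np.exp(scores).sum()

for label, p in sorted(zip(labels, probs), key=lambda t: -t[1]):
    print(f"{p:.0%} {label}")   # e.g. '61% wig', '35% feather boa'
```

So the classifier never says ‘this is a wig’ outright; it ranks every label it knows by confidence, and we read off the top of the list.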

Of course, the use cases extend beyond my own wiggery. Jetpac is a city guide that analyses geo-located Instagram photos. It parses the info it reads to create top 10 lists, so blue sky and greenery mean a scenic hike; lots of lipstick and teeth = a bar where lots of women hang out; while lots of moustaches, apparently, indicate a hipster bar.

[It raises interesting questions for people whose pictures are being used. Although Instagram pics are publicly available, it’s still a bit of a shock to see your face appear on the feed in the guide.]

These techniques are all being used by the big boys – Apple, Google and Facebook are all innovating in this space and developing their artificial intelligence expertise. With the huge amount of images and video going online every day, the winning tech companies will be those who can crunch all this data and really understand what it means. How will this impact users and brands, though?

  • As the systems refine, they will be able to personalise and to predict what users want to an ever more refined degree. Facebook’s newsfeed algorithm will know precisely what you want to read. Images will be auto-tagged, and then auto-shared, knowing who you want to share with.
  • By the time a user interacts with a brand, they have already pretty much decided whether they’re going to purchase. Advertising and social targeting rely on what a user has previously done, serving ads based on that. If you can begin to predict what users will do in the future, you have a much greater opportunity to really influence their behaviour.
  • Social listening tools will be more accurate and more in-depth. Right now, these tools can be fairly blunt instruments in trying to monitor sentiment around a brand. Most of them can only look at what people write, in text that computers can read. In the future, if these tools can automatically interpret images and video, brands will have a much sharper understanding of how they are perceived.
  • The winners in this field will be those with huge amounts of data to crunch and interpret. Those tools that are more ephemeral with data – ie, Snapchat – won’t be in a position to learn anything about their consumer.
  • With great power comes great responsibility. In January, Google bought DeepMind, a leading Deep Learning startup, for $500m+. Facebook had also been looking at this purchase, but one of the reasons reported for why DeepMind sold to Google instead was that Google agreed to establish an ethics board to oversee how this data would be used responsibly.

At the moment, researchers say that the ‘neural nets’ are comparable, in number of neurons, to an insect’s brain. There’s a long way to go before the prospect of Samantha in ‘Her’ – an autonomous, self-learning operating system, with its own life and feelings – becomes a reality. But it is there, beckoning us into the AI future.

  1. Yes, they’re ‘thinking and learning’, but it sounds like these computers are going to be quite judgmental if all they have is surface details to go on. Just as the human race is becoming civilized enough to look past prejudice, we’re making computers and teaching them to do the opposite.


    • Or, maybe Mary, this is the first step along the way? The ‘puters will be making the most logical choices, as that’s what they’re built for, but this would lead us back to a very Utilitarian approach. Based on the assumption that Utilitarianism doesn’t chime with current, more individual-led philosophies, how do we avoid heading down that path? There is the possibility, of course, that as computers take on more and more of our lives, we would begin to think more like them – that we actually become more Jeremy Bentham in our thinking. And possibly rather brutal.

      Thanks for your comment!


  2. Hiding data – or minimising your footprint – will surely become more and more of a reason to seek out brands. ‘Snapchat’ principles may begin to apply to whole new categories.


    • Thanks for this Rossle! I think it’s an interesting concept – that as more and more of your life is shared online, there is a need for pockets of privacy. Have you seen ‘Secret’? An app that lets you tell the world your (or your friends’, or your work’s) secrets https://www.secret.ly/. As with most apps, it might be much stronger here in SF than elsewhere, so not sure if it’s caught on outside the Bay Area!

      Dave Eggers’ latest book, ‘The Circle’, predicted that the need to remove trolling and fraud would give governments the authority they need to eradicate the last levels of anonymity. As generation Z grows up with different expectations of privacy, there may be fewer people advocating for the right to be anonymous. The notion of privacy could be reduced to an app that gives you an illicit frisson when you tell a secret – privacy as a parlour trick rather than a right.
