Since the advent of sci-fi, we’ve been trying to create artificial brains. We’ve been trying to understand our own for even longer. The clever folk working on Deep Learning are attempting to replicate human thought processes by building ‘neural nets’: software made up of connected layers of ‘neurons’ that can, in a manner of speaking, think. That is, look at data and teach themselves to form an observation.
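To make that a little more concrete, here is a toy sketch of the idea (not any real system): each ‘neuron’ takes a weighted sum of its inputs, adds a bias, and squashes the result to a value between 0 and 1; layers of these feed into one another. All the weights and inputs below are invented purely for illustration – in a real net they would be learned from data.

```python
import math

def sigmoid(x):
    # squash any number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # each neuron: weighted sum of its inputs plus a bias, then squashed
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# a toy net: 2 inputs -> a 2-neuron hidden layer -> 1 output neuron
hidden = layer([0.5, 0.9],
               weights=[[0.4, -0.2], [0.3, 0.8]],
               biases=[0.0, -0.1])
output = layer(hidden, weights=[[1.2, -0.6]], biases=[0.1])
# output[0] is a score between 0 and 1, e.g. "how jellyfish-like is this?"
```

Training is the part this sketch leaves out: a real net nudges those weights, over millions of examples, until the output scores line up with the right answers.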
This can take the form of hearing sounds, and understanding what the noises mean – or voice recognition, like Siri and Google Now. Or looking at a bundle of pixels, and determining what this abstract group of colours could be – is it a bike? Or a jellyfish?
This website has a great demo tool where you upload an image and it guesses what it sees. It works well for simple imagery, like our jellyfish friend:
Obviously, narcissism being what it is, I wanted to find out what it thought I looked like:
61% wig, 35% feather boa. A pretty darned accurate summary, I would say…
Of course, the use cases extend beyond my own wiggery. Jetpac is a city guide that analyses geo-located Instagram photos, parsing what it sees in them to create top 10 lists: blue sky and greenery suggest a scenic hike; lots of lipstick and teeth, a bar where lots of women hang out; while lots of moustaches, apparently, indicate a hipster bar.
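The aggregation step could plausibly work something like the sketch below: tally how often a detected label appears across a venue’s photos, and rank venues by that score. The venue names, labels, and scoring rule here are all invented for illustration – Jetpac hasn’t published its pipeline in this detail.

```python
# hypothetical labels a neural net might have detected, per photo, per venue
photos = {
    "venue_a": [["lipstick", "teeth"], ["teeth", "cocktail"]],
    "venue_b": [["moustache", "beard"], ["moustache", "coffee"]],
}

def score(venue_photos, label):
    # fraction of a venue's photos in which the label was detected
    return sum(label in labels for labels in venue_photos) / len(venue_photos)

# rank venues by how "moustache-heavy" their photos are
ranked = sorted(photos, key=lambda v: score(photos[v], "moustache"), reverse=True)
```

A top-10 “hipster bars” list is then just the head of that ranking.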
[It raises interesting questions for people whose pictures are being used. Although Instagram pics are publicly available, it’s still a bit of a shock to see your face appear on the feed in the guide.]
These techniques are all being used by the big boys – Apple, Google and Facebook are each innovating in this space and developing their artificial intelligence expertise. With the huge amount of images and video going online every day, the winning tech companies will be those who can crunch all this data and really understand what it means. How will this affect users and brands, though?
- As these systems are refined, they will be able to personalise and predict what users want with ever greater precision. Facebook’s newsfeed algorithm will know precisely what you want to read. Images will be auto-tagged, and then auto-shared, because the system knows who you want to share them with.
- By the time a user interacts with a brand, they have already pretty much decided whether they’re going to purchase. Advertising and social targeting currently rely on what a user has previously done, and serve ads based on that. If you can begin to predict what users will do in the future, you have a much greater opportunity to really influence their behaviour.
- Social listening tools will be more accurate and more in-depth. Right now, these tools can be fairly blunt instruments in trying to monitor sentiment around a brand. Most of them can only look at what people write, in text that computers can read. In the future, if these tools can automatically interpret images and video, brands will have a much sharper understanding of how they are perceived.
- The winners in this field will be those with huge amounts of data to crunch and interpret. Tools that are more ephemeral with data – i.e. Snapchat – won’t be in a position to learn anything about their consumers.
- With great power comes great responsibility. In January, Google bought DeepMind, a leading Deep Learning startup, for $500m+. Facebook had also been interested in the acquisition, but one reported reason DeepMind sold to Google instead was that Google agreed to establish an ethics board to oversee that the technology is used responsibly.
At the moment, researchers say that the ‘neural nets’ are comparable, in numbers of neurons, to an insect. There’s a long way to go before the prospect of Samantha in ‘Her’ – an autonomous, self-learning operating system, with its own life, and feelings – becomes a reality. But it is there, beckoning us into the AI future.