
So what does AI actually do for us?


There are terms that get abused, and then there’s poor ole “artificial intelligence”. It's an umbrella term if ever there was one (further abused by the kind of people who call it an umbrella term, or worse, a "suitcase word" - something that needs to be unpacked).

What we’re really talking about is a whole set of capabilities and approaches that are too often unhelpfully lumped together.

But rather than talk about the how, perhaps the easiest thing is to look at what these tools and techniques actually do for us. Consider this a light unpacking - the kind you do when you're staying at a friend's house for a weekend before heading off on the long-haul trip of a lifetime. There is, of course, more to dig into in each of these. And not as much suncream.


1. Make Data Usable:

This is the processing of unstructured data into a machine-readable, structured format. Once data is structured we can do stuff with it – analyse it, process it in existing applications, set an RPA tool to work on it.

Here speech and natural language processing go together somewhat. This is the parsing and extraction of intent and data from text (or text after it's been transcribed from speech). Since 2012, switchboard-quality transcription tests (admittedly distinct from other real-world scenarios, but most germane to business transactions) have seen error rates plummet from perhaps a third to at, or even beyond, human quality, at around 5%.
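In miniature, intent-and-entity extraction looks something like the sketch below. This is a toy, assuming a keyword-matching approach – the intent names, trigger words and patterns are all invented for illustration, and real NLP engines use trained models rather than hand-written rules:

```python
import re

# Toy intent parser: each (invented) intent has trigger keywords
# and a pattern for pulling out an entity of interest.
INTENTS = {
    "check_balance": {"keywords": {"balance", "account"},
                      "entity": re.compile(r"account (\d+)")},
    "book_meeting": {"keywords": {"book", "meeting"},
                     "entity": re.compile(r"at (\d{1,2}(?::\d{2})?(?:am|pm))")},
}

def parse(utterance):
    """Return the first intent whose trigger keywords all appear."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    for name, spec in INTENTS.items():
        if spec["keywords"] <= words:  # all trigger keywords present
            match = spec["entity"].search(utterance.lower())
            return {"intent": name, "entity": match.group(1) if match else None}
    return {"intent": "unknown", "entity": None}
```

So `parse("What's the balance on account 1234?")` yields the `check_balance` intent with entity `"1234"` – the structured record an RPA tool or application can then act on.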

Machine vision is in essence the same thing but applied to visual information – pictures, scans or video. This now operates at better than human performance – a 2% error rate vs a 5% human error rate. If you want to be mesmerised, check out this implementation. Basic, relatively speaking, but you won't be able to take your eyes off it.
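For what it's worth, those headline percentages are just the fraction of examples a classifier gets wrong. A minimal sketch, with entirely made-up predictions standing in for a model's and a human's guesses on ten images:

```python
def error_rate(predictions, labels):
    # Fraction of examples where the predicted label differs from the truth.
    wrong = sum(p != t for p, t in zip(predictions, labels))
    return wrong / len(labels)

# Hypothetical results on 10 images: the model mislabels 1, the human 2.
truth = ["cat", "dog", "cat", "bird", "dog", "cat", "dog", "bird", "cat", "dog"]
model = ["cat", "dog", "cat", "bird", "dog", "cat", "dog", "bird", "dog", "dog"]
human = ["cat", "dog", "dog", "bird", "dog", "cat", "cat", "bird", "cat", "dog"]
```

Here `error_rate(model, truth)` comes out at 0.1 and `error_rate(human, truth)` at 0.2 – the same shape of comparison, at toy scale, as the 2% vs 5% figures above.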

75-80% of the world's data is unstructured – converting it into something that can be used opens up enormous potential use cases for automation. The challenge then becomes building all the queries you might want to connect to - all the dialogue boxes - and we don't yet have a way to do that genuinely easily at scale.
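To make the unstructured-to-structured step concrete, here's a deliberately simple sketch that turns a free-text invoice email into a structured record. The field names and patterns are assumptions for illustration – production systems would use trained extraction models rather than a handful of regexes:

```python
import re

def extract_invoice(text):
    """Pull structured fields out of free text with simple patterns."""
    amount = re.search(r"[£$](\d+(?:\.\d{2})?)", text)
    invoice = re.search(r"invoice\s+#?(\w+)", text, re.IGNORECASE)
    due = re.search(r"due\s+(\d{4}-\d{2}-\d{2})", text, re.IGNORECASE)
    return {
        "invoice_id": invoice.group(1) if invoice else None,
        "amount": float(amount.group(1)) if amount else None,
        "due_date": due.group(1) if due else None,
    }
```

Feed it `"Invoice #A17 for $250.00 is due 2019-03-01"` and you get a record that existing applications, analytics or an RPA tool can work on directly.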


2. Spot patterns:

This is very akin to traditional data analytics, just with the ability to handle unstructured data alongside structured data. It's the kind of technique that powers Amazon's recommendations: "people like you bought x" or "customers also looked at y".
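The "customers also looked at y" pattern can be sketched as simple basket co-occurrence counting – which items keep showing up alongside the one you're viewing. This is a toy with invented baskets; Amazon's actual systems are far more sophisticated:

```python
from collections import Counter

# Hypothetical purchase baskets.
baskets = [
    {"kettle", "teapot", "mugs"},
    {"kettle", "teapot"},
    {"kettle", "toaster"},
    {"teapot", "mugs"},
]

def also_bought(item, baskets, top=2):
    """Count how often other items share a basket with `item`."""
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(basket - {item})
    return [other for other, _ in counts.most_common(top)]
```

Here `also_bought("kettle", baskets, top=1)` surfaces `["teapot"]` – it co-occurred with the kettle twice, more than anything else.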

Quite often we know what we want to find out from data - we know our goal. But if we don't know the right questions to ask of the data in the first place, then achieving that goal can be hard. Surfacing patterns in data can help prod us towards better questions. However, the well-worn "correlation does not equal causation" maxim sums up beautifully why data scientists aren't out of a job any time soon: please check out Tyler Vigen's beautiful website illustrating just this fact.
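Spurious correlation is easy to manufacture: any two series that both drift upwards over time will score a high Pearson correlation, related or not. Both series below are invented for illustration (in the spirit of Vigen's charts, not taken from them):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two made-up, unrelated yearly series that both happen to trend upward.
cheese_consumption = [29.8, 30.1, 30.5, 30.6, 31.3, 31.7, 32.6, 33.1]
cs_doctorates = [861, 930, 1024, 1129, 1213, 1242, 1308, 1377]
```

`pearson(cheese_consumption, cs_doctorates)` lands well above 0.9 – a "pattern" the machine will happily surface, and exactly the kind a human data scientist has to sense-check.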


3. Learn from data:

Stanford’s definition of machine learning is perhaps the neatest – the science of getting computers to act without being explicitly programmed.

Getting computers to infer rules and relationships themselves from typically substantial data sets, rather than the traditional approach of articulating and hard-coding those rules into applications or processes, is the holy grail for automation scaling junkies. It's fraught, however, with the potential for bias from the underlying data, the creators of the algorithm, or both, so human curation is key. Understanding how these systems learn - and indeed why they take the actions they do - is a fundamental issue for the wider adoption of machine learning.
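The contrast with hard-coding can be shown in miniature. Instead of a programmer declaring "approve anything scoring over 60", the program below infers the cut-off from labelled examples – a deliberately tiny, assumption-laden stand-in for real machine learning, with made-up data:

```python
def learn_threshold(examples):
    """examples: list of (score, label) pairs; label True means 'approve'.

    Rather than hard-coding a rule, pick the cut-off that misclassifies
    the fewest labelled examples -- the rule is inferred from the data.
    """
    candidates = sorted({score for score, _ in examples})
    best_t, best_errors = None, len(examples) + 1
    for t in candidates:
        errors = sum((score >= t) != label for score, label in examples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Invented historical decisions to learn from.
data = [(35, False), (42, False), (55, False), (61, True), (70, True), (88, True)]
```

`learn_threshold(data)` returns 61 – no one typed that rule in. It also illustrates the bias point: feed it skewed historical decisions and it will faithfully learn a skewed rule.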


4. Generate Content:

Sometimes this is referred to as knowledge representation.

Tools like Arria are capable of taking huge amounts of data and, based on previously trained output examples, delivering reports that appear to have been written by a human. Where a lot of data is being captured – on an oil rig, in weather forecasting or in market trading, for example – these technologies enable detailed, more timely reporting that would otherwise consume a lot of expert human time.
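At its very simplest, data-to-text generation means turning numbers into a sentence. The sketch below uses a hand-written template and invented readings – tools like Arria learn phrasing from trained examples rather than working like this, so treat it purely as a flavour of the idea:

```python
def weather_report(readings):
    """readings: list of (hour, temperature_c) tuples for one day."""
    temps = [t for _, t in readings]
    hi, lo = max(temps), min(temps)
    trend = "rising" if temps[-1] > temps[0] else "falling or flat"
    return (f"Temperatures ranged from {lo}°C to {hi}°C, "
            f"with readings {trend} through the day.")

# Hypothetical sensor data.
readings = [(6, 4), (9, 7), (12, 11), (15, 13), (18, 10)]
```

From five data points it produces "Temperatures ranged from 4°C to 13°C, with readings rising through the day." – scale the inputs up to an oil rig's sensor feeds and the time saved over a human report-writer becomes obvious.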

Amazon’s Polly, Apple’s Siri and other engines are capable of generating speech that is increasingly “less robotic”. Check out the table comparing iOS 9 to iOS 11 at the bottom of Apple's own release on improving Siri (the detail above it will likely interest only the more technical among you :)

What we also mean here is making recommendations as to next actions (as distinct from, say, product recommendations to a customer). Typically these come accompanied by confidence ratings, which can be set to an organisation’s comfort level, or passed to a human agent for sense-checking. This goes towards the so-called “cognitive computing” end of things.
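That confidence-gating pattern is simple to sketch: above the organisation's comfort threshold the recommended action runs automatically, below it a human sense-checks. The function and threshold here are illustrative assumptions, not any particular product's API:

```python
def route(action, confidence, threshold=0.9):
    """Above the comfort threshold, act automatically; otherwise
    hand the recommendation to a human agent for sense-checking."""
    if confidence >= threshold:
        return ("auto", action)
    return ("human_review", action)
```

So `route("issue_refund", 0.95)` executes automatically, while `route("issue_refund", 0.6)` queues for a human – and an organisation dials `threshold` up or down to match its appetite for letting the machine act alone.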

IBM are guilty of many heresies in the marketing of this stuff, but they say that “What search is to information retrieval, Cognitive Computing is to decision making”. I quite like that. It speaks to scale beyond human abilities: take a lot of information, then draw next best actions from it, to recommend to a human agent or to perform automatically.


So if these are the main areas AI can help, where can it help you? As ever hugely interested in examples, or feel free to get in touch if you want to chat over a coffee.
