The Voice Revolution – Not Only @home. Everywhere.

March 7, 2017 · amir


Andrew Bartels recently wrote a great post, “Why buying Quip might be the best acquisition Salesforce has ever made”.

Andrew’s main argument was that Quip will become the new UI, thanks to its integration with big-data services and AI.

But in my opinion, there’s a catch. “Imagine a world where a Sales Professional could open an app on his notebook, tablet or phone and start taking notes”. The premise is that road-warrior salespeople will type in their notes, allowing the smart virtual apparatus running in the background to kick in.

That will not happen, and for a good reason. It has never happened before either, and for the same reason: the main obstacle is indeed the UI. Users do not type in everything they do. That is the core problem Salesforce faces, and with mobile devices and their smaller UI real estate, the problem is much bigger.

So what is the missing link between the data input and the huge benefits of big data & AI?

A UI that works… and more specifically, our voice as the UI.

We have witnessed huge hype around voice-enabled solutions: Alexa Skills and Google Home, the voice-enabled devices; virtual personal assistants for every specific use you can think of; and even in-car voice interaction systems for an improved driving experience. There are even discussions about the nature of each assistant’s voice.

But the change in how we interact with our devices has to go beyond a very cool feature that works mainly in English and can help me turn on the lights.

We all remember the CLI of the ’70s giving way to the GUI in the ’90s. Well, the GUI had a good run for a couple of decades, but it clearly cannot remain a usable interface in the new era – an era where mobile devices are getting smaller (smartphones), wearable devices (smartwatches, etc.) are becoming a commodity, and IoT devices often have no screen at all!

Another angle on this change in user experience is the massive growth of bots across many usages and segments, mainly for consumer engagement (on Facebook and other platforms) and some for enterprise use (e.g. Slack). Bots are actually taking the GUI back to the CLI phase, where users are required to actually type (text-only input/output).

It’s only logical to assume that the next phase in human-bot interaction will be voice. Clearly the time has come for VUI – the Voice User Interface. See, for instance, Mary Meeker’s recent state-of-the-internet report: “Voice should be the most efficient form of computing input”.

Another pending discussion is whether voice can be a general, good-for-all input interface (relying on semantics and AI in the background) or whether it requires specific context for each domain or application.

My view is that for VUI to be productive and reliable for its users, voice input must be processed within a context.

It is one thing to expect a virtual assistant or home device to understand a specific, predefined, narrow “command” or syntax such as “where is the nearest restaurant” or “turn on the lights”. It is something else entirely to have it accurately recognize invented words, a company’s internal lingo, people with accents, or speech in a noisy real-world environment (try using your in-car voice system with kids in the back).

Context is the main issue here. A user’s voice input needs to be processed within its own context, be it the company, a business-related context, a personal one, the language, and so on.
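To make the idea concrete, here is a minimal sketch of one way context can bias recognition. All names here are hypothetical (this is not Tukuoro’s implementation): given several candidate transcripts from a speech recognizer, we rescore them against a domain-specific lexicon, so that company lingo and invented product names win over acoustically similar generic words.

```python
# Hypothetical sketch: rescoring ASR candidate transcripts against a
# domain lexicon, so in-house terms beat acoustically similar words.

def context_score(transcript: str, domain_terms: set) -> int:
    """Count how many words in the transcript belong to the domain lexicon."""
    return sum(1 for word in transcript.lower().split() if word in domain_terms)

def pick_with_context(candidates: list, domain_terms: set) -> str:
    """Choose the candidate transcript that best matches the context."""
    return max(candidates, key=lambda t: context_score(t, domain_terms))

# Hypothetical CRM lexicon: the invented product name "Quotify" should be
# preferred over the generic word "quantify" that sounds almost the same.
crm_terms = {"quotify", "pipeline", "renewal", "quota"}
hypotheses = [
    "quantify the pipeline renewal",
    "quotify the pipeline renewal",
]
print(pick_with_context(hypotheses, crm_terms))  # prints the "quotify" version
```

Real speech platforms expose similar context mechanisms (phrase hints, custom vocabularies); the point is that the biasing lexicon comes from the user’s own world – their company, their domain, their language.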

Only once VUI becomes reliable and accurate will it be used to improve productivity by people on the go – the mobile workforce of every company size and domain – as well as by their customers.

End users’ adoption of new tech solutions for out-of-office or off-the-desk time is an everlasting challenge for CRM and other customer-engagement platforms. Newly suggested solutions such as Quip, which still rely on users typing in data, simply won’t do the job. Running AI and other analytics in the backend is great – provided the data was actually entered by the users. If we want to break the GIGO (garbage in, garbage out) paradigm, we must provide a better way to input the data.

At Tukuoro, we believe voice should be easy to enable and should provide a great experience for end users wherever they are: at home, in the car, traveling to the next client visit, generating a report, and so on. Join us for the voice revolution – make your app voice-enabled.

Follow us on Twitter.
