Stone Temple Consulting Corporation has now released its 2018 intelligence ranking for the top five digital assistants, showing that the Google Assistant still has the lead but that the gap is closing. For clarity, the study considered the phone variation of Google’s A.I. separately from its smart speaker-based technology, alongside Cortana, Alexa, and Siri, which ranked behind Google in that order. So it may be more accurate to say the company tested the top four. In any case, the test showed that each of the digital assistants got better at answering questions compared to 2017. However, Alexa made the most progress in terms of recognizing questions and attempting answers, growing by approximately 270-percent.
Breaking down the results, Stone Temple divided the findings of its study into two distinct categories. The first is how many questions were recognized and answers attempted. The second narrowed down to how many of those answers were accurate and complete. The Google Assistant on both mobile and speaker platforms attempted and answered more than the others, with the phone variant performing a bit better. In fact, the Google Assistant for mobile attempted to answer nearly 80-percent of inquiries and successfully answered over 90-percent of those. Cortana, on the other hand, actually answered a higher share of its attempted questions correctly than Google Home did. Where it fell short of Google Home, by just a few percentage points at around 65-percent, was in how many questions it attempted to answer. Alexa attempted fewer than 60-percent of the questions but answered over 80-percent of those fully. Apple’s Siri came in last this year, with 80-percent of its answers being complete but with just over 40-percent of questions attempted.
Aside from Amazon’s staggering growth in answers attempted by its A.I., every other company grew by less than 50-percent on that metric. Google’s solution actually grew the least, gaining just a few points, which could become a problem for the company down the road. With that said, Amazon also saw the most growth with regard to questions answered incorrectly. Although the digital assistants attempted more questions overall, all but Cortana saw a comparative year-over-year drop in how many of those were answered completely. So it may be a bit too soon to make any kind of judgment there. This study was fairly comprehensive compared to some others conducted recently, so it may just be the most accurate gauge of A.I.-driven assistants to date. The same 4,942 queries were used for each service, based on the most common uses for smart speakers. Extra care and training went into ensuring that the questions were asked consistently as well, according to the researchers.