Google's AutoML Vision Can Identify Ramen By Shop

Looking at the ramen bowls in the image above, you might guess that only a seasoned aficionado could tell which specific shop in Japan each one comes from. Researcher Kenji Doi took a different route, using Google's AutoML Vision tool to train an artificial intelligence program to do it. The easy-to-use tool made short work of training, and when all was said and done, the model could identify which of the 41 local Ramen Jiro locations each tasty bowl was crafted at with a stunning 94.5% accuracy. The whole training process took only 24 hours, and the tool actually had a basic version of the model up and running just 18 minutes after being fed the training data.

While AutoML Vision is one of many tools that make AI work easier than it's ever been before, the process still involves some work on the human's part. Specifically, Kenji had to gather sample photos and feed them into the program, along with labels identifying which shop each sample bowl was made at. Once it had that information, AutoML Vision cranked out a fully working model in short order, one that had no trouble identifying new photos with stunning accuracy. The table below, called a confusion matrix, shows just how accurate it was; looking at the data, there are only a few misses in the entire chart.
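For readers unfamiliar with the term, a confusion matrix tallies how often each true class was predicted as each possible class, with correct predictions on the diagonal. A minimal sketch of how overall accuracy falls out of such a table, using a made-up three-shop example (the real model covered 41 shops, and these counts are purely illustrative):

```python
# Sketch: reading overall accuracy off a confusion matrix.
# Rows are true shops, columns are predicted shops; the diagonal
# holds the correct predictions. All counts below are invented.

def accuracy_from_confusion(matrix):
    """Overall accuracy = correct predictions / all predictions."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Toy 3-shop confusion matrix with a few off-diagonal "misses".
confusion = [
    [48, 1, 1],   # true: shop A
    [2, 47, 1],   # true: shop B
    [0, 1, 49],   # true: shop C
]

print(f"accuracy: {accuracy_from_confusion(confusion):.1%}")  # 96.0%
```

A mostly-diagonal matrix like this one is exactly what "only a few misses in the entire chart" looks like in practice.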

The implications of this tool are astounding. It takes virtually no expertise to use, and it can generate complex and detailed models for AI programs in a short time. This means that businesses and researchers could scale out AI-related operations, shave time off of extensive use cases, and even take on previously unseen workloads. The tool itself is not open-source, but is offered as part of the Google Cloud Platform package, with a free trial available for developers who are unsure about it. The tool is currently in alpha testing, which means that some bugs are to be expected, and Google will be actively accepting and integrating feedback for the project. While the tech giant normally open-sources projects like this once they reach maturity, the fact that this one relies on Google's cloud for processing power makes that a bit less likely; if it is open-sourced, developers will have to provide their own computing oomph in order to use it separately from Google's services.

Copyright ©2019 Android Headlines. All Rights Reserved
About the Author

Daniel Fuller

Senior Staff Writer
Daniel has been writing for Android Headlines since 2015, and is one of the site's Senior Staff Writers. He's been living the Android life since 2010, and has been interested in technology of all sorts since childhood. His personal, educational and professional backgrounds in computer science, gaming, literature, and music leave him uniquely equipped to handle a wide range of news topics for the site. These include the likes of machine learning, voice assistants, AI technology development, and hot gaming news in the Android world. Contact him at [email protected]