Some People Really Do Welcome Their Robot Overlords - Study

People tend to trust machines more than other people, and that could become a problem amid the accelerated rise of AI systems, according to a recent study published by Penn State researchers and supported by the US National Science Foundation. Centered on a tendency referred to as the machine heuristic, the research specifically examined whether those who generally trust machines more than humans are more likely to give private information to machine-based systems.

For example, participants who already trusted machines more than people were more likely to give credit card details and other similarly sensitive data to a machine-based travel agent in the test than to a human agent. Those without that propensity to trust machines more showed no difference in how they acted or responded with either agent.

So where's the problem?

Issues can arise from the belief that machines are superior, notes researcher S. Shyam Sundar, the James P. Jimirro Professor of Media Effects at Penn State, because it overlooks the fact that the creators of those machines are themselves human. Those who trust machines more presume, not unreasonably, that a machine will only do what it's programmed to do and won't hold ulterior motives of its own.

They also seem to fail to recognize that the people responsible for designing, coding, and building machines, from complex AI to simpler systems, are imperfect. Nothing is necessarily in place to prevent every mistake that might lead to information being mishandled, or to ensure there's no malicious intent behind any given machine.

That isn't a problem on its own, but it can become one, and this isn't the first time the matter has been raised. As indicated in a recent SEC filing from Google, AI represents a particularly significant threat on that front due to its reliance on human input and training.

Ultimately, the company conceded that the ongoing growth of AI is bringing ethical, technological, and legal concerns to light. There may be other, unforeseen problems too.

Those are sentiments mirrored in the latest research from Penn State. The scams that could be perpetrated by abusing that trust run the gamut and, more concerning still, so do the ways malicious activity could be carried out. Anything from standard chatbots to smart speakers and even robots could be built, or hijacked, to serve nefarious purposes.

No need to panic... yet

While robots and other AI-driven machine technology may eventually become a much more common way to trick potential victims out of their sensitive information, trust in the tech also serves a purpose.

That trust leads to more comfortable and user-friendly websites and applications, for example, since those who don't trust machines more than people don't necessarily seem to trust them less either. It makes using technology easier on people to begin with. There are other benefits too, since relying on technology instead of people removes some vectors of vulnerability.

Vigilance and self-awareness, the researchers indicate, are simply going to become increasingly important in preventing cyberattacks and other activity from bad actors as AI and robotics grow in markets spanning the globe.
