Niantic, the Google spinoff behind augmented reality games Ingress and Pokemon GO, is reportedly using input from players along with some software-side artificial intelligence to begin creating AR maps of the real world. This will allow the company to place elements and overlays that would otherwise not be possible, and to make more accurate overworld maps for its AR games. Niantic could also feed this data back to Google to help enhance the larger company's services, a likely scenario given Google's previously announced ambitions for Google Maps, Street View, and indoor mapping.
The basic procedure for the capture is as follows: players use Pokemon GO's AR mode to catch Pokemon against a backdrop of the real world around them, and their smartphones' cameras capture that world. Niantic's software interprets what the user's smartphone camera is seeing in order to identify real-world objects and landmarks, then maps out their geometry and dimensions in relation to the space around them. Previous data on known landmarks could plausibly be used to refine this processing. In any case, once an area has been captured by enough players and examined repeatedly by Niantic's AI, there is enough data on that space and the things in it to tell what is fixed to the landscape, what comes and goes, and which objects are living creatures, allowing Niantic to develop an AR experience around those factors.
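The aggregation step described above can be sketched in simplified form. The snippet below is purely illustrative and is not Niantic's actual pipeline: the observation format, the detector labels, and the persistence threshold are all assumptions. The idea is that an object seen in nearly every capture of a location is probably fixed, one seen only sometimes is transient, and anything the detector labels as a living thing is handled separately.

```python
from collections import defaultdict

# Toy model: each player capture of a location is a set of
# (object_id, label) pairs. A real pipeline would carry camera pose,
# depth, and 3D geometry per detection.

LIVING_LABELS = {"person", "dog", "cat", "bird"}  # assumed detector labels
FIXED_THRESHOLD = 0.8  # assumed: seen in >= 80% of captures => fixed

def classify_objects(captures):
    """Classify detected objects by how often they recur across captures."""
    counts = defaultdict(int)
    labels = {}
    for capture in captures:
        for obj_id, label in capture:
            counts[obj_id] += 1
            labels[obj_id] = label

    total = len(captures)
    result = {}
    for obj_id, seen in counts.items():
        if labels[obj_id] in LIVING_LABELS:
            result[obj_id] = "living"
        elif seen / total >= FIXED_THRESHOLD:
            result[obj_id] = "fixed"
        else:
            result[obj_id] = "transient"
    return result
```

For example, a statue detected in every capture of a plaza would come back as "fixed", a parked car seen in only two of five captures as "transient", and a passerby as "living" regardless of how often it appears.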
This move is not without its potential caveats. The biggest and most obvious concern is privacy. Since Niantic is targeting public places first, the risk of unknowingly recording non-players, especially children, is significant. Such data would most likely have to be either discarded or proactively parsed and edited so that it does not include people who have not opted into the program. On the other hand, existing legal protections for video and photography in public places could arguably apply here as well. Beyond privacy, there are also concerns about the AI misidentifying objects, or about data from different players failing to reconcile into a coherent map.
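The "proactively parsed and edited" approach mentioned above could, in its simplest form, be a filtering pass that drops sensitive detections before anything is stored. This is a hypothetical sketch, not anything Niantic has described: the detection format and the set of sensitive labels are invented for illustration.

```python
# Hypothetical redaction pass: strip any detection of a person or other
# sensitive subject before the capture data is retained for map building.
# Label names and the detection dict format are assumptions.

SENSITIVE_LABELS = {"person", "face", "license_plate"}

def redact(detections):
    """Return only the detections that are safe to retain.

    detections: list of dicts like {"label": "bench", "bbox": (x, y, w, h)}
    """
    return [d for d in detections if d["label"] not in SENSITIVE_LABELS]
```

A production system would likely also blur the corresponding image regions rather than just dropping the metadata, but the principle of filtering at ingest time is the same.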