3D mapping for marker-less remote placement of AR content.


In the ARCONA system, 3D mapping is one of the methods used to provide marker-less remote placement of AR content.


It is part of the more general problem of determining the correct transformation from the local device co-ordinate system to the world co-ordinate system (hereinafter LWT). Our 3D map is implemented as a set of independent spatial anchors; each anchor consists of visual and geometry elements. Such a set of anchors is created from a sampled data package obtained by a mobile device with the standard sensor kit: camera, GPS, compass, accelerometer, gyroscope. The creation cost is scalable and depends on the anchor quality requirements; thus the map can be created on a powerful server as well as on a mobile device, “in place”.
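To make the LWT concrete, the sketch below applies such a transformation to a point expressed in the device-local frame. For illustration the LWT is reduced to a 2D rigid transform (heading rotation plus translation); the full system would use a 6-DoF pose, and the function name is a hypothetical one, not ARCONA's API.

```python
import math

def apply_lwt(point_local, yaw_rad, translation):
    """Map a 2D point from the device-local frame to the world frame.

    Illustrative sketch: the LWT is modeled as rotation by yaw_rad
    followed by a translation. A real system uses a full 6-DoF pose.
    """
    x, y = point_local
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (c * x - s * y + translation[0],
            s * x + c * y + translation[1])

# A point 1 m ahead of the device, with the device heading rotated
# 90 degrees and located at world position (10, 20):
world_point = apply_lwt((1.0, 0.0), math.pi / 2, (10.0, 20.0))
```

Estimating the rotation and translation themselves is the "LWT problem" that the rest of this page addresses.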


To solve the above problems, we use a software set consisting of the following components:

  • mobile application to collect and transfer device sensor data;

  • SLAM engine;

  • LWT estimator;

  • spatial anchor creator;

  • perception unit to recognize the spatial anchors.
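The way these components might fit together can be sketched as a processing loop. Every class and method name below is an illustrative assumption, not the ARCONA API; the stubs only show the direction of the data flow from sensor packet to world-frame pose.

```python
class SlamEngine:
    def track(self, frame):
        return (0.0, 0.0)            # stub: device pose in the local frame

class PerceptionUnit:
    def recognize(self, frame):
        return []                    # stub: spatial anchors found in the frame

class LwtEstimator:
    def __init__(self):
        self.state = "void"          # reliability: void -> weak -> good -> strong
    def update(self, local_pose, gps, compass, anchors):
        # Recognized anchors give a stronger fix than GPS/compass alone.
        if anchors:
            self.state = "strong"
        elif gps is not None:
            self.state = "weak"
        return local_pose            # stub: identity LWT applied

def process_packet(frame, gps, compass, slam, perception, estimator):
    """One iteration: local SLAM pose, anchor recognition, LWT refinement."""
    local_pose = slam.track(frame)
    anchors = perception.recognize(frame)
    return estimator.update(local_pose, gps, compass, anchors)

estimator = LwtEstimator()
process_packet(b"frame-bytes", (59.93, 30.31), 112.0,
               SlamEngine(), PerceptionUnit(), estimator)
```

The reliability states listed in the next section ("void", "weak", "good", "strong") correspond to how much evidence the estimator has accumulated.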

We provide two videos demonstrating two approaches to LWT determination. In these videos the left part of the window shows the input video stream.


In the top-left corner the following information is displayed:

  • estimated LWT reliability state: void (not set, identity), weak, good, strong;

  • number of received GPS messages;

  • number of recognized spatial anchors;

  • spatial recognition buffer size (the buffer contains preliminarily recognized candidates);

  • number of spatial anchors currently loaded.

Trajectories are displayed in the right part of the window and drawn in the following colors:

Color | Meaning
Yellow | SLAM trajectory in its own local co-ordinate system (LWT is identity)
Blue | trajectory with LWT estimated from GPS and compass data
White | trajectory with LWT estimated from spatial anchor recognition
Green circles (with red dashes) | received GPS positions; each red dash indicates the corresponding compass heading
Purple circles | created and used spatial anchors

At the end of each trajectory, a red dash indicates the current camera axis direction. Received GPS positions are visualized by green circles; for each of them, the red dash indicates the corresponding compass heading.

The first video, “GPS correction with Arcona SLAM engine in wide areas”, covers the case of wide areas without sufficiently valuable static objects, where GPS provides at least satisfactory precision (parks, woods, fields, and so on). In this case a proper LWT can be estimated using only GPS and compass information. Once enough GPS messages have been accepted, the mapping becomes sustainable and the resulting trajectory behaves more stably than the original GPS signal. Note that the original SLAM trajectory (yellow) suffers from a slight drift, while the mapped one (blue) almost does not.

The second video, “GPS correction using 3D mapping with spatial anchors in Arcona SLAM engine“, illustrates the case of a compact area with a number of valuable static objects. The presence of such objects makes it possible to create a 3D map that solves the LWT problem for AR users. The created and used spatial anchors are depicted by purple circles. Initially the trajectory (blue) is mapped only on the basis of incoming GPS and compass messages (as in the previous video) and is not very accurate because the GPS signal is quite distorted. The situation changes after the first spatial anchor is recognized. The trajectory mapped using the spatial anchors is depicted in white. The bottom-left part of the frame shows the last recognition result: the left image corresponds to the anchor, and the right one to the device camera image.

One of the key problems in navigation using 3D maps is reliable spatial anchor (place) recognition.
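The GPS-based mapping shown in the first video can be sketched as a least-squares alignment of the SLAM-local trajectory against the received GPS positions. The sketch below estimates a 2D rigid transform (rotation angle plus translation) by the standard closed-form solution; the actual estimator also uses compass headings and tracks a reliability state, both omitted here, and all names are illustrative.

```python
import math

def estimate_lwt_2d(local_pts, world_pts):
    """Least-squares 2D rigid transform mapping SLAM-local points onto
    corresponding GPS (world) positions. Returns (theta, (tx, ty)) such
    that world ~= R(theta) @ local + t. Simplified illustrative sketch."""
    n = len(local_pts)
    lcx = sum(p[0] for p in local_pts) / n
    lcy = sum(p[1] for p in local_pts) / n
    wcx = sum(p[0] for p in world_pts) / n
    wcy = sum(p[1] for p in world_pts) / n
    # Cross-covariance terms of the centered point sets give the
    # optimal rotation angle in closed form.
    sxx = sxy = syx = syy = 0.0
    for (lx, ly), (wx, wy) in zip(local_pts, world_pts):
        lx, ly, wx, wy = lx - lcx, ly - lcy, wx - wcx, wy - wcy
        sxx += lx * wx; sxy += lx * wy
        syx += ly * wx; syy += ly * wy
    theta = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(theta), math.sin(theta)
    tx = wcx - (c * lcx - s * lcy)
    ty = wcy - (s * lcx + c * lcy)
    return theta, (tx, ty)
```

With each accepted GPS message the point correspondences grow, which is why the mapped trajectory stabilizes over time while the raw GPS signal stays noisy.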


Our approach provides low-cost anchor (place) identification and can be used even on not very powerful mobile devices (such as an iPhone 6s). In the folder “recognition” we provide some results of its testing.


A quite long (about 100 m) wall without explicit features, except some graffiti, was chosen (see “the_wall.avi”). The test task is to recognize different fragments of the wall.

The wall was initially sampled using a mobile phone, and a set of spatial anchors was created. These anchors are then recognized from the input video stream during a second walking pass along the wall.
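The low-cost identification step can be illustrated with a brute-force binary-descriptor matcher over the anchor set. This is a minimal sketch under assumed data formats (byte-string descriptors, a dict of anchors) with illustrative thresholds; a real recognizer would also verify geometric consistency before accepting a candidate, which is omitted here.

```python
def hamming(d1: bytes, d2: bytes) -> int:
    """Hamming distance between two binary feature descriptors."""
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

def recognize_anchor(frame_descs, anchors, max_dist=10, min_matches=3):
    """Return the id of the anchor whose descriptors best match the
    current frame, or None if no anchor gathers enough matches.

    anchors: dict mapping anchor id -> list of binary descriptors.
    Brute-force and deliberately simple; thresholds are assumptions.
    """
    best_id, best_score = None, 0
    for anchor_id, anchor_descs in anchors.items():
        matches = sum(
            1 for fd in frame_descs
            if any(hamming(fd, ad) <= max_dist for ad in anchor_descs)
        )
        if matches > best_score:
            best_id, best_score = anchor_id, matches
    return best_id if best_score >= min_matches else None
```

In the wall test, each anchor would hold the descriptors sampled from one wall fragment during the first pass, and frames from the second pass are matched against all of them.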


Selected examples of recognition results are in the sub-folder “recognition/selected”. For clarity, in each image pair one of the matched regions is highlighted.




