CN111157009A - Indoor positioning navigation creating method and system based on mobile terminal and AR (augmented reality) intelligence


Info

Publication number
CN111157009A
Authority
CN
China
Prior art keywords
navigation
map
indoor
mobile terminal
point
Prior art date
Legal status
Pending
Application number
CN202010252800.8A
Other languages
Chinese (zh)
Inventor
Li Wei (李伟)
Liang Weiming (梁维明)
Current Assignee
Shenzhen Sinan Data Service Co ltd
Original Assignee
Shenzhen Sinan Data Service Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sinan Data Service Co., Ltd.
Priority to CN202010252800.8A
Publication of CN111157009A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 — Instruments for performing navigational calculations
    • G01C21/206 — Instruments for performing navigational calculations specially adapted for indoor navigation

Abstract

The invention provides an indoor positioning navigation establishing method and system based on a mobile terminal and AR intelligence, wherein the indoor positioning navigation establishing method comprises the following steps: step S1, the mobile terminal scans the real environment to realize the drawing of the indoor map; step S2, drawing an indoor navigation route; step S3, uploading the collected map data packet to a cloud server; step S4, loading an AR navigation data map at the navigation client; step S5, searching for a destination and matching an AR navigation data map; step S6, generating an augmented reality map through matching of the position coordinates of the starting point information and the AR navigation data map, and completing positioning; and step S7, shooting a live-action image of the location of the user in real time through the camera, and comparing and matching the live-action image with scene data in the AR navigation data map by the cloud server to realize navigation. The invention can realize accurate, intuitive and effective indoor positioning and navigation.

Description

Indoor positioning navigation creating method and system based on mobile terminal and AR (augmented reality) intelligence
Technical Field
The invention relates to an indoor positioning and navigation method, in particular to an indoor positioning and navigation establishing method based on a mobile terminal and AR intelligence, and relates to an indoor positioning and navigation establishing system adopting the indoor positioning and navigation establishing method based on the mobile terminal and the AR intelligence.
Background
With the continuous development of cities, large building complexes such as airports, large department stores, high-speed railway stations and high-rise office buildings keep appearing, the complexity of indoor environments keeps increasing, and people's demand for accurate indoor navigation keeps growing. Existing planar 2D indoor navigation can rapidly derive an optimal path from a starting point to an end point, but cannot intuitively guide the user to the destination; compared with traditional planar 2D indoor navigation, a novel AR indoor navigation method can guide the user to the destination intuitively.
Currently, the following indoor navigation technologies are commonly used:
(1) WIFI positioning and navigation: several wireless routers are arranged indoors; a smartphone obtains the signal strength and IP address of each surrounding wireless router, estimates its distance to each router with a signal attenuation model, and draws a circle around each router with that distance as the radius; the intersection of the circles is the position of the smartphone, i.e. the position of the user, and positioning is thus realized. In the software, the user selects a destination, the software plans a path according to the user's position, and the user reaches the destination by walking along the path. This method is costly, and problems such as inaccurate floor positioning and slow response easily occur, causing the final navigation to fail.
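The circle-intersection idea described above can be sketched as a small trilateration routine. This is an illustrative sketch assuming exact distance estimates on a 2D floor plan; the function name and router layout are hypothetical, not part of the patent.

```python
import math

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Estimate (x, y) from three router positions and measured distances.

    Subtracting the circle equations pairwise eliminates the quadratic
    terms and yields a linear 2x2 system A @ [x, y] = b.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # non-zero when the routers are not collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With noisy real-world distances the three circles do not meet in a single point, so a least-squares variant over more than three routers would normally be used.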
(2) Bluetooth positioning and navigation: a low-power Bluetooth transmission mode is adopted, so each Bluetooth module runs on a button battery without wiring. A large number of Bluetooth modules are arranged in the indoor scene, the coordinate information of each module is stored in a database, and the map file and positioning algorithm are stored in a cloud service; after the user opens the positioning page on the mobile phone, the map file, the positioning algorithm and the coordinates of the Bluetooth modules are obtained from the cloud server. When the mobile phone scans a group of Bluetooth modules, it calculates the distance to each module from the signal strength between them, adds the coordinate data of the modules, performs a triangulation calculation, displays the position calculated in real time on the map, and combines the positioning information with the planned navigation path to guide the user to the destination. Like WIFI positioning and navigation, this method easily suffers from inaccurate floor positioning, and the button batteries of the Bluetooth modules must be replaced frequently later on, so the cost is high and maintenance is difficult.
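The distance-from-signal-strength step mentioned for both methods is commonly modeled with a log-distance path-loss formula. The sketch below assumes that model; the 1 m reference power and path-loss exponent are illustrative calibration values, not figures from the patent.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model.

    Model: RSSI = TxPower - 10 * n * log10(d), where TxPower is the
    received power at 1 m and n is the path-loss exponent (2.0 in free
    space, higher indoors). Solving for d gives the estimate below.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))
```

For example, with these assumed constants a reading of -59 dBm maps to 1 m and -79 dBm to 10 m; in practice both constants must be calibrated per beacon and per environment.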
Moreover, both of these indoor navigation methods give the user a weak sense of immersion, the user experience is not good enough, and they are difficult to use for people with a poor sense of direction. In addition, when using them the name of the destination must be entered by typing or by voice input to trigger the navigation function, which is difficult for people who do not speak Mandarin well or who find typing inconvenient (such as elderly users or those with limited literacy).
(3) Code-scanning positioning navigation: position is determined by scanning a code; labels carrying position information are pasted in the real environment, and the user determines his position by scanning a label. The user must find the attached label to scan, the experience is poor, and the appearance of the building is affected.
In addition, because of the accuracy limitations of real-world positioning, existing indoor navigation suffers from positioning-point drift, so the AR navigation route goes wrong and the user cannot reach the destination. Meanwhile, because of the performance limitations of mobile devices, frequent large-scale navigation calculations are impractical, so the total amount of computing resources used by the running algorithm needs to be effectively reduced.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an indoor positioning navigation creating method based on a mobile terminal and AR intelligence, which is intuitive and effective and can reduce positioning errors, and further provide an indoor positioning navigation creating system adopting the indoor positioning navigation creating method based on the mobile terminal and AR intelligence.
Therefore, the invention provides an indoor positioning navigation establishing method based on a mobile terminal and AR intelligence, which comprises the following steps:
step S1, the mobile terminal scans the real environment to realize the drawing of the indoor map;
step S2, drawing an indoor navigation route;
step S3, uploading the collected map data packet to a cloud server;
step S4, the navigation client of the mobile terminal is in communication connection with the cloud server, and an AR navigation data map is loaded on the navigation client;
step S5, searching for a destination and matching an AR navigation data map;
step S6, generating an augmented reality image through matching of position coordinates of starting point information and the AR navigation data map, and then transmitting the augmented reality image to a display on the mobile terminal through a cloud server to be displayed, so as to complete positioning;
step S7, starting a camera of the mobile terminal, shooting a live-action picture of the location of the user in real time through the camera, transmitting the live-action picture to the cloud server, and comparing and matching the live-action picture with scene data in the AR navigation data map by the cloud server to realize navigation;
in step S1, after the mobile terminal scans the real environment, environment detection description and matching are implemented by selecting feature points, so as to implement construction of a point cloud map, and the constructed AR navigation data map is uploaded and stored in the cloud server.
In a further improvement of the present invention, step S1 in this example includes the following sub-steps:
step S101, scanning a real environment through a mobile terminal, and collecting environment data;
step S102, selecting pixel points, comparing the pixel points with other pixel points in a preset range taking the pixel points as circle centers, and selecting characteristic points;
step S103, carrying out assignment operation on the feature points to realize environment detection description, combining the compared results to obtain a matching result, and drawing an indoor map;
and step S104, outputting the drawn indoor map.
A further improvement of the present invention is that, in step S102, a pixel point P is selected and its gray value is denoted Ip; M pixel points within a preset radius around the pixel point P are selected, and for each of them it is compared whether the difference between its gray value and the gray value of the pixel point P is greater than a preset gray difference value t; if so, the pixel point P is judged to be a feature point. Here M is the preset number of selected pixel points and t is a preset gray threshold.
In a further improvement of the present invention, in step S103, the selected feature points are subjected to an assignment operation by the formula

T(P(A, B)) = 1, if I_A < I_B; 0, otherwise,

the results after the assignment operation are combined to realize the environment detection description, and an XOR operation is then carried out on the descriptors of two adjacent feature points to calculate their similarity; the XOR counts the bits in which the two descriptors differ, and if the resulting similarity exceeds the preset similarity threshold, the matching is judged successful. Here T(P(A, B)) is the result of the assignment operation, I_A is the gray value of feature point A, and I_B is the gray value of feature point B.
A further improvement of the present invention is that, in practical use, the user holds a mobile terminal such as a mobile phone and opens the system; in step S103, the process of combining the results after the assignment operation to realize the environment detection description is as follows: the cloud map package is downloaded, photos of the surroundings are captured by the mobile phone camera, the result of the assignment operation in step S103 is then superposed onto the feature points and matched against the corresponding positions in the cloud map package, and the successfully matched cloud map is taken as the indoor map.
In a further improvement of the present invention, in step S2, the process of drawing the indoor navigation route is as follows: turning and straight-going points are marked with circles in the AR navigation data map, the end point is marked with a triangle, the initial position is marked A and the end position is marked, a polyline or arrow is then drawn according to the coordinates to realize indoor navigation from the initial position to the end position, and the indoor navigation overlay is controlled to stay at the top display position of the camera view at all times.
In a further improvement of the present invention, in the step S3, the data of the map data packet includes point cloud data, motion pose, longitude and latitude, name and size.
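The listed contents of the map data packet can be sketched as a small container type. This is only an illustrative sketch: the class name and field names are hypothetical, and point count stands in for the true byte size of the packet.

```python
from dataclasses import dataclass, field

@dataclass
class MapDataPacket:
    """Illustrative container for one uploaded map data packet."""
    name: str                 # map name, e.g. keyed by GPS longitude/latitude
    longitude: float
    latitude: float
    motion_pose: tuple        # camera pose, e.g. (x, y, z, roll, pitch, yaw)
    point_cloud: list = field(default_factory=list)  # list of (x, y, z) points

    @property
    def size(self) -> int:
        # Stand-in for packet size: number of points in the point cloud.
        return len(self.point_cloud)
```

A real implementation would serialize this to a binary or JSON payload before uploading to the cloud server and report the actual payload size.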
In a further improvement of the present invention, in step S6, the process of generating the augmented reality map is as follows: and after receiving the coordinate information of the starting point position, the cloud server matches the coordinate information of the starting point position with the AR navigation data map, so that the place represented by the position coordinate is displayed at the corresponding position in the AR navigation data map, and an augmented reality map is generated.
In step S7, in the process that the cloud server compares and matches the scene data in the live view and the AR navigation data map, when the user moves, the cloud server calculates and plans the navigation route in real time with the current real-time position as the navigation starting point.
The invention also provides an indoor positioning navigation establishing system based on the mobile terminal and the AR intelligence, which adopts the indoor positioning navigation establishing method based on the mobile terminal and the AR intelligence and comprises the following steps:
the indoor positioning module is used for realizing data acquisition and drawing of an indoor navigation route;
the route searching module is connected with the indoor positioning module and used for realizing route planning according to the AR navigation data map;
and the navigation module is connected with the path searching module and is used for realizing path navigation.
Compared with the prior art, the invention has the following beneficial effects. Based on the mobile terminal and AR intelligence, relocalization is realized on the basis of a real-time SLAM map-construction algorithm, and high-precision continuous feature matching over large areas and large scenes can be realized by combining measurement adjustment techniques, aerial photogrammetry, SLAM + IMU and the assistance of multiple sensors, so that data can be acquired efficiently and accurately. The method can quickly and accurately acquire an indoor 3D map; with a high-precision map available, the coordinates of every indoor position can be found on the constructed map, so positioning is very accurate. During positioning and navigation on a smartphone mobile terminal, the navigation system calculates the real-time position of the holder by combining the IMU unit carried by the smartphone; in addition, considering that the IMU accumulates errors as the phone moves, the method can also correct the scanned feature points in real time against the accurate three-dimensional map. No equipment needs to be installed after the map is built: indoor positioning and navigation can be done with a smartphone alone, without deploying indoor infrastructure. A multi-source positioning and navigation mode is also combined, overcoming the difficulty of accurate positioning by machine vision alone, so accurate (small positioning error), intuitive and effective indoor positioning and navigation can be realized.
Drawings
FIG. 1 is a schematic diagram of the workflow of an embodiment of the present invention;
FIG. 2 is a schematic diagram of the image matching principle in an embodiment of the present invention;
FIG. 3 is a diagram illustrating an example of a test performed to find other pixels within a predetermined range according to an embodiment of the present invention;
FIG. 4 is a diagram of an exemplary test for selecting feature points in accordance with an embodiment of the present invention;
FIG. 5 is a schematic diagram of an indoor navigation path according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the system structure of an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
As shown in fig. 1, this example provides a method for creating indoor positioning navigation based on a mobile terminal and AR intelligence, which includes the following steps:
step S1, the mobile terminal scans the real environment to realize the drawing of the indoor map;
step S2, drawing an indoor navigation route;
step S3, uploading the collected map data packet to a cloud server;
step S4, the navigation client of the mobile terminal is in communication connection with the cloud server, and an AR navigation data map is loaded on the navigation client;
step S5, searching for a destination and matching an AR navigation data map;
step S6, generating an augmented reality image through matching of position coordinates of starting point information and the AR navigation data map, and then transmitting the augmented reality image to a display on the mobile terminal through a cloud server to be displayed, so as to complete positioning;
step S7, starting a camera of the mobile terminal, shooting a live-action picture of the location of the user in real time through the camera, transmitting the live-action picture to the cloud server, and comparing and matching the live-action picture with scene data in the AR navigation data map by the cloud server to realize navigation;
in step S1, after the mobile terminal scans the real environment, environment detection description and matching are implemented by selecting feature points, so as to implement construction of a point cloud map, and the constructed AR navigation data map is uploaded and stored in the cloud server.
This embodiment provides a scheme combining a path-search algorithm, a navigation logic method and indoor AR navigation, and relates to the technical field of indoor navigation. The corresponding indoor positioning navigation creating system comprises an indoor positioning module, a path searching module and a navigation module, and improves the path-search algorithm of the path searching module and the navigation logic method of the navigation module. Compared with a planar indoor navigation system, this embodiment can guide a user to a destination more intuitively and effectively without wasting time reading a map; it is robust to various positioning methods, so the positioning method can be chosen freely; it can reduce the navigation impact of positioning errors; and it can effectively reduce the total amount of computing resources used by the algorithm, making the computation faster than similar methods and better suited to current mobile platforms with limited computing resources, such as smartphones and other mobile terminals.
The method for establishing the indoor positioning navigation based on the mobile terminal and the AR intelligence is low in cost, simple to use, good in user experience, beneficial to large-scale popularization and use, capable of being applied to navigation application of large buildings, scenic spots, hospitals, airports and superstores, and capable of being achieved only by using the mobile terminals of the user, such as a smart phone and the like.
In this example, step S1 realizes the indoor map drawing: a mobile terminal such as a smartphone scans the real environment, the phone camera captures real-time images of the environment, the system selects feature points and retains the usable ones (step S102), a point cloud map is constructed from a large number of useful feature points to obtain the indoor map (AR navigation data map), and the AR navigation point cloud data model of the indoor map is then stored in the cloud server.
This example preferably builds the indoor 3D map, i.e. the indoor map (AR navigation data map), using structure from motion (SfM), one of whose key steps is to infer the relative camera motion between two frames, which is done by image matching; step S1 in this example is therefore completed through comparisons between pixel points.
More specifically, step S1 in this example includes the following sub-steps:
step S101, scanning a real environment through a mobile terminal, and collecting environment data;
step S102, selecting pixel points, comparing the pixel points with other pixel points in a preset range taking the pixel points as circle centers, and selecting characteristic points;
step S103, carrying out assignment operation on the feature points to realize environment detection description, combining the compared results to obtain a matching result, and drawing an indoor map;
and step S104, outputting the drawn indoor map.
In step S102, a pixel point P is selected and its gray value is set to Ip; M pixel points within a preset radius around the pixel point P are selected, and for each of them it is compared whether the difference between its gray value and the gray value of the pixel point P is greater than a preset gray difference value t; if so, the pixel point P is judged to be a feature point. Here M is the preset number of selected pixel points, which can be preset or changed according to actual needs, and t is a preset gray threshold, which can likewise be preset or changed according to actual needs.
In step S103 in this example, the selected feature points are subjected to an assignment operation by the formula

T(P(A, B)) = 1, if I_A < I_B; 0, otherwise,

the results after the assignment operation are combined to realize the environment detection description, and an XOR operation is then carried out on the descriptors of two adjacent feature points to calculate their similarity; the XOR counts the bits in which the two descriptors differ, and if the resulting similarity exceeds the preset similarity threshold, the matching is judged successful. Here T(P(A, B)) is the result of the assignment operation, I_A is the gray value of feature point A, and I_B is the gray value of feature point B. The similarity threshold may be preset or changed according to actual needs. Feature points A and B are preferably two adjacent feature points; assignment is realized through the two adjacent feature points, and the results after the assignment operation are combined to realize the detection description of the actual environment, which better meets the description requirements of an indoor environment.
As shown in fig. 5, in the present embodiment the user may hold the mobile phone and open the system; in step S103, the process of combining the results after the assignment operation to realize the environment detection description is as follows: the cloud map package is downloaded, photos of the surroundings are captured by the mobile phone camera, the result of the assignment operation in step S103 is then superposed onto the feature points and matched against the corresponding positions in the cloud map package, and the successfully matched cloud map is taken as the indoor map.
This example is described by a test example. In the detection stage, this example focuses on specific areas of the image considered special (feature points for short); in the description stage, we can simply understand the selected feature points of an image as its more prominent points, such as contour points, bright points in darker areas, dark points in lighter areas, and the like. The central idea of detecting feature points with the FAST (Features from Accelerated Segment Test) algorithm is to find the points that stand out from their surroundings, i.e. to compare a point with the points around it; if it differs from most of them, it can be regarded as a feature point. For example, we can compare that point with all the pixel points on a circle of radius 3 around it, as shown in fig. 3 below.
Specific calculation process of FAST: a pixel point P is selected from the picture, and it is then judged whether P is a feature point. We first set its intensity (i.e. gray value) to Ip and set a suitable gray threshold t: when the absolute value of the difference between the gray values of 2 points is larger than t, the 2 points are considered different.
In the present test example, the 16 pixels around the pixel are considered, i.e. the number M of selected pixels is set to 16, as shown in fig. 3. If among the 16 points there are n contiguous points that all differ from it, it is a corner point; n is the number of pixels whose gray-value difference from P has absolute value greater than t, and here n is set to 12.
To improve test efficiency, this example quickly excludes the large majority of non-feature points. The quick test examines only the pixels at the four positions 1, 9, 5 and 13. If the pixel is a corner point, at least 3 of these four pixel points should differ from it; if this is not satisfied, the pixel point cannot be a corner point. A corner point is a point that differs obviously from the pixel points in its surrounding neighborhood (its gray value differs by more than t); if a pixel point lies in a different region from enough of the surrounding pixel points, it may be a corner point. That is, whether a point is a corner point is mainly judged on the gray image: if its gray value is larger or smaller than the gray values of enough pixel points in the surrounding area, the point may be a corner point.
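The segment test just described (gray threshold t, n = 12 contiguous ring pixels, quick rejection at positions 1, 9, 5 and 13) can be sketched as follows. The image is a plain 2D list of gray values, and the ring offsets are assumed to follow the usual radius-3 Bresenham circle consistent with fig. 3; this is an illustrative sketch, not the patent's exact implementation.

```python
# 16 (dx, dy) offsets of the radius-3 Bresenham circle around a pixel,
# indexed 0..15 (positions "1, 9, 5, 13" in the text are indices 0, 8, 4, 12).
RING = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
        (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_corner(img, x, y, t=20, n=12):
    """FAST-style segment test on a 2D list of gray values.

    Pixel (x, y) counts as a corner if n contiguous ring pixels all
    differ from its gray value Ip by more than t, all brighter or all
    darker.
    """
    ip = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in RING]
    # Quick rejection: of the four probe positions, at least 3 must differ by > t.
    probe = sum(1 for i in (0, 8, 4, 12) if abs(ring[i] - ip) > t)
    if probe < 3:
        return False
    # Full test: look for n contiguous ring pixels, all brighter than
    # ip + t or all darker than ip - t (doubled ring handles wrap-around).
    for sign in (1, -1):
        run = 0
        for v in ring * 2:
            if sign * (v - ip) > t:
                run += 1
                if run >= n:
                    return True
            else:
                run = 0
    return False
```

An isolated bright pixel on a dark background passes the test (all 16 ring pixels are darker), while any pixel of a flat region is rejected immediately by the four-position probe.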
Through the above process, many feature points are obtained for the picture in this example, and these feature points are marked, as shown in fig. 4 (in practice they can be marked in red, etc.). A circle O is made with the key pixel point P as center and d as radius, and N point pairs are selected within circle O. For convenience of explanation N = 4; in practical application N may take a value such as 512. Assume the currently selected 4 point pairs are denoted P1(A, B), P2(A, B), P3(A, B), P4(A, B), and define the assignment operation T:

T(P(A, B)) = 1, if I_A < I_B; 0, otherwise.

The T operation is then carried out on each selected point pair, and the obtained results are combined.
For example, T(P1(A, B)) = 1, T(P2(A, B)) = 0, T(P3(A, B)) = 1, T(P4(A, B)) = 1, so the final descriptor is 1011. Small patches surrounding the feature points are then extracted and a descriptor is assigned to each patch; the descriptor is a special kind of bar code that encodes information about the appearance of the patch. Assigning a descriptor to each patch means that, at the final matching stage, matching is performed by comparing random descriptor pairs between two images; if two descriptors are similar, a match is recorded, and after repeating this process many times a large number of matches appear directly on the images. For example, let the descriptors of feature point A and feature point B be A: 10101011 and B: 10101010, and let this example set a preset similarity threshold of, say, 80%. When the similarity of the descriptors of feature point A and feature point B is greater than 80%, the two are determined to be the same feature point, i.e. the 2 feature points are matched successfully. In this example, the descriptors of feature points A and B differ only in the last bit, giving a similarity of 87.5%, which is greater than 80%, so feature point A and feature point B match. The similarity between feature point A and feature point B can be calculated easily by an XOR operation on the two descriptors.
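The descriptor construction and the XOR similarity check above can be sketched as follows, assuming binary descriptors stored as bit lists; the sampling-pair layout is an illustrative assumption.

```python
def brief_like_descriptor(img, cx, cy, pairs):
    """Build a binary descriptor for the patch around (cx, cy).

    Each (dx1, dy1, dx2, dy2) pair contributes one bit: 1 if the first
    sample's gray value is smaller than the second's, else 0 (the
    assignment operation T applied to each point pair).
    """
    return [1 if img[cy + dy1][cx + dx1] < img[cy + dy2][cx + dx2] else 0
            for dx1, dy1, dx2, dy2 in pairs]

def similarity(desc_a, desc_b):
    """Fraction of equal bits; XOR counts the differing positions."""
    differing = sum(a ^ b for a, b in zip(desc_a, desc_b))
    return 1.0 - differing / len(desc_a)
```

For the descriptors 10101011 and 10101010 from the text, one bit of eight differs, so the similarity is 0.875, above the 80% threshold, and the two feature points are matched.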
As shown in fig. 5, in step S2 in this example, the process of drawing the indoor navigation route is as follows: turning and straight-going points are marked with circles in the AR navigation data map, the end point is marked with a triangle, the initial position is marked A, marks B to G are placed, and the end positions are marked (marks 1 to 6); a polyline or arrow is then drawn according to the coordinates, realizing indoor navigation from the initial position to the end position. The indoor navigation overlay is controlled, by script, to stay at the top display position of the camera view at all times; the farther an object is from the camera, the smaller it looks.
In step S3, the data of the map data packet includes point cloud data, motion pose, longitude and latitude, name and size, where the motion pose is the spatial pose of the mobile phone camera, the name includes the map name keyed by GPS longitude and latitude, and the size is the size of the data packet.
In this embodiment, the navigation client installed on the mobile terminal in step S4 is in communication connection with the cloud server, and the AR navigation data map is loaded on the navigation client. In step S5, a destination is searched for and map matching is realized: the map information and destination information are uploaded to the cloud server, which performs cloud computing, for example counting the turning and straight-going points each candidate route must pass, the required route length and the current pedestrian volume of each road section, and then calculates the minimum route cost to match the optimal route. As shown in fig. 5, assuming the initial position is H and the end position is 1, the shortest of the routes connecting the turning and straight-going points between initial position H and end position 1 is selected as the optimal route.
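The cloud-side selection of a minimum-cost route over the marked turning and straight-going points can be sketched with Dijkstra's algorithm. The graph encoding and edge costs below are illustrative assumptions; in the patent's terms, an edge cost could fold in segment length, a turn penalty and pedestrian volume.

```python
import heapq

def best_route(graph, start, goal):
    """Dijkstra over a dict graph: node -> list of (neighbor, cost).

    Returns (total_cost, path). Each edge cost is assumed to already
    combine the scoring factors (length, turns, crowding).
    """
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, ()):
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []
```

Using the figure's labels as hypothetical nodes, a graph with routes H-B-1 (cost 6) and H-C-1 (cost 7) returns the cheaper route through B.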
In step S6, after receiving the coordinate information of the starting point position, the cloud server matches the coordinate information of the starting point position with the AR navigation data map, so that the location represented by the position coordinate is displayed at a corresponding position in the AR navigation data map, thereby generating an augmented reality map, and then the cloud server transmits the augmented reality map to the display on the mobile terminal for display, thereby completing positioning.
In step S7, in the process that the cloud server compares and matches the scene data in the live view and the AR navigation data map, when the user moves, the cloud server calculates and plans the navigation route in real time with the current real-time position as the navigation starting point.
In step S7, after the cloud server receives the end position information and the start position information, it calculates the navigation route with the coordinates of the end position as the navigation end point and the coordinates of the start position as the navigation start point. At the same time, the camera on the mobile terminal is turned on; the live view of the user's location is captured by the camera in real time and transmitted to the cloud server. The cloud server compares and matches the scene data in the live view with the AR navigation data map to track the user's position in real time. When the user moves, the cloud server recalculates and re-plans the navigation route in real time with the current position as the navigation start point, the calculation and planning process being as described in step S5. The cloud server then matches the route to the live environment and transmits the AR navigation live-view picture to the display on the mobile terminal, realizing navigation.
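The track-and-re-plan behaviour of step S7 can be sketched as a per-frame loop; `tracker`, `planner` and `renderer` below stand in for the cloud-side scene matching, the step S5 route planning and the AR overlay, and are assumptions for illustration rather than APIs from the patent:

```python
def navigate(tracker, planner, goal, renderer, max_steps=100):
    """Per-frame loop for step S7: obtain the user's tracked position from the
    live camera frame, re-plan the route from that position as the new start,
    and render the planned route into the live view until the goal is reached."""
    for _ in range(max_steps):
        position = tracker()              # match the live frame against the AR map
        if position == goal:
            return True                   # arrived at the navigation end point
        route = planner(position, goal)   # current position becomes the new start
        renderer(route)                   # overlay the route on the live view
    return False                          # gave up after max_steps frames
```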
As shown in fig. 6, this example further provides an indoor positioning navigation creating system based on a mobile terminal and AR intelligence, which adopts the indoor positioning navigation creating method based on a mobile terminal and AR intelligence, and includes:
the indoor positioning module is used for realizing data acquisition and drawing of an indoor navigation route;
the route searching module is connected with the indoor positioning module and used for realizing route planning according to the AR navigation data map;
the navigation module is connected with the path searching module and is used for realizing path navigation; the navigation module comprises an AR mapping module for realizing AR mapping and an AR rendering module for realizing AR rendering.
In summary, this example is based on the mobile terminal and AR intelligence. Relocation SLAM is realized on the basis of a real-time construction algorithm, and by combining measurement adjustment techniques, aerial photogrammetry, SLAM + IMU and the assistance of multiple sensors, high-precision continuous feature matching over large areas and large scenes can be achieved, so that data can be acquired efficiently and accurately. The method can quickly and accurately acquire an indoor 3D map; with a high-precision map available, the coordinates of each indoor position can be found on the constructed map, and positioning is very accurate. During positioning and navigation on a smartphone, the navigation system calculates the real-time position of the holder by combining the IMU unit carried by the smartphone; in addition, considering that the IMU accumulates errors as the smartphone moves, the method can also correct the scanned feature points in real time through the accurate three-dimensional map. No equipment needs to be installed after the map is built: indoor positioning and navigation can be performed with a smartphone alone, without deploying indoor infrastructure. By further combining a multi-source positioning and navigation mode, the difficulty of accurate positioning with machine vision alone is overcome, realizing accurate (small positioning error), intuitive and effective indoor positioning and navigation.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (9)

1. An indoor positioning navigation creating method based on a mobile terminal and AR intelligence is characterized by comprising the following steps:
step S1, the mobile terminal scans the real environment to realize the drawing of the indoor map;
step S2, drawing an indoor navigation route;
step S3, uploading the collected map data packet to a cloud server;
step S4, the navigation client of the mobile terminal is in communication connection with the cloud server, and an AR navigation data map is loaded on the navigation client;
step S5, searching for a destination and matching an AR navigation data map;
step S6, generating an augmented reality image through matching of position coordinates of starting point information and the AR navigation data map, and then transmitting the augmented reality image to a display on the mobile terminal through a cloud server to be displayed, so as to complete positioning;
step S7, starting a camera of the mobile terminal, shooting a live-action picture of the location of the user in real time through the camera, transmitting the live-action picture to the cloud server, and comparing and matching the live-action picture with scene data in the AR navigation data map by the cloud server to realize navigation;
in step S1, after the mobile terminal scans the real environment, environment detection description and matching are implemented by selecting feature points, so as to implement construction of a point cloud map, and the constructed AR navigation data map is uploaded and stored in the cloud server;
the step S1 includes the following sub-steps:
step S101, scanning a real environment through a mobile terminal, and collecting environment data;
step S102, selecting pixel points, comparing the pixel points with other pixel points in a preset range taking the pixel points as circle centers, and selecting characteristic points;
step S103, carrying out assignment operation on the feature points to realize environment detection description, combining the compared results to obtain a matching result, and drawing an indoor map;
and step S104, outputting the drawn indoor map.
2. The indoor positioning navigation creating method based on a mobile terminal and AR intelligence according to claim 1, wherein in step S102 a pixel point P is selected and its gray value is set to Ip; M pixel points within a preset radius around the pixel point P are selected, and for each of them it is compared whether the difference between its gray value and the gray value of the pixel point P is greater than a preset gray difference value t; if so, the pixel point P is determined to be a feature point, where M is the preset number of selected pixel points and t is the preset gray threshold.
3. The indoor positioning navigation creating method based on a mobile terminal and AR intelligence according to claim 2, wherein in step S103 an assignment operation is carried out on the selected feature points by the formula

τ(A, B) = 1 if I_A < I_B, and τ(A, B) = 0 otherwise,

the results after the assignment operation are combined to realize the environment detection description, an XOR operation is then carried out on each two adjacent feature points to calculate their similarity, and the matching is judged successful if the similarity is smaller than a preset similarity threshold; wherein τ(A, B) is the result of the assignment operation, I_A represents the gray value of the feature point A, and I_B represents the gray value of the feature point B.
4. The indoor positioning navigation creating method based on a mobile terminal and AR intelligence according to claim 2, wherein in step S103 the process of combining the results after the assignment operation to realize the environment detection description is as follows: a cloud map package is downloaded, photos of the surroundings are shot by the mobile phone camera, the results of the assignment operation in step S103 are then superposed on the feature points and matched against the corresponding positions in the cloud map package, and the successfully matched cloud map is taken as the indoor map.
5. The indoor positioning navigation creating method based on a mobile terminal and AR intelligence according to any one of claims 1 to 4, wherein in step S2 the indoor navigation route is drawn as follows: turn points and straight-travel points are marked with circles in the AR navigation data map, the end point is marked with a triangle, the initial position is marked A, the end position is marked, a fold line or an arrow is drawn according to the coordinates to realize indoor navigation from the initial position to the end position, and the indoor navigation is controlled to always be displayed on top of the camera view.
6. The indoor positioning navigation creating method based on a mobile terminal and AR intelligence according to any one of claims 1 to 4, wherein in step S3 the data of the map data packet comprises the point cloud data, the camera pose, the longitude and latitude, the name and the size.
7. The method for creating indoor positioning navigation based on mobile terminal and AR intelligence of any claim 1 to 4, wherein in step S6, the process of generating augmented reality map is as follows: and after receiving the coordinate information of the starting point position, the cloud server matches the coordinate information of the starting point position with the AR navigation data map, so that the place represented by the position coordinate is displayed at the corresponding position in the AR navigation data map, and an augmented reality map is generated.
8. The method for creating an indoor positioning navigation system according to any one of claims 1 to 4, wherein in step S7, in the process that the cloud server compares and matches scene data in the live view and the AR navigation data map, when the user moves, the cloud server calculates and plans a navigation route in real time with a current real-time position as a navigation starting point.
9. An indoor positioning navigation creating system based on a mobile terminal and AR intelligence, characterized in that it adopts the indoor positioning navigation creating method based on a mobile terminal and AR intelligence according to any one of claims 1 to 8, and comprises:
the indoor positioning module is used for realizing data acquisition and drawing of an indoor navigation route;
the route searching module is connected with the indoor positioning module and used for realizing route planning according to the AR navigation data map;
and the navigation module is connected with the path searching module and is used for realizing path navigation.
CN202010252800.8A 2020-04-02 2020-04-02 Indoor positioning navigation creating method and system based on mobile terminal and AR (augmented reality) intelligence Pending CN111157009A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010252800.8A CN111157009A (en) 2020-04-02 2020-04-02 Indoor positioning navigation creating method and system based on mobile terminal and AR (augmented reality) intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010252800.8A CN111157009A (en) 2020-04-02 2020-04-02 Indoor positioning navigation creating method and system based on mobile terminal and AR (augmented reality) intelligence

Publications (1)

Publication Number Publication Date
CN111157009A true CN111157009A (en) 2020-05-15

Family

ID=70567678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010252800.8A Pending CN111157009A (en) 2020-04-02 2020-04-02 Indoor positioning navigation creating method and system based on mobile terminal and AR (augmented reality) intelligence

Country Status (1)

Country Link
CN (1) CN111157009A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732518A (en) * 2015-01-19 2015-06-24 北京工业大学 PTAM improvement method based on ground characteristics of intelligent robot
CN106960591A (en) * 2017-03-31 2017-07-18 武汉理工大学 A kind of vehicle high-precision positioner and method based on road surface fingerprint
CN107782314A (en) * 2017-10-24 2018-03-09 张志奇 A kind of augmented reality indoor positioning air navigation aid based on barcode scanning
CN110553648A (en) * 2018-06-01 2019-12-10 北京嘀嘀无限科技发展有限公司 method and system for indoor navigation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU HONG: "Research on parallel fast feature point matching algorithms", China Master's Theses Full-text Database, Information Science and Technology *
GAN JIN ET AL.: "Fast matching algorithm based on feature points", Electronics Optics & Control *
LU WENQIANG ET AL.: "Image feature point matching algorithm based on density-ORB features", Journal of Tianjin University of Technology *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111664848A (en) * 2020-06-01 2020-09-15 上海大学 Multi-mode indoor positioning navigation method and system
CN111765890A (en) * 2020-06-28 2020-10-13 济南浪潮高新科技投资发展有限公司 Indoor navigation system and navigation algorithm based on cloud image recognition and AR
CN111765890B (en) * 2020-06-28 2023-08-15 山东浪潮科学研究院有限公司 Navigation method of indoor navigation system based on cloud image recognition and AR
CN111637890A (en) * 2020-07-15 2020-09-08 济南浪潮高新科技投资发展有限公司 Mobile robot navigation method combined with terminal augmented reality technology
CN111795688A (en) * 2020-07-17 2020-10-20 南京邮电大学 Library navigation system implementation method based on deep learning and augmented reality
CN111795688B (en) * 2020-07-17 2023-11-17 南京邮电大学 Library navigation system implementation method based on deep learning and augmented reality
CN111947663A (en) * 2020-08-07 2020-11-17 山东金东数字创意股份有限公司 Visual positioning digital map AR navigation system and method
CN112013847A (en) * 2020-08-20 2020-12-01 安徽江淮汽车集团股份有限公司 Indoor navigation method and device
CN112197786B (en) * 2020-09-09 2021-09-10 深圳市掌锐电子有限公司 AR navigation pre-display cruise system based on live-action feedback
CN112197786A (en) * 2020-09-09 2021-01-08 深圳市掌锐电子有限公司 AR navigation pre-display cruise system based on live-action feedback
CN112556727A (en) * 2020-12-15 2021-03-26 国科易讯(北京)科技有限公司 AR navigation positioning error calibration method, device, equipment and storage medium
CN112857371A (en) * 2020-12-29 2021-05-28 上海企树网络科技有限公司 Navigation two-dimensional code generation method, park navigation method and park navigation device
CN113155130A (en) * 2021-04-06 2021-07-23 广州宸祺出行科技有限公司 AR-based large indoor place navigation method and system
CN113483781A (en) * 2021-06-02 2021-10-08 深圳市御嘉鑫科技股份有限公司 Intelligent multidimensional stereo space GPS navigation system and method
CN113340294A (en) * 2021-06-02 2021-09-03 南京师范大学 Landmark-fused AR indoor map navigation method
CN116592908A (en) * 2023-05-17 2023-08-15 浙江高信技术股份有限公司 Positioning navigation method and system based on high-precision map
CN116959695A (en) * 2023-08-02 2023-10-27 广州腾方医信科技有限公司 Intelligent guide detection system and method thereof
CN116959695B (en) * 2023-08-02 2024-02-06 广州腾方医信科技有限公司 Intelligent guide detection system and method thereof

Similar Documents

Publication Publication Date Title
CN111157009A (en) Indoor positioning navigation creating method and system based on mobile terminal and AR (augmented reality) intelligence
CN107782314B (en) Code scanning-based augmented reality technology indoor positioning navigation method
US10677596B2 (en) Image processing device, image processing method, and program
CN102960036B (en) Mass-rent vision and sensor exploration are drawn
CN109029444B (en) Indoor navigation system and method based on image matching and space positioning
CN103162682B (en) Based on the indoor path navigation method of mixed reality
CN108120436A (en) Real scene navigation method in a kind of iBeacon auxiliary earth magnetism room
CN106574841A (en) Methods and systems for generating route data
CN110470312B (en) Navigation method based on optical label network and corresponding computing equipment
KR20110066133A (en) Image annotation on portable devices
CN109115221A (en) Indoor positioning, air navigation aid and device, computer-readable medium and electronic equipment
CN112650772B (en) Data processing method, data processing device, storage medium and computer equipment
KR102622585B1 (en) Indoor navigation apparatus and method
CN111664848B (en) Multi-mode indoor positioning navigation method and system
CN111489582A (en) Indoor vehicle finding guiding system and method based on augmented reality
CN111339976A (en) Indoor positioning method, device, terminal and storage medium
KR20100060472A (en) Apparatus and method for recongnizing position using camera
CN114674323A (en) Intelligent indoor navigation method based on image target detection and tracking
JP2011113245A (en) Position recognition device
CN113532442A (en) Indoor AR pedestrian navigation method
CN108512888A (en) A kind of information labeling method, cloud server, system, electronic equipment and computer program product
KR20190029412A (en) Method for Providing Off-line Shop Information in Network, and Managing Server Used Therein
CN108020236A (en) Self-service guide method and system based on intelligent mobile terminal signal
KR101950713B1 (en) Apparatus of detecting indoor position using lacation map image and method thereof
KR20230051557A (en) Navigation by computer system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200515