CN111288996A - Indoor navigation method and system based on video live-action navigation technology
- Publication number
- CN111288996A (application number CN202010197452.9A)
- Authority
- CN
- China
- Prior art keywords
- path
- node
- video
- user
- navigation
- Prior art date: 2020-03-19
- Legal status
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Navigation (AREA)
Abstract
The invention provides an indoor navigation method based on a video live-action navigation technology, which comprises: obtaining path division nodes through modeling, and obtaining real-scene path nodes matched with the path division nodes through first visual angle (first-person-view) video shooting; on the basis of processing and compressing the path node videos, combining all path nodes pairwise to form path node combination pairs; and calculating the shortest path corresponding to each path node combination pair to form shortest path video data represented by path nodes, namely the indoor navigation video data. A user acquires position information as starting point information according to a path node label or a path node picture, selects a destination as end point information through a user interaction terminal, and then requests the shortest path video data matched with the starting point information and the end point information. The invention also provides an indoor navigation system. The invention overcomes the conflict between navigation precision and cost in the prior art and realizes information-source-free, high-precision, low-cost indoor navigation.
Description
Technical Field
The invention relates to the technical field of indoor navigation, in particular to an indoor navigation method and system based on a video live-action navigation technology.
Background
Indoor navigation has important practical value in everyday life. The main indoor navigation technologies currently fall into three categories: indoor navigation based on position perception, indoor navigation based on signal measurement, and indoor navigation based on sensors. Most commercially deployable systems adopt the sensor-based approach: a commercial company customizes a base-station deployment scheme for a specific building and installs the base stations to achieve accurate positioning and navigation inside the building. Although this approach is mature, its high cost, huge engineering workload and high maintenance cost restrict it to positioning goods or staff in large factories, and for the same reasons it cannot be popularized in public places such as shopping malls, commercial institutions or hospitals.
A search of the prior art at home and abroad finds the following:
Indoor navigation technology is usually built on an indoor positioning technology, and specific navigation types can be classified by the type of information source. Positioning methods based on various information sources, including WiFi signals (see [2], WiFi indoor positioning algorithm design and application research based on sensor data fusion [D]. Zhejiang University, 2016), Bluetooth signals and electromagnetic-wave signals, have already produced research results. The basic idea is to install signal transmitting devices in advance and obtain position information through reception and return by the terminal equipment. Such positioning technologies depend to a great extent on the anti-interference capability and reliability of the information source. Since base stations (i.e., signal transmitting devices) are indispensable, compressing the cost of these positioning and navigation technologies is particularly difficult: reliable signals require high-end, precise signal transmitting devices, and the power supply of the source devices further increases the engineering load of building the positioning system.
The iBeacon indoor navigation technology developed by Apple has been successful and commercially operated. iBeacon is also a base-station-based positioning and navigation technology: it relies mainly on low-power Bluetooth signal transmission, a large number of iBeacon signal transmitters are arranged in the target building, and the user's mobile device receives and processes the related signals, calculates the user's position, and finally realizes accurate indoor guidance. However, popularizing the iBeacon technology has the following disadvantages. First, arranging the iBeacon base stations requires a certain amount of engineering work, the base stations are battery powered, and subsequent maintenance is troublesome. Second, since the protocol is open, iBeacon devices are easily forged to steal data from the user's mobile phone, so the security of iBeacon devices is questionable. Nevertheless, thanks to Apple's influence and technical support and the technical advantages of iBeacon, it has been adapted and applied abroad to some extent, and at least thirty superstores and airports have adopted iBeacon as an indoor navigation solution (see Weixiangshen, Application of iBeacon in an indoor positioning system based on a mini BM70/1 Bluetooth module [J]. Electronic Product World, 2019, 26(02): 18-19). However, the above technical shortcomings still limit the development of iBeacon.
At present, with the development of various other indoor navigation and positioning technologies, a number of companies dedicated to customizing indoor navigation schemes are also developing their own indoor navigation systems. A representative example is the combination of TC-OFDM + BDS used by Fuxi and BeiDou (see Zhao Xue, Han Litao, Zheng Ying, Zhang Yan, Wu Jia Yi, Review of indoor navigation model research [J]. Software Guide, 2016, 15(05): 1-3), which mainly applies the satellite system to indoor positioning. However, this technology mainly uses the satellite navigation system and outdoor base stations to build dedicated navigation systems for medium and large projects, so its implementation cost is correspondingly high: besides base-station construction, the right to use the satellite navigation system must also be purchased.
In addition, navigation completed with the inertial sensors of a smartphone, without depending on any external information source, handles the problem of precision versus cost well and has a low adaptation difficulty. However, owing to the characteristics of inertial navigation systems (see Liupeng et al., A graph method for observability analysis and observable state determination of inertial navigation systems [J/OL]. Control Theory and Applications: 1-9 [2019-03-27]), their error grows gradually as the navigation time increases, so this navigation technology is mainly applied in places such as parking lots and can hardly achieve the required stability in truly indoor environments.
Beyond these two representative enterprises, diverse commercial indoor navigation technologies based on UWB, WiFi, LTE, BLE (see Opportunistic routing protocol optimization in a BLE Mesh network [J]. Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition), 2018, 38(06): 90-95) and RFID (see Schachka Lin, Indoor navigation of an autonomous mobile robot based on an ultra-high-frequency RFID hybrid positioning algorithm [D]. Huazhong University of Science and Technology, 2017) have produced a large number of indoor navigation solutions. Although a fairly mature industry has formed, it remains limited by the problems of cost and navigation accuracy, and the target application scenarios of the technologies developed or systems formed so far are still quite narrow.
To sum up, current commercial navigation technologies basically rely on existing typical indoor positioning technologies, combine different positioning technologies to realize navigation, and apply certain correction algorithms to improve user experience. However, they do not fundamentally remove the main obstacle to popularizing indoor navigation: the conflict between navigation precision and cost. Because cost and precision cannot both be satisfied, the application scenarios of indoor navigation are severely limited, so that even while outdoor navigation flourishes and large buildings keep rising, there is still no mature indoor navigation application today. How to achieve higher precision at lower cost has become the main obstacle to the wide application of indoor navigation. Current research on indoor navigation is somewhat rigid in its thinking: positioning and navigation are bound too tightly, which limits the development of navigation technology. Adaptation cost and navigation precision have become major constraints on popularizing indoor navigation technology, and how to resolve this contradiction is an urgent problem in the field.
At present, no description or report of a technology similar to the present invention has been found, and no similar data have been collected at home or abroad.
Disclosure of Invention
Aiming at the above defects in the prior art, the invention provides an indoor navigation method and system based on a video live-action navigation technology. The method and system use first-person-view video to guide the user through indoor navigation: the user only needs to walk along the video view returned by the system to reach the destination. This overcomes the conflict between navigation precision and cost in the prior art, realizes information-source-free, high-precision and low-cost indoor navigation, and lays a corresponding technical foundation for the popularization of indoor navigation.
The invention is realized by the following technical scheme.
According to one aspect of the present invention, there is provided an indoor navigation method based on a video live-action navigation technology, including:
at a server side:
S1, modeling the building needing navigation adaptation;
S2, dividing the internal paths of the building on the basis of the model established in S1 to obtain path division nodes;
S3, acquiring first visual angle video information of the real building paths corresponding to the paths divided in S2, to obtain real path nodes matched with the path division nodes;
S4, including any one or more of the following steps:
labeling each path node obtained in S3, and removing interfering elements and highlighting landmark elements in the building real-scene path video;
removing interfering elements and highlighting landmark elements in the building real-scene path video obtained in S3, and extracting path node pictures from all path nodes at a set time interval to form a picture library; preferably, the set time interval is 0.1 s; further preferably, the information contained in each path node picture includes: the path node to which the picture belongs and the time at which the picture appears in that path node.
S5, on the basis of S4, pairwise combination is carried out on all path nodes of the building, and all possible path node combination pairs are obtained; and calculating the shortest path corresponding to all the path node combination pairs to form shortest path video data represented by the path nodes, wherein the shortest path video data is indoor navigation video data.
Preferably, in S2, the method for dividing the building internal path and obtaining the path dividing node includes:
taking all places in the building that a user can select as navigation destinations as references, treating the path between two adjacent possible destinations as a basic path division node, so that the path between any two places in the building can be composed of one or more basic path division nodes.
Preferably, in S4, Premiere (Pr) video editing is adopted to remove the interfering elements in the building real-scene path video; and/or
the landmark elements are highlighted by a rendering method.
Preferably, in S5, a bidirectional A* algorithm is used to calculate the shortest path corresponding to each path node combination pair; wherein:
the bidirectional A* algorithm comprises:
taking one path node in the path node combination pair as a starting point and the other path node as an end point;
starting two threads that execute two search processes in opposite directions, wherein one thread searches from the path node serving as the starting point towards the path node serving as the end point, and the other thread searches from the path node serving as the end point towards the path node serving as the starting point;
when the two threads reach the same path node, the shortest path corresponding to the path node combination pair is determined.
Preferably, the method further comprises:
at a user end:
s1, the user obtains the position information as the starting point information according to the available path node label or path node picture, and selects the destination as the end point information through the user interactive terminal;
s2, the user interaction terminal requests the server for the shortest path video data matched with the start point information and the end point information according to the start point information and the end point information, and plays the video data.
Preferably, in s1, the method for the user to obtain the location information as the start point information according to the available path node labels is as follows: and setting the path node label as a two-dimensional code form, and scanning the two-dimensional code by a user to acquire the position information of the user.
Preferably, in s1, the method for the user to obtain the location information as the start point information according to the available route node map includes: the user takes a picture of a path before the user and uploads the picture, the server side carries out pixel level comparison with the path node pictures one by one according to the picture uploaded by the user, when the similarity of the picture and a certain path node picture reaches a set threshold value, the picture uploaded by the user and the path node picture are judged to be the same, and then the position information is obtained according to the path node picture.
Preferably, in s2, before playing the shortest path video data, the method further includes: the user is guided to adjust the initial direction of navigation.
Preferably, the method further comprises:
at a user end:
s3, the user interaction terminal feeds back the evaluation result of the shortest path video data by the user to the server side for the server side to optimize the shortest path video data; wherein:
the optimization method comprises the following steps: and recalculating the shortest path corresponding to the path node combination pair by adopting a bidirectional A-x algorithm.
According to a second aspect of the present invention, there is provided an indoor navigation system based on video live-action navigation technology, comprising:
-on the server side:
a building modeling module: the building modeling module carries out CAD modeling on the building needing navigation adaptation;
a path division module: the path division module divides the internal paths of the building on the basis of the model established by the building modeling module to obtain path division nodes;
a path node acquisition module: the path node acquisition module acquires first visual angle video information of the real building paths corresponding to the paths divided by the path division module, to obtain real path nodes matched with the path division nodes;
the video processing module: comprising a video processing unit for:
labeling each path node obtained by the path node acquisition module, removing interfering elements in the building real-scene path video and highlighting landmark elements; and/or
removing interfering elements in the building real-scene path video obtained by the path node acquisition module, highlighting the landmark elements, and extracting path node pictures from all path nodes at a set time interval to form a picture library;
a path calculation module: the path calculation module combines every two path nodes of the building on the basis of the video processed by the video processing module to obtain all possible path node combination pairs; calculating the shortest path corresponding to all path node combination pairs to form shortest path video data represented by path nodes;
a database module: the database module is used for storing the shortest path video data obtained in the path calculation module and/or a picture library obtained in the video processing module;
-at the user end:
a path request module: the path request module acquires position information of a user as starting point information according to the available path node label or path node picture, and acquires starting point and end point information of a path according to the selected destination data as end point information;
a navigation module: and the navigation module calls the shortest path video data matched with the path starting point and end point information from a database module at the server end according to the path starting point and end point information obtained by the path request module.
Preferably, the video processing module further comprises: a video compression unit which compresses the building real-scene path video processed by the video processing unit.
Preferably, in the path request module, the path node labels are obtained from position identifiers which are arranged inside the building, correspond to the path node positions, and correspond to the path node labels one to one.
Preferably, the path request module further includes: and the picture matching unit is used for comparing the picture with the path node pictures one by one according to the pre-route picture uploaded by the user, judging that the picture uploaded by the user and the path node picture are the same when the similarity of the pre-route picture and the path node picture reaches a set threshold value, and acquiring the position information according to the path node picture.
Preferably, the navigation module further comprises: and obtaining an evaluation result of the user on the shortest path video data, and sending the evaluation result to a path calculation module for path optimization.
Preferably, the position identifier is set in the form of a two-dimensional code or a program code; accordingly, the path request module includes a two-dimensional code or program code scanning unit.
Preferably, the path request module further includes an image obtaining unit for obtaining a picture of the route before the user.
Compared with the prior art, the invention has the following beneficial effects:
1. According to the indoor navigation method and system based on the video live-action navigation technology, the video live-action navigation technology is independently researched and applied, an indoor navigation scheme with low cost and high precision is built, the contradiction that cost and precision cannot both be satisfied in the prior art is resolved, the vacancy in the prior art is filled, and a technical foundation is laid for the popularization of indoor navigation.
2. According to the indoor navigation method and system based on the video live-action navigation technology, the A* path-search algorithm is optimized and a bidirectional A* algorithm is proposed, so that the A* algorithm is well adapted and applied in indoor navigation, the algorithm performance and practical behaviour are improved, and the actual adaptation process and workload of the video live-action navigation technology are simplified.
3. The indoor navigation method and system based on the video live-action navigation technology have profound significance for the commercialization and popularization of indoor navigation: they fundamentally remove the serious obstacle that the cost and precision of traditional navigation systems cannot be reconciled, provide a brand-new idea for indoor navigation schemes, and have great application value and practical significance.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a flowchart of an indoor navigation method based on a video live-action navigation technique according to an embodiment of the present invention;
fig. 2 is a schematic diagram of capturing a first visual angle video of a real-scene path node according to an embodiment of the present invention;
fig. 3 is a schematic diagram of data flow of an indoor navigation system based on video live-action navigation technology according to an embodiment of the present invention;
fig. 4 shows operation results of the bidirectional A* algorithm provided in the embodiment of the present invention on a given grid map, calculated with the 8-neighborhood Manhattan distance, 4-neighborhood Manhattan distance, 8-neighborhood Euclidean distance and 4-neighborhood Euclidean distance, respectively; wherein (a) is the result with the 8-neighborhood Manhattan distance, (b) with the 4-neighborhood Manhattan distance, (c) with the 8-neighborhood Euclidean distance, and (d) with the 4-neighborhood Euclidean distance.
Detailed Description
The following examples illustrate the invention in detail: the embodiment is implemented on the premise of the technical scheme of the invention, and a detailed implementation mode and a specific operation process are given. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention.
The embodiment of the invention provides an indoor navigation method and system based on a video live-action navigation technology, in which a first-person-view video guides the user through indoor navigation: the user only needs to follow the returned video view to reach the destination. Information-source-free, high-precision and low-cost indoor navigation is thereby realized, laying a corresponding technical foundation for the popularization of indoor navigation.
The technical solutions provided by the embodiments of the present invention are further described in detail below with reference to the accompanying drawings.
As shown in fig. 1, an indoor navigation method based on a video live-action navigation technology provided by an embodiment of the present invention includes, at a server side, the following steps:
step S1, performing CAD modeling on the building needing navigation adaptation;
step S2, dividing the internal path of the building on the basis of the model established in the step S1 to obtain path division nodes; (wherein a path division node represents a non-physical node in the division scheme of the path);
step S3, according to the path divided in step S2, obtaining the first visual angle video information of the building real path corresponding to the path, and obtaining the real path node matched with the path dividing node; the first visual angle video information of the building on-site path is obtained by shooting a pre-divided path by utilizing the first visual angle of the camera on the site; (wherein, the path node represents a physical node in the collected video information);
step S4, including any one or more of the following steps:
uniformly labeling each path node obtained in step S3 so that it corresponds to an actual path in the building; removing interfering elements (such as pedestrians) by Premiere (Pr) editing; highlighting the landmark elements (e.g., specific store signs, road signs, etc.);
removing interfering elements (such as pedestrians) from the building real-scene path video obtained in step S3, highlighting landmark elements (such as specific shop signs, road signs, etc.), and extracting path node pictures from all path nodes at a set time interval to form a picture library;
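The patent does not disclose code for building this picture library; purely as an illustration, the following minimal Python sketch (OpenCV is assumed to be available, and the file layout, node labels and the 0.1 s interval are assumptions drawn from the description above) samples frames at a fixed interval and records, for each extracted picture, the path node it belongs to and the time at which it appears in that node:

```python
# Illustrative sketch only -- file layout, names and the 0.1 s interval are
# assumptions based on the description, not the patent's actual implementation.
import os
import cv2  # OpenCV

def build_picture_library(node_videos, out_dir, interval_s=0.1):
    """Extract frames every `interval_s` seconds from each path-node video.

    node_videos: dict mapping a path-node label to its first-person video file.
    Returns a list of records (node label, time in seconds, picture file path),
    i.e. the metadata the description says each picture should carry.
    """
    os.makedirs(out_dir, exist_ok=True)
    library = []
    for node_label, video_path in node_videos.items():
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0      # fall back if FPS is unknown
        step = max(1, int(round(fps * interval_s)))  # frames between samples
        frame_idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_idx % step == 0:
                t = frame_idx / fps
                pic_path = os.path.join(out_dir, f"{node_label}_{t:.1f}s.jpg")
                cv2.imwrite(pic_path, frame)
                library.append({"node": node_label, "time_s": t, "picture": pic_path})
            frame_idx += 1
        cap.release()
    return library
```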
step S5, on the basis of the video data obtained in step S4, pairwise combination is carried out on all path nodes of the building to obtain all possible path node combination pairs; and calculating the shortest path corresponding to all the path node combination pairs to form shortest path video data represented by the path nodes, wherein the shortest path video data is indoor navigation video data.
Further, at the user side, the method further comprises:
s1, the user obtains the position information as the starting point information according to the available path node label or path node picture, and selects the destination as the end point information through the user interactive terminal;
s2, the user interaction terminal requests the server for the shortest path video data matched with the start point information and the end point information according to the start point information and the end point information, and plays the video data.
And the user walks along the visual angle of the video data with the shortest path to finish navigation.
Further, in s2, before playing the shortest path video data, the method further includes: the user is guided to adjust the initial direction of navigation.
Further, at the user side, the method further comprises:
the user interaction terminal feeds back the user's evaluation of the shortest path video data to the server side, for the server side to use in optimizing the shortest path video data; the optimization method specifically comprises: recalculating the shortest path corresponding to the path node combination pair by using the bidirectional A* algorithm.
In the embodiment of the invention:
the path division node is essentially a division of the path inside the building, and any path inside the building can be represented by one or more existing path node combinations.
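The patent leaves the concrete data structure open; as a minimal sketch under the assumption that each basic path division node is stored as a labelled edge between the two adjacent destinations it connects (the labels and place names below are hypothetical), any route inside the building is then simply a list of such node labels:

```python
# Hypothetical node labels (N1, N2, ...) and place names, for illustration only.
from collections import defaultdict

# Each basic path division node connects two adjacent possible destinations.
segments = {
    "N1": ("entrance", "atrium"),
    "N2": ("atrium", "shop_A"),
    "N3": ("atrium", "elevator"),
}

# Undirected graph: place -> list of (neighbouring place, path-node label).
graph = defaultdict(list)
for label, (a, b) in segments.items():
    graph[a].append((b, label))
    graph[b].append((a, label))

# The route entrance -> shop_A is represented by the combination ["N1", "N2"].
```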
Path nodes corresponding to the path division nodes are shot from the first-person view. The path nodes match the path division nodes, and the shooting process is shown in fig. 2: the video records what the photographer sees, so the user can follow the video to retrace the photographer's journey, with the video view acting as a virtual guide.
The captured video information of the building real-scene paths needs to be edited and modified: interfering elements (such as pedestrians) in the video are cut out, landmark elements (such as key shop signs) are highlighted by rendering, and finally the video can be compressed, while preserving its recognizability, to speed up transmission.
The embodiment of the invention adopts a strategy of trading space for time: all possible paths inside the building are calculated in advance, and the path node sequence numbers or path node pictures corresponding to these paths are stored in a database; when a user sends a path request, the user's requirements (starting point and end point information) only need to be matched against the pre-calculated paths, and the corresponding path node combination is retrieved.
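The storage format is not specified in the patent; the sketch below illustrates the precompute-then-look-up idea with an in-memory table over an assumed small graph of path division nodes with unit segment costs (a real deployment would persist the table in the database module and return the associated videos rather than labels):

```python
# Illustration of the "trade space for time" strategy; graph contents, labels
# and unit costs are assumptions, not the patent's data.
import itertools
import heapq

graph = {                      # place -> {neighbouring place: segment label}
    "entrance": {"atrium": "N1"},
    "atrium":   {"entrance": "N1", "shop_A": "N2", "elevator": "N3"},
    "shop_A":   {"atrium": "N2"},
    "elevator": {"atrium": "N3"},
}

def shortest_segments(start, goal):
    """Uniform-cost search returning the segment labels of the shortest path."""
    frontier = [(0, start, [])]
    seen = set()
    while frontier:
        cost, place, segs = heapq.heappop(frontier)
        if place == goal:
            return segs
        if place in seen:
            continue
        seen.add(place)
        for nxt, label in graph[place].items():
            if nxt not in seen:
                heapq.heappush(frontier, (cost + 1, nxt, segs + [label]))
    return None

# Precompute every pairwise combination of destinations once, offline.
route_table = {
    (a, b): shortest_segments(a, b)
    for a, b in itertools.combinations(graph, 2)
}

def lookup(start, end):
    """On a user request, just match start/end against the precomputed table."""
    return route_table.get((start, end)) or route_table.get((end, start))

print(lookup("entrance", "shop_A"))   # -> ['N1', 'N2']
```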
In the embodiment of the invention, the user can easily acquire the current position information as the starting point through position identifiers which are placed at the path node positions and correspond to the path node labels one to one; or the picture of the route ahead uploaded by the user is compared with the path node pictures one by one at the pixel level, and when the similarity with a certain path node picture reaches the set threshold, the uploaded picture and that path node picture are judged to be the same, and the position information obtained from that path node picture is used as the starting point.
In the embodiment of the present invention, the algorithm for pixel-level comparison with the path node pictures one by one adopts the Sequential Similarity Detection Algorithm (SSDA). All pictures in the path node picture library are used in turn as search pictures, and the picture uploaded by the user is used as the template picture for searching; when the user's picture is successfully matched within a certain picture, the user is considered to be currently located at the position corresponding to that picture. If the user's picture can be matched in several pictures, the closest one (i.e., the picture with the lowest accumulated error) is selected as the ideal match; if no picture meets the condition, the user is prompted to upload again.
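As an illustration of this matching step only (the patent does not publish its implementation), the following sketch compares a user photo against same-sized grayscale library pictures by accumulated absolute pixel error, abandoning a candidate early once the running error exceeds the threshold, which is the core idea of SSDA; the threshold value, chunk size and equal image sizes are assumptions:

```python
# Simplified SSDA-style matcher; threshold, image size and grayscale conversion
# are illustrative assumptions.
import numpy as np

def ssda_match(user_img, library, threshold):
    """Return the (node, time_s) record of the best library picture, or None.

    user_img: 2-D uint8 array (grayscale photo of the route ahead).
    library:  list of dicts {"node": ..., "time_s": ..., "pixels": 2-D array}.
    """
    best = None
    best_err = threshold
    flat_user = user_img.astype(np.int64).ravel()
    for rec in library:
        flat_ref = rec["pixels"].astype(np.int64).ravel()
        err = 0
        # Accumulate error chunk by chunk and abandon early (sequential test).
        for start in range(0, flat_user.size, 4096):
            chunk = np.abs(flat_user[start:start + 4096] - flat_ref[start:start + 4096])
            err += int(chunk.sum())
            if err >= best_err:        # already worse than the best candidate
                break
        else:
            best, best_err = rec, err  # finished below the current threshold
    return best                        # None means: prompt the user to re-upload
```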
In the embodiment of the invention, the threshold is set according to the specific terrain of a building, the video definition during implementation and the picture pixel size.
The position identifier can adopt a two-dimensional code or an applet code, which contains a command to open the user interaction terminal and the position information of the code; that is, when the user scans the code, the application is opened and the position at which the code was scanned is acquired. In addition, before the shortest path video data is played, the method further comprises guiding the user to adjust the initial navigation direction, specifically: a picture is loaded to guide the user to turn and adjust his or her orientation, and this picture is used as the initial direction of navigation. When the server side acquires the user's starting point information, only the user's position is known; if the video were played directly, the user might not know in which direction to take the first step. The loaded picture guides the user to rotate in place until the path ahead matches the loaded picture, at which point the direction the user faces is the initial direction of navigation.
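The patent does not define the payload format of the two-dimensional code; purely as an illustration, one could encode the applet path together with the node label and resolve the start position server-side from that label (the URL scheme, the "node" parameter and the label values below are hypothetical):

```python
# Hypothetical QR payload handling -- the scheme "weixin://...", the "node"
# parameter and labels like "N2" are made up for illustration.
from urllib.parse import urlparse, parse_qs

node_positions = {"N2": {"floor": 1, "description": "corridor between atrium and shop_A"}}

def resolve_start(qr_payload):
    """Extract the path-node label from a scanned code and look up its position."""
    query = parse_qs(urlparse(qr_payload).query)
    label = query.get("node", [None])[0]
    return label, node_positions.get(label)

print(resolve_start("weixin://miniprogram/navigate?node=N2"))
```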
When the user needs navigation, he or she scans the two-dimensional code, which opens the application and acquires the user's position. Because each position code has a unique label and position information, the server receives the user's destination, retrieves the path node combination from the starting point to the destination, and sends it to the user side to begin guiding the user. Alternatively, the user photographs the route ahead and uploads the picture; the server side compares the uploaded picture with the path node pictures one by one at the pixel level, and when the similarity with a certain path node picture reaches the set threshold, the uploaded picture and that path node picture are judged to be the same, position information is obtained from that path node picture, the server receives the user's destination, retrieves the path node combination from the starting point to the destination, and sends it to the user side to guide the user's route.
And the user walks along the video visual angle formed by the video node combination to finish navigation.
In the embodiment of the invention, a bidirectional A* algorithm is used to calculate the shortest path corresponding to each path node combination pair. The bidirectional A* algorithm adopted here differs from the traditional A* algorithm: the traditional A* algorithm expands the search from the starting point towards the end point and generates the shortest path in the process, whereas the bidirectional A* algorithm searches from both directions simultaneously. One path node of the combination pair is taken as the starting point and the other as the end point, and the algorithm starts two threads (i.e., two search routes), one extending from the starting point towards the end point and the other from the end point towards the starting point. When the two search routes converge at the same position, the shortest path is obtained.
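The patent gives no source code for its bidirectional A*; the sketch below is a minimal single-process rendering of the idea on a 4-neighborhood grid with the Manhattan heuristic, expanding the two frontiers alternately instead of in real threads and stopping as soon as the frontiers meet, as the description above states (a production implementation would add stricter termination bounds and thread handling):

```python
# Minimal illustration of the bidirectional search idea on a grid map.
# Grid contents and the simple "stop when frontiers meet" rule follow the
# description above; they are not the patent's actual implementation.
import heapq

def bidirectional_astar(grid, start, goal):
    """grid: 2-D list of 0 (free) and 1 (obstacle); start, goal: (row, col) tuples."""
    def h(a, b):  # Manhattan heuristic, matching the 4-neighborhood expansion below
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def neighbors(p):
        r, c = p
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                yield (nr, nc)

    def unwind(parents, node):
        out = []
        while node is not None:
            out.append(node)
            node = parents[node]
        return out

    # One search state per direction: open list, cost-so-far, parent links, target.
    fwd = {"open": [(h(start, goal), 0, start)], "g": {start: 0}, "par": {start: None}, "aim": goal}
    bwd = {"open": [(h(goal, start), 0, goal)], "g": {goal: 0}, "par": {goal: None}, "aim": start}

    while fwd["open"] and bwd["open"]:
        for side, other in ((fwd, bwd), (bwd, fwd)):   # expand the two frontiers alternately
            _, g, cur = heapq.heappop(side["open"])
            if cur in other["g"]:                      # the frontiers have met at `cur`
                return unwind(fwd["par"], cur)[::-1] + unwind(bwd["par"], cur)[1:]
            for nxt in neighbors(cur):
                ng = g + 1
                if ng < side["g"].get(nxt, float("inf")):
                    side["g"][nxt] = ng
                    side["par"][nxt] = cur
                    heapq.heappush(side["open"], (ng + h(nxt, side["aim"]), ng, nxt))
    return None

demo_grid = [[0, 0, 0, 0],
             [1, 1, 0, 1],
             [0, 0, 0, 0]]
print(bidirectional_astar(demo_grid, (0, 0), (2, 0)))
```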
Compared with the conventional A* algorithm, the bidirectional A* algorithm adopted in the embodiment of the present invention doubles the operation speed; an ideal operation process is shown in fig. 4. In addition, the conventional A* algorithm can often obtain only one fixed solution when solving a complex path problem, whereas the bidirectional A* algorithm can obtain several different optimal solutions that actually exist in some path problems. In fig. 4, (a), (b), (c) and (d) are, in order, the operation results of the bidirectional A* algorithm with the 8-neighborhood Manhattan distance, 4-neighborhood Manhattan distance, 8-neighborhood Euclidean distance and 4-neighborhood Euclidean distance; in the figure, black solid dots mark the starting point and end point, the solid black line connecting the two dots is the planned route, and the remaining oblique solid black lines indicate the grid cells traversed during planning.
Table 1 compares the performance of the bidirectional A* algorithm provided in the embodiment of the present invention with that of the conventional A* algorithm, where M denotes the Manhattan distance and E denotes the Euclidean distance; all algorithms were run on the same grid map. Table 1 shows the contrast between the conventional A* algorithm and the bidirectional A* algorithm on a specific path problem.
TABLE 1
The user interaction terminal adopted in the embodiment of the invention can take the form of a WeChat applet or of an APP application.
Wherein:
and (3) small program development: development can be performed based on the JavaScript language using WeChat _ devtools under the Windows10 professional edition operating system.
Android APP application development: can be developed based on Java language by using android studio under a Windows10 professional edition operating system.
Establishing a server side: apache and MySQL can be installed under a CentOS system, an LAMP environment is built, and development is carried out by using a php language.
The embodiment of the invention also provides an indoor navigation system based on the video live-action navigation technology, which can be used for implementing the method, and the data flow diagram of the system is shown in fig. 3.
The system comprises:
-on the server side:
a building modeling module: performing CAD modeling on a building needing navigation adaptation;
a path division module: dividing the internal path of the building on the basis of the model established by the building modeling module to obtain path division nodes;
a path node acquisition module: according to the paths divided by the path division module, acquiring first visual angle video information of the corresponding real building paths to obtain real path nodes matched with the path division nodes;
the video processing module: labeling each path node obtained by the path node acquisition module, removing interfering elements in the building real-scene path video, and highlighting the landmark elements; and/or
removing interfering elements in the building real-scene path video obtained by the path node acquisition module, highlighting the landmark elements, and extracting path node pictures from all path nodes at a set time interval to form a picture library;
a path calculation module: on the basis of the video processed by the video processing module, pairwise combination is carried out on all path nodes of the building, and all possible path node combination pairs are obtained; calculating the shortest path corresponding to all path node combination pairs to form shortest path video data represented by path nodes;
a database module: used for storing the shortest path video data obtained by the path calculation module and/or the picture library obtained by the video processing module;
-at the user end:
a path request module: acquiring the position information of the user as starting point information according to the available position identifier (path node label) or path node picture, and taking the selected destination data as end point information, so as to obtain the path starting point and end point information;
a navigation module: and calling the shortest path video data matched with the path starting point and end point information from a database module of the server side according to the path starting point and end point information obtained by the path request module.
Further, the video processing module further includes: a video compression unit for compressing the building real-scene path video after the interfering elements have been removed and the landmark elements highlighted.
Further, in the path request module, the path node labels are obtained from the position identifiers which are arranged inside the building, correspond to the path node positions, and correspond to the path node labels one to one.
Further, the path request module further includes: and the picture matching unit is used for comparing the picture with the path node pictures one by one according to the pre-route picture uploaded by the user, judging that the picture uploaded by the user and the path node picture are the same when the similarity of the pre-route picture and the path node picture reaches a set threshold value, and acquiring the position information according to the path node picture.
Further, the position mark is set in the form of a two-dimensional code or a program code.
Accordingly, the path request module includes a two-dimensional code or program code scanning unit.
Further, the path request module further comprises an image acquisition unit for acquiring the picture of the route before the user.
The indoor navigation method and system based on the video live-action navigation technology provided by the embodiment of the invention constitute a brand-new indoor video live-action navigation technology. Unlike traditional navigation methods, it achieves low dependence on external information sources and device performance, and is a high-precision, low-cost video live-action navigation technology. When a user sends a navigation request to the server, the server returns a pre-recorded first-person-view path video to guide the user through navigation, and the user reaches the destination simply by following the video view. During development, building path division, shortest-path calculation between locations in three-dimensional space, and the handling of special navigation routes were each studied and optimized in depth, further improving the reliability of the navigation system.
The indoor navigation method and system based on the video live-action navigation technology provided by the embodiment of the invention adopt the bidirectional A* algorithm, realize the organic combination of the algorithm, the video live-action navigation technology and indoor path planning, and develop an optimized indoor navigation technology that is well adapted to the video live-action navigation technology.
According to the indoor navigation method and system based on the video live-action navigation technology provided by the embodiment of the invention, video editing software such as Premiere (Pr) is used to edit the video nodes, remove interfering information and add necessary prominent prompts, which improves the user experience in practical applications; an efficient video compression technology compresses the video while preserving its recognizability, ensuring the video transmission speed (fluency of the navigation video) and limiting the user's data consumption (user economy).
The indoor navigation method and system based on the video live-action navigation technology provided by the embodiment of the invention adopt the optimized bidirectional A* search algorithm, realize good adaptation and application of the A* search algorithm in indoor navigation, improve the performance and practical behaviour of the A* search algorithm, and simplify the actual adaptation process and workload of the video live-action navigation technology.
The indoor navigation method and system based on the video live-action navigation technology provided by the embodiment of the invention provide two schemes for obtaining the path: in one, the user terminal scans the two-dimensional code to obtain the starting point information, and fuzzy search matching of the path is completed according to the destination information input by the user; in the other, the user photographs the route ahead and uploads the picture, the user's position is doubly confirmed as the starting point information through image recognition matching together with the recognition result confirmed by the user, and accurate search matching of the path is completed according to the end point information input by the user.
According to the indoor navigation method and system based on the video live-action navigation technology provided by the embodiment of the invention, path nodes are obtained through modeling, and matched real-scene path video nodes are obtained through first visual angle video shooting; on the basis of processing and compressing the video node videos, all video nodes are combined pairwise to form video node combination pairs; and the shortest path corresponding to each video node combination pair is calculated to form shortest path video data represented by the video nodes, namely the indoor navigation video data. The user acquires position information as starting point information according to the video node label, selects a destination as end point information through the user interaction terminal, and then requests the shortest path video data matched with the starting point information and the end point information. The indoor navigation method and system based on the video live-action navigation technology provided by the embodiment of the invention overcome the conflict between navigation precision and cost in the prior art, and realize information-source-free, high-precision, low-cost indoor navigation.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.
Claims (10)
1. An indoor navigation method based on a video live-action navigation technology is characterized by comprising the following steps:
at a server side:
S1, modeling the building needing navigation adaptation;
S2, dividing the internal paths of the building on the basis of the model established in S1 to obtain path division nodes;
S3, acquiring first visual angle video information of the real building paths corresponding to the paths divided in S2, to obtain real path nodes matched with the path division nodes;
S4, removing the interfering elements in the building real-scene path video and highlighting the landmark elements, further comprising any one or more of the following steps:
-label setting each path node;
-extracting path node pictures from all path nodes at set time intervals to form a picture library;
S5, on the basis of S4, pairwise combination is carried out on all path nodes of the building, and all possible path node combination pairs are obtained; and calculating the shortest path corresponding to all the path node combination pairs to form shortest path video data represented by the path nodes, wherein the shortest path video data is indoor navigation video data.
2. The indoor navigation method based on video live-action navigation technology as claimed in claim 1, wherein in S2, the method for dividing the building internal path and obtaining the path division node comprises:
taking all places in the building that a user can select as navigation destinations as references, treating the path between two adjacent possible destinations as a basic path division node, so that the path between any two places in the building can be composed of one or more basic path division nodes.
3. The indoor navigation method based on the video live-action navigation technology as claimed in claim 1, wherein in the step S4, a Premiere (Pr) video editing method is adopted to remove the interfering elements in the building real-scene path video; and/or
the landmark elements are highlighted by a rendering method.
4. The indoor navigation method based on the video live-action navigation technology of claim 1, wherein in the step S5, a bidirectional A* algorithm is adopted to calculate the shortest path corresponding to the path node combination pair; wherein:
the bidirectional A* algorithm comprises:
taking one path node in the path node combination pair as a starting point and the other path node as an end point;
starting two threads that execute two search processes in opposite directions, wherein one thread searches from the path node serving as the starting point towards the path node serving as the end point, and the other thread searches from the path node serving as the end point towards the path node serving as the starting point;
when the two threads reach the same path node, the shortest path corresponding to the path node combination pair is determined.
5. The indoor navigation method based on the video live-action navigation technology according to any one of claims 1 to 4, characterized in that the method further comprises:
at a user end:
s1, the user obtains the position information as the starting point information according to the available path node label or path node picture, and selects the destination as the end point information through the user interactive terminal;
s2, the user interaction terminal requests the server for the shortest path video data matched with the start point information and the end point information according to the start point information and the end point information, and plays the video data.
6. The indoor navigation method based on video live-action navigation technology as claimed in claim 5, wherein in s1, the method for the user to obtain the location information as the starting point information according to the available path node labels is as follows: the path node labels are set in the form of two-dimensional codes, and the user scans the two-dimensional code to acquire the position information of the user; the method for the user to obtain the position information as the starting point information according to the available path node picture comprises the following steps: the user takes a picture of the route ahead and uploads it, the server side compares the uploaded picture with the path node pictures one by one at the pixel level, and when the similarity between the uploaded picture and a certain path node picture reaches a set threshold value, the uploaded picture and that path node picture are judged to be the same, and the position information is then obtained according to that path node picture;
and/or
In s2, before playing the shortest path video data, the method further includes: the user is guided to adjust the initial direction of navigation.
7. The indoor navigation method based on the video live-action navigation technology according to claim 5, further comprising:
at a user end:
s3, the user interaction terminal feeds back the evaluation result of the shortest path video data by the user to the server side for the server side to optimize the shortest path video data; wherein:
the optimization method comprises the following steps: And recalculating the shortest path corresponding to the path node combination pair by adopting a bidirectional A* algorithm.
8. An indoor navigation system based on video live-action navigation technology is characterized by comprising:
-on the server side:
a building modeling module: the building modeling module carries out CAD modeling on the building needing navigation adaptation;
a path division module: the path division module divides the internal paths of the building on the basis of the model established by the building modeling module to obtain path division nodes;
a path node acquisition module: the path node acquisition module acquires first visual angle video information of the real building paths corresponding to the paths divided by the path division module, to obtain real path nodes matched with the path division nodes;
the video processing module: the video processing unit is used for removing interfering elements in the building real-scene path video obtained by the path node acquisition module and highlighting landmark elements; and is also used for:
setting labels of all the path nodes obtained by the path node acquisition module; and/or
Extracting path node pictures from all path nodes at set time intervals to form a picture library;
a path calculation module: the path calculation module combines every two path nodes of the building on the basis of the video processed by the video processing module to obtain all possible path node combination pairs; calculating the shortest path corresponding to all path node combination pairs to form shortest path video data represented by path nodes;
a database module: the database module is used for storing the shortest path video data obtained in the path calculation module and/or a picture library obtained in the video processing module;
-at the user end:
a path request module: the path request module acquires position information of a user as starting point information according to the available path node label or path node picture, and acquires starting point and end point information of a path according to the selected destination data as end point information;
a navigation module: and the navigation module calls the shortest path video data matched with the path starting point and end point information from a database module at the server end according to the path starting point and end point information obtained by the path request module.
9. The indoor navigation system based on the video live-action navigation technology according to claim 8, further comprising any one or more of the following:
-the video processing module further comprises: the video compression unit compresses the building real-scene path video processed by the video processing unit;
in the path request module, the path node labels are obtained by the position identifiers which are arranged inside the building and correspond to the positions of the path nodes, and correspond to the path node labels one by one;
-the path request module further comprises: the image matching unit compares the pre-route image uploaded by the user with route node images one by one at a pixel level, judges that the image uploaded by the user and the route node image are the same when the similarity of the pre-route image and the route node image reaches a set threshold value, and further acquires the position information according to the route node image;
-the navigation module further comprises: and obtaining an evaluation result of the user on the shortest path video data, and sending the evaluation result to a path calculation module for path optimization.
10. The indoor navigation system based on video live-action navigation technology as claimed in claim 8,
the position mark is set in the form of a two-dimensional code or a program code; correspondingly, the path request module comprises a two-dimensional code or program code scanning unit; and/or
The path request module also comprises an image acquisition unit used for acquiring the route picture before the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010197452.9A CN111288996A (en) | 2020-03-19 | 2020-03-19 | Indoor navigation method and system based on video live-action navigation technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111288996A true CN111288996A (en) | 2020-06-16 |
Family
ID=71030261
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010197452.9A Pending CN111288996A (en) | 2020-03-19 | 2020-03-19 | Indoor navigation method and system based on video live-action navigation technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111288996A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107588767A (en) * | 2012-04-18 | 2018-01-16 | 知谷(上海)网络科技有限公司 | A kind of indoor intelligent positioning navigation method |
CN106289235A (en) * | 2016-08-12 | 2017-01-04 | 天津大学 | Autonomous computational accuracy controllable chamber inner position air navigation aid based on architecture structure drawing |
CN108020231A (en) * | 2016-10-28 | 2018-05-11 | 大辅科技(北京)有限公司 | A kind of map system and air navigation aid based on video |
CN109520510A (en) * | 2018-12-26 | 2019-03-26 | 安徽智恒信科技有限公司 | A kind of indoor navigation method and system based on virtual reality technology |
CN109979006A (en) * | 2019-03-14 | 2019-07-05 | 北京建筑大学 | Indoor road net model construction method and device |
Non-Patent Citations (1)
Title |
---|
Lin Na et al.: "Urban UAV route planning based on the bidirectional A* algorithm", Journal of Shenyang Aerospace University, vol. 33, no. 4, 31 August 2016 (2016-08-31), pages 55-60 *
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111623783A (en) * | 2020-06-30 | 2020-09-04 | 杭州海康机器人技术有限公司 | Initial positioning method, visual navigation equipment and warehousing system |
CN111896003A (en) * | 2020-07-28 | 2020-11-06 | 广州中科智巡科技有限公司 | Method and system for live-action path navigation |
CN112146658A (en) * | 2020-09-17 | 2020-12-29 | 深圳市自由空间标识有限公司 | Intelligent guiding method, device and system |
CN113091763B (en) * | 2021-03-30 | 2022-05-03 | 泰瑞数创科技(北京)有限公司 | Navigation method based on live-action three-dimensional map |
CN113091763A (en) * | 2021-03-30 | 2021-07-09 | 泰瑞数创科技(北京)有限公司 | Navigation method based on live-action three-dimensional map |
CN113091764A (en) * | 2021-03-31 | 2021-07-09 | 泰瑞数创科技(北京)有限公司 | Method for customizing and displaying navigation route of live-action three-dimensional map |
CN112988947A (en) * | 2021-05-10 | 2021-06-18 | 南京千目信息科技有限公司 | Intelligent identification management system and method based on geographic information |
CN113465601A (en) * | 2021-05-13 | 2021-10-01 | 上海师范大学 | Indoor navigation based on visual path |
EP4235102A1 (en) | 2022-02-25 | 2023-08-30 | Qvadis S.r.l. | Method of providing a navigation path in an enclosed environment |
CN114334119A (en) * | 2022-03-14 | 2022-04-12 | 北京融威众邦电子技术有限公司 | Intelligent self-service terminal |
CN114360700A (en) * | 2022-03-14 | 2022-04-15 | 北京融威众邦电子技术有限公司 | Business self-service machine and business auxiliary method |
CN114360700B (en) * | 2022-03-14 | 2022-07-01 | 北京融威众邦电子技术有限公司 | Business self-service machine and business auxiliary method |
CN117109623A (en) * | 2023-10-09 | 2023-11-24 | 深圳市微克科技有限公司 | Intelligent wearable navigation interaction method, system and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111288996A (en) | Indoor navigation method and system based on video live-action navigation technology | |
CN109993780B (en) | Three-dimensional high-precision map generation method and device | |
CN105371847B (en) | A kind of interior real scene navigation method and system | |
KR20200121274A (en) | Method, apparatus, and computer readable storage medium for updating electronic map | |
Chen et al. | Crowd map: Accurate reconstruction of indoor floor plans from crowdsourced sensor-rich videos | |
US20210097103A1 (en) | Method and system for automatically collecting and updating information about point of interest in real space | |
CN108234961B (en) | Multi-path camera coding and video stream guiding method and system | |
US20180306590A1 (en) | Map update method and in-vehicle terminal | |
KR20210006511A (en) | Lane determination method, device and storage medium | |
KR20180079428A (en) | Apparatus and method for automatic localization | |
CN105139644A (en) | Indoor parking space positioning method based on APP and GPS inertial guidance | |
CN104819726A (en) | Navigation data processing method, navigation data processing device and navigation terminal | |
CN104884899A (en) | Method of determining trajectories through one or more junctions of a transportation network | |
CN111797751A (en) | Pedestrian trajectory prediction method, device, equipment and medium | |
CN105025439A (en) | Indoor positioning system, applied database, indoor positioning method and indoor positioning device | |
CN109889974B (en) | Construction and updating method of indoor positioning multisource information fingerprint database | |
Feng et al. | Visual Map Construction Using RGB‐D Sensors for Image‐Based Localization in Indoor Environments | |
CN115388902A (en) | Indoor positioning method and system, AR indoor positioning navigation method and system | |
CN112689234B (en) | Indoor vehicle positioning method, device, computer equipment and storage medium | |
CN111866734A (en) | Method, terminal, server and storage medium for positioning routing inspection track | |
Chen et al. | Multi-level scene modeling and matching for smartphone-based indoor localization | |
CN115981305A (en) | Robot path planning and control method and device and robot | |
KR20190063350A (en) | Method of detecting a shooting direction and apparatuses performing the same | |
CN113190564A (en) | Map updating system, method and device | |
CN114935341B (en) | Novel SLAM navigation computation video identification method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20200616