CN110631578A - Indoor pedestrian positioning and tracking method under map-free condition

Indoor pedestrian positioning and tracking method under map-free condition

Info

Publication number
CN110631578A
CN110631578A (application number CN201910931570.5A)
Authority
CN
China
Prior art keywords
pedestrian
map
track
semantic
inertial navigation
Prior art date
Legal status
Granted
Application number
CN201910931570.5A
Other languages
Chinese (zh)
Other versions
CN110631578B (en)
Inventor
阎波
张丽佳
韩浩楠
肖卓凌
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201910931570.5A
Publication of CN110631578A
Application granted
Publication of CN110631578B
Status: Active


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an indoor pedestrian positioning and tracking method for map-free conditions. By fusing video and inertial navigation data, the pedestrian's movement is described as a semantic track, a simple semantic description map is built from the video data, a matching correspondence is established between the semantic description map and the semantic track, and the two are matched so as to correct the original pedestrian track, realizing autonomous positioning and tracking without a map. The invention starts from the positioning and tracking result and focuses on the final positioning effect: the pedestrian's position can be found within the established semantic map, so the method is a relative positioning method. Because only a descriptive map needs to be stored rather than a three-dimensional scene map, computation and storage are greatly reduced; the method can run on lightweight smart devices, remains usable in indoor environments with changing scenes, and therefore has broad application prospects.

Description

Indoor pedestrian positioning and tracking method under map-free condition
Technical Field
The invention belongs to the technical field of indoor positioning, and particularly relates to the design of an indoor pedestrian positioning and tracking method for map-free conditions.
Background
In recent years, with the development of concepts and technologies such as smart cities, smart healthcare and smart campuses, indoor positioning services have become an indispensable technology for smart city construction. With advances in smartphone sensor technology, achieving indoor autonomous positioning with a smartphone alone is becoming a main goal of indoor positioning services. As an indoor positioning technology that uses only a mobile phone and requires no additional deployed equipment, the inertial navigation system is a core technology of this field: independent positioning is realized by an Inertial Measurement Unit (IMU) composed of inertial sensors. However, the IMU's inherent accumulated error causes its positioning accuracy to degrade noticeably as the positioning distance increases. Improving the positioning accuracy of the inertial navigation system by adding additional constraint information has therefore become a major research topic.
Existing technologies that fuse other information to assist inertial navigation positioning include fusion positioning based on maps, video, radar and similar sources, but all of them require an indoor map prepared in advance, which obviously fails in unfamiliar, map-free or scene-variable indoor environments. To solve this problem, Simultaneous Localization and Mapping (SLAM) is developing vigorously; however, it must capture video while positioning and needs a very large space to store the indoor map constructed from that video, and such computation and storage demands are clearly unsuitable for smartphones. Developing an indoor positioning and tracking technology that works without a map and can run on a mobile phone has thus become an important problem in current indoor positioning research.
The prior art provides a fused indoor positioning method using pedestrian motion monitoring together with a map. It describes the pedestrian's motion as a semantic description track, and fuses and matches the track, augmented with description items, against the map to realize pedestrian positioning and tracking. The semantic track describes the pedestrian's movement modes during motion, including walking straight, turning left, turning right, going up and down stairs, taking an elevator and so on. During normal walking a pedestrian only turns at corners, on stairs, or when entering and exiting doors, and walks straight the rest of the time, so the semantically described pedestrian track can be fused and matched with the map to realize the final positioning and tracking. However, this method still incorporates map information and cannot solve positioning in map-free, scene-variable indoor environments.
Existing indoor autonomous positioning and tracking technologies not only fuse information from other sensors with inertial sensor data, but also require a corresponding indoor map to realize the final positioning. SLAM, which needs no map, must process a large number of video images and build them into a three-dimensional scene graph; the computation is heavy and a large storage space is needed for the generated scene graph, which places high demands on an ordinary smartphone. If only a two-dimensional map is generated instead, the video images must be translated into a conventional scanned map, which further raises the algorithmic difficulty. Neither type of prior art achieves a lightweight autonomous positioning and tracking system with guaranteed accuracy.
Disclosure of Invention
The invention aims to solve the problem that existing indoor autonomous positioning and tracking technologies cannot realize lightweight autonomous positioning and tracking while guaranteeing accuracy, and provides an indoor pedestrian positioning and tracking method that requires no map.
The technical scheme of the invention is as follows: an indoor pedestrian positioning and tracking method under map-free conditions comprises the following steps:
S1, collecting video data and inertial navigation data during the indoor movement of the pedestrian, and preprocessing the video data and the inertial navigation data.
S2, performing semantic description on the pedestrian's inertial navigation track according to the inertial navigation data to obtain the pedestrian's semantic track.
S3, extracting features from the video data, and storing the extracted features in time order to obtain a video feature sequence.
S4, constructing a simple semantic description map according to the video feature sequence.
S5, matching the pedestrian's semantic track with the simple semantic description map, correcting the pedestrian's inertial navigation track, and marking the semantic feature positions of the simple semantic description map on the corrected track, thereby realizing autonomous indoor positioning and tracking of the pedestrian without a map.
Further, step S1 includes the following substeps:
S11, the pedestrian walks indoors holding the smartphone vertically; video data during the indoor movement is collected by the smartphone camera, inertial navigation data is collected by the smartphone IMU, and both are transmitted to a terminal for storage.
S12, stitching the consecutive video frames that span a time interval equal to the inertial navigation sampling interval into one picture, thereby aligning the video data with the inertial navigation data in time.
Further, step S2 is specifically: judging the pedestrian's motion mode at each time point from the accelerometer output curve of the IMU, determining the rotation information generated during the pedestrian's motion from the angle characteristics output by the gyroscope and magnetometer of the IMU, and marking the motion mode, rotation information and time information at the corresponding positions of the pedestrian's original inertial navigation track to obtain the pedestrian's semantic track.
Further, the rotation information generated during the pedestrian's movement in step S2 is determined as follows: when the pedestrian's rotation angle is greater than or equal to 45 degrees, the pedestrian is judged to have turned; otherwise, no turn is registered.
Further, step S3 includes the following substeps:
S31, extracting corner features from the video data using the Harris corner detection algorithm.
S32, classifying the extracted corner points according to the density of their coordinates.
S33, judging whether the coordinates of a class of corner points can form an upright quadrilateral with two parallel sides; if so, the corner points are judged to be features of a door and the method proceeds to step S34, otherwise to step S35.
S34, judging whether corner points exist inside the quadrilateral; if so, the corner points are judged to be features of an open door, otherwise features of a closed door, and the method proceeds to step S37.
S35, judging whether the coordinates of a class of corner points are in a symmetrical relation; if so, the corner points are judged to be features of stairs and the method proceeds to step S37, otherwise to step S36.
S36, judging whether the coordinates of a class of corner points can form a straight line broken in the middle; if so, the corner points are judged to be features of a wall corner, and the method proceeds to step S37, otherwise it proceeds directly to step S37.
S37, storing the obtained open-door, closed-door, stair and corner features in time order to obtain the video feature sequence.
Further, step S4 is specifically: translating the features in the video feature sequence, in their corresponding time order, into natural language describing them, thereby forming a simple semantic description map capable of indicating the pedestrian's movement mode.
Further, in step S5, the matching between the pedestrian's semantic track and the simple semantic description map is implemented by establishing a correspondence between the pedestrian motion mode and the indoor spatial features, the correspondence function being:
f_t = I(t - T) * P(L_T | A_t)

where f_t denotes the matching probability at time t between the semantic track and the simple semantic description map, determined by the pedestrian walking time, the semantic track marking time t and the marking time T of the simple semantic description map; I(·) is an indicator function, with I(t - T) = 1 when t = T and I(t - T) = 0 otherwise; and P(L_T | A_t) denotes the probability that the simple semantic description map has time T and position L, given that the semantic track marking time is t and the pedestrian movement mode is A.
The invention has the beneficial effects that:
(1) By fusing video and inertial navigation data, the invention overcomes the traditional indoor positioning technology's dependence on an indoor map for positioning and tracking, truly realizing autonomous positioning.
(2) The invention proposes the idea of building a simple semantic description map. Compared with existing SLAM technology, which must process a large amount of video information to generate a three-dimensional scene map and consumes substantial storage, this greatly saves storage space and computation.
(3) The semantic description track is matched with the semantic description map, and the original inertial navigation track is corrected according to their correspondence in terms of motion, yielding higher positioning accuracy.
Drawings
Fig. 1 is a flowchart of the indoor pedestrian positioning and tracking method under map-free conditions according to an embodiment of the present invention.
Fig. 2 is a schematic diagram illustrating a semantic track of a pedestrian according to an embodiment of the present invention.
Fig. 3 is a flowchart illustrating a substep of step S3 according to an embodiment of the present invention.
Fig. 4 is a schematic diagram illustrating a simple semantic description map according to an embodiment of the present invention.
Fig. 5 is the semantic description map with the ground-truth pedestrian trajectory according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It is to be understood that the embodiments shown and described in the drawings are merely exemplary and are intended to illustrate the principles and spirit of the invention, not to limit the scope of the invention.
The embodiment of the invention provides a method for positioning and tracking indoor pedestrians without a map, which comprises the following steps S1-S5 as shown in FIG. 1:
S1, collecting video data and inertial navigation data during the indoor movement of the pedestrian, and preprocessing the video data and the inertial navigation data.
The step S1 includes the following substeps S11-S12:
S11, the pedestrian walks indoors holding the smartphone vertically (performed by the researchers in this embodiment of the invention); video data during the indoor movement is collected by the smartphone camera, inertial navigation data is collected by the smartphone IMU, and both are transmitted to a terminal for storage.
S12, stitching the consecutive video frames that span a time interval equal to the inertial navigation sampling interval into one picture, thereby aligning the video data with the inertial navigation data in time.
In the embodiment of the invention, the video acquisition frequency differs from the IMU acquisition frequency: the IMU rate is lower than the video frame rate, and adjacent frames share a large overlapping portion. Images of consecutive frames are therefore stitched together and aligned with the inertial navigation data at the current moment, completing the preprocessing of the video data and the inertial navigation data.
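As a minimal sketch of this alignment step (Python; the function name, the averaging-based stitching and all parameters are illustrative assumptions rather than the patent's implementation), the frames falling inside one IMU sampling interval can be grouped and composited as follows:

```python
import numpy as np

def align_video_to_imu(frame_times, frames, imu_times):
    """Group the video frames inside each IMU sampling interval into one
    composite image keyed to that IMU sample (assumed realization of S12)."""
    aligned = {}
    for i in range(len(imu_times) - 1):
        t0, t1 = imu_times[i], imu_times[i + 1]
        idx = [k for k, t in enumerate(frame_times) if t0 <= t < t1]
        if not idx:
            continue
        # Crude stand-in for "stitching": average the heavily overlapping
        # consecutive frames. A real implementation would register the
        # frames (e.g. by feature matching) before compositing.
        stack = np.stack([frames[k].astype(np.float32) for k in idx])
        aligned[i] = stack.mean(axis=0).astype(np.uint8)
    return aligned
```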
S2, performing semantic description on the pedestrian's inertial navigation track according to the inertial navigation data to obtain the pedestrian's semantic track.
In the embodiment of the invention, the pedestrian's original inertial navigation track is obtained by a Pedestrian Dead Reckoning (PDR) algorithm roughly corrected with a Kalman filter. Because the acceleration characteristics differ markedly between walking and going up or down stairs, the pedestrian's motion mode at each time point is judged from the accelerometer output curve of the IMU.
For judging pedestrian rotation, the gyroscope and magnetometer in the IMU complement each other, and the rotation information generated during the pedestrian's motion is determined from the output angle characteristics. In the embodiment of the invention, to avoid treating small rotations during straight walking as behavior changes, the pedestrian is judged to have turned only when the rotation angle is greater than or equal to 45 degrees; otherwise no turn is registered.
In addition, the similarity between consecutive frames in the video information can assist the IMU data in confirming the pedestrian's rotation. According to the determined rotation information, the pedestrian's behavior and time information are marked at the corresponding positions of the original inertial navigation track to form the semantic track, as shown in fig. 2.
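A hedged sketch of assembling such a semantic track from the IMU stream is given below (Python). The 45-degree threshold comes from the method itself; the window size, the variance heuristic separating walking from stair climbing, and all names are illustrative assumptions:

```python
import numpy as np

TURN_THRESHOLD_DEG = 45.0  # per the method: smaller rotations are ignored

def semantic_track(acc_z, gyro_z, dt, win=100, stairs_var=2.0):
    """Label each window with a motion mode and any registered turn.

    acc_z  : vertical acceleration samples (m/s^2)
    gyro_z : yaw-rate samples (rad/s)
    dt     : IMU sampling interval (s)
    Returns a list of (window_index, mode, turn_degrees) tuples.
    """
    labels = []
    heading_change = 0.0
    for w in range(len(acc_z) // win):
        a = acc_z[w * win:(w + 1) * win]
        g = gyro_z[w * win:(w + 1) * win]
        # Assumed heuristic: stair climbing shows a larger vertical
        # acceleration variance than level walking.
        mode = "stairs" if np.var(a) > stairs_var else "walk"
        # Integrate the yaw rate; register a turn only once it passes
        # the 45-degree threshold, then reset the accumulator.
        heading_change += np.degrees(np.sum(g) * dt)
        turn = 0.0
        if abs(heading_change) >= TURN_THRESHOLD_DEG:
            turn = heading_change  # sign encodes the turn direction
            heading_change = 0.0
        labels.append((w, mode, turn))
    return labels
```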
S3, extracting features from the video data, and storing the extracted features in time order to obtain a video feature sequence.
As shown in fig. 3, step S3 includes the following substeps S31-S37:
S31, extracting corner features from the video data using the Harris corner detection algorithm.
S32, classifying the extracted corner points according to the density of their coordinates.
S33, judging whether the coordinates of a class of corner points can form an upright quadrilateral with two parallel sides; if so, the corner points are judged to be features of a door and the method proceeds to step S34, otherwise to step S35.
S34, judging whether corner points exist inside the quadrilateral; if so, the corner points are judged to be features of an open door, otherwise features of a closed door, and the method proceeds to step S37.
S35, judging whether the coordinates of a class of corner points are in a symmetrical relation; if so, the corner points are judged to be features of stairs and the method proceeds to step S37, otherwise to step S36.
S36, judging whether the coordinates of a class of corner points can form a straight line broken in the middle; if so, the corner points are judged to be features of a wall corner, and the method proceeds to step S37, otherwise it proceeds directly to step S37.
S37, storing the obtained open-door, closed-door, stair and corner features in time order to obtain the video feature sequence.
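The sketch below illustrates steps S31 and S32 using OpenCV's Harris detector, with DBSCAN standing in for the patent's unspecified density-based grouping; the thresholds and names are assumptions. The geometric tests of S33-S36 (upright quadrilateral, symmetric layout, broken line) would then be applied to each returned cluster and are omitted here:

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def corner_clusters(image_bgr, harris_thresh=0.01, eps=25, min_pts=4):
    """Detect Harris corners (S31) and group them by coordinate density (S32)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > harris_thresh * response.max())
    pts = np.column_stack([xs, ys])
    if len(pts) == 0:
        return []
    # DBSCAN stands in for "classifying corner points by the density of
    # their coordinates"; each returned array is one candidate structure.
    group_ids = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(pts)
    return [pts[group_ids == g] for g in set(group_ids) if g != -1]
```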
S4, constructing a simple semantic description map according to the video feature sequence.
According to the corner features and their classification obtained in step S3, the features in the video feature sequence are translated, in their corresponding time order, into natural language describing them, forming a simple semantic description map capable of indicating the pedestrian's movement mode; the described information comprises the feature name and its time. As shown in fig. 4, the simple semantic description map consists of the feature positions and the times in the video at which the pedestrian passes each feature, where time is denoted by T, doors by D and corners by C.
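A minimal sketch of this translation step is shown below (Python; the phrasing, the feature labels and the function name are assumptions, while the T/D/C/S symbols follow FIG. 4 and the embodiment):

```python
def build_semantic_map(feature_sequence):
    """Translate a time-ordered video feature sequence into a simple
    semantic description map (assumed realization of S4).

    feature_sequence : list of (time_seconds, feature) tuples, where
                       feature is "open_door", "closed_door",
                       "stairs" or "corner".
    """
    symbol = {"open_door": "D", "closed_door": "D",
              "stairs": "S", "corner": "C"}
    phrase = {"open_door": "pass an open door",
              "closed_door": "pass a closed door",
              "stairs": "reach stairs",
              "corner": "reach a corner"}
    return [f"T={t:.1f}s {symbol[f]}: {phrase[f]}"
            for t, f in sorted(feature_sequence)]

# Example: build_semantic_map([(30.5, "open_door"), (12.0, "corner")])
# -> ["T=12.0s C: reach a corner", "T=30.5s D: pass an open door"]
```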
S5, matching the pedestrian's semantic track with the simple semantic description map, correcting the pedestrian's inertial navigation track, and marking the semantic feature positions of the simple semantic description map on the corrected track, thereby realizing autonomous indoor positioning and tracking of the pedestrian without a map.
In the embodiment of the invention, the matching between the pedestrian's semantic track and the simple semantic description map is implemented by establishing a correspondence between the pedestrian motion mode and the indoor spatial features, the correspondence function being:
f_t = I(t - T) * P(L_T | A_t)

where f_t denotes the matching probability at time t between the semantic track and the simple semantic description map, determined by the pedestrian walking time, the semantic track marking time t and the marking time T of the simple semantic description map; I(·) is an indicator function, with I(t - T) = 1 when t = T and I(t - T) = 0 otherwise; and P(L_T | A_t) denotes the probability that the simple semantic description map has time T and position L, given that the semantic track marking time is t and the pedestrian movement mode is A. In the embodiment of the invention, the pedestrian movement mode A comprises turning left, walking straight and turning right, and the position L comprises a door D, a corner C and stairs S.
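As a toy illustration of this correspondence function (Python; the exact time-matching rule of the indicator and the probability values are assumptions), one matching score can be computed as:

```python
def match_probability(t, T, p_L_given_A):
    """f_t = I(t - T) * P(L_T | A_t): assumed reading of the
    correspondence function, with the indicator firing when the track
    marking time t coincides with the map marking time T."""
    indicator = 1.0 if t == T else 0.0
    return indicator * p_L_given_A

# Illustrative numbers: a left turn marked at t = 30 s on the track,
# matched against a corner marked at T = 30 s on the map with an
# assumed P(corner | left turn) = 0.8:
# match_probability(30.0, 30.0, 0.8) -> 0.8
```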
A pedestrian turns by less than 180 degrees at corners, doors and stairs, so the pedestrian's heading can be corrected by backtracking to the last behavior-transition point according to the time and place of the turning behavior, and the semantic description track is matched against the semantic description map, realizing positioning and tracking of the pedestrian's trajectory. FIG. 5 shows the semantic description map with the ground-truth trajectory that the final matching result should approach.
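One plausible realization of this backtracking correction is sketched below (Python); treating the correction as a rigid rotation of the track segment after the transition point, like every name here, is an assumption:

```python
import numpy as np

def correct_heading(track_xy, turn_index, matched_turn_deg, measured_turn_deg):
    """Backtrack to the last behavior-transition point and rotate the
    subsequent PDR track so the measured turn agrees with the turn
    implied by the matched map feature.

    track_xy : (N, 2) array of PDR positions
    """
    correction = np.radians(matched_turn_deg - measured_turn_deg)
    c, s = np.cos(correction), np.sin(correction)
    R = np.array([[c, -s], [s, c]])
    pivot = track_xy[turn_index]
    corrected = track_xy.astype(float).copy()
    # Rotate everything after the transition point about that point.
    corrected[turn_index:] = (track_xy[turn_index:] - pivot) @ R.T + pivot
    return corrected
```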
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to help the reader understand the principles of the invention, and should not be construed as limiting the invention to the specifically described embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from its spirit, and such changes and combinations remain within the scope of the invention.

Claims (7)

1. A method for positioning and tracking indoor pedestrians without a map is characterized by comprising the following steps:
S1, collecting video data and inertial navigation data during the indoor movement of the pedestrian, and preprocessing the video data and the inertial navigation data;
S2, performing semantic description on the pedestrian's inertial navigation track according to the inertial navigation data to obtain the pedestrian's semantic track;
S3, extracting features from the video data, and storing the extracted features in time order to obtain a video feature sequence;
S4, constructing a simple semantic description map according to the video feature sequence;
S5, matching the pedestrian's semantic track with the simple semantic description map, correcting the pedestrian's inertial navigation track, and marking the semantic feature positions of the simple semantic description map on the corrected track, thereby realizing autonomous indoor positioning and tracking of the pedestrian without a map.
2. The method for positioning and tracking indoor pedestrians without a map according to claim 1, wherein step S1 comprises the following substeps:
S11, the pedestrian walks indoors holding the smartphone vertically; video data during the indoor movement is collected by the smartphone camera, inertial navigation data is collected by the smartphone IMU, and both are transmitted to a terminal for storage;
S12, stitching the consecutive video frames that span a time interval equal to the inertial navigation sampling interval into one picture, thereby aligning the video data with the inertial navigation data in time.
3. The method for positioning and tracking indoor pedestrians without a map according to claim 1, wherein step S2 is specifically: judging the pedestrian's motion mode at each time point from the accelerometer output curve of the IMU, determining the rotation information generated during the pedestrian's motion from the angle characteristics output by the gyroscope and magnetometer of the IMU, and marking the motion mode, rotation information and time information at the corresponding positions of the pedestrian's original inertial navigation track to obtain the pedestrian's semantic track.
4. The method for positioning and tracking indoor pedestrians without a map according to claim 3, wherein the rotation information generated during the pedestrian's movement in step S2 is determined as follows: when the pedestrian's rotation angle is greater than or equal to 45 degrees, the pedestrian is judged to have turned; otherwise, no turn is registered.
5. The method for positioning and tracking indoor pedestrians without a map according to claim 1, wherein step S3 comprises the following substeps:
S31, extracting corner features from the video data using the Harris corner detection algorithm;
S32, classifying the extracted corner points according to the density of their coordinates;
S33, judging whether the coordinates of a class of corner points can form an upright quadrilateral with two parallel sides; if so, the corner points are judged to be features of a door and the method proceeds to step S34, otherwise to step S35;
S34, judging whether corner points exist inside the quadrilateral; if so, the corner points are judged to be features of an open door, otherwise features of a closed door, and the method proceeds to step S37;
S35, judging whether the coordinates of a class of corner points are in a symmetrical relation; if so, the corner points are judged to be features of stairs and the method proceeds to step S37, otherwise to step S36;
S36, judging whether the coordinates of a class of corner points can form a straight line broken in the middle; if so, the corner points are judged to be features of a wall corner, and the method proceeds to step S37, otherwise it proceeds directly to step S37;
S37, storing the obtained open-door, closed-door, stair and corner features in time order to obtain the video feature sequence.
6. The method for positioning and tracking indoor pedestrians without a map according to claim 1, wherein step S4 is specifically: translating the features in the video feature sequence, in their corresponding time order, into natural language describing them, thereby forming a simple semantic description map capable of indicating the pedestrian's movement mode.
7. The method for positioning and tracking indoor pedestrians without a map according to claim 1, wherein in step S5 the matching between the pedestrian's semantic track and the simple semantic description map is implemented by establishing a correspondence between the pedestrian motion mode and the indoor spatial features, the correspondence function being:
f_t = I(t - T) * P(L_T | A_t)

wherein f_t denotes the matching probability at time t between the semantic track and the simple semantic description map, determined by the pedestrian walking time, the semantic track marking time t and the marking time T of the simple semantic description map; I(·) is an indicator function; and P(L_T | A_t) denotes the probability that the simple semantic description map has time T and position L, given that the semantic track marking time is t and the pedestrian movement mode is A.
CN201910931570.5A, filed 2019-09-29 (priority 2019-09-29): Indoor pedestrian positioning and tracking method under map-free condition. Status: Active. Granted as CN110631578B.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910931570.5A (CN110631578B) | 2019-09-29 | 2019-09-29 | Indoor pedestrian positioning and tracking method under map-free condition

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910931570.5A (CN110631578B) | 2019-09-29 | 2019-09-29 | Indoor pedestrian positioning and tracking method under map-free condition

Publications (2)

Publication Number | Publication Date
CN110631578A | 2019-12-31
CN110631578B | 2021-06-08

Family

ID=68974649

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910931570.5A (Active, granted as CN110631578B) | Indoor pedestrian positioning and tracking method under map-free condition | 2019-09-29 | 2019-09-29

Country Status (1)

Country Link
CN (1) CN110631578B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306145A (en) * 2011-07-27 2012-01-04 东南大学 Robot navigation method based on natural language processing
WO2015150855A1 (en) * 2014-04-04 2015-10-08 Basalamah Anas M A method and system for crowd sensing to be used for automatic semantic identification
CN106289282A (en) * 2016-07-18 2017-01-04 北京方位捷讯科技有限公司 A kind of indoor map pedestrian's track matching method
CN106504288A (en) * 2016-10-24 2017-03-15 北京进化者机器人科技有限公司 A kind of domestic environment Xiamen localization method based on binocular vision target detection
CN106767812A (en) * 2016-11-25 2017-05-31 梁海燕 A kind of interior semanteme map updating method and system based on Semantic features extraction
CN109389641A (en) * 2017-08-02 2019-02-26 北京贝虎机器人技术有限公司 Indoor map integrated data generation method and indoor method for relocating
CN109916397A (en) * 2019-03-15 2019-06-21 斑马网络技术有限公司 For tracking method, apparatus, electronic equipment and the storage medium of inspection track

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHUO WANG et al., "Learning Hierarchical Space Tiling for Scene Modeling, Parsing and Attribute Tagging", IEEE Transactions on Pattern Analysis and Machine Intelligence *
LIU Tao et al., "Research on Indoor Mapping and Pedestrian Navigation Methods Aided by Visual Information", China Doctoral Dissertations Full-text Database, Engineering Science and Technology I *
XIE Xiao et al., "A Multi-level Semantic Model for Geographic Video", Acta Geodaetica et Cartographica Sinica *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112229398A (en) * 2020-09-03 2021-01-15 宁波诺丁汉大学 Navigation system and method for indoor fire escape
CN113259855A (en) * 2021-06-16 2021-08-13 北京奇岱松科技有限公司 Indoor target operation track recognition system
CN113469118A (en) * 2021-07-20 2021-10-01 京东科技控股股份有限公司 Multi-target pedestrian tracking method and device, electronic equipment and storage medium
CN115223442A (en) * 2021-07-22 2022-10-21 上海数川数据科技有限公司 Automatic generation method of indoor pedestrian map
CN115223442B (en) * 2021-07-22 2024-04-09 上海数川数据科技有限公司 Automatic generation method of indoor pedestrian map

Also Published As

Publication Number | Publication Date
CN110631578B | 2021-06-08

Similar Documents

Publication Publication Date Title
CN110631578B (en) Indoor pedestrian positioning and tracking method under map-free condition
Wu et al. Deep learning for unmanned aerial vehicle-based object detection and tracking: A survey
Xiao et al. Dynamic-SLAM: Semantic monocular visual localization and mapping based on deep learning in dynamic environment
US11428537B2 (en) Localization and mapping methods using vast imagery and sensory data collected from land and air vehicles
CN113674416B (en) Three-dimensional map construction method and device, electronic equipment and storage medium
CN108196285B (en) Accurate positioning system based on multi-sensor fusion
CN106056075A (en) Important person identification and tracking system in community meshing based on unmanned aerial vehicle
CN109509230A (en) A kind of SLAM method applied to more camera lens combined type panorama cameras
CN108242079A (en) A kind of VSLAM methods based on multiple features visual odometry and figure Optimized model
CN110579207B (en) Indoor positioning system and method based on combination of geomagnetic signals and computer vision
Dong et al. Pair-navi: Peer-to-peer indoor navigation with mobile visual slam
CN108267121A (en) The vision navigation method and system of more equipment under a kind of variable scene
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN111595344B (en) Multi-posture downlink pedestrian dead reckoning method based on map information assistance
CN111795688B (en) Library navigation system implementation method based on deep learning and augmented reality
CN110032965A (en) Vision positioning method based on remote sensing images
CN113066129A (en) Visual positioning and mapping system based on target detection in dynamic environment
CN109063549A (en) High-resolution based on deep neural network is taken photo by plane video moving object detection method
Zhu et al. PairCon-SLAM: Distributed, online, and real-time RGBD-SLAM in large scenarios
Ma et al. Location and 3-D visual awareness-based dynamic texture updating for indoor 3-D model
CN113190711A (en) Video dynamic object trajectory space-time retrieval method and system in geographic scene
CN108534781A (en) Indoor orientation method based on video
CN115235455B (en) Pedestrian positioning method based on smart phone PDR and vision correction
CN115731287B (en) Moving target retrieval method based on aggregation and topological space
Xu et al. Smartphone-based indoor visual navigation with leader-follower mode

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant