CN115235455A - Pedestrian positioning method based on smart phone PDR and vision correction - Google Patents

Pedestrian positioning method based on smart phone PDR and vision correction

Info

Publication number
CN115235455A
CN115235455A
Authority
CN
China
Prior art keywords
pedestrian
pdr
positioning
visual
follows
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211133744.1A
Other languages
Chinese (zh)
Other versions
CN115235455B (en)
Inventor
潘献飞
陈宗阳
陈昶昊
褚超群
涂哲铭
张礼廉
胡小平
吴文启
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202211133744.1A priority Critical patent/CN115235455B/en
Publication of CN115235455A publication Critical patent/CN115235455A/en
Application granted granted Critical
Publication of CN115235455B publication Critical patent/CN115235455B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Abstract

The invention discloses a pedestrian positioning method based on smartphone PDR and vision correction, which comprises the following steps: establishing a visual feature map of the area to be detected; determining the pedestrian's initial position and heading angle through global positioning on the visual feature map; performing pedestrian dead reckoning based on the PDR from that initial position and heading, while accumulating the pedestrian's walking distance; when the walking distance reaches a set threshold, obtaining the pedestrian's global positioning information at the current time through global positioning on the visual feature map; and correcting the PDR positioning result using the visual positioning result as the positioning reference. Applied to the field of pedestrian navigation, the method corrects the position and heading-angle errors of the PDR by intermittently invoking visual positioning, which not only yields a marked improvement in positioning performance but also extends the application scenario of traditional PDR from a two-dimensional plane to three-dimensional space, giving it practical research significance and application value.

Description

Pedestrian positioning method based on smart phone PDR and vision correction
Technical Field
The invention relates to the technical field of pedestrian navigation, in particular to a pedestrian positioning method based on a smart phone PDR and visual correction.
Background
With people's growing demand for location-based services, indoor positioning technology has become a research hotspot. Because of signal occlusion and interference, satellite navigation systems cannot meet users' indoor positioning needs in most cases. To address satellite signal occlusion in complex indoor environments, researchers have proposed many indoor positioning methods. Typical indoor positioning technologies include Wi-Fi fingerprinting, Bluetooth, radio-frequency identification, ultra-wideband, vision, and dead reckoning. With the development of microelectronics, Pedestrian Dead Reckoning (PDR) based on the MEMS sensors of mobile intelligent terminals is favored by researchers for its strong autonomy, continuity, and convenience, requiring no base stations to be deployed in advance.
At present, most smartphones contain sensors such as an accelerometer, a gyroscope, and a magnetometer. Pedestrian dead reckoning is an autonomous relative-positioning algorithm that estimates the pedestrian's position with the smartphone's inertial sensors: it computes the pedestrian's walking route and position through gait detection, step-length estimation, and heading calculation. However, because of the limited precision of the smartphone's built-in MEMS sensors and the accumulated error of inertial devices, the PDR's positioning error grows ever larger during long-term position estimation. In addition, traditional PDR can only estimate the pedestrian's position on a two-dimensional plane; when the pedestrian's height changes, as when going up or down stairs, the PDR cannot position accurately.
To address PDR error accumulation, many scholars have proposed combining PDR with other indoor positioning means, such as correcting the PDR result with auxiliary information from Wi-Fi, Bluetooth, or geomagnetism. However, auxiliary means that use external signals such as Wi-Fi and Bluetooth require a large amount of infrastructure to be deployed in the indoor scene beforehand and depend on those signals, which are susceptible to interference from the environment. PDR methods assisted by indoor magnetic-field features require much time and effort to build a fine-grained signal fingerprint database offline, and PDR methods constrained by map information place high demands on drawing high-precision indoor maps. Fusing an absolute-positioning technology with the PDR algorithm can solve the PDR's error accumulation, but it requires additional infrastructure, raises the cost of the positioning system, weakens the autonomy and continuity advantages of inertial navigation to some extent, and thus has obvious limitations in practical application. Therefore, a low-cost technology that assists the PDR without depending on external facilities, enabling accurate and robust indoor pedestrian positioning, has important application value.
In recent years, computer vision has developed rapidly, and visual SLAM algorithms have steadily matured. Global positioning based on a visual feature map follows the same principle as SLAM loop-closure detection: it is essentially an information-retrieval method that estimates the user's position through visual feature matching. Its implementation is not constrained by the external environment; the user only needs a camera to capture the current image, and every current smartphone has a built-in camera sensor. Therefore, during pedestrian dead reckoning, visual positioning via the smartphone's built-in camera can assist in correcting the accumulated error of the PDR and thus improve positioning accuracy. However, although traditional visual-matching methods can obtain positioning information, their image query and matching efficiency is low; they cannot meet real-time requirements and are difficult to deploy in practice.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a pedestrian positioning method based on smartphone PDR and vision correction, which not only yields a marked improvement in positioning performance but also extends the application scenario of traditional PDR from a two-dimensional plane to three-dimensional space, and thus has practical research significance and application value.
In order to achieve the above object, the present invention provides a pedestrian positioning method based on a smartphone PDR and vision correction, comprising the following steps:
Step 1, establishing a visual feature map of the area to be detected, as follows: collect scene images in the area with a vision sensor, perform simultaneous localization and mapping with a visual SLAM algorithm, and store the SLAM mapping result as a map database organized around keyframes for subsequent online visual positioning.
Step 2, determining the initial position and heading angle of the pedestrian through global positioning on the visual feature map.
Step 3, PDR positioning: starting from the initial position and heading angle, perform pedestrian dead reckoning based on the PDR and accumulate the pedestrian's walking distance, as follows: perform gait detection by analyzing the output of the smartphone accelerometer; once a step is detected, compute its step length from the acceleration values and the pedestrian's direction of travel from the angular-rate information output by the gyroscope. Given the initial position and heading, the pedestrian's position at each moment can then be computed from the obtained step lengths and heading angles.
Step 4, visual positioning: when the pedestrian's walking distance reaches a set threshold, obtain the pedestrian's global positioning information at the current time through global positioning on the visual feature map, as follows: once the walking distance computed by the PDR reaches the threshold, capture the current scene image with the smartphone camera and detect the feature points and descriptor information of the current frame. Using the PDR's prior position information, perform feature matching against the offline feature map to find candidate keyframes, then establish 2D-3D matches between the current frame and the candidates to obtain the global positioning information at the current time.
Step 5, correct the PDR positioning result with the visual positioning result as the positioning reference, take the corrected PDR result as the pedestrian's new initial position and heading angle, and repeat steps 3 to 5. The PDR and visual positioning results are fused with an Extended Kalman Filter (EKF). PDR is a relative positioning method that accumulates error during positioning and needs correction by absolute position information. The visual positioning result based on the visual feature map is absolute position information with no error drift, so it can be used intermittently to correct the PDR's accumulated error; this improves positioning accuracy and also extends the application scenario of traditional PDR from a two-dimensional plane to three-dimensional space.
The invention provides a pedestrian positioning method based on smartphone PDR and vision correction: pedestrian dead reckoning is realized with the accelerometer and gyroscope built into the smartphone, scene images are captured with the smartphone's camera for visual feature-matching positioning based on a bag-of-words model, and the PDR and visual positioning results are loosely coupled and fused with an extended Kalman filter (EKF) to obtain the fused estimate of the pedestrian's position. Correcting the PDR's position and heading-angle errors by intermittently invoking visual positioning not only yields a marked improvement in positioning performance but also extends the application scenario of traditional PDR from a two-dimensional plane to three-dimensional space, which has practical research significance and application value.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a pedestrian positioning method based on a smartphone PDR and vision correction in an embodiment of the present invention;
FIG. 2 is a diagram illustrating information contained in a single key frame according to an embodiment of the present invention;
FIG. 3 is a flow chart of PDR location in an embodiment of the present invention;
FIG. 4 is a flowchart of visual positioning according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that all directional indicators (such as up, down, left, right, front, back, etc.) in the embodiments of the present invention are only used to explain the relative positional relationship, motion situation, etc. between components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicator changes accordingly.
In addition, the descriptions related to "first", "second", etc. in the present invention are only for descriptive purposes and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "connected," "secured," and the like are to be construed broadly, and for example, "secured" may be a fixed connection, a removable connection, or an integral part; the connection can be mechanical connection, electrical connection, physical connection or wireless communication connection; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In addition, the technical solutions in the embodiments of the present invention may be combined with each other, but it must be based on the realization of those skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination of technical solutions should not be considered to exist, and is not within the protection scope of the present invention.
Fig. 1 shows a pedestrian positioning method based on a smart phone PDR and visual correction disclosed in this embodiment, which mainly includes the following steps 1 to 5.
Step 1, establishing a visual feature map of the area to be detected
Building a visual feature map means using sensor information to convert the visual features observed at different times into a unified feature map usable for global positioning; building such a map is essentially a simultaneous localization and mapping (SLAM) process.
Considering the real-time requirement of visual positioning and the need for scale- and rotation-invariant visual features, a visual SLAM algorithm based on ORB features is adopted to build the visual feature map of the area offline. The local map is built with a local bundle adjustment (BA) optimization, which jointly optimizes each camera pose and the spatial position of each feature point by minimizing the camera reprojection error.
Assume the camera pose is $T$, with corresponding Lie algebra $\xi$; the spatial position of a feature point is $p_j$; and the observation data are the pixel coordinates $u_{ij}$. The least-squares problem over the observation error is constructed as

$$\xi^* = \arg\min_{\xi} \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{n} \left\| u_{ij} - h(\xi_i, p_j) \right\|^2$$

where $h(\xi_i, p_j)$ is the observation equation, i.e. the data generated by observing landmark $p_j$ at camera pose $\xi_i$; $m$ is the number of keyframes co-visible with the current frame; and $n$ is the number of co-visible map points.
The visual feature map obtained by SLAM mapping is stored as map data organized around keyframes. Referring to fig. 2, each keyframe contains its pose in the map coordinate system, the pixel coordinates and three-dimensional positions of its feature points, and the feature descriptors of those points; the complete visual feature map consists of all keyframes in the mapped area. In a specific implementation, keyframe screening adopts two criteria (a minimal sketch follows the list):
1) The average disparity between the current frame and the previous keyframe is greater than a set threshold keyframe_disparity, usually set to about 10;
2) The number of feature points tracked by the current frame is lower than a set threshold track_num, usually set to about 50.
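A minimal Python sketch of the keyframe decision above; the parameter names and default values (about 10 pixels of disparity, about 50 tracked points) follow the thresholds suggested in the text:

    def is_keyframe(avg_disparity: float, tracked_points: int,
                    keyframe_disparity: float = 10.0, track_num: int = 50) -> bool:
        """Return True if the current frame should be stored as a keyframe."""
        # Criterion 1: enough parallax relative to the previous keyframe.
        if avg_disparity > keyframe_disparity:
            return True
        # Criterion 2: tracking is weakening, so insert a keyframe to recover.
        if tracked_points < track_num:
            return True
        return False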
Step 2, determining the initial position and heading angle of the pedestrian through global positioning on the visual feature map
In the specific implementation, when a pedestrian enters the area to be detected for the first time, the place recognition algorithm based on the visual feature map is invoked, and a visual global positioning result is computed within the area mapped in step 1; this result serves as the pedestrian's initial position and heading angle $(x_0, y_0, \psi_0)$. The process of obtaining the visual global positioning result is the same as in step 4 and is not repeated here.
Step 3, PDR positioning: performing pedestrian dead reckoning based on the PDR from the initial position and heading angle, and accumulating the pedestrian's walking distance
The PDR-based dead-reckoning process is as follows: gait detection is performed by analyzing the data output by the smartphone accelerometer; once a step is detected, its step length is estimated from the acceleration values and the pedestrian's heading angle is computed from the gyroscope's angular-rate data. Starting from the pedestrian's position at the previous step, the current position follows from the estimated step length and heading, so the position update is

$$\begin{cases} x_k = x_{k-1} + l_k \cos\psi_k \\ y_k = y_{k-1} + l_k \sin\psi_k \end{cases}$$

where $(x_k, y_k)$ is the pedestrian's position at step $k$; $(x_{k-1}, y_{k-1})$ is the position at step $k-1$; $\psi_k$ is the heading angle at step $k$; and $l_k$ is the length of step $k$.
Referring to fig. 3, the PDR dead-reckoning process specifically includes:
the walking process of the pedestrian has a periodic change rule. According to the movement characteristics of the pedestrian in the walking process, the walking steps can be accurately calculated by analyzing the three-axis acceleration change rule of the accelerometer. Due to the shaking of the body and the error of the sensor in the walking process of the pedestrian, the raw acceleration data needs to be preprocessed by adopting a smoothing filtering method after being acquired, namely:
Figure 385833DEST_PATH_IMAGE018
wherein the content of the first and second substances,
Figure 987716DEST_PATH_IMAGE019
is composed oftThe acceleration after the time of day has been filtered,
Figure 775543DEST_PATH_IMAGE020
is as follows
Figure 271115DEST_PATH_IMAGE021
The acceleration at the moment of time is,Mis the size of the sliding window. In a specific implementation process, the selection of the size of the sliding window is related to the acquisition frequency and the step frequency of the acceleration data, and the sliding window is generally set to be about 5, so that a good gait detection effect can be obtained.
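A sketch of this moving-average prefilter, assuming a trailing window of M samples (M = 5 as suggested above):

    import numpy as np

    def smooth(acc: np.ndarray, M: int = 5) -> np.ndarray:
        """Moving-average filter over raw accelerometer samples (one axis)."""
        kernel = np.ones(M) / M
        # mode="same" keeps the length; the zero-padded edges are attenuated.
        return np.convolve(acc, kernel, mode="same")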
Gait detection is carried out after the raw acceleration data have been smoothed. Because the posture in which the pedestrian holds the phone is not fixed, gait detection on a single-axis acceleration value runs into weak periodicity, so the three-axis resultant acceleration $acc$ is used as the basis for gait detection; its magnitude is computed as

$$acc = \sqrt{a_x^2 + a_y^2 + a_z^2}$$

where $a_x$, $a_y$, $a_z$ denote the $x$-, $y$-, $z$-axis components of the smoothed acceleration.
based on the resultant acceleration
Figure 839369DEST_PATH_IMAGE030
And the time interval between two successive steps to be determined to determine whether a step occurs:
suppose that
Figure 637560DEST_PATH_IMAGE031
Resultant acceleration of time of day
Figure 305302DEST_PATH_IMAGE032
Is as follows
Figure 380706DEST_PATH_IMAGE033
The peak value in step time is recorded as
Figure 436386DEST_PATH_IMAGE034
. Then
Figure 580929DEST_PATH_IMAGE035
It should satisfy:
Figure 317940DEST_PATH_IMAGE036
wherein the content of the first and second substances,
Figure 372484DEST_PATH_IMAGE037
is composed oft-a resultant acceleration at time 1,
Figure 208853DEST_PATH_IMAGE038
is composed oftThe resultant acceleration at time + 1;
the specific criteria for determining the occurrence of one step are:
Figure 981637DEST_PATH_IMAGE039
wherein the content of the first and second substances,
Figure 256761DEST_PATH_IMAGE040
is an acceleration peak threshold;
Figure 556024DEST_PATH_IMAGE041
being the time interval of adjacent peaks, i.e. of
Figure 687928DEST_PATH_IMAGE042
The duration of the step(s) is,
Figure 885691DEST_PATH_IMAGE043
and
Figure 636609DEST_PATH_IMAGE044
lower and upper thresholds for the time interval.
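A peak-based step detector sketching the two criteria above. The default values here are illustrative stand-ins consistent with the text (peak threshold near 12 m/s²; interval bounds within the 0.5-5 Hz cadence range), not values fixed by the patent:

    import numpy as np

    def detect_steps(acc: np.ndarray, t: np.ndarray,
                     acc_th: float = 12.0,
                     t_min: float = 0.2, t_max: float = 2.0) -> list:
        """Return sample indices where a step is detected."""
        steps, last_t = [], None
        for i in range(1, len(acc) - 1):
            is_peak = acc[i] > acc[i - 1] and acc[i] > acc[i + 1]
            # The first peak has no predecessor, so skip the interval test.
            gap_ok = last_t is None or t_min < t[i] - last_t < t_max
            if is_peak and acc[i] > acc_th and gap_ok:
                steps.append(i)
                last_t = t[i]
        return steps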
Considering the influence of walking speed, the acceleration peak threshold $acc_{th}$ and the time-interval thresholds $t_{min}$, $t_{max}$ are further set dynamically. The peak threshold $acc_{th}$ is limited to a fixed range (in m/s²) and, according to the ratio of the current acceleration peak to the mean of the recent acceleration peaks, is dynamically adjusted as

$$acc_{th}(k+1) = r_k \cdot acc_{th}(k)$$

where $acc_{th}(k)$ is the peak threshold used when computing step $k$; $acc_{th}(k+1)$ is the peak threshold used when computing step $k+1$; and $r_k$ is the ratio of the resultant-acceleration peak of step $k$ to the mean resultant-acceleration peak of the previous three steps. The peak threshold for the first three steps is set to about 12.
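In code, the adaptation might look like the following sketch; the clamping bounds [10, 15] m/s² are assumed stand-ins for the range lost from the original, while the ~12 initialization follows the text:

    def update_peak_threshold(acc_th: float, peak: float,
                              recent_peaks: list) -> float:
        """Adapt the peak threshold to the pedestrian's walking intensity."""
        if len(recent_peaks) < 3:        # first three steps: keep the ~12 default
            return acc_th
        r = peak / (sum(recent_peaks[-3:]) / 3.0)
        return min(max(acc_th * r, 10.0), 15.0)   # assumed clamping range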
The time-interval thresholds $t_{min}$ and $t_{max}$ are set with reference to the 0.5-5 Hz cadence range of normal walking. The specific dynamic adjustment selects among three preset interval pairs according to whether the current peak threshold is below 12, between 12 and 13.5, or at least 13.5.
After a step is detected, the step length and heading of that step are estimated. The length $l_k$ of step $k$ is estimated with the Weinberg step-length model:

$$l_k = K \cdot \sqrt[4]{acc_{max} - acc_{min}}$$

where $K$ is the step-length coefficient, which differs between pedestrians and depends on factors such as height and step frequency, and $acc_{max}$ and $acc_{min}$ are the maximum and minimum resultant accelerations during step $k$.
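The Weinberg model in code; the coefficient K is user-specific, and 0.48 is only a typical illustrative value:

    def weinberg_step_length(acc_max: float, acc_min: float,
                             K: float = 0.48) -> float:
        """Step length from the fourth root of the acceleration swing."""
        return K * (acc_max - acc_min) ** 0.25

    # Example: acc_max = 14.2, acc_min = 7.8 gives roughly 0.76 m.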
In a specific implementation, gyroscope-based heading estimation can only provide the PDR with a relative heading. Given the initial heading information, the heading at the current time is obtained by integrating the angular rate output by the gyroscope:

$$\psi_k = \psi_0 + \int_{t_0}^{t_k} \omega_z \, dt = \psi_0 + \sum_{i=1}^{k} \Delta\psi_i$$

where $\psi_0$ is the initial heading angle; $\omega_z$ is the gyroscope's angular-rate measurement about the $Z$ axis of the navigation coordinate system; $\Delta\psi_i$ is the heading-angle change over step $i$; $t_0$ is the initial time of the dead reckoning; and $t_k$ is the time corresponding to step $k$.
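A sketch of this heading integration using simple rectangular integration over gyroscope samples:

    import numpy as np

    def integrate_heading(psi0: float, omega_z: np.ndarray,
                          dt: np.ndarray) -> np.ndarray:
        """psi0 in radians; omega_z and dt are per-sample rate and period."""
        return psi0 + np.cumsum(omega_z * dt)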
Step 4, visual positioning: when the pedestrian's walking distance reaches the set threshold, obtaining the pedestrian's global positioning information at the current time through global positioning on the visual feature map
When the pedestrian's walking distance computed by the PDR reaches the set threshold $d_{th}$, the place recognition algorithm based on the visual feature map is invoked, and a visual global positioning result is computed within the map built in step 1. Visual place recognition follows the same principle as loop-closure detection in visual SLAM, and the first stage of global positioning can be regarded as a loop-detection process. First, the ORB feature points and descriptors of the current frame are extracted, and the bag-of-words vector of the current frame is computed with a Bag-of-Words (BoW) model. Then, in the visual feature map composed of keyframes, using the position prior computed by the PDR, the distances between the bag-of-words vectors of different images (i.e. the similarity between images) are computed to find keyframes in the map similar to the current frame; these serve as candidate frames for the subsequent fine positioning.
Referring to fig. 4, the process of global positioning based on the visual feature map specifically includes:
Establishing the dictionary of the visual feature map: in visual SLAM, a dictionary of feature descriptors is obtained by clustering the features of a large number of images. With the visual feature map of step 1 built, an ORB dictionary specific to that map can be generated by clustering all feature points appearing in it.
The dictionary is trained with the K-means algorithm to obtain $N$ words $\{w_1, w_2, \ldots, w_N\}$. To improve the efficiency of image matching and querying, the dictionary is represented as a K-ary tree whose leaf layer holds the words. After the K-ary-tree dictionary is constructed, each word is assigned a weight by the TF-IDF (Term Frequency-Inverse Document Frequency) method. The idea of IDF is that the lower a word's frequency in the dictionary, the more discriminative it is when classifying images:

$$IDF_i = \log \frac{n}{n_i}$$

where $IDF_i$ is the IDF value of word $w_i$, $n$ is the total number of features in the dictionary, and $n_i$ is the number of features in word $w_i$.

The idea of TF is that the more often a word appears in an image, the more discriminative it is there. Suppose word $w_i$ appears $m_i$ times in image $A$ while the total number of word occurrences is $m$; then the TF value of word $w_i$ is

$$TF_i = \frac{m_i}{m}$$

Finally, the weight $\eta_i$ of word $w_i$ is

$$\eta_i = TF_i \times IDF_i$$

For an image $A$, whose feature points correspond to many words, computing the TF-IDF values yields the bag-of-words vector describing the image:

$$v_A = [\eta_1, \eta_2, \ldots, \eta_N]$$

where $N$ is the total number of words in the dictionary, $\eta_i$ is the TF-IDF value of word $w_i$, and $v_A$ is the bag-of-words vector of image $A$.
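A sketch of this TF-IDF weighting, assuming word_feature_counts[w] holds n_w (the number of training features clustered into word w) and image_words lists the word id of every feature in the query image (both names are illustrative):

    import math
    from collections import Counter

    def bow_vector(image_words: list, word_feature_counts: dict) -> dict:
        """Sparse bag-of-words vector {word_id: TF * IDF}."""
        n = sum(word_feature_counts.values())   # total features in dictionary
        m = len(image_words)                    # word occurrences in this image
        tf = {w: c / m for w, c in Counter(image_words).items()}
        return {w: tf[w] * math.log(n / word_feature_counts[w]) for w in tf}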
Similarity between images is typically computed from the distance between bag-of-words vectors in $L_1$-norm form, i.e.

$$s(v_A, v_B) = \frac{1}{2} \sum_{i=1}^{N} \left( |v_{Ai}| + |v_{Bi}| - |v_{Ai} - v_{Bi}| \right)$$

where $s(v_A, v_B)$ is the similarity between images $A$ and $B$; $v_B$ is the bag-of-words vector of image $B$; and $v_{Ai}$ and $v_{Bi}$ are the $i$-th components of $v_A$ and $v_B$, i.e. the weights of each visual word $w_i$.
acquiring a current frame image acquired by a camera on a smart phone, calculating the similarity between the current frame and all key frames near a PDR calculated position in a visual feature map, selecting a plurality of frames with the highest similarity as candidate frames, and performing feature matching and PnP pose solving to obtain accurate global positioning information, wherein the specific implementation process comprises the following steps:
feature matching refers to determining correspondence between feature points of different images, and the similarity between feature points is usually measured by using a feature descriptor distance. For BRIEF binary descriptors of ORB features, hamming distance is usually adopted
$D(f_1, f_2)$ to express the similarity, i.e.

$$D(f_1, f_2) = \sum_{i} f_1^{(i)} \oplus f_2^{(i)}$$

where $\oplus$ denotes the XOR operation, the sum runs over the descriptor bits, and $f_1$, $f_2$ are the BRIEF descriptors of ORB feature points in the two images.
On top of this Hamming-distance similarity measure, fast approximate nearest-neighbor (FLANN) matching is used to match feature points. Since mismatches may occur, random sample consensus (RANSAC) is used to screen the matches and remove mismatched point pairs.
After the feature-matching relation between the current frame and a candidate frame is obtained, and since the three-dimensional coordinates of the candidate frame's feature points are known in the visual feature map, the pose of the current frame relative to the map is solved with the PnP (Perspective-n-Point) method. PnP solves the pose from 3D-2D point pairs, where the 3D points come from the visual feature map and the 2D points are the feature points of the current frame. Via nonlinear optimization, the PnP problem is formulated as a nonlinear least-squares problem that minimizes the reprojection error.
Consider $n$ three-dimensional space points $P_i$ and their projections $u_i$; the camera pose $R, t$ is sought, with Lie-group representation $T$. Suppose a spatial point has coordinates $P_i = [X_i, Y_i, Z_i]^T$ and projected pixel coordinates $u_i = [u_i, v_i]^T$. Because the camera pose is unknown and the observations are noisy, there is an error between the projected and observed positions of each 3D point. Summing all reprojection errors yields a least-squares problem that is solved iteratively for the optimal camera pose $T^*$ minimizing it:

$$T^* = \arg\min_{T} \frac{1}{2} \sum_{i=1}^{n} \left\| u_i - \frac{1}{s_i} K T P_i \right\|_2^2$$

where $s_i$ is a scale factor and $K$ is the camera intrinsic matrix.
After the translation and rotation between each candidate frame and the current frame are computed, abnormal candidate frames are removed by RANSAC. Finally, all map points in the remaining candidate frames are projected onto the current frame to search for feature matches; if the number of matches exceeds the set threshold $N_{th}$, the camera pose result is accepted; otherwise no vision correction is performed and the filtering-fusion step 5 below is skipped.
After the camera pose is computed, the camera position $p_{vis}$ serves as the pedestrian's position reference at the current time, and the camera attitude matrix $R$ is converted to Euler angles to obtain the pedestrian's current reference heading angle $\psi_{vis}$:

$$\psi_{vis} = \arctan \frac{R_{21}}{R_{11}}$$

where $R_{21}$ is the element of attitude matrix $R$ at row 2, column 1, and $R_{11}$ is the element at row 1, column 1.
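A hedged sketch of this fine-positioning step using OpenCV: solve PnP with RANSAC from the 3D map points of the candidate keyframes and the matched 2D points of the current frame, then read the reference heading off the rotation matrix as in the formula above. K is the camera intrinsic matrix; the helper name solve_pose is hypothetical:

    import cv2
    import numpy as np

    def solve_pose(pts3d: np.ndarray, pts2d: np.ndarray, K: np.ndarray):
        """Camera position and yaw in map coordinates, or None on failure."""
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts3d.astype(np.float64), pts2d.astype(np.float64), K, None)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)          # rotation vector -> matrix
        cam_pos = (-R.T @ tvec).ravel()     # camera center in map frame
        yaw = np.arctan2(R[1, 0], R[0, 0])  # psi_vis = atan(R21 / R11)
        return cam_pos, yaw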
Step 5, correcting the PDR positioning result by taking the visual positioning result as the positioning reference, and repeating steps 3 to 5 after taking the corrected PDR positioning result as the pedestrian's new initial position and heading angle.
In the specific implementation process, the PDR and the visual positioning result are loosely combined and fused based on an Extended Kalman Filtering (EKF) method, and the visual position identification result is used as a positioning reference, so that the accumulated error of the PDR can be corrected, the positioning precision is improved, and the problem of pedestrian positioning of the PDR in a three-dimensional space can be solved.
In the prediction stage of the extended Kalman filter (EKF), the state-transition equation of the pedestrian at step $k$ is first established as

$$\hat{X}_k = f(X_{k'}) + W_k, \qquad f(X_{k'}) = \begin{bmatrix} x_{k'} + \sum_{i=k'+1}^{k} l_i \cos\psi_i \\ y_{k'} + \sum_{i=k'+1}^{k} l_i \sin\psi_i \\ \psi_{k'} + \sum_{i=k'+1}^{k} \Delta\psi_i \end{bmatrix}$$

where $\hat{X}_k = [\hat{x}_k, \hat{y}_k, \hat{\psi}_k]^T$ is the state prediction vector at step $k$, i.e. the pedestrian's position coordinates and heading angle at step $k$ obtained by pedestrian dead reckoning; $X_{k'} = [x_{k'}, y_{k'}, \psi_{k'}]^T$ is the state vector obtained by the EKF optimal estimate at step $k'$, i.e. the vision-corrected position coordinates and heading angle of the pedestrian at step $k'$, whose initial value is set to the PDR's initial position and heading, $X_0 = [x_0, y_0, \psi_0]^T$; $f(\cdot)$ is the nonlinear function of the state-transition equation; $k'$ is the step index at which the visual positioning result was last invoked to correct the PDR result; $W_k$ is the process noise vector; and $l_i$, $\psi_i$, $\Delta\psi_i$ are the step length, heading angle, and heading-angle change of step $i$.

The nonlinear function $f(\cdot)$ of the state-transition equation is linearized near $X_{k'}$, dropping the higher-order terms, to obtain the state matrix $F_k$ corresponding to step $k$:

$$F_k = \left. \frac{\partial f}{\partial X} \right|_{X = X_{k'}}$$

Then the covariance matrix $\hat{P}_k$ of the prediction $\hat{X}_k$ is updated as

$$\hat{P}_k = F_k P_{k'} F_k^T + Q$$

where $P_{k'}$ is the covariance matrix of the EKF optimal state estimate at step $k'$, with initial value $P_0$; and $Q = \mathrm{diag}(\sigma_x^2, \sigma_y^2, \sigma_\psi^2)$ is the process noise matrix of the prediction model itself, composed of the average errors of the PDR elements, where $\sigma_x$, $\sigma_y$ denote the average position errors and $\sigma_\psi$ the average heading-angle error.
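A minimal sketch of this prediction stage for the state X = [x, y, psi], accumulating the PDR steps since the last vision fix and propagating the covariance; the noise values in Q are illustrative, not from the patent:

    import numpy as np

    def ekf_predict(X, P, steps, Q):
        """steps: list of (step_length, heading, d_heading) since last fix."""
        x, y, psi = X
        F = np.eye(3)
        for l, h, dh in steps:
            x += l * np.cos(h)
            y += l * np.sin(h)
            psi += dh
            # dx/dpsi and dy/dpsi accumulate through the heading state.
            F[0, 2] += -l * np.sin(h)
            F[1, 2] += l * np.cos(h)
        return np.array([x, y, psi]), F @ P @ F.T + Q

    Q = np.diag([0.1**2, 0.1**2, np.deg2rad(2.0)**2])  # illustrative PDR errors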
In the update stage of the extended Kalman filter, the observation equation of the system is

$$Z_k = H \hat{X}_k + V_k$$

where $H$ is the observation matrix; $Z_k = [x_k^{vis}, y_k^{vis}, \psi_k^{vis}]^T$ is the observation vector obtained by visual positioning recognition at step $k$, with $(x_k^{vis}, y_k^{vis})$ the position information and $\psi_k^{vis}$ the heading angle of the visual positioning at step $k$; $V_k$ is the observation error vector; and $\hat{X}_k = [\hat{x}_k, \hat{y}_k, \hat{\psi}_k]^T$ is the state vector obtained by PDR positioning at step $k$, with $(\hat{x}_k, \hat{y}_k)$ the position information and $\hat{\psi}_k$ the heading angle of the PDR at step $k$.
The EKF gain matrix $K_k$ at step $k$ is computed as

$$K_k = \hat{P}_k H^T \left( H \hat{P}_k H^T + R_k \right)^{-1}$$

where $R_k$ is the observation noise covariance matrix corresponding to step $k$, computed over a sliding window as

$$R_k = \frac{1}{L} \sum_{i=k-L+1}^{k} \left( Z_i - H \hat{X}_i \right) \left( Z_i - H \hat{X}_i \right)^T$$

where $L$ is the window length, $Z_i$ is the observation vector obtained by visual place recognition at step $i$, and $\hat{X}_i$ is the state vector computed by the PDR at step $i$.
The pedestrian's optimal state estimate $X_k$ at step $k$ is computed as

$$X_k = \hat{X}_k + K_k \left( Z_k - H \hat{X}_k \right)$$
At the same time, the covariance matrix of the optimal state estimate is updated for the next run of the extended Kalman filter:

$$P_k = (I - K_k H) \hat{P}_k$$

where $I$ is the identity matrix.
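A matching sketch of the update stage: gain, state correction, and covariance update, with H = I since vision observes position and heading directly; adaptive_R estimates R_k from a window of recent vision-PDR differences as described above (both helper names are illustrative):

    import numpy as np

    def ekf_update(X_pred, P_pred, Z, R):
        """Correct the PDR prediction with a visual fix Z = [x, y, psi]."""
        H = np.eye(3)
        S = H @ P_pred @ H.T + R                # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
        X = X_pred + K @ (Z - H @ X_pred)       # optimal state estimate
        P = (np.eye(3) - K @ H) @ P_pred        # covariance for the next cycle
        return X, P

    def adaptive_R(diffs):
        """diffs: recent (Z_i - X_pred_i) vectors over the sliding window."""
        V = np.array(diffs)
        return (V[:, :, None] * V[:, None, :]).mean(axis=0)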
In the specific calculation process, the height value in the visual positioning result can be directly adopted for the height position of the pedestrian, so that the indoor positioning of the pedestrian in the three-dimensional space is realized.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent structural changes made by using the contents of the present specification and the drawings, or any other related technical fields, which are directly or indirectly applied to the present invention, are included in the scope of the present invention.

Claims (8)

1. A pedestrian positioning method based on smartphone PDR and vision correction, characterized by comprising the following steps:
step 1, establishing a visual feature map of the area to be detected, stored as a map database organized around keyframes;
step 2, determining the initial position and heading angle of the pedestrian through global positioning on the visual feature map;
step 3, PDR positioning: performing pedestrian dead reckoning based on the PDR from the initial position and heading angle, and accumulating the pedestrian's walking distance;
step 4, visual positioning: when the pedestrian's walking distance reaches a set threshold, obtaining the pedestrian's global positioning information at the current time through global positioning on the visual feature map;
and step 5, correcting the PDR positioning result by taking the visual positioning result as the positioning reference, and repeating steps 3 to 5 after taking the corrected PDR positioning result as the pedestrian's new initial position and heading angle.
2. The pedestrian positioning method based on smartphone PDR and vision correction as claimed in claim 1, wherein in step 3, the PDR-based pedestrian dead reckoning specifically comprises:
acquiring the raw acceleration data of the accelerometer in the smartphone and preprocessing them with a smoothing filter:

$$\bar{a}_t = \frac{1}{M} \sum_{i=t-M+1}^{t} a_i$$

wherein $\bar{a}_t$ is the filtered acceleration at time $t$, $a_i$ is the acceleration at time $i$, and $M$ is the size of the sliding window;
combining the three-axis components of the filtered acceleration data to obtain the resultant acceleration $acc$:

$$acc = \sqrt{a_x^2 + a_y^2 + a_z^2}$$

wherein $a_x$, $a_y$, $a_z$ denote the $x$-, $y$-, $z$-axis components of the smoothed acceleration;
judging, from the resultant acceleration $acc$ and the time interval between two successive candidate steps, that one step occurs when

$$acc_{peak} > acc_{th} \quad \text{and} \quad t_{min} < \Delta t_k < t_{max}$$

wherein $acc_{peak}$ is the peak of the resultant acceleration within the time of step $k$; $acc_{th}$ is the acceleration peak threshold; $\Delta t_k$ is the duration of step $k$; and $t_{min}$ and $t_{max}$ are the lower and upper thresholds on the time interval;
after a step is detected, estimating the step length and heading of that step:

$$l_k = K \cdot \sqrt[4]{acc_{max} - acc_{min}}$$

wherein $l_k$ is the step length of step $k$; $K$ is the step-length coefficient; and $acc_{max}$, $acc_{min}$ are the maximum and minimum resultant accelerations during step $k$;

$$\psi_k = \psi_0 + \int_{t_0}^{t_k} \omega_z \, dt = \psi_0 + \sum_{i=1}^{k} \Delta\psi_i$$

wherein $\psi_k$ is the heading angle of step $k$; $\psi_0$ is the initial heading angle; $\omega_z$ is the gyroscope's angular-rate measurement about the $Z$ axis of the navigation coordinate system; $\Delta\psi_i$ is the heading-angle change of step $i$; $t_0$ is the initial time of the dead reckoning; and $t_k$ is the time corresponding to step $k$;
and finally, updating the position according to the step length and heading:

$$\begin{cases} x_k = x_{k-1} + l_k \cos\psi_k \\ y_k = y_{k-1} + l_k \sin\psi_k \end{cases}$$

wherein $(x_k, y_k)$ is the pedestrian's position at step $k$ and $(x_{k-1}, y_{k-1})$ is the position at step $k-1$.
3. The pedestrian positioning method based on smartphone PDR and vision correction as claimed in claim 2, wherein, considering the influence of the pedestrian's walking speed, the acceleration peak threshold $acc_{th}$ is set dynamically as:

$$acc_{th}(k+1) = r_k \cdot acc_{th}(k)$$

wherein $acc_{th}(k)$ is the peak threshold used when computing step $k$; $acc_{th}(k+1)$ is the peak threshold used when computing step $k+1$; and $r_k$ is the ratio of the resultant-acceleration peak of step $k$ to the mean resultant-acceleration peak of the previous three steps.
4. The pedestrian positioning method based on smartphone PDR and vision correction as claimed in claim 3, wherein, considering the influence of the pedestrian's walking speed, the time-interval thresholds $t_{min}$ and $t_{max}$ are set dynamically as follows:
if the peak threshold at the current step's computation is less than 12, $t_{min}$ and $t_{max}$ take a first preset pair of values;
if the peak threshold at the current step's computation is greater than or equal to 12 and less than 13.5, $t_{min}$ and $t_{max}$ take a second preset pair of values;
if the peak threshold at the current step's computation is greater than or equal to 13.5, $t_{min}$ and $t_{max}$ take a third preset pair of values.
5. The pedestrian positioning method based on the smartphone PDR and the visual correction according to any one of claims 1 to 4, wherein in step 4, the global positioning based on the visual feature map obtains global positioning information of a pedestrian at the current time, specifically:
acquiring a current frame image through a camera of the smart phone, extracting ORB feature points and feature descriptors of the current frame, and calculating bag-of-word vectors of the current frame;
searching a key frame similar to the current frame in the visual feature map based on the distance between the bag-of-word vectors of different images to serve as a candidate frame;
establishing 2D-3D point pair matching between a current frame and a candidate frame, after eliminating abnormal candidate frames by adopting an RANSAC method, projecting all map points in the remaining candidate frames to the current frame for searching characteristic matching, summing all re-projection errors to construct a least square problem, and then solving by adopting a PnP method to obtain the pose of the camera;
and finally, converting the attitude matrix of the camera into an Euler angle to obtain the course angle of the current position of the pedestrian.
6. The pedestrian positioning method based on smart phone PDR and vision correction as claimed in any one of claims 1 to 4, wherein in step 5, the PDR positioning result and the vision positioning result are loosely combined and fused based on the extended Kalman filtering method, and the vision positioning result is used as a positioning reference to correct the accumulated error of the PDR positioning result.
7. The pedestrian positioning method based on smartphone PDR and vision correction as claimed in claim 6, wherein in step 5, the accumulated error of the positioning result of PDR corrected by taking the vision positioning result as the positioning reference specifically is:
in the prediction stage of the extended Kalman filtering method, the first step is established
Figure 941398DEST_PATH_IMAGE051
The pedestrian state transition equation during stepping is:
Figure 485815DEST_PATH_IMAGE052
wherein, the first and the second end of the pipe are connected with each other,
Figure 359094DEST_PATH_IMAGE053
is as follows
Figure 739259DEST_PATH_IMAGE054
State prediction vector of step, i.e. pedestrian first through PDR
Figure 395500DEST_PATH_IMAGE055
Position coordinates and course angle of the step
Figure 115194DEST_PATH_IMAGE056
Figure 221690DEST_PATH_IMAGE057
Figure 948207DEST_PATH_IMAGE058
Figure 736034DEST_PATH_IMAGE059
For the second one by the extended Kalman Filter method
Figure 372552DEST_PATH_IMAGE060
State vectors obtained by stepwise optimal estimation of the pedestrian, i.e. the pedestrian obtained by visual correction
Figure 259736DEST_PATH_IMAGE061
Position coordinates and course angle of step
Figure 286598DEST_PATH_IMAGE062
Figure 940434DEST_PATH_IMAGE063
Figure 290512DEST_PATH_IMAGE064
Figure 676494DEST_PATH_IMAGE065
Is a non-linear function;
Figure 252969DEST_PATH_IMAGE066
indicating the step number corresponding to the last time of calling the visual positioning result to correct the PDR positioning result;
Figure 320282DEST_PATH_IMAGE067
in order to be a vector of the process noise,
Figure 400234DEST_PATH_IMAGE068
is as followsiThe step size of the step(s) is,
Figure 222696DEST_PATH_IMAGE069
is as followsiThe course angle of the step(s) is,
Figure 145522DEST_PATH_IMAGE070
is as followsiStep course angle variation;
to make a non-linear function
Figure 141160DEST_PATH_IMAGE071
In that
Figure 278880DEST_PATH_IMAGE072
The neighborhood is linearized, and the high-order part is removed to obtain the second
Figure 209927DEST_PATH_IMAGE073
State matrix corresponding to step
Figure 495415DEST_PATH_IMAGE074
The method comprises the following steps:
Figure 966847DEST_PATH_IMAGE075
based on state matrices
Figure 411604DEST_PATH_IMAGE076
For predicted variable
Figure 372607DEST_PATH_IMAGE077
Covariance matrix of
Figure 83074DEST_PATH_IMAGE078
Updating is carried out as follows:
Figure 295880DEST_PATH_IMAGE079
wherein, the first and the second end of the pipe are connected with each other,
Figure 673772DEST_PATH_IMAGE080
represents the application of the extended Kalman Filter method to the second
Figure 540097DEST_PATH_IMAGE081
A covariance matrix of the optimal estimate of the step state,
Figure 190390DEST_PATH_IMAGE082
representing a process noise matrix;
in the updating stage of the extended Kalman filtering method, the observation equation of the system is as follows:

$$Z_k = H X_k + V_k$$

wherein $H$ is the observation matrix; $Z_k = \left[x_k^{v}, y_k^{v}, \psi_k^{v}\right]^{\mathrm{T}}$ denotes the observation vector obtained by visual positioning at the $k$-th step, where $x_k^{v}$, $y_k^{v}$ and $\psi_k^{v}$ are the position information and course angle given by visual positioning at the $k$-th step; $V_k$ is the observation error vector;
the EKF gain matrix $K_k$ of the $k$-th step is calculated as follows:

$$K_k = \hat{P}_k H^{\mathrm{T}} \left(H \hat{P}_k H^{\mathrm{T}} + R_k\right)^{-1}$$

wherein $R_k$ is the observation noise covariance matrix corresponding to the $k$-th step;

the optimal estimate $X_k$ of the pedestrian's $k$-th step state is calculated as follows:

$$X_k = \hat{X}_k + K_k \left(Z_k - H \hat{X}_k\right)$$

wherein $X_k$ is the corrected PDR positioning result.
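(A matching sketch of the update stage, again illustrative; taking $H$ as the 3x3 identity is an assumption consistent with directly observing position and course angle.)

```python
import numpy as np

def visual_update(X_pred, P_pred, Z_k, R_k):
    """EKF update: correct the PDR prediction with a visual fix
    Z_k = [x_v, y_v, psi_v]."""
    H = np.eye(3)  # assumed: the full state is observed directly

    # Gain: K = P_hat H^T (H P_hat H^T + R)^(-1)
    S = H @ P_pred @ H.T + R_k
    K_k = P_pred @ H.T @ np.linalg.inv(S)

    # Optimal estimate: X = X_hat + K (Z - H X_hat)
    X_est = X_pred + K_k @ (Z_k - H @ X_pred)
    return X_est, K_k, H
```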
8. The pedestrian positioning method based on smartphone PDR and vision correction as claimed in claim 7, wherein the update process of the covariance matrix of the optimal state estimate is:

$$P_k = \left(I - K_k H\right) \hat{P}_k$$

wherein $P_k$ is the covariance matrix of the optimal estimate of the $k$-th step state, and $I$ is the identity matrix.
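(The covariance update of claim 8 closes one filter cycle; a short sketch continuing the hypothetical helpers above.)

```python
import numpy as np

def covariance_update(P_pred, K_k, H):
    # Claim 8: P = (I - K H) P_hat
    n = P_pred.shape[0]
    return (np.eye(n) - K_k @ H) @ P_pred

# Illustrative usage, assuming the hypothetical helpers defined earlier:
# X_pred, P_pred = pdr_predict(X_corr, P_corr, steps, Q)
# X_corr, K_k, H = visual_update(X_pred, P_pred, Z_k, R_k)
# P_corr = covariance_update(P_pred, K_k, H)
```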
CN202211133744.1A 2022-09-19 2022-09-19 Pedestrian positioning method based on smart phone PDR and vision correction Active CN115235455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211133744.1A CN115235455B (en) 2022-09-19 2022-09-19 Pedestrian positioning method based on smart phone PDR and vision correction


Publications (2)

Publication Number Publication Date
CN115235455A true CN115235455A (en) 2022-10-25
CN115235455B CN115235455B (en) 2023-01-13

Family

ID=83681806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211133744.1A Active CN115235455B (en) 2022-09-19 2022-09-19 Pedestrian positioning method based on smart phone PDR and vision correction

Country Status (1)

Country Link
CN (1) CN115235455B (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090192708A1 (en) * 2008-01-28 2009-07-30 Samsung Electronics Co., Ltd. Method and system for estimating step length pedestrian navigation system
EP2386828A1 (en) * 2010-05-12 2011-11-16 Technische Universität Graz Method and system for detection of a zero velocity state of an object
CN104215238A (en) * 2014-08-21 2014-12-17 北京空间飞行器总体设计部 Indoor positioning method of intelligent mobile phone
WO2018043934A1 (en) * 2016-09-02 2018-03-08 유치헌 System and method for zero-delay real-time detection of walking using acceleration sensor
CN109405829A (en) * 2018-08-28 2019-03-01 桂林电子科技大学 Pedestrian's method for self-locating based on smart phone audio-video Multi-source Information Fusion
CN112129281A (en) * 2019-06-25 2020-12-25 南京航空航天大学 High-precision image navigation positioning method based on local neighborhood map
CN111595344A (en) * 2020-06-01 2020-08-28 中国矿业大学 Multi-posture downlink pedestrian dead reckoning method based on map information assistance
CN112637762A (en) * 2020-12-11 2021-04-09 武汉科技大学 Indoor fusion positioning method based on improved PDR algorithm
CN113029148A (en) * 2021-03-06 2021-06-25 西南交通大学 Inertial navigation indoor positioning method based on course angle accurate correction
CN114111784A (en) * 2021-10-26 2022-03-01 杭州电子科技大学 Crowdsourcing-based automatic construction method and system for indoor corridor map

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BAO Chuan: "Multi-sensor fusion based three-dimensional map construction for mobile robots", China Master's Theses Full-text Database, Basic Sciences *
JIE Yunfei et al.: "Analysis of visual SLAM systems", Computer Knowledge and Technology *
ZHU Huiping: "Research on indoor positioning methods based on image retrieval and dead reckoning", China Master's Theses Full-text Database, Information Science and Technology *
LI Yixing et al.: "RGB-D SLAM algorithm based on improved keyframe selection", Journal of Dalian University of Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681935A (en) * 2023-05-31 2023-09-01 国家深海基地管理中心 Autonomous recognition and positioning method and system for deep sea hydrothermal vent
CN116681935B (en) * 2023-05-31 2024-01-23 国家深海基地管理中心 Autonomous recognition and positioning method and system for deep sea hydrothermal vent

Also Published As

Publication number Publication date
CN115235455B (en) 2023-01-13

Similar Documents

Publication Publication Date Title
CN112014857B (en) Three-dimensional laser radar positioning and navigation method for intelligent inspection and inspection robot
CN109993113B (en) Pose estimation method based on RGB-D and IMU information fusion
CN110125928B (en) Binocular inertial navigation SLAM system for performing feature matching based on front and rear frames
JP7326720B2 (en) Mobile position estimation system and mobile position estimation method
CN105371847B (en) A kind of interior real scene navigation method and system
CN105241445B (en) A kind of indoor navigation data capture method and system based on intelligent mobile terminal
CN105761242B (en) Blind person walking positioning method based on computer binocular vision and inertial measurement
US9071829B2 (en) Method and system for fusing data arising from image sensors and from motion or position sensors
CN110553648A (en) method and system for indoor navigation
CN110579207B (en) Indoor positioning system and method based on combination of geomagnetic signals and computer vision
CN108549376A (en) A kind of navigation locating method and system based on beacon
CN111595344B (en) Multi-posture downlink pedestrian dead reckoning method based on map information assistance
CN110631578B (en) Indoor pedestrian positioning and tracking method under map-free condition
CN110032965A (en) Vision positioning method based on remote sensing images
KR101985344B1 (en) Sliding windows based structure-less localization method using inertial and single optical sensor, recording medium and device for performing the method
CN115235455B (en) Pedestrian positioning method based on smart phone PDR and vision correction
CN116007609A (en) Positioning method and computing system for fusion of multispectral image and inertial navigation
CN112985392B (en) Pedestrian inertial navigation method and device based on graph optimization framework
CN113155126A (en) Multi-machine cooperative target high-precision positioning system and method based on visual navigation
CN112798020B (en) System and method for evaluating positioning accuracy of intelligent automobile
Liu et al. Integrated velocity measurement algorithm based on optical flow and scale-invariant feature transform
CN113114850B (en) Online fusion positioning method based on surveillance video and PDR
WO2020076263A2 (en) A system for providing position determination with high accuracy
Chen et al. ReLoc-PDR: Visual Relocalization Enhanced Pedestrian Dead Reckoning via Graph Optimization
CN115574816B (en) Bionic vision multi-source information intelligent perception unmanned platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant