CN115235455A - Pedestrian positioning method based on smart phone PDR and vision correction - Google Patents
Publication number: CN115235455A
Application number: CN202211133744.1A
Authority: CN (China)
Legal status: Granted (the status listed is an assumption and is not a legal conclusion)
Classifications
- G01C21/005 — Navigation; navigational instruments with correlation of navigation data from several sources, e.g. map or contour matching
- G01C21/20 — Instruments for performing navigational calculations
- G06T7/70 — Determining position or orientation of objects or cameras
Abstract
The invention discloses a pedestrian positioning method based on smartphone PDR and vision correction, comprising the following steps: establishing a visual feature map of the region to be measured; determining the pedestrian's initial position and heading angle by global positioning against the visual feature map; performing pedestrian dead reckoning based on PDR from that initial position and heading angle while accumulating the walking distance; when the walking distance reaches a set threshold, obtaining the pedestrian's global position at the current moment by global positioning against the visual feature map; and correcting the PDR positioning result using the visual positioning result as a reference. Applied to the field of pedestrian navigation, the method corrects the position and heading-angle errors of PDR by intermittently invoking visual positioning, which not only yields a clear improvement in positioning performance but also extends the application scenario of conventional PDR from a two-dimensional plane to three-dimensional space, giving it practical research significance and application value.
Description
Technical Field
The invention relates to the technical field of pedestrian navigation, in particular to a pedestrian positioning method based on a smart phone PDR and visual correction.
Background
As people's demand for location-based services grows, indoor positioning has become a research hotspot. Because of signal occlusion and interference, satellite navigation systems cannot meet users' indoor positioning needs in most cases. To address satellite signal blockage in complex indoor environments, researchers have proposed many indoor positioning methods; typical ones include Wi-Fi fingerprinting, Bluetooth, radio-frequency identification, ultra-wideband, vision, and dead reckoning. With the development of microelectronics, Pedestrian Dead Reckoning (PDR) based on the MEMS sensors of mobile smart terminals is favored by researchers for its strong autonomy, continuity, and convenience, since it requires no base stations deployed in advance.
At present, most smartphones contain sensors such as an accelerometer, a gyroscope, and a magnetometer. Pedestrian dead reckoning is an autonomous relative-positioning algorithm that estimates a pedestrian's position with the smartphone's inertial sensors: it computes the walking route and position through gait detection, step-length estimation, and heading calculation. However, because of the limited precision of the smartphone's built-in MEMS sensors and the accumulated error of inertial devices, the positioning error of PDR grows ever larger over long periods of position estimation. In addition, conventional PDR can only estimate the pedestrian's position in a two-dimensional plane; when the pedestrian's height changes, as when going up or down stairs, PDR cannot position accurately.
To mitigate PDR error accumulation, many researchers have proposed combining PDR with other indoor positioning means, such as correcting the PDR result with auxiliary information from Wi-Fi, Bluetooth, or geomagnetism. However, aids that use external signals such as Wi-Fi and Bluetooth require large amounts of infrastructure to be deployed in the indoor scene in advance, and the external signals they rely on are susceptible to interference from the environment. PDR assisted by indoor magnetic-field features requires a great deal of time and effort to build a fine-grained signal fingerprint database in the offline stage, and PDR constrained by map information places high demands on the drawing of a high-precision indoor map. Fusing an absolute-positioning technology with the PDR algorithm can solve PDR's error accumulation, but it requires additional infrastructure, increases the cost of the positioning system, weakens to some extent the autonomy and continuity advantages of inertial navigation, and thus has obvious limitations in practical application. A low-cost technique that assists PDR without depending on external facilities, enabling accurate and robust indoor pedestrian positioning, therefore has important application value.
In recent years computer vision has developed rapidly, and visual SLAM algorithms have steadily matured. Global positioning based on a visual feature map follows the same principle as SLAM loop-closure detection: it is essentially an information-retrieval method that estimates the user's position through visual feature matching. Visual positioning is not limited by the external environment; the user only needs a camera to capture the current image, and every current smartphone has a built-in camera sensor. Therefore, during pedestrian dead reckoning, visual positioning via the smartphone's camera can assist in correcting the accumulated error of the PDR method and thereby improve positioning accuracy. However, although traditional visual matching can provide positioning information, its image query and matching are inefficient, cannot meet real-time requirements, and are difficult to deploy in practical applications.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a pedestrian positioning method based on smartphone PDR and vision correction, which not only yields a clear improvement in positioning performance but also extends the application scenario of conventional PDR from a two-dimensional plane to three-dimensional space, and thus has practical research significance and application value.
In order to achieve the above object, the present invention provides a pedestrian positioning method based on a smartphone PDR and vision correction, comprising the following steps:
Step 1, establishing a visual feature map of the region to be measured, as follows: collect scene images within the region with a vision sensor, perform simultaneous localization and mapping with a visual SLAM algorithm, and store the SLAM mapping result as a map database organized around keyframes, to serve subsequent online visual positioning.
And 2, determining the initial position and the course angle of the pedestrian based on the global positioning of the visual feature map.
Step 3, PDR positioning: starting from the initial position and heading angle, perform pedestrian dead reckoning based on PDR and accumulate the walking distance, as follows: detect the pedestrian's gait by analyzing the output of the smartphone accelerometer; after a step is detected, compute its step length from the acceleration values and compute the direction of travel from the angular-rate output of the gyroscope. Given the initial position and initial heading, the pedestrian's position at each moment can then be reckoned from the obtained step lengths and heading angles.
Step 4, visual positioning: when the pedestrian's walking distance reaches a set threshold, obtain the pedestrian's global position at the current moment by global positioning against the visual feature map, as follows: once the PDR-reckoned walking distance reaches the threshold, capture the current scene image with the smartphone camera and detect the feature points and descriptor information of the current frame. Use the PDR position as a prior to match features against the offline feature map and find candidate keyframes, then establish 2D-3D matches between the current frame and the candidates to obtain the global position at the current moment.
Step 5, correct the PDR positioning result using the visual positioning result as a reference, take the corrected PDR result as the pedestrian's new initial position and heading angle, and repeat steps 3-5. The PDR and visual positioning results are fused with an extended Kalman filter (EKF). PDR is a relative-positioning method that accumulates error during positioning and needs correction by absolute position information. The visual positioning result based on the visual feature map is absolute position information with no error drift, so it can be used to correct PDR's accumulated error, improving positioning accuracy while extending the application scenario of conventional PDR from a two-dimensional plane to three-dimensional space.
The invention provides a pedestrian positioning method based on smartphone PDR and vision correction: pedestrian dead reckoning is realized with the accelerometer and gyroscope built into the smartphone; scene images are captured with the smartphone camera for visual feature-matching positioning based on a bag-of-words model; and the PDR and visual positioning results are loosely fused with an extended Kalman filter (EKF) to obtain the fused estimate of the pedestrian's position. Correcting the position and heading-angle errors of PDR by intermittently invoking visual positioning not only yields a clear improvement in positioning performance but also extends the application scenario of conventional PDR from a two-dimensional plane to three-dimensional space, giving the method practical research significance and application value.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a pedestrian positioning method based on a smartphone PDR and vision correction in an embodiment of the present invention;
FIG. 2 is a diagram illustrating information contained in a single key frame according to an embodiment of the present invention;
FIG. 3 is a flow chart of PDR location in an embodiment of the present invention;
FIG. 4 is a flowchart of visual positioning according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that all directional indicators in the embodiments of the present invention (such as up, down, left, right, front, back …) are only used to explain the relative positional relationships, motion situations, etc. of the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicator changes accordingly.
In addition, the descriptions related to "first", "second", etc. in the present invention are only for descriptive purposes and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "connected," "secured," and the like are to be construed broadly, and for example, "secured" may be a fixed connection, a removable connection, or an integral part; the connection can be mechanical connection, electrical connection, physical connection or wireless communication connection; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In addition, the technical solutions in the embodiments of the present invention may be combined with each other, but it must be based on the realization of those skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination of technical solutions should not be considered to exist, and is not within the protection scope of the present invention.
Fig. 1 shows a pedestrian positioning method based on a smart phone PDR and visual correction disclosed in this embodiment, which mainly includes the following steps 1 to 5.
Step 1, establishing a visual characteristic map of a region to be detected
Establishing a visual feature map means using sensor information to convert the visual features observed in the visual information at different moments into a unified feature map usable for global positioning; building such a map is essentially a simultaneous localization and mapping (SLAM) process.
Considering the real-time requirement of visual positioning and the need for scale- and rotation-invariant visual features, a visual SLAM algorithm based on ORB features is adopted to build the visual feature map of the region offline. The local map is built with a local bundle-adjustment (BA) optimization, which simultaneously optimizes each camera pose and the spatial position of each feature point by minimizing the camera reprojection error.
Assume the pose of camera $i$ is $T_i$, with corresponding Lie-algebra representation $\xi_i$; the spatial position of feature point $j$ is $P_j$; and the observation data are the pixel coordinates $u_{ij}$. A least-squares problem over the observation error is constructed as:

$$\xi^* = \arg\min_{\xi} \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{n} \left\| u_{ij} - h(T_i, P_j) \right\|^2$$

where $h(T_i, P_j)$ is the data generated by observing landmark $P_j$ from camera pose $T_i$, $h(\cdot)$ being the observation equation; $m$ is the number of keyframes co-visible with the current frame; and $n$ is the number of co-visible map points.
The visual feature map obtained by SLAM mapping is stored as map data with keyframes as the basic organizational unit. Referring to fig. 2, each keyframe contains its pose in the map coordinate system, the pixel coordinates and three-dimensional spatial positions of its feature points, and the feature descriptors of those points; the complete visual feature map consists of all keyframes in the mapped region. In a specific implementation, keyframe selection adopts two criteria:
1) The average disparity between the current frame and the previous keyframe is greater than a set threshold keyframe_disparity, usually set to about 10;
2) The number of feature points tracked by the current frame is lower than a set threshold track_num, usually set to about 50.
Step 2, determining the initial position and the course angle of the pedestrian based on the global positioning of the visual feature map
In a specific implementation, when a pedestrian first enters the region to be measured, the place-recognition algorithm based on the visual feature map can be invoked to compute a visual global positioning result within the region mapped in step 1; this result serves as the pedestrian's initial position and heading angle. The procedure for obtaining the visual global positioning result is the same as in step 4 and is not repeated here.
Step 3, PDR positioning: the method comprises the steps of calculating the dead reckoning of the pedestrian based on PDR on the basis of the initial position and the course angle, and calculating the walking distance of the pedestrian
Pedestrian dead reckoning based on PDR proceeds as follows: gait is detected by analyzing the smartphone accelerometer output; after a step is detected, its step length is computed from the acceleration values and the pedestrian's heading angle is computed from the gyroscope's angular-rate data. Starting from the position at the previous moment, the position at the current moment is then reckoned from the computed step length and heading; the position update is:

$$\begin{cases} x_k = x_{k-1} + l_k \sin\theta_k \\ y_k = y_{k-1} + l_k \cos\theta_k \end{cases}$$

where $(x_k, y_k)$ is the pedestrian's position at step $k$, $(x_{k-1}, y_{k-1})$ the position at step $k-1$, $\theta_k$ the heading angle at step $k$, and $l_k$ the length of step $k$.
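The per-step position update can be sketched as follows (a minimal illustration; the function and variable names are ours, not the patent's, and the heading convention assumed is an angle measured clockwise from the y/north axis, matching the sin/cos form above):

```python
import math

def pdr_update(x_prev, y_prev, step_len, heading_rad):
    """One PDR position update: advance step_len along the heading.

    Assumes heading is measured clockwise from the y (north) axis,
    so x += l*sin(theta) and y += l*cos(theta).
    """
    x = x_prev + step_len * math.sin(heading_rad)
    y = y_prev + step_len * math.cos(heading_rad)
    return x, y

# Two 0.7 m steps due north, then one due east:
pos = (0.0, 0.0)
for heading in (0.0, 0.0, math.pi / 2):
    pos = pdr_update(pos[0], pos[1], 0.7, heading)
```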
Referring to fig. 3, the process of reckoning the dead reckoning of the pedestrian based on the PDR specifically includes:
the walking process of the pedestrian has a periodic change rule. According to the movement characteristics of the pedestrian in the walking process, the walking steps can be accurately calculated by analyzing the three-axis acceleration change rule of the accelerometer. Due to the shaking of the body and the error of the sensor in the walking process of the pedestrian, the raw acceleration data needs to be preprocessed by adopting a smoothing filtering method after being acquired, namely:
wherein the content of the first and second substances,is composed oftThe acceleration after the time of day has been filtered,is as followsThe acceleration at the moment of time is,Mis the size of the sliding window. In a specific implementation process, the selection of the size of the sliding window is related to the acquisition frequency and the step frequency of the acceleration data, and the sliding window is generally set to be about 5, so that a good gait detection effect can be obtained.
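The sliding-window smoothing above can be sketched as (a plain-Python illustration with made-up sample values; at the start of the sequence, where fewer than M samples exist, the window is simply shortened):

```python
def smooth(acc, M=5):
    """Moving-average filter over a sliding window of size M.

    Each output sample averages the current reading and the
    M-1 readings before it (fewer at the start of the sequence).
    """
    out = []
    for t in range(len(acc)):
        window = acc[max(0, t - M + 1): t + 1]
        out.append(sum(window) / len(window))
    return out

# Illustrative resultant-acceleration samples (m/s^2):
raw = [9.8, 12.0, 9.5, 11.8, 9.6, 12.1]
filtered = smooth(raw, M=3)
```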
Gait detection is performed after the raw acceleration data are smoothed. Since the posture in which a pedestrian holds the phone is not fixed, gait detection on a single-axis acceleration value would suffer from weak periodicity, so the three-axis resultant acceleration $a_t$ is used as the basis for gait detection. Its magnitude is computed as:

$$a_t = \sqrt{a_{x,t}^2 + a_{y,t}^2 + a_{z,t}^2}$$

where $a_{x,t}$, $a_{y,t}$, $a_{z,t}$ are the $x$-, $y$-, and $z$-axis components of the smoothed acceleration.

Whether a step has occurred is judged from the resultant acceleration $a_t$ and the time interval between two successive candidate steps.

Suppose the resultant acceleration at time $t$ is the peak within the $k$-th step, denoted $a_{peak,k}$. Then it must satisfy:

$$a_t > a_{t-1} \quad \text{and} \quad a_t > a_{t+1}$$

where $a_{t-1}$ is the resultant acceleration at time $t-1$ and $a_{t+1}$ that at time $t+1$.

The specific criterion for judging that one step has occurred is:

$$a_{peak,k} > a_{th} \quad \text{and} \quad \Delta t_{\min} < \Delta t_k < \Delta t_{\max}$$

where $a_{th}$ is the acceleration peak threshold; $\Delta t_k$ is the time interval between adjacent peaks, i.e. the duration of the $k$-th step; and $\Delta t_{\min}$, $\Delta t_{\max}$ are the lower and upper thresholds for that interval.
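The peak-and-interval step criterion can be sketched as follows (a simplified illustration over sample indices rather than timestamps; the threshold values here are ours, not the patent's, and are kept static rather than dynamically adjusted):

```python
def detect_steps(acc, peak_th=11.0, dt_min=2, dt_max=20):
    """Detect step peaks in smoothed resultant-acceleration samples.

    A sample counts as a step peak if it exceeds both neighbours,
    exceeds peak_th, and (after the first step) falls between dt_min
    and dt_max samples after the previous detected peak.
    """
    steps = []
    for t in range(1, len(acc) - 1):
        is_peak = acc[t] > acc[t - 1] and acc[t] > acc[t + 1]
        if not (is_peak and acc[t] > peak_th):
            continue
        if steps and not (dt_min < t - steps[-1] < dt_max):
            continue  # too soon or too late after the last step
        steps.append(t)
    return steps

# Two clear peaks (indices 2 and 6), four samples apart:
acc = [9.8, 10.0, 12.5, 10.0, 9.8, 10.1, 12.8, 10.0, 9.8]
found = detect_steps(acc)
```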
Considering the influence of walking speed, the acceleration peak threshold $a_{th}$ and the time-interval thresholds $\Delta t_{\min}$, $\Delta t_{\max}$ must be set dynamically. The peak threshold is constrained within preset bounds (in m/s²) and adjusted according to the ratio of the current acceleration peak to the mean of the peaks at the preceding steps:

$$a_{th,k+1} = \rho_k \, a_{th,k}$$

where $a_{th,k}$ is the peak threshold used at the $k$-th step calculation, $a_{th,k+1}$ that used at step $k+1$, and $\rho_k$ the ratio of the resultant-acceleration peak of step $k$ to the average resultant-acceleration peak of the previous three steps. The peak threshold for the first three step calculations is set to about 12.

The time-interval thresholds $\Delta t_{\min}$ and $\Delta t_{\max}$ are set in view of the 0.5-5 Hz cadence range of normal walking (step durations of roughly 0.2-2 s) and are likewise adjusted dynamically as the step frequency changes.
after the pedestrian is detected to take place in one step, the step length and the heading of the step are estimated. Estimate the second by using Weinbeng step size modelStep length of stepNamely:
wherein the content of the first and second substances,the step length coefficient is different in value for different pedestrians and is related to factors such as the height and the step frequency of each person;、is the firstThe maximum and minimum values of acceleration are synthesized during the step.
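The Weinberg model is a one-line computation; the sketch below uses an illustrative coefficient value (K varies per pedestrian and would be calibrated in practice):

```python
def weinberg_step_length(a_max, a_min, K=0.45):
    """Weinberg model: step length = K * (a_max - a_min) ** 0.25.

    K is a per-pedestrian coefficient (0.45 here is illustrative);
    a_max / a_min are the extreme resultant accelerations in the step.
    """
    return K * (a_max - a_min) ** 0.25

length = weinberg_step_length(12.8, 9.2)
```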
In a specific implementation, gyroscope-based heading estimation can only provide PDR with a relative heading estimate. Given the initial heading information, the heading at the current moment is obtained by integrating the angular rate output by the gyroscope:

$$\theta_k = \theta_0 + \int_{t_0}^{t_k} \omega_z \, dt$$

where $\theta_0$ is the initial heading angle; $\omega_z$ is the gyroscope's measured angular rate about the $Z$ axis of the navigation frame; $\Delta\theta_k = \theta_k - \theta_{k-1}$ is the heading change of step $k$; $t_0$ is the initial moment of dead reckoning; and $t_k$ is the time corresponding to step $k$.
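The integral can be approximated numerically from discrete gyro samples; the sketch below uses a simple rectangle rule (names and sample rate are ours, for illustration):

```python
import math

def integrate_heading(theta0, omega_z, dt):
    """Integrate z-axis angular-rate samples to a heading angle.

    theta0: initial heading (rad); omega_z: gyro samples (rad/s);
    dt: sample period (s). Rectangle-rule approximation of the integral.
    """
    theta = theta0
    for w in omega_z:
        theta += w * dt
    return theta

# A 90-degree turn over 1 s at constant rate, sampled at 100 Hz:
theta = integrate_heading(0.0, [math.pi / 2] * 100, 0.01)
```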
Step 4, visual positioning: when the walking distance of the pedestrian reaches a set threshold value, obtaining the global positioning information of the pedestrian at the current moment based on the global positioning of the visual feature map
When the walking distance reckoned by the PDR method reaches the set threshold, the place-recognition algorithm based on the visual feature map is invoked to compute a visual global positioning result within the region mapped in step 1. Visual place recognition follows the same principle as loop-closure detection in visual SLAM, and the first stage of global positioning can be regarded as a closed-loop detection process. First, the ORB feature points and feature descriptors of the current frame are extracted, and its bag-of-words vector is computed with a Bag-of-Words (BoW) model. Then, in the visual feature map composed of keyframes, using the position prior reckoned by PDR, the distances between the bag-of-words vectors of different images (i.e. the inter-image similarities) are computed to find map keyframes similar to the current frame; these serve as candidate frames for the subsequent fine-positioning stage.
Referring to fig. 4, the process of global positioning based on the visual feature map specifically includes:
Establishing a dictionary for the visual feature map: in visual SLAM, the feature-descriptor dictionary is obtained by clustering the features of a large number of images. With the visual feature map of step 1 established, an ORB dictionary specific to that map can be generated by clustering all feature points appearing in it.

Dictionary training clusters the descriptors into $N$ words with the K-means algorithm. To improve the efficiency of image matching and query, the dictionary is represented as a K-ary tree whose leaf layer holds the words. After the K-ary-tree dictionary is built, each word is given a weight by the TF-IDF (Term Frequency-Inverse Document Frequency) method. The idea of IDF is that the lower the frequency of a word in the dictionary, the more discriminative it is for classifying images:

$$IDF_i = \log\frac{n}{n_i}$$

where $IDF_i$ is the IDF value of word $w_i$, $n$ is the total number of features in the dictionary, and $n_i$ is the number of features in word $w_i$.
the idea of TF is that the more times a word appears in an image, the higher its discrimination. Hypothetical imageChinese wordAppear toSecond, the number of co-occurring words isThen wordIs/are as followsTFValue ofComprises the following steps:
for a certain imageAIts feature points are corresponding to multiple words, and calculation is performedIF-IDFThe bag-of-words vector that is worth describing the image is:
wherein the content of the first and second substances,for all the number of words of the dictionary,as wordsIs/are as followsTF-IDFValue of a step of,as an imageABag of words vector.
in the formula (I), the compound is shown in the specification,as an imageAAnd imagesBThe degree of similarity between the two images,as an imageBThe bag-of-words vector of (c),as a bag of words vectorTo (1) aThe number of the components is such that,as a bag of words vectorTo (1)The number of the components is such that,、bag of expression vectorAndto (1) aiIndividual components, i.e. each visual wordWeight value of (A), and、the meanings indicated are the same;
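The TF-IDF weighting and bag-of-words similarity can be sketched as a toy example (all names and the flat word-id dictionary are ours; a real system would use a K-ary vocabulary tree over binary descriptors):

```python
import math

def tf_idf_vector(image_words, dictionary_counts, total_features):
    """Bag-of-words vector: weight eta_i = TF_i * IDF_i per word.

    image_words: word ids observed in one image;
    dictionary_counts: features per word in the training dictionary;
    total_features: total number of features in the dictionary.
    """
    m = len(image_words)
    vec = {}
    for w in set(image_words):
        tf = image_words.count(w) / m
        idf = math.log(total_features / dictionary_counts[w])
        vec[w] = tf * idf
    return vec

def similarity(va, vb):
    """s = 1 - 0.5 * sum |vA_i/|vA| - vB_i/|vB||  (L1-normalized)."""
    na = sum(abs(x) for x in va.values()) or 1.0
    nb = sum(abs(x) for x in vb.values()) or 1.0
    words = set(va) | set(vb)
    return 1.0 - 0.5 * sum(abs(va.get(w, 0.0) / na - vb.get(w, 0.0) / nb)
                           for w in words)

counts = {0: 10, 1: 100, 2: 100}      # features per word in the dictionary
va = tf_idf_vector([0, 1, 1], counts, 1000)
vb = tf_idf_vector([2, 2, 2], counts, 1000)
```

With this normalization an identical pair scores 1.0 and images sharing no words score 0.0, so the score is directly usable for ranking candidate keyframes.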
The current frame captured by the smartphone camera is acquired; its similarity to all keyframes near the PDR-reckoned position in the visual feature map is computed; the several frames with the highest similarity are selected as candidate frames; and feature matching and PnP pose solving then yield accurate global positioning information. The specific implementation is as follows:
feature matching refers to determining correspondence between feature points of different images, and the similarity between feature points is usually measured by using a feature descriptor distance. For BRIEF binary descriptors of ORB features, hamming distance is usually adoptedTo express the similarity, i.e.:
wherein the content of the first and second substances,representing an exclusive or operation;、and (3) respectively representing BRIEF descriptors of ORB characteristic points in the two images.
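The XOR-and-popcount computation can be sketched as (toy 4-byte descriptors for brevity; a real ORB/BRIEF descriptor is 256 bits, i.e. 32 bytes):

```python
def hamming(d1: bytes, d2: bytes) -> int:
    """Hamming distance between two binary descriptors:
    XOR corresponding bytes, then count the set bits (popcount)."""
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

# Two toy 4-byte "descriptors" differing in three bit positions:
d = hamming(bytes([0b10110100, 0xFF, 0x00, 0x0F]),
            bytes([0b10110111, 0xFF, 0x01, 0x0F]))
```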
With the Hamming distance as the feature-similarity measure, feature points are matched by fast approximate nearest neighbor (FLANN) search. Since mismatches may occur, random sample consensus (RANSAC) is used to screen the matches and remove erroneous point pairs.
After the feature-matching relation between the current frame and a candidate frame is obtained, and since the three-dimensional coordinates of the candidate frame's feature points are known in the visual feature map, the pose of the current frame relative to the map is solved by the PnP (Perspective-n-Point) method. PnP solves the pose from 3D-2D point pairs: the 3D points come from the visual feature map, and the 2D points are the feature points of the current frame. The PnP problem is formulated as a nonlinear least-squares problem of minimizing the reprojection error and solved by nonlinear optimization.
Consider $n$ three-dimensional space points $P_i$ and their projections $u_i$; we seek the camera pose $T$, whose Lie-group representation is $\exp(\xi^\wedge)$. Let the coordinates of a spatial point be $P_i = [X_i, Y_i, Z_i]^T$ and its projected pixel coordinates be $u_i = [u_i, v_i]^T$. Because the camera pose is unknown and the observation points are noisy, there is an error between the projected and observed positions of each 3D point. Summing all the reprojection errors gives a least-squares problem whose iterative solution is the optimal camera pose that minimizes it, i.e.:

$$\xi^* = \arg\min_{\xi} \frac{1}{2} \sum_{i=1}^{n} \left\| u_i - \frac{1}{s_i} K \exp(\xi^\wedge) P_i \right\|_2^2$$

where $s_i$ represents the scale factor (the depth of the point) and $K$ is the camera intrinsic matrix.
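The cost being minimized can be evaluated directly; the sketch below computes the reprojection residual for a pinhole camera with a hypothetical intrinsic matrix (a real implementation would hand this cost to an optimizer such as Gauss-Newton over the Lie algebra, e.g. via OpenCV's solvePnP):

```python
def project(K, R, t, P):
    """Pinhole projection: u = (1/s) * K * (R @ P + t), s = point depth.

    K: 3x3 intrinsics (zero skew assumed); R, t: camera pose; P: 3D point.
    """
    Xc = [sum(R[i][j] * P[j] for j in range(3)) + t[i] for i in range(3)]
    s = Xc[2]
    u = (K[0][0] * Xc[0] + K[0][2] * Xc[2]) / s
    v = (K[1][1] * Xc[1] + K[1][2] * Xc[2]) / s
    return u, v

def reprojection_error(K, R, t, points3d, pixels):
    """Sum of squared pixel residuals: the PnP optimization objective."""
    err = 0.0
    for P, (u_obs, v_obs) in zip(points3d, pixels):
        u, v = project(K, R, t, P)
        err += (u - u_obs) ** 2 + (v - v_obs) ** 2
    return err

K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
R_id = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# Exact observations give zero error; a 1 px offset gives error 1.0:
err_exact = reprojection_error(K, R_id, [0.0, 0.0, 0.0],
                               [[0.0, 0.0, 2.0], [1.0, 0.0, 2.0]],
                               [(320.0, 240.0), (570.0, 240.0)])
err_off = reprojection_error(K, R_id, [0.0, 0.0, 0.0],
                             [[0.0, 0.0, 2.0]], [(321.0, 240.0)])
```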
After the translation and rotation between each candidate frame and the current frame are computed, abnormal candidate frames are removed by the RANSAC method. Finally, all map points in the remaining candidate frames are projected onto the current frame to search for feature matches; if the number of matches exceeds a set threshold, the camera pose result is accepted; otherwise no vision correction is performed and the filtering-and-fusion of step 5 below is skipped.
After the camera pose is calculated, the camera position $t = [x, y, z]^T$ serves as the position reference information of the pedestrian at the current moment, and the camera attitude matrix $R$ is converted into Euler angles to obtain the pedestrian's current reference heading angle $\psi$:

$$\psi = \arctan\!2\left(R_{21},\, R_{11}\right)$$

wherein $R_{21}$ is the element at row 2, column 1 of the attitude matrix $R$, and $R_{11}$ is the element at row 1, column 1 of the attitude matrix $R$.
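As a sketch of the heading extraction above (assuming a Z-up navigation frame in which yaw is the rotation about the Z axis):

```python
import numpy as np

def yaw_from_rotation(R):
    """Extract the heading (yaw) angle from a 3x3 attitude matrix.
    Uses atan2 on the row-2/column-1 and row-1/column-1 elements
    (R[1, 0] and R[0, 0] in zero-based numpy indexing), so the
    result covers the full range (-pi, pi]."""
    return float(np.arctan2(R[1, 0], R[0, 0]))
```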
Step 5, the PDR positioning result is corrected by taking the visual positioning result as the positioning reference, and steps 3-5 are repeated with the corrected PDR positioning result as the pedestrian's new initial position and heading angle.
In the specific implementation, the PDR and visual positioning results are loosely coupled and fused by the extended Kalman filter (EKF) method, with the visual place-recognition result serving as the positioning reference. This corrects the accumulated error of the PDR, improves the positioning accuracy, and solves the problem of pedestrian positioning with PDR in three-dimensional space.
In the prediction stage of the extended Kalman filter (EKF), the state transition equation of the pedestrian at step $k$ is:

$$\hat{X}_k = f(X_{k-n}) + W_k = \begin{bmatrix} x_{k-n} + \sum_{i=k-n+1}^{k} L_i \sin\psi_i \\ y_{k-n} + \sum_{i=k-n+1}^{k} L_i \cos\psi_i \\ \psi_{k-n} + \sum_{i=k-n+1}^{k} \Delta\psi_i \end{bmatrix} + W_k$$

wherein $\hat{X}_k$ is the state prediction vector of step $k$, i.e. the position coordinates and heading angle of the pedestrian at step $k$ obtained by pedestrian dead reckoning (PDR); $X_{k-n}$ is the state vector obtained by the optimal estimation of the extended Kalman filter at step $k-n$, i.e. the position coordinates and heading angle $x_{k-n}$, $y_{k-n}$, $\psi_{k-n}$ of the pedestrian obtained by vision correction, whose initial values are set to the initial position and heading angle of the PDR, i.e. $X_0 = [x_0, y_0, \psi_0]^T$; $f(\cdot)$ represents the nonlinear function in the state transition equation; $k-n$ indicates the step number corresponding to the last time the visual positioning result was called to correct the PDR positioning result; $W_k$ is the process noise vector;
The nonlinear function $f(\cdot)$ in the state transition equation is linearized in the neighborhood of $X_{k-n}$, and the higher-order terms are omitted to obtain the state matrix $F_k$ corresponding to step $k$:

$$F_k = \left.\frac{\partial f}{\partial X}\right|_{X_{k-n}} = \begin{bmatrix} 1 & 0 & \sum_{i=k-n+1}^{k} L_i \cos\psi_i \\ 0 & 1 & -\sum_{i=k-n+1}^{k} L_i \sin\psi_i \\ 0 & 0 & 1 \end{bmatrix}$$

wherein $\left.\partial f / \partial X\right|_{X_{k-n}}$ denotes the linearization of the nonlinear function $f(\cdot)$ in the neighborhood of $X_{k-n}$;
Based on the state matrix $F_k$, the covariance matrix $\hat{P}_k$ of the prediction is updated as:

$$\hat{P}_k = F_k P_{k-n} F_k^T + Q$$

wherein $P_{k-n}$ represents the covariance matrix of the optimal estimate of the state at step $k-n$ obtained by the extended Kalman filter, whose initial value is set to $P_0$; $Q$ represents the process noise matrix brought by the prediction model itself, composed of the average errors of the elements of the pedestrian dead reckoning (PDR) method, i.e. $Q = \mathrm{diag}(\sigma_x^2, \sigma_y^2, \sigma_\psi^2)$, wherein $\sigma_x$, $\sigma_y$ represent the position average errors and $\sigma_\psi$ represents the heading-angle average error.
In the updating stage of the extended Kalman filter (EKF), the observation equation of the system is:

$$Z_k = H \hat{X}_k + V_k$$

wherein $H$ is the observation matrix; $Z_k = [x_k^v, y_k^v, \psi_k^v]^T$ is the observation vector obtained by visual positioning recognition at step $k$, $x_k^v$, $y_k^v$ being the position information and $\psi_k^v$ the heading angle of the visual positioning at step $k$; $V_k$ is the observation error vector; $\hat{X}_k = [\hat{x}_k, \hat{y}_k, \hat{\psi}_k]^T$ is the vector obtained by PDR positioning at step $k$, $\hat{x}_k$, $\hat{y}_k$ being the position information and $\hat{\psi}_k$ the heading angle of the PDR positioning at step $k$.
wherein $R_k$ is the observation noise covariance matrix corresponding to step $k$, calculated by the following formula:

$$R_k = \frac{1}{N} \sum_{i=k-N+1}^{k} \left(Z_i - \hat{X}_i\right)\left(Z_i - \hat{X}_i\right)^T$$

wherein $N$ is the window length, $Z_i$ is the observation vector obtained by visual place recognition at step $i$, and $\hat{X}_i$ is the state vector calculated by the PDR at step $i$.
The optimal estimate $X_k$ of the pedestrian's state at step $k$ is calculated as:

$$K_k = \hat{P}_k H^T \left(H \hat{P}_k H^T + R_k\right)^{-1}, \qquad X_k = \hat{X}_k + K_k\left(Z_k - H \hat{X}_k\right)$$

At the same time, the covariance matrix of the optimal state estimate is updated for the next EKF calculation:

$$P_k = \left(I - K_k H\right)\hat{P}_k$$
In the specific calculation, the pedestrian's height can directly adopt the height value in the visual positioning result, thereby realizing indoor positioning of the pedestrian in three-dimensional space.
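The loosely coupled EKF described above can be sketched as follows. This is an illustrative numpy sketch, not the patent's code: the state is $[x, y, \psi]$, PDR steps propagate the state between vision fixes, a visual fix triggers the update, and the identity observation matrix reflects that vision observes the state directly (the function names and the x-via-sin / y-via-cos heading convention are assumptions):

```python
import numpy as np

def ekf_predict(x, P, steps, Q):
    """Propagate state [x, y, psi] through a list of PDR steps, each given
    as (step_length, heading, heading_change), and update the covariance."""
    x = x.copy()
    dx_dpsi = 0.0
    dy_dpsi = 0.0
    for L, psi, dpsi in steps:
        x[0] += L * np.sin(psi)       # east displacement (assumed convention)
        x[1] += L * np.cos(psi)       # north displacement
        x[2] += dpsi                  # heading accumulation
        dx_dpsi += L * np.cos(psi)    # accumulate Jacobian terms w.r.t. heading
        dy_dpsi += -L * np.sin(psi)
    F = np.array([[1.0, 0.0, dx_dpsi],
                  [0.0, 1.0, dy_dpsi],
                  [0.0, 0.0, 1.0]])
    return x, F @ P @ F.T + Q

def ekf_update(x, P, z, R_obs):
    """Correct the prediction with a visual fix z = [x_v, y_v, psi_v]."""
    H = np.eye(3)                     # vision observes the state directly
    S = H @ P @ H.T + R_obs
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P
```

A small observation covariance pulls the corrected state toward the visual fix; a large one leaves the PDR prediction nearly unchanged.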
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent structural changes made by using the contents of the present specification and the drawings, or any other related technical fields, which are directly or indirectly applied to the present invention, are included in the scope of the present invention.
Claims (8)
1. A pedestrian positioning method based on a smart phone PDR and vision correction is characterized by comprising the following steps:
step 1, establishing a visual feature map of a region to be detected, wherein the visual feature map is stored as a map database by taking a key frame as a basic organization form;
step 2, determining the initial position and the course angle of the pedestrian based on the global positioning of the visual feature map;
step 3, PDR positioning: calculating the dead reckoning of the pedestrian based on the PDR on the basis of the initial position and the course angle, and calculating the walking distance of the pedestrian;
step 4, visual positioning: when the walking distance of the pedestrian reaches a set threshold value, obtaining global positioning information of the pedestrian at the current moment based on the global positioning of the visual feature map;
step 5, correcting the PDR positioning result by taking the visual positioning result as the positioning reference, and repeating steps 3-5 after taking the corrected PDR positioning result as the pedestrian's new initial position and heading angle.
2. The pedestrian positioning method based on smartphone PDR and vision correction according to claim 1, wherein in step 3, the pedestrian dead reckoning based on the PDR specifically comprises:
acquiring original acceleration data of an accelerometer in the smart phone, and preprocessing the data by adopting smooth filtering, wherein the method comprises the following steps:
$$\bar{a}_t = \frac{1}{M} \sum_{i=t-M+1}^{t} a_i$$

wherein $\bar{a}_t$ is the filtered acceleration at time $t$, $a_i$ is the acceleration at time $i$, and $M$ is the size of the sliding window;
synthesizing the three-axis components of the filtered acceleration to obtain the resultant acceleration $acc$:

$$acc = \sqrt{\bar{a}_x^2 + \bar{a}_y^2 + \bar{a}_z^2}$$

wherein $\bar{a}_x$, $\bar{a}_y$, $\bar{a}_z$ respectively represent the $X$-axis, $Y$-axis and $Z$-axis components of the smoothed acceleration;
according to the resultant acceleration $acc$ and the time interval between two successive steps, the criterion for the occurrence of one step is:

$$acc_k^{peak} > acc_{th} \quad \text{and} \quad T_{min} < \Delta t_k < T_{max}$$

wherein $acc_k^{peak}$ is the peak of the resultant acceleration within the $k$-th step, $acc_{th}$ is the acceleration peak threshold, $\Delta t_k$ is the duration of the $k$-th step, and $T_{min}$ and $T_{max}$ are the lower and upper thresholds of the time interval;
after a step is detected, its step length and heading are estimated as follows:

$$L_k = c \cdot \sqrt[4]{acc_k^{max} - acc_k^{min}}$$

wherein $L_k$ is the step length of the $k$-th step, $c$ is the step-length coefficient, and $acc_k^{max}$, $acc_k^{min}$ are the maximum and minimum of the resultant acceleration during the $k$-th step;

$$\psi_k = \psi_0 + \Delta\psi_k, \qquad \Delta\psi_k = \int_{t_0}^{t_k} \omega_z \, dt$$

wherein $\psi_k$ is the heading angle of the $k$-th step and $\psi_0$ is the initial heading angle; $\omega_z$ is the gyroscope's angular-velocity measurement about the $Z$ axis of the navigation coordinate system, $\Delta\psi_k$ is the heading-angle change of the $k$-th step, $t_0$ is the initial dead-reckoning moment, and $t_k$ is the time corresponding to the $k$-th step;
and finally, updating the position according to the step length and heading:

$$x_k = x_{k-1} + L_k \sin\psi_k, \qquad y_k = y_{k-1} + L_k \cos\psi_k.$$
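The PDR chain of claim 2 (smoothing, resultant acceleration, Weinberg-style step length, position update) can be sketched as below; this is an illustrative sketch, and the heading convention (x east via sine, y north via cosine, heading measured from north) is an assumption:

```python
import numpy as np

def resultant_acc(ax, ay, az, window=5):
    """Moving-average smoothing of each axis, then the resultant magnitude."""
    kernel = np.ones(window) / window
    sm = [np.convolve(a, kernel, mode="same") for a in (ax, ay, az)]
    return np.sqrt(sm[0] ** 2 + sm[1] ** 2 + sm[2] ** 2)

def weinberg_step_length(acc_segment, c=0.45):
    """Step-length model L = c * (acc_max - acc_min) ** 0.25,
    where c is the step-length coefficient."""
    return c * (acc_segment.max() - acc_segment.min()) ** 0.25

def pdr_update(x, y, L, heading):
    """Advance the position by one step of length L along the heading."""
    return x + L * np.sin(heading), y + L * np.cos(heading)
```

One step forward with heading 0 (due north under the assumed convention) moves the position by the full step length along y.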
3. The pedestrian positioning method based on smartphone PDR and vision correction according to claim 2, wherein, considering the influence of the pedestrian's walking speed, the acceleration peak threshold $acc_{th}$ is dynamically set, wherein $acc_{th,k}$ is the peak threshold used in the calculation of the $k$-th step, $acc_{th,k+1}$ is the peak threshold used in the calculation of the $(k+1)$-th step, and $r_k$ is the ratio of the resultant-acceleration peak of the $k$-th step to the average resultant-acceleration peak of the previous three steps.
4. The pedestrian positioning method based on smartphone PDR and vision correction according to claim 3, wherein, considering the influence of the pedestrian's walking speed, the time-interval thresholds $T_{min}$ and $T_{max}$ are dynamically set as follows:
If the peak threshold value in the calculation of the current step is greater than or equal to 12 and less than 13.5, then;
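The exact piecewise threshold values are not reproduced in the text above; the sketch below only illustrates the ratio that claims 3 and 4 key the adaptation on, namely the latest resultant-acceleration peak relative to the average of the three preceding peaks (an illustrative helper, not the claimed rule):

```python
import numpy as np

def peak_ratio(peaks):
    """Ratio of the latest resultant-acceleration peak to the average of
    the three peaks before it -- the quantity the adaptive rule keys on."""
    peaks = np.asarray(peaks, dtype=float)
    assert len(peaks) >= 4, "need the current peak plus three previous ones"
    return float(peaks[-1] / peaks[-4:-1].mean())
```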
5. The pedestrian positioning method based on the smartphone PDR and the visual correction according to any one of claims 1 to 4, wherein in step 4, the global positioning based on the visual feature map obtains global positioning information of a pedestrian at the current time, specifically:
acquiring a current frame image through a camera of the smart phone, extracting ORB feature points and feature descriptors of the current frame, and calculating bag-of-word vectors of the current frame;
searching a key frame similar to the current frame in the visual feature map based on the distance between the bag-of-word vectors of different images to serve as a candidate frame;
establishing 2D-3D point-pair matching between the current frame and the candidate frames; after eliminating abnormal candidate frames by the RANSAC method, projecting all map points in the remaining candidate frames onto the current frame to search for feature matches; summing all reprojection errors to construct a least-squares problem, and then solving by the PnP method to obtain the camera pose;
and finally, converting the attitude matrix of the camera into an Euler angle to obtain the course angle of the current position of the pedestrian.
6. The pedestrian positioning method based on smartphone PDR and vision correction according to any one of claims 1 to 4, wherein in step 5, the PDR positioning result and the visual positioning result are loosely coupled and fused by the extended Kalman filtering method, and the visual positioning result is used as the positioning reference to correct the accumulated error of the PDR positioning result.
7. The pedestrian positioning method based on smartphone PDR and vision correction according to claim 6, wherein in step 5, correcting the accumulated error of the PDR positioning result by taking the visual positioning result as the positioning reference specifically comprises:
in the prediction stage of the extended Kalman filtering method, the state transition equation of the pedestrian at step $k$ is established as:

$$\hat{X}_k = f(X_{k-n}) + W_k = \begin{bmatrix} x_{k-n} + \sum_{i=k-n+1}^{k} L_i \sin\psi_i \\ y_{k-n} + \sum_{i=k-n+1}^{k} L_i \cos\psi_i \\ \psi_{k-n} + \sum_{i=k-n+1}^{k} \Delta\psi_i \end{bmatrix} + W_k$$

wherein $\hat{X}_k$ is the state prediction vector of step $k$, i.e. the position coordinates and heading angle $\hat{x}_k$, $\hat{y}_k$, $\hat{\psi}_k$ of the pedestrian at step $k$ obtained through the PDR; $X_{k-n}$ is the state vector of the pedestrian obtained by the optimal estimation of the extended Kalman filtering method at step $k-n$, i.e. the position coordinates and heading angle $x_{k-n}$, $y_{k-n}$, $\psi_{k-n}$ of the pedestrian obtained through vision correction; $f(\cdot)$ is a nonlinear function; $k-n$ indicates the step number corresponding to the last time the visual positioning result was called to correct the PDR positioning result; $W_k$ is the process noise vector; $L_i$ is the step length of the $i$-th step, $\psi_i$ is the heading angle of the $i$-th step, and $\Delta\psi_i$ is the heading-angle change of the $i$-th step;
linearizing the nonlinear function $f(\cdot)$ in the neighborhood of $X_{k-n}$ and removing the higher-order terms to obtain the state matrix $F_k$ corresponding to step $k$:

$$F_k = \left.\frac{\partial f}{\partial X}\right|_{X_{k-n}} = \begin{bmatrix} 1 & 0 & \sum_{i=k-n+1}^{k} L_i \cos\psi_i \\ 0 & 1 & -\sum_{i=k-n+1}^{k} L_i \sin\psi_i \\ 0 & 0 & 1 \end{bmatrix}$$

based on the state matrix $F_k$, the covariance matrix $\hat{P}_k$ of the predicted variable $\hat{X}_k$ is updated as:

$$\hat{P}_k = F_k P_{k-n} F_k^T + Q$$
wherein $P_{k-n}$ represents the covariance matrix of the optimal estimate of the state at step $k-n$ obtained by applying the extended Kalman filtering method, and $Q$ represents the process noise matrix;
in the updating stage of the extended Kalman filtering method, the observation equation of the system is:

$$Z_k = H \hat{X}_k + V_k$$

wherein $H$ is the observation matrix; $Z_k$ is the observation vector obtained by visual positioning recognition at step $k$, i.e. $x_k^v$, $y_k^v$, $\psi_k^v$ are the position information and heading angle of the visual positioning at step $k$; $V_k$ is the observation error vector;
wherein $R_k$ is the observation noise covariance matrix corresponding to step $k$;
calculating the optimal estimate $X_k$ of the pedestrian's state at step $k$ as:

$$K_k = \hat{P}_k H^T \left(H \hat{P}_k H^T + R_k\right)^{-1}, \qquad X_k = \hat{X}_k + K_k\left(Z_k - H \hat{X}_k\right)$$
8. The pedestrian positioning method based on smartphone PDR and vision correction according to claim 7, wherein the update process of the covariance matrix of the optimal state estimate is:

$$P_k = \left(I - K_k H\right)\hat{P}_k$$
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211133744.1A CN115235455B (en) | 2022-09-19 | 2022-09-19 | Pedestrian positioning method based on smart phone PDR and vision correction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115235455A true CN115235455A (en) | 2022-10-25 |
CN115235455B CN115235455B (en) | 2023-01-13 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116681935A (en) * | 2023-05-31 | 2023-09-01 | 国家深海基地管理中心 | Autonomous recognition and positioning method and system for deep sea hydrothermal vent |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090192708A1 (en) * | 2008-01-28 | 2009-07-30 | Samsung Electronics Co., Ltd. | Method and system for estimating step length pedestrian navigation system |
EP2386828A1 (en) * | 2010-05-12 | 2011-11-16 | Technische Universität Graz | Method and system for detection of a zero velocity state of an object |
CN104215238A (en) * | 2014-08-21 | 2014-12-17 | 北京空间飞行器总体设计部 | Indoor positioning method of intelligent mobile phone |
WO2018043934A1 (en) * | 2016-09-02 | 2018-03-08 | 유치헌 | System and method for zero-delay real-time detection of walking using acceleration sensor |
CN109405829A (en) * | 2018-08-28 | 2019-03-01 | 桂林电子科技大学 | Pedestrian's method for self-locating based on smart phone audio-video Multi-source Information Fusion |
CN111595344A (en) * | 2020-06-01 | 2020-08-28 | 中国矿业大学 | Multi-posture downlink pedestrian dead reckoning method based on map information assistance |
CN112129281A (en) * | 2019-06-25 | 2020-12-25 | 南京航空航天大学 | High-precision image navigation positioning method based on local neighborhood map |
CN112637762A (en) * | 2020-12-11 | 2021-04-09 | 武汉科技大学 | Indoor fusion positioning method based on improved PDR algorithm |
CN113029148A (en) * | 2021-03-06 | 2021-06-25 | 西南交通大学 | Inertial navigation indoor positioning method based on course angle accurate correction |
CN114111784A (en) * | 2021-10-26 | 2022-03-01 | 杭州电子科技大学 | Crowdsourcing-based automatic construction method and system for indoor corridor map |
Non-Patent Citations (4)
Title |
---|
包川: "多传感器融合的移动机器人三维地图构建", 《中国优秀硕士学位论文全文数据库基础科学辑》 * |
揭云飞等: "视觉SLAM系统分析", 《电脑知识与技术》 * |
朱会平: "基于图像检索和航位推算的室内定位方法研究", 《中国优秀硕士学位论文全文数据库信息科技辑》 * |
李弋星等: "基于改进关键帧选择的RGB-D SLAM算法", 《大连理工大学学报》 * |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |