CN108426582B - Indoor three-dimensional map matching method for pedestrians - Google Patents

Indoor three-dimensional map matching method for pedestrians

Info

Publication number: CN108426582B
Application number: CN201810176554.5A
Authority: CN (China)
Prior art keywords: pedestrian, indoor, height, state, information
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN108426582A
Inventors: 任明荣, 郭红雨, 王普, 韩红桂
Current Assignee: Beijing University of Technology
Original Assignee: Beijing University of Technology
Application filed by Beijing University of Technology; priority to CN201810176554.5A
Publication of CN108426582A; application granted; publication of CN108426582B

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 — Navigation; Navigational instruments specially adapted for navigation in a road network
    • G01C21/28 — Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 — Map- or contour-matching
    • G01C21/34 — Route searching; Route guidance
    • G01C21/3446 — Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a pedestrian indoor three-dimensional map matching method, belonging to the technical field of indoor pedestrian positioning. An MEMS-INS sensor bound to the pedestrian's instep collects indoor motion information, from which the pedestrian's velocity, position, and heading are resolved; the indoor structure is analyzed, state points are created, and a conditional random field model is established from the navigation output positions and the state point positions. Horizontal two-dimensional position information of the indoor pedestrian is extracted at fixed walking-distance intervals and pedestrian height information is extracted at zero-velocity moments to obtain the observation points of the CRF model, recording the sampling times of the two-dimensional positions and of the height information respectively. An indoor electronic map is established, state points are created from the indoor structure information, and the state point coordinates are stored. The invention realizes three-dimensional positioning of the pedestrian with high algorithm accuracy; matching the two-dimensional position and the height information separately reduces algorithm complexity; and fusing the two-dimensional position and height information by nearest sampling time improves the accuracy of map matching.

Description

Indoor three-dimensional map matching method for pedestrians
Technical Field
The invention belongs to the technical field of indoor pedestrian positioning and relates to a three-dimensional map matching method based on Micro-Electro-Mechanical System (MEMS) Inertial Navigation System (INS) technology, covering synchronous positioning, matching-algorithm structure design, and position information fusion in a known indoor environment.
Background
In recent years, with the emergence and development of logistics, intelligent wards, and new-concept supermarkets, indoor navigation has attracted wide attention from both academia and industry. The integration and miniaturization of MEMS-INS devices have made them a leading technology in the navigation field.
However, inertial device errors accumulate over time and, if not effectively corrected, eventually cause the pedestrian's position trajectory to diverge. To address inertial navigation position error, various map matching algorithms have been proposed, such as particle filter algorithms, map matching based on the main heading, and map matching based on hidden Markov models. These algorithms, however, focus on the two-dimensional information of pedestrians, whereas indoor walking involves not only movement in the two-dimensional plane but also height changes on stairs. An indoor three-dimensional map matching method for pedestrians therefore has important theoretical significance and application value.
Y. Méneroux et al. proposed two common measures, the average Hausdorff distance and the area difference, to study the relationship between matching accuracy and road-network quality, and also gave an upper bound on the influence of the reference network. Xiao Z. et al. proposed solving the divergence of inertial navigation position error with a conditional random field (CRF) algorithm that mathematically models the navigation output positions and the indoor state points. The former, however, is an outdoor map matching method for vehicle path matching; pedestrians walk indoors more randomly and the concept of a path is relatively weak, so the method is not suitable for indoor environments. The latter considers the special indoor environment and assists map matching with state points, but ignores indoor height information and was only studied experimentally on a single floor, so it presents only the two-dimensional position of the pedestrian.
To effectively realize three-dimensional map matching of pedestrians under indoor conditions, the state points and the navigation output positions acquired from the sensor are jointly modeled with a conditional random field, realizing indoor three-dimensional positioning of the pedestrian. For such three-dimensional positioning, it is essential to improve algorithm accuracy and map matching accuracy while keeping algorithm complexity low.
Disclosure of Invention
Aiming at the divergence of the navigation position of a pedestrian wearing an MEMS-INS sensor under indoor conditions, the invention provides a three-dimensional map matching method for indoor pedestrian navigation. An MEMS-INS sensor bound to the pedestrian's instep collects indoor motion information, from which the pedestrian's velocity, position, and heading are resolved; the indoor structure is analyzed, state points are created, and a conditional random field model is established from the navigation output positions and the state point positions, achieving indoor three-dimensional positioning of the pedestrian.
In order to achieve this technical purpose, the invention adopts the following technical scheme. The indoor three-dimensional map matching method for pedestrians comprises the following steps:
Step 1: data acquisition; preliminarily resolve the three-dimensional position and heading of the indoor pedestrian.
Step 1.1: the pedestrian wears an MEMS-INS sensor to collect pedestrian motion data, comprising three-axis acceleration data and three-axis gyro data.
Step 1.2: resolve the three-dimensional position and heading information from the collected pedestrian motion data with a strapdown solution algorithm.
Step 2: extract the observation points of the conditional random field model.
Step 2.1: extract the horizontal two-dimensional position information of the indoor pedestrian at fixed walking-distance intervals and the pedestrian height information at zero-velocity moments to obtain the observation points of the CRF model, recording the sampling times of the two-dimensional positions and of the height information respectively.
Step 3: establish an indoor electronic map, create state points according to the indoor structure information, and store the state point coordinates.
Step 3.1: create an indoor electronic map from the known indoor map and store it in the navigation computer.
Step 3.2: find the coordinate points with the minimum and maximum values in the electronic map as the value range of the state points, cover the whole indoor range with equally spaced state points, and store the state point information.
Step 3.3: add state points along the height dimension using the step height as the standard.
Step 4: two-dimensional position map matching algorithm based on the conditional random field.
Step 4.1: establish a characteristic equation from the relationship between the two-dimensional position observation point coordinates and each state point coordinate.
Step 4.2: establish a characteristic equation from the azimuth angle between the observation point azimuth information and the state points at the corresponding moments.
Step 4.3: establish a conditional-random-field-based two-dimensional map matching mathematical model and obtain the maximum probability of the state sequence given the two-dimensional positions as the observation sequence; the maximum-probability sequence is the optimal state match of the position.
Step 5: height information map matching algorithm based on the conditional random field.
Step 5.1: divide the height change of each pedestrian stride into different states according to the step height and the limit on steps crossed per stride.
Step 5.2: establish a characteristic equation between the height observation points and their corresponding state points.
Step 5.3: compute the mean square error between the heights of all previous adjacent observation points and the heights of the matched state points.
Step 5.4: using the mean square error as another feature, establish a characteristic equation from the relationship between the heights of adjacent state points and the height difference of each state point.
Step 5.5: establish a conditional-random-field-based height map matching mathematical model and obtain the maximum probability of the state sequence given the heights as the observation sequence; the maximum-probability sequence is the optimal state match of the height.
Step 6: two-dimensional position and height information fusion.
Step 6.1: using the best-matching two-dimensional position state sequence, look up and store the sampling times of the corresponding observation sequence; likewise, look up and store the sampling times of the observation sequence corresponding to the best-matching height state sequence.
Step 6.2: combine the two-dimensional best matching points and the height best matching points by the nearest-time method.
Step 6.3: correct the three-dimensional position information output by the inertial navigation system with the matched three-dimensional positions.
Compared with the prior art, the invention has the following beneficial effects:
firstly, three-dimensional positioning of the pedestrian is realized with high algorithm accuracy; secondly, matching the two-dimensional position and the height information separately reduces algorithm complexity; thirdly, fusing the two-dimensional position and the height information by nearest sampling time improves the accuracy of map matching.
Drawings
FIG. 1 is a block diagram of the framework of the method according to the invention;
FIG. 2 shows the manner in which the pedestrian wears the sensor;
FIG. 3 is a system flow chart of the inertial navigation solution;
FIG. 4 is a comparison of the indoor structure before and after processing;
FIG. 5 shows the processed electronic map and state points;
FIG. 6 is a flow chart of the indoor three-dimensional positioning system;
FIG. 7 shows the three-dimensional position information output by inertial navigation;
FIG. 8 shows the pedestrian matching trajectory output by three-dimensional indoor map matching.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The frame structure of the method of the invention is shown in FIG. 1 and comprises the following steps:
Step 1: data acquisition; preliminarily resolve the three-dimensional position and heading of the indoor pedestrian.
Step 1.1: the pedestrian wears an MEMS-INS sensor to collect pedestrian motion data; the wearing manner is shown in FIG. 2. The pedestrian motion data comprise three-axis acceleration data and three-axis gyro data.
Step 1.2: resolve the three-dimensional position and heading information from the collected pedestrian motion data with a strapdown solution algorithm; the system flow chart of the inertial navigation solution is shown in FIG. 3.
When the sampling interval is sufficiently short, the velocity-position relationship satisfies:

p^n(t) = p^n(t−1) + v^n(t)·T
v^n(t) = v^n(t−1) + a^n(t)·T
a^n(t) = C_b^n·a^b(t) + g^n

where p^n represents the three-dimensional position coordinates of the pedestrian; t denotes the sampling time; d^n represents the distance traveled by the pedestrian; T is the sampling interval; v^n represents the velocity information of the pedestrian; a^n represents the pedestrian acceleration; C_b^n is the coordinate transformation matrix from the carrier coordinate system b to the navigation coordinate system n; g^n is the acceleration of gravity.
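For illustration, the discrete update above can be sketched as follows. This is a minimal sketch using the symbols defined above (p^n, v^n, C_b^n, a^b, g^n, T); the function name and the gravity vector value are illustrative assumptions, not part of the patent:

```python
import numpy as np

def strapdown_update(p_n, v_n, C_bn, a_b, T, g_n=np.array([0.0, 0.0, -9.81])):
    """One discrete strapdown step: rotate the body-frame specific force
    into the navigation frame, add gravity, then integrate velocity and
    position over the sampling interval T."""
    a_n = C_bn @ a_b + g_n        # a^n(t) = C_b^n a^b(t) + g^n
    v_n = v_n + a_n * T           # v^n(t) = v^n(t-1) + a^n(t) T
    p_n = p_n + v_n * T           # p^n(t) = p^n(t-1) + v^n(t) T
    return p_n, v_n
```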
Step 2: and extracting the observation points of the conditional random field model.
And 2.1, extracting horizontal two-dimensional position information according to a fixed-length walking distance, extracting pedestrian height information at a zero-speed moment, acquiring observation points of the CRF model, and respectively recording the horizontal two-dimensional position of the pedestrian and the sampling moment of the height information.
(1) The horizontal two-dimensional position observation points are extracted as follows. When

Distance = ||p^n_xy(t) − P_ob1(t_ob1 − 1)|| ≥ Threshold

the position coordinates at this moment are recorded as a two-dimensional position observation point, and the corresponding time is recorded:

P_ob1(t_ob1) = p^n_xy(t)
time1(t_ob1) = t

where Distance denotes the Euclidean distance between the coordinates at the current time and those of the previous sampling point; P_ob1(t_ob1) denotes the two-dimensional coordinates of the t_ob1-th observation point; p^n_xy(t) denotes the position coordinates of the navigation output at time t; Threshold is the preset walking-distance threshold; time1 records the sampling times of all observation points.
(2) The height observation points are extracted as follows. When

v(t) == 0

the height at this moment is recorded as a height observation point, and the corresponding time is recorded:

H_ob2(t_ob2) = p^n_z(t)
time2(t_ob2) = t

where H_ob2(t_ob2) denotes the t_ob2-th height observation point; time2 records the corresponding sampling times; p^n_z(t) denotes the height information of the navigation output at time t.
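A minimal sketch of this extraction follows; the function name, array layout, and the exact zero-velocity test are illustrative assumptions:

```python
import numpy as np

def extract_observation_points(p_xy, p_z, v, t, threshold):
    """Walk through the navigation output and collect the CRF observation
    points: 2D points every `threshold` metres of travel, height points at
    zero-velocity instants. Returns (P_ob1, time1, H_ob2, time2)."""
    P_ob1, time1, H_ob2, time2 = [], [], [], []
    last = p_xy[0]
    for k in range(len(t)):
        if np.linalg.norm(p_xy[k] - last) >= threshold:  # fixed-length rule
            P_ob1.append(p_xy[k]); time1.append(t[k]); last = p_xy[k]
        if v[k] == 0:                                    # zero-velocity rule
            H_ob2.append(p_z[k]); time2.append(t[k])
    return np.array(P_ob1), np.array(time1), np.array(H_ob2), np.array(time2)
```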
And step 3: and establishing an indoor electronic map, creating a state point according to the indoor structure information, and storing the coordinate of the state point.
And 3.1, creating an indoor electronic map by using the known indoor map, wherein the electronic map is as shown in figure 4, and storing the electronic map in the navigation computer.
And 3.2, solving coordinate points with the minimum and maximum numerical values in the electronic map as the value range of the state points, covering the whole value range with the equally spaced state points, and storing coordinate information (X, Y) of all the state points.
(1) Find the coordinate points with the minimum and maximum values in the electronic map:

p_min = (x_min, y_min) = min(X, Y)
p_max = (x_max, y_max) = max(X, Y)

where p_min and p_max denote the minimum and maximum position points of the state-point coverage range.

(2) The distribution range of the state points obtained from the maximum and minimum position points is:

r1 = p_min = (x_min, y_min)
r2 = (x_min, y_max)
r3 = (x_max, y_min)
r4 = p_max = (x_max, y_max)

where r1, r2, r3, r4 are the four vertex coordinates of the state-point range matrix.

(3) Find the state point coordinates: select the minimum coordinate point as the first state point and the Threshold length as the spacing between state points. The state point model is:

state1(0, 0) = r1
state1(i_s, j_s) = r1 + (i_s × Threshold, j_s × Threshold) < r4

where state1 is the collective term for all state points and (i_s, j_s) are the indices under which a state point is stored.
Step 3.3: since the height change of each stride on a staircase is an integer multiple of the step height, the height of the state points added in stair areas is based on the step height stair_high, combined with the state point distribution of the two-dimensional position. The step-height model is:

State2(N) = 0 + N × stair_high

where State2 is the set of all step-height state points and N denotes the index, so State2(N) is determined by the number of steps. Combining state1 and State2 yields the three-dimensional coordinates of the state points, and the coordinate information of all state points is stored. The three-dimensional state points and the digital map are shown in FIG. 5.
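The state-point construction of steps 3.2 and 3.3 can be sketched as follows; the grid layout, function name, and return types are illustrative assumptions:

```python
import numpy as np

def build_state_points(X, Y, threshold, stair_high, n_steps):
    """Generate the 2D state-point grid (state1) covering [p_min, p_max]
    at `threshold` spacing, plus the step-height states (State2)."""
    x_min, y_min = X.min(), Y.min()   # p_min
    x_max, y_max = X.max(), Y.max()   # p_max
    xs = np.arange(x_min, x_max + threshold, threshold)
    ys = np.arange(y_min, y_max + threshold, threshold)
    state1 = np.array([(x, y) for x in xs for y in ys])             # 2D grid
    state2 = np.array([n * stair_high for n in range(n_steps + 1)]) # heights
    return state1, state2
```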
Step 4: two-dimensional position map matching algorithm based on the conditional random field; the flow chart of the matching and fusion method is shown in FIG. 6.
Step 4.1: establish a characteristic equation from the relationship between the two-dimensional position observation point coordinates and each state point coordinate; the state point corresponding to the two-dimensional position is denoted S_p.
f_c(S_p(t_ob1), P_ob1(t_ob1)) = −||P_ob1(t_ob1) − S_p(t_ob1)||² / (2σ_c²)

where f_c represents the relationship between the observation point coordinates and the state point coordinates; S_p(t_ob1) = (x_s(t_ob1), y_s(t_ob1)) denotes the (x, y) coordinates of the state point at time t_ob1; P_ob1(t_ob1) = (x_o(t_ob1), y_o(t_ob1)) denotes the (x, y) coordinates of the observation point at time t_ob1; σ_c represents the covariance of the range error between the state point and the observation point.
Step 4.2: compute the azimuth information of the observation points and establish a characteristic equation from the azimuth angle between the observation point azimuth and the state points at the corresponding moments:

θ_ob1(t_ob1) = arctan( (y_o(t_ob1) − y_o(t_ob1−1)) / (x_o(t_ob1) − x_o(t_ob1−1)) )
θ(S_p(t_ob1−1), S_p(t_ob1)) = arctan( (y_s(t_ob1) − y_s(t_ob1−1)) / (x_s(t_ob1) − x_s(t_ob1−1)) )
B(S_p(t_ob1−1), S_p(t_ob1)) = −(θ_ob1(t_ob1) − θ(S_p(t_ob1−1), S_p(t_ob1)))² / (2σ_θ²)

where σ_θ represents the covariance of the observed azimuth error; B(S_p(t_ob1−1), S_p(t_ob1)) represents the function between the state points at times t_ob1−1 and t_ob1; θ(S_p(t_ob1−1), S_p(t_ob1)) represents the azimuth angle function between the state points at times t_ob1−1 and t_ob1, taking the positive X axis of the map coordinate system as reference.
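Under the forms reconstructed above, the two position features can be sketched as plain functions; the quadratic log-potential form is an assumption:

```python
import math

def f_c(s_p, p_ob, sigma_c):
    """Distance feature between a state point and an observation point."""
    dx, dy = p_ob[0] - s_p[0], p_ob[1] - s_p[1]
    return -(dx * dx + dy * dy) / (2 * sigma_c ** 2)

def f_theta(s_prev, s_cur, theta_ob, sigma_theta):
    """Azimuth feature: compare the observed heading with the heading
    implied by two consecutive state points (positive X axis as reference)."""
    theta_s = math.atan2(s_cur[1] - s_prev[1], s_cur[0] - s_prev[0])
    return -((theta_ob - theta_s) ** 2) / (2 * sigma_theta ** 2)
```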
Step 4.3: establish a conditional-random-field-based two-dimensional map matching mathematical model and obtain the maximum probability of the state sequence given the two-dimensional positions as the observation sequence; the maximum-probability sequence is the optimal state match of the position.

P(S_p | P_ob1) = (1/Z_ob1) · exp( Σ_{t_ob1} [ Σ_i λ_p,i · f_i(S_p(t_ob1), P_ob1(t_ob1)) + Σ_l μ_p,l · B_l(S_p(t_ob1−1), S_p(t_ob1)) ] )
Z_ob1 = Σ_{S_p} exp( · )
S_P* = argmax_{S_p} P(S_p | P_ob1)

The maximum-probability state point sequence S_P* is computed with the Viterbi algorithm, where λ_p and μ_p respectively denote the weight of each feature in the two-dimensional map matching model (all weights are set to 1); i and l denote the numbers of characteristic functions; Z_ob1 is the normalization factor.
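Both this two-dimensional model and the height model of step 5 reduce to maximum-probability decoding over a linear chain of state points, which the Viterbi algorithm solves. A compact sketch follows; the simplified feature signatures are assumptions:

```python
import numpy as np

def viterbi(states, observations, unary, pairwise):
    """Maximum-probability state sequence for a linear-chain CRF.
    unary(s, o) and pairwise(s_prev, s, o) return log-potentials,
    i.e. the feature sums with all weights set to 1 as in the text."""
    n_obs, n_st = len(observations), len(states)
    score = np.array([unary(s, observations[0]) for s in states])
    back = np.zeros((n_obs, n_st), dtype=int)
    for t in range(1, n_obs):
        new_score = np.empty(n_st)
        for j, s in enumerate(states):
            cand = [score[i] + pairwise(states[i], s, observations[t])
                    for i in range(n_st)]
            best = int(np.argmax(cand))
            back[t, j] = best
            new_score[j] = cand[best] + unary(s, observations[t])
        score = new_score
    idx = [int(np.argmax(score))]          # best final state
    for t in range(n_obs - 1, 0, -1):      # backtrack
        idx.append(int(back[t, idx[-1]]))
    return [states[i] for i in reversed(idx)]
```

The same routine decodes the height chain of step 5 by substituting the height states, observations, and features.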
Step 5: height information map matching algorithm based on the conditional random field.
Step 5.1: estimate the state corresponding to each pedestrian stride according to the step height and the limit on steps crossed per stride.
Using the limit on the number of steps a pedestrian can cross per stride, assume the pedestrian crosses at most N_T steps at a time; the height change of each stride then lies within (−N_T, N_T) steps. The height states of each stride are therefore:

S_h = ( (−N_T)×stair_high, (1−N_T)×stair_high, …, N_T×stair_high )
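For illustration, with assumed values for the step height and the stride limit (neither is specified here), the state set S_h can be enumerated as:

```python
stair_high = 0.16   # assumed step height in metres (not given in the text)
N_T = 2             # assumed maximum steps crossed per stride
S_h = [n * stair_high for n in range(-N_T, N_T + 1)]
# S_h = [-0.32, -0.16, 0.0, 0.16, 0.32]
```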
Step 5.2: establish a characteristic equation between the height observation points and their corresponding state points.
g(S_h(t_ob2), H_ob2(t_ob2)) = −(H_ob2(t_ob2) − S_h(t_ob2))² / (2σ_h²)
h(S_h(t_ob2−1), S_h(t_ob2)) = (S_h(t_ob2) − S_h(t_ob2−1)) / STAIR_HIGH
B(S_h(t_ob2−1), S_h(t_ob2)) = h(S_h(t_ob2−1), S_h(t_ob2))

where S_h(t_ob2) represents the height of the state point at time t_ob2; H_ob2(t_ob2) represents the height of the observation point at time t_ob2; g represents the functional relationship between the observation point height and the state point height; σ_h represents the covariance of the heights between the state point and the observation point; STAIR_HIGH represents the height of each step; B(S_h(t_ob2−1), S_h(t_ob2)) represents the function between the state points at times t_ob2−1 and t_ob2; h(S_h(t_ob2−1), S_h(t_ob2)) represents the relative height function between those state points.
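Under the reconstructed forms above, the two height features can be sketched as plain functions; the exact forms are assumptions:

```python
def g_height(s_h, h_ob, sigma_h):
    """Observation feature: penalize squared height mismatch between
    the state point height s_h and the observed height h_ob."""
    return -((h_ob - s_h) ** 2) / (2 * sigma_h ** 2)

def h_relative(s_prev, s_cur, stair_high):
    """Transition feature: relative height change between consecutive
    state points, in units of steps."""
    return (s_cur - s_prev) / stair_high
```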
Step 5.3: compute the mean square error between the heights of all previous observation points and the heights of the matched state points.
δH_ob2(t_ob2) = H_ob2(t_ob2) − S_h*(t_ob2)
δH̄_ob2 = (1/t_ob2) · Σ_{k=1}^{t_ob2} δH_ob2(k)
S_H_ob2(t_ob2) = (1/t_ob2) · Σ_{k=1}^{t_ob2} ( δH_ob2(k) − δH̄_ob2 )²

where δH_ob2 is the error vector between the observation point heights and the matched state point heights, and δH̄_ob2 denotes the mean error vector.
Step 5.4: using the mean square error as another feature, establish a characteristic equation from the relationship between the heights of adjacent state points and the height difference of each state point.
f_s(S_h(t_ob2), H_ob2(t_ob2), S_H_ob2(t_ob2)) = −( g_c(S_h(t_ob2), H_ob2(t_ob2), S_H_ob2(t_ob2)) )² / (2σ_s²)
g_c(S_h(t_ob2), H_ob2(t_ob2), S_H_ob2(t_ob2)) = (H_ob2(t_ob2) − S_h(t_ob2)) − S_H_ob2(t_ob2)

where σ_s represents the height error covariance; S_H_ob2(t_ob2) represents the mean square error of all observation errors before time t_ob2.
Step 5.5: establish a conditional-random-field-based height map matching mathematical model and obtain the maximum probability of the state sequence given the heights as the observation sequence; the maximum-probability sequence is the optimal state match of the height.

P(S_h | H_ob2) = (1/Z_ob2) · exp( Σ_{t_ob2} [ Σ_i λ_h,i · f_i(S_h(t_ob2), H_ob2(t_ob2)) + Σ_l μ_h,l · B_l(S_h(t_ob2−1), S_h(t_ob2)) ] )
Z_ob2 = Σ_{S_h} exp( · )
S_h* = argmax_{S_h} P(S_h | H_ob2)

The maximum-probability state point sequence S_h* is computed with the Viterbi algorithm, where λ_h and μ_h denote the weights corresponding to the features (all set to 1); i and l denote the numbers of characteristic functions; Z_ob2 is the normalization factor.
Step 6: two-dimensional position and height information fusion
Step 6.1: using the best-matching two-dimensional position state sequence, look up and store the sampling times of the corresponding observation sequence; likewise, look up and store the sampling times of the observation sequence corresponding to the best-matching height state sequence.
Step 6.2: combine the two-dimensional best matching points and the height best matching points by the nearest-time method.
time = |time1(t_ob1) − time2(k_ob2)|,  k_ob2 = 1, …, t_ob2

subject to 0 < t_ob1 < t_ob2. The minimum of time and the corresponding k_ob2 are found and denoted k_ob2*:

k_ob2* = argmin_{k_ob2} |time1(t_ob1) − time2(k_ob2)|
S(t_ob1) = S_h*(k_ob2*)
S*(t_ob1) = < S_P*(t_ob1), S(t_ob1) >

where S(t_ob1) represents the pedestrian height at time t_ob1 obtained by nearest-point fusion; S* is the final pedestrian trajectory information.
Step 6.3: correct and feed back the three-dimensional position information output by the inertial navigation system with the matched three-dimensional position. The mathematical model of the correction feedback is:

p^n(t) = S*(t_ob1)
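Steps 6.1 and 6.2 can be sketched as follows; the function name and data layout are illustrative assumptions:

```python
import numpy as np

def fuse_nearest_time(S_p_star, time1, S_h_star, time2):
    """Attach to each 2D best-match point the height best-match point with
    the nearest sampling time: k*_ob2 = argmin_k |time1(t_ob1) - time2(k)|."""
    time2 = np.asarray(time2)
    fused = []
    for (x, y), t1 in zip(S_p_star, time1):
        k = int(np.argmin(np.abs(time2 - t1)))
        fused.append((x, y, S_h_star[k]))
    return fused   # the final trajectory S*
```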
To verify the validity of the algorithm, experimental verification was performed. Taking an indoor office environment as an example, the experimental site comprises two indoor environments: a corridor and a staircase. FIG. 7 shows the three-dimensional position information calculated by the inertial navigation system; errors exist in the navigation output in both the two-dimensional trajectory and the height information, so the positioning is inaccurate. The matching trajectory produced by the conditional-random-field-based three-dimensional indoor map matching algorithm is shown in FIG. 8. The experimental results show that the matching result of the method is highly accurate and effective.

Claims (1)

1. A pedestrian indoor three-dimensional map matching method, characterized by comprising the following steps:
step 1: data acquisition, namely preliminarily resolving the three-dimensional position and the course of an indoor pedestrian;
step 1.1, a pedestrian collects pedestrian movement data by wearing an MEMS-INS sensor, wherein the pedestrian movement data comprises: three-axis acceleration data and three-axis gyro data;
step 1.2, solving three-dimensional position and course information of the collected pedestrian motion data by using a strapdown resolving algorithm;
step 2: extracting observation points of the conditional random field model;
step 2.1, extracting horizontal two-dimensional position information of indoor pedestrians according to a fixed-length walking distance, extracting pedestrian height information at a zero-speed moment, acquiring observation points of a conditional random field (CRF) model, and respectively recording two-dimensional positions of the indoor pedestrians and sampling moments of the height information;
and step 3: establishing an indoor electronic map, creating state points according to indoor structure information, and storing state point coordinates;
step 3.1, creating an indoor electronic map by using a known indoor map, and storing the indoor electronic map in a navigation computer;
step 3.2, coordinate points with the minimum and maximum numerical values in the electronic map are obtained and used as value ranges of the state points, then the state points at equal intervals cover the whole indoor range, and state point information is stored;
step 3.3, adding a state point on the height information by taking the step height as a standard;
step 4, a two-dimensional position map matching algorithm based on a conditional random field algorithm;
step 4.1, establishing a characteristic equation according to the relationship between the two-dimensional position observation point coordinates and each state point coordinate;
step 4.2, establishing a characteristic equation according to the azimuth angle between the azimuth information of the observation point and the state point at the corresponding moment;
step 4.3, establishing a two-dimensional map matching mathematical model based on the conditional random field, and obtaining the maximum probability of the state sequence under the condition that the two-dimensional position is taken as the observation sequence, wherein the maximum probability sequence is the optimal state matching of the position;
step 5, a height information map matching algorithm based on the conditional random field algorithm;
step 5.1, dividing the walking height of each step of the pedestrian into different states according to the height of the step and the limit value of the step of the pedestrian;
step 5.2, establishing a characteristic equation which takes the height as an observation point and a state point corresponding to the observation point;
step 5.3, solving the mean square error between the height information of all the previous adjacent observation points and the height of the matched state point;
step 5.4, taking the mean square error as another characteristic, and establishing a characteristic equation according to the relation of the height difference of the adjacent state points;
step 5.5, establishing a height map matching mathematical model based on the conditional random field, and obtaining the maximum probability of a state sequence under the condition that the height is taken as an observation sequence, wherein the maximum probability sequence is the optimal state matching of the height;
step 6: two-dimensional position and height information fusion
Step 6.1, inquiring the sampling time of the corresponding observation sequence by using the state sequence with the best matching in the two-dimensional position for storage, and inquiring the sampling time of the corresponding observation sequence by using the state sequence with the best matching in the height for storage;
step 6.2, combining the two-dimensional optimal matching point and the height optimal matching point by using a method of adjacent time;
and 6.3, correcting the three-dimensional position information output by the inertial navigation system according to the matched three-dimensional position.
CN201810176554.5A 2018-03-03 2018-03-03 Indoor three-dimensional map matching method for pedestrians Active CN108426582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810176554.5A CN108426582B (en) 2018-03-03 2018-03-03 Indoor three-dimensional map matching method for pedestrians

Publications (2)

Publication Number Publication Date
CN108426582A CN108426582A (en) 2018-08-21
CN108426582B (en) 2021-07-30

Family

ID=63157697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810176554.5A Active CN108426582B (en) 2018-03-03 2018-03-03 Indoor three-dimensional map matching method for pedestrians

Country Status (1)

Country Link
CN (1) CN108426582B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110032709B (en) * 2019-01-24 2023-04-14 太原理工大学 Positioning and estimation method for abnormal point in geographic coordinate conversion
CN110337065A (en) * 2019-05-09 2019-10-15 南京工程学院 A kind of intelligent hinge personnel positioning monitoring and pre-warning system and method based on three-dimensional map
CN111982132B (en) * 2019-05-22 2022-06-14 合肥四维图新科技有限公司 Data processing method, device and storage medium
CN110543917B (en) * 2019-09-06 2021-09-28 电子科技大学 Indoor map matching method by utilizing pedestrian inertial navigation track and video information
CN113720332B (en) * 2021-06-30 2022-06-07 北京航空航天大学 Floor autonomous identification method based on floor height model

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8605998B2 (en) * 2011-05-06 2013-12-10 Toyota Motor Engineering & Manufacturing North America, Inc. Real-time 3D point cloud obstacle discriminator apparatus and associated methodology for training a classifier via bootstrapping
CN104023228A (en) * 2014-06-12 2014-09-03 北京工业大学 Self-adaptive indoor vision positioning method based on global motion estimation
CN107179085A (en) * 2016-03-10 2017-09-19 中国科学院地理科学与资源研究所 A kind of condition random field map-matching method towards sparse floating car data
CN106871894A (en) * 2017-03-23 2017-06-20 北京工业大学 A kind of map-matching method based on condition random field
CN107635204A (en) * 2017-09-27 2018-01-26 深圳大学 A kind of indoor fusion and positioning method and device of motor behavior auxiliary, storage medium
CN108322889A (en) * 2018-02-01 2018-07-24 深圳市交投科技有限公司 A kind of method, storage medium and the intelligent terminal of multisource data fusion indoor positioning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Indoor Pedestrian Navigation Based on Conditional Random Field Algorithm; Mingrong Ren et al.; Micromachines; 2017-10-30; Vol. 8, No. 11; pp. 1-11 *
An Indoor Positioning Method Based on Speech Recognition (一种基于语音识别的室内定位方法); Zhang Xiaojun et al.; Journal of Chinese Computer Systems (小型微型计算机系统); 2016-08-31; Vol. 37, No. 8; pp. 1883-1888 *

Also Published As

Publication number Publication date
CN108426582A (en) 2018-08-21

Similar Documents

Publication Publication Date Title
CN108426582B (en) Indoor three-dimensional map matching method for pedestrians
CN111272165B (en) Intelligent vehicle positioning method based on characteristic point calibration
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN110118560B (en) Indoor positioning method based on LSTM and multi-sensor fusion
CN107635204B (en) Indoor fusion positioning method and device assisted by exercise behaviors and storage medium
CN112639502A (en) Robot pose estimation
CN114526745B (en) Drawing construction method and system for tightly coupled laser radar and inertial odometer
Engel et al. Deeplocalization: Landmark-based self-localization with deep neural networks
JP4984659B2 (en) Own vehicle position estimation device
CN111288989B (en) Visual positioning method for small unmanned aerial vehicle
CN103674021A (en) Integrated navigation system and method based on SINS (Strapdown Inertial Navigation System) and star sensor
CN110763239B (en) Filtering combined laser SLAM mapping method and device
CN108195376A (en) Small drone Camera calibration method
CN110208783B (en) Intelligent vehicle positioning method based on environment contour
CN112965063B (en) Robot mapping and positioning method
CN112004183B (en) Robot autonomous positioning method based on convolution neural network fusion IMU and WiFi information
CN112284376A (en) Mobile robot indoor positioning mapping method based on multi-sensor fusion
CN110412596A (en) A kind of robot localization method based on image information and laser point cloud
CN109164411A (en) A kind of personnel positioning method based on multi-data fusion
CN113741503B (en) Autonomous positioning unmanned aerial vehicle and indoor path autonomous planning method thereof
CN111060099A (en) Real-time positioning method for unmanned automobile
CN114323033A (en) Positioning method and device based on lane lines and feature points and automatic driving vehicle
CN115639823A (en) Terrain sensing and movement control method and system for robot under rugged and undulating terrain
CN109341682B (en) Method for improving geomagnetic field positioning accuracy
CN115183762A (en) Airport warehouse inside and outside mapping method, system, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant