CN113029153B - Multi-scene PDR positioning method based on intelligent mobile phone multi-sensor fusion and SVM classification - Google Patents
- Publication number
- CN113029153B (application CN202110334800.7A)
- Authority
- CN
- China
- Prior art keywords
- mobile phone
- acceleration
- axis
- pedestrian
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
- G01C22/00—Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
- G06F18/2411—Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
Abstract
The invention relates to a multi-scene PDR positioning method based on smartphone multi-sensor fusion and SVM classification, in the technical field of positioning. The method comprises two parts. First, the motion characteristics of a pedestrian are measured by the multiple sensors of a smartphone: steps are detected by a peak-detection method, step length is estimated by a nonlinear step-length model, heading is estimated from the motion characteristics, and step detection, step-length estimation and heading estimation are combined to obtain high-precision pedestrian trajectory data. Second, a support vector machine is used to distinguish the modes in which the pedestrian carries the smartphone, so that the corresponding calculation model can be applied in each mode, effectively avoiding the larger errors of traditional PDR methods when switching between scenes. The designed multi-scene PDR positioning method has high accuracy and running speed, and requires only a smartphone without external auxiliary equipment, so that positioning accuracy and robustness are markedly improved.
Description
Technical Field
The invention relates to the field of indoor positioning, in particular to a multi-scene PDR positioning method based on smartphone multi-sensor fusion and SVM classification.
Background
With the popularization of intelligent mobile devices, market demand for indoor positioning keeps growing, and achieving high-precision indoor positioning in close-range, complex environments has gradually become a research hotspot; indoor positioning approaches such as WiFi, Bluetooth, ultrasonic, infrared and magnetic-field positioning continue to emerge. In complex and changeable indoor environments, however, the existing methods all have certain limitations and are difficult to put into practical application.
Pedestrian dead reckoning (PDR) obtains the step count, step length and heading of a pedestrian from the sensors of a mobile terminal and then reckons the pedestrian's current position. Compared with positioning modes such as infrared and ultrasonic, PDR needs no additional equipment in the environment; position estimation can be completed relying only on the accelerometer, gyroscope and magnetometer built into the smartphone. However, its positioning accuracy is affected by single-step error accumulation, inconsistent motion between the mobile terminal and the pedestrian, and similar factors.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a multi-scene PDR positioning method based on smartphone multi-sensor fusion and SVM classification.
The aim of the invention is realized by the following technical scheme: a multi-scene PDR positioning method based on smartphone multi-sensor fusion and SVM classification comprises the following steps:
step 1: acquiring the initial position of the current equipment in a UWB or WiFi positioning mode;
Step 2: acquiring the acceleration, angular velocity and barometric pressure value of the current mobile phone through an accelerometer, a gyroscope and a barometer integrated with the mobile phone;
Step 3: carrying out low-pass filtering treatment on the data acquired in the step 2 to filter sampling noise;
Step 4: performing quaternion transformation on the acceleration and angular velocity data filtered in the step 3, and converting the acceleration and angular velocity data from a carrier coordinate system to a world coordinate system;
step 5: taking the magnitude of the z-axis acceleration under the world coordinate system as the basis of step number detection, and judging the step number by adopting an acceleration peak value detection method;
Step 6: after detecting one step according to the peak detection method in the step 5, calculating the time T spent by the one step to obtain the step frequency f=1/T, and calculating the step length according to a nonlinear step length model;
step 7: classifying by an SVM method based on the frequency-domain characteristics of the filtered acceleration, distinguishing four states: the mobile phone in the pedestrian's pocket, the mobile phone in the swinging left hand, the mobile phone in the swinging right hand, and the mobile phone held flat;
Step 8: for four different states, calculating the advancing direction in one step by detecting the angular speed or acceleration characteristic value;
step 9: updating the position of the pedestrian after walking for one step based on the step length calculated in the step 6 and the direction calculated in the step 8;
step 10: continuously detecting the change of the barometer; if the barometric pressure shows a continuous descending trend, the pedestrian is judged to be going upstairs; if it shows a continuous ascending trend, the pedestrian is judged to be going downstairs;
Step 11: continuously detecting the change of the step number after finishing the position updating of one step; if a new step appears, repeating the steps 5 to 10; otherwise, ending the positioning process.
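Taken together, steps 6, 8 and 9 form a standard dead-reckoning update. A minimal sketch, assuming an east/north coordinate frame and a heading measured clockwise from north (conventions the text does not fix):

```python
import math

def pdr_update(pos, step_length, heading_rad):
    """Advance the pedestrian position by one detected step.

    pos is (east, north) in metres; heading_rad is the advancing
    direction in radians, measured clockwise from north (an assumed
    convention -- the text does not fix one).
    """
    east, north = pos
    return (east + step_length * math.sin(heading_rad),
            north + step_length * math.cos(heading_rad))

# Two 0.7 m steps due north move the position 1.4 m north.
p = (0.0, 0.0)
for _ in range(2):
    p = pdr_update(p, 0.7, 0.0)
```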
Further, in the step 3, the passband cut-off frequency of the low-pass filter is 2.3 Hz to 3 Hz, preferably set at the upper limit of the pedestrian step frequency.
Further, in the step 4, the differential equation of the quaternion is as follows:

dq_0/dt = -(ω_x·q_1 + ω_y·q_2 + ω_z·q_3)/2
dq_1/dt = (ω_x·q_0 + ω_z·q_2 - ω_y·q_3)/2
dq_2/dt = (ω_y·q_0 - ω_z·q_1 + ω_x·q_3)/2
dq_3/dt = (ω_z·q_0 + ω_y·q_1 - ω_x·q_2)/2

wherein ω_x, ω_y, ω_z represent the triaxial angular velocity measured by the gyroscope sensor, and q_0, q_1, q_2, q_3 represent the quaternion to be solved;
the differential equation is solved iteratively by the first-order Picard method, with the calculation formula:

q(t + Δt) = q(t) + (Δt/2)·Ω(ω)·q(t)

wherein Δt represents the time increment and Ω(ω) is the skew-symmetric matrix formed from the measured angular velocity;
the quaternion is updated with the continuously measured gyroscope angular velocity, and the Euler angles are calculated from the relationship between the quaternion direction cosine matrix and the Euler angles:

φ = atan2(2(q_0q_1 + q_2q_3), 1 - 2(q_1² + q_2²))
θ = arcsin(2(q_0q_2 - q_3q_1))
ψ = atan2(2(q_0q_3 + q_1q_2), 1 - 2(q_2² + q_3²))

wherein φ, θ and ψ respectively represent the roll, pitch and yaw angles, completing the conversion from the carrier coordinate system to the world coordinate system.
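The quaternion update and Euler-angle extraction of step 4 can be sketched as follows; this is a generic first-order (Picard) integrator with the standard scalar-first quaternion-to-Euler formulas, not code from the patent:

```python
import math

def quat_update(q, omega, dt):
    """One first-order Picard step of the quaternion kinematic equation
    dq/dt = 0.5 * Omega(omega) * q, followed by renormalization."""
    q0, q1, q2, q3 = q
    wx, wy, wz = omega
    q = [q0 + 0.5 * dt * (-wx*q1 - wy*q2 - wz*q3),
         q1 + 0.5 * dt * ( wx*q0 + wz*q2 - wy*q3),
         q2 + 0.5 * dt * ( wy*q0 - wz*q1 + wx*q3),
         q3 + 0.5 * dt * ( wz*q0 + wy*q1 - wx*q2)]
    n = math.sqrt(sum(c * c for c in q))
    return [c / n for c in q]

def quat_to_euler(q):
    """Roll, pitch, yaw (radians) from a unit quaternion, scalar-first."""
    q0, q1, q2, q3 = q
    roll  = math.atan2(2*(q0*q1 + q2*q3), 1 - 2*(q1*q1 + q2*q2))
    pitch = math.asin(max(-1.0, min(1.0, 2*(q0*q2 - q3*q1))))
    yaw   = math.atan2(2*(q0*q3 + q1*q2), 1 - 2*(q2*q2 + q3*q3))
    return roll, pitch, yaw

# Integrate a pure yaw rate of 0.1 rad/s for one second.
q = [1.0, 0.0, 0.0, 0.0]
for _ in range(1000):
    q = quat_update(q, (0.0, 0.0, 0.1), 0.001)
roll, pitch, yaw = quat_to_euler(q)
```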
Further, in the step 5, the step of implementing the peak detection step number is as follows:
S1: band-pass filtering is carried out on the z-axis acceleration of pedestrians during walking under the world coordinate system, the pass band cut-off frequency is respectively 0.5Hz and 2.5Hz, and noise and gravity components are filtered;
s2: taking peaks whose z-axis acceleration magnitude is not less than 0.5; if the interval between two adjacent peaks is not less than 0.6 s, one step is counted.
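A minimal sketch of this peak-detection rule (thresholds from S1/S2; the 0.5 threshold is assumed to be in m/s² on the gravity-free signal):

```python
def detect_steps(az, fs, min_peak=0.5, min_gap_s=0.6):
    """Index positions of step peaks in band-pass-filtered z acceleration.

    az: samples of world-frame z-axis acceleration (gravity removed,
    units assumed m/s^2); fs: sampling rate in Hz. A sample counts as a
    step when it is a local maximum of at least min_peak and lies at
    least min_gap_s seconds after the previously accepted peak.
    """
    min_gap = int(min_gap_s * fs)
    peaks, last = [], -min_gap
    for i in range(1, len(az) - 1):
        if az[i] >= min_peak and az[i] >= az[i-1] and az[i] > az[i+1]:
            if i - last >= min_gap:
                peaks.append(i)
                last = i
    return peaks

# A clean 1.5 Hz "walking" sinusoid sampled at 50 Hz for 3 s contains
# five peaks, all spaced more than 0.6 s apart.
import math
az = [math.sin(2 * math.pi * 1.5 * i / 50) for i in range(150)]
steps = detect_steps(az, 50)
```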
Further, in the step 6, step length SL is calculated according to the nonlinear step length model by combining step frequency f, height h input by a user and individual characteristic parameter c:
SL=[0.7+0.371×(h-1.75)+0.227×(f-1.79)×h÷1.75]×c;
wherein the value of the individual characteristic parameter c is between 0.8 and 1.2, typically 1.
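The step-length formula is direct to implement; at the model's reference height (1.75 m) and step frequency (1.79 Hz) it returns the base step length of 0.7 m:

```python
def step_length(f, h, c=1.0):
    """Nonlinear step-length model: f is the step frequency in Hz,
    h the user's height in metres, c the individual characteristic
    parameter in [0.8, 1.2] (typically 1)."""
    return (0.7 + 0.371 * (h - 1.75) + 0.227 * (f - 1.79) * h / 1.75) * c

sl = step_length(1.79, 1.75)  # reference walker
```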
Further, in the step 7, the basis of SVM classification is acceleration spectrum characteristics acquired by a mobile phone during walking; on the premise of supervised learning, the acceleration frequency spectrum features of the walking of the user are used for training a classification model, training results are stored in the cloud, and the acceleration frequency spectrum features of the walking of the user are matched with the training results in actual use, so that real-time state classification detection is realized; the SVM classification is realized as follows:
S0: description of principle: SVMs, support vector machines, also known as maximum-interval classifiers, can divide an original sample set into two or more parts by separating hyperplanes. The basic idea of the SVM is to find a hyperplane that can correctly divide the training set and has the greatest geometric separation and maximize the separation, i.e. to translate into a convex quadratic programming problem.
S1: under the condition of supervised learning, carrying out band-pass filtering processing on acceleration data acquired by the mobile phone state of the known pedestrian, wherein the passband cut-off frequency is respectively 0.5Hz and 2.5Hz, and filtering noise and gravity components;
s2: judging the number of steps by a peak value detection method, and marking;
s3: performing a frequency-domain transformation on the time-domain data, segmented with two steps per segment, to obtain frequency-domain features as the training set;
S4: the four categories of data are respectively attached with labels: flat end label=1, pocket label=2, left swing arm label=3, right swing arm label=4;
s5: normalizing the training-set data, mapping it into the interval [0, 1];
S6: training: the radial basis function (RBF) is adopted as the kernel function of the SVM, and the optimal penalty parameter c and RBF kernel parameter g are found by 3-fold cross-validation. The optimal parameters c and g obtained after validation are substituted, together with the training set and its labels, into the SVM training function to obtain the trained model;
S7: prediction: acceleration spectrum data actually collected during walking are, after the filtering, segmentation and frequency-domain transformation operations of steps S1 to S3, input into the SVM classification model trained in step S6 to obtain the predicted classification result.
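Steps S4 to S7 map directly onto scikit-learn's RBF-kernel `SVC`. The sketch below is hypothetical: the Gaussian blobs stand in for real acceleration-spectrum features, and the (C, gamma) grid is much coarser than a full 2^-10 to 2^10 sweep, just to keep the example fast:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Four carrying modes, 30 fake two-step spectra each, 8 frequency bins.
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(30, 8)) for m in range(4)])
y = np.repeat([1, 2, 3, 4], 30)      # flat=1, pocket=2, left=3, right=4 (S4)

X = MinMaxScaler().fit_transform(X)  # S5: map features into [0, 1]

# S6: 3-fold cross-validated search for penalty C and RBF parameter gamma.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": 2.0 ** np.arange(-4.0, 5.0, 2.0),
                     "gamma": 2.0 ** np.arange(-4.0, 5.0, 2.0)},
                    cv=3)
grid.fit(X, y)
pred = grid.predict(X[:1])           # S7: classify a newly observed spectrum
```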
Further, in the step 8, the specific method for detecting the direction of the mobile phone in different states is as follows:
s1: for the states in which the mobile phone is held flat in the hand or in the pedestrian's pocket, first band-pass filter the acceleration values to remove the gravitational acceleration component; then find the instant t_0 within one step period at which the z-axis acceleration satisfies a_z = 0 while increasing (da_z/dt > 0), and take the direction of the vector sum of the x-axis and y-axis accelerations at time t_0 as the advancing direction of the step;
S2: for the case that the mobile phone is in the pedestrian's right swinging hand, when the z-axis angular velocity within one step is greater than 0, take the direction of the peak vector sum of the x-axis and y-axis angular velocities as the advancing direction of the step;
S3: for the case that the mobile phone is in the pedestrian's left swinging hand, when the z-axis angular velocity within one step is less than 0, take the direction of the peak vector sum of the x-axis and y-axis angular velocities as the advancing direction of the step.
Further, in the step 8, for the case that the mobile phone is in the pedestrian's left or right swinging hand, the swing-arm motion during walking can be simplified into two sub-motions: rotation of the arm end around the shoulder in the horizontal plane, represented by the angular velocity ω_z about the z axis, and rotation of the arm end around the shoulder in the vertical plane, represented by the angular velocities ω_x about the x axis and ω_y about the y axis. When a pedestrian swings the phone in the right hand while walking due north, ω_z and ω_x vary approximately sinusoidally with time; since the arm swings in the north-south direction, it hardly rotates about the y axis, i.e. ω_y ≈ 0. The walking direction is estimated from the triaxial angular velocity information as follows: for the right hand, ω_z > 0 while the arm swings from back to front, and the swing is fastest when the phone reaches the lowest point, where ω_x and ω_y peak; the relative size of these two peaks determines the horizontal axis about which the phone is rotating at that moment. Rotating this axis 90° counter-clockwise (or 90° clockwise if the phone is in the left hand) gives the pedestrian's advancing direction at that moment.
Further, in the step 8, when the mobile phone is held flat in the pedestrian's hand or placed in a pocket close to the body, its motion can be regarded as consistent with that of the pedestrian's trunk, and one step can be divided into a leg-stepping stage and a centre-of-gravity transfer stage. In the leg-stepping stage, the rear leg leaves the ground, gradually swings forward, and finally the heel touches the ground ahead; over this stage the gravity-free acceleration along the z axis of the trunk changes from a negative value to a positive value, from lift-off through the highest point to the instant before heel strike. At the instant the z-axis acceleration is 0, the vertically downward velocity of the body reaches its maximum and the body is momentarily stable, swaying least from side to side; the acceleration in the horizontal plane is then produced only by the translational motion of the body in the advancing direction, i.e. the direction of the resultant horizontal acceleration accurately reflects the pedestrian's walking direction.
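The zero-crossing rule for the flat-held / pocket case can be sketched as follows; the atan2 heading convention (angle from +y toward +x) is an assumption, since the text only defines the heading as the x/y acceleration vector sum:

```python
import math

def step_heading(ax, ay, az):
    """Flat-held / pocket case: find the sample i0 in one step period
    where the gravity-free z acceleration crosses zero while increasing
    (a_z = 0, da_z/dt > 0), and return the direction of the horizontal
    acceleration vector (ax, ay) there, in radians from +y toward +x
    (assumed convention)."""
    for i in range(1, len(az)):
        if az[i-1] < 0 <= az[i]:
            return math.atan2(ax[i], ay[i])
    return None

# Synthetic step: a_z rises through zero mid-step while the horizontal
# acceleration points along +y, so the heading is 0 rad.
n = 20
az = [math.sin(math.pi * (i / n - 0.5)) for i in range(n)]
heading = step_heading([0.0] * n, [1.0] * n, az)
```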
The beneficial effects of the invention are as follows: first, pedestrian motion data are obtained from the accelerometer, gyroscope, magnetometer and barometer of a smartphone; steps are detected with a peak-detection method, step length is estimated with a nonlinear step-length model, and heading is estimated from acceleration and angular-velocity characteristics; combining these yields high-precision pedestrian trajectory data. Second, a support vector machine is used to distinguish the mode in which the pedestrian carries the smartphone during positioning, so that a different trajectory calculation model can be applied in each mode, effectively avoiding the larger errors of the traditional PDR method when switching between multiple scenes. In summary, the method is an indoor positioning method with high accuracy, low cost and wide applicability, suitable for practical use.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a waveform of three-axis angular velocity of a pedestrian walking in a north-oriented direction in accordance with an embodiment of the present invention;
FIG. 3 is a diagram of actual classification and predictive classification of a test set in an embodiment of the invention.
Detailed Description
The following description of specific embodiments of the invention is provided to facilitate an understanding of the invention by those skilled in the art. It should be noted that it will be apparent to those skilled in the art that various changes and modifications can be made herein without departing from the principles of the invention, which are also intended to fall within the scope of the appended claims.
As shown in fig. 1, the multi-scene PDR positioning method based on smart phone multi-sensor fusion and SVM classification provided by the invention comprises the following steps:
step 1: acquiring the initial position of the current equipment in a UWB or WiFi positioning mode;
Step 2: acquiring the acceleration, angular velocity and barometric pressure value of the current mobile phone through an accelerometer, a gyroscope and a barometer integrated with the mobile phone;
Step 3: carrying out low-pass filtering treatment on the data acquired in the step 2 to filter sampling noise;
Step 4: performing quaternion transformation on the acceleration and angular velocity data filtered in the step 3, and converting the acceleration and angular velocity data from a carrier coordinate system to a world coordinate system;
step 5: taking the magnitude of the z-axis acceleration under the world coordinate system as the basis of step number detection, and judging the step number by adopting an acceleration peak value detection method;
Step 6: after detecting one step according to the peak detection method in the step 5, calculating the time T spent by the one step to obtain the step frequency f=1/T, and calculating the step length according to a nonlinear step length model;
step 7: classifying by an SVM method based on the frequency-domain characteristics of the filtered acceleration, distinguishing four states: the mobile phone in the pedestrian's pocket, the mobile phone in the swinging left hand, the mobile phone in the swinging right hand, and the mobile phone held flat;
Step 8: for four different states, calculating the advancing direction in one step by detecting the angular speed or acceleration characteristic value;
step 9: updating the position of the pedestrian after walking for one step based on the step length calculated in the step 6 and the direction calculated in the step 8;
step 10: continuously detecting the change of the barometer; if the barometric pressure shows a continuous descending trend, the pedestrian is judged to be going upstairs; if it shows a continuous ascending trend, the pedestrian is judged to be going downstairs;
Step 11: continuously detecting the change of the step number after finishing the position updating of one step; if a new step appears, repeating the steps 5 to 10; otherwise, ending the positioning process.
Further, in the step 3, the passband cut-off frequency of the low-pass filter is 2.3 Hz to 3 Hz, preferably set at the upper limit of the pedestrian step frequency.
Further, in the step 4, the differential equation of the quaternion is as follows:

dq_0/dt = -(ω_x·q_1 + ω_y·q_2 + ω_z·q_3)/2
dq_1/dt = (ω_x·q_0 + ω_z·q_2 - ω_y·q_3)/2
dq_2/dt = (ω_y·q_0 - ω_z·q_1 + ω_x·q_3)/2
dq_3/dt = (ω_z·q_0 + ω_y·q_1 - ω_x·q_2)/2

wherein ω_x, ω_y, ω_z represent the triaxial angular velocity measured by the gyroscope sensor, and q_0, q_1, q_2, q_3 represent the quaternion to be solved;
the differential equation is solved iteratively by the first-order Picard method, with the calculation formula:

q(t + Δt) = q(t) + (Δt/2)·Ω(ω)·q(t)

wherein Δt represents the time increment and Ω(ω) is the skew-symmetric matrix formed from the measured angular velocity;
the quaternion is updated with the continuously measured gyroscope angular velocity, and the Euler angles are calculated from the relationship between the quaternion direction cosine matrix and the Euler angles:

φ = atan2(2(q_0q_1 + q_2q_3), 1 - 2(q_1² + q_2²))
θ = arcsin(2(q_0q_2 - q_3q_1))
ψ = atan2(2(q_0q_3 + q_1q_2), 1 - 2(q_2² + q_3²))

wherein φ, θ and ψ respectively represent the roll, pitch and yaw angles, completing the conversion from the carrier coordinate system to the world coordinate system.
Further, in the step 5, the step of implementing the peak detection step number is as follows:
S1: band-pass filtering is carried out on the z-axis acceleration of pedestrians during walking under the world coordinate system, the pass band cut-off frequency is respectively 0.5Hz and 2.5Hz, and noise and gravity components are filtered;
s2: taking peaks whose z-axis acceleration magnitude is not less than 0.5; if the interval between two adjacent peaks is not less than 0.6 s, one step is counted.
Further, in the step 6, step length SL is calculated according to the nonlinear step length model by combining step frequency f, height h input by a user and individual characteristic parameter c:
SL=[0.7+0.371×(h-1.75)+0.227×(f-1.79)×h÷1.75]×c;
wherein the value of the individual characteristic parameter c is between 0.8 and 1.2, typically 1.
Further, in the step 7, the basis of SVM classification is acceleration spectrum characteristics acquired by a mobile phone during walking; on the premise of supervised learning, the acceleration frequency spectrum features of the walking of the user are used for training a classification model, training results are stored in the cloud, and the acceleration frequency spectrum features of the walking of the user are matched with the training results in actual use, so that real-time state classification detection is realized; the SVM classification is realized as follows:
S0: description of principle: SVMs, support vector machines, also known as maximum-interval classifiers, can divide an original sample set into two or more parts by separating hyperplanes. The basic idea of the SVM is to find a hyperplane that can correctly divide the training set and has the greatest geometric separation and maximize the separation, i.e. to translate into a convex quadratic programming problem.
S1: under the condition of supervised learning, carrying out band-pass filtering processing on acceleration data acquired by the mobile phone state of the known pedestrian, wherein the passband cut-off frequency is respectively 0.5Hz and 2.5Hz, and filtering noise and gravity components;
s2: judging the number of steps by a peak value detection method, and marking;
s3: performing a frequency-domain transformation on the time-domain data, segmented with two steps per segment, to obtain frequency-domain features as the training set;
S4: the four categories of data are respectively attached with labels: flat end label=1, pocket label=2, left swing arm label=3, right swing arm label=4;
s5: normalizing the training-set data, mapping it into the interval [0, 1];
s6: training: the radial basis function (RBF) is adopted as the kernel function of the SVM, and the optimal penalty parameter c and RBF kernel parameter g are found by 3-fold cross-validation. The parameter c trades off simplicity of the decision boundary against correctly classified samples, and the parameter g controls the influence of a single training sample. In the 3-fold cross-validation, the variation range of the parameters c and g can be set to 2^-10 to 2^10, with a step size of 0.5 in the exponent. The optimal parameters c and g obtained after validation are substituted, together with the training set and its labels, into the SVM training function to obtain the trained model;
S7: prediction: acceleration spectrum data actually collected during walking are, after the filtering, segmentation and frequency-domain transformation operations of steps S1 to S3, input into the SVM classification model trained in step S6 to obtain the predicted classification result.
Further, in the step 8, the specific method for detecting the direction of the mobile phone in different states is as follows:
s1: for the states in which the mobile phone is held flat in the hand or in the pedestrian's pocket, first band-pass filter the acceleration values to remove the gravitational acceleration component; then find the instant t_0 within one step period at which the z-axis acceleration satisfies a_z = 0 while increasing (da_z/dt > 0), and take the direction of the vector sum of the x-axis and y-axis accelerations at time t_0 as the advancing direction of the step;
S2: for the case that the mobile phone is in the pedestrian's right swinging hand, when the z-axis angular velocity within one step is greater than 0, take the direction of the peak vector sum of the x-axis and y-axis angular velocities as the advancing direction of the step;
S3: for the case that the mobile phone is in the pedestrian's left swinging hand, when the z-axis angular velocity within one step is less than 0, take the direction of the peak vector sum of the x-axis and y-axis angular velocities as the advancing direction of the step.
The specific principle of the direction detection of the mobile phone in different states is as follows:
S1: for the case that the mobile phone is in the pedestrian's left or right swinging hand, the swing-arm motion during walking can be simplified into two sub-motions: rotation of the arm end around the shoulder in the horizontal plane, represented by the angular velocity ω_z about the z axis, and rotation of the arm end around the shoulder in the vertical plane, represented by the angular velocities ω_x about the x axis and ω_y about the y axis. When a pedestrian swings the phone in the right hand while walking due north, ω_z and ω_x vary approximately sinusoidally with time; since the arm swings in the north-south direction, it hardly rotates about the y axis, i.e. ω_y ≈ 0. The triaxial angular velocity waveform of a pedestrian walking continuously north is approximately as shown in fig. 2. The walking direction is estimated from the triaxial angular velocity information as follows: for the right hand, ω_z > 0 while the arm swings from back to front, and the swing is fastest when the phone reaches the lowest point, where ω_x and ω_y peak; the relative size of these two peaks determines the horizontal axis about which the phone is rotating at that moment. Rotating this axis 90° counter-clockwise (or 90° clockwise if the phone is in the left hand) gives the pedestrian's advancing direction at that moment.
S2: When the mobile phone is held flat in the pedestrian's hand or placed in a pocket close to the body, its motion can be regarded as consistent with the motion of the pedestrian's trunk, and one step can be divided into two stages, leg swing and center-of-gravity transfer. In the leg-swing stage, the rear leg leaves the ground, gradually swings forward, and finally the heel touches the ground ahead. The acquired acceleration information shows that the gravity-filtered acceleration along the z axis of the trunk changes from negative to positive between lift-off, the highest point, and the instant before heel strike. Within this process, at the instant the z-axis acceleration is 0, the downward vertical speed of the body reaches its maximum and the body is momentarily stable, with minimal left-right sway; the acceleration in the horizontal plane is then produced only by the translational motion of the body in the advancing direction, i.e. the resultant direction of the horizontal-plane acceleration accurately reflects the walking direction of the pedestrian.
The SVM classification performance was tested in a specific scene; the test process and results are as follows:
The specific scene: in a shopping mall, a pedestrian walks carrying the mobile phone in each of the four postures, for about two minutes per posture, while the phone's built-in sensors collect and store data at each moment.
The operation steps are as follows:
Data preprocessing: after step detection, the time-domain acceleration data are segmented one step per segment and transformed to the frequency domain to obtain frequency-domain features; 75% of the data serve as the training set and 25% as the test set. The four categories are labeled: flat-hold label = 1, pocket label = 2, left swing arm label = 3, right swing arm label = 4, yielding training-set and test-set labels;
and (3) SVM: the radial basis function (RBF) is used as the SVM kernel, and the optimal parameters c and g are found by 3-fold cross-validation. The parameter c trades off decision-boundary simplicity against misclassified samples, and the parameter g controls the influence of a single training sample.
Training: in the 3-fold cross-validation, the search range of parameters c and g is set to 2^-10 to 2^10, with an exponent step of 0.5. The optimal parameters c = 0.23 and g = 0.015625 obtained after 3-fold cross-validation are substituted, together with the training-set labels, into the SVM training function to obtain the training model.
And (3) testing: substituting the test set, the test labels and the trained model into the prediction function yields the prediction results for the test set; comparing these results with the test-set labels gives the classification accuracy.
As shown in fig. 3, only 1 of the 64 data sets is misclassified, for a classification accuracy of 98.4375%.
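The training and testing pipeline above can be sketched with scikit-learn (a stand-in: the acceleration-spectrum features here are synthetic, and a coarse power-of-two grid replaces the 0.5-exponent-step search described in the text):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for per-step acceleration-spectrum features: four
# carrying modes (1=flat, 2=pocket, 3=left swing arm, 4=right swing arm),
# each given an artificial spectral signature plus noise.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=k, scale=0.3, size=(64, 8)) for k in range(4)])
y = np.repeat([1, 2, 3, 4], 64)

# 75% training / 25% test split, as in the text.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# RBF-kernel SVM; C and gamma searched over powers of two with
# 3-fold cross-validation.
grid = {"C": 2.0 ** np.arange(-10, 11, 2), "gamma": 2.0 ** np.arange(-10, 11, 2)}
clf = GridSearchCV(SVC(kernel="rbf"), grid, cv=3).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

With well-separated synthetic classes the grid search typically reaches a test accuracy close to 1, mirroring the 98.4% figure reported above.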
In summary, the invention relates to a multi-scene PDR positioning method based on smartphone multi-sensor fusion and SVM classification, in the technical field of positioning. The method comprises two parts. First, pedestrian motion characteristics are measured with the accelerometer, gyroscope, magnetometer and barometer of a smartphone: steps are detected by threshold judgment, step length is estimated with a nonlinear step-length model, heading is estimated from accelerometer and gyroscope characteristics, and these are combined into high-precision pedestrian trajectory data. Second, a support-vector-machine method is proposed to distinguish the mode in which the pedestrian carries the smartphone during positioning, so that a different trajectory calculation model can be applied in each mode, effectively avoiding the large errors of conventional PDR when switching between multiple carrying scenes. The designed pedestrian trajectory prediction algorithm and SVM classification algorithm offer high accuracy and fast execution, require only an ordinary smartphone, and are completely independent of external auxiliary equipment, significantly improving the positioning accuracy and the robustness of the positioning system.
Claims (8)
1. A multi-scene PDR positioning method based on intelligent mobile phone multi-sensor fusion and SVM classification is characterized by comprising the following steps:
Step 1: acquiring the initial position of the current device by UWB or WiFi positioning;
Step 2: acquiring the current acceleration, angular velocity and barometric pressure value through the accelerometer, gyroscope and barometer integrated in the mobile phone;
Step 3: performing low-pass filtering on the data acquired in step 2 to filter out sampling noise;
Step 4: performing quaternion transformation on the acceleration and angular velocity data filtered in step 3, converting them from the carrier coordinate system to the world coordinate system;
Step 5: taking the magnitude of the z-axis acceleration in the world coordinate system as the basis for step detection, and determining the number of steps by an acceleration peak detection method;
Step 6: after detecting one step by the peak detection method of step 5, calculating the time T taken by the step to obtain the step frequency f = 1/T; combining the step frequency f, the user-input height h and the individual characteristic parameter c, and calculating the step length SL from the nonlinear step length model:
SL=[0.7+0.371×(h-1.75)+0.227×(f-1.79)×h÷1.75]×c;
Wherein the value of the individual characteristic parameter c is between 0.8 and 1.2, typically 1;
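The step-length formula above can be checked directly (a minimal sketch; the function name is ours):

```python
def step_length(f, h, c=1.0):
    """Nonlinear step-length model:
    SL = [0.7 + 0.371*(h - 1.75) + 0.227*(f - 1.79)*h/1.75] * c
    f: step frequency (Hz), h: user height (m),
    c: individual characteristic parameter in [0.8, 1.2], typically 1."""
    return (0.7 + 0.371 * (h - 1.75) + 0.227 * (f - 1.79) * h / 1.75) * c
```

At the reference point h = 1.75 m and f = 1.79 Hz, the model returns the base step length of 0.7 m, and c scales the result per individual.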
Step 7: classifying, by the SVM method based on the frequency-domain characteristics of the filtered acceleration, to distinguish four states: the mobile phone in the pedestrian's pocket, the mobile phone in the left-hand swing arm, the mobile phone in the right-hand swing arm, and the mobile phone held flat;
Step 8: for four different states, calculating the advancing direction in one step by detecting the angular speed or acceleration characteristic value;
S1: for the states where the mobile phone is held flat or in the pedestrian's pocket, first band-pass filtering the acceleration values to remove the gravitational acceleration component; then finding the time t0 within one step period at which the z-axis acceleration satisfies az = 0 while increasing, i.e. daz/dt > 0, and taking the direction of the vector sum of the x-axis and y-axis accelerations at time t0 as the advancing direction of the step;
S2: when the angular velocity in the z-axis direction in one step is larger than 0 for the condition that the mobile phone is positioned on the right hand swing arm of the pedestrian, the peak value vector sum direction of the angular velocity in the x-axis and the y-axis is taken as the advancing direction of the step;
S3: when the angular velocity in the z-axis direction in one step is smaller than 0 for the condition that the mobile phone is positioned on the left hand swing arm of the pedestrian, the peak value vector sum direction of the angular velocities in the x-axis and the y-axis is taken as the advancing direction of the step;
step 9: updating the position of the pedestrian after walking for one step based on the step length calculated in the step 6 and the direction calculated in the step 8;
Step 10: continuously detecting the change of the barometer; if the barometric pressure shows a continuous descending trend, judging that the pedestrian is going upstairs; if it shows a continuous ascending trend, judging that the pedestrian is going downstairs;
Step 11: continuously detecting the change of the step number after finishing the position updating of one step; if a new step appears, repeating the steps 5 to 10; otherwise, ending the positioning process.
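Steps 6, 8 and 9 together amount to one dead-reckoning position update per detected step, which can be sketched as follows (the coordinate convention, y pointing north with heading measured clockwise from north, is an assumption, as is the function name):

```python
import math

def pdr_update(x, y, step_len, heading):
    """Advance the position by one step of length step_len (m) along
    heading (rad, clockwise from north); returns the new (x, y)."""
    return (x + step_len * math.sin(heading),
            y + step_len * math.cos(heading))
```

Starting from the initial UWB/WiFi fix of step 1, calling this once per detected step with the step-6 length and step-8 heading traces out the pedestrian trajectory.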
2. The multi-scene PDR positioning method based on smart phone multi-sensor fusion and SVM classification according to claim 1, wherein in the step 3, the passband cut-off frequency of the low-pass filtering is 2.3 Hz to 3 Hz, preferably near the upper limit of the pedestrian step frequency.
3. The multi-scenario PDR positioning method according to claim 1, wherein in the step 4, the differential equation of the quaternion is as follows:
dq0/dt = -(1/2)(ωx·q1 + ωy·q2 + ωz·q3)
dq1/dt = (1/2)(ωx·q0 + ωz·q2 - ωy·q3)
dq2/dt = (1/2)(ωy·q0 - ωz·q1 + ωx·q3)
dq3/dt = (1/2)(ωz·q0 + ωy·q1 - ωx·q2)
wherein ωx, ωy, ωz represent the triaxial angular velocity measured by the gyro sensor, and q0, q1, q2, q3 represent the quaternion to be solved;
the differential equation is solved iteratively by the first-order Picard method, with the specific calculation formula:
q(t + Δt) = q(t) + (Δt/2)·Ω(ω)·q(t)
wherein Δt represents the time increment and Ω(ω) is the 4×4 skew-symmetric matrix formed from ωx, ωy, ωz;
the quaternion is updated with the continuously measured angular velocity of the gyroscope, and the Euler angles are calculated from the relationship between the quaternion direction-cosine matrix and the Euler angles:
φ = arctan2(2(q0q1 + q2q3), 1 - 2(q1² + q2²))
θ = arcsin(2(q0q2 - q3q1))
ψ = arctan2(2(q0q3 + q1q2), 1 - 2(q2² + q3²))
wherein φ, θ and ψ respectively represent the roll angle (Roll), pitch angle (Pitch) and yaw angle (Yaw) among the Euler angles, completing the conversion from the carrier coordinate system to the world coordinate system.
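A numerical sketch of the quaternion update and Euler-angle extraction in this claim (assuming the common aerospace ZYX convention; the function names are ours):

```python
import numpy as np

def quat_update(q, w, dt):
    """First-order (Picard) integration of dq/dt = 0.5 * Omega(w) * q,
    with q = [q0, q1, q2, q3] and w = (wx, wy, wz) from the gyroscope."""
    wx, wy, wz = w
    Omega = np.array([[0.0, -wx, -wy, -wz],
                      [wx,  0.0,  wz, -wy],
                      [wy, -wz,  0.0,  wx],
                      [wz,  wy, -wx,  0.0]])
    q = q + 0.5 * dt * Omega @ q
    return q / np.linalg.norm(q)   # renormalize to a unit quaternion

def euler_from_quat(q):
    """Roll, pitch, yaw from the quaternion direction-cosine matrix."""
    q0, q1, q2, q3 = q
    roll = np.arctan2(2 * (q0 * q1 + q2 * q3), 1 - 2 * (q1**2 + q2**2))
    pitch = np.arcsin(2 * (q0 * q2 - q3 * q1))
    yaw = np.arctan2(2 * (q0 * q3 + q1 * q2), 1 - 2 * (q2**2 + q3**2))
    return roll, pitch, yaw
```

A small rotation about the z axis over one sample interval shows up almost entirely as yaw, which is what the heading estimation relies on.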
4. The multi-scene PDR positioning method based on smart phone multi-sensor fusion and SVM classification according to claim 1, wherein in the step 5, the peak-detection step counting is implemented as follows:
S1: band-pass filtering the z-axis acceleration of the pedestrian while walking in the world coordinate system, with passband cut-off frequencies of 0.5 Hz and 2.5 Hz, to filter out noise and the gravity component;
S2: taking the peaks whose z-axis acceleration magnitude is not less than 0.5; if the interval between two adjacent peaks is not less than 0.6 s, one step is detected.
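A sketch of this peak-detection step counter using SciPy (the sampling rate and acceleration units are assumptions; the 0.5-2.5 Hz band, 0.5 amplitude threshold and 0.6 s spacing are from the claim):

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def count_steps(az, fs):
    """Count steps from world-frame z acceleration sampled at fs Hz:
    band-pass to 0.5-2.5 Hz, then count peaks of amplitude >= 0.5
    (presumably m/s^2) that are at least 0.6 s apart."""
    b, a = butter(2, [0.5 / (fs / 2), 2.5 / (fs / 2)], btype="band")
    az_filt = filtfilt(b, a, az)                      # zero-phase filtering
    peaks, _ = find_peaks(az_filt, height=0.5, distance=int(0.6 * fs))
    return len(peaks)
```

On a synthetic 1.5 Hz stride signal (a cadence inside the passband, with peaks 0.67 s apart) the counter recovers roughly one peak per stride.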
5. The multi-scene PDR positioning method based on smart phone multi-sensor fusion and SVM classification according to claim 1, wherein in the step 7, the basis of SVM classification is the acceleration spectrum characteristics acquired by the mobile phone during walking; under supervised learning, the classification model is trained with the acceleration spectrum features of the user's walking and the training result is stored in the cloud; in actual use, the acceleration spectrum features of the user's walking are matched against the training result to realize real-time state classification detection.
6. The multi-scene PDR positioning method based on smart phone multi-sensor fusion and SVM classification according to claim 5, wherein the implementation steps of the SVM classification are as follows:
S1: under the condition of supervised learning, carrying out band-pass filtering processing on acceleration data acquired by the mobile phone state of the known pedestrian, wherein the passband cut-off frequency is respectively 0.5Hz and 2.5Hz, and filtering noise and gravity components;
s2: judging the number of steps by a peak value detection method, and marking;
S3: performing frequency-domain transformation on the time-domain data, segmented according to the step marks of S2, to obtain frequency-domain features as the training set;
S4: the four categories of data are respectively attached with labels: flat end label=1, pocket label=2, left swing arm label=3, right swing arm label=4;
S5: normalizing the training-set data, mapping it to the interval [0, 1];
S6: training: adopting the radial basis function (RBF) as the SVM kernel and finding the optimal penalty parameter c and RBF kernel parameter g by 3-fold cross-validation; substituting the optimal parameters c and g obtained after validation, together with the training-set labels, into the SVM training function to obtain the training model;
S7: prediction: acceleration spectrum data actually collected while the pedestrian walks are, after the filtering, segmentation and frequency-domain transformation operations of steps S1 to S3, input to the SVM classification model trained in step S6 to obtain the predicted classification result.
7. The multi-scene PDR positioning method based on smart phone multi-sensor fusion and SVM classification according to claim 1, wherein in the step 8, for the case where the mobile phone is in the pedestrian's left or right swing arm, the swing-arm motion during walking is simplified and decomposed into the following two sub-motions: rotation of the arm end about the shoulder point in the horizontal plane, represented by the angular velocity ωz about the z axis, and rotation of the arm end about the shoulder point in the vertical plane, represented by the angular velocities ωx about the x axis and ωy about the y axis; when a pedestrian holds the mobile phone in the swinging right hand and walks due north, the angular velocities ωz and ωx vary approximately sinusoidally with time, and because the arm swings in the north-south direction as the pedestrian walks north, there is almost no rotation about the y axis, i.e. ωy ≈ 0; the pedestrian walking direction is estimated from the three-axis angular velocity information: for the right hand, ωz > 0 while the arm swings from back to front, and the swing speed is highest when the mobile phone reaches its lowest point, i.e. where ωx and ωy peak; the relative size of these two peaks determines the horizontal axis about which the mobile phone rotates at that instant, and rotating this axis 90 degrees counterclockwise (or 90 degrees clockwise if the mobile phone is in the left hand) gives the pedestrian's advancing direction at that moment.
8. The multi-scene PDR positioning method based on smart phone multi-sensor fusion and SVM classification according to claim 1, wherein in the step 8, when the mobile phone is held flat in the pedestrian's hand or placed in a pocket close to the body, its motion is regarded as consistent with the motion of the pedestrian's trunk, and one step is divided into two stages, leg swing and center-of-gravity transfer: in the leg-swing stage, the rear leg leaves the ground, gradually swings forward, and finally the heel touches the ground ahead; the acquired acceleration information shows that the gravity-filtered acceleration along the z axis of the trunk changes from negative to positive between lift-off, the highest point, and the instant before heel strike; within this process, at the instant the z-axis acceleration is 0, the downward vertical speed of the body reaches its maximum and the body is momentarily stable, with minimal left-right sway; the acceleration in the horizontal plane is then produced only by the translational motion of the body in the advancing direction, i.e. the resultant direction of the horizontal-plane acceleration accurately reflects the walking direction of the pedestrian.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110334800.7A CN113029153B (en) | 2021-03-29 | 2021-03-29 | Multi-scene PDR positioning method based on intelligent mobile phone multi-sensor fusion and SVM classification |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113029153A CN113029153A (en) | 2021-06-25 |
CN113029153B true CN113029153B (en) | 2024-05-28 |
Family
ID=76452724
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110334800.7A Active CN113029153B (en) | 2021-03-29 | 2021-03-29 | Multi-scene PDR positioning method based on intelligent mobile phone multi-sensor fusion and SVM classification |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113029153B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114061616A (en) * | 2021-10-22 | 2022-02-18 | 北京自动化控制设备研究所 | Self-adaptive peak detection step counting method |
CN117367487B (en) * | 2022-07-01 | 2024-09-10 | 荣耀终端有限公司 | Climbing state identification method and device |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101151508A (en) * | 2005-03-28 | 2008-03-26 | 旭化成电子材料元件株式会社 | Traveling direction measuring apparatus and traveling direction measuring method |
JP2009133691A (en) * | 2007-11-29 | 2009-06-18 | Kddi Corp | Portable terminal, program, and method for determining traveling direction of pedestrian by using acceleration sensor and geomagnetic sensor |
JP2012168004A (en) * | 2011-02-14 | 2012-09-06 | Kddi Corp | Portable terminal, program and method for determining travel direction of pedestrian using acceleration data in swing phase |
CN104215238A (en) * | 2014-08-21 | 2014-12-17 | 北京空间飞行器总体设计部 | Indoor positioning method of intelligent mobile phone |
FR3015072A1 (en) * | 2013-12-18 | 2015-06-19 | Movea | METHOD OF DETERMINING THE ORIENTATION OF A MOBILE TERMINAL-RELATED SENSOR MARK WITH SENSOR ASSEMBLY USED BY A USER AND COMPRISING AT LEAST ONE MOTION-MOVING MOTION SENSOR |
CN104977006A (en) * | 2015-08-11 | 2015-10-14 | 北京纳尔信通科技有限公司 | Indoor positioning method based on fuzzy theory and multi-sensor fusion |
DE202011110882U1 (en) * | 2011-08-04 | 2017-01-19 | Google Inc. | Motion direction detection with noisy signals from inertial navigation systems in mobile devices |
JP2017023689A (en) * | 2015-07-24 | 2017-02-02 | 株式会社東芝 | Monitoring system, monitoring method, and program |
WO2017158633A1 (en) * | 2016-03-17 | 2017-09-21 | Gipstech S.R.L. | Method for estimating the direction of motion of an individual |
WO2017215024A1 (en) * | 2016-06-16 | 2017-12-21 | 东南大学 | Pedestrian navigation device and method based on novel multi-sensor fusion technology |
CN108225304A (en) * | 2018-01-26 | 2018-06-29 | 青岛美吉海洋地理信息技术有限公司 | Based on method for rapidly positioning and system in Multiple Source Sensor room |
CN108844533A (en) * | 2018-04-24 | 2018-11-20 | 西安交通大学 | A kind of free posture PDR localization method based on Multi-sensor Fusion and attitude algorithm |
CN109163723A (en) * | 2018-08-16 | 2019-01-08 | 东南大学 | A kind of angle measurement method when mobile phone is swung in pocket |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014110672A1 (en) * | 2013-01-21 | 2014-07-24 | Trusted Positioning Inc. | Method and apparatus for determination of misalignment between device and pedestrian |
2021-03-29: application CN202110334800.7A filed (patent CN113029153B/en, status: Active)
Non-Patent Citations (3)
Title |
---|
Indoor personal navigation algorithm based on fusion of MEMS sensors and an Android smartphone; Zhang Huiqing, Xu Xiaomin, Dai Ruyong; Journal of Beijing University of Technology (Issue 05); full text *
Pedestrian gait analysis based on smartphone accelerometers; Guo Ying, Liu Qinghua, Ji Xianlei, Li Guanze, Wang Shengli; Journal of Chinese Inertial Technology; 2017-12-15 (Issue 06); full text *
Research on a three-dimensional autonomous indoor positioning system based on smart-terminal MEMS sensors; Li Bo; China Master's Theses Full-text Database, Information Science and Technology (monthly); 37-60 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113029153B (en) | Multi-scene PDR positioning method based on intelligent mobile phone multi-sensor fusion and SVM classification | |
JP6745017B2 (en) | Pedestrian dead reckoning technology | |
CN110553643B (en) | Pedestrian self-adaptive zero-speed updating point selection method based on neural network | |
CN107255474B (en) | PDR course angle determination method integrating electronic compass and gyroscope | |
CN210402266U (en) | Sign language translation system and sign language translation gloves | |
CN109540143B (en) | Pedestrian unconventional action direction identification method based on multi-sensing-source dynamic peak fusion | |
Wu et al. | Natural gesture modeling and recognition approach based on joint movements and arm orientations | |
CN105068657B (en) | The recognition methods of gesture and device | |
CN110068322A (en) | A kind of pedestrian's localization method and pedestrian's positioning device based on terminal | |
CN110163264B (en) | Walking pattern recognition method based on machine learning | |
Wang et al. | A2dio: Attention-driven deep inertial odometry for pedestrian localization based on 6d imu | |
Xu et al. | A long term memory recognition framework on multi-complexity motion gestures | |
CN111435083A (en) | Pedestrian track calculation method, navigation method and device, handheld terminal and medium | |
CN110236560A (en) | Six axis attitude detecting methods of intelligent wearable device, system | |
CN109567814B (en) | Classification recognition method, computing device, system and storage medium for tooth brushing action | |
Chandel et al. | Airite: Towards accurate & infrastructure-free 3-d tracking of smart devices | |
CN112130676B (en) | Wearable terminal and wrist turning identification method thereof | |
CN101853073B (en) | Distance measuring method for rotary feature codes applied to gesture identification | |
Bulugu | Real-time Complex Hand Gestures Recognition Based on Multi-Dimensional Features. | |
Lu et al. | I am the uav: A wearable approach for manipulation of unmanned aerial vehicle | |
CN110390281B (en) | Sign language recognition system based on sensing equipment and working method thereof | |
Wang et al. | Posture recognition and adaptive step detection based on hand-held terminal | |
Mahajan et al. | Digital pen for handwritten digit and gesture recognition using trajectory recognition algorithm based on triaxial accelerometer | |
CN114964225B (en) | Gait recognition system design method based on single-node sensor | |
CN117315790B (en) | Analysis method of hand writing action and intelligent pen |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||
Inventor after: Qiu Xiaohan, Fan Yuxin, Wang Diao, Yan Chenggang, Shi Zhiguo
Inventor before: Qiu Xiaohan, Fan Yuxin, Wang Diao, Shi Zhiguo
GR01 | Patent grant | ||
GR01 | Patent grant |