CN109405829A - Pedestrian's method for self-locating based on smart phone audio-video Multi-source Information Fusion - Google Patents

Pedestrian's method for self-locating based on smart phone audio-video Multi-source Information Fusion

Info

Publication number
CN109405829A
Authority
CN
China
Prior art keywords
pedestrian
positioning
smart phone
image
pseudo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810988953.1A
Other languages
Chinese (zh)
Inventor
宋浠瑜
王玫
仇洪冰
罗丽燕
孙昊彬
李晓鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN201810988953.1A priority Critical patent/CN109405829A/en
Publication of CN109405829A publication Critical patent/CN109405829A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a pedestrian self-positioning method based on smartphone audio and video multi-source information fusion. Acoustic positioning and visual positioning of the pedestrian are carried out simultaneously, and the results are fused with an unscented Kalman filter algorithm that handles the nonlinearity of the pedestrian walking state equation and of the observation equation. Without relying on any infrastructure at the application site, acoustic ranging, PDR positioning and visual positioning are fused, realizing seamless indoor-outdoor pedestrian self-positioning. Correcting the PDR result with pseudo-ultrasonic ranging reduces the positioning error from the metre level to the sub-metre level; combined with the data from the visual gyroscope and the visual odometer, the positioning system is not affected by inertial-navigation drift, does not depend on auxiliary equipment at the application site, and has good universality and robustness. The method is suitable not only for indoor environments but also for pedestrian self-positioning in outdoor environments.

Description

Pedestrian self-positioning method based on intelligent mobile phone audio and video multi-source information fusion
Technical Field
The invention relates to the field of indoor and outdoor positioning, and in particular to a pedestrian self-positioning method that achieves seamless indoor-outdoor positioning on a smartphone by combining pseudo-ultrasonic ranging, pedestrian dead reckoning (PDR) and visual ranging; it has important application value in location-sensing and location-based-service scenarios.
Background
In 2012, national policy in China explicitly called for advancing indoor positioning technology, so as to achieve coordinated real-time indoor and outdoor positioning and raise the level of location-based services. With the spread of mobile communications, the maturing of indoor positioning technology and the growing awareness and acceptance of the indoor location-service market, demand keeps increasing. A pedestrian self-positioning device with good user experience must connect indoor and outdoor positioning seamlessly, be portable, reasonably priced and easy to use, and deliver position information accurately and in real time. As a miniaturized device that people habitually carry, the smartphone has become the most common sensing front end in sensing networks, since more than ten sensors are built into it. Making effective use of the smartphone's built-in sensors to realize indoor and outdoor position sensing and services has become an industry consensus.
At present there are many smartphone-based combined positioning schemes that pair pedestrian dead reckoning (PDR) with electromagnetic-wave positioning; compared with navigation that relies on PDR alone, such combined schemes can provide more accurate positioning over long periods. For example, the scheme most widely deployed in the Internet-of-Things industry combines PDR with GPS, with a typical accuracy of 5-10 metres. However, because of obstruction by buildings, especially multiple walls, it is difficult to receive enough satellite signals for positioning indoors or between high-rise buildings outdoors, and even when satellite signals can be received the accuracy is insufficient. Another common indoor scheme combines PDR with WIFI, with an accuracy of 1-3 metres, but it depends on the infrastructure of the target environment: a database mapping WIFI signal strength to position must be built in advance for the target site, and a floor plan or map is needed to provide real-time positioning service. WIFI is, moreover, very susceptible to interference from environmental or human factors, which distorts the signal strength and makes the results of signal-strength feature matching unsatisfactory; such a method is only usable in sites where a position-related database has been established, so it is difficult to meet the application requirement of pedestrian self-positioning.
Acoustic signals propagate far more slowly than electromagnetic waves, so time-of-flight measurement can resolve distance more finely and the theoretical lower bound of the positioning error is smaller. In recent years acoustic positioning has attracted great attention in the field of spatial positioning and offers a new solution for indoor and outdoor pedestrian positioning. For example, research on combining PDR with an acoustic positioning system (APS) exploits the complementary short-term accuracy of PDR and long-term stability of APS to locate a pedestrian carrying a smartphone, but most such solutions require several acoustic receivers or transmitters at known positions to be deployed in the target site, and therefore lack universality. Visual positioning is widely used for fixed-target recognition because it does not depend on site infrastructure and image data are easy to obtain. Pedestrians, however, walk randomly, with sudden stops or turns and no fixed walking path, so visual positioning methods that rely only on image recognition and feature matching remain limited in pedestrian positioning applications.
Recent research results show that sensing and fusing multi-source information of the target application site with the multiple sensors of an ordinary smartphone can provide good indoor and outdoor positioning. Therefore, fully exploiting the advantages of the smartphone's multiple sensors, studying the positioning theory and technology of acoustic, visual and PDR multi-source information fusion, and building the corresponding fusion model is an effective route to seamless indoor-outdoor pedestrian self-positioning.
Disclosure of Invention
The invention provides a pedestrian self-positioning method based on smartphone audio and video multi-source information fusion, aiming at the insufficient accuracy of indoor and outdoor pedestrian self-positioning caused by strong electromagnetic-wave attenuation in complex indoor and outdoor environments and by the inertial-navigation drift that affects pedestrian dead reckoning (PDR). The method needs no dedicated infrastructure and has low complexity and high universality.
The invention relates to a pedestrian self-positioning method based on the audio and video multi-source information fusion of a smart phone, which mainly comprises the following three steps:
(1) carrying out sound wave positioning on the pedestrian:
the pedestrian pseudo-ultrasonic ranging signal (acoustic band about 16-22 kHz) is transmitted and received with the smartphone loudspeaker and dual microphones, and an accurate ranging result is obtained by delay-and-sum beamforming (DSB) in order to correct the accumulated error of the PDR caused by inertial-navigation drift, thereby realizing acoustic positioning;
(2) simultaneously, visually positioning the pedestrians:
continuous image frames of the places the pedestrian passes are acquired with the smartphone camera, and image-centre projection vanishing-point positioning and homography transformation are performed to estimate the pedestrian's steering angle and walking distance, thereby realizing visual positioning;
(3) the acoustic positioning and visual positioning results are fused with an unscented Kalman filter (UKF) algorithm that handles the nonlinearity of the pedestrian walking state equation and of the observation equation; without relying on any infrastructure at the application site, acoustic ranging, PDR positioning and visual positioning are fused, realizing seamless indoor-outdoor pedestrian self-positioning.
Step (1), acoustic positioning of indoor and outdoor pedestrians, specifically comprises the following two steps:
(1.1) pseudo-ultrasonic distance measurement of the pedestrian:
The dual microphones of the smartphone are treated as a first-order end-fire array, and the close-range prior between the loudspeaker and the main microphone of the smartphone is used to obtain an accurate acoustic ranging result through DSB. Let the loudspeaker of the smartphone be S and the dual microphones be m_1 and m_2, where m_1 is the main microphone, d_m is the measured distance between m_1 and m_2, and D is the distance between S and the nearest reflecting wall while the pedestrian walks carrying the phone;
Because the distance between S and m_1 is very small, when S emits a pseudo-ultrasonic signal toward the wall, the first-order echo reflected by the wall is considered to arrive at m_1 and m_2 at the same angle, so the path difference of the first-order echo is taken as d_m. Let c denote the propagation speed of sound in air, treated as a constant while the air temperature is relatively stable; the relative delay of the first-order end-fire array is then τ = d_m / c. According to the DSB beamforming result, the output signal y(n) of the microphones is
y(n) = Σ_{i=0}^{M−1} μ_i · x_i(n − τ_i)    (1)
where x_i denotes the received signal of the i-th microphone, M the number of microphones, μ_i the weight of the i-th microphone and τ_i the delay of the i-th microphone, with τ_i = τ when i ≠ 0;
Assuming the pseudo-ultrasonic signal emitted by the loudspeaker is s(n), the cross-correlation between the transmitted and received signals can be expressed as
R_{s,y}(τ) = (1/2π) ∫ G(ω) e^{jωτ} dω,  with G(ω) = s(ω)·y(ω)*    (2)
where s(ω) and y(ω) denote the Fourier transforms of s(n) and y(n), respectively, and the superscript (·)* denotes conjugation;
The distance between S and the nearest reflecting wall while the pedestrian walks carrying the phone is therefore
D = c · (arg max_τ R_{s,y}(τ))    (3)
(1.2) correcting the accumulated error of the PDR due to inertial navigation drift, which comprises the following three steps:
(1.2.1) Assume the pedestrian's starting position (x_t, y_t) is known;
(1.2.2) Read the data collected by the phone's accelerometer and gyroscope, filter and smooth them, and integrate them to obtain the pedestrian's dynamic step length l_t and heading θ_t;
(1.2.3) Substitute l_t and θ_t into the dead-reckoning formula to compute the pedestrian position (x_{t+1}, y_{t+1}) at the next instant:
x_{t+1} = x_t + l_t · sin θ_t,   y_{t+1} = y_t + l_t · cos θ_t
Over a short walking interval the drift error of the gyroscope has a negligible influence on the value of θ_t, so the accuracy of (x_{t+1}, y_{t+1}) depends mainly on the accuracy of the dynamic step length l_t;
A dynamic step length is used instead of a fixed one. With K denoting the step-length estimation parameter and a_max and a_min the maximum and minimum acceleration values during one step, the dynamic step length l_t is estimated with the Weinberg approximation model:
l_t = K · (a_max − a_min)^{1/4}
Because the dynamic step length is introduced, the accumulated error along the walking direction (the y-axis) decreases with each gait update. On top of the PDR estimate, the pseudo-ultrasonic ranging result D is then combined with the coordinate (x_k, y_{t+1}) of a reflecting wall k, where x_k is the x-axis coordinate of the reflecting wall used for pseudo-ultrasonic ranging; since the length of wall k is known, the pseudo-ultrasonic range can be used to correct the PDR position estimate.
Step (2), visual positioning of indoor and outdoor pedestrians, specifically comprises the following two steps:
(2.1) calculating the steering angle of the pedestrian
A visual gyroscope is introduced: objects in the three-dimensional real scene are mapped to feature points of the two-dimensional image, target recognition is realized through the projection vanishing point, and the target's steering information is observed by tracking the projection vanishing point across consecutive images;
The image coordinate system is divided into an image physical coordinate system and an image pixel coordinate system, with the image centre O_1 as the origin of the image physical coordinate system and O_0 as the origin of the image pixel coordinate system;
Let the size of each pixel in the physical coordinate system be f_x, f_y, and let K be the intrinsic parameter matrix of the phone camera;
Let R be the rotation matrix of the phone camera and V(v_x, v_y, v_z) the coordinates of the projection vanishing point; V is then determined by K and R, where (u_0, v_0) is the origin of the image pixel coordinate system, θ is the roll angle of the phone attitude and φ the pitch angle, both obtained by solving the quaternion differential equation with the fourth-order Runge-Kutta method;
The pedestrian's steering angle θ_v is then obtained by trigonometric calculation between the projection vanishing points of consecutive image frames.
(2.2) calculating the distance value of the pedestrian
A visual odometer is introduced: homography transformation is used to solve the mapping of the projection vanishing point from one image plane to another, which yields the translation of the projection vanishing point, i.e. the walking distance l_v of the pedestrian;
Let H denote the homography transformation matrix, W the extrinsic parameter matrix of the camera, s_v the transformation scale, h the camera height, h_v the imaging height, and Dis the axial distance from the camera to the target feature point. With (u, v, 1) the pixel coordinates of the projection vanishing point in the current frame, the pixel coordinates (u', v', 1) of the vanishing point after homography transformation in the next frame satisfy
s_v · [u', v', 1]^T = H · [u, v, 1]^T
where H is determined by the camera intrinsics, the extrinsic matrix W, the camera height h, the imaging height h_v and the axial distance Dis;
When (u, v, 1) and (u', v', 1) are known, l_v can be calculated;
The visual positioning result (θ_v, l_v) is thus obtained.
In step (3), the acoustic positioning and visual positioning results are processed with the unscented Kalman filter algorithm, which handles the nonlinearity of the pedestrian walking state equation and of the observation equation. The unscented Kalman filter algorithm itself is prior art.
The invention provides an indoor and outdoor pedestrian self-positioning method that does not rely on infrastructure assistance at the application site. Using only a smartphone with multiple built-in sensors as the positioning device, the method makes effective use of the multi-sensor information and solves the nonlinear multi-source fusion of acoustic ranging, PDR positioning and visual positioning with the unscented Kalman filter, realizing seamless indoor-outdoor pedestrian self-positioning. It overcomes the low positioning accuracy of the traditional PDR approach during long-term use caused by factors such as the drift error of the inertial navigation system.
Drawings
FIG. 1 is a block diagram of a pedestrian self-positioning method based on multi-source audio and video information fusion of a smart phone;
FIG. 2 is a diagram of a first-order end-fire array formed by two microphones of a smart phone according to the present invention;
FIG. 3 is a diagram of an image coordinate system;
fig. 4 is a schematic diagram of homography transformation of an image.
Detailed Description
The present invention will be further described with reference to the following examples and drawings, but the present invention is not limited thereto.
Referring to fig. 1, the pedestrian self-positioning method based on the multi-source audio and video information fusion of the smart phone mainly comprises three steps:
(1) carrying out sound wave positioning on the pedestrian:
a pedestrian pseudo-ultrasonic ranging signal (acoustic band about 16-22 kHz) is transmitted and received with the smartphone loudspeaker and dual microphones, and an accurate ranging result is obtained by delay-and-sum beamforming (DSB) in order to correct the accumulated error of the PDR caused by inertial-navigation drift, thereby realizing acoustic positioning;
(2) simultaneously, visually positioning the pedestrians:
continuous image frames of the places the pedestrian passes are acquired with the smartphone camera, and image-centre projection vanishing-point positioning and homography transformation are performed to estimate the pedestrian's steering angle and walking distance, thereby realizing visual positioning;
(3) the acoustic positioning and visual positioning results are fused with an unscented Kalman filter (UKF) algorithm that handles the nonlinearity of the pedestrian walking state equation and of the observation equation; without relying on any infrastructure at the application site, acoustic ranging, PDR positioning and visual positioning are fused, realizing seamless indoor-outdoor pedestrian self-positioning.
Step (1), acoustic positioning of indoor and outdoor pedestrians, specifically comprises the following two steps:
(1.1) pseudo-ultrasonic distance measurement of the pedestrian:
The dual microphones of the smartphone are treated as a first-order end-fire array, and the close-range prior between the loudspeaker and the main microphone of the smartphone is used to obtain an accurate acoustic ranging result through DSB, as shown in fig. 2: S is the loudspeaker, m_1 and m_2 are the dual microphones of the phone, where m_1 is the main microphone, d_m is the measured distance between m_1 and m_2, and D is the distance between S and the nearest reflecting wall while the pedestrian walks carrying the phone;
Because the distance between S and m_1 is very small, when S emits a pseudo-ultrasonic signal toward the wall, the first-order echo reflected by the wall can be considered to arrive at m_1 and m_2 at the same angle, so the path difference of the first-order echo can be taken as d_m. With c the (constant) propagation speed of sound in air, the relative delay of the first-order end-fire array is τ = d_m / c. According to the DSB beamforming result, the output signal y(n) of the microphones is
y(n) = Σ_{i=0}^{M−1} μ_i · x_i(n − τ_i)    (1)
where x_i denotes the received signal of the i-th microphone, M the number of microphones, μ_i the weight of the i-th microphone and τ_i the delay of the i-th microphone, with τ_i = τ when i ≠ 0;
Assuming the pseudo-ultrasonic signal emitted by the loudspeaker is s(n), the cross-correlation between the transmitted and received signals can be expressed as
R_{s,y}(τ) = (1/2π) ∫ G(ω) e^{jωτ} dω,  with G(ω) = s(ω)·y(ω)*    (2)
where s(ω) and y(ω) denote the Fourier transforms of s(n) and y(n), respectively, and the superscript (·)* denotes conjugation;
The distance between S and the nearest reflecting wall while the pedestrian walks carrying the phone is therefore
D = c · (arg max_τ R_{s,y}(τ))    (3)
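As an illustration of this ranging step, the following Python sketch estimates D from the emitted signal s(n) and the beamformed microphone output y(n). It is a minimal sketch under stated assumptions (two synchronized microphone channels, a known inter-microphone spacing d_m, an assumed speed of sound); the function names and the equal DSB weights are illustrative and not taken from the patent.

```python
# A minimal numerical sketch of the pseudo-ultrasonic ranging step
# (equations (1)-(3)). C_AIR, the equal weights and the function names are
# illustrative assumptions, not values fixed by the patent.
import numpy as np
from scipy.signal import correlate

C_AIR = 343.0  # assumed speed of sound in air at room temperature, m/s


def dsb_output(x_m1, x_m2, d_m, fs, mu=(0.5, 0.5)):
    """Delay-and-sum beamforming for the first-order end-fire pair (eq. (1))."""
    tau = d_m / C_AIR                    # relative delay of the end-fire pair
    shift = int(round(tau * fs))         # delay expressed in samples
    aligned_m2 = np.roll(x_m2, -shift)   # align m2 to m1 (wrap-around is acceptable for a sketch)
    return mu[0] * x_m1 + mu[1] * aligned_m2


def range_to_wall(s, y, fs):
    """Distance to the nearest reflecting wall from the correlation peak (eqs. (2)-(3))."""
    r = correlate(y, s, mode="full")                  # time-domain equivalent of R_{s,y}(tau)
    lag = int(np.argmax(np.abs(r))) - (len(s) - 1)    # echo delay in samples
    tof = max(lag, 0) / fs                            # delay of the first-order echo, in seconds
    return C_AIR * tof                                # D = c * argmax_tau R_{s,y}(tau), as in eq. (3)
```

In practice the direct loudspeaker-to-microphone path arrives earlier than the wall echo, so its correlation peak would need to be excluded before taking the arg max.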
(1.2) correcting the accumulated error of the PDR due to inertial navigation drift, which comprises the following three steps:
(1.2.1) Assume the pedestrian's starting position (x_t, y_t) is known;
(1.2.2) Read the data collected by the phone's accelerometer and gyroscope, filter and smooth them, and integrate them to obtain the pedestrian's dynamic step length l_t and heading θ_t;
(1.2.3) Substitute l_t and θ_t into the dead-reckoning formula to compute the pedestrian position (x_{t+1}, y_{t+1}) at the next instant:
x_{t+1} = x_t + l_t · sin θ_t,   y_{t+1} = y_t + l_t · cos θ_t
Over a short walking interval the drift error of the gyroscope has a negligible influence on the value of θ_t, so the accuracy of (x_{t+1}, y_{t+1}) depends mainly on the accuracy of the dynamic step length l_t;
A dynamic step length is adopted instead of a fixed one, which is closer to the pedestrian's actual walking habit and improves the accuracy of the step-length calculation;
With K denoting the step-length estimation parameter and a_max and a_min the maximum and minimum acceleration values during one step, the dynamic step length l_t is estimated with the Weinberg approximation model:
l_t = K · (a_max − a_min)^{1/4}
Because the dynamic step length is introduced, the accumulated error along the walking direction (the y-axis) decreases with each gait update. On top of the PDR estimate, the pseudo-ultrasonic ranging result D is then combined with the coordinate (x_k, y_{t+1}) of the reflecting wall k, where x_k is the x-axis coordinate of the reflecting wall used for pseudo-ultrasonic ranging; since the length of wall k is known, the pseudo-ultrasonic range can be used to correct the PDR position estimate.
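The PDR update and the wall-based correction of step (1.2) can be sketched as follows. The Weinberg gain K, the heading convention (measured from the y-axis) and the wall-on-the-positive-x-side sign convention are assumptions of this sketch, not values fixed by the patent.

```python
# A minimal sketch of the PDR update in step (1.2), assuming per-step
# accelerometer extrema and a smoothed heading from the gyroscope.
import math

K_WEINBERG = 0.48   # assumed Weinberg gain; calibrated per user in practice


def weinberg_step(a_max, a_min, k=K_WEINBERG):
    """Dynamic step length l_t = K * (a_max - a_min)^(1/4)."""
    return k * (a_max - a_min) ** 0.25


def pdr_update(x_t, y_t, theta_t, a_max, a_min):
    """Dead reckoning: advance one gait along heading theta_t (radians from the y-axis)."""
    l_t = weinberg_step(a_max, a_min)
    x_next = x_t + l_t * math.sin(theta_t)
    y_next = y_t + l_t * math.cos(theta_t)
    return x_next, y_next


def correct_with_wall(x_next, y_next, x_k, D):
    """Pseudo-ultrasonic correction: re-anchor the cross-track coordinate so that
    the distance to the known reflecting wall x = x_k equals the measured D
    (wall assumed on the +x side of the walking path in this sketch)."""
    return x_k - D, y_next
```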
As the pedestrian walks with the smartphone, the steering angle and the walking distance of the pedestrian are reflected in the images captured by the phone camera.
Step (2), visual positioning of indoor and outdoor pedestrians, specifically comprises the following two steps:
(2.1) calculating the steering angle of the pedestrian
The concept of a visual gyroscope is introduced: objects in the three-dimensional real scene are mapped to feature points of the two-dimensional image, i.e. target recognition is realized through the projection vanishing point, and the target's steering information is observed by tracking the projection vanishing point across consecutive images;
As shown in fig. 3, the image coordinate system is divided into an image physical coordinate system and an image pixel coordinate system; the image centre O_1 is generally taken as the origin of the image physical coordinate system and O_0 as the origin of the image pixel coordinate system;
Let the size of each pixel in the physical coordinate system be f_x, f_y, and let K be the intrinsic parameter matrix of the phone camera;
Let R be the rotation matrix of the phone camera and V(v_x, v_y, v_z) the coordinates of the projection vanishing point; V is then determined by K and R, where (u_0, v_0) is the origin of the image pixel coordinate system, θ is the roll angle of the phone attitude and φ the pitch angle, both obtained by solving the quaternion differential equation with the fourth-order Runge-Kutta method;
The pedestrian's steering angle θ_v is then obtained by trigonometric calculation between the projection vanishing points of consecutive image frames.
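A minimal sketch of the visual-gyroscope idea in step (2.1) follows: the vanishing point of the camera's forward axis is projected with the pinhole model v ~ K·R·[0, 0, 1]^T, and the steering angle is read from its horizontal shift between consecutive frames. The pinhole model and the small-angle atan2 step are assumptions of this sketch; the patent's exact vanishing-point expression is not reproduced here.

```python
# A minimal sketch of the visual-gyroscope step (2.1): vanishing point of the
# forward axis under a pinhole model, and yaw change between consecutive frames.
import numpy as np


def intrinsic_matrix(f_x, f_y, u0, v0):
    """Camera intrinsic matrix K built from the pixel scales and principal point."""
    return np.array([[f_x, 0.0, u0],
                     [0.0, f_y, v0],
                     [0.0, 0.0, 1.0]])


def vanishing_point(K, R):
    """Pixel coordinates of the vanishing point of the forward direction [0, 0, 1]."""
    v = K @ R @ np.array([0.0, 0.0, 1.0])   # homogeneous image point
    return v[:2] / v[2]


def steering_angle(vp_prev, vp_curr, f_x):
    """Approximate yaw change theta_v from the horizontal vanishing-point shift."""
    return float(np.arctan2(vp_curr[0] - vp_prev[0], f_x))
```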
(2.2) calculating the distance value of the pedestrian
The concept of a visual odometer is introduced: homography transformation is used to solve the mapping of the projection vanishing point from one image plane to another, which yields the translation of the projection vanishing point, i.e. the walking distance l_v of the pedestrian;
As shown in fig. 4, let H denote the homography transformation matrix, W the extrinsic parameter matrix of the camera, s_v the transformation scale, h the camera height, h_v the imaging height, and Dis the axial distance from the camera to the target feature point. With (u, v, 1) the pixel coordinates of the projection vanishing point in the current frame, the pixel coordinates (u', v', 1) of the vanishing point after homography transformation in the next frame satisfy
s_v · [u', v', 1]^T = H · [u, v, 1]^T
where H is determined by the camera intrinsics, the extrinsic matrix W, the camera height h, the imaging height h_v and the axial distance Dis;
When (u, v, 1) and (u', v', 1) are known, l_v can be calculated;
The visual positioning result (θ_v, l_v) is thus obtained.
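For the visual-odometer step (2.2), the patent's exact expression for l_v is not fully recoverable from the text, so the sketch below swaps in a standard flat-ground approximation: a tracked ground feature imaged at row v lies at axial distance Dis = f_y·h/(v − v0), and the walking distance l_v is the change of that distance between consecutive frames. The homography application itself follows s_v·[u', v', 1]^T = H·[u, v, 1]^T; the camera height h and the intrinsics f_y, v0 are assumed known.

```python
# A minimal sketch of the visual-odometer step (2.2). The flat-ground depth
# model is an assumption swapped in for the patent's unrecoverable expression.
import numpy as np


def apply_homography(H, u, v):
    """Map pixel (u, v) to (u', v') through the homography H."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]


def axial_distance(v_row, v0, f_y, h):
    """Flat-ground depth Dis of a ground feature imaged at row v_row (v_row > v0)."""
    return f_y * h / (v_row - v0)


def walking_distance(v_row_curr, v_row_next, v0, f_y, h):
    """l_v as the decrease of the feature's axial distance between consecutive frames."""
    return axial_distance(v_row_curr, v0, f_y, h) - axial_distance(v_row_next, v0, f_y, h)
```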
The unscented Kalman filter approximates the posterior probability density of the state with a set of deterministically chosen sample points, i.e. it approximates the probability density distribution of the nonlinear function rather than the function itself. It retains the higher-order terms of the nonlinear equations and requires no derivation of the Jacobian matrices of the system state equation and the observation equation, so its computational complexity is low while its accuracy for the statistics of the nonlinear distribution is high.
The step (3) processing of the pedestrian walking state equation with the unscented Kalman filter (UKF) algorithm proceeds as follows:
At an arbitrary time t, the nonlinear system composed of the state variable S(t) with additive system noise W(t) and the observation variable Z(t) with additive measurement noise V(t) is
S(t+1) = f(S(t), W(t))    (11)
Z(t) = h(S(t), V(t))    (12)
where f(·) is the nonlinear state-equation function and h(·) is the nonlinear observation-equation function;
Let the covariance of W(t) be Q and the covariance of V(t) be R;
The UKF algorithm uses the unscented transformation (UT) to handle the nonlinear transfer of mean and covariance when updating the system state S(t). The update of S(t) depends on the system gain, so at each step of the UKF the predictions of S(t) and Z(t) require updating the corresponding covariance matrices, which in turn update the system gain at each step.
The unscented kalman filtering process is as follows:
① Set the variance of S(t) to P and the scaling parameter to λ; the Sigma point set is obtained through the unscented transformation:
S_0(t−1|t−1) = S̄(t−1|t−1); S_i(t−1|t−1) = S̄(t−1|t−1) + (√((n+λ)P))_i for i = 1, …, n; S_i(t−1|t−1) = S̄(t−1|t−1) − (√((n+λ)P))_{i−n} for i = n+1, …, 2n
② Compute the one-step prediction of the 2n+1 Sigma points:
S_i(t|t−1) = f[t−1, S_i(t−1|t−1)]    (15)
③ Compute the one-step prediction and covariance matrix of the system state quantity;
④ Based on the one-step prediction, apply the unscented transformation again to generate a new Sigma point set;
⑤ Substitute the new point set into the observation equation to obtain the predicted observations;
⑥ Take the weighted sum of the predicted observations obtained in the previous step to obtain the predicted system mean and covariance;
⑦ Compute the Kalman gain matrix;
⑧ Compute the state update and covariance update of the system.
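The eight steps above correspond to one predict-update cycle of the UKF. The sketch below implements that cycle for a generic state-transition function f and observation function h; the simplified sigma-point weights, the value of λ and the choice of the noise covariances Q and R are illustrative assumptions. For this patent, S(t) could be the pedestrian state (x_t, y_t, θ_t), f the dead-reckoning step of (1.2.3), and h the mapping from the state to the measured wall distance D and the visual pair (θ_v, l_v); those bindings are assumptions of this sketch, not the patent's stated model.

```python
# A minimal sketch of one UKF predict-update cycle following steps (1)-(8).
import numpy as np


def sigma_points(s, P, lam):
    """2n+1 Sigma points and their (simplified) weights for mean/covariance."""
    n = len(s)
    A = np.linalg.cholesky((n + lam) * P)               # square root of (n+lam)P
    pts = [s] + [s + A[:, i] for i in range(n)] + [s - A[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w[0] = lam / (n + lam)
    return np.array(pts), w


def ukf_step(s, P, z, f, h, Q, R, lam=1.0):
    """One cycle: propagate through f, predict the observation through h, update."""
    X, w = sigma_points(s, P, lam)
    # steps (2)-(3): propagate Sigma points and form predicted mean/covariance
    Xp = np.array([f(x) for x in X])
    s_pred = w @ Xp
    P_pred = Q + sum(wi * np.outer(xi - s_pred, xi - s_pred) for wi, xi in zip(w, Xp))
    # steps (4)-(6): new Sigma points, predicted observations, their statistics
    X2, w2 = sigma_points(s_pred, P_pred, lam)
    Zp = np.array([h(x) for x in X2])
    z_pred = w2 @ Zp
    Pzz = R + sum(wi * np.outer(zi - z_pred, zi - z_pred) for wi, zi in zip(w2, Zp))
    Pxz = sum(wi * np.outer(xi - s_pred, zi - z_pred) for wi, xi, zi in zip(w2, X2, Zp))
    # steps (7)-(8): Kalman gain, state update and covariance update
    K = Pxz @ np.linalg.inv(Pzz)
    s_new = s_pred + K @ (z - z_pred)
    P_new = P_pred - K @ Pzz @ K.T
    return s_new, P_new
```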
The invention overcomes the low positioning accuracy of the traditional PDR approach during long-term use caused by factors such as the drift error of the inertial navigation system. Correcting the PDR result with pseudo-ultrasonic ranging reduces the positioning error from the metre level to the sub-metre level; combined with the data from the visual gyroscope and the visual odometer, the positioning system is not affected by inertial-navigation drift, does not depend on auxiliary facilities at the application site, and has good universality and robustness. The method is suitable not only for indoor environments but also for pedestrian self-positioning in outdoor environments.

Claims (3)

1. A pedestrian self-positioning method based on audio and video multi-source information fusion of a smart phone is characterized by mainly comprising the following three steps:
(1) carrying out sound wave positioning on the pedestrian:
the pedestrian pseudo-ultrasonic ranging signals are sent and received by means of a smart phone loudspeaker and a double microphone, and accurate ranging results are obtained by adopting delay summation beam forming so as to correct accumulated errors of PDR due to inertial navigation drift and realize sound wave positioning;
(2) simultaneously, visually positioning the pedestrians:
acquiring continuous image frames of places where pedestrians pass through by a camera of a smart phone, and performing image center projection vanishing point positioning and homography transformation to estimate a steering angle and a distance value of the pedestrians to realize visual positioning;
(3) the acoustic positioning and visual positioning results are fused with an unscented Kalman filter (UKF) algorithm that handles the nonlinearity of the pedestrian walking state equation and of the observation equation; without relying on any infrastructure at the application site, acoustic ranging, PDR positioning and visual positioning are fused, realizing seamless indoor-outdoor pedestrian self-positioning.
2. The indoor and outdoor pedestrian self-positioning method based on the multi-source audio and video information fusion of the smart phone according to claim 1, wherein the step (1) of performing sound wave positioning on indoor and outdoor pedestrians specifically comprises two steps:
(1.1) pseudo-ultrasonic distance measurement of the pedestrian:
The dual microphones of the smartphone are treated as a first-order end-fire array, and the close-range prior between the loudspeaker and the main microphone of the smartphone is used to obtain an accurate acoustic ranging result through DSB. Let the loudspeaker of the smartphone be S and the dual microphones be m_1 and m_2, where m_1 is the main microphone, d_m is the measured distance between m_1 and m_2, and D is the distance between S and the nearest reflecting wall while the pedestrian walks carrying the phone;
Because the distance between S and m_1 is very small, when S emits a pseudo-ultrasonic signal toward the wall, the first-order echo reflected by the wall is considered to arrive at m_1 and m_2 at the same angle, so the path difference of the first-order echo is taken as d_m. With c a constant denoting the propagation speed of sound in air, the relative delay of the first-order end-fire array is τ = d_m / c. According to the DSB beamforming result, the output signal y(n) of the microphones is
y(n) = Σ_{i=0}^{M−1} μ_i · x_i(n − τ_i)    (1)
where x_i denotes the received signal of the i-th microphone, M the number of microphones, μ_i the weight of the i-th microphone and τ_i the delay of the i-th microphone, with τ_i = τ when i ≠ 0;
Assuming the pseudo-ultrasonic signal emitted by the loudspeaker is s(n), the cross-correlation between the transmitted and received signals can be expressed as
R_{s,y}(τ) = (1/2π) ∫ G(ω) e^{jωτ} dω,  with G(ω) = s(ω)·y(ω)*    (2)
where s(ω) and y(ω) denote the Fourier transforms of s(n) and y(n), respectively, and the superscript (·)* denotes conjugation;
The distance between S and the nearest reflecting wall while the pedestrian walks carrying the phone is therefore
D = c · (arg max_τ R_{s,y}(τ))    (3)
(1.2) correcting the accumulated error of the PDR due to inertial navigation drift, which comprises the following three steps:
(1.2.1) Assume the pedestrian's starting position (x_t, y_t) is known;
(1.2.2) Read the data collected by the phone's accelerometer and gyroscope, filter and smooth them, and integrate them to obtain the pedestrian's dynamic step length l_t and heading θ_t;
(1.2.3) Substitute l_t and θ_t into the dead-reckoning formula to compute the pedestrian position (x_{t+1}, y_{t+1}) at the next instant:
x_{t+1} = x_t + l_t · sin θ_t,   y_{t+1} = y_t + l_t · cos θ_t
Over a short walking interval the drift error of the gyroscope has a negligible influence on the value of θ_t, so the accuracy of (x_{t+1}, y_{t+1}) depends mainly on the accuracy of the dynamic step length l_t;
A dynamic step length is used instead of a fixed one. With K denoting the step-length estimation parameter and a_max and a_min the maximum and minimum acceleration values during one step, the dynamic step length l_t is estimated with the Weinberg approximation model:
l_t = K · (a_max − a_min)^{1/4}
Because the dynamic step length is introduced, the accumulated error along the walking direction (the y-axis) decreases with each gait update. On top of the PDR estimate, the pseudo-ultrasonic ranging result D is then combined with the coordinate (x_k, y_{t+1}) of the reflecting wall k, where x_k is the x-axis coordinate of the reflecting wall used for pseudo-ultrasonic ranging; since the length of wall k is known, the pseudo-ultrasonic range can be used to correct the PDR position estimate.
3. The indoor and outdoor pedestrian self-positioning method based on the multi-source audio and video information fusion of the smart phone according to claim 1, wherein the step (2) of visually positioning indoor and outdoor pedestrians comprises two steps:
(2.1) calculating the steering angle of the pedestrian
A visual gyroscope is introduced: objects in the three-dimensional real scene are mapped to feature points of the two-dimensional image, target recognition is realized through the projection vanishing point, and the target's steering information is observed by tracking the projection vanishing point across consecutive images;
The image coordinate system is divided into an image physical coordinate system and an image pixel coordinate system, with the image centre O_1 as the origin of the image physical coordinate system and O_0 as the origin of the image pixel coordinate system;
Let the size of each pixel in the physical coordinate system be f_x, f_y, and let K be the intrinsic parameter matrix of the phone camera;
Let R be the rotation matrix of the phone camera and V(v_x, v_y, v_z) the coordinates of the projection vanishing point; V is then determined by K and R, where (u_0, v_0) is the origin of the image pixel coordinate system, θ is the roll angle of the phone attitude and φ the pitch angle, both obtained by solving the quaternion differential equation with the fourth-order Runge-Kutta method;
The pedestrian's steering angle θ_v is then obtained by trigonometric calculation between the projection vanishing points of consecutive image frames.
(2.2) calculating the distance value of the pedestrian
A visual odometer is introduced: homography transformation is used to solve the mapping of the projection vanishing point from one image plane to another, which yields the translation of the projection vanishing point, i.e. the walking distance l_v of the pedestrian;
Let H denote the homography transformation matrix, W the extrinsic parameter matrix of the camera, s_v the transformation scale, h the camera height, h_v the imaging height, and Dis the axial distance from the camera to the target feature point. With (u, v, 1) the pixel coordinates of the projection vanishing point in the current frame, the pixel coordinates (u', v', 1) of the vanishing point after homography transformation in the next frame satisfy
s_v · [u', v', 1]^T = H · [u, v, 1]^T
where H is determined by the camera intrinsics, the extrinsic matrix W, the camera height h, the imaging height h_v and the axial distance Dis;
When (u, v, 1) and (u', v', 1) are known, l_v can be calculated;
The visual positioning result (θ_v, l_v) is thus obtained.
CN201810988953.1A 2018-08-28 2018-08-28 Pedestrian's method for self-locating based on smart phone audio-video Multi-source Information Fusion Pending CN109405829A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810988953.1A CN109405829A (en) 2018-08-28 2018-08-28 Pedestrian's method for self-locating based on smart phone audio-video Multi-source Information Fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810988953.1A CN109405829A (en) 2018-08-28 2018-08-28 Pedestrian's method for self-locating based on smart phone audio-video Multi-source Information Fusion

Publications (1)

Publication Number Publication Date
CN109405829A true CN109405829A (en) 2019-03-01

Family

ID=65464403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810988953.1A Pending CN109405829A (en) 2018-08-28 2018-08-28 Pedestrian's method for self-locating based on smart phone audio-video Multi-source Information Fusion

Country Status (1)

Country Link
CN (1) CN109405829A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100286477A1 (en) * 2009-05-08 2010-11-11 Ouyang Xiaolong Internal tissue visualization system comprising a rf-shielded visualization sensor module
CN102538781A (en) * 2011-12-14 2012-07-04 浙江大学 Machine vision and inertial navigation fusion-based mobile robot motion attitude estimation method
CN103281500A (en) * 2013-04-24 2013-09-04 贵阳朗玛信息技术股份有限公司 Method and device for processing videos
CN105452897A (en) * 2013-05-31 2016-03-30 株式会社明通易 Positioning system, positioning method, and positioning program
CN105445776A (en) * 2015-12-28 2016-03-30 天津大学 Indoor and outdoor seamless positioning system
CN105809108A (en) * 2016-02-24 2016-07-27 中国科学院自动化研究所 Pedestrian positioning method and system based on distributed vision
CN107084718A (en) * 2017-04-14 2017-08-22 桂林电子科技大学 Indoor orientation method based on pedestrian's reckoning
CN107179079A (en) * 2017-05-29 2017-09-19 桂林电子科技大学 The indoor orientation method merged based on PDR with earth magnetism
CN107504971A (en) * 2017-07-05 2017-12-22 桂林电子科技大学 A kind of indoor orientation method and system based on PDR and earth magnetism
CN107830862A (en) * 2017-10-13 2018-03-23 桂林电子科技大学 A kind of method of the indoor positioning pedestrian tracking based on smart mobile phone
CN108008348A (en) * 2017-11-16 2018-05-08 华南理工大学 Underwater Wave arrival direction estimating method and device based on adjustable angle even linear array

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ERIC FOXLIN: "Pedestrian Tracking with Shoe-mounted Inertial Sensors", IEEE Computer Graphics and Applications *
LAURA RUOTSALAINEN et al.: "Monocular Visual SLAM for Tactical Situational Awareness", 2015 International Conference on Indoor Positioning and Indoor Navigation (IPIN) *
公续平 et al.: "A visual gyroscope/PDR/GNSS integrated navigation method for intelligent terminals", Proceedings of the 6th China Satellite Navigation Conference, S08 Satellite Navigation Models and Methods *
刘坤 et al.: "VPDR: Research on a Vision-aided Pedestrian Dead Reckoning Method", Proceedings of the 9th China Satellite Navigation Conference, S10 Multi-source Fusion Navigation Technology *
宋浠瑜 et al.: "A smartphone indoor positioning method based on the fusion of acoustic ranging and PDR", Chinese Journal on Internet of Things *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110068333A (en) * 2019-04-16 2019-07-30 深兰科技(上海)有限公司 A kind of high-speed rail robot localization method, apparatus and storage medium
CN110646764A (en) * 2019-10-12 2020-01-03 桂林电子科技大学 Indoor positioning system and positioning method based on pseudo-ultrasound
CN111551180A (en) * 2020-05-22 2020-08-18 桂林电子科技大学 Smart phone indoor positioning system and method capable of identifying LOS/NLOS acoustic signals
CN111652831A (en) * 2020-06-28 2020-09-11 腾讯科技(深圳)有限公司 Object fusion method and device, computer-readable storage medium and electronic equipment
CN112378407A (en) * 2020-11-25 2021-02-19 中国人民解放军战略支援部队信息工程大学 Indoor positioning method based on combination of smart phone sensor and sound wave positioning
CN112729282B (en) * 2020-12-21 2022-03-25 杭州电子科技大学 Indoor positioning method integrating single anchor point ranging and pedestrian track calculation
CN112729282A (en) * 2020-12-21 2021-04-30 杭州电子科技大学 Indoor positioning method integrating single anchor point ranging and pedestrian track calculation
CN115184941A (en) * 2022-07-07 2022-10-14 浙江德清知路导航科技有限公司 Audio-based positioning and object-finding method, system and equipment
CN115100298A (en) * 2022-08-25 2022-09-23 青岛杰瑞工控技术有限公司 Light-sound image fusion method for deep and open sea visual culture
CN115235455A (en) * 2022-09-19 2022-10-25 中国人民解放军国防科技大学 Pedestrian positioning method based on smart phone PDR and vision correction
CN115235455B (en) * 2022-09-19 2023-01-13 中国人民解放军国防科技大学 Pedestrian positioning method based on smart phone PDR and vision correction
CN116380056A (en) * 2023-06-02 2023-07-04 中国船舶集团有限公司第七〇七研究所 Inertial positioning method, inertial positioning device, electronic equipment and storage medium
CN116380056B (en) * 2023-06-02 2023-08-15 中国船舶集团有限公司第七〇七研究所 Inertial positioning method, inertial positioning device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109405829A (en) Pedestrian's method for self-locating based on smart phone audio-video Multi-source Information Fusion
CN106931961B (en) Automatic navigation method and device
JP2021516401A (en) Data fusion method and related equipment
US10969462B2 (en) Distance-based positioning system and method using high-speed and low-speed wireless signals
CN105974456B (en) A kind of autonomous underwater vehicle combined navigation system
CN1952684A (en) Method and device for localization of sound source by microphone
WO2024027350A1 (en) Vehicle positioning method and apparatus, computer device and storage medium
CN110823211A (en) Multi-sensor map construction method, device and chip based on visual SLAM
CN112862818B (en) Underground parking lot vehicle positioning method combining inertial sensor and multi-fisheye camera
CN109696173A (en) A kind of car body air navigation aid and device
CN103472434A (en) Robot sound positioning method
CN114545918A (en) Robot inspection system and inspection method capable of accessing mobile terminal
Khoshelham et al. Vehicle positioning in the absence of GNSS signals: Potential of visual-inertial odometry
Cao et al. WiFi RTT indoor positioning method based on Gaussian process regression for harsh environments
CN113534226B (en) Indoor and outdoor seamless positioning algorithm based on smart phone scene recognition
Murakami et al. Five degrees-of-freedom pose-estimation method for smartphones using a single acoustic anchor
CN106908054A (en) A kind of positioning path-finding method and device based on ultra-wideband signal
KR101502071B1 (en) Camera Data Generator for Landmark-based Vision Navigation System and Computer-readable Media Recording Program for Executing the Same
WO2022018964A1 (en) Information processing device, information processing method, and program
CN115205384A (en) Blind guiding method and device and readable storage medium
CN114429515A (en) Point cloud map construction method, device and equipment
WO2020001629A1 (en) Information processing device, flight path generating method, program, and recording medium
CN109375223B (en) Indoor space perception and mobile sound source self-positioning method based on sound wave particle duality
KR100874425B1 (en) System for measuring size of signboard and method for measuring size of signboard using the same
CN114966547B (en) Compensation method, system and device for improving sound source positioning accuracy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190301