CN113114850B - Online fusion positioning method based on surveillance video and PDR


Info

Publication number
CN113114850B
CN113114850B
Authority
CN
China
Prior art keywords
track
pdr
video
pedestrian
algorithm
Prior art date
Legal status
Active
Application number
CN202110290805.4A
Other languages
Chinese (zh)
Other versions
CN113114850A (en)
Inventor
杨帆
黄翠彦
苟柳燕
胡丁文
胡凯翔
霍永青
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202110290805.4A priority Critical patent/CN113114850B/en
Publication of CN113114850A publication Critical patent/CN113114850A/en
Application granted granted Critical
Publication of CN113114850B publication Critical patent/CN113114850B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content

Abstract

The invention discloses an online fusion positioning method based on surveillance video and PDR (pedestrian dead reckoning). The method collects data from the motion sensors of the user's mobile phone and derives the user's PDR track with a PDR algorithm; collects video stream data from a real-time surveillance camera and derives video pedestrian tracks with a deep-learning pedestrian detection framework and a feature weighting algorithm; matches the PDR track and the video pedestrian tracks with a dual track matching algorithm; and determines the user's current position and tracking trajectory from the matching result. To address the shortcomings of existing positioning technologies, such as the difficulty of balancing system deployment cost against positioning accuracy, high system cost and low feasibility for practical application, the method combines video pedestrian tracking with PDR. It makes full use of the surveillance cameras already widely deployed in indoor environments and the smartphone the user already carries, so no additional positioning equipment needs to be deployed and the cost of the positioning system is reduced; the two technologies compensate for each other's weaknesses and the positioning accuracy is improved.

Description

Online fusion positioning method based on surveillance video and PDR
Technical Field
The invention relates to a positioning method, in particular to an online fusion positioning method based on a surveillance video and a PDR.
Background
Existing indoor positioning technologies fall into two main categories. The first category uses wireless or sensor signals such as Wi-Fi, PDR, geomagnetism, Bluetooth, UWB and radio frequency for positioning. Its main problem is the trade-off between system deployment cost and positioning accuracy: technologies such as Wi-Fi, PDR and geomagnetism can reuse existing hardware and keep positioning cost low, but their signals are unstable and easily disturbed by the environment, so the positioning accuracy is low; technologies such as Bluetooth, UWB and radio frequency can achieve good positioning accuracy, but require additional hardware to be deployed, so the system construction cost is high and practical popularization is difficult. The second category fuses multiple positioning technologies; its accuracy is higher than that of any single technology, but current solutions still suffer from high system cost or low feasibility for practical application.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an online fusion positioning method based on a surveillance video and a PDR.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that:
an online fusion positioning method based on surveillance videos and PDRs comprises the following steps:
s1, collecting the data of the mobile phone motion sensor of the user, and obtaining the PDR track of the user according to the PDR algorithm;
s2, collecting video stream data of a real-time monitoring camera, and obtaining a video pedestrian track by adopting a deep learning pedestrian detection framework and a feature weighting algorithm;
s3, matching the PDR track with the video pedestrian track by using a dual track matching algorithm;
and S4, determining the current position and the tracking track of the user according to the matching result of the step S3.
The invention has the following beneficial effects: surveillance video and mobile phone PDR information are fused for positioning. Video pedestrian tracking alone suffers from occlusion and cannot identify which user a track belongs to, so it can hardly provide an autonomous positioning service by itself; PDR alone requires the user's initial position to be determined and accumulates error over time, so it is also unsuitable for standalone positioning. By fusing the two technologies, the invention compensates for their respective weaknesses and improves positioning accuracy while avoiding the deployment of additional hardware; moreover, the dual track matching algorithm used to match the PDR track with the video pedestrian tracks both speeds up matching, which keeps the response time of the positioning system short, and improves the matching accuracy of the two kinds of tracks.
Preferably, step S1 includes the following substeps:
s11, collecting the data of the motion sensors of the accelerometer, the magnetometer and the gyroscope which are arranged in the mobile phone of the user, processing the data of the sensors by adopting a filtering algorithm, and converting the coordinate system data of the mobile phone carrier to a plane coordinate system by utilizing a coordinate system conversion algorithm;
s12, based on the mobile phone sensor data collected in the step S11, the step number, the step length corresponding to each step and the course angle in the user motion process are obtained through a step number detection algorithm, a step length estimation algorithm and a course angle estimation algorithm, and the PDR track of the user is obtained through a PDR algorithm according to the initial position point.
The preferred scheme has the following beneficial effects: the data acquisition is carried out by utilizing the information of the sensor of the smart phone carried by the user, the PDR track of the user is calculated, and the data acquisition process and the calculation process are convenient and fast.
Preferably, step S2 includes the following substeps:
s21, collecting video stream data of the real-time monitoring camera;
s22, processing the video stream obtained in the step S21 into images according to frames, processing the images by using a deep learning pedestrian detection frame to obtain all pedestrian detection results in each frame of image, and taking all pedestrian detection frame pixel coordinates of the first frame of video stream as an initial video pedestrian track;
s23, extracting features of each frame of video pedestrian detection result obtained in the step S22, calculating the similarity between the current video image frame detection result and the initial video pedestrian track by using a feature weighting algorithm, and associating the current video image frame pedestrian detection result with the initial video pedestrian track for updating to obtain a new video pedestrian track;
and S24, converting the video pedestrian track obtained in the step S23 from a pixel coordinate system to a plane coordinate system by adopting a DLT plane space correction method with a calibration point.
The preferred scheme has the following beneficial effects: the video pedestrian track is obtained by utilizing the monitoring camera equipment which is widely deployed in the indoor environment, no additional positioning equipment is required to be deployed, and the cost of the positioning system is reduced.
Preferably, step S3 includes the following substeps:
s31, carrying out real-time matching on the PDR track and the video pedestrian track by adopting a single-step quick matching algorithm;
and S32, matching the PDR track and the video pedestrian track by adopting a multi-step global matching algorithm.
The preferred scheme has the following beneficial effects: the double-track matching algorithm is adopted to match the PDR track and the video pedestrian track, so that the matching speed is improved, the response time of a positioning system is ensured, and the matching precision of the two tracks is improved.
Preferably, step S31 includes the following substeps:
s311, according to the step length of the single step of the user, the course and the system time of the mobile phone, time synchronization is carried out through a processing server to obtain a video pedestrian track corresponding to the single step;
s312, fitting the video pedestrian track obtained in the step S311 by adopting a least square method to obtain a fitted track of the video pedestrian track;
s313, extracting to obtain the pedestrian track characteristics of the video based on the fitting track obtained in the step S312, and obtaining the PDR track characteristics based on the user step length and the course angle calculated in the step S12;
s314, based on the video pedestrian track characteristics extracted in the step S313, calculating the total similarity M of the ith PDR track segment and the jth video track segment by adopting a characteristic weighting algorithmi,jSequentially calculating eachDetermining the total similarity of the PDR track segments and all the video track segments, and determining a similarity matrix M;
s315, matching the PDR track and the video pedestrian track by adopting a Hungarian algorithm, wherein the specific contents are as follows:
firstly, calculating a cost matrix C of the Hungarian algorithm based on the similarity matrix M of the step S314, wherein the calculation formula is as follows:
C=I-M
and performing Hungarian algorithm matching according to the cost matrix to obtain a result matrix R, wherein if an element R_{i,j} of R is 1, the i-th PDR track segment is considered to be matched with the j-th video track segment.
The preferred scheme has the following beneficial effects: the single-step matching algorithm processes a small amount of data, computes quickly, is suitable for real-time online matching and positioning, and can quickly determine the current position of the user.
Preferably, step S312 specifically includes:
Based on the L data samples of the video pedestrian trajectory segment W_AB, where W_AB is the single-step video pedestrian trajectory segment with starting point A and end point B, the data are fitted with the following fitting function h_θ(x):

$$ h_\theta(x) = \theta_0 x + \theta_1 $$

where θ_0 and θ_1 are the fitting parameters; the abscissas of the starting point A and the end point B are input into the fitting function to obtain the corresponding ordinates, giving the starting point coordinates (x_A, y_A) and the end point coordinates (x_B, y_B).
The preferred scheme has the following beneficial effects: the pedestrian bounding boxes produced by the detection algorithm jitter, converting pixel coordinates into the plane coordinate system introduces additional noise, and the video pedestrian track corresponding to a single-step PDR track is short, so extracting features directly from the raw video track works poorly. Fitting the data of the video track segment AB with the least squares method yields a fitted track on which feature extraction is subsequently performed, which reduces the influence of noise and gives a more accurate video pedestrian track.
Preferably, step S313 specifically includes:
Calculating the length L_{A,B} of the video pedestrian trajectory segment W_AB:

$$ L_{A,B} = \sqrt{(x_B - x_A)^2 + (y_B - y_A)^2} $$

calculating the motion direction φ_{A,B} of the video pedestrian trajectory segment W_AB with the arctangent function:

$$ \varphi_{A,B} = \arctan\frac{y_B - y_A}{x_B - x_A} $$

and taking the single-step step length obtained in step S12 as the length L_{a,b} of the single-step PDR trajectory segment R_ab, and the single-step heading angle as the motion direction φ_{a,b} of the PDR trajectory segment R_ab.
The preferred scheme has the following beneficial effects: track length and motion direction of the pedestrian's walk are relatively effective features for judging the pedestrian's motion state; they are simple to compute and improve both the accuracy and the running speed of the track matching algorithm.
Preferably, step S314 specifically includes:
Computing the similarity between the length L_{a,b} of the single-step PDR trajectory segment R_ab and the length L_{A,B} of the video pedestrian trajectory segment W_AB, normalized to the range [0,1]; then computing the similarity between the motion direction φ_{a,b} of the single-step PDR trajectory segment R_ab and the motion direction φ_{A,B} of the video pedestrian trajectory segment W_AB, likewise normalized to [0,1]; the total similarity M_{i,j} of the i-th PDR track segment and the j-th video track segment is then calculated by weighting the similarities of the two features, track length and motion direction, with weights ω_1 and ω_2, where ω_1 + ω_2 = 1 and τ_χ is a preset threshold; the total similarity of each PDR track segment with all video track segments is calculated in turn, and the similarity matrix M is determined.
The preferred scheme has the following beneficial effects: the similarity is calculated by utilizing the characteristic weighting algorithm, so that the method has the advantages of simple algorithm, good real-time performance and the like, and static and dynamic pedestrians can be effectively distinguished according to the walking track length of the pedestrians.
Preferably, step S32 includes the following substeps:
s321, obtaining a multi-step video pedestrian track by adopting a time synchronization mode according to the multi-step length and course angle data of the user;
s322, taking the starting point of the video pedestrian track obtained in the step S321 as the starting position point of the PDR track, and obtaining the PDR track according to the step number, the step length and the course angle data;
S323, extracting the features of the PDR track and the video pedestrian track, including the sampling-point distance, the motion-direction cosine value and the sampling-point time difference; the specific process is as follows:

Let the coordinates of PDR track sampling point b be (x_b, y_b) and the coordinates of video pedestrian track sampling point B be (x_B, y_B); the sampling-point distance d_{b,B} is calculated as:

$$ d_{b,B} = \sqrt{(x_B - x_b)^2 + (y_B - y_b)^2} $$

The motion direction of each PDR sampling point is obtained with the heading angle estimation algorithm, the motion direction of each sampling point of the video pedestrian track is calculated with the arctangent function of step S314 from the coordinates of two consecutive sampling points, and the motion-direction cosine value φ_{b,B} is calculated as:

$$ \phi_{b,B} = \cos(\varphi_b - \varphi_B) $$

where φ_b is the motion direction of PDR track sampling point b and φ_B is the motion direction of video pedestrian track sampling point B;

The sampling-point time difference Δt_{b,B} is calculated as:

$$ \Delta t_{b,B} = |t_b - t_B| $$

where t_b is the sampling time of PDR track sampling point b and t_B is the sampling time of video pedestrian track sampling point B;

The three extracted features are normalized, and the similarity Θ(b, B) between a single PDR sampling point and a single video pedestrian track sampling point is calculated by weighting the three normalized features with weights α, β and γ, where α + β + γ = 1;
S324, calculating the similarity between the PDR track and the video pedestrian track with the DTW algorithm; the specific process is as follows:

Let the PDR track be U = {u_1, u_2, u_3, …, u_n}, where n is the number of sampling points of the PDR track, and the video pedestrian track be V = {v_1, v_2, v_3, …, v_m}, where m is the number of sampling points of the video pedestrian track. A matrix grid Θ of size n × m is constructed, in which the grid element Θ(r, c) is the similarity, computed as in step S323, between the r-th sampling point of the PDR track and the c-th sampling point of the video pedestrian track. An optimal path H = {h_1, h_2, h_3, …, h_g} in the matrix grid Θ is obtained with a dynamic programming algorithm, where max(n, m) ≤ g < (n + m − 1). The cumulative similarity of the k-th point in the optimal path is calculated as:

$$ \Omega_k(r_k, c_k) = \Theta(r_k, c_k) + \min\{\Omega_{k-1}(r_{k-1}, c_k),\ \Omega_{k-1}(r_k, c_{k-1}),\ \Omega_{k-1}(r_{k-1}, c_{k-1})\} $$

where Ω_1(r_1, c_1) = Θ(1, 1), k denotes the k-th position in the optimal path, and r_k and c_k are the row index and column index in the matrix grid of the k-th position of the optimal path. The cumulative similarity Ω_g(r_g, c_g) of the last position point g of the optimal path accumulates the similarities of all previous position points h_1, h_2, h_3, …, h_{g-1}; the similarity Φ(U, V) of the PDR track U and the video pedestrian track V is calculated as:

$$ \Phi(U, V) = \frac{\Omega_g(r_g, c_g)}{g} $$
and S325, matching the PDR track with the video pedestrian track by adopting a Hungarian algorithm according to the similarity of the PDR track U and the video pedestrian track V obtained in the step S324.
The preferred scheme has the following beneficial effects: the single-step matching algorithm does not fully exploit historical data, which limits the matching precision of the PDR track and the video pedestrian track; therefore, in addition to single-step matching, a global matching using the historical data is performed once after the user has walked for a period of time, which further improves the matching precision of the PDR track and the video pedestrian track.
Preferably, step S4 includes the following substeps:
S41, determining the video pedestrian track W_c matched with each PDR track according to the PDR track and video pedestrian track matching results of steps S315 and S325;
S42, judging whether the similarity between the PDR track and the video pedestrian track W_c exceeds a preset threshold; if yes, go to step S43, otherwise go to step S49;
S43, judging whether the current matching is the first matching of the PDR track; if yes, executing step S48, otherwise executing step S44;
S44, judging whether the serial number of the video pedestrian track W_c is the same as that of the video pedestrian track W_p matched last time; if yes, executing step S48, otherwise executing step S45;
S45, judging whether the similarity and the distance between the PDR track and the video pedestrian track W_p at the current moment are both within their thresholds; if yes, executing step S410, otherwise executing step S46;
S46, judging whether the similarity or the distance between the PDR track and the video pedestrian track W_c is within its threshold; if yes, executing step S47, otherwise executing step S49;
S47, judging whether the PDR track has been matched with the video pedestrian track W_p continuously for the preset number of times N; if yes, executing step S49, otherwise executing step S48;
S48, determining the current position of the user according to the current video pedestrian track W_c, and returning to step S41;
S49, determining the current position of the user according to the video pedestrian track W_p from the last matching, and returning to step S41;
S410, determining the current position of the user according to the PDR track, and returning to step S41.
The preferred scheme has the following beneficial effects: under a general positioning scene, the continuity of the video track of the same pedestrian is relatively good in a short time, the track matching precision can be improved by adopting threshold value constraint, and the fact that the same PDR track is frequently matched with different video pedestrian tracks in the track matching process is avoided. In addition, under the condition that no video pedestrian track can be matched, the PDR track is adopted to determine the current position of the user, so that the continuity of the positioning track of the user can be ensured, the effect of sustainable positioning when the user is shielded is realized, and the effective positioning range is expanded.
Drawings
FIG. 1 is a flow chart of an online fusion positioning method based on surveillance video and PDR of the present invention;
FIG. 2 is a flow chart of a specific positioning method in an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a corresponding relationship between a PDR trajectory and a video pedestrian trajectory in single-step matching according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating the derivation of a PDR trajectory from a video pedestrian trajectory during multi-step matching according to an embodiment of the present invention;
FIG. 5 is a flow chart of determining a current location of a user in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1 and fig. 2, the present invention provides an online fusion positioning method based on surveillance video and PDR, comprising the following steps:
s1, collecting the data of the mobile phone motion sensor of the user, and obtaining the PDR track of the user according to the PDR algorithm;
in the embodiment of the present invention, step S1 includes the following sub-steps:
S11, collecting data from the accelerometer, magnetometer and gyroscope motion sensors built into the user's mobile phone, processing the sensor data with a filtering algorithm, and converting the data from the mobile phone carrier coordinate system to a plane coordinate system with a coordinate system conversion algorithm; the motion sensor data collected by the mobile phone are expressed in the carrier coordinate system of the phone, and position navigation can only be carried out after converting them into the plane coordinate system;
S12, based on the mobile phone sensor data collected in step S11, obtaining the step count during the user's movement, the step length corresponding to each step and the heading angle with step detection, step length estimation and heading angle estimation algorithms, and, provided the initial position point can be determined, obtaining the user's PDR track from that initial position point with the PDR algorithm.
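As a concrete illustration of the dead-reckoning update in step S12, the following sketch advances the position by one detected step from a step length and heading angle; it is a minimal example that assumes headings are measured clockwise from the map's north (+y) axis, which is a common PDR convention rather than something specified by the patent.

```python
import math

def pdr_update(x, y, step_length, heading_rad):
    """Advance one step from (x, y) given a step length (m) and heading angle (rad).

    Assumes the heading is measured clockwise from the +y (north) axis of the
    plane coordinate system; adjust the trigonometry if another convention is used.
    """
    return x + step_length * math.sin(heading_rad), y + step_length * math.cos(heading_rad)

def pdr_track(initial_point, step_lengths, headings_rad):
    """Accumulate the PDR track from an initial position and per-step estimates."""
    track = [initial_point]
    x, y = initial_point
    for l, h in zip(step_lengths, headings_rad):
        x, y = pdr_update(x, y, l, h)
        track.append((x, y))
    return track

# Example: three detected steps of roughly 0.7 m heading approximately east.
print(pdr_track((0.0, 0.0), [0.7, 0.72, 0.69], [math.pi / 2] * 3))
```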
S2, collecting video stream data of a real-time monitoring camera, and obtaining a video pedestrian track by adopting a deep learning pedestrian detection framework and a feature weighting algorithm;
in the embodiment of the present invention, step S2 includes the following sub-steps:
S21, installing a surveillance camera in the area to be positioned, and uploading the monitored video stream data to a processing server in real time over a wired or wireless network;
S22, splitting the video stream into images frame by frame and feeding each image to a YOLOv3 pedestrian detector to obtain all pedestrian detections in each frame; each detection result comprises the bounding-box position information (x, y, w, h) and a confidence score, where (x, y) are the pixel coordinates of the top-left corner of the bounding box, w is its width and h is its height;
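A minimal sketch of the per-frame processing in step S22 is given below; it only shows the plumbing (reading frames with OpenCV and collecting (x, y, w, h, score) boxes), and the `detect_pedestrians` call stands in for whatever YOLOv3 inference code is actually deployed, so it is a hypothetical placeholder rather than part of the patent.

```python
import cv2  # OpenCV, used here only to read the camera/video stream

def detect_pedestrians(frame):
    """Placeholder for YOLOv3 inference; expected to return a list of
    (x, y, w, h, score) tuples, where (x, y) is the top-left pixel of the box."""
    raise NotImplementedError("plug in the deployed pedestrian detector here")

def detections_per_frame(stream_url):
    """Yield (frame_index, detections) for every frame of the video stream."""
    cap = cv2.VideoCapture(stream_url)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield idx, detect_pedestrians(frame)
        idx += 1
    cap.release()
```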
s23, extracting features of each frame of video pedestrian detection result obtained in the step S22, wherein the features comprise motion features, appearance features and other feature information capable of measuring the similarity of different image frame detection frames, calculating the similarity of the current video image frame detection result and the initial video pedestrian track by using a feature weighting algorithm, and associating the current video image frame pedestrian detection result with the previous initial video pedestrian track according to a Hungarian algorithm or a Markov model to obtain the latest video pedestrian track;
S24, converting the video pedestrian track obtained in step S23 from the pixel coordinate system to the plane coordinate system with a DLT plane space correction method using calibration points. The video pedestrian track obtained in step S23 is expressed in the pixel coordinate system, while the PDR track is expressed in the plane coordinate system, so the two tracks can only be matched after converting between the pixel and plane coordinate systems. First, a number of map calibration points are set in the area to be positioned, their coordinates in the plane coordinate system and in the pixel coordinate system are recorded, and the coordinate-transformation model parameters Γ are solved with the DLT algorithm. Suppose there are n calibration points with plane coordinates (x_i, y_i), i = 1, 2, …, n, and corresponding image pixel coordinates (u_i, v_i). According to the DLT algorithm, each calibration point yields two linear equations relating its plane coordinates to its pixel coordinates through eight unknown transform parameters, and the resulting system of equations is expressed in matrix form:

$$ C\Gamma = P $$

where Γ is the vector of the eight conversion coefficients from image pixel coordinates to plane coordinates, C is the coefficient matrix assembled from the calibration-point coordinates, and P is the vector of plane coordinates of the calibration points. According to the least squares method, the 8 parameters of Γ are calculated as:

$$ \Gamma = (C^T C)^{-1} C^T P $$
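To make the pixel-to-plane conversion of step S24 concrete, here is a small sketch that assembles C and P from calibration points and solves Γ by least squares; the specific 8-parameter projective (homography-style) parameterization used below is an assumption consistent with the "8 parameters" mentioned in the text, not a formula quoted from the patent.

```python
import numpy as np

def fit_dlt(pixel_pts, plane_pts):
    """Solve the 8 DLT parameters mapping pixel (u, v) -> plane (x, y).

    Assumed model (projective / homography form, 8 unknowns g1..g8):
        x = (g1*u + g2*v + g3) / (g7*u + g8*v + 1)
        y = (g4*u + g5*v + g6) / (g7*u + g8*v + 1)
    Each calibration point contributes two linear equations, giving C @ gamma = P.
    """
    rows, rhs = [], []
    for (u, v), (x, y) in zip(pixel_pts, plane_pts):
        rows.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); rhs.append(x)
        rows.append([0, 0, 0, u, v, 1, -u * y, -v * y]); rhs.append(y)
    C, P = np.asarray(rows, float), np.asarray(rhs, float)
    gamma, *_ = np.linalg.lstsq(C, P, rcond=None)   # least-squares solution of C @ gamma = P
    return gamma

def pixel_to_plane(gamma, u, v):
    """Map a pixel coordinate to plane coordinates with the fitted parameters."""
    g1, g2, g3, g4, g5, g6, g7, g8 = gamma
    w = g7 * u + g8 * v + 1.0
    return (g1 * u + g2 * v + g3) / w, (g4 * u + g5 * v + g6) / w
```

At least four non-collinear calibration points are needed to determine the eight parameters.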
S3, matching the PDR track and the video pedestrian track with a dual track matching algorithm. Steps S1 and S2 are performed simultaneously; once the pedestrian's PDR track and video pedestrian track have been obtained, they are matched with the dual track matching algorithm, which matches the PDR track and the video pedestrian track with both a local fast matching algorithm (also called the single-step fast matching algorithm) and a global optimization matching algorithm;
in the embodiment of the present invention, step S3 includes the following sub-steps:
s31, performing real-time matching of the PDR track and the video pedestrian track by adopting a single-step fast matching algorithm, namely performing PDR track and video pedestrian track matching once every time a user walks by one step;
in the embodiment of the present invention, step S31 includes the following sub-steps:
S311, according to the step length and heading of the user's single step and the system time of the mobile phone, time synchronization is carried out by the processing server to find the video pedestrian track corresponding to the single step. A schematic diagram of the correspondence between the single-step PDR track and the video pedestrian track is shown in FIG. 3, where the starting position point a and the ending position point b of step 1 correspond to position point A and position point B of the video pedestrian track respectively, and the video pedestrian track is determined through the time synchronization relation. The frame rate of the video stream is higher than the sampling frequency of the mobile phone sensor, so there are more video track data points than PDR track points;
S312, fitting the video pedestrian track obtained in step S311 with the least squares method. The pedestrian bounding boxes produced by the detection algorithm jitter, converting pixel coordinates into the plane coordinate system introduces additional noise, and the video pedestrian track corresponding to a single PDR step is short, so extracting features directly from the raw video track works poorly. To reduce the influence of noise and obtain a more accurate video pedestrian track, the data between the end points A and B of the video track segment are fitted with the least squares method, and feature extraction is then performed on the fitted track;
in the embodiment of the present invention, step S312 specifically includes:
Based on the L data samples {(x^(0), y^(0)), (x^(1), y^(1)), (x^(2), y^(2)), …, (x^(L−1), y^(L−1))} of the video pedestrian trajectory segment W_AB, where W_AB is the single-step video pedestrian trajectory segment with starting point A and end point B, the data are fitted with the following fitting function h_θ(x):

$$ h_\theta(x) = \theta_0 x + \theta_1 $$

where θ_0 and θ_1 are the fitting parameters, solved by least-squares regression over the video pedestrian trajectory data of the segment. The abscissas of the starting point A and the end point B are then input into the fitted function h_θ(x) to obtain the corresponding ordinates, giving the starting point coordinates (x_A, y_A) and the end point coordinates (x_B, y_B).
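A brief sketch of this single-step smoothing, assuming the segment's samples are stored as two NumPy arrays; it uses an ordinary least-squares line fit (`numpy.polyfit`) and then re-evaluates the fitted line at the abscissas of the first and last samples, mirroring the description above.

```python
import numpy as np

def fit_segment_endpoints(xs, ys):
    """Least-squares line fit of one single-step video segment.

    xs, ys: plane coordinates of the L samples between points A and B.
    Returns the smoothed endpoint coordinates (x_A, y_A), (x_B, y_B).
    """
    theta0, theta1 = np.polyfit(xs, ys, deg=1)      # h(x) = theta0 * x + theta1
    x_a, x_b = xs[0], xs[-1]
    return (x_a, theta0 * x_a + theta1), (x_b, theta0 * x_b + theta1)

# Example with slightly noisy samples along a straight walk.
xs = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
ys = np.array([0.02, 0.19, 0.42, 0.58, 0.81])
print(fit_segment_endpoints(xs, ys))
```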
S313, extracting to obtain the pedestrian track characteristics of the video based on the fitting track obtained in the step S312, and obtaining the PDR track characteristics based on the user step length and the course angle calculated in the step S12;
in the embodiment of the present invention, step S313 specifically includes:
Calculating the length L_{A,B} of the video pedestrian trajectory segment W_AB, measured with the Euclidean distance:

$$ L_{A,B} = \sqrt{(x_B - x_A)^2 + (y_B - y_A)^2} $$

calculating the motion direction φ_{A,B} of the video pedestrian trajectory segment W_AB with the arctangent function:

$$ \varphi_{A,B} = \arctan\frac{y_B - y_A}{x_B - x_A} $$

and then taking the single-step step length obtained in step S12 as the length L_{a,b} of the single-step PDR trajectory segment R_ab, and the single-step heading angle as the motion direction φ_{a,b} of the PDR trajectory segment R_ab; both the step length and the heading angle are obtained from the mobile phone motion sensor data through the step length estimation algorithm and the heading angle estimation algorithm.
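The two segment features can be computed in a couple of lines; the sketch below uses `atan2` instead of a plain arctangent so the direction is unambiguous over all four quadrants, an implementation choice of this sketch rather than the patent's wording.

```python
import math

def segment_features(x_a, y_a, x_b, y_b):
    """Length and motion direction of a track segment from A to B in plane coordinates."""
    length = math.hypot(x_b - x_a, y_b - y_a)          # Euclidean distance
    direction = math.atan2(y_b - y_a, x_b - x_a)       # angle of the displacement vector
    return length, direction

print(segment_features(0.0, 0.0, 0.8, 0.1))
```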
S314, based on the video pedestrian track features extracted in step S313, calculating the total similarity M_{i,j} of the i-th PDR track segment and the j-th video track segment with a feature weighting algorithm, sequentially calculating the total similarity of each PDR track segment with all video track segments, and determining a similarity matrix M;
in this embodiment of the present invention, step S314 specifically includes:
Computing the similarity between the length L_{a,b} of the single-step PDR trajectory segment R_ab and the length L_{A,B} of the video pedestrian trajectory segment W_AB, normalized to the range [0,1]; then computing the similarity between the motion direction φ_{a,b} of the single-step PDR trajectory segment R_ab and the motion direction φ_{A,B} of the video pedestrian trajectory segment W_AB, likewise normalized to [0,1]; the total similarity M_{i,j} of the i-th PDR track segment and the j-th video track segment is then calculated by weighting the similarities of the two features, track length and motion direction, with weights ω_1 and ω_2, where ω_1 + ω_2 = 1 and τ_χ and τ_φ are preset thresholds. If the length ratio of the two track segments falls below the threshold τ_χ, or the two motion directions differ by more than the threshold τ_φ allows, the total similarity of the i-th PDR track segment and the j-th video track segment is set to 0; otherwise, the greater the total similarity, the greater the probability that the two tracks match. The total similarity of each PDR track segment with all video track segments is calculated in turn, and the similarity matrix M is determined.
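The following sketch shows one way the similarity matrix M could be assembled; the specific normalizations (length similarity as the min/max ratio, direction similarity as a rescaled cosine of the angle difference) are assumptions chosen only to satisfy the [0,1] ranges and thresholds described above, since the patent's exact formulas are given as images.

```python
import math
import numpy as np

def segment_similarity(len_pdr, dir_pdr, len_vid, dir_vid,
                       w1=0.5, w2=0.5, tau_len=0.5, tau_dir=0.5):
    """Weighted similarity of one PDR segment and one video segment (assumed forms)."""
    s_len = min(len_pdr, len_vid) / max(len_pdr, len_vid)   # length ratio in [0, 1]
    s_dir = (1.0 + math.cos(dir_pdr - dir_vid)) / 2.0        # direction agreement in [0, 1]
    if s_len < tau_len or s_dir < tau_dir:                    # too dissimilar: reject
        return 0.0
    return w1 * s_len + w2 * s_dir                            # w1 + w2 = 1

def similarity_matrix(pdr_segments, video_segments):
    """pdr_segments / video_segments: lists of (length, direction) tuples."""
    M = np.zeros((len(pdr_segments), len(video_segments)))
    for i, (lp, dp) in enumerate(pdr_segments):
        for j, (lv, dv) in enumerate(video_segments):
            M[i, j] = segment_similarity(lp, dp, lv, dv)
    return M
```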
S315, matching the PDR track and the video pedestrian track by adopting a Hungarian algorithm, wherein the specific contents are as follows:
firstly, calculating a cost matrix C of the Hungarian algorithm based on the similarity matrix M of step S314, wherein the calculation formula is as follows:
C=I-M
That is, the smaller c_{i,j} is, the larger the probability that the i-th PDR track segment matches the j-th video track segment. The Hungarian algorithm is a globally optimal assignment algorithm; it is mainly used for many-to-many task assignment and is therefore suitable for matching the set of PDR tracks with the set of video pedestrian tracks. Matching with the Hungarian algorithm according to the cost matrix yields a result matrix R in which exactly one element of each row or column is 1 and the rest are 0; if an element R_{i,j} of R is 1, the i-th PDR track segment is considered to be matched with the j-th video track segment.
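In practice the assignment can be delegated to an off-the-shelf solver; the sketch below uses `scipy.optimize.linear_sum_assignment` on a cost of one minus similarity (reading the C = I − M of the text as an elementwise complement, which is an assumption of this sketch) and returns the matched (i, j) pairs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks(M, min_similarity=0.0):
    """Assign PDR track segments (rows of M) to video track segments (columns of M).

    M: similarity matrix with entries in [0, 1].
    Returns the (i, j) pairs whose similarity exceeds min_similarity.
    """
    cost = 1.0 - M                                   # assumed reading of C = I - M
    rows, cols = linear_sum_assignment(cost)         # Hungarian-style optimal assignment
    return [(int(i), int(j)) for i, j in zip(rows, cols) if M[i, j] > min_similarity]

M = np.array([[0.9, 0.2, 0.1],
              [0.3, 0.8, 0.4]])
print(match_tracks(M))   # pairs PDR segment 0 with video segment 0, and 1 with 1
```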
S32, matching the PDR track and the video pedestrian track by adopting a multi-step global matching algorithm, namely matching the PDR track and the video pedestrian track once when a user walks for k steps, and improving the matching precision through historical data. In order to improve the matching precision of the PDR track and the video pedestrian track, when a user walks for a period of time, global matching is performed once by using historical data.
In the embodiment of the present invention, step S32 includes the following sub-steps:
S321, every time the user has walked a preset number of k steps, acquiring the step length and heading angle data of the most recent k steps and, by time synchronization, the video pedestrian track covering the same time period;
S322, taking the starting point of the video pedestrian trajectory obtained in step S321 as the starting position point of the PDR trajectory, and obtaining the PDR trajectory according to the step count, step length and heading angle data; a schematic diagram of deriving the PDR trajectory from the video pedestrian trajectory when k = 3 is shown in FIG. 4;
s323, extracting the features of the PDR track and the video pedestrian track obtained in the steps S321 and S322, wherein the extracted features comprise the distance between a PDR track sampling point and a video pedestrian track sampling point, a cosine value of the motion direction and a time difference of the sampling point, and the specific process comprises the following steps:
The distance is measured with the Euclidean distance. Let the coordinates of PDR track sampling point b be (x_b, y_b) and the coordinates of video pedestrian track sampling point B be (x_B, y_B); the sampling-point distance d_{b,B} is calculated as:

$$ d_{b,B} = \sqrt{(x_B - x_b)^2 + (y_B - y_b)^2} $$

The smaller d_{b,B} is, the more similar points b and B are. The motion direction of each PDR sampling point is obtained with the heading angle estimation algorithm, the motion direction of each sampling point of the video pedestrian track is calculated with the arctangent function of step S314 from the coordinates of two consecutive sampling points, and the motion-direction cosine value φ_{b,B} is calculated as:

$$ \phi_{b,B} = \cos(\varphi_b - \varphi_B) $$

where φ_b is the motion direction of PDR track sampling point b and φ_B is the motion direction of video pedestrian track sampling point B; the closer φ_{b,B} is to 1, the more similar the motion directions of points b and B are;

The sampling-point time difference Δt_{b,B} is calculated as:

$$ \Delta t_{b,B} = |t_b - t_B| $$

where t_b is the sampling time of PDR track sampling point b and t_B is the sampling time of video pedestrian track sampling point B; the smaller Δt_{b,B} is, the more similar points b and B are;

The three extracted features are normalized, and the similarity Θ(b, B) between a single PDR sampling point and a single video pedestrian track sampling point is calculated by weighting the three normalized features with weights α, β and γ, where α + β + γ = 1;
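A sketch of the per-sample similarity used to fill the DTW grid is given below; the normalizations (distance and time difference squashed with 1/(1+x), cosine rescaled to [0,1]) are assumptions chosen only so that all three terms lie in [0,1] before weighting, since the patent shows its exact formula only as an image.

```python
import math

def point_similarity(pdr_pt, vid_pt, alpha=0.4, beta=0.3, gamma=0.3):
    """Similarity of one PDR sample and one video sample.

    Each point is (x, y, direction_rad, t_seconds), and alpha + beta + gamma = 1.
    The normalizations below are illustrative assumptions, not the patent's formulas.
    """
    xb, yb, dirb, tb = pdr_pt
    xB, yB, dirB, tB = vid_pt
    d = math.hypot(xB - xb, yB - yb)                  # sampling-point distance
    cos_dir = math.cos(dirb - dirB)                   # cosine of the direction difference
    dt = abs(tb - tB)                                 # sampling-time difference
    s_d = 1.0 / (1.0 + d)                             # closer -> nearer 1
    s_dir = (1.0 + cos_dir) / 2.0                     # aligned directions -> nearer 1
    s_t = 1.0 / (1.0 + dt)                            # closer in time -> nearer 1
    return alpha * s_d + beta * s_dir + gamma * s_t

print(point_similarity((0.0, 0.0, 0.0, 10.0), (0.1, 0.05, 0.05, 10.1)))
```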
s324, calculating the similarity between the PDR track and the video pedestrian track by adopting a DTW algorithm, wherein the specific process is as follows:
In the multi-step global optimization matching, the PDR track and the video pedestrian track each contain multiple sampling points, but the two tracks have different numbers of sampling points; the similarity between individual sampling points is computed as in step S323. The similarity of the whole tracks is calculated with the DTW algorithm. Let the PDR track be U = {u_1, u_2, u_3, …, u_n}, where n is the number of sampling points of the PDR track, and the video pedestrian track be V = {v_1, v_2, v_3, …, v_m}, where m is the number of sampling points of the video pedestrian track. A matrix grid Θ of size n × m is constructed, in which the grid element Θ(r, c) is the similarity, computed as in step S323, between the r-th sampling point of the PDR track and the c-th sampling point of the video pedestrian track. An optimal path H = {h_1, h_2, h_3, …, h_g} in the matrix grid Θ is obtained with a dynamic programming algorithm, where max(n, m) ≤ g < (n + m − 1), and the accumulated average similarity along the optimal path is taken as the similarity of the PDR track and the video pedestrian track.
The optimal path needs to satisfy three constraints:
1) boundary property: that is, the starting point of the optimal path must be Θ (1,1), and the end point must be Θ (n, m);
2) continuity: the points in the optimal path must be adjacent in the grid matrix Θ;
3) monotonicity: for a point h_{g-1} = (i, j) in the path, the next point h_g = (i′, j′) must satisfy i′ − i ≥ 0 and j′ − j ≥ 0.
The cumulative similarity of the kth point in the optimal path is calculated as follows:
$$ \Omega_k(r_k, c_k) = \Theta(r_k, c_k) + \min\{\Omega_{k-1}(r_{k-1}, c_k),\ \Omega_{k-1}(r_k, c_{k-1}),\ \Omega_{k-1}(r_{k-1}, c_{k-1})\} $$

where Ω_1(r_1, c_1) = Θ(1, 1), k denotes the k-th position in the optimal path, and r_k and c_k are the row index and column index in the matrix grid of the k-th position of the optimal path. The cumulative similarity Ω_g(r_g, c_g) of the last position point g of the optimal path accumulates the similarities of all previous position points h_1, h_2, h_3, …, h_{g-1}; the similarity Φ(U, V) of the PDR track U and the video pedestrian track V is calculated as:

$$ \Phi(U, V) = \frac{\Omega_g(r_g, c_g)}{g} $$
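The accumulated-similarity recursion can be implemented directly over a precomputed similarity grid; the sketch below fills the DP table with the recursion exactly as given in the text (a min over the three predecessors), backtracks the path, and returns the path-averaged similarity, assuming the grid Θ has already been filled with per-sample similarities such as those sketched above.

```python
import numpy as np

def dtw_similarity(theta):
    """Accumulated path similarity over a precomputed n x m per-sample similarity grid.

    Follows the recursion given in the text (min over the three predecessors) and
    returns the path-averaged similarity Phi = Omega_g / g together with the path.
    """
    n, m = theta.shape
    omega = np.empty((n, m))
    omega[0, 0] = theta[0, 0]
    for r in range(n):
        for c in range(m):
            if r == 0 and c == 0:
                continue
            prev = []
            if r > 0:
                prev.append(omega[r - 1, c])
            if c > 0:
                prev.append(omega[r, c - 1])
            if r > 0 and c > 0:
                prev.append(omega[r - 1, c - 1])
            omega[r, c] = theta[r, c] + min(prev)

    # Backtrack the optimal path from (n-1, m-1) to (0, 0).
    path, r, c = [(n - 1, m - 1)], n - 1, m - 1
    while (r, c) != (0, 0):
        candidates = []
        if r > 0:
            candidates.append(((r - 1, c), omega[r - 1, c]))
        if c > 0:
            candidates.append(((r, c - 1), omega[r, c - 1]))
        if r > 0 and c > 0:
            candidates.append(((r - 1, c - 1), omega[r - 1, c - 1]))
        (r, c), _ = min(candidates, key=lambda t: t[1])
        path.append((r, c))
    path.reverse()
    return omega[n - 1, m - 1] / len(path), path

theta = np.array([[0.9, 0.2, 0.1],
                  [0.3, 0.8, 0.2],
                  [0.1, 0.4, 0.9]])
print(dtw_similarity(theta))
```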
and S325, according to the similarity between the PDR track U and the video pedestrian track V obtained in the step S324, matching the PDR track and the video pedestrian track by adopting a Hungarian algorithm, wherein the calculation method is similar to the step S315, and is only solved according to the similarity matrix obtained in the step S324 when the cost matrix is solved.
Under the condition that the similarity threshold value is not exceeded, the video track allocated in Hungarian in the step S325 is used as the track matched with the pedestrian at the current moment, the matching error possibly occurring in the single-step matching result in the step S315 is corrected by using historical data, and the matching precision of the system is improved.
And S4, determining the current position and the tracking track of the user according to the matching result of the step S3.
Referring to fig. 5, in the embodiment of the present invention, step S4 includes the following sub-steps:
S41, determining the video pedestrian track W_c matched with each PDR track according to the PDR track and video pedestrian track matching results of steps S315 and S325;
S42, judging whether the similarity between the PDR track and the video pedestrian track W_c exceeds a preset similarity threshold; if yes, executing step S43, otherwise it is considered that the PDR track currently has no matching video track and step S49 is executed;
S43, judging whether the current matching is the first matching of the PDR track; if yes, executing step S48, otherwise executing step S44;
S44, judging whether the serial number of the video pedestrian track W_c is the same as that of the video pedestrian track W_p matched last time; if yes, executing step S48, otherwise executing step S45;
S45, judging whether the similarity and the distance between the PDR track and the video pedestrian track W_p at the current moment are both within their thresholds; if yes, executing step S410, otherwise executing step S46;
S46, judging whether the similarity or the distance between the PDR track and the video pedestrian track W_c is within its threshold; if yes, executing step S47, otherwise it is considered that the PDR track currently has no matching video track and step S49 is executed;
S47, judging whether the PDR track has been matched with the video pedestrian track W_p continuously for the preset number of times N (for example, N = 5); if yes, the PDR track is considered not to have a matching video track currently and step S49 is executed, otherwise step S48 is executed;
S48, determining the current position of the user according to the current video pedestrian track W_c, and returning to step S41;
S49, determining the current position of the user according to the video pedestrian track W_p from the last matching, and returning to step S41;
and S410, determining the current position of the user according to the PDR track, and returning to the step S41.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention and are to be construed as being without limitation to such specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.

Claims (8)

1. An online fusion positioning method based on surveillance videos and PDRs is characterized by comprising the following steps:
s1, collecting the data of the mobile phone motion sensor of the user, and obtaining the PDR track of the user according to the PDR algorithm;
s2, collecting video stream data of a real-time monitoring camera, and obtaining a video pedestrian track by adopting a deep learning pedestrian detection framework and a feature weighting algorithm;
s3, matching the PDR track and the video pedestrian track by using a dual track matching algorithm, and comprising the following steps:
s31, performing real-time matching of the PDR track and the video pedestrian track by adopting a single-step fast matching algorithm, and comprising the following steps:
s311, according to the step length of the single step of the user, the course and the system time of the mobile phone, time synchronization is carried out through a processing server to obtain a video pedestrian track corresponding to the single step;
s312, fitting the video pedestrian track obtained in the step S311 by adopting a least square method to obtain a fitted track of the video pedestrian track;
s313, extracting to obtain the pedestrian track characteristics of the video based on the fitting track obtained in the step S312, and obtaining the PDR track characteristics based on the user step length and the course angle calculated in the step S12;
S314, based on the video pedestrian track features extracted in step S313, calculating the total similarity M_{i,j} of the i-th PDR track segment and the j-th video track segment with a feature weighting algorithm, sequentially calculating the total similarity of each PDR track segment with all video track segments, and determining a similarity matrix M;
s315, matching the PDR track and the video pedestrian track by adopting a Hungarian algorithm, wherein the specific contents are as follows:
firstly, calculating a cost matrix C of the Hungarian algorithm based on the similarity matrix M of the step S314, wherein the calculation formula is as follows:
C=I-M
and performing Hungarian algorithm matching according to the cost matrix to obtain a result matrix R, wherein if an element R_{i,j} of R is 1, the i-th PDR track segment is considered to be matched with the j-th video track segment;
s32, matching the PDR track and the video pedestrian track by adopting a multi-step global matching algorithm;
and S4, determining the current position and the tracking track of the user according to the matching result of the step S3.
2. The online fusion positioning method based on surveillance video and PDR as claimed in claim 1, wherein said step S1 comprises the following sub-steps:
s11, collecting the data of the motion sensors of the accelerometer, the magnetometer and the gyroscope which are arranged in the mobile phone of the user, processing the data of the sensors by adopting a filtering algorithm, and converting the coordinate system data of the mobile phone carrier to a plane coordinate system by utilizing a coordinate system conversion algorithm;
s12, based on the mobile phone sensor data collected in the step S11, the step number, the step length corresponding to each step and the course angle in the user motion process are obtained through a step number detection algorithm, a step length estimation algorithm and a course angle estimation algorithm, and the PDR track of the user is obtained through a PDR algorithm according to the initial position point.
3. The online fusion positioning method based on surveillance video and PDR as claimed in claim 2, wherein said step S2 comprises the following sub-steps:
s21, collecting video stream data of the real-time monitoring camera;
s22, processing the video stream obtained in the step S21 into images according to frames, processing the images by using a deep learning pedestrian detection frame to obtain all pedestrian detection results in each frame of image, and taking all pedestrian detection frame pixel coordinates of the first frame of video stream as an initial video pedestrian track;
s23, extracting features of each frame of video pedestrian detection result obtained in the step S22, calculating the similarity between the current video image frame detection result and the initial video pedestrian track by using a feature weighting algorithm, and associating the current video image frame pedestrian detection result with the initial video pedestrian track for updating to obtain a new video pedestrian track;
and S24, converting the video pedestrian track obtained in the step S23 from a pixel coordinate system to a plane coordinate system by adopting a DLT plane space correction method with a calibration point.
4. The online fusion positioning method based on surveillance video and PDR as claimed in claim 1, wherein said step S312 specifically comprises:
Based on the L data samples of the video pedestrian trajectory segment W_AB, where W_AB is the single-step video pedestrian trajectory segment with starting point A and end point B, the data are fitted with the following fitting function h_θ(x):

$$ h_\theta(x) = \theta_0 x + \theta_1 $$

where θ_0 and θ_1 are the fitting parameters; the abscissas of the starting point A and the end point B are input into the fitting function to obtain the corresponding ordinates, giving the starting point coordinates (x_A, y_A) and the end point coordinates (x_B, y_B).
5. The monitoring video and PDR-based online fusion positioning method according to claim 4, wherein the step S313 specifically comprises:
Calculating the length L_{A,B} of the video pedestrian trajectory segment W_AB:

$$ L_{A,B} = \sqrt{(x_B - x_A)^2 + (y_B - y_A)^2} $$

calculating the motion direction φ_{A,B} of the video pedestrian trajectory segment W_AB with the arctangent function:

$$ \varphi_{A,B} = \arctan\frac{y_B - y_A}{x_B - x_A} $$

and then taking the single-step step length obtained in step S12 as the length L_{a,b} of the single-step PDR trajectory segment R_ab, and the single-step heading angle as the motion direction φ_{a,b} of the PDR trajectory segment R_ab.
6. The online fusion positioning method based on surveillance video and PDR as claimed in claim 5, wherein said step S314 specifically comprises:
Computing the similarity between the length L_{a,b} of the single-step PDR trajectory segment R_ab and the length L_{A,B} of the video pedestrian trajectory segment W_AB, normalized to the range [0,1]; then computing the similarity between the motion direction φ_{a,b} of the single-step PDR trajectory segment R_ab and the motion direction φ_{A,B} of the video pedestrian trajectory segment W_AB, likewise normalized to [0,1]; the total similarity M_{i,j} of the i-th PDR track segment and the j-th video track segment is then calculated by weighting the similarities of the two features, track length and motion direction, with weights ω_1 and ω_2, where ω_1 + ω_2 = 1 and τ_χ is a preset threshold; the total similarity of each PDR track segment with all video track segments is calculated in turn, and the similarity matrix M is determined.
7. The online fusion positioning method based on surveillance video and PDR as claimed in claim 6, wherein said step S32 comprises the following sub-steps:
s321, obtaining a multi-step video pedestrian track by adopting a time synchronization mode according to the multi-step length and course angle data of the user;
s322, taking the starting point of the video pedestrian track obtained in the step S321 as the starting position point of the PDR track, and obtaining the PDR track according to the step number, the step length and the course angle data;
s323, extracting the characteristics of the PDR track and the video pedestrian track, including sampling point distance, motion direction cosine value and sampling point time difference, the concrete process is as follows:
let the coordinates of PDR trajectory sampling point b be (x_b, y_b) and the coordinates of video pedestrian trajectory sampling point B be (x_B, y_B); the sampling-point distance d_{b,B} is calculated as:
d_{b,B} = √((x_b − x_B)² + (y_b − y_B)²)
the movement direction of each PDR sampling point is obtained with the heading-angle estimation algorithm, the movement direction of each sampling point of the video pedestrian trajectory is calculated with the arctangent function of step S314 from the coordinates of consecutive preceding and following sampling points, and the movement-direction cosine value φ_{b,B} is calculated as:
φ_{b,B} = cos(θ_b − θ_B)
where θ_b is the movement direction of PDR trajectory sampling point b and θ_B is the movement direction of video pedestrian trajectory sampling point B;
the sampling-point time difference Δt_{b,B} is calculated as:
Δt_{b,B} = |t_b − t_B|
where t_b is the sampling time of PDR trajectory sampling point b and t_B is the sampling time of video pedestrian trajectory sampling point B;
the three extracted features are normalized, and the similarity between a single PDR sampling point and a single video pedestrian trajectory sampling point is calculated by feature weighting, with weights α, β, γ satisfying α + β + γ = 1; the calculation formula is as shown in image FDA0003188299210000054;
S324, calculating the similarity between the PDR trajectory and the video pedestrian trajectory with a DTW algorithm, by the following procedure:
let the PDR trajectory be U = {u_1, u_2, u_3, …, u_n}, where n is the number of PDR trajectory sampling points, and let the video pedestrian trajectory be V = {v_1, v_2, v_3, …, v_m}, where m is the number of video pedestrian trajectory sampling points; construct a matrix network Θ of size n × m in which the grid element Θ(r, c) is the similarity, obtained in step S323, between the r-th sampling point of the PDR trajectory and the c-th sampling point of the video pedestrian trajectory; an optimal path H = {h_1, h_2, h_3, …, h_g}, where max(n, m) ≤ g < n + m − 1, is obtained in the matrix network Θ with a dynamic programming algorithm, and the cumulative similarity of the k-th point in the optimal path is calculated as:
Ω_k(r_k, c_k) = Θ(r_k, c_k) + min{Ω_{k−1}(r_{k−1}, c_k), Ω_{k−1}(r_k, c_{k−1}), Ω_{k−1}(r_{k−1}, c_{k−1})}
where Ω_1(r_1, c_1) = Θ(0, 0), k denotes the k-th position in the optimal path, and r_k and c_k denote the row index and column index in the matrix network of the k-th position of the optimal path; the cumulative similarity Ω_g(r_g, c_g) of the last position point of the optimal path accumulates the similarities of all preceding position points h_1, h_2, h_3, …, h_{g−1}; the similarity Φ(U, V) between the PDR trajectory U and the video pedestrian trajectory V is then calculated as shown in image FDA0003188299210000061;
S325, matching the PDR trajectories with the video pedestrian trajectories using the Hungarian algorithm, according to the similarities Φ(U, V) obtained in step S324.
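A condensed sketch of steps S323–S325 follows, covering the per-sample weighted similarity, the DTW-style accumulation of this claim, and a Hungarian assignment via SciPy. The normalisation caps d_max and dt_max, the division of the accumulated value by the path length, and the negation of similarities to fit SciPy's cost-minimising solver are all assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def point_similarity(d, cos_phi, dt, alpha=0.4, beta=0.4, gamma=0.2,
                     d_max=5.0, dt_max=2.0):
    """Weighted similarity of one PDR sample and one video sample (S323 sketch).

    Distance and time difference are normalised by assumed caps d_max and
    dt_max; the direction cosine is mapped from [-1, 1] to [0, 1].
    alpha + beta + gamma = 1 as stated in the claim.
    """
    sim_d = 1.0 - min(d, d_max) / d_max
    sim_phi = (cos_phi + 1.0) / 2.0
    sim_t = 1.0 - min(dt, dt_max) / dt_max
    return alpha * sim_d + beta * sim_phi + gamma * sim_t

def dtw_accumulate(theta):
    """Accumulate the point-similarity grid theta (n x m) along a warping path.

    Implements the recursion Omega_k = theta(r, c) + min(up, left, diagonal)
    of step S324; dividing the final accumulated value by the path length is
    an assumed choice for the trajectory-level similarity Phi(U, V).
    """
    n, m = theta.shape
    omega = np.zeros((n, m))
    steps = np.zeros((n, m), dtype=int)
    omega[0, 0], steps[0, 0] = theta[0, 0], 1
    for r in range(n):
        for c in range(m):
            if r == 0 and c == 0:
                continue
            prev = []
            if r > 0:
                prev.append((omega[r - 1, c], steps[r - 1, c]))
            if c > 0:
                prev.append((omega[r, c - 1], steps[r, c - 1]))
            if r > 0 and c > 0:
                prev.append((omega[r - 1, c - 1], steps[r - 1, c - 1]))
            best, best_steps = min(prev, key=lambda x: x[0])
            omega[r, c] = theta[r, c] + best
            steps[r, c] = best_steps + 1
    return omega[n - 1, m - 1] / steps[n - 1, m - 1]

def match_trajectories(phi):
    """Hungarian assignment over a PDR-vs-video similarity matrix phi (S325 sketch).

    linear_sum_assignment minimises total cost, so similarities are negated to
    obtain a maximum-similarity one-to-one matching.
    """
    rows, cols = linear_sum_assignment(-np.asarray(phi))
    return list(zip(rows, cols))
```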
8. The online fusion positioning method based on surveillance video and PDR according to claim 7, wherein step S4 comprises the following sub-steps:
S41, determining the video pedestrian trajectory W_c matched with each PDR trajectory according to the PDR-trajectory and video-pedestrian-trajectory matching results of steps S315 and S325;
S42, judging whether the similarity between the PDR trajectory and the video pedestrian trajectory W_c exceeds a preset threshold; if so, executing step S43, otherwise executing step S49;
S43, judging whether the current match is the first match of the PDR trajectory; if so, executing step S48, otherwise executing step S44;
S44, judging whether the number of the video pedestrian trajectory W_c is the same as the number of the previously matched video pedestrian trajectory W_p; if so, executing step S48, otherwise executing step S45;
S45, judging whether both the similarity and the distance between the PDR trajectory and the video pedestrian trajectory W_p at the current moment are within their thresholds; if so, executing step S410, otherwise executing step S46;
S46, judging whether the similarity or the distance between the PDR trajectory and the video pedestrian trajectory W_c is within its threshold; if so, executing step S47, otherwise executing step S49;
S47, judging whether the PDR trajectory has been continuously matched with the video pedestrian trajectory W_p the preset number of times N; if so, executing step S49, otherwise executing step S48;
S48, determining the current position of the user from the current video pedestrian trajectory W_c, and returning to step S41;
S49, determining the current position of the user from the previously matched video pedestrian trajectory W_p, and returning to step S41;
S410, determining the current position of the user from the PDR trajectory, and returning to step S41.
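Finally, a sketch of the decision flow of steps S41–S410 is given below; it mirrors the branch structure of claim 8, while the argument names, the use of separate thresholds for the initial match test and the later plausibility tests, and the boolean flags passed in are assumptions about how an implementation might expose that state.

```python
def choose_position_source(sim_c, dist_c, sim_p, dist_p,
                           is_first_match, same_track_as_previous,
                           consecutive_matches_with_previous,
                           match_threshold, sim_threshold, dist_threshold,
                           n_preset):
    """Return which source fixes the user position: 'current_video' (S48),
    'previous_video' (S49) or 'pdr' (S410), following the branches of claim 8."""
    # S42: the current best video match W_c must be similar enough.
    if sim_c <= match_threshold:
        return 'previous_video'                       # S49
    # S43: the first match is accepted directly.
    if is_first_match:
        return 'current_video'                        # S48
    # S44: the same track number as in the previous match is accepted directly.
    if same_track_as_previous:
        return 'current_video'                        # S48
    # S45: if the previously matched track W_p still agrees in both similarity
    # and distance, fall back to the PDR track.
    if sim_p >= sim_threshold and dist_p <= dist_threshold:
        return 'pdr'                                  # S410
    # S46: the new track W_c must be plausible in similarity or distance.
    if not (sim_c >= sim_threshold or dist_c <= dist_threshold):
        return 'previous_video'                       # S49
    # S47: if W_p has already been matched N consecutive times, keep it;
    # otherwise switch to W_c.
    if consecutive_matches_with_previous >= n_preset:
        return 'previous_video'                       # S49
    return 'current_video'                            # S48
```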
CN202110290805.4A 2021-03-18 2021-03-18 Online fusion positioning method based on surveillance video and PDR Active CN113114850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110290805.4A CN113114850B (en) 2021-03-18 2021-03-18 Online fusion positioning method based on surveillance video and PDR

Publications (2)

Publication Number Publication Date
CN113114850A CN113114850A (en) 2021-07-13
CN113114850B 2021-09-21

Family

ID=76711851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110290805.4A Active CN113114850B (en) 2021-03-18 2021-03-18 Online fusion positioning method based on surveillance video and PDR

Country Status (1)

Country Link
CN (1) CN113114850B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108444473A (en) * 2018-03-20 2018-08-24 南京华苏科技有限公司 Track localization method in a kind of pedestrian room
CN109743680A (en) * 2019-02-28 2019-05-10 电子科技大学 A kind of indoor tuning on-line method based on PDR combination hidden Markov model
CN109934127A (en) * 2019-02-27 2019-06-25 电子科技大学 Pedestrian's recognition and tracking method based on video image and wireless signal
CN109977823A (en) * 2019-03-15 2019-07-05 百度在线网络技术(北京)有限公司 Pedestrian's recognition and tracking method, apparatus, computer equipment and storage medium
CN110487273A (en) * 2019-07-15 2019-11-22 电子科技大学 A kind of indoor pedestrian track projectional technique of level meter auxiliary
CN111784746A (en) * 2020-08-10 2020-10-16 上海高重信息科技有限公司 Multi-target pedestrian tracking method and device under fisheye lens and computer system

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5951367B2 (en) * 2012-01-17 2016-07-13 シャープ株式会社 Imaging apparatus, captured image processing system, program, and recording medium
US9506761B2 (en) * 2014-01-10 2016-11-29 Alcatel Lucent Method and apparatus for indoor position tagging
CN106023253B (en) * 2016-05-18 2019-02-12 杭州智诚惠通科技有限公司 A kind of method of urban target trajectory track
CN105872477B (en) * 2016-05-27 2018-11-23 北京旷视科技有限公司 video monitoring method and video monitoring system
KR101794967B1 (en) * 2016-11-11 2017-11-07 광운대학교 산학협력단 Hybrid tracking system and method for indoor moving object
JP7001067B2 (en) * 2017-01-23 2022-01-19 ソニーグループ株式会社 Information processing equipment, information processing methods and computer programs
CN107635204B (en) * 2017-09-27 2020-07-28 深圳大学 Indoor fusion positioning method and device assisted by exercise behaviors and storage medium
CN109684916B (en) * 2018-11-13 2020-01-07 恒睿(重庆)人工智能技术研究院有限公司 Method, system, equipment and storage medium for detecting data abnormity based on path track
CN111862145B (en) * 2019-04-24 2022-05-17 四川大学 Target tracking method based on multi-scale pedestrian detection
CN111553234B (en) * 2020-04-22 2023-06-06 上海锘科智能科技有限公司 Pedestrian tracking method and device integrating facial features and Re-ID feature ordering
CN112037245B (en) * 2020-07-22 2023-09-01 杭州海康威视数字技术股份有限公司 Method and system for determining similarity of tracked targets

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Usability of MTi IMU sensor data in PDR indoor positioning; H. Guo, M. Uradzinski; ICINS; 2018-05-30; pp. 1-4 *
Research on Bluetooth/PDR/map fusion positioning algorithms based on EKF/PF; Liu Wen; Li Jing; Deng Zhongliang; Fu Xiao; Wang Hanhua; Yao Zhe; The 9th China Satellite Navigation Conference; 2018-05-23; pp. 1-5 *
Research on indoor positioning algorithms based on inertial sensors and WiFi; Wang Zheng; CNKI; 2018-12-31; Chapter 4 *


Similar Documents

Publication Publication Date Title
CN110856112B (en) Crowd-sourcing perception multi-source information fusion indoor positioning method and system
CN112567201B (en) Distance measuring method and device
CN107635204B (en) Indoor fusion positioning method and device assisted by exercise behaviors and storage medium
CN112014857A (en) Three-dimensional laser radar positioning and navigation method for intelligent inspection and inspection robot
CN109934127B (en) Pedestrian identification and tracking method based on video image and wireless signal
WO2018068771A1 (en) Target tracking method and system, electronic device, and computer storage medium
EP2495632A1 (en) Map generating and updating method for mobile robot position recognition
CN109141395B (en) Sweeper positioning method and device based on visual loopback calibration gyroscope
CN104180805A (en) Smart phone-based indoor pedestrian positioning and tracking method
CN110553648A (en) method and system for indoor navigation
CN109743680B (en) indoor on-line positioning method based on PDR combined with hidden Markov model
CN104023228A (en) Self-adaptive indoor vision positioning method based on global motion estimation
CN113916243A (en) Vehicle positioning method, device, equipment and storage medium for target scene area
CN111595344B (en) Multi-posture downlink pedestrian dead reckoning method based on map information assistance
WO2019047637A1 (en) Localization method and apparatus, mobile terminal and computer-readable storage medium
CN111277946A (en) Fingerprint database self-adaptive updating method in Bluetooth indoor positioning system
CN112037257B (en) Target tracking method, terminal and computer readable storage medium thereof
CN112529962A (en) Indoor space key positioning technical method based on visual algorithm
US10677881B2 (en) Map assisted inertial navigation
CN112539747B (en) Pedestrian dead reckoning method and system based on inertial sensor and radar
CN113114850B (en) Online fusion positioning method based on surveillance video and PDR
CN115235455B (en) Pedestrian positioning method based on smart phone PDR and vision correction
CN109000634B (en) Navigation object traveling route reminding method and system
CN113554705B (en) Laser radar robust positioning method under changing scene
CN113627497B (en) Space-time constraint-based cross-camera pedestrian track matching method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant