CN111242972A - Online cross-scale multi-fluid target matching and tracking method

Info

Publication number: CN111242972A
Application number: CN201911336384.3A
Authority: CN (China)
Prior art keywords: target, matching, window, bipartite graph, fluid
Legal status: Granted; active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN111242972B
Inventors: 刘佑达 (Liu Youda), 陈建军 (Chen Jianjun), 张扬 (Zhang Yang)
Assignee (original and current): CETC 14 Research Institute
Priority date: 2019-12-23
Filing date: 2019-12-23
Publication date of CN111242972A: 2020-06-05
Grant date of CN111242972B: 2023-05-16

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00 Image analysis
                    • G06T 7/20 Analysis of motion
                        • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
                        • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/20 Special algorithmic details
                        • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform

Abstract

The invention relates to an online cross-scale multi-fluid target matching and tracking method. A sliding window is used to extract associated objects, specific features on the associated-object images are matched and tracked, and fluid target features of different scales are extracted; basic feature parameters are selected for feature extraction of the fluid target features at each scale; a composite association weight is calculated from the feature information of two consecutive frames, spatial and temporal association information between different individual targets is extracted through the composite association weight, and a weighted bipartite graph over the two adjacent frames is constructed from this association information; online target matching is performed with a sparse weighted bipartite graph matching algorithm, and a sparse weighted bipartite graph is constructed from the matched targets; finally, based on the constructed sparse weighted bipartite graph, target tracking and track prediction are performed with a Kalman filtering algorithm. The invention effectively improves multi-fluid target matching and tracking accuracy, and reduces the running time of the method to the second level, meeting the requirements of an online algorithm.

Description

Online cross-scale multi-fluid target matching and tracking method
Technical Field
The invention relates to an online cross-scale multi-fluid target matching and tracking method.
Background
At present, multi-target tracking technology consists of four steps: target detection, association weight calculation, target matching and target track tracking. The multi-target tracking problem has produced a large body of research results, but tracking targets such as airplanes, pedestrians and vehicles are modeled as point targets, rigid-body targets, or deformable targets composed of multiple rigid bodies. Fluid targets differ greatly from these: the shape and intensity distribution of a fluid target vary continuously, and the scales of different targets vary widely.
Existing classical target detection methods require identifying individual target monomers. However, the form of a fluid target changes constantly during motion, and separation and fusion must be judged: a single target may split into several targets during motion, or several targets may fuse into one. Fusion between different monomers, and the separation of a large fluid target into small targets, cause drastic changes in important features such as target size, form, intensity and key points, so the form, intensity and position of fluid targets extracted at different times differ greatly; selecting association features and matching targets is therefore difficult, making matching and tracking hard. In addition, the scale span of different fluid targets is large, and the smallest fluid target may occupy only a few pixels in the observed image. Such targets are difficult to distinguish effectively from observation noise, and often cause jumping and loss of detection results between frames. Therefore, the detection and matching methods of existing online multi-target tracking are not suitable for fluid target tracking.
Disclosure of Invention
The invention aims to provide an online cross-scale multi-fluid target matching and tracking method that addresses the problems in the prior art: it effectively improves the matching and tracking accuracy of multiple fluid targets, and solves the problem that targets are difficult to match over long time spans due to fluid separation and fusion.
The purpose of the invention is achieved by the following technical scheme:
the invention provides an online cross-scale multi-fluid target matching and tracking method, which comprises the following steps:
step S101: extracting associated objects with a sliding window, matching and tracking specific features on the associated-object images, and extracting fluid target features of different scales;
step S102: selecting basic feature parameters for feature extraction of the fluid target features at each scale;
step S103: calculating a composite association weight from the feature information of two consecutive frames, extracting spatial and temporal association information between different individual targets through the composite association weight, and constructing a weighted bipartite graph over the two adjacent frames from this association information;
step S104: performing online target matching with a sparse weighted bipartite graph matching algorithm, removing isolated nodes during target matching, and constructing a sparse weighted bipartite graph from the matched targets;
step S105: based on the constructed sparse weighted bipartite graph, performing target tracking and track prediction with a Kalman filtering algorithm.
More preferably, the composite association weight is a weighted sum of the morphological correlation parameter, the motion correlation parameter and the motion smoothness correlation parameter, with constant weight coefficients.
More preferably, the morphological correlation parameter is a weighted sum of the shape feature and the value-distribution feature, with constant weight coefficients;
the shape feature vector comprises the length and width of the rectangular window, the target proportion within the window and the centroid position of the target in the window, and the cosine distance is used to calculate the weight;
the value-distribution feature vector comprises the length and width of the rectangular window and the image intensity distribution within the window, and the Gaussian distance is used to calculate the weight.
More preferably, the motion correlation parameter is a motion direction constraint: the situation fields of two consecutive frames are calculated by an optical flow method, and the average motion field within each rectangular window is computed. Several feature points of the rectangular window may be selected, including corner points, centroids and centers, and the Gaussian distance is calculated as the motion parameter.
More preferably, the motion smoothness correlation parameter is described by the estimated velocity of the situation field.
More preferably, the process of extracting associated objects with a sliding window in step S101 includes:
sliding a window over the image pyramid; the multi-scale sliding window is realized by scaling the image to different scales while the rectangular window size is kept unchanged;
sampling sliding-window positions on uniform grid points at each scale, the grid positions ensuring that adjacent sliding windows overlap by half their area;
placing several standard rectangular sliding windows at each grid point, with specifications transitioning gradually from a short flat rectangle to a tall narrow rectangle; the number of specifications is determined by the rectangular window size and the total number of sliding windows.
More preferably, the process of performing online target matching with a sparse weighted bipartite graph matching algorithm in step S104 includes:
determining targets that cannot be associated according to the physical motion limits of the nodes, removing the corresponding edges, excluding isolated nodes, and constructing a sparse weighted bipartite graph;
performing online target matching on the sparse weighted bipartite graph with a sparse Kuhn-Munkres matching algorithm.
It can be seen from the technical scheme that the invention has the following technical effects:
1. Exploiting the invariance of local fluid features during separation and fusion, the method extracts local features through a dense sliding window, avoiding the need to judge separation and fusion events, and matches and tracks these local features; for fluid features of different scales, a multi-layer image pyramid collects targets at every scale. The multi-fluid target matching and tracking accuracy can thus be effectively improved by exploiting the spatio-temporal continuity of the overall features of multiple fluid targets, following from the fact that fluid morphological features change continuously while the fluid is driven by the external situation field.
2. The computational complexity is low, and online tracking can be realized. The computational bottleneck of the method is the long matching time over a large number of rectangular windows; with the sparse Kuhn-Munkres algorithm, the running time is reduced to the second level, meeting the requirements of an online algorithm.
3. The invention is versatile across a variety of fluid targets. As long as the observation frequency is higher than the rate of change of the fluid form, the online matching and tracking method can be used, independent of the specific characteristics of the target.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a block diagram of a cross-scale multi-fluid target matching tracking method according to the present invention;
FIG. 3 is a flow chart of target feature extraction in the present invention.
Detailed Description
The technical solution of the present invention will be further described in detail with reference to the accompanying drawings.
Embodiment 1
The invention provides an online cross-scale multi-fluid target matching and tracking method which, as shown in FIG. 1, comprises the following steps:
and S101, extracting the associated object by adopting a sliding window, matching and tracking the specific characteristics on the image of the associated object, and extracting the fluid target characteristics with different scales.
Performing sliding window on the image pyramid; the multi-scale sliding window is realized by scaling the image by different scales, and the size of the rectangular window is kept unchanged. And carrying out uniform grid point sampling on the sliding window position on each scale, wherein the grid point position ensures that adjacent sliding windows overlap a half area. A plurality of standard rectangular sliding windows are arranged on each lattice point, and the specification is gradually transited from a short flat rectangle to a high narrow rectangle; the specification number is determined by the size of the rectangular window and the total number of the sliding windows.
The method adopts the sliding window to detect the target, avoids the characteristic of the existing tracking method of identifying the target individual, and converts the characteristic into the matching tracking of the specific characteristics on the image. And selecting a target existence characteristic area on the observation image, roughly acquiring the target existence area by a simple threshold value method, an elimination method and the like, and performing sliding window sampling in the area. To ensure that the extracted target features are not missed or lost, adjacent rectangular windows should overlap 1/2 areas. And sampling positions of the sliding window by adopting lattice points. If the dimension of the observed image is L and the window size is h, the starting position of the lattice point is (h/2 ), and then the lattice point is moved at intervals of h/2 until the boundary of the observed image is reached. And performing sliding windows with different specifications on each grid point. Taking h-5 as an example, a window containing (2,8), (3,7), (4,6), (5,5), (6,4), (7,3), (8,2) captures features of different shapes.
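As a concrete illustration, the following Python sketch enumerates the grid positions and window specifications described above (a minimal sketch with hypothetical function and variable names; the patent itself prescribes no code):

    import numpy as np

    def lattice_positions(H, W, h, w):
        # Grid points spaced h/2 x w/2 so adjacent windows overlap half their area
        ys = np.arange(h // 2, H - h // 2 + 1, h // 2)
        xs = np.arange(w // 2, W - w // 2 + 1, w // 2)
        return [(y, x) for y in ys for x in xs]

    def window_specs(total=10):
        # Specifications transitioning from short-flat to tall-narrow rectangles;
        # total=10 reproduces the (2,8), (3,7), ..., (8,2) series of the h = 5 example
        return [(hh, total - hh) for hh in range(2, total - 1)]

    print(window_specs(10))                      # [(2, 8), (3, 7), ..., (8, 2)]
    print(len(lattice_positions(64, 64, 5, 5)))  # grid points on a 64 x 64 image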
For fluid target features of different scales, the sliding window samples at different scales. An N-layer image pyramid is constructed by reducing the original image several times; the sliding window size is kept unchanged, and the grid sampling process is repeated to obtain features at different scales. Typically this sampling outputs thousands or even tens of thousands of features.
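The multi-scale sampling can be sketched as follows (a minimal sketch; 2x2 block-mean downsampling is an assumed stand-in, since the patent does not specify the scaling method):

    import numpy as np

    def image_pyramid(img, s):
        # Layer 1 is the original image; layer i has height and width
        # reduced 2**(i-1) times, while the sliding window keeps its size,
        # so one window covers progressively larger physical scales.
        layers = [img]
        for _ in range(1, s):
            prev = layers[-1]
            hh, ww = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
            blocks = prev[:hh, :ww].reshape(hh // 2, 2, ww // 2, 2)
            layers.append(blocks.mean(axis=(1, 3)))
        return layers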
Step S102: select basic feature parameters for feature extraction of the fluid target features at each scale.
The invention selects basic feature parameters for feature extraction rather than using an integral image. Convolutional neural network methods are computationally expensive for extracting features over the whole image, and convolution is ill-suited to rectangular boxes of different scales. Basic feature parameters keep the feature vectors output by rectangular boxes of different sizes at the same length, which facilitates subsequent processing.
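One plausible fixed-length parameter set per window is sketched below (the exact composition is an assumption; the patent names width, height, target proportion, centroid and intensity distribution):

    import numpy as np

    def window_features(patch, mask):
        # patch: intensity values inside one rectangular window
        # mask: boolean array marking target pixels inside the window
        h, w = patch.shape
        ratio = mask.mean()                        # target proportion in the window
        if mask.any():
            ys, xs = np.nonzero(mask)
            cy, cx = ys.mean() / h, xs.mean() / w  # normalized centroid
        else:
            cy = cx = 0.0
        # Fixed length regardless of window size, easing subsequent processing
        return np.array([h, w, ratio, cy, cx, patch.mean(), patch.std()])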
Step S103: calculate a composite association weight from the feature information of two consecutive frames, extract spatial and temporal association information between different individual targets through the composite association weight, and construct a weighted bipartite graph over the two adjacent frames from this association information.
The morphological correlation parameter is composed of a shape feature and a value-distribution feature. The shape feature vector comprises the length and width of the rectangular window, the target proportion within the window and the centroid position of the target in the window; the cosine distance is used to calculate the weight. The value-distribution feature vector comprises the length and width of the rectangular window and the image intensity distribution within the window; the Gaussian distance is used to calculate the weight. The image intensity distribution may use gray values or observed intensity values for a single channel, and color distributions or multi-dimensional joint distributions for multi-channel observations, outputting fixed-length vectors for calculation. The morphological correlation parameter is the weighted sum of the shape feature and the value-distribution feature, with constant weight coefficients.
The motion correlation parameter is a motion direction constraint. The situation fields of two consecutive frames are calculated by an optical flow method, and the average motion field within each rectangular window is computed. Several feature points of the rectangular window may be selected, including corner points, centroids and centers, and the Gaussian distance is calculated as the motion parameter.
The motion smoothness correlation parameter is described by the estimated velocity of the situation field: the Gaussian distance between the motion velocity in the history record and the currently estimated velocity is calculated.
The composite association weight comprises the morphological correlation parameter, the motion correlation parameter and the motion smoothness correlation parameter. It is the weighted sum of the three parameters with constant weight coefficients, so it extracts spatial and temporal association information between different individual targets simultaneously.
Block association requires evaluating the feature association weights of all sliding windows between two adjacent frames. Denoting the feature parameter of each sliding window by f, the feature association weight between two rectangular windows is defined as follows:
A(f_i, f_j) = ω_1·A_appearance(f_i, f_j) + ω_2·A_motion(f_i, f_j) + ω_3·A_smooth(f_i, f_j)    (1)
where A_appearance is the morphological correlation parameter, A_motion the motion correlation parameter and A_smooth the motion smoothness correlation parameter, and the ω_i are the coefficients of the three parameter types, set according to the characteristics of the actual tracking target.
The morphological correlation parameter A_appearance uses shape features and value-distribution features. The shape feature vector comprises the length and width of the rectangular window, the target proportion within the window and the centroid position of the target in the window; the cosine distance is used to calculate the weight. The value-distribution feature vector comprises the length and width of the rectangular window and the image intensity distribution within the window; the Gaussian distance is used to calculate the weight. The image intensity distribution may use gray values or observed intensity values for a single channel, and color distributions or multi-dimensional joint distributions for multi-channel observations, outputting fixed-length vectors for calculation. The morphological correlation parameter is defined as:
A_appearance(f_i, f_j) = w·⟨f_i, f_j⟩/(|f_i|·|f_j|) + (1 − w)·e^(−|f_i − f_j|²/σ_a²)    (2)
where the feature sub-vectors at the corresponding positions of f are selected during calculation (the shape features in the cosine term, the value-distribution features in the Gaussian term); w is a weight coefficient; ⟨f_i, f_j⟩ denotes the inner product of the two feature vectors; |f_i| denotes the modulus of the feature vector f_i; σ_a is a Gaussian-distance weight coefficient used to adjust the distance descriptor, with an empirical value usually between 5 and 20.
The motion correlation parameter A_motion uses a motion direction constraint. The situation field (u, v) of two consecutive frames is calculated by an optical flow method, and the average motion field (u, v) within the i-th rectangular window is computed. Several feature points of the rectangular window may be selected, including corner points, centroids and centers, with coordinates denoted (x, y); the Gaussian distance is calculated as the motion parameter. The motion correlation parameter A_motion is defined as follows:
A_motion(f_i, f_j) = e^(−|(x_i + u_i, y_i + v_i) − (x_j, y_j)|²/σ_b²)    (3)
where f_i and f_j are the i-th feature vector of the previous frame and the j-th feature vector of the current frame; the feature vectors contain the optical flow field (u, v) and the center coordinates (x, y); e is the natural constant; σ_b is a Gaussian-distance weight coefficient used to adjust the distance descriptor, with an empirical value usually between 5 and 20.
The motion smoothness correlation parameter A_smooth performs matching estimation using the situation field. After matching completes in each iteration, the motion direction (u, v) of the associated window, calculated from the matching result, is recorded in f. The motion velocity of each object should transition smoothly, so the motion smoothness parameter is constructed as:
A_smooth(f_i, f_j) = e^(−|(u_i, v_i) − (u_j, v_j)|²/σ_c²)    (4)
where f_i and f_j are the i-th feature vector of the previous frame and the j-th feature vector of the current frame; the feature vectors contain the optical flow field (u, v); e is the natural constant; σ_c is a Gaussian-distance weight coefficient used to adjust the distance descriptor, with an empirical value usually between 5 and 20.
In summary, the association weights between all rectangular windows can be obtained, and a weighted bipartite graph G over the two adjacent frames can be constructed from them.
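A minimal sketch of the composite weight of equations (1)-(4) follows (the feature layout and the exact combination of cosine and Gaussian terms reflect one reading of the text above, not verbatim formulas from the patent):

    import numpy as np

    def gauss(d2, sigma):
        return np.exp(-d2 / sigma**2)

    def assoc_weight(fi, fj, omega=(1.0, 1.0, 1.0), w=0.5,
                     sigma_a=10.0, sigma_b=10.0, sigma_c=10.0):
        # Each feature f is a dict of numpy arrays: 'shape' and 'value' vectors,
        # 'pos' = window center (x, y), 'flow' = estimated motion (u, v).
        si, sj = fi['shape'], fj['shape']
        cos = si @ sj / (np.linalg.norm(si) * np.linalg.norm(sj) + 1e-12)
        a_app = w * cos + (1 - w) * gauss(np.sum((fi['value'] - fj['value'])**2),
                                          sigma_a)
        # Motion term: a window advected by its flow should land near its match
        pred = fi['pos'] + fi['flow']
        a_mot = gauss(np.sum((pred - fj['pos'])**2), sigma_b)
        # Smoothness term: velocities should transition smoothly between frames
        a_smo = gauss(np.sum((fi['flow'] - fj['flow'])**2), sigma_c)
        return omega[0] * a_app + omega[1] * a_mot + omega[2] * a_smo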
Step S104: perform online target matching with a sparse weighted bipartite graph matching algorithm, remove isolated nodes during target matching, and construct a sparse weighted bipartite graph from the matched targets.
Sparse weighted bipartite graph matching algorithm: the weighted bipartite graph is preprocessed by node screening. Targets that cannot be associated are determined according to the physical motion limits of the nodes, the corresponding edges are removed, a sparse weighted bipartite graph is constructed, and isolated nodes are excluded.
A sparse Kuhn-Munkres matching algorithm is applied to the sparse weighted bipartite graph.
Multi-target matching methods fall into two types: adjacent-frame and multi-frame. Multi-frame algorithms can match targets over several consecutive frames and avoid missed measurements, lost frames and similar problems within a single frame, but their computational complexity is high and online matching is difficult to realize. The adjacent-frame method associates the monitored targets in two adjacent frames and can be realized online. Adjacent-frame matching is a weighted bipartite graph matching problem, generally solved with the Kuhn-Munkres algorithm (KM algorithm for short). The computational complexity of the KM algorithm is O(n³), where n is the number of nodes in the bipartite graph. For target matching problems of thousands to tens of thousands of nodes, the computation speed of the KM algorithm cannot meet a real-time requirement of seconds to minutes.
The motion of a fluid target is mostly smooth and slowly varying. When the observation frequency is significantly higher than the fluid motion velocity, the fluid has a limited range of motion between consecutive frames. Therefore, an upper limit R_lim on the physical fluid motion can be determined, excluding the vast majority of connections. If the bipartite graph has n nodes per side, the number of theoretically possible connections is n² and the actual number of connections is m. If the image distance between two nodes exceeds R_lim, the connection is removed. Denoting the image size by L, when R_lim < L/3 we obtain m < n²/9 ≪ n², and the bipartite graph is sparse. A sparse weighted bipartite graph matching algorithm then effectively reduces the computational complexity.
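The R_lim gating can be sketched as follows (a hypothetical helper; window centers stand in for whatever node positions are used):

    import numpy as np

    def sparse_edges(pos_prev, pos_cur, r_lim):
        # pos_prev: (N_{k-1}, 2) and pos_cur: (N_k, 2) window centers in pixels.
        # Keep only pairs within the physical motion limit R_lim; nodes that
        # appear in no surviving pair are the isolated nodes to be excluded.
        d2 = np.sum((pos_prev[:, None, :] - pos_cur[None, :, :])**2, axis=-1)
        ii, jj = np.nonzero(d2 <= r_lim**2)
        return list(zip(ii.tolist(), jj.tolist()))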
The sparse bipartite graph matching algorithm adds a search constraint to the classical KM algorithm, reducing the computational complexity from O(n³) to O(mn² + n log n), where n is the number of nodes on one side of the bipartite graph and m is the number of connections in the bipartite graph. Since m ≪ n² in the sparse graph, the computational complexity is significantly reduced. The key property is that during the search for complete subgraphs, at least one group of candidate matches satisfies the matching requirement in each round; the search complexity of each verification is O(n²) and the total number of verifications is m, so the search computation is reduced.
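For illustration, the matching step can be approximated with SciPy's dense Hungarian solver in place of the sparse Kuhn-Munkres variant described here (an assumed stand-in: pruned edges are given weight 0, and zero-weight assignments are discarded afterwards):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match(edges, weights, n_prev, n_cur):
        # edges: surviving (i, j) pairs after R_lim gating
        # weights: their association weights A(f_i, f_j)
        a = np.zeros((n_prev, n_cur))
        for (i, j), wij in zip(edges, weights):
            a[i, j] = wij
        rows, cols = linear_sum_assignment(a, maximize=True)
        return [(i, j) for i, j in zip(rows, cols) if a[i, j] > 0]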
Step S105: based on the constructed sparse weighted bipartite graph, perform target tracking and track prediction with a Kalman filtering algorithm.
Fluid motion is mostly driven by external fields such as the gravity field, wind field and air pressure field. Such environmental fields vary relatively smoothly at observable scales, so the actual fluid motion trajectory is generally relatively smooth. On the premise that the observation frequency is significantly higher than the fluid motion velocity, the position of a fluid target changes gradually and continuously between consecutive frames. The tracking step can therefore use a classical tracker such as Kalman filtering or particle filtering. Considering convergence speed and motion stability, Kalman filtering is adopted for tracking; it meets the requirements on accuracy and computational complexity.
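A minimal constant-velocity Kalman tracker for one window center is sketched below (noise levels q and r are illustrative defaults, not values from the patent):

    import numpy as np

    class CVKalman:
        def __init__(self, x, y, dt=1.0, q=1e-2, r=1.0):
            self.s = np.array([x, y, 0.0, 0.0])        # state [x, y, vx, vy]
            self.P = np.eye(4)                         # state covariance
            self.F = np.eye(4)
            self.F[0, 2] = self.F[1, 3] = dt           # constant-velocity dynamics
            self.H = np.eye(2, 4)                      # observe position only
            self.Q = q * np.eye(4)
            self.R = r * np.eye(2)

        def predict(self):
            self.s = self.F @ self.s
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.s[:2]                          # extrapolated position

        def update(self, z):
            y = np.asarray(z) - self.H @ self.s        # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
            self.s = self.s + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P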
A specific embodiment of the invention is given below; its overall implementation flow is shown in FIG. 2.
Step S10: input the measurement image I_k at time t_k; denote the image height by H, the width by W and the number of channels by C, and push the image onto buffer stack S1.
Step S20: detect fluid objects in the measurement image I_k, obtaining N_k fluid objects, and extract the basic parameters of each sliding window accordingly, including the window's width, height, number of valid pixels, centroid and grid position. The detailed flow of this step is shown in FIG. 3 and comprises the following steps:
step S201, adopting a threshold value method to carry out comparison on NkPerforming a preliminary screening on each fluid object to generate a target mask Mk
Obtaining a target mask MkFor reducing the amount of extra computation introduced by extraneous information such as background, interference, etc. MkCan be input from the outside, or can be input by simple characteristics such as color,And (5) screening out texture and corner feature. If no simple method is available to implement the preliminary screening, MkThe matrix is 1 in all, which means that the whole image is input.
Step S202: generate the input image pyramid according to the target mask M_k obtained from the preliminary screening:
I^k = {I^k_1, I^k_2, …, I^k_s}
The image pyramid comprises s pictures, where s is the preset number of scales; the first picture is the original-size picture, and I^k_i is obtained from the image I_k by scaling, with the height and width of the i-th layer reduced by a factor of 2^(i−1).
Step S203: the sliding window is a rectangular window of size h × w, where h and w are the height and width of the rectangular sliding window in pixels; the sliding-window grid positions are calculated according to the target mask M_k.
The ordinate y of the grid points ranges from 1 to H − h/2 (H is the image height, h the sliding-window height) in an arithmetic progression with interval h/2, and the abscissa x ranges from 1 to W − w/2 in an arithmetic progression with interval w/2. If M_k = 0 at a grid point, the grid point is removed.
Step S204: apply sliding windows at all grid points over the s scales; the sliding window has j shape specifications, and the number of rectangular windows is L = 4sjHW/(hw), where H and W are the height and width of the whole image and h and w are the height and width of the rectangular sliding window, in pixels.
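For instance (illustrative numbers, not values from the patent), with H = W = 1024, h = w = 32, s = 3 scales and j = 7 shape specifications, L = 4 × 3 × 7 × 1024 × 1024/(32 × 32) = 86016 rectangular windows.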
Step S205: extract the basic parameters of each sliding window, including the window's width, height, number of valid pixels, centroid and grid position.
Step S30: compute the features of the L rectangular windows; from the basic parameters of each sliding window obtained in step S20, calculate the window's parameters, including shape, intensity distribution and position information.
Step S40: pop the measurement I_{k−1} at time t_{k−1} from the stack and calculate the optical flow field P_k between it and I_k.
Step S50: calculate the optical flow median (u, v) of the valid pixels in each rectangular box, add it to the corresponding parameter information, and construct the feature set F_k(N_k) at time t_k, where N_k is the number of feature vectors at time t_k and F_k contains N_k feature vectors f.
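A sketch of this per-window flow statistic follows (Farneback dense flow is an assumed choice; the patent only says an optical flow method is used):

    import cv2
    import numpy as np

    def window_flow_medians(prev_gray, cur_gray, windows):
        # prev_gray, cur_gray: 8-bit grayscale frames at t_{k-1} and t_k
        # windows: list of (y0, y1, x0, x1) rectangular window bounds
        flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        meds = []
        for y0, y1, x0, x1 in windows:
            patch = flow[y0:y1, x0:x1].reshape(-1, 2)
            meds.append(np.median(patch, axis=0))      # median (u, v) per window
        return meds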
Step S60: pop the feature set F_{k−1}(N_{k−1}) at time t_{k−1} from the stack and calculate the association weights.
The association weight matrix is calculated according to equations (1) to (4):
A ∈ R^(N_{k−1} × N_k)
where N_{k−1} and N_k denote the feature counts at times t_{k−1} and t_k respectively, and R denotes the real numbers. A(i, j) = A(f_i, f_j); that is, each element of the association weight matrix A is obtained by substituting the feature vectors at the corresponding positions into A(f_i, f_j) in equation (1). When the distance between two rectangular windows exceeds R_lim, the corresponding position of A is marked as an empty connection.
Step S70: search for rectangular-window matches according to the association weight matrix A; the correspondence between the rectangular windows at times t_{k−1} and t_k is computed with the sparse Kuhn-Munkres algorithm.
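A toy end-to-end run of steps S60-S70 on synthetic window centers (illustrative data; real inputs come from the sliding-window features, and the dense Hungarian solver again stands in for the sparse KM algorithm):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    prev = np.array([[10.0, 10.0], [40.0, 40.0]])      # t_{k-1} window centers
    cur  = np.array([[12.0, 11.0], [41.0, 43.0]])      # t_k window centers
    r_lim = 8.0

    d2 = np.sum((prev[:, None] - cur[None, :])**2, axis=-1)
    A = np.where(d2 <= r_lim**2, np.exp(-d2 / 100.0), 0.0)     # gated weights (S60)
    rows, cols = linear_sum_assignment(A, maximize=True)       # matching (S70)
    print([(i, j) for i, j in zip(rows, cols) if A[i, j] > 0]) # [(0, 0), (1, 1)]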
Step S80: according to the matching result, i.e. the correspondence between the rectangular windows at times t_{k−1} and t_k, set the numbers of all sliding windows at time t_k and push the result onto the target information stack S2;
Step S90: update each target tracker with this result;
Step S100: extrapolate the trackers to output prediction results, integrate the target tracks in the target information stack S2, and output the tracking results.
According to the technical scheme, local features are extracted through a dense sliding window, exploiting the invariance of local fluid features during separation and fusion; judging separation and fusion events is avoided, and the local features are matched and tracked. For fluid features of different scales, a multi-layer image pyramid collects targets at every scale. The multi-fluid target matching and tracking accuracy can thus be effectively improved by exploiting the spatio-temporal continuity of the overall features of multiple fluid targets, following from the fact that fluid morphological features change continuously while the fluid is driven by the external situation field. By adopting the sparse Kuhn-Munkres algorithm, the invention reduces the running time of the method to the second level and meets the requirements of an online algorithm.
Although the present invention has been described in terms of the preferred embodiment, the invention is not limited to that embodiment. Any equivalent changes or modifications made without departing from the spirit and scope of the present invention also belong to its protection scope. The scope of the invention should therefore be determined with reference to the appended claims.

Claims (7)

1. An online cross-scale multi-fluid target matching and tracking method, characterized by comprising the following steps:
step S101: extracting associated objects with a sliding window, matching and tracking specific features on the associated-object images, and extracting fluid target features of different scales;
step S102: selecting basic feature parameters for feature extraction of the fluid target features at each scale;
step S103: calculating a composite association weight from the feature information of two consecutive frames, extracting spatial and temporal association information between different individual targets through the composite association weight, and constructing a weighted bipartite graph over the two adjacent frames from this association information;
step S104: performing online target matching with a sparse weighted bipartite graph matching algorithm, removing isolated nodes during target matching, and constructing a sparse weighted bipartite graph from the matched targets;
step S105: based on the constructed sparse weighted bipartite graph, performing target tracking and track prediction with a Kalman filtering algorithm.
2. The online cross-scale multi-fluid target matching and tracking method of claim 1, wherein
the composite association weight is a weighted sum of the morphological correlation parameter, the motion correlation parameter and the motion smoothness correlation parameter, with constant weight coefficients.
3. The online cross-scale multi-fluid target matching and tracking method of claim 2, wherein
the morphological correlation parameter is a weighted sum of the shape feature and the value-distribution feature, with constant weight coefficients;
the shape feature vector comprises the length and width of the rectangular window, the target proportion within the window and the centroid position of the target in the window, and the cosine distance is used to calculate the weight;
the value-distribution feature vector comprises the length and width of the rectangular window and the image intensity distribution within the window, and the Gaussian distance is used to calculate the weight.
4. The online cross-scale multi-fluid target matching and tracking method of claim 2, wherein
the motion correlation parameter is a motion direction constraint: the situation fields of two consecutive frames are calculated by an optical flow method, and the average motion field within each rectangular window is computed; several feature points of the rectangular window may be selected, including corner points, centroids and centers, and the Gaussian distance is calculated as the motion parameter.
5. The online cross-scale multi-fluid target matching and tracking method of claim 2, wherein
the motion smoothness correlation parameter is described by the estimated velocity of the situation field.
6. The online cross-scale multi-fluid target matching and tracking method of claim 1, wherein the process of extracting associated objects with a sliding window in step S101 comprises:
sliding a window over the image pyramid, the multi-scale sliding window being realized by scaling the image to different scales while the rectangular window size is kept unchanged;
sampling sliding-window positions on uniform grid points at each scale, the grid positions ensuring that adjacent sliding windows overlap by half their area;
placing several standard rectangular sliding windows at each grid point, with specifications transitioning gradually from a short flat rectangle to a tall narrow rectangle, the number of specifications being determined by the rectangular window size and the total number of sliding windows.
7. The online cross-scale multi-fluid target matching and tracking method of claim 2, wherein the process of performing online target matching with a sparse weighted bipartite graph matching algorithm in step S104 comprises:
determining targets that cannot be associated according to the physical motion limits of the nodes, removing the corresponding edges, excluding isolated nodes, and constructing a sparse weighted bipartite graph;
performing online target matching on the sparse weighted bipartite graph with a sparse Kuhn-Munkres matching algorithm.

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant