CN111242972B - On-line cross-scale multi-fluid target matching tracking method - Google Patents

On-line cross-scale multi-fluid target matching tracking method

Info

Publication number
CN111242972B
Authority
CN
China
Prior art keywords
target
matching
window
association
bipartite graph
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911336384.3A
Other languages
Chinese (zh)
Other versions
CN111242972A (en)
Inventor
刘佑达
陈建军
张扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 14 Research Institute
Original Assignee
CETC 14 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 14 Research Institute filed Critical CETC 14 Research Institute
Priority to CN201911336384.3A
Publication of CN111242972A
Application granted
Publication of CN111242972B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20016 - Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)

Abstract

The invention relates to an online cross-scale multi-fluid target matching and tracking method. A sliding window is used to extract association objects, specific features on the association-object images are matched and tracked, and fluid target features of different scales are extracted. For fluid target features of different scales, basic characteristic parameters are selected for feature extraction. A composite association weight is calculated from the feature information of two consecutive frames; spatial and temporal association information between different fluid targets is extracted through the composite association weight, and a weighted bipartite graph over the two adjacent frames is constructed from this association information. Online target matching is performed with a sparse weighted bipartite graph matching algorithm, and a sparse weighted bipartite graph is constructed from the matched targets. Based on the constructed sparse weighted bipartite graph, target tracking and track prediction are performed with a Kalman filtering algorithm. The invention effectively improves the matching and tracking accuracy of multiple fluid targets, and reduces the running time of the method to the order of seconds, meeting the requirements of an online algorithm.

Description

On-line cross-scale multi-fluid target matching tracking method
Technical Field
The invention relates to an online cross-scale multi-fluid target matching tracking method.
Background
Currently, multi-target tracking techniques consist of four steps: target detection, association weight calculation, target matching, and target track tracking. Although many research results have been achieved on the multi-target tracking problem, the models adopted for tracked targets such as aircraft, pedestrians and vehicles are point targets, rigid targets, or deformable targets composed of multiple rigid bodies. Fluid targets differ greatly from these: the shape and intensity distribution of a fluid target change continuously, and the scales of different targets vary widely.
Existing classical target detection methods require identification of individual target monomers. However, the shape of a fluid target changes constantly during motion, and separation and fusion must be judged: a single target may split into several targets during motion, or several targets may fuse into a single one. Fusion between different monomers, or the separation of a large fluid target into small ones, causes drastic changes in important characteristics such as target size, shape, intensity and key points, so the shapes, intensities and positions of fluid targets extracted at different times differ greatly; this makes association feature selection and target matching difficult, and matching and tracking hard to achieve. Furthermore, different fluid targets may span a large range of scales, and the smallest fluid target may occupy only a few pixels on the observed image. Such targets are difficult to distinguish effectively from observation noise, and detection results often jump or are lost between frames. Therefore, the detection and matching methods of existing online multi-target tracking are not suitable for fluid target tracking.
Disclosure of Invention
The invention aims to solve the above problems in the prior art, and provides an online cross-scale multi-fluid target matching and tracking method that effectively improves the matching and tracking accuracy of multiple fluid targets and solves the difficulty of matching targets over long time spans caused by the separation and fusion of fluid targets.
The aim of the invention is realized by the following technical scheme:
the invention provides an online cross-scale multi-fluid target matching tracking method, which comprises the following steps:
step S101, extracting an associated object by adopting a sliding window, carrying out matching tracking on specific features on an associated object image, and extracting fluid target features with different scales;
step S102, selecting basic characteristic parameters for characteristic extraction aiming at fluid target characteristics of different scales;
step S103, calculating composite association weights according to the feature information of two consecutive frames, extracting spatial association and temporal association information between different fluid targets through the composite association weights, and constructing a weighted bipartite graph over the two adjacent frames based on the association information;
step S104, performing online target matching by adopting a sparse weighted bipartite graph matching algorithm, eliminating isolated nodes in a target matching process, and constructing a sparse weighted bipartite graph based on the matched targets;
step S105, performing target tracking and track prediction by adopting a Kalman filtering algorithm based on the constructed sparse weighted bipartite graph.
Further, the composite association weight is a weighted sum of a morphological association parameter, a motion association parameter and a motion smoothness association parameter, with constant weight coefficients.
Further, the morphological association parameter is a weighted sum of a shape feature and a value-distribution feature, with constant weight coefficients;
the shape feature vector comprises the rectangular window length, the rectangular window width, the target duty ratio in the window and the target centroid position in the window, and the cosine distance is adopted to calculate its weight;
the value-distribution feature vector comprises the rectangular window length, the rectangular window width and the image intensity distribution in the window, and the Gaussian distance is adopted to calculate its weight.
Further, the motion association parameter is a motion direction constraint. An optical flow method is adopted to calculate the situational fields of two consecutive frames, and the average motion field within each rectangular window is obtained statistically. Several feature points may be selected in each rectangular window, including corner points, the centroid and the center, and the Gaussian distance is calculated as the motion parameter.
Further, the motion smoothness association parameter is described by the velocity estimated from the situational field.
Further, the process of extracting the associated objects using a sliding window in step S101 includes:
sliding a window over the image pyramid, where the multi-scale sliding window is realized by scaling the image to different scales while the rectangular window size is kept unchanged;
sampling the sliding-window positions on a uniform grid at each scale, the grid positions ensuring that adjacent sliding windows overlap by half their area;
placing rectangular sliding windows of several specifications on each grid point, the specifications transitioning gradually from a short flat rectangle to a tall narrow rectangle, with the number of specifications determined by the rectangular window size and the total number of sliding windows.
Further, the process of performing online target matching with the sparse weighted bipartite graph matching algorithm in step S104 includes:
determining targets that cannot be associated according to the physical motion limits of the nodes, removing the corresponding edges and the isolated nodes, and constructing a sparse weighted bipartite graph;
performing online target matching on the sparse weighted bipartite graph with a sparse Kuhn-Munkres matching algorithm.
As can be seen from the technical scheme of the invention, the invention has the following technical effects:
1. Exploiting the invariance of local fluid features during separation and fusion, local features are extracted through dense sliding windows, so that judging separation and fusion events is avoided and matching and tracking of local features is achieved; for fluid features of different scales, a multi-layer image pyramid collects targets of every scale. Accordingly, exploiting the continuous change of fluid morphological features while the fluid is driven by an external situational field, the invention uses the spatio-temporal continuity of the overall features of multiple fluid targets to effectively improve their matching and tracking accuracy.
2. The computational complexity is low, and online tracking can be realized. The computational bottleneck of the method is the time-consuming matching of a large number of rectangular windows; adopting the sparse Kuhn-Munkres algorithm reduces the running time of the method to the order of seconds, meeting the requirements of an online algorithm.
3. The invention is versatile across a wide variety of fluid targets. It can perform online matching and tracking whenever the observation frequency is higher than the rate of change of the fluid morphology, independently of the characteristics of the specific target.
Drawings
FIG. 1 is a flowchart of the method of the present invention;
FIG. 2 is a block diagram of the modules corresponding to the cross-scale multi-fluid target matching tracking method of the present invention;
FIG. 3 is a flowchart of target feature extraction in the present invention.
Detailed Description
The technical scheme of the invention will be further described in detail with reference to the accompanying drawings.
Example 1
The invention provides an online cross-scale multi-fluid target matching tracking method, which is shown in fig. 1 and comprises the following steps:
step S101, extracting the associated object by adopting a sliding window, carrying out matching tracking on specific features on the image of the associated object, and extracting fluid target features with different scales.
A window is slid over the image pyramid; the multi-scale sliding window is implemented by scaling the image to different scales while the rectangular window size remains unchanged. The sliding-window positions are sampled on a uniform grid at each scale, the grid positions ensuring that adjacent sliding windows overlap by half their area. Rectangular sliding windows of several specifications are placed on each grid point, transitioning gradually from a short flat rectangle to a tall narrow rectangle; the number of specifications is determined by the rectangular window size and the total number of sliding windows.
According to the invention, targets are detected with a sliding window, which avoids the need of existing tracking methods to identify individual targets and instead matches and tracks specific features on the image. A candidate target region is selected on the observed image; it can be roughly acquired by a simple thresholding or elimination method, and sliding-window sampling is performed within it. To ensure that target features are neither missed nor lost, adjacent rectangular windows should overlap by 1/2 of their area. Sliding-window positions are sampled on a grid. If the observed image size is L and the window size is h, the first grid point is at h/2, and subsequent grid points follow at intervals of h/2 until the boundary of the observed image is reached. Sliding windows of several specifications are placed on each grid point. Taking h = 5 as an example, windows of specifications (2, 8), (3, 7), (4, 6), (5, 5), (6, 4), (7, 3) and (8, 2) capture features of different shapes.
For fluid target features of different scales, the sliding window samples at different scales. An N-layer image pyramid is constructed, the image size being reduced by successive factors relative to the original image; the grid sampling process is repeated on each layer, and features of different scales are obtained. Such sampling typically outputs thousands or even tens of thousands of features.
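As an illustration of the sampling just described, the following Python sketch enumerates multi-scale sliding windows on a grid. It is a minimal sketch under stated assumptions: the function name, the spec list (following the h = 5 example above), and the decimation-based pyramid are illustrative choices, not the patent's reference implementation.

```python
import numpy as np

def grid_windows(image, num_scales=3, h=5,
                 specs=((2, 8), (3, 7), (4, 6), (5, 5), (6, 4), (7, 3), (8, 2))):
    """Enumerate multi-scale sliding windows on a decimated image pyramid.

    Grid points start at h/2 and advance in steps of h/2, so adjacent
    windows overlap by half their area; at each grid point one window
    per (height, width) spec is placed. Coordinates are per pyramid level.
    """
    windows = []  # entries: (scale, y, x, win_h, win_w)
    level = np.asarray(image)
    for s in range(num_scales):
        H, W = level.shape[:2]
        step = max(h // 2, 1)
        for y in range(step, max(H - step, step), step):
            for x in range(step, max(W - step, step), step):
                for win_h, win_w in specs:
                    if y + win_h <= H and x + win_w <= W:
                        windows.append((s, y, x, win_h, win_w))
        level = level[::2, ::2]  # halve height and width for the next level
    return windows
```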
Step S102, selecting basic characteristic parameters for characteristic extraction aiming at fluid target characteristics of different scales.
The invention selects basic characteristic parameters rather than using the whole image patch. Feature extraction over whole patches, as in convolutional-neural-network methods, is computationally heavy, and the convolution approach is ill-suited to rectangular boxes of different scales. Basic characteristic parameters keep the feature vectors output by different rectangular boxes the same length, which simplifies subsequent processing.
Step S103, composite association weights are calculated from the feature information of two consecutive frames; spatial and temporal association information between different fluid targets is extracted through the composite association weights, and a weighted bipartite graph over the two adjacent frames is constructed from this association information.
The morphological association parameter consists of a shape feature and a value-distribution feature. The shape feature vector comprises the rectangular window length, the rectangular window width, the target duty ratio in the window (the proportion of target pixels) and the target centroid position in the window; the cosine distance is adopted to calculate its weight. The value-distribution feature vector comprises the rectangular window length, the rectangular window width and the image intensity distribution in the window; the Gaussian distance is adopted to calculate its weight. The image intensity distribution can use gray values or observed intensity values for a single channel, and color distributions or multidimensional joint distributions for multi-channel observations, in each case outputting a fixed-length vector for the calculation. The morphological association parameter is the weighted sum of the shape feature and the value-distribution feature, with constant weight coefficients.
The motion association parameter is a motion direction constraint. An optical flow method is adopted to calculate the situational fields of two consecutive frames, and the average motion field within each rectangular window is obtained statistically. Several feature points may be selected in each rectangular window, including corner points, the centroid and the center, and the Gaussian distance is calculated as the motion parameter.
The motion smoothness association parameter is described by the velocity estimated from the situational field: the Gaussian distance between the historical motion velocity and the currently estimated velocity is calculated.
The composite association weight comprises the morphological association parameter, the motion association parameter and the motion smoothness association parameter. It is the weighted sum of the three parameters with constant weight coefficients, so it extracts spatial and temporal association information between different targets simultaneously.
Patch association requires evaluating the feature association weights over all sliding windows between two adjacent frames. Denote the feature parameter of each sliding window by f; the feature association weight between two rectangular windows is defined as:
A(f_i, f_j) = ω_1·A_appearance(f_i, f_j) + ω_2·A_motion(f_i, f_j) + ω_3·A_smooth(f_i, f_j)   (1)
wherein A_appearance is the morphological association parameter, A_motion is the motion association parameter, A_smooth is the motion smoothness association parameter, and the ω_i are the coefficients of the three parameters, set according to the characteristics of the actual tracked targets.
The morphological association parameter A_appearance adopts shape features and value-distribution features. The shape feature vector comprises the rectangular window length, the rectangular window width, the target duty ratio in the window and the target centroid position in the window; the cosine distance is adopted to calculate its weight. The value-distribution feature vector comprises the rectangular window length, the rectangular window width and the image intensity distribution in the window; the Gaussian distance is adopted to calculate its weight. The image intensity distribution can use gray values or observed intensity values for a single channel, and color distributions or multidimensional joint distributions for multi-channel observations, outputting a fixed-length vector for the calculation. The morphological association parameter is defined as:
A_appearance(f_i, f_j) = w_1·⟨f_i^s, f_j^s⟩ / (‖f_i^s‖·‖f_j^s‖) + w_2·exp(−‖f_i^v − f_j^v‖² / σ_a²)   (2)
wherein the shape sub-vector f^s and the value-distribution sub-vector f^v select the specific components of f used in the calculation; w_1 and w_2 are weight coefficients; ⟨f_i, f_j⟩ denotes the inner product of two feature vectors; ‖f_i‖ denotes the modulus of the feature vector f_i; and σ_a is the weight coefficient of the Gaussian distance, used to adjust the distance descriptor, with an empirical value usually between 5 and 20.
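A hedged sketch of formula (2) in Python follows. The dictionary layout of each feature f, and its split into a shape part and a value-distribution part, are assumptions made for illustration only.

```python
import numpy as np

def a_appearance(f_i, f_j, w1=0.5, w2=0.5, sigma_a=10.0):
    """Morphological association weight per formula (2): cosine distance
    on the shape sub-vector plus a Gaussian distance on the
    value-distribution sub-vector. Field names are assumed."""
    s_i, s_j = f_i["shape"], f_j["shape"]  # e.g. [len, width, duty ratio, cx, cy]
    v_i, v_j = f_i["value"], f_j["value"]  # e.g. [len, width, intensity hist...]
    cos = float(np.dot(s_i, s_j)) / (np.linalg.norm(s_i) * np.linalg.norm(s_j) + 1e-12)
    gauss = float(np.exp(-np.sum((v_i - v_j) ** 2) / sigma_a ** 2))
    return w1 * cos + w2 * gauss
```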
The motion association parameter A_motion adopts a motion direction constraint. The optical flow method is used to calculate the situational field (u, v) of two consecutive frames, and the average motion field (u, v) is obtained statistically within the i-th rectangular window. Several feature points may be selected in each rectangular window, including corner points, the centroid and the center, with coordinates denoted (x, y); the Gaussian distance is calculated as the motion parameter. The motion association parameter A_motion is defined as:
A_motion(f_i, f_j) = exp(−(‖(u_i, v_i) − (u_j, v_j)‖² + ‖(x_i, y_i) − (x_j, y_j)‖²) / σ_b²)   (3)
wherein f_i and f_j are the i-th feature vector of the previous frame and the j-th feature vector of the current frame; the feature vectors comprise the optical-flow situational field (u, v) and the center coordinates (x, y); and σ_b is the weight coefficient of the Gaussian distance, used to adjust the distance descriptor, with an empirical value usually between 5 and 20.
The motion smoothness association parameter A_smooth performs matching estimation using the situational field. After each matching iteration, the motion direction (u, v) of the associated window, calculated from the matching result, is recorded in f. The motion velocity of each target should transition smoothly, so the motion smoothness parameter is constructed as:
A_smooth(f_i, f_j) = exp(−‖(u_i, v_i) − (u_j, v_j)‖² / σ_c²)   (4)
wherein f_i and f_j are the i-th feature vector of the previous frame and the j-th feature vector of the current frame; the feature vectors comprise the optical-flow situational field (u, v), with (u_i, v_i) the velocity recorded in the matching history; and σ_c is the weight coefficient of the Gaussian distance, used to adjust the distance descriptor, with an empirical value usually between 5 and 20.
In summary, the association weights between all rectangular windows are obtained, and from them a weighted bipartite graph G over the two adjacent frames is constructed.
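Combining formulas (1) to (4), the composite weight of one window pair might be computed as below. This is a sketch under the same assumed feature layout as above, reusing a_appearance() from the earlier sketch; the field names uv (window flow), xy (window centre) and uv_hist (velocity recorded from the previous matching), and the ω coefficients, are placeholders to be set per the tracked targets.

```python
import numpy as np

def association_weight(f_i, f_j, omega=(0.4, 0.3, 0.3), sigma_b=10.0, sigma_c=10.0):
    """Composite association weight per formulas (1)-(4)."""
    # formula (3): Gaussian distance over window flow and centre position
    d_motion = (np.sum((f_i["uv"] - f_j["uv"]) ** 2)
                + np.sum((f_i["xy"] - f_j["xy"]) ** 2))
    a_motion = np.exp(-d_motion / sigma_b ** 2)
    # formula (4): Gaussian distance between historical and current velocity
    a_smooth = np.exp(-np.sum((f_i["uv_hist"] - f_j["uv"]) ** 2) / sigma_c ** 2)
    # formula (1): constant-coefficient weighted sum
    return (omega[0] * a_appearance(f_i, f_j)
            + omega[1] * a_motion + omega[2] * a_smooth)
```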
And step S104, performing online target matching by adopting a sparse weighted bipartite graph matching algorithm, eliminating isolated nodes in the target matching process, and constructing a sparse weighted bipartite graph based on the matched targets.
Sparse weighted bipartite graph matching algorithm: node-screening preprocessing is performed on the weighted bipartite graph. Targets that cannot be associated are determined from the physical motion limits of the nodes, the corresponding edges are removed, and isolated nodes are excluded, yielding a sparse weighted bipartite graph.
The sparse Kuhn-Munkres matching algorithm is then applied to the sparse weighted bipartite graph.
Multi-target matching methods operate either on two adjacent frames or on multiple frames. Multi-frame algorithms match targets across several consecutive frames and can cope with the missed detections and lost frames that affect single frames, but their computational complexity is high and online matching is difficult. The adjacent-frame method associates the monitored targets in two adjacent frames and admits an online association method. Adjacent-frame matching is a weighted bipartite graph matching problem, usually solved with the Kuhn-Munkres algorithm (KM algorithm for short). The computational complexity of the KM algorithm is O(n³), where n is the number of nodes in the bipartite graph. For target-matching problems with thousands to tens of thousands of targets, the calculation speed of the KM algorithm cannot meet real-time requirements of seconds or minutes.
The movement of fluid targets is mostly smooth and slowly varying. When the observation frequency is significantly higher than the speed of fluid motion, the range of fluid movement between consecutive frames is limited. Most connections can therefore be excluded according to an upper limit R_lim on physical fluid movement. If the number of single-side nodes of the bipartite graph is n, the theoretical number of connections is n²; let the actual number of connections be m. If the image distance between two nodes exceeds R_lim, the connection is removed. Denoting the image size by L, when R_lim < L/3 one obtains m < n²/9 ≪ n², so the bipartite graph is sparse. Adopting a sparse weighted bipartite graph matching algorithm then effectively reduces the computational complexity.
The sparse bipartite graph matching algorithm adds search constraints to the classical KM algorithm, reducing the computational complexity from O(n³) to O(mn² + n·log n), where n is the number of single-side nodes and m the number of connections in the bipartite graph. Since m ≪ n² in the sparse graph, the computational complexity is significantly reduced. Its characteristic is that, in the search for a complete subgraph, each verified matching satisfies at least one group of matching requirements; the search complexity of each verification is O(n²) and the total number of verifications is m, so the amount of search computation is reduced.
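A sketch of the R_lim pruning and the subsequent matching follows. SciPy's sparse bipartite matcher is used here as a stand-in for the patent's sparse Kuhn-Munkres solver (an assumption, not the patent's algorithm); a further assumption is that a complete matching of the smaller side still exists on the pruned graph, which SciPy requires, and that weights is a dense array of precomputed association weights.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import min_weight_full_bipartite_matching

def sparse_match(centers_prev, centers_cur, weights, r_lim):
    """Build the sparse bipartite graph and match it.

    Edges between windows whose image distance exceeds r_lim are never
    created, keeping the graph sparse per the text above.
    """
    rows, cols, vals = [], [], []
    for i, p in enumerate(centers_prev):
        for j, q in enumerate(centers_cur):
            if np.hypot(p[0] - q[0], p[1] - q[1]) <= r_lim:
                rows.append(i)
                cols.append(j)
                # small shift keeps zero-weight edges from vanishing in CSR
                vals.append(float(weights[i][j]) + 1e-9)
    graph = csr_matrix((vals, (rows, cols)),
                       shape=(len(centers_prev), len(centers_cur)))
    # maximize total association weight over the sparse graph
    return min_weight_full_bipartite_matching(graph, maximize=True)
```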
Step S105, performing target tracking and track prediction by adopting a Kalman filtering algorithm based on the constructed sparse weighted bipartite graph.
Fluid motion is mostly driven by external fields, such as gravitational fields, wind fields and air-pressure fields. Such environmental fields vary smoothly within the observable dimensions, so the actual trajectory of fluid motion is generally smooth. Provided the observation frequency is significantly higher than the speed of fluid motion, the positional change of a fluid target between consecutive frames is gradual and continuous. The tracking method can therefore adopt classical trackers such as Kalman filtering and particle filtering. Considering convergence speed and motion stability, the invention adopts Kalman filtering for tracking, which satisfies the requirements on accuracy and computational complexity.
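For step S105, a minimal constant-velocity Kalman filter over a window centroid could look as follows. It is a sketch under stated assumptions: the state layout [x, y, vx, vy] and the noise levels q and r are illustrative choices, not parameters fixed by the patent.

```python
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter over a window centroid (x, y)."""
    def __init__(self, x, y, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x, y, 0.0, 0.0])       # state [x, y, vx, vy]
        self.P = np.eye(4)                        # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)  # transition model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)  # we observe the centroid only
        self.Q = q * np.eye(4)                    # process noise
        self.R = r * np.eye(2)                    # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]  # predicted centroid (track prediction)

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```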
A specific embodiment of the present invention is given below, and a flowchart of the overall implementation thereof is shown in fig. 2.
Step S10, input the measurement image I_k at time t_k; the image height is denoted H, the width W and the number of channels C. Push the image onto the buffer stack S1.
Step S20, detect fluid objects in the measurement image I_k, obtain their number N_k, and, according to N_k, extract the basic parameters of each sliding window, including the window width, height, number of effective pixels, centroid and grid-point position. The detailed flow of this step is shown in FIG. 3 and comprises the following steps:
step S201, threshold method is adopted for N k Performing primary screening on the fluid objects to generate a target mask M k
Obtaining a target mask M k For reducing the additional computational effort introduced by extraneous information such as background, interference, etc. M is M k The input can be from the outside, and the simple features such as color, texture and corner features can be screened out. If no simple method is used for realizing the primary screening, M k A matrix of all 1's indicates that the entire image is input.
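A minimal sketch of this threshold-based pre-screening, assuming a single-channel image; the threshold value is illustrative, and thresh=None reproduces the all-ones fallback described above.

```python
import cv2
import numpy as np

def target_mask(image, thresh=30):
    """Primary screening per step S201: a plain intensity threshold.
    thresh=None falls back to the all-ones mask (whole image used)."""
    if thresh is None:
        return np.ones(image.shape[:2], np.uint8)
    _, mask = cv2.threshold(image, thresh, 1, cv2.THRESH_BINARY)
    return mask.astype(np.uint8)
```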
Step S202, according to the target mask M_k obtained by the pre-screening, generate the input image pyramid {I_k^1, I_k^2, …, I_k^s}.
The image pyramid comprises s pictures, the first being the original-size picture, where s is the preset number of scales; the i-th layer I_k^i is obtained by scaling the image I_k so that its height and width are reduced by a factor of 2^(i-1).
Step S203, a rectangular window of size h × w is adopted as the sliding window, where h and w are the height and width of the rectangular sliding window in pixels; the sliding-window grid-point positions are calculated according to the target mask M_k.
The ordinate y of a grid point takes values from 1 to H - h/2 (H is the image height, h the sliding-window height) in an arithmetic progression with step h/2, and the abscissa x takes values from 1 to W - w/2 in an arithmetic progression with step w/2. If M_k is 0 at a grid-point position, the grid point is removed.
In step S204, windows are slid over all grid points at the s scales; the sliding windows comprise j shape specifications, and the number of rectangular windows is L = 4sjHW/(hw), where H and W are the height and width of the whole image and h and w are the height and width of the rectangular sliding window, all in pixels.
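For concreteness, the window-count formula can be evaluated as below; the numbers are illustrative, and the count is an upper bound before mask-based grid-point removal.

```python
# L = 4*s*j*H*W / (h*w): grid steps of h/2 and w/2 give 4*H*W/(h*w)
# grid points per scale, times j specs, times s scales (upper bound).
s, j = 3, 7          # scales and shape specifications (illustrative)
H, W = 256, 256      # image size (illustrative)
h, w = 5, 8          # nominal window size (illustrative)
L = 4 * s * j * H * W // (h * w)
print(L)  # 137625 windows before mask-based pruning
```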
In step S205, the basic parameters of each sliding window are extracted, including the window width, window height, number of effective pixels, centroid and grid-point position.
Step S30, calculate the features of the L rectangular windows: from the basic parameters of each sliding window obtained in step S20, compute the window parameters, including shape, intensity distribution and position information.
Step S40, extract the measurement result I_{k-1} at time t_{k-1} from the stack and calculate the optical flow field P_k between I_{k-1} and I_k.
Step S50, calculate the median optical flow (u, v) over the effective pixels within each rectangular box, add it to the corresponding parameter information, and construct the feature set F_k(N_k) at time t_k, where N_k is the number of feature vectors at time t_k and F_k comprises N_k feature vectors f.
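Step S50 might be realized as follows. Farneback optical flow is one concrete choice of optical flow method (an assumption; the text only requires an optical flow method), windows are assumed to be (y, x, h, w) tuples at the original scale, and mask marks the effective pixels.

```python
import cv2
import numpy as np

def window_flow_medians(prev_img, cur_img, windows, mask):
    """Per-window median optical flow over mask-valid pixels (step S50)."""
    flow = cv2.calcOpticalFlowFarneback(prev_img, cur_img, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    medians = []
    for y, x, win_h, win_w in windows:
        m = mask[y:y + win_h, x:x + win_w].astype(bool)
        sub = flow[y:y + win_h, x:x + win_w]
        if m.any():
            medians.append((float(np.median(sub[..., 0][m])),
                            float(np.median(sub[..., 1][m]))))
        else:
            medians.append((0.0, 0.0))  # no effective pixels in this window
    return medians
```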
Step S60, extract the feature set F_{k-1}(N_{k-1}) at time t_{k-1} from the stack and calculate the association weights.
The association weight matrix A ∈ R^(N_{k-1} × N_k) is calculated according to formulas (1)-(4), where N_{k-1} and N_k are the numbers of features at times t_{k-1} and t_k and R denotes the real numbers; A(i, j) = A(f_i, f_j), i.e., each element of the association weight matrix A is obtained by substituting the feature vectors at the corresponding positions into formula (1). When the distance between two rectangular windows exceeds R_lim, the corresponding position in A is marked as a null connection.
Step S70, perform the rectangular-window matching search according to the association weight matrix A, using the sparse Kuhn-Munkres algorithm to calculate the correspondence between the rectangular windows at times t_{k-1} and t_k.
Step S80, according to the matching result of the correspondence between the rectangular windows at times t_{k-1} and t_k, number all sliding windows at time t_k and push the result onto the target information stack S2;
Step S90, update each target tracker with the matching result;
Step S100, extrapolate the trackers to output the prediction results, integrate the target tracks in the target information stack S2, and output the tracking results.
According to the above technical scheme, exploiting the invariance of local fluid features during separation and fusion, local features are extracted through dense sliding windows, so that judging separation and fusion events is avoided and matching and tracking of local features is achieved; for fluid features of different scales, a multi-layer image pyramid collects targets of every scale. Accordingly, exploiting the continuous change of fluid morphological features while the fluid is driven by an external situational field, the invention uses the spatio-temporal continuity of the overall features of multiple fluid targets to effectively improve their matching and tracking accuracy. By adopting the sparse Kuhn-Munkres algorithm, the running time of the method is reduced to the order of seconds, meeting the requirements of an online algorithm.
While the invention has been disclosed in terms of preferred embodiments, these embodiments do not limit the invention. Any equivalent change or modification made without departing from the spirit and scope of the invention falls within the protection scope of the invention. The scope of the invention should therefore be determined by the appended claims.

Claims (6)

1. An online cross-scale multi-fluid target matching tracking method, characterized by comprising the following steps:
step S101, extracting an associated object by adopting a sliding window, carrying out matching tracking on specific features on an associated object image, and extracting fluid target features with different scales;
step S102, selecting basic characteristic parameters for characteristic extraction aiming at fluid target characteristics of different scales;
step S103, calculating composite association weights according to the feature information of two consecutive frames, extracting spatial association and temporal association information between different fluid targets through the composite association weights, and constructing a weighted bipartite graph over the two adjacent frames based on the association information; the composite association weight is a weighted sum of a morphological association parameter, a motion association parameter and a motion smoothness association parameter, with constant weight coefficients;
step S104, performing online target matching by adopting a sparse weighted bipartite graph matching algorithm, eliminating isolated nodes in a target matching process, and constructing a sparse weighted bipartite graph based on the matched targets;
step S105, performing target tracking and track prediction by adopting a Kalman filtering algorithm based on the constructed sparse weighted bipartite graph.
2. The online cross-scale multi-fluid target matching tracking method of claim 1, wherein
the morphological association parameter is a weighted sum of a shape feature and a value-distribution feature, with constant weight coefficients;
the shape feature vector comprises the rectangular window length, the rectangular window width, the target duty ratio in the window and the target centroid position in the window, and the cosine distance is adopted to calculate its weight;
the value-distribution feature vector comprises the rectangular window length, the rectangular window width and the image intensity distribution in the window, and the Gaussian distance is adopted to calculate its weight.
3. The online cross-scale multi-fluid target matching tracking method of claim 1, wherein
the motion association parameter is a motion direction constraint; the situational fields of two consecutive frames are calculated by an optical flow method, and the average motion field within each rectangular window is obtained statistically; several feature points, including corner points, the centroid and the center, are selected in each rectangular window, and the Gaussian distance is calculated as the motion parameter.
4. The online cross-scale multi-fluid target matching tracking method of claim 1, wherein
the motion smoothness association parameter is described by the velocity estimated from the situational field.
5. The online cross-scale multi-fluid target matching tracking method according to claim 1, wherein the process of extracting the associated object using a sliding window in step S101 includes:
sliding a window over the image pyramid, the multi-scale sliding window being realized by scaling the image to different scales while the rectangular window size is kept unchanged;
sampling the sliding-window positions on a uniform grid at each scale, the grid positions ensuring that adjacent sliding windows overlap by half their area;
placing rectangular sliding windows of several specifications on each grid point, the specifications transitioning gradually from a short flat rectangle to a tall narrow rectangle, the number of specifications being determined by the rectangular window size and the total number of sliding windows.
6. The online cross-scale multi-fluid target matching tracking method according to claim 1, wherein the process of online target matching using a sparse weighted bipartite graph matching algorithm in step S104 includes:
determining targets that cannot be associated according to the physical motion limits of the nodes, removing the corresponding edges and the isolated nodes, and constructing a sparse weighted bipartite graph;
performing online target matching on the sparse weighted bipartite graph with a sparse Kuhn-Munkres matching algorithm.
CN201911336384.3A 2019-12-23 2019-12-23 On-line cross-scale multi-fluid target matching tracking method Active CN111242972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911336384.3A CN111242972B (en) 2019-12-23 2019-12-23 On-line cross-scale multi-fluid target matching tracking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911336384.3A CN111242972B (en) 2019-12-23 2019-12-23 On-line cross-scale multi-fluid target matching tracking method

Publications (2)

Publication Number Publication Date
CN111242972A CN111242972A (en) 2020-06-05
CN111242972B (en) 2023-05-16

Family

ID=70866241

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911336384.3A Active CN111242972B (en) 2019-12-23 2019-12-23 On-line cross-scale multi-fluid target matching tracking method

Country Status (1)

Country Link
CN (1) CN111242972B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330765A (en) * 2022-10-12 2022-11-11 南通新诚电子有限公司 Corrosion defect identification method for aluminum electrolytic capacitor anode foil production
CN115620098B (en) * 2022-12-20 2023-03-10 中电信数字城市科技有限公司 Evaluation method and system of cross-camera pedestrian tracking algorithm and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8116527B2 (en) * 2009-10-07 2012-02-14 The United States Of America As Represented By The Secretary Of The Army Using video-based imagery for automated detection, tracking, and counting of moving objects, in particular those objects having image characteristics similar to background
CN103035011B (en) * 2012-12-06 2016-01-13 河海大学 A kind of method for estimating motion vector of based target feature
CN104376576B (en) * 2014-09-04 2018-06-05 华为技术有限公司 A kind of method for tracking target and device

Also Published As

Publication number Publication date
CN111242972A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN109064484B (en) Crowd movement behavior identification method based on fusion of subgroup component division and momentum characteristics
CN109816695A (en) Target detection and tracking method for infrared small unmanned aerial vehicle under complex background
CN108446634B (en) Aircraft continuous tracking method based on combination of video analysis and positioning information
CN107633226B (en) Human body motion tracking feature processing method
CN107818571A (en) Ship automatic tracking method and system based on deep learning network and average drifting
CN109255781B (en) Object-oriented multispectral high-resolution remote sensing image change detection method
CN103854292B (en) A kind of number and the computational methods and device in crowd movement direction
CN107240122A (en) Video target tracking method based on space and time continuous correlation filtering
CN111080675A (en) Target tracking method based on space-time constraint correlation filtering
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN111882586B (en) Multi-actor target tracking method oriented to theater environment
CN105809714A (en) Track confidence coefficient based multi-object tracking method
CN111881840B (en) Multi-target tracking method based on graph network
CN111242972B (en) On-line cross-scale multi-fluid target matching tracking method
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN107862702A (en) A kind of conspicuousness detection method of combination boundary connected and local contrast
CN109961462A (en) Method for tracking target, device and system
CN104217442B (en) Aerial video moving object detection method based on multiple model estimation
CN104063880A (en) PSO based multi-cell position outline synchronous accurate tracking system
CN113822352A (en) Infrared dim target detection method based on multi-feature fusion
CN114972423A (en) Aerial video moving target detection method and system
CN116245949A (en) High-precision visual SLAM method based on improved quadtree feature point extraction
CN116188943A (en) Solar radio spectrum burst information detection method and device
CN106056078A (en) Crowd density estimation method based on multi-feature regression ensemble learning
Li et al. Insect detection and counting based on YOLOv3 model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant