CN115578421A - Target tracking algorithm based on a multi-graph attention mechanism - Google Patents
Target tracking algorithm based on a multi-graph attention mechanism
- Publication number
- CN115578421A CN115578421A CN202211438781.3A CN202211438781A CN115578421A CN 115578421 A CN115578421 A CN 115578421A CN 202211438781 A CN202211438781 A CN 202211438781A CN 115578421 A CN115578421 A CN 115578421A
- Authority
- CN
- China
- Prior art keywords
- target
- classification
- branch
- graph
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/766—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a target tracking algorithm based on a multi-graph attention mechanism, belonging to the technical field of general image data processing or generation and used for tracking a target in a video. The first frame picture and each subsequent frame of the video are used as the inputs of a template branch and a search branch respectively, and features are extracted from both through a twin network. The output features are fed into a graph attention module, which performs a cross-correlation operation. Its output is fed into an anchor-free tracking head network: a classification branch yields the classification score of each pixel point in the feature map, a centrality branch yields the distance relation between each pixel point and the target center, and a regression branch yields the target-frame information corresponding to each pixel point. The classification score is multiplied by the centrality branch to obtain an accurate classification score; the pixel point with the highest score and its corresponding target-frame information give the position of the target in the current frame, and the steps are repeated for the remaining frames.
Description
Technical Field
The invention discloses a target tracking algorithm based on a multi-graph attention mechanism, and belongs to the technical field of general image data processing or generation.
Background
Target tracking is one of the three main directions of computer vision and has long attracted attention. As research on target tracking deepens, its application field widens: it is applied to intelligent monitoring, vehicle tracking, human-computer interaction and other fields. In practical applications, complex and changeable scenes are often encountered, such as occlusion of the target, complex and changeable backgrounds, appearance changes of the target, and motion blur, and existing trackers cannot deal with these problems well. Moving-target tracking therefore still faces huge challenges, and target tracking algorithms must be continuously explored and improved.
Single-object tracking refers to tracking an object, specified in the first frame of a video, through the subsequent frames. Earlier research was mainly based on correlation-filtering algorithms. With the development of deep learning, the strong feature-extraction capability of convolutional neural networks has attracted wide attention, and the research direction of target tracking has gradually shifted toward deep learning.
Branches have gradually appeared in the research on deep-learning-based target tracking algorithms. Among them, target tracking algorithms based on the twin network reasonably balance tracking speed and tracking precision by virtue of their unique advantages. However, when the target is blurred or the background is cluttered, existing trackers have difficulty extracting the target's features accurately and cannot accurately detect the target's position. On the other hand, most twin-network trackers perform similarity matching against the search area with the features of the whole template picture as the core. The state of the target is not fixed during tracking; when the target is deformed or occluded, its global features change, and global similarity matching then affects the accuracy of the final result.
Disclosure of Invention
The invention aims to provide a target tracking algorithm based on a multi-graph attention mechanism, to solve the problems in the prior art that the target tracking algorithm cannot accurately locate the position of the target when the target's global features change, and that the feature-extraction capability of existing networks cannot cope with the complexity and variability of the target background.
A target tracking algorithm based on a multi-graph attention mechanism, comprising:
S1, taking the first frame picture and each subsequent frame of a video as the inputs of a template branch and a search branch respectively, and performing feature extraction on them through a twin network;
S2, inputting the output features obtained in S1 into a graph attention module to perform a cross-correlation operation;
S3, inputting the output obtained in S2 into an anchor-free tracking head network, obtaining the classification score of each pixel point in the feature map through a classification branch, obtaining the distance relation between each pixel point and the target center through a centrality branch, and obtaining the target-frame information corresponding to each pixel point through a regression branch;
S4, multiplying the classification score obtained in S3 by the centrality branch to obtain an accurate classification score, and finding the pixel point with the highest score and its corresponding target-frame information to obtain the position of the target in the current frame;
S5, repeating S1 to S4 until the positions of the target in all subsequent frames of the video are obtained.
The twin network in S1 is a weight-sharing GoogLeNet using the Inception V3 structure, combined with the SimAM attention mechanism. The specific operations are as follows:
the Inception V3 structure of GoogLeNet is adjusted: only the convolution and pooling layers in front of Inception V3 and the three modules Inception A, Inception B and Inception C are used, and the subsequent Inception modules and other network layers are not used;
attention modules are added: among the three Inception modules, one SimAM attention module is added after the first and one after the third.
The specific construction process of the graph attention module in S2 is as follows:
S2.1, converting the feature maps of the template frame and the search frame into graphs: each 1×1×C part of a feature map is taken as a node, and a corresponding bipartite graph G = (V, E) is constructed, where the node set V consists of the template-subgraph nodes V_t and the search-subgraph nodes V_s, i.e., V = V_t ∪ V_s, and E is the edge set connecting the template nodes with the search nodes;
S2.2, according to the constructed bipartite graph G, computing the similarity between the nodes of V_t and V_s, using three graph attention modules to operate on the graph respectively to obtain the corresponding similarity graphs;
S2.3, normalizing each of the three obtained similarity graphs by softmax to obtain the attention of the nodes in V_t to each node in V_s, thereby obtaining the aggregated feature of any node j in V_s;
S2.4, fusing the obtained aggregated features with the linearized features of the corresponding nodes to obtain the feature expressions;
S2.5, obtaining the feature expressions of all nodes j through the above operations, along with the corresponding three complete feature maps F, which are fused to obtain the final feature expression for subsequent positioning and tracking.
In S3, the tracking head network is divided into a classification branch and a regression branch: the classification branch distinguishes the category of the target and positions the target; the regression branch regresses the target frame of the target to obtain the scale information of the target.
The response map obtained by the classification branch has size H×W×2, where H and W represent the height and width of the response map respectively and 2 represents the number of channels; the two channels store the classification scores of all pixel points, namely the probability of a positive sample and the probability of a negative sample.
The final response map of the regression branch has size H×W×4, where each pixel point corresponds one-to-one with a pixel point of the classification response map; the four channels corresponding to each point (i, j) contain the distances from the point to the four edges of the bounding box, denoted (l, t, r, b), the distances from the point to the left, top, right and bottom edges of the bounding box respectively.
The classification branch and the centrality branch use a cross-entropy loss function to calculate the accuracy of the classification and of the centrality score respectively, and the regression branch uses an IOU loss function. The final loss L of the whole network is expressed as L = λ1·L_cls + λ2·L_cen + λ3·L_reg, where λ1, λ2 and λ3 are set to 1, 1 and 2 respectively, and L_cls, L_cen and L_reg denote the classification loss, the centrality loss and the regression loss respectively.
In S4, the response map of the centrality branch has size H×W×1; the centrality score of each pixel point is C(i, j), and C(i, j) is multiplied by the classification score to obtain a more accurate target score.
Compared with the prior art, the invention uses the Inception V3 structure of GoogLeNet, modified to better suit the proposed model, which reduces the training parameters; combined with the SimAM attention mechanism, the capability of extracting target features under complex backgrounds and target blur is greatly improved without adding new parameters, improving the accuracy of subsequent target positioning. By constructing multiple bipartite graphs on the feature maps of the template branch and the search branch, the traditional global matching mode centered on the whole template picture is converted into local feature matching, which effectively solves the problem of inaccurate feature matching when the target is deformed or occluded during tracking, improves the accuracy of classifying each pixel point in the feature map, and improves the tracking accuracy of the tracker.
Drawings
FIG. 1 is a technical flow chart of the present invention.
Fig. 2 is an overall block diagram of the present invention.
FIG. 3 is a schematic diagram of the SimAM attention mechanism of the present invention.
FIG. 4 is a block diagram of the graph attention module of the present invention.
Fig. 5 is a graph comparing the accuracy of the invention and existing tracking algorithms on UAV123.
Fig. 6 is a graph comparing the success rate of the invention and existing tracking algorithms on UAV123.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention are described below. It is obvious that the described embodiments are some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
A target tracking algorithm based on a multi-graph attention mechanism, comprising:
S1, taking the first frame picture and each subsequent frame of a video as the inputs of a template branch and a search branch respectively, and performing feature extraction on them through a twin network;
S2, inputting the output features obtained in S1 into a graph attention module to perform a cross-correlation operation;
S3, inputting the output obtained in S2 into an anchor-free tracking head network, obtaining the classification score of each pixel point in the feature map through a classification branch, obtaining the distance relation between each pixel point and the target center through a centrality branch, and obtaining the target-frame information corresponding to each pixel point through a regression branch;
S4, multiplying the classification score obtained in S3 by the centrality branch to obtain an accurate classification score, and finding the pixel point with the highest score and its corresponding target-frame information to obtain the position of the target in the current frame;
S5, repeating S1 to S4 until the positions of the target in all subsequent frames of the video are obtained.
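Steps S1 to S5 above amount to a per-frame loop. The sketch below is purely illustrative: the backbone, graph attention module and tracking head are replaced by dummy NumPy stand-ins, and all function names, shapes and values are assumptions rather than the patent's implementation; only the control flow mirrors S1-S5.

```python
import numpy as np

H, W = 16, 16  # response-map size (assumed)

def extract_features(frame):
    # S1: stand-in for the weight-sharing GoogLeNet/Inception V3 backbone
    rng = np.random.default_rng(abs(hash(frame.tobytes())) % (2**32))
    return rng.random((H, W, 256))

def graph_attention(template_feat, search_feat):
    # S2: stand-in for the multi-graph attention cross-correlation
    return template_feat * search_feat

def tracking_head(feat):
    # S3: classification score, centrality score, per-pixel box (l, t, r, b)
    cls = feat.mean(axis=2)
    cen = feat.max(axis=2)
    reg = np.stack([feat[..., :4].mean(axis=2)] * 4, axis=-1)
    return cls, cen, reg

def track(frames):
    template_feat = extract_features(frames[0])          # S1 (template branch)
    boxes = []
    for frame in frames[1:]:                             # S5: repeat per frame
        search_feat = extract_features(frame)            # S1 (search branch)
        fused = graph_attention(template_feat, search_feat)  # S2
        cls, cen, reg = tracking_head(fused)             # S3
        score = cls * cen                                # S4: refined score
        i, j = np.unravel_index(np.argmax(score), score.shape)
        boxes.append((i, j, reg[i, j]))                  # S4: best pixel + box
    return boxes

frames = [np.full((64, 64, 3), v, dtype=np.uint8) for v in range(5)]
boxes = track(frames)
```

One position-and-box tuple is produced for every frame after the template frame.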
The twin network in S1 is a weight-sharing GoogLeNet using the Inception V3 structure, combined with the SimAM attention mechanism. The specific operations are as follows:
the Inception V3 structure of GoogLeNet is adjusted: only the convolution and pooling layers in front of Inception V3 and the three modules Inception A, Inception B and Inception C are used, and the subsequent Inception modules and other network layers are not used;
attention modules are added: among the three Inception modules, one SimAM attention module is added after the first and one after the third.
The specific construction process of the graph attention module in S2 is as follows:
S2.1, converting the feature maps of the template frame and the search frame into graphs: each 1×1×C part of a feature map is taken as a node, and a corresponding bipartite graph G = (V, E) is constructed, where the node set V consists of the template-subgraph nodes V_t and the search-subgraph nodes V_s, i.e., V = V_t ∪ V_s, and E is the edge set connecting the template nodes with the search nodes;
S2.2, according to the constructed bipartite graph G, computing the similarity between the nodes of V_t and V_s, using three graph attention modules to operate on the graph respectively to obtain the corresponding similarity graphs;
S2.3, normalizing each of the three obtained similarity graphs by softmax to obtain the attention of the nodes in V_t to each node in V_s, thereby obtaining the aggregated feature of any node j in V_s;
S2.4, fusing the obtained aggregated features with the linearized features of the corresponding nodes to obtain the feature expressions;
S2.5, obtaining the feature expressions of all nodes j through the above operations, along with the corresponding three complete feature maps F, which are fused to obtain the final feature expression for subsequent positioning and tracking.
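The graph construction of S2.1 can be illustrated as follows, under the reading that every spatial position (a 1×1×C part) of a feature map becomes a node and the bipartite edge set connects every template node to every search node. The feature-map sizes and channel count below are arbitrary assumptions.

```python
import numpy as np

C = 8
template_feat = np.random.rand(5, 5, C)   # template-branch feature map
search_feat = np.random.rand(9, 9, C)     # search-branch feature map

# Node sets V_t and V_s: one node vector per spatial position
V_t = template_feat.reshape(-1, C)        # 25 template-subgraph nodes
V_s = search_feat.reshape(-1, C)          # 81 search-subgraph nodes

# Edge set E of the bipartite graph G = (V_t ∪ V_s, E)
E = [(i, j) for i in range(len(V_t)) for j in range(len(V_s))]
```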
In S3, the tracking head network is divided into a classification branch and a regression branch: the classification branch distinguishes the category of the target and positions the target; the regression branch regresses the target frame of the target to obtain the scale information of the target.
The response map obtained by the classification branch has size H×W×2, where H and W represent the height and width of the response map respectively and 2 represents the number of channels; the two channels store the classification scores of all pixel points, namely the probability of a positive sample and the probability of a negative sample.
The final response map of the regression branch has size H×W×4, where each pixel point corresponds one-to-one with a pixel point of the classification response map; the four channels corresponding to each point (i, j) contain the distances from the point to the four edges of the bounding box, denoted (l, t, r, b), the distances from the point to the left, top, right and bottom edges of the bounding box respectively.
The classification branch and the centrality branch use a cross-entropy loss function to calculate the accuracy of the classification and of the centrality score respectively, and the regression branch uses an IOU loss function. The final loss L of the whole network is expressed as L = λ1·L_cls + λ2·L_cen + λ3·L_reg, where λ1, λ2 and λ3 are set to 1, 1 and 2 respectively, and L_cls, L_cen and L_reg denote the classification loss, the centrality loss and the regression loss respectively.
In S4, the response map of the centrality branch has size H×W×1; the centrality score of each pixel point is C(i, j), and C(i, j) is multiplied by the classification score to obtain a more accurate target score.
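The score refinement of S4 reduces to an element-wise product followed by an argmax. The tiny maps below are synthetic values chosen for illustration; the centrality map down-weights high classification scores that lie far from the object center.

```python
import numpy as np

cls_score = np.array([[0.2, 0.9, 0.1],
                      [0.3, 0.8, 0.2],
                      [0.1, 0.2, 0.1]])   # positive-sample probability
centrality = np.array([[0.1, 0.5, 0.1],
                       [0.4, 1.0, 0.3],
                       [0.1, 0.3, 0.1]])  # C(i, j): 1.0 at the center

final_score = cls_score * centrality                     # refined score map
i, j = np.unravel_index(np.argmax(final_score), final_score.shape)
```

Here the raw classification maximum (0.9) is off-center; after weighting, the selected pixel is the center point (1, 1).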
Some of the English terms in the invention are now explained. GoogLeNet: a deep learning network architecture. Inception V3: a neural network structure. SimAM: a three-dimensional attention mechanism. Inception A, Inception B, Inception C: specific network modules in GoogLeNet. Padding: filling. IOU (intersection over union): a criterion measuring the accuracy with which an object is detected in a particular data set; the IoU is the overlapping area of two regions divided by the area of their union, and the result is compared against a set threshold. UAV123: a data set for testing tracker performance. CNN: convolutional neural network. Ground truth: the manually marked range of the object to be detected in the training-set images. ResNet: residual neural network. AlexNet: a deep learning network architecture. GOT10K: a data set for testing tracker performance. COCO, ImageNet DET, ImageNet VID and YouTube-BB: common target tracking training sets, the data sets used to train the network. SiamGAT, SiamCAR, KCF, Ocean-online, CFNet, MDNet, ECO, SiamFC, SPM, SiamRPN++, SiamFC++, CGACD, SiamBAN, SiamRPN, SiamDW: relatively advanced tracking algorithms in the target tracking direction.
The technical process of the invention is shown in Fig. 1. The overall network of the model is constructed from a feature extraction module, a graph attention module and a tracking head network. The feature extraction module consists of two weight-sharing CNNs, used to extract the features of the template picture and the search area respectively; the graph attention module is mainly used to compute the similarity between the template picture and the search area and embed the feature information of the template into the search area; the tracking head network consists of classification and regression branches and is used to position and track the target. The twin network structure of the invention is shown in Table 1 and Fig. 2.
TABLE 1
The SimAM attention mechanism is inspired by the attention mechanism of the human brain and can derive 3-D attention weights for a feature map without additional parameters, as shown in Fig. 3. In neuroscience, information-rich neurons usually exhibit firing patterns different from those of peripheral neurons, and the activation of a neuron usually suppresses the peripheral neurons, i.e., spatial-domain suppression; neurons with a spatial-domain suppression effect should therefore be given higher importance. To find these neurons, the linear separability between a target neuron and the other neurons can be measured. Based on these findings from neuroscience, SimAM defines an energy function whose minimization is equivalent to training the linear separability between a neuron t and the other neurons in the same channel; using binary labels and adding a regularization term yields the final energy function e_t*. The lower the energy e_t*, the more the neuron t differs from the peripheral neurons and the higher its importance, so the importance of a neuron can be obtained from 1/e_t*. Inspired by this energy function and the mining of important neurons, the features are enhanced according to the definition of the attention mechanism. The whole feature extraction process can be represented as F_z = φ(z) and F_x = φ(x), where φ represents the convolution operation, z and x represent the inputs of the template branch and the search branch respectively, and F_z and F_x are the feature maps obtained after feature extraction by Inception V3.
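The energy-based weighting just described can be sketched in a few lines. The sketch below follows the published closed-form SimAM simplification (E_inv = d / (4(v + λ)) + 0.5, then a sigmoid gate) in plain NumPy; the array shapes and the λ default are assumptions, not values from the patent.

```python
import numpy as np

def simam(X, lam=1e-4):
    """Parameter-free SimAM: per-channel, energy-based 3-D attention weights.
    X: feature map of shape (H, W, C)."""
    n = X.shape[0] * X.shape[1] - 1
    d = (X - X.mean(axis=(0, 1), keepdims=True)) ** 2   # (t - mu)^2 per channel
    v = d.sum(axis=(0, 1), keepdims=True) / n           # channel variance
    E_inv = d / (4 * (v + lam)) + 0.5                   # importance 1 / e_t*
    return X * (1.0 / (1.0 + np.exp(-E_inv)))           # sigmoid gating

X = np.random.rand(7, 7, 16)
Y = simam(X)
```

No trainable parameters are introduced: the gate is computed entirely from the statistics of the feature map itself.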
The invention uses three graph attention modules to operate on the graph respectively, and the obtained similarity graphs can be expressed as:
e_ij = (W_a f_i)^T (W_b f_j), where f_i and f_j respectively represent the node vectors of the template subgraph and the search subgraph, and W_a and W_b are 1×1 convolutions that linearize the node vectors.
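A minimal sketch of one such similarity computation, assuming the 1×1 convolutions act as plain linear maps W_a and W_b on the node vectors; all shapes and values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
C, Nt, Ns = 8, 4, 6
F_t = rng.random((Nt, C))          # template-subgraph node vectors f_i
F_s = rng.random((Ns, C))          # search-subgraph node vectors f_j
W_a = rng.random((C, C))           # 1x1-conv linearization (template side)
W_b = rng.random((C, C))           # 1x1-conv linearization (search side)

e = (F_t @ W_a.T) @ (F_s @ W_b.T).T        # similarity e_ij, shape (Nt, Ns)

# softmax over template nodes: alpha[:, j] is the attention that search
# node j pays to each template node
alpha = np.exp(e - e.max(axis=0)) / np.exp(e - e.max(axis=0)).sum(axis=0)

aggregated = alpha.T @ F_t                  # aggregated feature per search node
```

Running three instances of this module with independent weight matrices yields the three similarity graphs described above.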
A moving target is often subject to illumination changes, motion blur and the like. To address this, the invention establishes bipartite graphs from the features of the template picture and the features of the search area, building local relations between nodes, and then performs the similarity calculation through multiple graph attention modules; the detailed process is shown in Fig. 4.
Aggregation of features: h_j^k = Σ_i α_ij (W_v f_i), k = 1, 2, 3, where α_ij is the softmax-normalized attention of template node i to search node j and W_v is a 1×1 convolution. The obtained aggregated features h_j^k are fused with the linearized features of the corresponding nodes to obtain more expressive features: f'_j^k = cat(h_j^k, W f_j), k = 1, 2, 3, where cat represents the concatenation of features.
Through the above operations the feature expressions of all nodes j are obtained, along with the corresponding three complete feature maps F_1, F_2 and F_3, which are fused to obtain the final feature expression used for subsequent positioning and tracking: F = cat(F_1, F_2, F_3), where cat here represents the channel-wise concatenation of the three feature maps; the concatenated feature information is then fused by a convolution kernel of size 1×1.
For a faster regression network, the classification branch adopts a cross-entropy loss function and the regression branch adopts an IOU loss function. The upper-left corner and the lower-right corner of the bounding box of the target are denoted (x0, y0) and (x1, y1) respectively. The distances from any point (x, y) in the search area to the edges of the bounding box can be expressed as: l = x - x0, t = y - y0, r = x1 - x, b = y1 - y, where l is the distance from the point to the left edge of the bounding box, r the distance to the right edge, t the distance to the top edge, and b the distance to the bottom edge.
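A small worked example of the (l, t, r, b) regression targets, with an illustrative box and point; note that l + r recovers the box width and t + b the box height.

```python
x0, y0, x1, y1 = 10.0, 20.0, 50.0, 60.0   # bounding-box corners (illustrative)
x, y = 25.0, 45.0                          # a point inside the box

l = x - x0   # distance to the left edge  -> 15.0
t = y - y0   # distance to the top edge   -> 25.0
r = x1 - x   # distance to the right edge -> 25.0
b = y1 - y   # distance to the bottom edge -> 15.0
```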
The difference between the ground-truth bounding box and the prediction box is calculated through the IOU loss function, and the target box is regressed.
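The text does not spell out the exact form of the IOU loss; a common minimal variant, loss = 1 - IoU, can be sketched as follows (box format and function name are assumptions).

```python
def iou_loss(pred, gt):
    """pred, gt: boxes as (x0, y0, x1, y1). Returns 1 - IoU."""
    ix0, iy0 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix1, iy1 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)   # intersection area
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    union = area_p + area_g - inter
    return 1.0 - inter / union

loss = iou_loss((0, 0, 2, 2), (1, 1, 3, 3))   # IoU = 1/7, loss = 6/7
```

The loss vanishes for a perfect prediction and approaches 1 as the boxes stop overlapping.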
Investigation shows that the score of the classification branch does not necessarily indicate the location of the target accurately, and most high-quality target frames are generated near the center of the target. The invention therefore adds a centrality branch to the classification branch to further evaluate the classification score. Given the response map of the centrality branch, the centrality score C(i, j) of each pixel point is expressed as: C(i, j) = sqrt((min(l, r) / max(l, r)) × (min(t, b) / max(t, b))).
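A sketch of a centrality score of this kind, assuming the standard anchor-free (FCOS-style) centerness form: it equals 1 when the point sits at the exact center of the box and decays toward 0 at the edges.

```python
import math

def centerness(l, t, r, b):
    """FCOS-style centerness from the distances to the four box edges.
    (Assumed form; shown for illustration.)"""
    return math.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))

c_centre = centerness(20, 20, 20, 20)   # point at the exact center -> 1.0
c_edge = centerness(1, 20, 39, 20)      # point close to the left edge
```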
Multiplying C(i, j) by the classification score yields a more accurate target score, making the positioning more accurate.
The method of the present invention was experimentally tested on GOT10K and UAV123 and compared with some of the currently more advanced trackers. In the comparison on the UAV123 data set, the model of the invention was trained on only one data set, GOT10K, while the other trackers were trained on four data sets: COCO, ImageNet DET, ImageNet VID and YouTube-BB.
UAV123 contains 123 fully annotated high-definition videos and benchmarks captured from a low-altitude aerial perspective. It covers multiple attributes, including aspect-ratio change, background clutter, camera motion, fast motion, full occlusion, illumination change, low resolution, out-of-view, partial occlusion, similar targets, scale change and viewpoint change, so it tests the comprehensive performance of a tracker well. The GOT10K test set consists of 180 video sequences covering 84 kinds of moving objects and 32 motion modes, which makes the test closer to reality and better evaluates the performance of a tracker.
The tracker of the present invention was tested and evaluated on GOT10K together with advanced trackers such as SiamGAT, SiamCAR, KCF and Ocean; the final results are shown in Table 2.
TABLE 2
AO represents the average overlap rate between the prediction box of the tracker and the true target box; SR_0.5 and SR_0.75 respectively represent the proportions of prediction boxes that successfully track the target with an overlap ratio with the real target box greater than 50% and greater than 75%, which evaluates the tracking precision of the tracker more accurately. The table shows that the tracker of the invention achieves good results in overall performance.
Comparing the tracker of the invention with advanced trackers such as Ocean, SiamRPN++ and SiamCAR, the resulting precision plot and success plot are shown in Fig. 5 and Fig. 6 respectively. Fig. 5 is the OPE precision plot on UAV123, including precision and location-error thresholds; the figures show that, even when trained on a smaller data set, the model of the invention has great advantages in both accuracy and precision.
The test results on the GOT10K and UAV123 data sets show that the tracker of the invention achieves a marked improvement in comprehensive performance, which also verifies the effectiveness of the algorithm proposed by the invention.
Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or some or all of the technical features may be equivalently replaced, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (6)
1. A target tracking algorithm based on a multi-graph attention mechanism, characterized by comprising the following steps:
s1, respectively taking a first frame picture and a subsequent frame in a video as input of a template branch and a search branch, and performing feature extraction on the first frame picture and the subsequent frame through a twin network;
s2, inputting the output characteristics obtained in the S1 into a graph attention module to perform cross-correlation operation;
s3, inputting the output obtained in the S2 into an anchor-free tracking head network, obtaining the classification score of each pixel point in the characteristic diagram through classification branches, obtaining the distance relation between each pixel point and a target center through centrality branches, and obtaining target frame information corresponding to each pixel point through regression branches;
s4, multiplying the classification fraction obtained in the step S3 by the central degree branch to obtain an accurate classification fraction, finding out the pixel point with the highest fraction and the corresponding target frame information thereof, and obtaining the position of the target of the current frame;
and S5, repeating the steps from S1 to S4 until the positions of the targets in all the subsequent frames of the video are obtained.
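The score fusion of step S4 can be sketched as follows. This is a minimal numpy illustration, not the patented implementation: the function name, map sizes and example values are all assumed.

```python
import numpy as np

def select_target(cls_score, centrality, reg_map):
    # cls_score  : (H, W) per-pixel positive-sample probability (classification branch)
    # centrality : (H, W) per-pixel centrality score (centrality branch)
    # reg_map    : (H, W, 4) per-pixel distances (l, t, r, b) to the box edges (regression branch)
    fused = cls_score * centrality                       # S4: refine scores with centrality
    i, j = np.unravel_index(np.argmax(fused), fused.shape)
    l, t, r, b = reg_map[i, j]
    # turn the winning pixel and its edge distances into corner coordinates (x1, y1, x2, y2)
    return (j - l, i - t, j + r, i + b), fused[i, j]

H, W = 4, 4
cls = np.zeros((H, W)); cls[2, 1] = 0.9; cls[0, 3] = 0.95   # a high score near the border...
cen = np.zeros((H, W)); cen[2, 1] = 0.8; cen[0, 3] = 0.1    # ...is suppressed by low centrality
reg = np.ones((H, W, 4))
box, score = select_target(cls, cen, reg)
print(box, score)
```

The centrality product suppresses high classification scores that fall far from the target centre, so the border pixel loses to the centred one despite its higher raw score.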
2. The target tracking algorithm based on a multi-graph attention mechanism according to claim 1, wherein the Siamese network in S1 is a weight-sharing GoogLeNet using the Inception-v3 structure, combined with the SimAM attention mechanism, as follows:
the Inception-v3 structure of GoogLeNet is adjusted so that only the convolution and pooling layers preceding the Inception blocks, together with the InceptionA, InceptionB and InceptionC modules, are used; the subsequent Inception modules and remaining network layers are not used;
attention modules are added: one SimAM attention module is added after each of the three Inception modules, and one after the first and third InceptionC modules.
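SimAM is parameter-free, so the attention inserted after the Inception modules can be sketched directly in numpy. The regularization constant `lam` below is an assumed value, not one taken from the patent.

```python
import numpy as np

def simam(x, lam=1e-4):
    # x: (C, H, W) feature map. SimAM re-weights every activation by an
    # energy-based importance score and introduces no learnable parameters.
    h, w = x.shape[1], x.shape[2]
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)
    d = (x - mu) ** 2
    var = d.sum(axis=(1, 2), keepdims=True) / n
    e_inv = d / (4 * (var + lam)) + 0.5         # inverse of the per-neuron energy
    return x * (1.0 / (1.0 + np.exp(-e_inv)))   # sigmoid gate; shape is preserved

y = simam(np.ones((2, 3, 3)))
print(y.shape)
```

Because the module changes no shapes and adds no weights, it can be dropped after any Inception block without altering the rest of the backbone.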
3. The target tracking algorithm based on a multi-graph attention mechanism according to claim 1, wherein the graph attention module of S2 is constructed as follows:
S2.1, building graphs from the feature maps of the template frame and the search frame: each fixed-size part of a feature map is taken as a node, and a corresponding bipartite graph G = (V, E) is constructed, where the node set V consists of the nodes V_t of the template subgraph G_t and the nodes V_s of the search subgraph G_s, and E is the set of edges connecting them;
S2.2, according to the constructed bipartite graph G, computing the similarity between the nodes of V_t and V_s, and operating on them with three graph attention modules respectively to obtain the corresponding similarity maps;
S2.3, normalizing each of the three similarity maps with softmax to obtain the attention of the nodes in V_t to the nodes in V_s, from which the aggregated feature of any node j in V_s is obtained;
S2.4, fusing the obtained aggregated features with the linear features of the corresponding nodes in V_s to obtain the final feature representation.
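Steps S2.2 to S2.4 can be sketched as bipartite attention between template and search nodes. The matrices `Wq`, `Wk`, `Wv` stand in for the three learned linear maps of the graph attention modules, and the concatenation in the last line is one plausible reading of the fusion in S2.4; all names and sizes are illustrative.

```python
import numpy as np

def graph_attention_fuse(t_nodes, s_nodes, Wq, Wk, Wv):
    # t_nodes: (Nt, D) template-subgraph node features
    # s_nodes: (Ns, D) search-subgraph node features
    q = s_nodes @ Wq                 # (Ns, Dh) linear features of the search nodes
    k = t_nodes @ Wk                 # (Nt, Dh) template-node keys
    v = t_nodes @ Wv                 # (Nt, Dh) template-node values
    sim = q @ k.T                    # (Ns, Nt) similarity map (S2.2)
    sim -= sim.max(axis=1, keepdims=True)                         # numerical stability
    att = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)    # softmax over V_t (S2.3)
    agg = att @ v                    # aggregated feature for every search node
    return np.concatenate([agg, q], axis=1)                       # fusion (S2.4)

rng = np.random.default_rng(0)
D, Dh, Nt, Ns = 8, 4, 6, 10
out = graph_attention_fuse(rng.normal(size=(Nt, D)), rng.normal(size=(Ns, D)),
                           *(rng.normal(size=(D, Dh)) for _ in range(3)))
print(out.shape)
```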
4. The target tracking algorithm based on a multi-graph attention mechanism, wherein the tracking head network of S3 is divided into a classification branch and a regression branch: the classification branch distinguishes the category of the target and locates the target; the regression branch regresses the target box to obtain the scale information of the target;
the response map obtained by the classification branch is denoted R_cls ∈ ℝ^(h×w×2), where h and w represent the height and width of the response map and 2 the number of channels; the two channels store the classification scores of all pixels, namely the probability of being a positive sample and the probability of being a negative sample respectively;
the final response map of the regression branch is R_reg ∈ ℝ^(h×w×4), whose pixels correspond one-to-one to the pixels of the classification response map; the four channels corresponding to each point (i, j) contain the distances from that point to the four edges of the bounding box, denoted t(i, j) = (l, t, r, b), where t(i, j) is the four-channel vector corresponding to (i, j) and l, t, r, b are the distances from the point to the left, top, right and bottom edges of the bounding box respectively.
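The layout of the regression response map can be illustrated by building its ground-truth counterpart: for every pixel, the four channels hold the distances to the four sides of a box. The function name and grid size below are illustrative only.

```python
import numpy as np

def ltrb_targets(h, w, box):
    # For each pixel (i, j) of an h×w map, store its distances (l, t, r, b)
    # to the four sides of the box (x1, y1, x2, y2), giving an (h, w, 4)
    # array aligned pixel-for-pixel with the classification map.
    x1, y1, x2, y2 = box
    ys, xs = np.mgrid[0:h, 0:w]
    return np.stack([xs - x1, ys - y1, x2 - xs, y2 - ys], axis=-1)

t = ltrb_targets(5, 5, (1, 1, 3, 4))
print(t.shape, t[2, 2])   # pixel (2, 2): distances l, t, r, b
```

Inverting this mapping at a single pixel, as in step S4, recovers the box corners from the four stored distances.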
5. The target tracking algorithm based on a multi-graph attention mechanism according to claim 3, wherein the classification branch and the centrality branch use cross-entropy loss functions to measure the accuracy of the classification and of the centrality score respectively, and the regression branch uses an IoU loss function; the final loss L of the whole network is expressed as L = λ1·L_cls + λ2·L_cen + λ3·L_reg, where λ1, λ2 and λ3 are set to 1, 1 and 2 respectively, and L_cls, L_cen and L_reg denote the classification loss, the centrality loss and the regression loss.
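The total loss combines three terms with weights 1, 1 and 2. A sketch with a simple IoU loss on boxes given as (x1, y1, x2, y2); the helper names are illustrative, and the cross-entropy terms are passed in as precomputed scalars.

```python
def iou_loss(pred, gt):
    # 1 - IoU between two axis-aligned boxes (x1, y1, x2, y2)
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return 1.0 - inter / (area(pred) + area(gt) - inter)

def total_loss(l_cls, l_cen, l_reg, w=(1.0, 1.0, 2.0)):
    # weighted sum of classification, centrality and regression losses
    return w[0] * l_cls + w[1] * l_cen + w[2] * l_reg

l_reg = iou_loss((0, 0, 2, 2), (1, 1, 3, 3))
print(total_loss(0.5, 0.3, l_reg))
```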
6. The target tracking algorithm based on a multi-graph attention mechanism according to claim 5, wherein the response map of the centrality branch in S4 is a single-channel map whose value at each pixel is the centrality score C(i, j), and C(i, j) is multiplied by the classification score to obtain a more accurate target score.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211438781.3A CN115578421B (en) | 2022-11-17 | 2022-11-17 | Target tracking algorithm based on multi-graph attention machine mechanism |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115578421A true CN115578421A (en) | 2023-01-06 |
CN115578421B CN115578421B (en) | 2023-03-14 |
Family
ID=84589711
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211438781.3A Active CN115578421B (en) | 2022-11-17 | 2022-11-17 | Target tracking algorithm based on multi-graph attention machine mechanism |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090043818A1 (en) * | 2005-10-26 | 2009-02-12 | Cortica, Ltd. | Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof |
CN113256677A (en) * | 2021-04-16 | 2021-08-13 | 浙江工业大学 | Method for tracking visual target with attention |
CN114707604A (en) * | 2022-04-07 | 2022-07-05 | 江南大学 | Twin network tracking system and method based on space-time attention mechanism |
CN114821390A (en) * | 2022-03-17 | 2022-07-29 | 齐鲁工业大学 | Twin network target tracking method and system based on attention and relationship detection |
CN115187629A (en) * | 2022-05-24 | 2022-10-14 | 浙江师范大学 | Method for fusing target tracking features by using graph attention network |
Non-Patent Citations (2)
Title |
---|
JIN-KUI: "A visual attention model for robot object tracking", International Journal of Automation & Computing * |
DONG Jifu et al.: "Online adaptive Siamese network tracking algorithm based on an attention mechanism", Laser & Optoelectronics Progress * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||