CN115578421B - Target tracking algorithm based on multi-graph attention mechanism - Google Patents
- Publication number
- CN115578421B (application CN202211438781.3A)
- Authority
- CN
- China
- Prior art keywords
- target
- branch
- classification
- graph
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/766—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a target tracking algorithm based on a multi-graph attention mechanism, belonging to the technical field of general image data processing or generation, for tracking a target in a video. The first frame picture and each subsequent frame of the video are used as the input of a template branch and a search branch respectively, and feature extraction is performed on them through a twin network. The output features are input into a graph attention module to perform a cross-correlation operation. The result is then input into an anchor-free tracking head network: a classification branch yields the classification score of each pixel in the feature map, a centrality branch yields the distance relation between each pixel and the target center, and a regression branch yields the target-box information corresponding to each pixel. The classification score is multiplied by the centrality score to obtain a refined classification score; the pixel with the highest score and its corresponding target-box information give the position of the target in the current frame, and the steps are repeated for each subsequent frame.
Description
Technical Field
The invention discloses a target tracking algorithm based on a multi-graph attention mechanism, and belongs to the technical field of general image data processing or generation.
Background
Target tracking is one of the three mainstream directions of computer vision and has long attracted attention. As research on it deepens, its application field widens, covering intelligent monitoring, vehicle tracking, human-computer interaction, and other areas. Practical applications often involve complex and changeable scenes, such as target occlusion, complex and changing backgrounds, target appearance change, and motion blur. Existing trackers cannot handle these problems well, so moving-target tracking still faces great challenges, and target tracking algorithms require continuous exploration and improvement.
Single-object tracking means that the object to be tracked is given in the first frame of a video and is then tracked in subsequent frames. Earlier research was mainly based on correlation-filtering algorithms; with the development of deep learning, the strong feature-extraction capability of convolutional neural networks has attracted wide attention, and the research direction of target tracking has gradually shifted toward deep learning.
In the course of research on deep-learning-based target tracking, several branches have gradually appeared; among them, twin-network-based trackers reasonably balance tracking speed and tracking accuracy thanks to their unique advantages. However, when faced with blurred targets, cluttered backgrounds, and similar situations, existing trackers have difficulty extracting target features accurately and cannot detect the target position precisely. Moreover, most twin-network trackers use the features of the whole template picture as a kernel for similarity matching with the search area. The state of the target is not fixed during tracking: when the target is deformed or occluded, its global features change, and global similarity matching then degrades the accuracy of the final result.
Disclosure of Invention
The invention aims to provide a target tracking algorithm based on a multi-graph attention mechanism, to solve the problems in the prior art that target tracking algorithms cannot locate the target accurately when its global features change, and that existing network feature-extraction capability cannot cope with complex and changeable target backgrounds.
A target tracking algorithm based on a multi-graph attention mechanism, comprising:
S1. Taking the first frame picture and a subsequent frame of a video as the input of a template branch and a search branch respectively, and performing feature extraction on them through a twin network;
S2. Inputting the output features obtained in S1 into a graph attention module to perform a cross-correlation operation;
S3. Inputting the output obtained in S2 into an anchor-free tracking head network, obtaining the classification score of each pixel in the feature map through a classification branch, obtaining the distance relation between each pixel and the target center through a centrality branch, and obtaining the target-box information corresponding to each pixel through a regression branch;
S4. Multiplying the classification score obtained in S3 by the centrality score to obtain a refined classification score, and finding the pixel with the highest score and its corresponding target-box information to obtain the position of the target in the current frame;
S5. Repeating S1 to S4 until the positions of the target in all subsequent frames of the video are obtained.
The twin network in S1 is a weight-sharing GoogLeNet that uses the InceptionV3 structure and is combined with the SimAM attention mechanism. The specific operations are as follows:
The InceptionV3 structure of GoogLeNet is adjusted: only the convolution and pooling layers before the Inception modules, together with the InceptionA, InceptionB, and InceptionC modules, are used; the subsequent Inception modules and other network layers are not used.
Attention modules are added: a SimAM attention module is inserted after the first and the third Inception modules.
The specific construction process of the graph attention module in S2 is as follows:
S2.1. The feature maps of the template frame and the search frame are converted into graphs: each 1×1×C-sized portion of a feature map is taken as a node, and a corresponding bipartite graph G = (V, E) is constructed, where the node set V consists of the nodes V_t of the template subgraph and the nodes V_s of the search subgraph, and E is the set of edges connecting template nodes to search nodes;
S2.2. According to the constructed bipartite graph G, the similarity between the nodes of V_t and V_s is computed, and three graph attention modules operate on the nodes separately to obtain the corresponding similarity graphs S_k (k = 1, 2, 3);
S2.3. The three similarity graphs S_k are each normalized by softmax to obtain the attention a_ij of node i in V_t to node j in V_s, from which the aggregated feature ĥ_j of any node j in V_s is obtained;
S2.4. The aggregated feature ĥ_j is fused with the linearized feature of the corresponding node to obtain the feature representation f_j;
S2.5. The above operations yield the feature representations f_j of all nodes j and the corresponding three complete feature maps F_k, which are fused to obtain the final feature representation for subsequent localization and tracking.
In S3, the tracking head network is divided into a classification branch and a regression branch: the classification branch distinguishes the category of the target and locates it, and the regression branch regresses the target box to obtain the scale information of the target.
The response map obtained by the classification branch is R_cls ∈ R^{h×w×2}, where h and w denote the height and width of the response map and 2 is the number of channels; the two channels store the classification scores of all pixels, namely the probability of being a positive sample and the probability of being a negative sample.
The final response map of the regression branch is R_reg ∈ R^{h×w×4}, where each pixel corresponds one-to-one to a pixel of the classification response map; the four channels at each point (i, j) contain the distances of that point from the four sides of the bounding box, denoted T(i, j) = (l, t, r, b), where l, t, r, b are the distances of the point from the left, top, right, and bottom sides of the bounding box.
The classification branch and the centrality branch use a cross-entropy loss function to compute the accuracy of the classification and of the centrality score respectively, and the regression branch uses an IOU loss function. The final loss of the whole network is L = λ1·L_cls + λ2·L_cen + λ3·L_reg, where λ1, λ2, and λ3 are set to 1, 1, and 2 respectively, and L_cls, L_cen, and L_reg denote the classification loss, centrality loss, and regression loss.
In S4, the response map of the centrality branch is R_cen ∈ R^{h×w×1} and the centrality score of each pixel is C(i, j); C(i, j) is multiplied by the classification score to obtain a more accurate target score.
Compared with the prior art, the invention uses and modifies the InceptionV3 structure of GoogLeNet to better suit the proposed model, reducing the training parameters, and combines it with the SimAM attention mechanism, which greatly improves the ability to extract target features under complex backgrounds and target blur without adding new parameters, improving the accuracy of subsequent target localization. By constructing multiple bipartite graphs on the feature maps of the template branch and the search branch, the traditional global matching that takes the whole template picture as a kernel is converted into local feature matching. This effectively alleviates inaccurate feature matching when the target is deformed or occluded during tracking, improves the classification accuracy of each pixel in the feature map, and improves the tracking accuracy of the tracker.
Drawings
FIG. 1 is a technical flow chart of the present invention.
Fig. 2 is an overall block diagram of the present invention.
FIG. 3 is a schematic diagram of the SimAM attention mechanism of the present invention.
FIG. 4 is a block diagram of the map attention module of the present invention.
Fig. 5 is a graph comparing the precision of the present invention and existing tracking algorithms on UAV123.
Fig. 6 is a graph comparing the success rate of the present invention and existing tracking algorithms on UAV123.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention are described clearly and completely below, and it is obvious that the described embodiments are a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A target tracking algorithm based on a multi-graph attention mechanism, comprising:
S1. Taking the first frame picture and a subsequent frame of a video as the input of a template branch and a search branch respectively, and performing feature extraction on them through a twin network;
S2. Inputting the output features obtained in S1 into a graph attention module to perform a cross-correlation operation;
S3. Inputting the output obtained in S2 into an anchor-free tracking head network, obtaining the classification score of each pixel in the feature map through a classification branch, obtaining the distance relation between each pixel and the target center through a centrality branch, and obtaining the target-box information corresponding to each pixel through a regression branch;
S4. Multiplying the classification score obtained in S3 by the centrality score to obtain a refined classification score, and finding the pixel with the highest score and its corresponding target-box information to obtain the position of the target in the current frame;
S5. Repeating S1 to S4 until the positions of the target in all subsequent frames of the video are obtained.
The twin network in S1 is a weight-sharing GoogLeNet that uses the InceptionV3 structure and is combined with the SimAM attention mechanism. The specific operations are as follows:
The InceptionV3 structure of GoogLeNet is adjusted: only the convolution and pooling layers before the Inception modules, together with the InceptionA, InceptionB, and InceptionC modules, are used; the subsequent Inception modules and other network layers are not used.
Attention modules are added: a SimAM attention module is inserted after the first and the third Inception modules.
The specific construction process of the graph attention module in S2 is as follows:
S2.1. The feature maps of the template frame and the search frame are converted into graphs: each 1×1×C-sized portion of a feature map is taken as a node, and a corresponding bipartite graph G = (V, E) is constructed, where the node set V consists of the nodes V_t of the template subgraph and the nodes V_s of the search subgraph, and E is the set of edges connecting template nodes to search nodes;
S2.2. According to the constructed bipartite graph G, the similarity between the nodes of V_t and V_s is computed, and three graph attention modules operate on the nodes separately to obtain the corresponding similarity graphs S_k (k = 1, 2, 3);
S2.3. The three similarity graphs S_k are each normalized by softmax to obtain the attention a_ij of node i in V_t to node j in V_s, from which the aggregated feature ĥ_j of any node j in V_s is obtained;
S2.4. The aggregated feature ĥ_j is fused with the linearized feature of the corresponding node to obtain the feature representation f_j;
S2.5. The above operations yield the feature representations f_j of all nodes j and the corresponding three complete feature maps F_k, which are fused to obtain the final feature representation for subsequent localization and tracking.
In S3, the tracking head network is divided into a classification branch and a regression branch: the classification branch distinguishes the category of the target and locates it, and the regression branch regresses the target box to obtain the scale information of the target.
The response map obtained by the classification branch is R_cls ∈ R^{h×w×2}, where h and w denote the height and width of the response map and 2 is the number of channels; the two channels store the classification scores of all pixels, namely the probability of being a positive sample and the probability of being a negative sample.
The final response map of the regression branch is R_reg ∈ R^{h×w×4}, where each pixel corresponds one-to-one to a pixel of the classification response map; the four channels at each point (i, j) contain the distances of that point from the four sides of the bounding box, denoted T(i, j) = (l, t, r, b), where l, t, r, b are the distances of the point from the left, top, right, and bottom sides of the bounding box.
The classification branch and the centrality branch use a cross-entropy loss function to compute the accuracy of the classification and of the centrality score respectively, and the regression branch uses an IOU loss function. The final loss of the whole network is L = λ1·L_cls + λ2·L_cen + λ3·L_reg, where λ1, λ2, and λ3 are set to 1, 1, and 2 respectively, and L_cls, L_cen, and L_reg denote the classification loss, centrality loss, and regression loss.
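A minimal Python sketch of the loss combination described above. The binary cross-entropy here is a generic stand-in for the classification and centrality losses, and `total_loss` applies the stated weights λ1 = λ2 = 1, λ3 = 2; the function names are illustrative, not from the patent.

```python
import numpy as np

def cross_entropy(p, y, eps=1e-12):
    # binary cross-entropy between a predicted probability p and a label y in {0, 1}
    p = np.clip(p, eps, 1 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)))

def total_loss(l_cls, l_cen, l_reg, lam=(1.0, 1.0, 2.0)):
    # L = lam1*L_cls + lam2*L_cen + lam3*L_reg, with the weights stated in the text
    return lam[0] * l_cls + lam[1] * l_cen + lam[2] * l_reg
```

The IOU regression loss would be substituted for `l_reg`; any differentiable IoU-based form fits this weighting scheme.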
In S4, the response map of the centrality branch is R_cen ∈ R^{h×w×1} and the centrality score of each pixel is C(i, j); C(i, j) is multiplied by the classification score to obtain a more accurate target score.
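Step S4's decoding (weighting the classification map by the centrality map, then reading the regression values at the best pixel) can be sketched as follows, assuming a stride-8 feature map; the stride value is illustrative and not stated in the patent.

```python
import numpy as np

def decode(cls_map, cen_map, reg_map, stride=8):
    """cls_map, cen_map: (H, W) positive-sample and centrality scores;
    reg_map: (4, H, W) distances (l, t, r, b) to the box sides."""
    score = cls_map * cen_map                      # refined classification score
    i, j = np.unravel_index(np.argmax(score), score.shape)
    l, t, r, b = reg_map[:, i, j]
    cx, cy = j * stride, i * stride                # feature-map pixel -> image coordinates
    return (cx - l, cy - t, cx + r, cy + b)        # predicted box (x0, y0, x1, y1)
```

Finding the highest-scoring pixel and expanding its (l, t, r, b) values around it yields the target position for the current frame.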
Some English terms used in the invention are explained here. GoogLeNet: a deep learning network architecture. InceptionV3: a neural network structure. SimAM: a three-dimensional attention mechanism. InceptionA, InceptionB, InceptionC: specific network modules in GoogLeNet. Padding: filling. IOU (intersection over union): a criterion measuring detection accuracy on a particular data set, computed as the overlapping area of two regions divided by the area of their union and compared against a set threshold. UAV123: a data set for testing tracker performance. CNN: convolutional neural network. Ground truth: the manually marked extent of the object to be detected in the training-set images. ResNet: residual neural network. AlexNet: a deep learning network architecture. GOT10K: a data set for testing tracker performance. COCO, ImageNet DET, ImageNet VID, and YouTube-BB: common training sets for target tracking. SiamGAT, SiamCAR, KCF, Ocean, CFNet, MDNet, ECO, SiamFC, SPM, SiamRPN++, SiamFC++, CGACD, SiamBAN, SiamRPN, SiamDW: relatively advanced tracking algorithms in the target tracking field.
The technical process of the invention is shown in Fig. 1. The overall network of the model consists of a feature extraction module, a graph attention module, and a tracking head network. The feature extraction module consists of two weight-sharing CNNs that extract the features of the template picture and of the search area respectively; the graph attention module computes the similarity between the template picture and the search area and embeds the feature information of the template into the search area; the tracking head network consists of classification and regression branches and locates and tracks the target. The twin network structure of the invention is shown in Table 1 and Fig. 2.
TABLE 1
The SimAM attention mechanism is inspired by the attention mechanism of the human brain and can derive 3-D attention weights for a feature map without additional parameters, as shown in Fig. 3. In neuroscience, information-rich neurons usually exhibit firing patterns different from those of peripheral neurons, and an active neuron usually suppresses its peripheral neurons, i.e., spatial suppression; neurons with a spatial-suppression effect should therefore be given higher importance. Such neurons can be found by measuring the linear separability between one target neuron and the other neurons. Based on this finding, SimAM defines an energy function whose minimization is equivalent to training the linear separability between a neuron t and the other neurons in the same channel; using binary labels and adding a regularization term gives the final energy function with a closed-form minimum e_t*. The lower the energy, the more neuron t differs from its peripheral neurons and the higher its importance, so the importance of a neuron is obtained as 1/e_t*. Following the definition of the attention mechanism, the features are then enhanced according to this importance. The whole feature-extraction process can be written as F_z = SimAM(φ(z)) and F_x = SimAM(φ(x)), where φ denotes the convolutional feature extraction by InceptionV3, z and x denote the inputs of the template branch and the search branch respectively, and F_z and F_x are the feature maps obtained after feature extraction.
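A NumPy sketch of the SimAM weighting described above, following the published SimAM formulation (per-channel mean and variance, importance 1/e_t*, sigmoid gating); `lam` plays the role of the regularization term.

```python
import numpy as np

def simam(x, lam=1e-4):
    """Derive a 3-D attention weight for feature map x of shape (C, H, W)
    without extra parameters, then reweight x by a sigmoid of the
    inverse-energy importance."""
    c, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)
    d = (x - mu) ** 2                              # squared deviation at each position
    v = d.sum(axis=(1, 2), keepdims=True) / n      # per-channel variance estimate
    e_inv = d / (4 * (v + lam)) + 0.5              # importance = 1 / minimal energy e_t*
    return x * (1.0 / (1.0 + np.exp(-e_inv)))      # sigmoid gating of the features
```

Because the gate is a sigmoid in (0, 1), the module rescales features without changing the map's shape or adding learnable parameters.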
The invention uses three graph attention modules to operate on the graph respectively; the obtained similarity graphs can be expressed as S_k(i, j) = (W_a^k h_i)^T (W_b^k h_j), k = 1, 2, 3, where h_i and h_j denote the node vectors of V_t and V_s respectively, and W_a^k and W_b^k are convolutions that linearize the node vectors.
Moving targets are often exposed to illumination change, motion blur, and the like. To address this, the invention establishes a bipartite graph from the features of the template picture and the features of the search area, builds local relations between the nodes, and then performs similarity computation with multiple graph attention modules; the detailed process is shown in Fig. 4.
Feature aggregation: ĥ_j^k = Σ_{i∈V_t} a_ij^k W_v^k h_i, k = 1, 2, 3. The aggregated feature ĥ_j^k is then fused with the linearized feature of the corresponding node to obtain a more expressive feature, f_j^k = cat(ĥ_j^k, W_v^k h_j), k = 1, 2, 3, where cat denotes the concatenation of features.
The above operations give the feature representations f_j of all nodes j and the corresponding three complete feature maps F_k, which are fused to obtain the final feature representation for subsequent localization and tracking: F = Conv(cat(F_1, F_2, F_3)), where cat denotes the channel-wise concatenation of the three feature maps and a small convolution kernel fuses the feature information.
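One of the three graph attention heads described by these formulas (linearized similarity, softmax over template nodes, aggregation, concatenation with the node's own linearized feature) can be sketched with NumPy; the node counts and channel sizes below are illustrative, not from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
C, Cp = 16, 8                                     # input / projected channel sizes
Nt, Ns = 6, 25                                    # template / search node counts

W_a, W_b, W_v = (rng.standard_normal((C, Cp)) for _ in range(3))
h_t = rng.standard_normal((Nt, C))                # template-subgraph node vectors
h_s = rng.standard_normal((Ns, C))                # search-subgraph node vectors

def graph_attention(h_t, h_s, W_a, W_b, W_v):
    S = (h_t @ W_a) @ (h_s @ W_b).T               # similarity graph, shape (Nt, Ns)
    A = np.exp(S - S.max(axis=0, keepdims=True))  # softmax over the template nodes
    A /= A.sum(axis=0, keepdims=True)
    agg = A.T @ (h_t @ W_v)                       # aggregated feature for each search node
    return np.concatenate([agg, h_s @ W_v], axis=1)  # fuse with the node's own feature

F = graph_attention(h_t, h_s, W_a, W_b, W_v)      # one of the three heads
```

Running three such heads with independent projection matrices and concatenating the resulting maps channel-wise, then fusing with a convolution, mirrors the multi-graph design described above.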
To make the regression network faster, the classification branch adopts a cross-entropy loss function and the regression branch adopts an IOU loss function. The top-left and bottom-right corners of the target bounding box are denoted (x0, y0) and (x1, y1) respectively. The distances from any point (x, y) in the search area to the four sides of the bounding box can be expressed as l = x - x0, t = y - y0, r = x1 - x, b = y1 - y, where l, t, r, b are the distances from the point to the left, top, right, and bottom sides of the bounding box.
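The (l, t, r, b) encoding above is a one-liner; a hypothetical helper for clarity:

```python
def ltrb(point, box):
    """Distances from a point (x, y) to the four sides of a bounding box
    given by its top-left (x0, y0) and bottom-right (x1, y1) corners."""
    x, y = point
    x0, y0, x1, y1 = box
    return (x - x0, y - y0, x1 - x, y1 - y)  # (l, t, r, b)
```

All four distances are positive exactly when the point lies inside the box, which is why only such pixels regress valid target boxes.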
The difference between the ground-truth bounding box and the predicted box is computed through the IOU loss function, and the target box is regressed.
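A sketch of the IOU computation and a 1 - IoU loss; the patent only states that an IOU loss is used, so this particular variant is an assumption.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union

def iou_loss(pred, gt):
    # one common IoU-loss form; -ln(IoU) is another frequently used variant
    return 1.0 - iou(pred, gt)
```

The loss is 0 when the predicted box coincides with the ground truth and grows toward 1 as the overlap vanishes.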
According to investigation, the score of the classification branch does not necessarily represent the position of the target accurately, and most high-quality target boxes are generated near the center of the target. The invention therefore adds a centrality branch alongside the classification branch to further evaluate the classification score. The response map of the centrality branch is R_cen ∈ R^{h×w×1}, and the centrality score of each pixel is C(i, j) = sqrt((min(l, r)/max(l, r)) · (min(t, b)/max(t, b))).
By multiplying C(i, j) with the classification score, a more accurate target score is obtained, making the localization more precise.
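A sketch of a centerness score consistent with the (l, t, r, b) regression above; the square-root FCOS-style form is an assumption, since the original formula image is not recoverable from the text.

```python
import math

def centerness(l, t, r, b):
    """1.0 exactly at the box center, decaying toward the box edges."""
    return math.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))
```

Multiplying this score by the classification score at each pixel, as in step S4, down-weights high classification responses that are far from the target center.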
The method of the present invention was experimentally tested on GOT10K and UAV123 and compared with some of the currently more advanced trackers. For the comparison on UAV123, the model of the invention was trained on only one data set, GOT10K, while the other trackers were trained on four data sets: COCO, ImageNet DET, ImageNet VID, and YouTube-BB.
UAV123 contains 123 fully annotated high-definition video sequences captured from a low-altitude aerial perspective, together with a benchmark. It covers attributes such as aspect-ratio change, background clutter, camera motion, fast motion, full occlusion, illumination change, low resolution, out-of-view, partial occlusion, similar targets, scale change, and viewpoint change, and can test the comprehensive performance of a tracker well. The GOT10K test set consists of 180 video sequences covering 84 moving objects and 32 motion patterns, which brings the test experiments closer to reality and evaluates tracker performance better.
The tracker of the present invention was tested on GOT10K and evaluated against advanced trackers such as SiamGAT, SiamCAR, KCF, and Ocean, with the final results shown in Table 2.
TABLE 2
AO denotes the average overlap between the tracker's predicted box and the real target box; SR0.5 and SR0.75 denote the proportions of frames, among those where the target is successfully tracked, in which the overlap between the predicted box and the real target box exceeds 50% and 75% respectively, which evaluate the tracking precision of a tracker more accurately. The table shows that the tracker of the invention achieves good results in overall performance.
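The AO / SR0.5 / SR0.75 metrics can be computed from per-frame overlaps as follows; this is a sketch of the standard evaluation protocol, not the benchmark's official toolkit.

```python
def ao_sr(ious):
    """Average overlap and success rates at the 0.5 / 0.75 IoU thresholds,
    given one predicted-vs-ground-truth IoU value per frame."""
    n = len(ious)
    ao = sum(ious) / n
    sr50 = sum(1 for v in ious if v > 0.5) / n
    sr75 = sum(1 for v in ious if v > 0.75) / n
    return ao, sr50, sr75
```

AO summarizes overall box quality across a sequence, while the two SR values reward trackers that keep the overlap consistently high.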
Comparing the tracker of the present invention with advanced trackers such as Ocean, SiamRPN++, and SiamCAR on UAV123, the resulting precision plot and success plot are shown in Fig. 5 and Fig. 6 respectively; Fig. 5 is the OPE precision plot on UAV123, showing precision against the location-error threshold. The figures show that, despite being trained with a smaller data set, the model of the invention has clear advantages in both precision and success rate.
The test results on the GOT10K and UAV123 data sets show that the tracker of the invention improves the comprehensive performance considerably, which also verifies the effectiveness of the algorithm proposed by the invention.
Although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that: it is to be understood that modifications may be made to the technical solutions described in the foregoing embodiments, or equivalents may be substituted for some or all of the technical features thereof, without departing from the scope of the technical solutions of the embodiments of the present invention.
Claims (4)
1. A target tracking method based on a multi-graph attention mechanism, characterized by comprising the following steps:
S1. Taking the first frame picture and a subsequent frame of a video as the input of a template branch and a search branch respectively, and performing feature extraction on them through a twin network;
S2. Inputting the output features obtained in S1 into a graph attention module to perform a cross-correlation operation;
S3. Inputting the output obtained in S2 into an anchor-free tracking head network, obtaining the classification score of each pixel in the feature map through a classification branch, obtaining the distance relation between each pixel and the target center through a centrality branch, and obtaining the target-box information corresponding to each pixel through a regression branch;
S4. Multiplying the classification score obtained in S3 by the centrality score to obtain a refined classification score, and finding the pixel with the highest score and its corresponding target-box information to obtain the position of the target in the current frame;
S5. Repeating S1 to S4 until the positions of the target in all subsequent frames of the video are obtained;
The concrete construction process of the graph attention module in S2 comprises the following steps:
S2.1, forming graphs from the feature maps of the template frame and the search frame: taking each 1×1×C part of each feature map as a node, a corresponding bipartite graph G = (V, E) is constructed, wherein the node set V consists of the template sub-graph nodes V_t and the search sub-graph nodes V_s, that is, V = V_t ∪ V_s, and E is the set of edges connecting the two sub-graphs;
S2.2, according to the constructed bipartite graph G, computing the similarity between the nodes of V_t and V_s, and operating on the nodes with three graph attention modules respectively to obtain the corresponding similarity graphs;
S2.3, normalizing each of the three obtained similarity graphs by softmax to obtain the attention of the nodes in V_t towards the nodes in V_s, and obtaining the aggregated feature of any node j in V_s by aggregation;
S2.4, fusing the obtained aggregated feature with the linear feature of the corresponding node to obtain its feature expression;
S2.5, obtaining through the above operations the feature expression of every node j, thereby obtaining the three corresponding complete feature maps F, which are fused to obtain the final feature expression used for subsequent localization and tracking;
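The aggregation of steps S2.2 to S2.4 can be sketched as follows. This is a minimal single-head NumPy sketch in which dot-product similarity and concatenation-based fusion are assumptions for illustration (the claim does not fix the similarity or fusion functions), and all function and variable names are invented:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention_fuse(template_feat, search_feat):
    """Aggregate template-node features into every search node (S2.2-S2.4).

    template_feat: (Nt, C) -- one row per 1x1xC template sub-graph node
    search_feat:   (Ns, C) -- one row per 1x1xC search sub-graph node
    Returns fused features of shape (Ns, 2*C): the aggregated template
    information concatenated with each search node's own feature.
    """
    # S2.2: pairwise similarity between search and template nodes
    sim = search_feat @ template_feat.T          # (Ns, Nt)
    # S2.3: softmax-normalised attention of each search node over template nodes
    attn = softmax(sim, axis=1)                  # each row sums to 1
    aggregated = attn @ template_feat            # (Ns, C) aggregated feature
    # S2.4: fuse the aggregated feature with the node's own feature
    return np.concatenate([aggregated, search_feat], axis=1)
```

In the full module this operation would run once per graph attention head, after which the three resulting feature maps are fused (S2.5).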
In S3, the tracking head network is divided into a classification branch and a regression branch: the classification branch distinguishes the category of the target and locates the target, while the regression branch regresses the bounding box of the target to obtain its scale information;
The response map obtained by the classification branch is expressed as A_cls ∈ R^(h×w×2), where h and w respectively represent the height and width of the response map and 2 represents the number of channels; the two channels store the classification scores of all pixels, namely the probability of being a positive sample and the probability of being a negative sample;
The final response map of the regression branch is A_reg ∈ R^(h×w×4), wherein each pixel corresponds one-to-one with a pixel of the classification response map; the four channels corresponding to each point (i, j) contain the distances from that point to the edges of the bounding box, denoted (l, t, r, b), where l, t, r and b are respectively the distances from the point to the left, top, right and bottom sides of the bounding box.
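A sketch of how a bounding box can be recovered from the four regression channels at a location (i, j). The stride value and the feature-to-image coordinate convention are assumptions for illustration, not taken from the claim:

```python
import numpy as np

def decode_box(i, j, reg_map, stride=8):
    """Recover an (x1, y1, x2, y2) box from the 4-channel regression map.

    reg_map: (h, w, 4) distances (l, t, r, b) from each location to the
    four box edges, in input-image pixels. `stride` maps feature-map
    coordinates back to image coordinates (an assumed convention).
    """
    cx, cy = j * stride, i * stride      # image-plane position of (i, j)
    l, t, r, b = reg_map[i, j]
    return cx - l, cy - t, cx + r, cy + b
```

For example, a location (1, 2) with distances (3, 4, 5, 6) and stride 8 maps to the image point (16, 8) and hence the box (13, 4, 21, 14).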
2. The target tracking method based on a multi-graph attention mechanism according to claim 1, wherein the twin network in S1 is a weight-sharing GoogLeNet using the InceptionV3 structure, combined with the SimAM attention mechanism; the specific operations are as follows:
the InceptionV3 structure of GoogLeNet is adjusted: only the convolution and pooling layers in front of the Inception blocks, together with the InceptionA, InceptionB and InceptionC modules, are used, while the subsequent Inception modules and other network layers are not used;
attention modules are added to the three Inception modules: a SimAM attention module is inserted after the first and after the third Inception module.
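For reference, the SimAM module of claim 2 is parameter-free and can be sketched in NumPy as follows, following the published SimAM formulation; the regularisation constant `lam` is the SimAM paper's default, not a value specified by the patent:

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention over a feature map x of shape (C, H, W).

    For each channel an energy-based weight is computed per spatial
    position from the squared deviation around the channel mean, then
    applied to the input through a sigmoid gate.
    """
    c, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)
    d = (x - mu) ** 2                              # squared deviation
    var = d.sum(axis=(1, 2), keepdims=True) / n    # per-channel variance
    e_inv = d / (4 * (var + lam)) + 0.5            # inverse energy
    return x * (1.0 / (1.0 + np.exp(-e_inv)))      # sigmoid gating
```

Because the gate is a sigmoid, the output at every position is the input scaled by a factor in (0, 1), so no learnable parameters are introduced into the backbone.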
3. The method of claim 1, wherein the classification branch and the centerness branch use the cross-entropy loss function to compute the classification loss and the centerness loss respectively, the regression branch uses the IOU loss function, and the final loss L of the whole network is expressed as: L = λ1·L_cls + λ2·L_cen + λ3·L_reg, wherein λ1, λ2 and λ3 are respectively set to 1, 1 and 2, and L_cls, L_cen and L_reg respectively represent the classification loss, the centerness loss and the regression loss.
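The combined loss of claim 3 can be sketched as follows. A binary cross-entropy form and the -log(IoU) variant of the IoU loss are assumptions for illustration; the claim names only "cross entropy" and "IOU loss" without fixing the exact variants:

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Binary cross-entropy between predicted probabilities p and labels y."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def iou_loss(pred, target, eps=1e-7):
    """-log(IoU) loss between (l, t, r, b) distance vectors, shape (N, 4)."""
    pa = (pred[:, 0] + pred[:, 2]) * (pred[:, 1] + pred[:, 3])
    ta = (target[:, 0] + target[:, 2]) * (target[:, 1] + target[:, 3])
    iw = np.minimum(pred[:, 0], target[:, 0]) + np.minimum(pred[:, 2], target[:, 2])
    ih = np.minimum(pred[:, 1], target[:, 1]) + np.minimum(pred[:, 3], target[:, 3])
    inter = iw * ih
    iou = inter / (pa + ta - inter + eps)
    return -np.mean(np.log(iou + eps))

def total_loss(l_cls, l_cen, l_reg, lam=(1.0, 1.0, 2.0)):
    """L = lam1*L_cls + lam2*L_cen + lam3*L_reg with the claim's weights 1, 1, 2."""
    return lam[0] * l_cls + lam[1] * l_cen + lam[2] * l_reg
```

When the predicted and target distance vectors coincide the IoU is 1 and the regression term vanishes, so the loss is dominated by the two cross-entropy terms.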
4. The target tracking method based on a multi-graph attention mechanism according to claim 3, wherein the response map of the centerness branch in S4 is A_cen ∈ R^(h×w×1), the centerness score of each pixel is C(i, j), and C(i, j) is multiplied by the classification score to obtain a more accurate target score.
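The selection step of claim 4 (step S4) reduces to an element-wise product followed by an argmax. A minimal sketch, assuming channel 1 of the classification map holds the positive-sample score:

```python
import numpy as np

def locate_target(cls_map, cen_map):
    """Pick the target location: multiply the positive-class score map by
    the centerness map C(i, j) and take the argmax.

    cls_map: (h, w, 2) classification scores (channel 1 = positive class)
    cen_map: (h, w, 1) centerness scores
    Returns (i, j) of the highest combined score; the target box is then
    read from the regression map at the same location.
    """
    score = cls_map[:, :, 1] * cen_map[:, :, 0]
    return np.unravel_index(np.argmax(score), score.shape)
```

The centerness weighting suppresses high classification scores far from the target center, which is why the combined score is described as "more accurate" than the raw classification score.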
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211438781.3A CN115578421B (en) | 2022-11-17 | 2022-11-17 | Target tracking algorithm based on multi-graph attention mechanism |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115578421A CN115578421A (en) | 2023-01-06 |
CN115578421B true CN115578421B (en) | 2023-03-14 |
Family
ID=84589711
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211438781.3A Active CN115578421B (en) | 2022-11-17 | 2022-11-17 | Target tracking algorithm based on multi-graph attention mechanism |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115578421B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113256677A (en) * | 2021-04-16 | 2021-08-13 | 浙江工业大学 | Method for tracking visual target with attention |
CN114707604A (en) * | 2022-04-07 | 2022-07-05 | 江南大学 | Twin network tracking system and method based on space-time attention mechanism |
CN115187629A (en) * | 2022-05-24 | 2022-10-14 | 浙江师范大学 | Method for fusing target tracking features by using graph attention network |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8326775B2 (en) * | 2005-10-26 | 2012-12-04 | Cortica Ltd. | Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof |
CN114821390B (en) * | 2022-03-17 | 2024-02-23 | 齐鲁工业大学 | Method and system for tracking twin network target based on attention and relation detection |
Non-Patent Citations (2)
Title |
---|
A visual attention model for robot object tracking; Jin-Kui; International Journal of Automation & Computing; 2010-12-31; full text *
Online adaptive Siamese network tracking algorithm based on attention mechanism; Dong Jifu et al.; Laser & Optoelectronics Progress; 2020-01-25 (No. 02); full text *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||