CN111985375B - Visual target tracking self-adaptive template fusion method - Google Patents

Visual target tracking self-adaptive template fusion method

Info

Publication number
CN111985375B
Authority
CN
China
Prior art keywords
target
template
frame
current frame
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010810873.4A
Other languages
Chinese (zh)
Other versions
CN111985375A (en)
Inventor
胡静
康愫愫
沈宜帆
张旭阳
陈智勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202010810873.4A priority Critical patent/CN111985375B/en
Publication of CN111985375A publication Critical patent/CN111985375A/en
Application granted granted Critical
Publication of CN111985375B publication Critical patent/CN111985375B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features

Abstract

The invention discloses a visual target tracking self-adaptive template fusion method, belonging to the technical field of target tracking. The method decides whether to update the template by calculating the ratio of the extreme value to the mean value of the response map; frames in which the target response is weak are filtered out, so that poor-quality frames are not used to update the template, which improves the quality of the template and yields a better tracking effect. The method also calculates the fusion coefficient of the template adaptively: the template that responds more strongly to the current frame obtains a larger update weight, so the target state is kept up to date while contamination of the template by target blurring and occlusion (whose response to the current frame is weak and whose weight during updating is therefore small) is reduced. The video data are thus used more fully, the problems of target deformation and background contamination of the template during target tracking are effectively suppressed, and the template quality throughout tracking is improved.

Description

Visual target tracking self-adaptive template fusion method
Technical Field
The invention belongs to the technical field of target tracking, and particularly relates to a visual target tracking self-adaptive template fusion method.
Background
Target tracking is widely used in production and daily life, and is an important component of both military and civilian applications. Visual target tracking technology is of great significance in fields such as ecological and environmental protection, flight safety, and animal husbandry automation. For example, in airport bird control, birds flying in an airport pose hidden dangers to airlines, cause huge economic losses, and seriously threaten passenger safety; such birds therefore need to be tracked as a basis for driving them away. In addition, unmanned aerial vehicles, as a new type of intelligent aircraft, are agile, have low take-off and flight requirements, are not restricted by site, climb quickly, stay aloft for a long time, are easy to obtain, and can be controlled over long distances. Tracking UAV swarms is also an important means of deploying UAVs at scale while preventing them from affecting important facilities such as airports. Meanwhile, in animal husbandry and animal research, many animals such as cattle, sheep, and birds move in herds or flocks. Whether studying animal migration patterns or preventing livestock from going astray during grazing, their migration and movement information must be acquired effectively. Researchers therefore often need to track these target groups and obtain the motion trajectory of each individual.
In the multi-target tracking task, if the spatial distances between targets are very small, occlusion between targets or crossing of motion trajectories can occur, and confusion arises easily. Animal targets may also deform to different degrees during motion, for example birds flapping their wings; such changes in target shape and size make it difficult to ensure that the target template adapts to changes in the target state. If a static template is used, tracking performance may be poor, while a typical dynamic template is easily contaminated by the background. Correlation filtering, for example, still updates the template from the target region when the target is disturbed by motion blur, partial occlusion, and the like, so template contamination arises easily.
Disclosure of Invention
Aiming at the defects and improvement needs of the prior art, the invention provides a visual target tracking adaptive template fusion method, which aims to better resist interference during tracking by, on the one hand, judging whether the current template should be updated and, on the other hand, adaptively calculating the template weighting coefficient.
To achieve the above object, according to a first aspect of the present invention, there is provided a visual target tracking adaptive template fusion method, comprising the steps of:
S1, performing convolution calculation on the target feature map of the previous frame and the feature map of the current frame to obtain the response map of the previous frame of the target;
S2, calculating the ratio of the maximum value to the mean value of the response map of the previous frame of the target;
S3, when the ratio is smaller than or equal to a set threshold, directly using the target template of the previous frame as the target template of the current frame, and when the ratio is larger than the set threshold, obtaining the target template of the current frame by weighted fusion of the target template of the previous frame and the target feature map of the previous frame;
S4, performing weighted fusion on the target template of the current frame and the target feature map of the first frame to obtain the fusion template of the current frame, thereby updating the target template of the current frame.
Preferably, the response map of the previous frame of the target is calculated as follows:

f_{m-1}(z, s) = \varphi_{m-1}(z) \ast \varphi(s) + b

wherein z represents the target image and s represents the current frame image; \varphi(\cdot) represents the feature extraction function, so that \varphi_{m-1}(z) is the target feature map of the previous frame and \varphi(s) is the feature map of the current frame; \ast denotes the convolution operation; b represents a two-dimensional bias matrix; and m represents the current frame number.
Preferably, the feature extraction function is obtained by manual design or by training a deep learning network.
Preferably, the target template of the current frame is calculated as follows:

\tilde{\varphi}_m(z) = \begin{cases} \tilde{\varphi}_{m-1}(z), & \text{ratio} \leq \sigma \\ \beta\,\varphi_{m-1}(z) + (1 - \beta)\,\tilde{\varphi}_{m-1}(z), & \text{ratio} > \sigma \end{cases}

wherein z represents the target image, m represents the frame number of the current frame, β represents the template update coefficient, ratio represents the ratio of the maximum value to the mean value of the response map of the previous frame of the target, and σ denotes the set threshold for deciding whether the target template should be updated; \varphi_{m-1}(z) represents the target feature map of the previous frame, \tilde{\varphi}_{m-1}(z) represents the target template of the previous frame, and the target template of the first frame \tilde{\varphi}_1(z) is initialized to the target feature map of the first frame \varphi_1(z).
Preferably, β is 0.5 and σ is 1.
Preferably, the fusion template of the current frame is calculated as follows:

\hat{\varphi}_m(z) = \lambda\,\tilde{\varphi}_m(z) + (1 - \lambda)\,\varphi_1(z)

wherein \hat{\varphi}_m(z) represents the fusion template of the current frame, \tilde{\varphi}_m(z) represents the target template of the current frame, \varphi_1(z) represents the target feature map of the first frame, z represents the target image, m represents the frame number of the current frame, and λ represents the template weighting coefficient.
Preferably, the template weighting coefficient is calculated as follows:

f_1(z, s) = \tilde{\varphi}_m(z) \ast \varphi(s) + b
f_2(z, s) = \varphi_1(z) \ast \varphi(s) + b
\lambda = \frac{\max(f_1(z, s))}{\max(f_1(z, s)) + \max(f_2(z, s))}

wherein f_1(z, s) represents the response map calculated from the target template of the current frame \tilde{\varphi}_m(z), and f_2(z, s) represents the response map calculated from the target feature map of the first frame \varphi_1(z).
To achieve the above object, according to a second aspect of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the visual target tracking adaptive template fusion method according to the first aspect.
Generally, by the above technical solution conceived by the present invention, the following beneficial effects can be obtained:
(1) The method decides whether to update the template by calculating the ratio of the extreme value to the mean value of the response map; frames in which the target response is weak are filtered out, so that poor-quality frames are not used to update the template, which improves the quality of the template and yields a better tracking effect.
(2) The method calculates the fusion coefficient of the template adaptively: the template that responds more strongly to the current frame obtains a larger update weight, so the target state is kept up to date while contamination of the template by target blurring and occlusion (whose response to the current frame is weak and whose weight during updating is therefore small) is reduced. The video data are thus used more fully, the problems of target deformation and background contamination of the template during target tracking are effectively suppressed, and the template quality throughout tracking is improved.
Drawings
FIG. 1 is a flow chart of a visual target tracking adaptive template fusion method provided by the present invention;
FIG. 2 is an accuracy comparison graph of the statistical results provided by the present invention;
FIG. 3 is a success rate comparison graph of the statistical results provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the respective embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1, the present invention provides a visual target tracking adaptive template fusion method, which includes the following steps:
S1, a convolution calculation is performed on the target feature map of the previous frame and the feature map of the current frame to obtain the response map of the previous frame of the target.
Preferably, the response map of the previous frame of the target is calculated as follows:

f_{m-1}(z, s) = \varphi_{m-1}(z) \ast \varphi(s) + b

wherein z represents the target image and s represents the current frame image; \varphi(\cdot) represents the feature extraction function, so that \varphi_{m-1}(z) is the target feature map of the previous frame and \varphi(s) is the feature map of the current frame; \ast denotes the convolution operation; b represents a two-dimensional bias matrix; and m represents the current frame number.
Preferably, the feature extraction function is obtained by manual design or by training a deep learning network. This embodiment adopts a twin network as the target template feature extraction network, but the invention is not limited to this feature extraction method.
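For illustration only (this sketch is not part of the claimed method), the cross-correlation above can be written in plain Python/NumPy as follows; the function name response_map and the channel-last array shapes are assumptions of this sketch, and a real tracker would compute the correlation inside the network rather than with explicit loops:

```python
import numpy as np

def response_map(template_feat, search_feat, bias=0.0):
    """Cross-correlate a target feature map with a larger search-region
    feature map, summing over channels, to obtain a response map
    f_{m-1}(z, s) = phi_{m-1}(z) * phi(s) + b.

    template_feat : (h, w, c) array, e.g. the target feature map phi_{m-1}(z)
    search_feat   : (H, W, c) array, e.g. the current-frame feature map phi(s)
    bias          : scalar (or broadcastable array) standing in for b
    """
    h, w, _ = template_feat.shape
    H, W, _ = search_feat.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = search_feat[i:i + h, j:j + w, :]
            out[i, j] = np.sum(window * template_feat)
    return out + bias
```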
Training process of the fully convolutional twin network
The hardware environment comprises: an Intel(R) Core(TM) i7-6850K CPU with 6 cores, 12 threads, and a base frequency of 3.60 GHz; two Nvidia GTX 1080 Ti GPUs; and 64 GB of memory. The software environment for the experiment includes the Ubuntu 16.04 operating system and the TensorFlow deep learning framework. The network is trained on the ILSVRC-VID dataset; all 4417 videos in this dataset are used to train and adjust the network parameters. The iterative training method is stochastic gradient descent. Parameters are initialized with the Xavier method. Training runs for 50 epochs, with 50000 image pairs trained per epoch. The batch size is set to 32. The initial learning rate is set to 0.01, decays exponentially, and the lowest learning rate is 0.00001. The maximum frame interval between the images of an input pair is 100. Unless otherwise specified, β is empirically set to 0.5.
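For reference, the training setup described above can be collected into a simple configuration sketch; the values merely restate the hyperparameters given in the preceding paragraph, and the exact exponential-decay schedule between 0.01 and 0.00001 is not specified in the text, so it is recorded only as a label here:

```python
# Hyperparameters of the fully convolutional twin network, as stated above.
train_config = {
    "dataset": "ILSVRC-VID",          # all 4417 videos used for training
    "optimizer": "SGD",               # stochastic gradient descent
    "weight_init": "Xavier",
    "epochs": 50,
    "image_pairs_per_epoch": 50_000,
    "batch_size": 32,
    "lr_initial": 0.01,               # decayed exponentially ...
    "lr_min": 1e-5,                   # ... down to this floor
    "lr_schedule": "exponential decay",
    "max_pair_frame_interval": 100,   # max frame gap within a training pair
    "beta": 0.5,                      # template update coefficient (default)
}
```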
S2, the ratio of the maximum value to the mean value of the response map of the previous frame of the target is calculated.
The ratio of the maximum value to the mean value of the response map of the previous frame of the target is taken as a measure of how much the target differs between the current frame and the previous frame: the smaller the ratio, the larger the difference.
S3, when the ratio is smaller than or equal to the set threshold, the target template of the previous frame is used directly as the target template of the current frame; when the ratio is larger than the set threshold, the target template of the current frame is obtained by weighted fusion of the target template of the previous frame and the target feature map of the previous frame.
When the ratio is less than or equal to the set threshold, the difference between the two frames is large, which is caused by target blurring or occlusion; therefore, in order not to introduce background contamination into the target template, the target template of the previous frame is used directly as the target template of the current frame. When the ratio is larger than the set threshold, the difference is small, and the target template of the current frame is the weighted fusion of the target template of the previous frame and the target feature map of the previous frame.
Preferably, the target template of the current frame is calculated as follows:

\tilde{\varphi}_m(z) = \begin{cases} \tilde{\varphi}_{m-1}(z), & \text{ratio} \leq \sigma \\ \beta\,\varphi_{m-1}(z) + (1 - \beta)\,\tilde{\varphi}_{m-1}(z), & \text{ratio} > \sigma \end{cases}

wherein z represents the target image, m represents the frame number of the current frame, β represents the template update coefficient, ratio represents the ratio of the maximum value to the mean value of the response map of the previous frame of the target, and σ represents the set threshold for deciding whether the target template should be updated; \varphi_{m-1}(z) represents the target feature map of the previous frame, \tilde{\varphi}_{m-1}(z) represents the target template of the previous frame, and the target template of the first frame \tilde{\varphi}_1(z) is initialized to the target feature map of the first frame \varphi_1(z).
Preferably, β is 0.5 and σ is 1.
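A minimal sketch of steps S2 and S3 under these preferred values, assuming NumPy arrays for the feature maps and the response map; the function and variable names are illustrative, not taken from the patent:

```python
BETA = 0.5    # template update coefficient beta (preferred value)
SIGMA = 1.0   # threshold sigma on the max/mean ratio (preferred value)

def update_target_template(prev_template, prev_target_feat, resp,
                           beta=BETA, sigma=SIGMA):
    """Steps S2-S3: if the max/mean ratio of the previous frame's response
    map is at or below sigma (weak response, e.g. blur or occlusion), keep
    the previous template unchanged; otherwise blend in the previous
    frame's target feature map with weight beta."""
    ratio = float(resp.max() / resp.mean())
    if ratio <= sigma:
        return prev_template
    return beta * prev_target_feat + (1.0 - beta) * prev_template
```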
S4, the target template of the current frame and the target feature map of the first frame are weighted and fused to obtain the fusion template of the current frame, thereby updating the target template of the current frame.
Preferably, the fusion template of the current frame is calculated as follows:

\hat{\varphi}_m(z) = \lambda\,\tilde{\varphi}_m(z) + (1 - \lambda)\,\varphi_1(z)

wherein \hat{\varphi}_m(z) represents the fusion template of the current frame, \tilde{\varphi}_m(z) represents the target template of the current frame, \varphi_1(z) represents the target feature map of the first frame, z represents the target image, m represents the frame number of the current frame, and λ represents the template weighting coefficient.
Preferably, the template weighting coefficient is calculated as follows:

f_1(z, s) = \tilde{\varphi}_m(z) \ast \varphi(s) + b
f_2(z, s) = \varphi_1(z) \ast \varphi(s) + b
\lambda = \frac{\max(f_1(z, s))}{\max(f_1(z, s)) + \max(f_2(z, s))}

wherein f_1(z, s) represents the response map calculated from the target template of the current frame \tilde{\varphi}_m(z), and f_2(z, s) represents the response map calculated from the target feature map of the first frame \varphi_1(z).
In the method, the target template of the current frame and the target feature map of the first frame are weighted and fused to obtain the fusion template of the current frame, which avoids the influence of target blurring or occlusion on the template during tracking. Specifically, the less the target is contaminated, the higher the similarity between the target template of the current frame \tilde{\varphi}_m(z) and the feature map of the current frame \varphi(s); the response f_1(z, s) is then larger, \max(f_1(z, s)) is larger, and the proportion of the current-frame template in the fusion template is larger. If the current frame is disturbed, the target features change greatly and the similarity between the target template of the current frame \tilde{\varphi}_m(z) and the feature map of the current frame \varphi(s) is lower; the response f_1(z, s) is then smaller, \max(f_1(z, s)) is smaller, the proportion of the current-frame template in the fusion template is smaller, and the proportion of the first-frame feature map is larger. Since the first-frame feature map is the most accurate prior knowledge, this avoids introducing interference information during template updating.
The denominator of the template weighting coefficient serves as a normalization, so that the template weighting coefficient λ lies in the interval [0, 1].
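Step S4 and the adaptive weighting coefficient can be sketched as follows, reusing the hypothetical response_map helper from the earlier sketch; the names cur_template and first_frame_feat are illustrative:

```python
def fuse_with_first_frame(cur_template, first_frame_feat, search_feat, bias=0.0):
    """Step S4: compute lambda from how strongly the current-frame template
    and the first-frame target features each respond to the current frame,
    then fuse them.  lambda lies in [0, 1] because of the normalizing
    denominator."""
    f1 = response_map(cur_template, search_feat, bias)       # response of current-frame template
    f2 = response_map(first_frame_feat, search_feat, bias)   # response of first-frame features
    lam = f1.max() / (f1.max() + f2.max())
    fused = lam * cur_template + (1.0 - lam) * first_frame_feat
    return fused, lam
```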
After the fusion template of the current frame is obtained, when it is further used for target tracking, the method also comprises the following steps:

The response map is calculated by convolution,

f_{SE}(z, s) = \hat{\varphi}_m(z) \ast \varphi(s) + b

and the position of the maximum value of the response map f_{SE}(z, s) is taken as the tracking result of the current target. If tracking of all targets in the current frame is not yet finished, the next target is processed (i = i + 1) and the procedure returns to step S1; otherwise, it is checked whether the image sequence or video data has ended. If it has not, m = m + 1 and the procedure returns to step S1; if it has, tracking is finished, and the positions of all targets in every frame of the video except the first are output.
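Putting the pieces together, the per-frame loop described above might look like the following single-target sketch (multi-target handling, i.e. the i = i + 1 loop, is omitted); extract_feat is a hypothetical stand-in for the twin-network feature extractor, and mapping the response-map peak back to image coordinates via the network stride is glossed over:

```python
import numpy as np

def track_sequence(frames, first_target_patch, extract_feat):
    """Track a single target through a frame sequence with the fusion template.

    frames             : iterable of images (full frames)
    first_target_patch : image patch of the target in the first frame
    extract_feat       : hypothetical feature extractor, image -> (h, w, c) array

    Returns, for each frame after the first, the top-left position of the
    best-matching window in feature-map coordinates.
    """
    first_feat = extract_feat(first_target_patch)   # phi_1(z); also the initial template
    template, prev_feat = first_feat, first_feat
    th, tw, _ = first_feat.shape
    positions = []
    for frame in frames[1:]:
        search_feat = extract_feat(frame)                                    # phi(s)
        resp = response_map(prev_feat, search_feat)                          # S1
        template = update_target_template(template, prev_feat, resp)        # S2-S3
        fused, _ = fuse_with_first_frame(template, first_feat, search_feat)  # S4
        f_se = response_map(fused, search_feat)                              # f_SE(z, s)
        pr, pc = np.unravel_index(np.argmax(f_se), f_se.shape)               # peak = result
        positions.append((int(pr), int(pc)))
        # The best-matching window of the current frame serves as phi_{m-1}(z)
        # for the next iteration.
        prev_feat = search_feat[pr:pr + th, pc:pc + tw, :]
    return positions
```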
Experiments were performed according to the parameters of Table 1.
TABLE 1
Fig. 2 and Fig. 3 are, respectively, the accuracy and success rate comparison graphs of the performance statistics of the fusion-template-based visual target tracking algorithm. The abscissa of the accuracy graph is the center position error distance threshold in pixels, and the ordinate is the distance precision corresponding to that threshold. The abscissa of the success rate graph is the intersection-over-union threshold, and the ordinate is the overlap precision corresponding to that threshold. Table 2 shows the statistical results of the experiments. Compared with experiments Test1 and Test2, the visual target tracking algorithm based on the fusion template has clear advantages: for the Fusion experiment, the area under the accuracy curve improves by 0.1133 and the area under the success rate curve improves by 0.0766, so the effect of the fusion template is very significant. These experimental results demonstrate the effectiveness of the algorithm of the present invention, whose curves clearly perform better.
TABLE 2
Serial number    Experiment name    Accuracy curve area    Success rate curve area
1                Test1              0.4859                 0.4108
2                Test2              0.4895                 0.4094
3                Fusion             0.6266                 0.5092
Furthermore, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the visual target tracking adaptive template fusion method according to the first aspect.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A visual target tracking adaptive template fusion method is characterized by comprising the following steps:
S1, performing convolution calculation on the target feature map of the previous frame and the feature map of the current frame to obtain the response map of the previous frame of the target;
S2, calculating the ratio of the maximum value to the mean value of the response map of the previous frame of the target;
S3, when the ratio is smaller than or equal to a set threshold, directly using the target template of the previous frame as the target template of the current frame, and when the ratio is larger than the set threshold, obtaining the target template of the current frame by weighted fusion of the target template of the previous frame and the target feature map of the previous frame;
S4, performing weighted fusion on the target template of the current frame and the target feature map of the first frame to obtain the fusion template of the current frame, thereby updating the target template of the current frame.
2. The method of claim 1, wherein the response map of the previous frame of the target is calculated as follows:

f_{m-1}(z, s) = \varphi_{m-1}(z) \ast \varphi(s) + b

wherein z represents the target image and s represents the current frame image; \varphi(\cdot) represents the feature extraction function; \ast denotes the convolution operation; b represents a two-dimensional bias matrix; and m represents the current frame number.
3. The method of claim 2, wherein the feature extraction function is obtained by manual design or by training a deep learning network.
4. A method as claimed in any one of claims 1 to 3, wherein the target template of the current frame is calculated as follows:

\tilde{\varphi}_m(z) = \begin{cases} \tilde{\varphi}_{m-1}(z), & \text{ratio} \leq \sigma \\ \beta\,\varphi_{m-1}(z) + (1 - \beta)\,\tilde{\varphi}_{m-1}(z), & \text{ratio} > \sigma \end{cases}

wherein z represents the target image, m represents the frame number of the current frame, β represents the template update coefficient, ratio represents the ratio of the maximum value to the mean value of the response map of the previous frame of the target, and σ represents the set threshold for judging whether the target template should be updated; \varphi_{m-1}(z) represents the target feature map of the previous frame, \tilde{\varphi}_{m-1}(z) represents the target template of the previous frame, and the target template of the first frame \tilde{\varphi}_1(z) is initialized to the target feature map of the first frame \varphi_1(z).
5. The method of claim 4, wherein β is 0.5 and σ is 1.
6. The method of any one of claims 1 to 3, wherein the fusion template of the current frame is calculated as follows:

\hat{\varphi}_m(z) = \lambda\,\tilde{\varphi}_m(z) + (1 - \lambda)\,\varphi_1(z)

wherein \hat{\varphi}_m(z) represents the fusion template of the current frame, \tilde{\varphi}_m(z) represents the target template of the current frame, \varphi_1(z) represents the target feature map of the first frame, z represents the target image, m represents the frame number of the current frame, and λ represents the template weighting coefficient.
7. The method of claim 6, wherein the template weighting coefficient is calculated as follows:

f_1(z, s) = \tilde{\varphi}_m(z) \ast \varphi(s) + b
f_2(z, s) = \varphi_1(z) \ast \varphi(s) + b
\lambda = \frac{\max(f_1(z, s))}{\max(f_1(z, s)) + \max(f_2(z, s))}

wherein f_1(z, s) represents the response map calculated from the target template of the current frame \tilde{\varphi}_m(z), and f_2(z, s) represents the response map calculated from the target feature map of the first frame \varphi_1(z).
8. A computer-readable storage medium, characterized in that a computer program is stored thereon which, when executed by a processor, implements the visual target tracking adaptive template fusion method according to any one of claims 1 to 7.
CN202010810873.4A 2020-08-12 2020-08-12 Visual target tracking self-adaptive template fusion method Active CN111985375B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010810873.4A CN111985375B (en) 2020-08-12 2020-08-12 Visual target tracking self-adaptive template fusion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010810873.4A CN111985375B (en) 2020-08-12 2020-08-12 Visual target tracking self-adaptive template fusion method

Publications (2)

Publication Number Publication Date
CN111985375A CN111985375A (en) 2020-11-24
CN111985375B true CN111985375B (en) 2022-06-14

Family

ID=73434184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010810873.4A Active CN111985375B (en) 2020-08-12 2020-08-12 Visual target tracking self-adaptive template fusion method

Country Status (1)

Country Link
CN (1) CN111985375B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129335B (en) * 2021-03-25 2023-03-14 西安电子科技大学 Visual tracking algorithm and multi-template updating strategy based on twin network
CN115731516A (en) * 2022-11-21 2023-03-03 国能九江发电有限公司 Behavior recognition method and device based on target tracking and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767405A (en) * 2017-09-29 2018-03-06 华中科技大学 A kind of nuclear phase for merging convolutional neural networks closes filtered target tracking
CN110084836A (en) * 2019-04-26 2019-08-02 西安电子科技大学 Method for tracking target based on the response fusion of depth convolution Dividing Characteristics
CN111161324A (en) * 2019-11-20 2020-05-15 山东工商学院 Target tracking method based on adaptive multi-mode updating strategy

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180129934A1 (en) * 2016-11-07 2018-05-10 Qualcomm Incorporated Enhanced siamese trackers

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767405A (en) * 2017-09-29 2018-03-06 华中科技大学 A kind of nuclear phase for merging convolutional neural networks closes filtered target tracking
CN110084836A (en) * 2019-04-26 2019-08-02 西安电子科技大学 Method for tracking target based on the response fusion of depth convolution Dividing Characteristics
CN111161324A (en) * 2019-11-20 2020-05-15 山东工商学院 Target tracking method based on adaptive multi-mode updating strategy

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An improved SAD target tracking algorithm; 赵柏山 et al.; 《微处理机》 (Microprocessors); 2018-12-31 (No. 002); full text *
Feature fusion adaptive target tracking; 钟国崇 et al.; 《图学学报》 (Journal of Graphics); 2018-10-15 (No. 05); full text *

Also Published As

Publication number Publication date
CN111985375A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN111985375B (en) Visual target tracking self-adaptive template fusion method
CN110675423A (en) Unmanned aerial vehicle tracking method based on twin neural network and attention model
CN109102511B (en) Cerebrovascular segmentation method, system and electronic equipment
CN109859209B (en) Remote sensing image segmentation method and device, storage medium and server
CN103096185A (en) Method and device of video abstraction generation
CN109712149A (en) A kind of image partition method based on wavelet energy and fuzzy C-mean algorithm
Li et al. Sequential dynamic leadership inference using Bayesian Monte Carlo methods
CN106874862A (en) People counting method based on submodule technology and semi-supervised learning
CN110490894A (en) Background separating method before the video decomposed based on improved low-rank sparse
CN113281999A (en) Unmanned aerial vehicle autonomous flight training method based on reinforcement learning and transfer learning
CN111488552A (en) Close-proximity multi-target tracking method based on Gaussian mixture probability hypothesis density
MacDonald et al. Individual behavior at habitat edges may help populations persist in moving habitats
Zhang et al. A bionic dynamic path planning algorithm of the micro UAV based on the fusion of deep neural network optimization/filtering and hawk-eye vision
CN104517121A (en) Spatial big data dictionary learning method based on particle swarm optimization
Ollinger et al. Maximum likelihood reconstruction in fully 3D PET via the SAGE algorithm
CN117116096A (en) Airport delay prediction method and system based on multichannel traffic image and depth CNN
CN109190693B (en) Variant target high-resolution range profile recognition method based on block sparse Bayesian learning
CN115907079B (en) Airspace traffic flow prediction method based on attention space-time diagram convolutional network
CN111080647A (en) SAR image segmentation method based on adaptive sliding window filtering and FCM
CN115856811A (en) Micro Doppler feature target classification method based on deep learning
CN112215869B (en) Group target tracking method and system based on graph similarity constraint
CN114565861A (en) Airborne downward-looking target image positioning method based on probability statistic differential homoembryo set matching
CN111582299B (en) Self-adaptive regularization optimization processing method for image deep learning model identification
CN113126052A (en) High-resolution range profile target identification online library building method based on stage-by-stage segmentation training
CN112233141A (en) Moving target tracking method and system based on unmanned aerial vehicle vision in electric power scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant