CN112115815A - Target tracking method based on laser anti-unmanned aerial vehicle system


Info

Publication number: CN112115815A
Application number: CN202010901232.XA
Authority: CN (China)
Prior art keywords: image, image block, filter, detection frame, aerial vehicle
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN112115815B
Inventors: Wang Jiefei (王捷飞), Guo Jian (郭健), Fang Linfeng (方林峰), Liu Jinkui (刘金魁)
Current and Original Assignee: Nanjing University of Science and Technology
Application filed by Nanjing University of Science and Technology; priority to CN202010901232.XA
Publication of application CN112115815A; application granted; publication of grant CN112115815B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a target tracking method based on a laser anti-unmanned aerial vehicle system. The method acquires a target detection frame from the first frame image, intercepts an image block containing the detection frame, and extracts and processes the fHOG features of the image block to obtain a gradient feature map; calculates a salient feature map of the image block using an MBD function and a raster scanning method; trains a filter by ridge regression, taking the gradient feature map as the input sample and the salient feature map as the output label; acquires the next frame image, samples an image block at the center of the image, extracts its gradient feature map, and calculates the response of the image block with the filter, the position of maximum response being the updated position of the unmanned aerial vehicle and yielding a new target detection frame; and re-extracts the feature map at the new detection frame and retrains the filter to track the target in subsequent frames. The method improves on the KCF algorithm so that it can be applied to tracking tasks under a dynamic background, alleviating the loss of the tracked target caused by contamination of the filter when the background changes.

Description

Target tracking method based on laser anti-unmanned aerial vehicle system
Technical Field
The invention relates to intelligent striking technology for low-altitude civil unmanned aerial vehicles, and in particular to a target tracking method based on a laser anti-unmanned aerial vehicle system.
Background
To counter the "black flight" (unauthorized flight) of low-altitude unmanned aerial vehicles, detecting the target with machine vision and firing a high-energy laser to strike the unmanned aerial vehicle has become a novel countermeasure. The invention is based on a laser anti-unmanned aerial vehicle system in which a camera and a laser emission head are mounted on a two-degree-of-freedom servo turntable; the low-altitude unmanned aerial vehicle is detected by image processing, the turntable is controlled by visual servoing to achieve high-precision tracking and aiming, and once aiming is complete the laser is fired to strike the unmanned aerial vehicle.
During target tracking, the servo turntable rotates the camera, so the image background changes significantly; the tracking task is therefore performed under a dynamic background. Most existing target tracking algorithms are designed for tracking under a static background and cannot meet this requirement, while tracking algorithms usable under a dynamic background suffer from poor real-time performance.
The KCF algorithm is a target tracking method with good real-time performance. However, it is suited to static-background tracking, and the filter it uses to match the target can be contaminated by drastic changes in the background, resulting in loss of the target. A drastic background change occurs, for example, when the unmanned aerial vehicle flies from an open-sky background to a position in front of a building: from the camera's viewpoint, the background around the unmanned aerial vehicle changes from sky to the outer wall of the building.
Therefore, it is necessary to design a tracking method that runs in real time and can be applied under a dynamic background.
Disclosure of Invention
The invention discloses a target tracking method based on a laser anti-unmanned aerial vehicle system. The method improves on the KCF algorithm so that it can be applied to tracking tasks under a dynamic background, alleviating the loss of the tracked target caused by contamination of the filter when the background changes.
The technical solution for realizing the purpose of the invention is as follows: a target tracking method based on a laser anti-unmanned aerial vehicle system specifically comprises the following steps:
step 1, acquiring a target detection frame of a first frame image, intercepting an image block comprising the detection frame, extracting and processing an fHOG feature of the image block to obtain a gradient feature map x, specifically:
step 1-1, acquiring the position of the unmanned aerial vehicle in a first frame image by adopting a target detection algorithm, and acquiring a target detection frame (x, y, w, h), wherein (x, y) is a coordinate of the upper left corner of the detection frame, and (w, h) is the width and height of the detection frame;
step 1-2, intercepting an image block centered at $\left(x+\frac{w}{2},\; y+\frac{h}{2}\right)$, the length and width of the image block being k times those of the target frame, extracting fHOG features from the image block, and reducing the dimension by PCA (principal component analysis) to obtain an original feature map tmp;
step 1-3, in order to reduce the boundary effect of the features, multiplying the original feature map by a Hanning window to obtain the gradient feature map x, according to the formula:

$x_{i,j} = \mathrm{tmp}_{i,j} \times \mathrm{hann}_{i,j}$ (1)

the mathematical expression of the Hanning window is as follows, where Rows and Cols respectively denote the numbers of rows and columns of the feature map:

$\mathrm{hann}_{i,j} = \frac{1}{4}\left(1-\cos\frac{2\pi i}{\mathrm{Rows}-1}\right)\left(1-\cos\frac{2\pi j}{\mathrm{Cols}-1}\right)$ (2)
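For illustration only (not part of the patent text), a minimal Python/NumPy sketch of steps 1-2 and 1-3 follows; the helper names, the simplified boundary clipping, and the per-channel windowing are assumptions, and the fHOG extraction with PCA is assumed to be provided elsewhere:

import numpy as np

def crop_patch(image, box, k=2.5):
    # Crop a patch k times the target size, centered on the detection box
    # center (x + w/2, y + h/2); out-of-image parts are simply clipped here.
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    pw, ph = int(k * w), int(k * h)
    x0, y0 = int(cx - pw / 2.0), int(cy - ph / 2.0)
    return image[max(y0, 0):y0 + ph, max(x0, 0):x0 + pw]

def apply_hann(feature_map):
    # Multiply every channel of the (Rows x Cols x C) feature map by the
    # 2-D Hanning window of formula (2) to reduce boundary effects.
    rows, cols = feature_map.shape[:2]
    hann = np.outer(np.hanning(rows), np.hanning(cols))
    return feature_map * hann[:, :, None]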
step 2, calculating a salient feature map y corresponding to the image block by using a raster scanning method, specifically:
step 2-1, calculating MBD characteristics of the image block by using a raster scanning method;
a path on an image is composed of a series of adjacent pixel points, the path cannot turn around in the process of extending from a starting point to an end point, namely, horizontal and vertical coordinates of the sequentially passing pixel points can only change in a single direction, MBD characteristics of the path represent the difference between the maximum pixel value and the minimum pixel value on the path, and any pixel p of an image blocki,jMBD characteristic of (a), (b), (c), (d), (p)i,j) The minimum MBD feature value in all paths formed with the pixel as the starting point and any edge pixel as the end point is shown.
Simplifying the calculation of the MBD characteristics by adopting a raster scanning method, and storing the MBD characteristics of each pixel and the maximum pixel value U (p) on the corresponding pathi,j) And a minimum pixel value L (p)i,j). Setting the initial characteristic value of each pixel to be infinite, and performing forward raster scanning, namely, from left to right and from top to bottom, firstly scanning a line, and then moving to the initial position of the next line to continue scanning. If the new path formed by the path of the upper neighborhood or the left neighborhood and the current pixel point has a smaller MBD characteristic value, updating according to the following formula:
β(pi,j)=min{max{U(pi-1,j),I(pi,j)}-min{L(pi-1,j),I(pi,j)},β(pi,j)}
U(pi,j)=max{U(pi-1,j),I(pi,j)} (3)
L(pi,j)=min{L(pi-1,j),I(pi,j)}
wherein p isi-1,jPixel points representing the upper neighborhood, I (p)i,j) Representing the pixel value. The left neighborhood is computed in the same way. After the forward scanning is finished, performing reverse scanning again, and updating by using the pixel points of the lower neighborhood and the right neighborhood;
and 2-2, normalizing the value of each pixel of the feature map to be between 0 and 1 to obtain the significant feature map y.
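A minimal Python sketch of this raster-scan MBD computation, assuming a single-channel grayscale image with values in [0, 1]; border pixels act as path end points (beta = 0 there), one forward and one backward pass apply the update of formula (3), and the final division implements the normalization of step 2-2:

import numpy as np

def mbd_saliency(img, n_passes=2):
    # img: 2-D grayscale array; beta(p) approximates the minimum barrier
    # distance from p to the image border after alternating raster scans.
    h, w = img.shape
    beta = np.full((h, w), np.inf)
    beta[0, :] = beta[-1, :] = beta[:, 0] = beta[:, -1] = 0.0  # border seeds
    U = img.copy()  # maximum pixel value on the current best path
    L = img.copy()  # minimum pixel value on the current best path
    for n in range(n_passes):
        forward = (n % 2 == 0)
        ii = range(h) if forward else range(h - 1, -1, -1)
        jj = range(w) if forward else range(w - 1, -1, -1)
        d = -1 if forward else 1  # upper/left, then lower/right neighbors
        for i in ii:
            for j in jj:
                for ni, nj in ((i + d, j), (i, j + d)):
                    if 0 <= ni < h and 0 <= nj < w:
                        u = max(U[ni, nj], img[i, j])
                        lo = min(L[ni, nj], img[i, j])
                        if u - lo < beta[i, j]:  # smaller barrier: update (3)
                            beta[i, j], U[i, j], L[i, j] = u - lo, u, lo
    return beta / beta.max() if beta.max() > 0 else beta  # step 2-2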
Step 3, taking the gradient feature map x as the input sample and the salient feature map y as the output label, and training the parameter α of the filter by ridge regression, specifically:
step 3-1, ridge regression is a linear model, but in this problem the sample and the label are nonlinearly related, so the sample features must be mapped into a high-dimensional space by a nonlinear mapping function, in which they are linearly related to the label and ridge regression can be applied.
Computing the parameters of the filter in the high-dimensional space, i.e. the coefficients of the ridge regression, can be reduced to computing inner products of high-dimensional features. A Gaussian kernel function maps the inner-product operation of the low-dimensional space to the high-dimensional space, avoiding both the explicit high-dimensional inner product and the explicit form of the nonlinear mapping function.
The formula of the Gaussian kernel function is:

$k^{x_1 x_2} = \exp\left(-\frac{1}{\sigma^{2}}\left(\lVert x_1\rVert^{2} + \lVert x_2\rVert^{2} - 2F^{-1}\left(\hat{x}_1^{\ast} \odot \hat{x}_2\right)\right)\right)$ (4)

where $k^{x_1 x_2}$ is the Gaussian mapping value of the inner product of the two low-dimensional features, $\sigma$ is the width parameter of the Gaussian kernel, $\lVert\cdot\rVert^{2}$ denotes the squared two-norm, $F^{-1}(\cdot)$ denotes the inverse discrete Fourier transform, $\hat{x}$ denotes the discrete Fourier transform of a feature (e.g. $\hat{x}_1$, $\hat{x}_2$ are the discrete Fourier transforms of the features $x_1$, $x_2$), $^{\ast}$ denotes the complex conjugate, and $\odot$ denotes elementwise (dot) product;
step 3-2, calculating the parameters of the filter according to the formula:

$\hat{\alpha} = \frac{\hat{y}}{\hat{k}^{xx} + \lambda}$ (5)

where $\hat{\alpha}$ and $\hat{y}$ denote the discrete Fourier transforms of $\alpha$ and $y$, $\hat{k}^{xx}$ is obtained by substituting $x_1 = x$, $x_2 = x$ into formula (4) and taking the discrete Fourier transform, and $\lambda$ is the regularization coefficient.
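A hedged NumPy sketch of formulas (4) and (5), shown for a single-channel feature map; multi-channel fHOG features would sum the correlation term over channels, and the division by the number of elements inside the exponent follows common KCF implementations rather than anything stated in formula (4):

import numpy as np

def gaussian_correlation(x1, x2, sigma=0.6):
    # k^{x1 x2} of formula (4): Fourier-domain circular cross-correlation
    # avoids evaluating the high-dimensional inner product explicitly.
    c = np.fft.ifft2(np.conj(np.fft.fft2(x1)) * np.fft.fft2(x2)).real
    d = (np.sum(x1 ** 2) + np.sum(x2 ** 2) - 2.0 * c) / x1.size
    return np.exp(-np.maximum(d, 0) / (sigma ** 2))

def train_filter(x, y, sigma=0.6, lam=1e-4):
    # alpha_hat = y_hat / (k_hat^{xx} + lambda), formula (5).
    k = gaussian_correlation(x, x, sigma)
    return np.fft.fft2(y) / (np.fft.fft2(k) + lam)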
Step 4, acquiring the next frame image, sampling an image block at the center of the image, extracting its feature map z, and calculating the response of the image block with the filter, the position of maximum response being the updated position of the unmanned aerial vehicle, yielding a new target detection frame, specifically:
step 4-1, intercepting an image block at the center of the next frame image, the size of the image block being the same as in step 1, and extracting the gradient feature map z of the image block by the method of step 1;
and 4-2, calculating the response of the feature map z, the calculation formula being:

$f(z) = F^{-1}\left(\hat{k}^{xz} \odot \hat{\alpha}\right)$ (6)

where $f(z)$ is the response of the feature map z, $\hat{k}^{xz}$ is obtained by substituting $x_1 = x$, $x_2 = z$ into formula (4) and taking the discrete Fourier transform, and $\hat{\alpha}$ is the discrete Fourier transform of the filter $\alpha$ obtained in step 3;
and 4-3, calculating the coordinates of the position of maximum response in the feature map to obtain the updated position of the unmanned aerial vehicle, the new target detection frame being centered at the updated position and having the same size as the original detection frame.
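Continuing the sketch above, formula (6) and step 4-3 amount to one inverse FFT and an argmax; wrap-around and sub-pixel refinement of the peak are omitted for brevity:

import numpy as np

def detect(alpha_hat, x, z, sigma=0.6):
    # f(z) = F^{-1}(k_hat^{xz} .* alpha_hat), formula (6); the peak of the
    # real response gives the updated target position within the patch.
    k = gaussian_correlation(x, z, sigma)  # from the training sketch above
    response = np.fft.ifft2(np.fft.fft2(k) * alpha_hat).real
    peak = np.unravel_index(np.argmax(response), response.shape)
    return response, peak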
And 5, re-extracting the feature map at the new detection frame, retraining the filter, and repeating steps 1-4 to track the target in subsequent frames.
A target tracking system of a laser anti-unmanned aerial vehicle system, adopting the above target tracking method of the laser anti-unmanned aerial vehicle system and comprising:
the gradient feature extraction module is used for acquiring a target detection frame of the image, intercepting an image block containing the detection frame, and extracting and processing the fHOG features of the image block to obtain a gradient feature map;
the salient feature extraction module is used for calculating the salient feature map of the image block by using an MBD function and a raster scanning method;
the filter training module is used for training the filter by ridge regression, taking the gradient feature map as an input sample and the salient feature map as an output label;
and the unmanned aerial vehicle tracking module is used for sampling an image block at the center of the image, extracting its gradient feature map, calculating the response of the image block with the filter, and determining the updated position of the unmanned aerial vehicle.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the above-mentioned laser anti-drone system target tracking method.
A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the above-described laser anti-drone system target tracking method.
Compared with the prior art, the invention has notable advantages: 1) the original KCF algorithm samples near the target position of the previous frame and can therefore only track under a static background; because the camera rotates, the target position of the previous frame corresponds to the image center of the current frame, so the method samples at the image center and can be used for target tracking under a dynamic background; 2) the original KCF algorithm trains the filter with output labels generated by a two-dimensional Gaussian function, in which the background occupies a considerable weight; when the background changes drastically during tracking, the filter tends to track the original background rather than the unmanned aerial vehicle against the new background. The invention trains the filter with the salient feature map of the unmanned aerial vehicle, which weakens the weight of the background and alleviates tracking loss.
Drawings
Fig. 1 is a flow chart of a target tracking method based on a laser anti-drone system according to the present invention.
Fig. 2 is a diagram of an image block and its corresponding salient feature map in step 2 of the present invention.
Fig. 3 is a diagram of the effect of the invention on unmanned aerial vehicle tracking.
FIG. 4 is a graph comparing the effect of the present invention with the original KCF algorithm.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
A target tracking method of a laser anti-unmanned aerial vehicle system specifically comprises the following steps:
step 1, acquiring a target detection frame of a first frame image, intercepting an image block comprising the detection frame, extracting and processing an fHOG feature of the image block to obtain a gradient feature map x, specifically:
step 1-1, acquiring the position of the unmanned aerial vehicle in a first frame image by adopting a YOLOv4-tiny algorithm to obtain a target detection frame (x, y, w, h);
step 1-2, intercepting an image block centered at $\left(x+\frac{w}{2},\; y+\frac{h}{2}\right)$, the length and width of the image block being 2.5 times those of the target frame, extracting fHOG features from the image block, and reducing them to 31 dimensions by PCA (principal component analysis) to obtain the original feature map tmp;
and 1-3, in order to reduce the boundary effect of the features, multiplying the original feature map by a Hanning window using formula (1) to obtain the gradient feature map x, the mathematical expression of the Hanning window being given by formula (2).
Step 2, calculating a salient feature map y corresponding to the image block by using a raster scanning method, specifically:
step 2-1, simplifying the computation of the MBD features by the raster scanning method, storing for each pixel its MBD feature together with the maximum pixel value $U(p_{i,j})$ and the minimum pixel value $L(p_{i,j})$ on the corresponding path. The initial feature value of each pixel is set to infinity, and a forward raster scan is performed, i.e. from left to right and from top to bottom, scanning one row and then moving to the start of the next row to continue scanning. If the new path formed by extending the path of the upper or left neighbor with the current pixel has a smaller MBD feature value, the update of formula (3) is applied; the left neighbor is computed in the same way as the upper neighbor. After the forward scan is finished, a backward scan is performed, updating from the lower-neighbor and right-neighbor pixels;
and 2-2, normalizing the value of each pixel of the feature map to between 0 and 1 to obtain the salient feature map y. Fig. 2 shows an image block and its corresponding salient feature map.
Step 3, taking the gradient feature map x as the input sample and the salient feature map y as the output label, and training the parameter α of the filter by ridge regression, specifically:
step 3-1, mapping the feature inner product of the low-dimensional space to the high-dimensional space using formula (4), with σ = 0.6;
and 3-2, calculating the parameters of the filter using formula (5), with λ = 0.0001.
Step 4, acquiring the next frame image, sampling an image block at the center of the image, extracting its feature map z, and calculating the response of the image block with the filter, the position of maximum response being the updated position of the unmanned aerial vehicle, yielding a new target detection frame, specifically:
step 4-1, intercepting an image block at the center of the next frame image, the size of the image block being the same as in step 1, and extracting the gradient feature map z of the image block by the method of step 1;
step 4-2, calculating the response of the feature map z using formula (6);
and 4-3, calculating the coordinates of the position of maximum response in the feature map to obtain the updated position of the unmanned aerial vehicle. The new target detection frame is centered at the updated position and has the same size as the original detection frame. Fig. 3 shows the updated unmanned aerial vehicle detection frame.
And 5, re-extracting the feature map at the new detection frame, retraining the filter, and repeating steps 1-4. Fig. 4 compares the effect of the present invention with the original KCF algorithm; it can be seen that the invention alleviates the target tracking loss caused by contamination of the filter when the background changes.
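Tying the embodiment together, the following hedged sketch (for illustration only) runs the whole loop with the concrete values named above (patch factor 2.5, σ = 0.6, λ = 0.0001); extract_fhog_pca, rgb_to_gray, center_patch, and recenter_box are hypothetical helpers standing in for the fHOG + PCA step and the bookkeeping, while the remaining functions are the sketches from the earlier steps:

def track(frames, first_box, k=2.5, sigma=0.6, lam=1e-4):
    # Train on the first frame: gradient features as samples (step 1),
    # the MBD salient map as the label (step 2), ridge regression (step 3).
    box = first_box
    patch = crop_patch(frames[0], box, k)
    x = apply_hann(extract_fhog_pca(patch))   # hypothetical fHOG + PCA helper
    y = mbd_saliency(rgb_to_gray(patch))      # hypothetical grayscale helper
    alpha_hat = train_filter(x, y, sigma, lam)
    for frame in frames[1:]:
        # Step 4: sample at the image center, since the servo turntable
        # keeps the previous target position at the center of the new frame.
        z_patch = center_patch(frame, patch.shape[:2])  # hypothetical helper
        z = apply_hann(extract_fhog_pca(z_patch))
        _, peak = detect(alpha_hat, x, z, sigma)
        box = recenter_box(box, peak)         # hypothetical bookkeeping
        # Step 5: re-extract features at the new box and retrain the filter.
        patch = crop_patch(frame, box, k)
        x = apply_hann(extract_fhog_pca(patch))
        y = mbd_saliency(rgb_to_gray(patch))
        alpha_hat = train_filter(x, y, sigma, lam)
        yield box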
The invention also provides a target tracking system of the laser anti-unmanned aerial vehicle system, which comprises the following components:
the gradient feature extraction module is used for acquiring a target detection frame of the image, intercepting an image block comprising the detection frame, extracting and processing the fHOG feature of the image block to obtain a gradient feature map;
the salient feature extraction module is used for calculating a salient feature map of the image block by using an MBD function and a raster scanning method;
the filter training module is used for training the filter by ridge regression, taking the gradient feature map as an input sample and the salient feature map as an output label;
and the unmanned aerial vehicle tracking module is used for collecting an image block at the central position of the image, calculating the response of the image block by using a filter, and determining the position of the maximum response as the updated position of the unmanned aerial vehicle.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the above-mentioned laser anti-drone system target tracking method.
A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the above-described laser anti-drone system target tracking method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (8)

1. A target tracking method based on a laser anti-unmanned aerial vehicle system is characterized by specifically comprising the following steps:
step 1, acquiring a target detection frame of a first frame image, intercepting an image block containing the detection frame, and extracting and processing the fHOG features of the image block to obtain a gradient feature map;
step 2, calculating a salient feature map of the image block by using an MBD function and a raster scanning method;
step 3, taking the gradient feature map as an input sample and the salient feature map as an output label, and training a filter by ridge regression;
step 4, acquiring the next frame image, sampling an image block at the center of the image, extracting its gradient feature map, and calculating the response of the feature map with the filter, the position of maximum response being the updated position of the unmanned aerial vehicle, thereby obtaining a new target detection frame;
and step 5, re-extracting the feature map at the new detection frame, retraining the filter, and repeating steps 1-4 to perform target tracking on subsequent frames.
2. The target tracking method based on the laser anti-unmanned aerial vehicle system according to claim 1, wherein in step 1, a target detection frame of a first frame image is obtained, an image block including the detection frame is intercepted, an fHOG feature of the image block is extracted and processed to obtain a gradient feature map x, and the specific method is as follows:
step 1-1, acquiring the position of the unmanned aerial vehicle in a first frame image by adopting a target detection algorithm, and acquiring a target detection frame (x, y, w, h), wherein (x, y) is a coordinate of the upper left corner of the detection frame, and (w, h) is the width and height of the detection frame;
step 1-2, intercepting an image block centered at $\left(x+\frac{w}{2},\; y+\frac{h}{2}\right)$, the length and width of the image block being k times those of the target frame, extracting fHOG features from the image block, and reducing the dimension by PCA (principal component analysis) to obtain an original feature map tmp;
step 1-3, multiplying the original feature map by a Hanning window to obtain the gradient feature map x, according to the formula:

$x_{i,j} = \mathrm{tmp}_{i,j} \times \mathrm{hann}_{i,j}$ (1)

the mathematical expression of the Hanning window is as follows, where Rows and Cols respectively denote the numbers of rows and columns of the feature map:

$\mathrm{hann}_{i,j} = \frac{1}{4}\left(1-\cos\frac{2\pi i}{\mathrm{Rows}-1}\right)\left(1-\cos\frac{2\pi j}{\mathrm{Cols}-1}\right)$ (2)
3. the target tracking method based on the laser anti-UAV system according to claim 1, wherein in step 2, the significant feature map y of the image block is calculated by using an MBD function and a raster scanning method, and the specific method is as follows:
step 2-1, calculating MBD characteristics of the image block by using a raster scanning method;
a path on an image is composed of a series of adjacent pixel points; the path cannot turn back while extending from the starting point to the end point, i.e. the horizontal and vertical coordinates of the pixel points passed in sequence can only change monotonically; the MBD feature of a path is the difference between the maximum and minimum pixel values on the path, and for any pixel $p_{i,j}$ of the image block, the MBD feature $\beta(p_{i,j})$ is the minimum MBD feature value over all paths formed with that pixel as the starting point and any edge pixel as the end point;
the raster scanning method simplifies the computation of the MBD features by storing, for each pixel, its MBD feature together with the maximum pixel value $U(p_{i,j})$ and the minimum pixel value $L(p_{i,j})$ on the corresponding path; the initial feature value of each pixel is set to infinity and a forward raster scan is performed, i.e. from left to right and from top to bottom, scanning one row and then moving to the start of the next row to continue scanning; if the new path formed by extending the path of the upper or left neighbor with the current pixel has a smaller MBD feature value, updating according to the following formula (shown for the upper neighbor; the left neighbor is handled in the same way):

$\beta(p_{i,j}) = \min\{\max\{U(p_{i-1,j}), I(p_{i,j})\} - \min\{L(p_{i-1,j}), I(p_{i,j})\},\; \beta(p_{i,j})\}$
$U(p_{i,j}) = \max\{U(p_{i-1,j}), I(p_{i,j})\}$ (3)
$L(p_{i,j}) = \min\{L(p_{i-1,j}), I(p_{i,j})\}$

wherein $p_{i-1,j}$ denotes the upper-neighbor (or correspondingly left-neighbor) pixel and $I(p_{i,j})$ the pixel value; after the forward scan is finished, a backward scan is performed, updating from the lower-neighbor and right-neighbor pixels;
and 2-2, normalizing the value of each pixel of the feature map to between 0 and 1 to obtain the salient feature map y.
4. The target tracking method based on the laser anti-UAV system according to claim 1, wherein in step 3 the gradient feature map x is taken as the input sample and the salient feature map y as the output label, and the filter α is trained by ridge regression, specifically:
step 3-1, mapping the gradient feature x to a high-dimensional space using a Gaussian kernel function, the formula of the Gaussian kernel function being:

$k^{x_1 x_2} = \exp\left(-\frac{1}{\sigma^{2}}\left(\lVert x_1\rVert^{2} + \lVert x_2\rVert^{2} - 2F^{-1}\left(\hat{x}_1^{\ast} \odot \hat{x}_2\right)\right)\right)$ (4)

where $k^{x_1 x_2}$ is the Gaussian mapping value of the inner product of the two low-dimensional features, $\sigma$ is the width parameter of the Gaussian kernel, $\lVert\cdot\rVert^{2}$ denotes the squared two-norm, $F^{-1}(\cdot)$ denotes the inverse discrete Fourier transform, $\hat{x}_1$, $\hat{x}_2$ denote the discrete Fourier transforms of the features $x_1$, $x_2$, $^{\ast}$ denotes the complex conjugate, and $\odot$ denotes elementwise (dot) product;
step 3-2, calculating the parameters of the filter according to the formula:

$\hat{\alpha} = \frac{\hat{y}}{\hat{k}^{xx} + \lambda}$ (5)

where $\hat{\alpha}$ and $\hat{y}$ denote the discrete Fourier transforms of $\alpha$ and $y$, $\hat{k}^{xx}$ is obtained by substituting $x_1 = x$, $x_2 = x$ into formula (4) and taking the discrete Fourier transform, and $\lambda$ is the regularization coefficient.
5. The target tracking method based on the laser anti-UAV system according to claim 1, wherein in step 4, the specific method for calculating the image block response by using the filter is as follows:
step 4-1, intercepting an image block at the center of the next frame of image, wherein the size of the image block is the same as that in the step 1, and extracting a gradient feature map z of the image block by adopting the method in the step 1;
and 4-2, calculating the response of the feature map z, the calculation formula being:

$f(z) = F^{-1}\left(\hat{k}^{xz} \odot \hat{\alpha}\right)$ (6)

where $f(z)$ is the response of the feature map z, $\hat{k}^{xz}$ is obtained by substituting $x_1 = x$, $x_2 = z$ into formula (4) and taking the discrete Fourier transform, and $\hat{\alpha}$ is the discrete Fourier transform of the filter $\alpha$ obtained in step 3;
and 4-3, calculating the coordinates of the position of maximum response in the feature map to obtain the updated position of the unmanned aerial vehicle, the new target detection frame being centered at the updated position and having the same size as the original detection frame.
6. A target tracking system of a laser anti-unmanned aerial vehicle system, characterized in that it adopts the target tracking method of the laser anti-unmanned aerial vehicle system according to any one of claims 1-5, and comprises:
the gradient feature extraction module is used for acquiring a target detection frame of the image, intercepting an image block containing the detection frame, and extracting and processing the fHOG features of the image block to obtain a gradient feature map;
the salient feature extraction module is used for calculating the salient feature map of the image block by using an MBD function and a raster scanning method;
the filter training module is used for training the filter by ridge regression, taking the gradient feature map as an input sample and the salient feature map as an output label;
and the unmanned aerial vehicle tracking module is used for sampling an image block at the center of the image, extracting its gradient feature map, calculating the response of the image block with the filter, and determining the updated position of the unmanned aerial vehicle.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor performing the laser anti-drone system target tracking method of any one of claims 1-5.
8. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the laser anti-drone system target tracking method of any one of claims 1-5.
CN202010901232.XA 2020-08-31 2020-08-31 Target tracking method based on laser anti-unmanned aerial vehicle system Active CN112115815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010901232.XA CN112115815B (en) 2020-08-31 2020-08-31 Target tracking method based on laser anti-unmanned aerial vehicle system


Publications (2)

Publication Number Publication Date
CN112115815A 2020-12-22
CN112115815B 2022-12-06

Family

Family ID: 73805075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010901232.XA Active CN112115815B (en) 2020-08-31 2020-08-31 Target tracking method based on laser anti-unmanned aerial vehicle system

Country Status (1)

Country: CN
Publication: CN112115815B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109359536A (en) * 2018-09-14 2019-02-19 华南理工大学 Passenger behavior monitoring method based on machine vision
CN109685073A (en) * 2018-12-28 2019-04-26 南京工程学院 A kind of dimension self-adaption target tracking algorism based on core correlation filtering
US20190287264A1 (en) * 2018-03-14 2019-09-19 Tata Consultancy Services Limited Context based position estimation of target of interest in videos


Also Published As

Publication number Publication date
CN112115815B (en) 2022-12-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Guo Jian; Wang Jiefei; Fang Linfeng; Liu Jinkui
Inventor before: Wang Jiefei; Guo Jian; Fang Linfeng; Liu Jinkui
GR01 Patent grant