CN108961311A - Dual-mode rotorcraft target tracking method - Google Patents

Dual-mode rotorcraft target tracking method

Info

Publication number
CN108961311A
CN108961311A
Authority
CN
China
Prior art keywords
target
value
tracking
image
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810639869.9A
Other languages
Chinese (zh)
Other versions
CN108961311B (en)
Inventor
田彦涛
张磊
石屹然
付春阳
黄海洋
洪伟
卢辉遒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN201810639869.9A priority Critical patent/CN108961311B/en
Publication of CN108961311A publication Critical patent/CN108961311A/en
Application granted granted Critical
Publication of CN108961311B publication Critical patent/CN108961311B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/12 Target-seeking control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a dual-mode rotorcraft target tracking method. The designed system has two tracking modes. The first is based on an electronic tag: in this mode the rotorcraft can obtain target information from a relatively long range and thus execute the tracking task. The second is vision-based: the rotorcraft obtains the target's position through a vision system and then calculates the distance of the target relative to the rotorcraft from the target's position in the image, completing the tracking task more accurately. The target tracking algorithm used by this dual-mode method has good robustness, resisting both the shake generated during flight and background variation. Through the cooperative use of the two tracking modes, the system's practicality in the field of aircraft target tracking is improved.

Description

Dual-mode rotorcraft target tracking method
Technical Field
The invention relates to the field of ground maneuvering target tracking by a rotorcraft, and in particular to a dual-mode rotorcraft target tracking method.
Background Art
A rotorcraft is a type of unmanned aerial vehicle (UAV) and is now widely used in reconnaissance, rescue and disaster relief, aerial photography, and other tasks. Making UAVs more intelligent is an important direction of development: on the front line of disaster relief, a UAV can reach places that rescue workers cannot and begin a search-and-rescue task first; on a downhill ski run, a UAV can follow and film the skier, recording the entire descent. The more autonomously UAVs can accomplish such difficult tasks, the better, so research on the UAV moving-target tracking problem has very important practical significance.
Current research on rotorcraft target tracking methods focuses mainly on image-based approaches, but image-based tracking can only start once the target is in the camera's field of view, so its range of application is narrow. For example, patent CN201710631781.8 discloses a long-term stable target tracking method for unmanned aerial vehicles, but it does not consider the cases in which the target is lost or is not in the field of view.
As for the image-tracking algorithm itself, most visual tracking algorithms operate under the assumption of a static camera. For target tracking from a rotorcraft, the camera may shake at any time and various disturbances can occur, which makes image tracking in this setting very difficult and places high demands on the robustness of the algorithm. The unmanned aerial vehicle target tracking method disclosed in CN201710322060.9 does not consider the problems caused by background interference and camera shake during flight.
Disclosure of Invention
In order to solve the problems that, in the prior art, the ground maneuvering target tracking range of a rotorcraft is small and the robustness of the tracking algorithm is weak, the invention provides a dual-mode tracking method.
In order to achieve the purpose, the invention adopts the following technical scheme:
In the first step, when the rotorcraft begins target tracking from a distance, the first tracking mode is started. The position of the maneuvering target is acquired by a GPS positioning module in an electronic tag carried by the ground maneuvering target; the positioning information is processed by a data processing module in the electronic tag and converted into absolute longitude and latitude values, and the position information is then sent to the multi-rotor aircraft through a wireless transmission module.
In the second step, the multi-rotor aircraft receives the position information U(LonU, LatU) of the ground maneuvering target transmitted by the electronic tag through the wireless receiving module; then, in the onboard information processing module, the difference between this position and the longitude/latitude value T(LonT, LatT) resolved by the GPS positioning module carried by the multi-rotor aircraft is calculated.
Here LonU is the longitude value of the target, LatU is the latitude value of the target, LonT is the longitude value of the multi-rotor aircraft, and LatT is the latitude value of the multi-rotor aircraft.
In the third step, the longitude/latitude difference finally obtained in the previous step is converted into an actual distance value by a conversion calculation.
The calculated distance values are taken as the input values x and y of the rotorcraft, and the rotorcraft is controlled to fly toward the target position.
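The conversion formula itself is not reproduced above. As an illustration only, the following is a minimal sketch assuming a standard flat-earth (equirectangular) approximation; the Earth-radius constant and all names are assumptions for the sketch, not values taken from the patent:

import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius; an assumed constant, not from the patent

def latlon_diff_to_metres(lon_u, lat_u, lon_t, lat_t):
    """Convert the longitude/latitude difference between target U(LonU, LatU)
    and aircraft T(LonT, LatT) into local east (x) and north (y) distances
    in metres, using a small-angle equirectangular approximation."""
    d_lon = math.radians(lon_u - lon_t)
    d_lat = math.radians(lat_u - lat_t)
    # One degree of longitude shrinks toward the poles, so scale by cos(latitude).
    x = EARTH_RADIUS_M * d_lon * math.cos(math.radians(lat_t))
    y = EARTH_RADIUS_M * d_lat
    return x, y

# Example: a target a few hundred metres north-east of the aircraft.
x, y = latlon_diff_to_metres(125.2897, 43.8170, 125.2885, 43.8161)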
In the fourth step, during tracking, the camera payload continuously acquires images, which are transmitted synchronously to the onboard image processing module and to the ground PC; whether the target has entered the camera's field of view is checked manually at the ground PC.
In the fifth step, after the target enters the camera's field of view, the ground maneuvering target is manually framed in the image through the ground PC; once the framing operation is finished, the second tracking mode is started.
In the sixth step, in the first frame image containing the selection box, a cyclic shift operation is performed on the data of the target area. Let an image patch in the target area be X; translating it downward by a units and rightward by b units gives the shifted patch X_1:

X_1 = P^a X P^b

where P is the permutation matrix that cyclically shifts by one element,

P = [[0, 0, ..., 0, 1], [1, 0, ..., 0, 0], [0, 1, ..., 0, 0], ..., [0, 0, ..., 1, 0]]
selecting an area around a target area through cyclic shift operation so as to obtain a search area, wherein the area is a range to be searched in the next frame of image, and a and b are valued according to the resolution of the image, and finally selecting an area 2.5 times around the target as the search area;
Then HOG features are extracted from the target area in the current frame image; the extracted feature matrix is x_1. A discrete Fourier transform of x_1 gives the appearance model x'_t of the target in the discrete Fourier domain:

x'_t = F(x_1)

where F(x_1) denotes the discrete Fourier transform of x_1.
The kernel autocorrelation K = κ(x'_t, x'_t) is computed. The kernel function used is a Gaussian kernel, of the specific form

K = κ(x'_t, x'_t) = exp(-(||x'_t||² + ||x'_t||² - 2F⁻¹(x̂'_t* ⊙ x̂'_t)) / σ²)

where x̂'_t* is the conjugate matrix of x̂'_t, x̂'_t is obtained from x'_t by discrete Fourier transform, F⁻¹ is the inverse discrete Fourier transform, ⊙ denotes element-wise multiplication, and σ² is the Gaussian kernel bandwidth, which takes the value 0.5.
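A minimal numpy sketch of this FFT-domain Gaussian kernel autocorrelation, treating the feature map as a single 2-D channel (the function name and the normalisation remark are assumptions):

import numpy as np

def gaussian_autocorrelation(x, sigma2=0.5):
    """K = kappa(x, x): the Gaussian kernel between x and all of its
    cyclic shifts at once, with the cross terms computed via the FFT."""
    x_hat = np.fft.fft2(x)
    cross = np.real(np.fft.ifft2(np.conj(x_hat) * x_hat))  # F^-1(x_hat* (.) x_hat)
    sq_norm = np.sum(x ** 2)
    d = np.maximum(2.0 * sq_norm - 2.0 * cross, 0.0)  # ||x||^2 + ||x||^2 - 2<x, shifted x>
    # The text divides by the bandwidth sigma^2 = 0.5; KCF implementations
    # often also normalise d by the number of elements in x.
    return np.exp(-d / sigma2)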
The filter parameters are calculated using a nonlinear regression model:

α'_t = (K + λI)⁻¹y

where K = κ(x'_t, x'_t), λ is a regularization parameter used to prevent overfitting, y is the column vector formed from the regression values y_i (i = 1, 2, 3, ...) of each sample, each y_i being a preset sample regression target value (the preset target position serves as the sample regression target, i.e. the comparison value of the regression calculation, namely the position of the target in the previous frame image), and I is the identity matrix.
The target position and the filter parameters are then updated by linear interpolation:

α_t = (1 - β)α_{t-1} + βα'_t
x_t = (1 - β)x_{t-1} + βx'_t

where α_t is the filter parameter at the moment corresponding to the t-th frame image and x_t is the target position at that moment; β is a preset learning update parameter that determines the degree of dependence on the data of the previous moment, with value range [0, 1] and a typical value of 0.02; α_{t-1} is the filter parameter at the previous moment and x_{t-1} is the target position at the previous moment, the previous moment being the moment corresponding to the previous frame image. For the first frame image, α_1 is obtained by substituting the manually selected target area position into the filter-parameter formula, and x_t is the manually selected target area position.
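Because K is built from cyclic shifts it is circulant, so (K + λI)⁻¹y reduces to an element-wise division in the Fourier domain. A minimal sketch of the training and linear-interpolation update steps (the λ value and function names are assumptions):

import numpy as np

def train_filter(k_auto, y, lam=1e-4):
    """alpha'_t = (K + lam*I)^-1 y, computed as an element-wise division
    because the circulant K diagonalises under the DFT."""
    return np.fft.fft2(y) / (np.fft.fft2(k_auto) + lam)  # alpha_hat, Fourier domain

def interpolate(prev, new, beta=0.02):
    """Linear update for both the filter and the target model:
    v_t = (1 - beta) * v_{t-1} + beta * v'_t."""
    return (1.0 - beta) * prev + beta * new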
When the next frame image enters the calculation, HOG features are extracted in the search area determined by the position in the previous frame; the extracted feature matrix is x_2. A discrete Fourier transform of x_2 gives the appearance model Z_t of the target at the current moment:

Z_t = F(x_2)

where F(x_2) denotes the discrete Fourier transform of x_2.
The kernel correlation between the appearance model Z_t at the current moment and the appearance model x'_t at the previous moment is computed:

K_xz = κ(x'_t, Z_t) = exp(-(||x'_t||² + ||Z_t||² - 2F⁻¹(x̂'_t* ⊙ Ẑ_t)) / σ²)

where x̂'_t* is the conjugate matrix of x̂'_t, Ẑ_t is obtained from Z_t by discrete Fourier transform, F⁻¹ is the inverse discrete Fourier transform, and σ is the Gaussian kernel bandwidth, with a value of 0.5.
From the previously obtained filter parameters, the following regression response function is calculated:

f(Z_t) = F⁻¹(K̂_xz ⊙ α̂_t)

where K̂_xz is the discrete Fourier transform of the first row of elements of the kernel correlation matrix K_xz, α̂_t = F(α_t), and F(x) denotes the discrete Fourier transform of x.
The target position in the current frame is set to the region of maximum amplitude in the response function values; then, taking this position as the target center, the surroundings of the target region are again selected as the search region for the next frame, the filter parameters are updated, and the preceding process is repeated.
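Putting the detection step together, a sketch under the same assumptions as above (k_xz is the kernel correlation between x'_t and Z_t, alpha_hat the trained filter in the Fourier domain):

import numpy as np

def detect(k_xz, alpha_hat):
    """Response map f(Z_t) = F^-1(K_xz_hat (.) alpha_hat); the peak gives
    the displacement of the target between frames."""
    response = np.real(np.fft.ifft2(np.fft.fft2(k_xz) * alpha_hat))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    # Peaks beyond half the window correspond to negative displacements.
    if dy > response.shape[0] // 2:
        dy -= response.shape[0]
    if dx > response.shape[1] // 2:
        dx -= response.shape[1]
    return response, (dx, dy)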
When the target moves out of the field of view of the image acquisition module, tracking switches to the first tracking mode; when the target enters the field of view of the image acquisition module again, the frame selection operation is performed again and the second tracking mode is then executed.
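The dual-mode supervision reduces to a simple state machine. In the sketch below every helper (read_tag, read_own_gps, fly_towards, operator_framed_target, run_kcf_step) is a hypothetical placeholder standing in for the modules of fig. 1, not an API defined by the patent:

TAG_MODE, VISION_MODE = 1, 2

def track(aircraft):
    mode = TAG_MODE
    while True:
        if mode == TAG_MODE:
            # First mode: steer toward the electronic tag's GPS fix.
            x, y = latlon_diff_to_metres(*aircraft.read_tag(), *aircraft.read_own_gps())
            aircraft.fly_towards(x, y)
            if aircraft.operator_framed_target():  # manual framing at the ground PC
                mode = VISION_MODE
        else:
            # Second mode: one visual (KCF-style) tracking step per frame.
            ok = aircraft.run_kcf_step()
            if not ok:  # target left the field of view
                mode = TAG_MODE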
Drawings
FIG. 1 is an overall block diagram of a rotorcraft ground maneuvering target tracking system;
FIG. 2 is a graph of the results of tracking a ground maneuvering target of a rotorcraft in a first tracking mode;
FIG. 3 is a flow chart of the visual tracking algorithm;
Fig. 4(a) -4(i) are graphs of results of tracking a ground maneuvering target of a rotorcraft in a second tracking mode.
Detailed description of the preferred embodiments
The following further illustrates the invention and its embodiments:
As shown in fig. 1, the rotorcraft ground maneuvering target tracking system provided by the invention comprises four main parts: an electronic tag, a rotorcraft unmanned aerial vehicle, a PC, and a remote controller.
The electronic tag comprises wireless transmission module 1, a positioning-information processing unit, and a GPS. The rotorcraft carries an onboard information processing module, an image acquisition module, an image processing module, and a flight control module. The PC is connected to the image processing module through wireless transmission modules 3 and 4, so that image information is exchanged with the ground PC in real time. The remote controller is directly connected to the flight control module and holds the highest control authority, ensuring flight stability.
In the first step, when the rotorcraft begins target tracking from a distance, the first tracking mode is started. The position of the maneuvering target is acquired by the GPS positioning module in the electronic tag carried by the ground maneuvering target; the positioning information is processed by the data processing module in the electronic tag and converted into absolute longitude and latitude values, and the position information is then sent to the multi-rotor aircraft through wireless transmission module 1.
In the second step, the multi-rotor aircraft receives the position information U(LonU, LatU) of the ground maneuvering target transmitted by the electronic tag through wireless receiving module 2; then, in the onboard information processing module, the difference between this position and the longitude/latitude value T(LonT, LatT) resolved by the GPS positioning module carried by the multi-rotor aircraft is calculated.
Here LonU is the longitude value of the target, LatU is the latitude value of the target, LonT is the longitude value of the multi-rotor aircraft, and LatT is the latitude value of the multi-rotor aircraft.
In the third step, the longitude/latitude difference finally obtained in the previous step is converted into an actual distance value by a conversion calculation.
The calculated distance values are taken as the input values x and y of the rotorcraft, and the rotorcraft is controlled to fly toward the target position; the result of tracking the ground moving target in the first tracking mode is shown in fig. 2.
In the fourth step, during tracking, the image acquisition module carried on the rotorcraft continuously acquires images and transmits them to the image processing module; the images are also sent to the ground PC through wireless transmission module 4 and received there through wireless transmission module 3, and ground monitoring personnel manually check whether the target has entered the field of view of the image acquisition module.
In the fifth step, after the target enters the field of view of the image acquisition module, the ground maneuvering target is manually framed in the image through the PC, as shown in fig. 4(a); once the framing operation is finished, the second tracking mode is started.
The target tracking algorithm flow is shown in fig. 3.
In the sixth step, in the first frame image containing the selection box, a cyclic shift operation is performed on the data of the target area. Let an image patch in the target area be X; translating it downward by a units and rightward by b units gives the shifted patch X_1:

X_1 = P^a X P^b

where P is the permutation matrix that cyclically shifts by one element,

P = [[0, 0, ..., 0, 1], [1, 0, ..., 0, 0], [0, 1, ..., 0, 0], ..., [0, 0, ..., 1, 0]]
selecting an area around a target area through cyclic shift operation so as to obtain a search area, wherein the area is a range to be searched in the next frame of image, and a and b are valued according to the resolution of the image, and finally selecting an area 2.5 times around the target as the search area; as shown in fig. 4(a), the white frame is the position where the target is located, and the black frame is the search area. Then, the selected target area is set as a positive sample, and the other areas in the search area are set as negative samples.
Then HOG features are extracted from the target area in the current frame image; the extracted feature matrix is x_1. A discrete Fourier transform of x_1 gives the appearance model x'_t of the target in the discrete Fourier domain:

x'_t = F(x_1)

where F(x_1) denotes the discrete Fourier transform of x_1.
The kernel autocorrelation K = κ(x'_t, x'_t) is computed. The kernel function used is a Gaussian kernel, of the specific form

K = κ(x'_t, x'_t) = exp(-(||x'_t||² + ||x'_t||² - 2F⁻¹(x̂'_t* ⊙ x̂'_t)) / σ²)

where x̂'_t* is the conjugate matrix of x̂'_t, x̂'_t is obtained from x'_t by discrete Fourier transform, F⁻¹ is the inverse discrete Fourier transform, ⊙ denotes element-wise multiplication, and σ² is the Gaussian kernel bandwidth, which takes the value 0.5.
The filter parameters are calculated using a nonlinear regression model:

α'_t = (K + λI)⁻¹y

where K = κ(x'_t, x'_t), λ is a regularization parameter used to prevent overfitting, y is the column vector formed from the regression values y_i (i = 1, 2, 3, ...) of each sample, each y_i being a preset sample regression target value (the preset target position serves as the sample regression target, i.e. the comparison value of the regression calculation, namely the position of the target in the previous frame image), and I is the identity matrix.
The target position and the filter parameters are then updated by linear interpolation:

α_t = (1 - β)α_{t-1} + βα'_t
x_t = (1 - β)x_{t-1} + βx'_t

where α_t is the filter parameter at the moment corresponding to the t-th frame image and x_t is the target position at that moment; β is a preset learning update parameter that determines the degree of dependence on the data of the previous moment, with value range [0, 1] and a typical value of 0.02; α_{t-1} is the filter parameter at the previous moment and x_{t-1} is the target position at the previous moment, i.e. the moment corresponding to the previous frame image. For the first frame image, α_1 is obtained by substituting the manually selected target area position into the filter-parameter formula, and x_t is the manually selected target area position.
When the next frame image enters the calculation, HOG features are extracted in the search area determined by the position in the previous frame; the extracted feature matrix is x_2. A discrete Fourier transform of x_2 gives the appearance model Z_t of the target at the current moment:

Z_t = F(x_2)

where F(x_2) denotes the discrete Fourier transform of x_2.
The kernel correlation between the appearance model Z_t at the current moment and the appearance model x'_t at the previous moment is computed:

K_xz = κ(x'_t, Z_t) = exp(-(||x'_t||² + ||Z_t||² - 2F⁻¹(x̂'_t* ⊙ Ẑ_t)) / σ²)

where x̂'_t* is the conjugate matrix of x̂'_t, Ẑ_t is obtained from Z_t by discrete Fourier transform, F⁻¹ is the inverse discrete Fourier transform, and σ is the Gaussian kernel bandwidth, with a value of 0.5.
From the previously obtained filter parameters, the following regression response function is calculated:

f(Z_t) = F⁻¹(K̂_xz ⊙ α̂_t)

where K̂_xz is the discrete Fourier transform of the first row of elements of the kernel correlation matrix K_xz, α̂_t = F(α_t), and F(x) denotes the discrete Fourier transform of x.
The target position in the current frame is set to the region of maximum amplitude in the response function values; then, taking this position as the target center, the surroundings of the target region are again selected as the search region for the next frame, the filter parameters are updated, and the preceding process is repeated.
When the target moves out of the field of view of the image acquisition module, tracking switches to the first tracking mode; when the target enters the field of view of the image acquisition module again, the frame selection operation is performed again and the second tracking mode is then executed. The tracking results in the second tracking mode are shown in figs. 4(a)-4(i).
Figures 4(a)-4(i) show that the second tracking mode completes the tracking task well and effectively suppresses disturbances such as rotorcraft shake. As can be seen from figs. 4(g)-4(i), even when interference appears in the background and enters the search area, the algorithm still maintains a good tracking effect.

Claims (4)

1. A dual-mode rotorcraft target tracking method specifically comprises the following steps:
In the first step, when the rotorcraft begins target tracking from a distance, the first tracking mode is started; the position of the maneuvering target is acquired by a GPS positioning module in an electronic tag carried by the ground maneuvering target, the positioning information is processed by a data processing module in the electronic tag and converted into absolute longitude and latitude values, and the position information is then sent to the multi-rotor aircraft through a wireless transmission module.
In the second step, the multi-rotor aircraft receives the position information U(LonU, LatU) of the ground maneuvering target transmitted by the electronic tag through the wireless receiving module; then, in the onboard information processing module, the difference between this position and the longitude/latitude value T(LonT, LatT) resolved by the GPS positioning module carried by the multi-rotor aircraft is calculated.
Here LonU is the longitude value of the target, LatU is the latitude value of the target, LonT is the longitude value of the multi-rotor aircraft, and LatT is the latitude value of the multi-rotor aircraft.
In the third step, the longitude/latitude difference finally obtained in the previous step is converted into an actual distance value by a conversion calculation.
The calculated distance values are taken as the input values x and y of the rotorcraft, and the rotorcraft is controlled to fly toward the target position.
In the fourth step, during tracking, the camera payload continuously acquires images, which are transmitted synchronously to the onboard image processing module and to the ground PC; whether the target has entered the camera's field of view is checked manually at the ground PC.
In the fifth step, after the target enters the camera's field of view, the ground maneuvering target is manually framed in the image through the ground PC; once the framing operation is finished, the second tracking mode is started.
In the sixth step,
1) in the first frame image containing the selection box, a cyclic shift operation is performed on the data of the target area, and finally a region 2.5 times the size of the target is selected around it as the search area;
2) HOG features are extracted from the target region in the current frame image; the extracted feature matrix is x_1, and a discrete Fourier transform of x_1 gives the appearance model x'_t of the target in the discrete Fourier domain:

x'_t = F(x_1)

where F(x_1) denotes the discrete Fourier transform of x_1;
3) the kernel autocorrelation K of the target appearance model is computed;
4) the filter parameters are calculated using a nonlinear regression model:

α'_t = (K + λI)⁻¹y

where K = κ(x'_t, x'_t), λ is a regularization parameter used to prevent overfitting, y is the column vector formed from the regression values y_i (i = 1, 2, 3, ...) of each sample, each y_i being a preset sample regression target value (the preset target position serves as the sample regression target, i.e. the comparison value of the regression calculation, namely the position of the target in the previous frame image), and I is the identity matrix;
5) the target position and the filter parameters are updated:

α_t = (1 - β)α_{t-1} + βα'_t
x_t = (1 - β)x_{t-1} + βx'_t

where α_t is the filter parameter at the moment corresponding to the t-th frame image and x_t is the target position at that moment; β is a preset learning update parameter that determines the degree of dependence on the data of the previous moment, with value range [0, 1] and a typical value of 0.02; α_{t-1} is the filter parameter at the previous moment and x_{t-1} is the target position at the previous moment, i.e. the moment corresponding to the previous frame image; for the first frame image, α_1 is obtained by substituting the manually selected target area position into the filter-parameter formula, and x_t is the manually selected target area position;
6) when the next frame image enters the calculation, HOG features are extracted in the search area determined by the position in the previous frame; the extracted feature matrix is x_2, and a discrete Fourier transform of x_2 gives the appearance model Z_t of the target at the current moment:

Z_t = F(x_2)

where F(x_2) denotes the discrete Fourier transform of x_2;
7) the kernel correlation K_xz between the appearance model Z_t of the target at the current moment and the appearance model x'_t at the previous moment is computed;
8) from the previously obtained filter parameters, the following regression response function is calculated:

f(Z_t) = F⁻¹(K̂_xz ⊙ α̂_t)

where K̂_xz is the discrete Fourier transform of the first row of elements of the kernel correlation matrix K_xz, α̂_t = F(α_t), and F(x) denotes the discrete Fourier transform of x;
9) the target position in the current frame is set to the region of maximum amplitude in the response function values; then, taking this position as the target center, the surroundings of the target region are again selected as the search region of the next frame, the filter parameters are updated, and the preceding process is repeated;
10) when the target moves out of the field of view of the image acquisition module, tracking switches to the first tracking mode; when the target enters the field of view of the image acquisition module again, the frame selection operation is performed again and the second tracking mode is then executed.
2. A dual-mode rotorcraft target tracking method according to claim 1, wherein, in the sixth step, in the first frame image containing the selection box, a cyclic shift operation is performed on the data of the target area: letting an image patch in the target area be X and translating it downward by a units and rightward by b units gives the shifted patch X_1:

X_1 = P^a X P^b

where P is the permutation matrix that cyclically shifts by one element,

P = [[0, 0, ..., 0, 1], [1, 0, ..., 0, 0], [0, 1, ..., 0, 0], ..., [0, 0, ..., 1, 0]]
and selecting an area around the target area through cyclic shift operation so as to obtain a search area, wherein the area is the range to be searched in the next frame of image, and a and b are valued according to the resolution of the image, and finally, the area 2.5 times around the target is selected as the search area.
3. A dual-mode rotorcraft target tracking method according to claim 1, wherein the kernel autocorrelation K of the target appearance model in the sixth step is calculated as follows:

K = κ(x'_t, x'_t) = exp(-(||x'_t||² + ||x'_t||² - 2F⁻¹(x̂'_t* ⊙ x̂'_t)) / σ²)

where the kernel function used is a Gaussian kernel, x̂'_t* is the conjugate matrix of x̂'_t, x̂'_t is obtained from x'_t by discrete Fourier transform, F⁻¹ is the inverse discrete Fourier transform, and σ² is the Gaussian kernel bandwidth, which takes the value 0.5.
4. A dual-mode rotorcraft target tracking method according to claim 1, wherein, in the sixth step, the kernel correlation K_xz between the appearance model Z_t of the target at the current moment and the appearance model x'_t at the previous moment is calculated with a Gaussian kernel of the specific form

K_xz = κ(x'_t, Z_t) = exp(-(||x'_t||² + ||Z_t||² - 2F⁻¹(x̂'_t* ⊙ Ẑ_t)) / σ²)

where x̂'_t* is the conjugate matrix of x̂'_t, Ẑ_t is obtained from Z_t by discrete Fourier transform, F⁻¹ is the inverse discrete Fourier transform, and σ is the Gaussian kernel bandwidth, with a value of 0.5.
CN201810639869.9A 2018-06-20 2018-06-20 Dual-mode rotorcraft target tracking method Active CN108961311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810639869.9A CN108961311B (en) 2018-06-20 2018-06-20 Dual-mode rotorcraft target tracking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810639869.9A CN108961311B (en) 2018-06-20 2018-06-20 Dual-mode rotorcraft target tracking method

Publications (2)

Publication Number Publication Date
CN108961311A true CN108961311A (en) 2018-12-07
CN108961311B CN108961311B (en) 2021-06-22

Family

ID=64491419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810639869.9A Active CN108961311B (en) 2018-06-20 2018-06-20 Dual-mode rotorcraft target tracking method

Country Status (1)

Country Link
CN (1) CN108961311B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112911151A (en) * 2021-01-29 2021-06-04 京东数科海益信息科技有限公司 Target following method, device, equipment, system and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106874854A (en) * 2017-01-19 2017-06-20 西安电子科技大学 Unmanned plane wireless vehicle tracking based on embedded platform
CN107590820A (en) * 2017-08-25 2018-01-16 北京飞搜科技有限公司 A kind of object video method for tracing and its intelligent apparatus based on correlation filtering
WO2018051232A1 (en) * 2016-09-13 2018-03-22 Hangzhou Zero Zero Technology Co., Ltd. Unmanned aerial vehicle system and method with environmental sensing

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018051232A1 (en) * 2016-09-13 2018-03-22 Hangzhou Zero Zero Technology Co., Ltd. Unmanned aerial vehicle system and method with environmental sensing
CN106874854A (en) * 2017-01-19 2017-06-20 西安电子科技大学 Unmanned plane wireless vehicle tracking based on embedded platform
CN107590820A (en) * 2017-08-25 2018-01-16 北京飞搜科技有限公司 A kind of object video method for tracing and its intelligent apparatus based on correlation filtering

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUI CHENG et al.: "An autonomous vision-based target tracking system for rotorcraft unmanned aerial vehicles", 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems *
ZHENG Wuxing et al.: "Improved KCF infrared aerial target tracking method" (改进的KCF红外空中目标跟踪方法), Laser & Infrared (激光与红外) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112911151A (en) * 2021-01-29 2021-06-04 京东数科海益信息科技有限公司 Target following method, device, equipment, system and storage medium

Also Published As

Publication number Publication date
CN108961311B (en) 2021-06-22

Similar Documents

Publication Publication Date Title
CN107808407B (en) Binocular camera-based unmanned aerial vehicle vision SLAM method, unmanned aerial vehicle and storage medium
Abd-Elrahman et al. Development of pattern recognition algorithm for automatic bird detection from unmanned aerial vehicle imagery
Hassan-Esfahani et al. Topsoil moisture estimation for precision agriculture using unmmaned aerial vehicle multispectral imagery
WO2020103108A1 (en) Semantic generation method and device, drone and storage medium
EP2430615A2 (en) Method and system for visual collision detection and estimation
CN110765948A (en) Target detection and identification method and system based on unmanned aerial vehicle
EP3690744A1 (en) Method for integrating driving images acquired from vehicles performing cooperative driving and driving image integrating device using same
CN110799983A (en) Map generation method, map generation equipment, aircraft and storage medium
Andrea et al. Geolocation and counting of people with aerial thermal imaging for rescue purposes
Kemker et al. High-resolution multispectral dataset for semantic segmentation
CN117036989A (en) Miniature unmanned aerial vehicle target recognition and tracking control method based on computer vision
Faheem et al. Uav emergency landing site selection system using machine vision
CN110472092B (en) Geographical positioning method and system of street view picture
CN108961311B (en) Dual-mode rotor craft target tracking method
Soltani et al. Transfer learning from citizen science photographs enables plant species identification in UAV imagery
Karampinis et al. Ensuring UAV Safety: A Vision-only and Real-time Framework for Collision Avoidance Through Object Detection, Tracking, and Distance Estimation
Felizardo et al. Using ANN and UAV for terrain surveillance
CN113392723A (en) Unmanned aerial vehicle forced landing area screening method, device and equipment based on artificial intelligence
CN117196998A (en) Image blur eliminating method, computer device, and computer-readable storage medium
CN112750146A (en) Target object tracking method and device, storage medium and electronic equipment
Bupathy et al. Optimizing low-cost UAV aerial image mosaicing for crop growth monitoring
Zhifeng et al. Ellipsoidal method for UAVs target tracking and recognition
Cirneanu et al. CNN based on LBP for evaluating natural disasters
Khanykov et al. The Application of the High-Speed Pixel Clustering Method in Combining Multi-Angle Images Obtained from Airborne Optical-Location Systems
CN113343929A (en) Unmanned aerial vehicle image processing method based on wavelet transformation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant