CN109684909B - Real-time positioning method, system and storage medium for target vital points of an unmanned aerial vehicle

Real-time positioning method, system and storage medium for target vital points of an unmanned aerial vehicle

Info

Publication number: CN109684909B (grant of application CN201811184403.0A)
Authority: CN (China)
Prior art keywords: current frame, image, target, accurate position, aerial vehicle
Legal status: Active (granted)
Original language: Chinese (zh)
Other versions: CN109684909A (application publication)
Inventors: 洪汉玉 (Hong Hanyu), 王维祥 (Wang Weixiang), 张耀宗 (Zhang Yaozong), 石教炜 (Shi Jiaowei), 陈辉远 (Chen Huiyuan), 赵书涵 (Zhao Shuhan), 李施阳 (Li Shiyang)
Applicant and assignee: Wuhan Institute of Technology

Classifications

    • G06V20/46 (Physics; Computing; Image or video recognition or understanding): Scenes; scene-specific elements in video content; extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • H04N7/18 (Electricity; Electric communication technique; Pictorial communication, e.g. television): Television systems; closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • Y02T10/40 (Climate change mitigation technologies related to transportation; Road transport of goods or passengers; Internal combustion engine [ICE] based vehicles): Engine management systems

Abstract

The invention relates to a real-time positioning method, system and storage medium for target vital points of an unmanned aerial vehicle. The method comprises: acquiring a monitoring image of the unmanned aerial vehicle, selecting a current frame image from the monitoring image, and performing downsampling processing on the current frame image to obtain a first gray level image; acquiring a current frame prediction position of the target center from the first gray level image, and calculating the accurate position of the current frame of the target center from the prediction position; binarizing the current frame image to obtain a second gray level image; in the second gray level image, determining the suspicious domain of the current frame of the target vital points in the current frame image according to the accurate position of the current frame; and determining a plurality of target vital point positions of the unmanned aerial vehicle according to the suspicious domain of the current frame and the accurate position of the current frame. The method positions the vital points of the unmanned aerial vehicle rapidly, accurately and in real time, meets the requirements of high-frame-rate processing and complex-background recognition for multi-rotor aircraft, and can be widely applied in the technical field of unmanned aerial vehicle countermeasures.

Description

Real-time positioning method, system and storage medium for target vital points of an unmanned aerial vehicle
Technical Field
The invention relates to the technical fields of target recognition and unmanned aerial vehicle countering, and in particular to a real-time positioning method, system and storage medium for target vital points of an unmanned aerial vehicle.
Background
In recent years, with the rapid development of the unmanned aerial vehicle industry, multi-rotor unmanned aerial vehicles have been widely used in many fields owing to their small size, low noise and ease of operation. The rapid growth of the multi-rotor market has, however, brought a number of problems: unmanned aerial vehicles illegally intrude into government compounds, airports, military bases and the like, and some offenders even use multi-rotor unmanned aerial vehicles to scout military installations, seriously threatening national security and the lives and property of the public. A multi-rotor unmanned aerial vehicle flies stably and can hover at any time, which makes it well suited to fixed-point reconnaissance of a target; it is therefore necessary to design countermeasures against this type of aircraft, such as identifying its key parts for striking.
At present, countermeasures against multi-rotor unmanned aerial vehicles mainly comprise shooting, net capture, control-signal jamming, laser strikes and the like. Whichever countermeasure is adopted, target detection of the unmanned aerial vehicle must be performed first, and the recognition and positioning of its key parts is the key to a successful counter. Existing unmanned aerial vehicle detection technology falls mainly into two categories: radar technology and imaging technology. With the continuous improvement of image processing technology, imaging-based target detection and recognition is widely applied. Current image-based detection achieves a good detection rate and a low false-alarm rate and can meet civil requirements for target detection and recognition; however, owing to the complexity of algorithm design, such algorithms consume enormous resources (in both time and space) and struggle to meet special military requirements such as high frame rate, wide field of view, complex backgrounds and vital-point detection.
Disclosure of Invention
The technical problem to be solved by the invention is, in view of the defects of the prior art, to provide a real-time positioning method, system and storage medium for target vital points of an unmanned aerial vehicle.
The technical scheme for solving the technical problems is as follows:
a real-time positioning method for target vital points of an unmanned aerial vehicle comprises the following steps:
step 1: acquiring a monitoring image of the unmanned aerial vehicle, selecting a current frame image according to the monitoring image, and performing downsampling on the current frame image to obtain a first gray level image;
step 2: acquiring a current frame prediction position of a target center of the unmanned aerial vehicle according to the first gray level image, and calculating a current frame accurate position of the target center according to the current frame prediction position;
step 3: performing binarization processing on the current frame image to obtain a second gray level image;
step 4: in the second gray level image, determining the suspicious domain of the current frame of the target vital points in the current frame image according to the accurate position of the current frame; and determining a plurality of target vital point positions of the unmanned aerial vehicle according to the suspicious domain of the current frame and the accurate position of the current frame.
The beneficial effects of the invention are as follows: the target center is predicted in the current frame image to obtain the current frame prediction position, locating the rough range of the target center of the unmanned aerial vehicle; the accurate position of the current frame of the target center is then calculated from the prediction position, accurately locating the target center. Through binarization processing, the suspicious domain of the current frame of the target vital points is obtained in the second gray level image according to the accurate position of the current frame of the target center, and the several target vital points of the unmanned aerial vehicle are then positioned. This image-processing-based vital point positioning method uses a simple algorithm, can recognize and position the target vital points of the unmanned aerial vehicle in real time with high accuracy and a low false-alarm rate, meets the requirements of high-frame-rate processing and complex-background recognition for multi-rotor aircraft, can satisfy special military requirements, can be widely applied in the technical field of unmanned aerial vehicle countermeasures, and safeguards national security and the lives and property of the people. The invention is mainly aimed at multi-rotor unmanned aerial vehicles, in particular quad-rotor unmanned aerial vehicles; a multi-rotor unmanned aerial vehicle is an unmanned rotorcraft with three or more rotors.
Based on the technical scheme, the invention can also be improved as follows:
Further: in step 1, selecting a current frame image according to the monitoring image and performing downsampling processing on the current frame image to obtain a first gray level image specifically includes:
selecting any frame of the monitoring image as the current frame image I_mk, of size M×N, and performing downsampling processing on the rows and columns of I_mk respectively by the point-separation method to obtain the first gray level image I_mk1, where M and N are the numbers of pixels in the rows and columns of I_mk respectively.
The beneficial effects of the above further scheme are: the monitoring image of the unmanned aerial vehicle is a dynamic video image; any frame is selected as the current frame image, and its rows and columns are each downsampled, which facilitates positioning the target center and target vital points of the current frame and, through the positioning of the current frame, of every frame in the monitoring image, enabling unmanned aerial vehicle countering under high frame rates and complex backgrounds and safeguarding the lives and property of the nation and its people. The magnification of the downsampling processing can be set according to the practical situation; for example, the rows and columns of the current frame image I_mk may each be downsampled 20 times, i.e. every twentieth pixel in each row and column of the current frame image is retained to form the first gray level image.
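As an illustrative sketch only (not part of the patent text), the point-separation downsampling of step 1 can be expressed in a few lines of Python; the function name, the use of numpy and the row-major frame layout are editorial assumptions:

```python
import numpy as np

def downsample_by_point_separation(frame: np.ndarray, h: int = 20) -> np.ndarray:
    """Keep every h-th pixel along rows and columns (plain decimation).
    h is the downsampling magnification H; 20 is the example value from
    the text, turning a 480-row by 640-column frame into 24 by 32."""
    return frame[::h, ::h]

# Example: the embodiment's 640x480 frame, stored as a 480x640 array.
frame = np.zeros((480, 640), dtype=np.uint8)
assert downsample_by_point_separation(frame, 20).shape == (24, 32)
```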
Further: the specific steps of step 2 include:
step 21: taking, in each row of the first gray level image, the difference of the gray values of every two adjacent pixels and then its absolute value, to obtain a two-dimensional matrix A;
step 22: traversing the two-dimensional matrix A row by row, counting the number A_i of elements greater than a preset empirical threshold in each row, and selecting the row with the largest A_i as the current frame target row i_0 of the first gray level image;
step 23: determining the current frame target column (j_0+j_1)/2 from the columns j_0 and j_1 of the first and the last element in target row i_0 that exceed the preset empirical threshold;
step 24: determining the current frame prediction position B(H×i_0, H×(j_0+j_1)/2) from the target row i_0 and the target column (j_0+j_1)/2, where H is the magnification of the downsampling processing;
step 25: in the current frame image I_mk, selecting a square area of preset side length centered on the current frame prediction position B, and calculating the accurate position C_k(x_Ck, y_Ck) of the current frame by the following formula:
$$x_{Ck} = \left[ \frac{\sum_{i=H i_0-[L/2]}^{H i_0+[L/2]} \; \sum_{j=H\frac{j_0+j_1}{2}-[L/2]}^{H\frac{j_0+j_1}{2}+[L/2]} i \, I_{mk}(i,j)}{\sum_i \sum_j I_{mk}(i,j)} \right], \qquad y_{Ck} = \left[ \frac{\sum_i \sum_j j \, I_{mk}(i,j)}{\sum_i \sum_j I_{mk}(i,j)} \right]$$

where both sums run over the square area of side length L centered on B, [·] is the rounding operation, i is a row in the current frame image, j is a column in the current frame image, L is the preset side length, and I_mk(i,j) is the gray value of the pixel in row i and column j of the current frame image.
The beneficial effects of the above further scheme are: the current frame image comprises the current frame unmanned aerial vehicle image and the current frame background image, and the gray values of their pixels differ markedly; in particular, extensive experiments show that the gray-value difference at the edge pixels of the current frame unmanned aerial vehicle image is about 15, so the preset empirical threshold is usually chosen as 15.

Predicting the target center on the downsampled first gray level image is equivalent to predicting the current frame prediction position in the current frame image. The gray values of every two adjacent pixels in each row of the first gray level image are therefore differenced and their absolute values taken, and the resulting two-dimensional matrix is traversed: the row containing the largest number of elements above the preset empirical threshold necessarily contains the current frame unmanned aerial vehicle image and is set as the current frame target row i_0, which completes the row positioning of the current frame prediction position of the target center. Since the difference between the edge pixels of the current frame unmanned aerial vehicle image and the pixels of the current frame background image is the largest, the first and the last element whose gray-value difference exceeds the preset empirical threshold necessarily correspond to the edges of the unmanned aerial vehicle image; from their columns j_0 and j_1 in target row i_0, the current frame target column is determined as (j_0+j_1)/2, which completes the column positioning. The current frame prediction position B(H×i_0, H×(j_0+j_1)/2) is thus obtained. This is the predicted, rough position of the unmanned aerial vehicle in the current frame image; because the target row and column were found in the downsampled first gray level image, the coordinates must be restored to the scale of the current frame image before downsampling, i.e. multiplied by the magnification H of the downsampling processing.

This position is, however, only a prediction, and the accurate position of the current frame of the target center must be further determined. A square area of preset side length L centered on the prediction position is therefore selected in the current frame image as the current frame prediction area of the unmanned aerial vehicle, and the centroid of this area is calculated from the gray value of each of its pixels by the centroid formula; this centroid is the accurate position of the current frame of the target center of the unmanned aerial vehicle. The preset side length L can be chosen according to the actual number of pixels; empirically it is usually taken as 151 pixels for the side of the square area. The centroid formula above is the specific calculation of the accurate position of the current frame; it is prior art and is not described in detail.

By positioning the current frame target row and target column, the current frame prediction position of the unmanned aerial vehicle is obtained preliminarily, which determines the current frame prediction area and makes the accurate position of the current frame of the target center easy to obtain. The positioning method is simple and precise, reduces the computation of the subsequent steps, overcomes interference from complex background environments, can meet special military requirements, and can be widely applied in the technical field of unmanned aerial vehicle countering.
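The whole of step 2 can be sketched as follows; an illustrative Python/numpy rendering under the same assumptions as the previous sketch, with thr = 15 and L = 151 taken from the empirical values quoted in the text:

```python
import numpy as np

def predict_and_refine(frame: np.ndarray, small: np.ndarray,
                       h: int = 20, thr: int = 15, L: int = 151):
    """Coarse prediction on the downsampled image, then centroid
    refinement on the full frame (steps 21-25). A sketch, not the
    patented implementation."""
    # Step 21: absolute differences of adjacent pixels within each row.
    A = np.abs(np.diff(small.astype(np.int32), axis=1))
    # Step 22: row with the most elements above the empirical threshold.
    counts = (A > thr).sum(axis=1)
    i0 = int(np.argmax(counts))
    # Step 23: first and last above-threshold columns in that row.
    cols = np.nonzero(A[i0] > thr)[0]
    j0, j1 = int(cols[0]), int(cols[-1])
    # Step 24: map back to full-resolution coordinates (multiply by H).
    bi, bj = h * i0, h * (j0 + j1) // 2
    # Step 25: gray-weighted centroid of an L x L window around B.
    half = L // 2
    r0, r1 = max(bi - half, 0), min(bi + half + 1, frame.shape[0])
    c0, c1 = max(bj - half, 0), min(bj + half + 1, frame.shape[1])
    win = frame[r0:r1, c0:c1].astype(np.float64)
    ii, jj = np.mgrid[r0:r1, c0:c1]
    s = win.sum()
    x_ck = (ii * win).sum() / s   # row coordinate of the centroid
    y_ck = (jj * win).sum() / s   # column coordinate of the centroid
    return (x_ck, y_ck)
```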
Further: the specific steps of step 3 include:
step 31: calculating the average gray value T_k of all pixels of the current frame image I_mk:

$$T_k = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} I_{mk}(i,j)$$

step 32: binarizing the current frame image I_mk with a preset gray threshold S to obtain the second gray level image I_mk2:

$$I_{mk2}(i,j) = \begin{cases} 255, & I_{mk}(i,j) < S \\ 0, & I_{mk}(i,j) \ge S \end{cases}$$
The beneficial effects of the above further scheme are: in the current frame image the pixel gray values of the unmanned aerial vehicle image are lower than those of the background image, so the two can be separated by binarizing with a threshold derived from the average gray value of all pixels of the current frame. Extensive experiments show that pixels with gray values below T_k - 20 belong to the unmanned aerial vehicle image while pixels above T_k - 20 belong to the background image. T_k - 20 is therefore taken as the preset gray threshold S: pixels with gray values below S = T_k - 20 are assigned the gray value 255 (white), i.e. the unmanned aerial vehicle image is rendered white, and pixels above S = T_k - 20 are assigned the gray value 0 (black), i.e. the background image is rendered black. This binarization conveniently segments the unmanned aerial vehicle image from the background image and facilitates searching for and accurately positioning the target vital points.
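A minimal sketch of step 3 under the same numpy assumptions; the empirical offset 20 is the value quoted in the text:

```python
import numpy as np

def binarize_frame(frame: np.ndarray) -> np.ndarray:
    """Threshold at S = T_k - 20: pixels darker than S (the unmanned
    aerial vehicle) become white (255), the rest (background) black (0)."""
    t_k = frame.mean()          # average gray value T_k
    s = t_k - 20                # preset gray threshold S
    return np.where(frame < s, 255, 0).astype(np.uint8)
```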
Further: in step 4, when the current frame image is the first frame image of the monitoring image, the specific steps include:
step 41: in the second gray level image I_mk2, dividing the circular domain centered on the accurate position C_k(x_Ck, y_Ck) of the current frame, with a preset radius length as search radius, into four equal threads; taking the accurate position of the current frame as starting point and its rightward direction as starting direction, casting a ray every 1°, and searching the four threads in parallel to obtain the several intersection points of each ray with the second gray level image in each of the four threads;
step 42: traversing the several intersection points on each ray, taking the intersection point farthest from the accurate position of the current frame as the outer boundary point of the second gray level image I_mk2 in that direction, and recording the distance and direction between the outer boundary point and the accurate position of the current frame as D_rk[θ_k]; these distances and directions constitute the suspicious domain of the current frame, where D_rk[θ_k] is the distance between the outer boundary point in direction θ_k and the accurate position of the current frame, 0° ≤ θ_k < 360°, θ_k an integer;
step 43: computing the maxima of the distances between the outer boundary points of the suspicious domain of the current frame and the accurate position of the current frame, and determining the pixels corresponding to at least four maxima as suspicious target vital points Z_k(D_x, D_y), calculated by polar-to-rectangular conversion:

$$D_x = x_{Ck} + D_{rk}[\theta_k]\cos\theta_k, \qquad D_y = y_{Ck} + D_{rk}[\theta_k]\sin\theta_k$$
step 44: comparing the angles between the lines joining each pair of adjacent suspicious target vital points to the accurate position of the current frame, determining the two adjacent suspicious target vital points corresponding to the largest angle as the target vital points, and obtaining their positions Z_k1(D_x1, D_y1) and Z_k2(D_x2, D_y2) and directions θ_k1 and θ_k2.
The beneficial effects of the above further scheme are: when the current frame image is the first frame image of the monitoring image, searching the circular domain centered on the accurate position of the current frame with the preset radius length as search radius, split into four equal threads searched in parallel, conveniently, rapidly and accurately yields the several intersection points of each ray with the unmanned aerial vehicle image in the second gray level image. The intersection point farthest from the accurate position of the current frame is the outer boundary point of the unmanned aerial vehicle image in the direction of that ray and is a possible target vital point; the set of outer boundary points is the suspicious domain of the current frame. Searching the four threads in parallel gives high speed and high accuracy.

Empirically, in each frame of the monitoring image the two forward rotors and the two foot stands of an unmanned aerial vehicle with foot stands are farthest from the target center point, i.e. farthest from the accurate position of the current frame, so the farthest outer boundary points in the suspicious domain of the current frame must be found. By computing the maxima of the distances between the outer boundary points and the accurate position of the current frame of the target center, the four outer boundary points corresponding to at least four maxima of distance necessarily include the target vital points of the two forward rotors and may include those of the two foot stands; these are the suspicious target vital points Z_k(D_x, D_y), whose position coordinates and directions follow from the mathematics of polar-to-rectangular coordinate conversion. For an unmanned aerial vehicle without foot stands, the four outer boundary points corresponding to the at least four maxima are all target vital points of the four rotors and necessarily include those of the two forward rotors.

Empirically, since the target vital points of an unmanned aerial vehicle are usually its rotors, and in the monitoring image the angle between the two forward rotors is usually larger than the angle between a rotor and a foot stand (unmanned aerial vehicle with foot stands) or between other adjacent rotors (unmanned aerial vehicle without foot stands), the two adjacent suspicious target vital points whose lines to the accurate position of the current frame form the largest angle are necessarily rotors, i.e. the target vital points, and their positions Z_k1(D_x1, D_y1) and Z_k2(D_x2, D_y2) and directions θ_k1 and θ_k2 can be obtained.

Through the circular-domain search and calculation on the first frame image, the target vital points of the unmanned aerial vehicle can be rapidly recognized and positioned with high accuracy and a low false-alarm rate.
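A sequential Python sketch of the ray search of steps 41-42 (the patent searches the four quadrant threads in parallel; parallelism is omitted here for brevity, and all names are editorial assumptions):

```python
import numpy as np

def circle_search(binary: np.ndarray, center, r_max: int) -> np.ndarray:
    """Cast a ray every 1 degree from the target centre and record the
    farthest white (255) pixel on each ray as the outer boundary point;
    the returned array is the suspicious domain D_rk[theta_k]."""
    cy, cx = center                       # (row, column) of C_k
    d_rk = np.zeros(360)
    for theta in range(360):
        t = np.deg2rad(theta)
        for r in range(r_max, 0, -1):     # walk inward: first hit is farthest
            i = int(round(cy - r * np.sin(t)))   # image rows grow downward
            j = int(round(cx + r * np.cos(t)))
            if 0 <= i < binary.shape[0] and 0 <= j < binary.shape[1] \
                    and binary[i, j] == 255:
                d_rk[theta] = r
                break
    return d_rk
```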
Further: in step 4, when the current frame image is not the first frame image of the monitoring image, the specific steps further include:
step 45: obtaining, according to steps 41-44, the binarized third gray level image I_m(k-1)2 of the previous frame image I_m(k-1) of the monitoring image, the accurate position C_(k-1)(x_C(k-1), y_C(k-1)) of the previous frame of the target center, and the directions θ_(k-1)1 and θ_(k-1)2 of the target vital points of the previous frame in the previous frame image;
step 46: performing an AND operation on the second gray level image I_mk2 and the third gray level image I_m(k-1)2 to obtain their intersection image P_k; and, in the intersection image P_k, performing a sector search according to steps 41-42, with the accurate position of the current frame as center, the preset radius length as search radius and a preset sector angle, to obtain the suspicious domain of the current frame D′_rk[θ′_k] of the target vital points in the current frame image;
step 47: obtaining, according to steps 43-44, the positions Z′_k1(D_x1, D_y1) and Z′_k2(D_x2, D_y2) of the several corresponding target vital points in the current frame image and their directions θ′_k1 and θ′_k2.
The beneficial effects of the above further scheme are: since the monitoring image is a dynamic video image, the corresponding target vital points in every frame could be obtained by the preceding steps, but if every frame used the circular-domain search and calculation the algorithm time would be relatively long. When the current frame image is not the first frame image of the monitoring image, the accurate position C_(k-1)(x_C(k-1), y_C(k-1)) of the previous frame and the directions θ_(k-1)1 and θ_(k-1)2 of the target vital points of the previous frame are already available from the preceding steps, together with the accurate position of the current frame. Because the monitoring video is captured by a high-speed camera, the time between frames is, empirically, extremely short, and the position of each target vital point changes by no more than 15° per frame; a sector search centered on the accurate position of the current frame, with a preset sector angle around the directions of the target vital points of the previous frame, therefore suffices to determine the target vital points of the current frame. This greatly reduces the search range and search time, reduces the memory occupied by the algorithm and increases the positioning speed, and the target vital points of the current frame obtained from those of the previous frame are highly accurate.

For the sector search, the binarized third gray level image of the previous frame image is ANDed with the second gray level image of the current frame image to obtain the sub-image common to the two successive frames, i.e. the intersection image P_k. Performing the sector search in P_k guarantees that the obtained suspicious domain of the current frame D′_rk[θ′_k] is more accurate, which in turn makes the subsequent calculation of the target vital point positions more accurate. Through these steps, real-time detection of the target vital points of an unmanned aerial vehicle under high frame rate, wide field of view and complex background can be achieved, meeting special military requirements, with high positioning accuracy, high speed, a simple algorithm and comparatively low resource consumption, so the method can be widely applied in the technical field of unmanned aerial vehicle countermeasures.
Further: the preset radius length is R_1 = M/4, and the preset sector angle θ′_k satisfies θ_(k-1)1 - 15° ≤ θ′_k ≤ θ_(k-1)1 + 15° and θ_(k-1)2 - 15° ≤ θ′_k ≤ θ_(k-1)2 + 15°, with θ′_k an integer.
The beneficial effects of the above further scheme are: M is the number of pixels in a row of the current frame image; performing the circular-domain and sector searches with search radius R_1 = M/4 guarantees the accuracy of the search while avoiding searching redundant pixels, reducing the algorithm's time consumption. Since the time between frames of the monitoring video captured by the high-speed camera is extremely short and the position of each target vital point changes by no more than 15° per frame, the sector search over the preset sector angles θ_(k-1)1 ± 15° and θ_(k-1)2 ± 15° ensures real-time positioning, reduces the algorithm's time consumption, is efficient, and preserves search accuracy.
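An illustrative sketch of the search stage of steps 45-47 (the AND of successive binarized frames followed by the ±15° sector search); it mirrors the ray loop of the hypothetical circle_search sketch above, and none of the names come from the patent:

```python
import numpy as np

def sector_search(binary_k: np.ndarray, binary_k1: np.ndarray, center,
                  prev_dirs, r_max: int, half_angle: int = 15) -> dict:
    """AND the current and previous binarized frames, then repeat the
    ray search only within +/-15 degrees of each vital-point direction
    found in the previous frame. Returns a sparse domain D'_rk."""
    p_k = np.where((binary_k == 255) & (binary_k1 == 255), 255, 0).astype(np.uint8)
    cy, cx = center
    d_rk = {}
    for theta0 in prev_dirs:              # e.g. theta_(k-1)1 and theta_(k-1)2
        for theta in range(theta0 - half_angle, theta0 + half_angle + 1):
            t = np.deg2rad(theta % 360)
            for r in range(r_max, 0, -1):
                i = int(round(cy - r * np.sin(t)))
                j = int(round(cx + r * np.cos(t)))
                if 0 <= i < p_k.shape[0] and 0 <= j < p_k.shape[1] \
                        and p_k[i, j] == 255:
                    d_rk[theta % 360] = r
                    break
    return d_rk
```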
According to another aspect of the invention, there is provided a real-time positioning system for target vital points of an unmanned aerial vehicle, comprising a monitoring unit, a downsampling processing unit, an operation unit, a binarization processing unit and a searching unit;
the monitoring unit is used for acquiring a monitoring image of the unmanned aerial vehicle;
the downsampling processing unit is used for selecting a current frame image according to the monitoring image, and downsampling the current frame image to obtain a first gray level image;
the operation unit is used for acquiring a current frame prediction position of a target center of the unmanned aerial vehicle according to the first gray level image, and calculating a current frame accurate position of the target center according to the current frame prediction position;
the binarization processing unit is used for performing binarization processing on the current frame image to obtain a second gray level image;
the searching unit is used for determining, in the second gray level image, the suspicious domain of the current frame of the target vital points in the current frame image according to the accurate position of the current frame;
the operation unit is also used for determining a plurality of target vital point positions of the unmanned aerial vehicle according to the suspicious domain of the current frame and the accurate position of the current frame.
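Purely as an illustration of how these units cooperate, the following sketch wires the hypothetical helper functions from the earlier sketches into one locator; the class and method names are editorial assumptions, not the patent's API:

```python
class VitalPointLocator:
    """Minimal wiring of the five units; a sketch, not the patented system."""

    def __init__(self):
        self.prev = None   # (binary image, vital-point directions) of frame k-1

    def locate(self, frame):
        small = downsample_by_point_separation(frame)      # downsampling processing unit
        center = predict_and_refine(frame, small)          # operation unit
        binary = binarize_frame(frame)                     # binarization processing unit
        r_max = frame.shape[1] // 4                        # preset radius length R_1 = M/4
        if self.prev is None:                              # first frame: circular-domain search
            domain = circle_search(binary, center, r_max)  # searching unit
        else:                                              # later frames: sector search
            prev_binary, prev_dirs = self.prev
            domain = sector_search(binary, prev_binary, center, prev_dirs, r_max)
        # The operation unit would then extract maxima and compare angles
        # (steps 43-44) to produce the vital points and update self.prev.
        return center, domain
```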
The beneficial effects of the invention are as follows: the current frame prediction position of the current frame image is obtained through the monitoring unit, the downsampling processing unit and the operation unit, so that the operation unit can conveniently obtain the accurate position of the current frame of the target center from the prediction position, reducing algorithm time and occupied resources while achieving high positioning accuracy; the suspicious domain of the current frame is then searched through the binarization processing unit, the searching unit and the operation unit, and the operation unit determines the positions of the several target vital points from the accurate position of the current frame and the suspicious domain of the current frame. This image-processing-based vital point positioning system can recognize and position the target vital points of the unmanned aerial vehicle in real time with high accuracy and a low false-alarm rate, meets the requirements of high-frame-rate processing and complex-background recognition for multi-rotor aircraft, can satisfy special military requirements, can be widely applied in the technical field of unmanned aerial vehicle countermeasures, and safeguards national security and the lives and property of the people. It is mainly aimed at multi-rotor unmanned aerial vehicles, in particular quad-rotor unmanned aerial vehicles.
According to another aspect of the invention, there is provided a real-time positioning system for target vital points of an unmanned aerial vehicle, comprising a processor, a memory and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed, implements the specific steps of the above real-time positioning method for target vital points of an unmanned aerial vehicle.
The beneficial effects of the invention are as follows: by storing the computer program in the memory and running it on the processor, the real-time positioning system for target vital points of an unmanned aerial vehicle is realized; it can recognize and position the target vital points of the unmanned aerial vehicle in real time with high accuracy and a low false-alarm rate, meets the requirements of high-frame-rate processing and complex-background recognition for multi-rotor aircraft, can satisfy special military requirements, can be widely applied in the technical field of unmanned aerial vehicle countermeasures, and safeguards national security and the lives and property of the people. It is mainly aimed at multi-rotor unmanned aerial vehicles, in particular quad-rotor unmanned aerial vehicles.
According to another aspect of the present invention, there is provided a storage medium comprising at least one instruction which, when executed, implements the specific steps of the above real-time positioning method for target vital points of an unmanned aerial vehicle.
The beneficial effects of the invention are as follows: by executing the storage medium containing at least one instruction, the target vital points of the unmanned aerial vehicle can be recognized and positioned in real time with high accuracy and a low false-alarm rate, meeting the requirements of high-frame-rate processing and complex-background recognition for multi-rotor aircraft; special military requirements can be satisfied, the invention can be widely applied in the technical field of unmanned aerial vehicle countermeasures, and the security and the lives and property of the nation and its people are safeguarded. It is mainly aimed at multi-rotor unmanned aerial vehicles, in particular quad-rotor unmanned aerial vehicles.
Drawings
Fig. 1 is a flow chart of a real-time positioning method for target vital points of an unmanned aerial vehicle according to an embodiment of the invention;
Fig. 2 is a front view of a quad-rotor unmanned aerial vehicle according to an embodiment of the invention;
Fig. 3 is a top view of a quad-rotor unmanned aerial vehicle according to an embodiment of the invention;
Fig. 4 is a schematic diagram of the accurate position of the current frame of an unmanned aerial vehicle according to an embodiment of the invention;
Fig. 5 is a schematic diagram of the circular-domain search according to an embodiment of the positioning method of the invention;
Fig. 6 is a schematic diagram of the maxima of the distances between the outer boundary points of the suspicious domain of the current frame and the accurate position of the current frame according to an embodiment of the positioning method of the invention;
Fig. 7 is a schematic diagram of the suspicious target vital points according to an embodiment of the positioning method of the invention;
Fig. 8 is a schematic diagram of the target vital points according to an embodiment of the positioning method of the invention;
Fig. 9 is a schematic structural diagram of a system according to another embodiment of the invention;
Fig. 10 is a schematic structural diagram of a system according to a further embodiment of the invention.
In the drawings, the list of components represented by the various numbers is as follows:
11. monitoring unit; 12. downsampling processing unit; 13. operation unit; 14. binarization processing unit; 15. searching unit; 100. processor; 200. memory; 300. computer program.
Detailed Description
The principles and features of the present invention are described below with reference to the accompanying drawings; the examples are given only to illustrate the invention and are not intended to limit its scope.
In a first embodiment, as shown in fig. 1, a real-time positioning method for target vital points of an unmanned aerial vehicle includes the following steps:
s1: acquiring a monitoring image of the unmanned aerial vehicle, selecting a current frame image according to the monitoring image, and performing downsampling processing on the current frame image to obtain a first gray level image;
s2: acquiring a current frame prediction position of a target center of the unmanned aerial vehicle according to the first gray level image, and calculating a current frame accurate position of the target center according to the current frame prediction position;
S3: performing binarization processing on the current frame image to obtain a second gray level image;
S4: in the second gray level image, determining the suspicious domain of the current frame of the target vital points in the current frame image according to the accurate position of the current frame; and determining a plurality of target vital point positions of the unmanned aerial vehicle according to the suspicious domain of the current frame and the accurate position of the current frame.
The target center is predicted in the current frame image to obtain the current frame prediction position, locating the rough range of the target center of the unmanned aerial vehicle; the accurate position of the current frame of the target center is then calculated from the prediction position, accurately locating the target center. Then, through binarization processing, the suspicious domain of the current frame of the target vital points is determined in the second gray level image according to the accurate position of the current frame of the target center, and the several target vital points of the unmanned aerial vehicle are positioned. This image-processing-based vital point positioning method uses a simple algorithm, can recognize and position the target vital points of the unmanned aerial vehicle in real time with high accuracy and a low false-alarm rate, meets the requirements of high-frame-rate processing and complex-background recognition for multi-rotor aircraft, can satisfy special military requirements, can be widely applied in the technical field of unmanned aerial vehicle countermeasures, and safeguards national security and the lives and property of the people. The invention is mainly aimed at multi-rotor unmanned aerial vehicles; a multi-rotor unmanned aerial vehicle is an unmanned rotorcraft with three or more rotors.
In this embodiment a quad-rotor unmanned aerial vehicle is monitored; as shown in figs. 2-3, fig. 2 is a front view and fig. 3 a top view of the quad-rotor unmanned aerial vehicle of this embodiment.
Preferably, in S1, selecting a current frame image according to the monitoring image and performing downsampling processing on the current frame image to obtain a first gray level image specifically includes:
selecting any frame of the monitoring image as the current frame image I_mk, of size M×N, and performing downsampling processing on the rows and columns of I_mk respectively by the point-separation method to obtain the first gray level image I_mk1, where M and N are the numbers of pixels in the rows and columns of I_mk respectively.
The monitoring image of the unmanned aerial vehicle is a dynamic video image; a frame is selected as the current frame image and its rows and columns are each downsampled, which facilitates positioning the target center and target vital points of the current frame and, through the positioning of the current frame, of every frame in the monitoring image, enabling unmanned aerial vehicle countering under high frame rates and complex backgrounds and safeguarding the lives and property of the nation and its people.
The current frame image I_mk of this embodiment has size 640×480; its rows and columns are each downsampled 20 times by the point-separation method, i.e. every twentieth pixel in each row and column of the current frame image is retained, giving a first gray level image I_mk1 of size 32×24.
Preferably, the specific steps of S2 include:
S21: taking, in each row of the first gray level image, the difference of the gray values of every two adjacent pixels and then its absolute value, to obtain a two-dimensional matrix A;
in this embodiment, the two-dimensional matrix A obtained from S1 has size 32×23.
S22: traversing the two-dimensional matrix A row by row, counting the number A_i of elements greater than the preset empirical threshold in each row, and selecting the row with the largest A_i as the current frame target row i_0 of the first gray level image;
in this embodiment, with the preset empirical threshold 15, row 11 contains the largest number of elements greater than 15 and is taken as the current frame target row.
S23: determining the current frame target column (j_0+j_1)/2 from the columns j_0 and j_1 of the first and the last element in target row i_0 that exceed the preset empirical threshold;
in this embodiment, the first and the last element greater than 15 in row 11 are found in columns 12 and 16 respectively.
S24: determining the current frame prediction position B(H×i_0, H×(j_0+j_1)/2) from the target row i_0 and the target column (j_0+j_1)/2, where H is the magnification of the downsampling processing;
in this embodiment, columns 12 and 16 give the current frame target column 14, and from the current frame target row (row 11) and the magnification of the downsampling processing, the current frame prediction position of the target center in the current frame image is B(220, 280).
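For clarity, the arithmetic behind this embodiment's prediction position is:

$$\frac{j_0+j_1}{2}=\frac{12+16}{2}=14, \qquad B\left(H\times i_0,\; H\times\frac{j_0+j_1}{2}\right)=B(20\times 11,\; 20\times 14)=B(220,\,280).$$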
S25: in the current frame image I_mk, selecting a square area of preset side length centered on the current frame prediction position B, and calculating the accurate position C_k(x_Ck, y_Ck) of the current frame by the centroid formula:

$$x_{Ck} = \left[ \frac{\sum_{i=H i_0-[L/2]}^{H i_0+[L/2]} \; \sum_{j=H\frac{j_0+j_1}{2}-[L/2]}^{H\frac{j_0+j_1}{2}+[L/2]} i \, I_{mk}(i,j)}{\sum_i \sum_j I_{mk}(i,j)} \right], \qquad y_{Ck} = \left[ \frac{\sum_i \sum_j j \, I_{mk}(i,j)}{\sum_i \sum_j I_{mk}(i,j)} \right]$$

where [·] is the rounding operation, i is a row in the current frame image, j is a column in the current frame image, L is the preset side length, and I_mk(i,j) is the gray value of the pixel in row i and column j of the current frame image.
In this embodiment the preset side length of 151 pixels is chosen empirically, so [L/2] = 75, and substituting the magnification of the downsampling processing, the current frame target row and the current frame target column into the formula gives the accurate position of the current frame C_k(307, 215), as shown in fig. 4, a schematic diagram of the accurate position of the current frame of the unmanned aerial vehicle in this embodiment.
The principle of S2 of this embodiment is as follows: the current frame image comprises the current frame unmanned aerial vehicle image and the current frame background image, whose pixel gray values differ markedly; in particular, extensive experiments show that the gray-value difference at the edge pixels of the current frame unmanned aerial vehicle image is about 15, so the preset empirical threshold is usually chosen as 15.

Predicting the target center on the downsampled first gray level image is equivalent to predicting the current frame prediction position in the current frame image; the gray values of every two adjacent pixels in each row of the first gray level image are therefore differenced and their absolute values taken, and the resulting two-dimensional matrix is traversed. The row containing the largest number of elements above the preset empirical threshold necessarily contains the current frame unmanned aerial vehicle image and is set as the current frame target row i_0, completing the row positioning of the current frame prediction position of the target center. Since the difference between the edge pixels of the current frame unmanned aerial vehicle image and the pixels of the current frame background image is the largest, the first and the last element whose gray-value difference exceeds the preset empirical threshold necessarily correspond to the edges of the unmanned aerial vehicle image, so from their columns j_0 and j_1 in target row i_0 the current frame target column is determined as (j_0+j_1)/2, completing the column positioning; the current frame prediction position B(H×i_0, H×(j_0+j_1)/2) then follows from the magnification of the downsampling processing.

This position is, however, only a prediction, and the accurate position of the current frame of the target center must be further determined: a square area of preset side length centered on the prediction position is selected in the current frame image as the current frame prediction area of the unmanned aerial vehicle, and the centroid of this area, calculated from the gray value of each of its pixels by the centroid formula, is the accurate position of the current frame of the target center. The preset side length can be chosen according to the actual number of pixels and is usually taken empirically as 151 pixels for the side of the square area.

By positioning the current frame target row and target column, the current frame prediction position of the unmanned aerial vehicle is obtained preliminarily, which determines the current frame prediction area and makes the accurate position of the current frame of the target center easy to obtain; the positioning method is simple and precise, reduces the computation of the subsequent steps, overcomes interference from complex background environments, can meet special military requirements, and can be widely applied in the technical field of unmanned aerial vehicle countering.
Preferably, the specific steps of S3 include:
S31: calculating the average gray value T_k of all pixels of the current frame image I_mk:

$$T_k = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} I_{mk}(i,j)$$

S32: binarizing the current frame image I_mk with the preset gray threshold S to obtain the second gray level image I_mk2:

$$I_{mk2}(i,j) = \begin{cases} 255, & I_{mk}(i,j) < S \\ 0, & I_{mk}(i,j) \ge S \end{cases}$$

The principle of S3 of this embodiment is as follows: the pixel gray values of the unmanned aerial vehicle image in the current frame target image are lower than those of the background image, so the two can be separated by binarizing with a threshold derived from the average gray value of all pixels of the current frame. Extensive experiments show that pixels with gray values below T_k - 20 belong to the unmanned aerial vehicle image and pixels above T_k - 20 to the background image; T_k - 20 is therefore taken as the preset gray threshold S, pixels below S = T_k - 20 are assigned the gray value 255, i.e. the unmanned aerial vehicle image is rendered white, and pixels above S = T_k - 20 are assigned the gray value 0, i.e. the background image is rendered black. This binarization conveniently segments the unmanned aerial vehicle image from the background image and facilitates searching for and accurately positioning the target vital points.
In this embodiment the average gray value obtained from the formula is T_k = 126, and binarizing with the threshold S = T_k - 20 = 106 gives the second gray level image I_mk2.
Preferably, in S4, when the current frame image is the first frame image of the monitoring image, the specific steps include:
S41: in the second gray level image I_mk2, dividing the circular domain centered on the accurate position C_k(x_Ck, y_Ck) of the current frame, with a preset radius length as search radius, into four equal threads; taking the accurate position of the current frame as starting point and its rightward direction as starting direction, casting a ray every 1°, and searching the four threads in parallel to obtain the several intersection points of each ray with the second gray level image in each of the four threads;
S42: traversing the several intersection points on each ray, taking the intersection point farthest from the accurate position of the current frame as the outer boundary point of the second gray level image I_mk2 in that direction, and recording the distance and direction between the outer boundary point and the accurate position of the current frame as D_rk[θ_k]; these distances and directions constitute the suspicious domain of the current frame, where D_rk[θ_k] is the distance between the outer boundary point in direction θ_k and the accurate position of the current frame, 0° ≤ θ_k < 360°, θ_k an integer.
As shown in fig. 5, a schematic diagram of the circular-domain search in the positioning method of this embodiment: the circular domain centered on C_k(307, 215) with the preset radius length R_1 as search radius is divided into four equal threads, thread I: 0°-90° (excluding 90°), thread II: 90°-180° (excluding 180°), thread III: 180°-270° (excluding 270°) and thread IV: 270°-360° (excluding 360°). With the rightward direction of C_k(307, 215) as the starting direction, a ray is cast every 1°, dividing the whole circular domain into 360 parts, θ_k (0° ≤ θ_k < 360°, θ_k an integer) being the direction of a ray. Each ray has X intersection points with the second gray level image; the intersection point on each ray farthest from the center C_k(307, 215) is recorded as the outer boundary point of the target in direction θ_k, and its distance to the center is stored in the array D_rk[θ_k]: the distance from the outer boundary point in the 0° direction to the center is stored in D_rk[0], and likewise the distance for direction θ_k in D_rk[θ_k]. D_rk[θ_k] is the suspicious domain of the current frame.
S43: calculating the maximum value of the distance between the outer boundary point in the suspicious domain of the current frame and the accurate position of the current frame, and determining the pixel points corresponding to at least four maximum values as suspicious target essential points Z k (D x ,D y ) The suspicious target vital point Z k (D x ,D y ) The specific calculation formula of (2) is as follows:
D_x = x_Ck + D_rk[θ_k] · cos θ_k

D_y = y_Ck + D_rk[θ_k] · sin θ_k
the method for calculating the maximum values of the distance between the outer boundary points in the suspicious domain of the current frame and the current frame accurate position in this embodiment is as follows: compute the maxima of the array D_rk[θ_k] with a calculation radius of 5. First, four 0s are appended to the leftmost and rightmost ends of the array D_rk[θ_k]; if a value of D_rk[θ_k] is larger than the 4 values adjacent to it on each side, that value is recorded as a maximum point of the array D_rk[θ_k] with radius 5. The calculation result is shown in fig. 6. Using a calculation radius of 5 for the maxima eliminates the influence of isolated noise points in the second gray level image and improves the subsequent positioning accuracy.
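The maxima computation of S43 might look as follows, following the description above: four zeros are appended to each end of D_rk[θ_k], and an entry counts as a maximum of radius 5 when it exceeds the 4 neighbours on each side; localMaxima is an illustrative name.

```cpp
#include <array>
#include <vector>

// Maxima of D_rk with calculation radius 5, as described above: pad each
// end of the array with four zeros, then keep every entry strictly larger
// than its 4 neighbours on both sides. Returns the directions theta_k at
// which the maxima occur.
std::vector<int> localMaxima(const std::array<int, 360>& Drk) {
    std::vector<int> padded(4, 0);
    padded.insert(padded.end(), Drk.begin(), Drk.end());
    padded.insert(padded.end(), 4, 0);

    std::vector<int> maximaDirs;
    for (int t = 0; t < 360; ++t) {
        const int c = padded[t + 4];
        bool isMax = c > 0;  // ignore directions with no boundary point
        for (int d = 1; d <= 4 && isMax; ++d)
            isMax = c > padded[t + 4 - d] && c > padded[t + 4 + d];
        if (isMax) maximaDirs.push_back(t);
    }
    return maximaDirs;
}
```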
The pixel points corresponding to the four maximum values in fig. 6 are selected as the suspicious target essential points, and the position coordinates of the four suspicious target essential points, Z_k1(D_x1, D_y1), Z_k2(D_x2, D_y2), Z_k3(D_x3, D_y3) and Z_k4(D_x4, D_y4), are obtained from the specific calculation formula of the suspicious target essential point Z_k(D_x, D_y), as shown in fig. 7, a schematic diagram of the suspicious target essential points in the positioning method of this embodiment.
S44: comparing the magnitude of an included angle between the connecting lines of the two adjacent suspicious target essential points and the accurate position of the current frame, determining the two adjacent suspicious target essential points corresponding to the maximum included angle as the target essential points, and acquiring the positions corresponding to the target essential points as Z respectively k1 (D x1 ,D y1) and Zk2 (D x2 ,D y2 ) And the directions corresponding to the target key points are respectively theta k1 and θk2
In this embodiment, Z_k1, Z_k2, Z_k3 and Z_k4 are each connected to C_k(307, 215), and the four line segments are denoted L_k1, L_k2, L_k3 and L_k4 respectively. The included angle α_k1 between L_k1 and L_k2, α_k2 between L_k2 and L_k3, α_k3 between L_k3 and L_k4, and α_k4 between L_k4 and L_k1 are calculated; the two suspicious target essential points corresponding to the two segments forming the maximum included angle are the target essential points. Among the four included angles of this embodiment, α_k1 = 147° is the largest, so the corresponding Z_k1(351, 197) and Z_k2(261, 209) are the target essential points, θ_k1 = 21° and θ_k2 = 173° are the directions of the target essential points relative to the current frame accurate position (target center), and D_rk[21] = 48 and D_rk[173] = 46 are the distances from the target essential points to the current frame accurate position (target center), where the calculation results are rounded. This is shown in fig. 8, a schematic diagram of the target essential points in the positioning method of this embodiment.
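A sketch of the S44 selection, assuming the suspicious-point directions are given in ascending order; the stored ray directions θ_k are used as a proxy for the angles of the connecting segments, and pickForwardRotors is an illustrative name.

```cpp
#include <utility>
#include <vector>

// S44 selection: among adjacent pairs of suspicious points (directions in
// degrees, ascending), find the pair whose connecting lines to the target
// center form the largest included angle; those two points are taken as the
// two forward rotors, i.e. the target essential points.
std::pair<int, int> pickForwardRotors(const std::vector<int>& dirs) {
    int bestA = 0, bestB = 0, bestGap = -1;
    const int n = static_cast<int>(dirs.size());
    for (int i = 0; i < n; ++i) {
        const int j = (i + 1) % n;        // adjacent point, wrapping at 360
        int gap = dirs[j] - dirs[i];
        if (gap < 0) gap += 360;          // wrap-around difference
        if (gap > 180) gap = 360 - gap;   // included angle between the lines
        if (gap > bestGap) { bestGap = gap; bestA = dirs[i]; bestB = dirs[j]; }
    }
    return {bestA, bestB};  // directions theta_k1 and theta_k2
}
```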
The principle of this embodiment S41 to S44 is as follows: according to empirical values, in each frame of the monitoring image the two forward rotors and the two foot stands of an unmanned aerial vehicle (for a drone with foot stands) are farthest from its target center point, that is, farthest from the current frame accurate position. It is therefore necessary to find the farthest outer boundary points in the suspicious domain of the current frame. By calculating the maximum values of the distance between the outer boundary points and the current frame accurate position of the target center, the four outer boundary points corresponding to the four distance maxima necessarily include the target essential points of the two forward rotors and possibly include the target essential points of the two foot stands; these are the suspicious target essential points Z_k(D_x, D_y), whose position coordinates and directions can be obtained from the mathematics of polar-to-rectangular coordinate conversion. For an unmanned aerial vehicle without foot stands, the four outer boundary points corresponding to the four maxima are all target essential points of the four rotors and necessarily include the target essential points of the two forward rotors;
According to empirical values, the target essential points of the unmanned aerial vehicle are usually its rotors, and in the monitoring image the included angle between the two forward rotors is usually larger than the included angle between a rotor and a foot stand (for a drone with foot stands), or larger than the included angle between other adjacent rotors (for a drone without foot stands). Therefore the two adjacent suspicious target essential points whose connecting lines to the current frame accurate position form the maximum included angle are necessarily rotors of the unmanned aerial vehicle, i.e., the target essential points, and the positions corresponding to the target essential points can be obtained as Z_k1(D_x1, D_y1) and Z_k2(D_x2, D_y2), with directions θ_k1 and θ_k2 respectively.
Through the circle-domain search and calculation on the first frame image, the target essential points of the unmanned aerial vehicle can be rapidly identified and positioned, with high accuracy and a low false alarm rate.
Preferably, in S4, when the current frame image is not the first frame image of the monitoring image, the specific steps further include:
s45: according to S41–S44, obtain the binarized third gray level image I_m(k-1)2 of the previous frame image I_m(k-1) of the current frame image in the monitoring image, the previous frame accurate position C_(k-1)(x_C(k-1), y_C(k-1)) of the target center, and the directions θ_(k-1)1 and θ_(k-1)2 of the previous frame target essential points in the previous frame image;
S46: the second gray level image I mk2 And the third gray level image I m(k-1)2 Performing AND operation to obtain an intersecting image P of the second gray level image and the third gray level image k The method comprises the steps of carrying out a first treatment on the surface of the And at the intersection image P k In the step S41-42, sector search is carried out by taking the accurate position of the current frame as a circle center, the search radius of the preset radius length and the preset sector angle to obtain the suspicious domain of the current frame of the target essential point in the current frame image
Figure BDA0001825828610000211
S47: according to S43-44, obtaining the positions of a plurality of corresponding target key points in the current frame image as Z respectively k1 ′(D x1 ,D y1) and Zk2 ′(D x2 ,D y2 ) And the directions corresponding to the target essential points are respectively theta' k1 and θ′k2
The principle of this embodiment S45–S47 is as follows: because the monitoring image is a dynamic video image, the corresponding target essential points in every frame could be obtained by the preceding steps, but a circle-domain search and calculation on every frame takes relatively long. When the current frame image is not the first frame of the monitoring image, the previous frame accurate position C_(k-1)(x_C(k-1), y_C(k-1)), the directions θ_(k-1)1 and θ_(k-1)2 of the previous frame target essential points, and the current frame accurate position are already available from the preceding steps. Because the monitoring video is captured by a high-speed camera, the time between frames is, by empirical values, extremely short, and the position of each target essential point changes by no more than 15° between frames. A sector search centered at the current frame accurate position, with fan angles set around the directions of the previous frame target essential points, therefore suffices to determine the target essential points of the current frame; this greatly reduces the search range and search time, reduces the memory occupied by the algorithm, and increases the positioning speed, while the target essential points of the current frame obtained from those of the previous frame are also more accurate;
when the sector search is performed, the binarized third gray level image of the previous frame is ANDed with the second gray level image of the current frame to obtain the sub-image where the two frames intersect, i.e., the intersection image P_k. Performing the sector search within the intersection image P_k ensures that the obtained suspicious domain D_rk[θ_k′] of the current frame is more accurate, which in turn makes the subsequent calculation of the target essential point positions more accurate. Through the above steps, real-time detection of the target essential points of an unmanned aerial vehicle under high frame rate, wide field and complex background can be achieved, meeting certain special requirements of military applications; the positioning accuracy is high, the speed is fast, the algorithm is simple and occupies relatively few resources, and the method can be widely applied in the technical field of unmanned aerial vehicle countermeasures.
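A sketch of the frame intersection and sector restriction of S45–S46, reusing the Image type from the earlier sketches; intersect and sectorDirections are illustrative names, and the ±15° fan width follows the empirical bound stated above.

```cpp
#include <cstddef>
#include <cstdint>
#include <initializer_list>
#include <vector>
// Image as defined in the earlier sketches.

// AND the binarized current and previous frames to get the intersection
// image P_k of S46: a pixel is foreground only if it is foreground in both.
Image intersect(const Image& cur, const Image& prev) {
    Image out{cur.rows, cur.cols, std::vector<uint8_t>(cur.px.size())};
    for (std::size_t i = 0; i < cur.px.size(); ++i)
        out.px[i] = (cur.px[i] == 255 && prev.px[i] == 255) ? 255 : 0;
    return out;
}

// Directions to search in the sector step: theta_k' within +-15 degrees of
// each previous-frame essential-point direction, wrapped into [0, 360).
std::vector<int> sectorDirections(int thetaPrev1, int thetaPrev2) {
    std::vector<int> dirs;
    for (int base : {thetaPrev1, thetaPrev2})
        for (int d = -15; d <= 15; ++d)
            dirs.push_back(((base + d) % 360 + 360) % 360);
    return dirs;
}
```

The ray walk itself is unchanged from the circle-domain sketch; only the set of directions shrinks from 360 to 62, which is where the speedup described above comes from.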
In this embodiment, the target essential points obtained in step S44 are taken as the previous frame target essential points, and the next frame image is taken as the current frame image, which is positioned on the basis of the previous frame target essential points obtained in S44. By the method of S44, the directions of the previous frame target essential points are θ_(k-1)1 = 21° and θ_(k-1)2 = 173°, and by the method described in S25 above, the current frame accurate position is C_k′(305, 213);
The second gray level image corresponding to the current frame image and the third gray level image corresponding to the previous frame image, obtained by the same method as step S32, are ANDed to obtain the intersection image P_k. In the intersection image P_k, a sector search is performed with C_k′(305, 213) as the center and R_1 = M/4 = 160 as the search radius, over the fan angles θ_(k-1)1 − 15° ≤ θ_k′ ≤ θ_(k-1)1 + 15° and θ_(k-1)2 − 15° ≤ θ_k′ ≤ θ_(k-1)2 + 15°, i.e., 6° ≤ θ_k′ ≤ 36° and 158° ≤ θ_k′ ≤ 188°. Taking C_k′ as the starting point and 6° as the starting direction, a ray is cast every 1° within the fan angles, and the suspicious domain D_rk[θ_k′] of the current frame is obtained by a method similar to S41–S42.
Then, by the same method as S43–S44, the positions of the target essential points corresponding to the current frame image are obtained as Z_k1′(351, 196) and Z_k2′(260, 208), with corresponding directions θ_k1′ = 20° and θ_k2′ = 175°.
In the invention, a TI KeyStone multi-core fixed- and floating-point digital signal processor is adopted; the DSP integrates C66x CorePac cores, each operating at 1.0 to 1.25 GHz, for up to 10 GHz of aggregate processing capability. The hardware configuration of the chip can adopt the following method:
(1) The TMS320C6678 reads in the monitoring image, which is first stored in the 4 MB multi-core shared memory; at the same time, during pre-compilation, space for the intermediate results of the algorithm is set aside in the multi-core shared memory;
(2) The TMS320C6678 uses the OpenMP parallel mechanism, and all threads are scheduled in dynamic scheduling mode (see the sketch after this list);
(3) Memory space is allocated to the program sections in the cmd file, placing executable code and constants, the initialization table, global and static variables, the jump table, the stack, and the C input/output buffers in L2 RAM.
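The dynamic scheduling of item (2) can be illustrated with a standard OpenMP pragma; this is a portable sketch rather than a TMS320C6678 build recipe, and the loop body is a stand-in for one ray of the circle-domain search.

```cpp
#include <cstdio>

int main() {
    long work[360] = {0};
    // schedule(dynamic): each ray of the circle-domain search is handed to
    // whichever core becomes free next, balancing load across the cores.
    #pragma omp parallel for schedule(dynamic)
    for (int theta = 0; theta < 360; ++theta) {
        for (int r = 1; r <= 160; ++r) work[theta] += r;  // stand-in ray walk
    }
    std::printf("ray 0 work: %ld\n", work[0]);
    return 0;
}
```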
In the second embodiment, as shown in fig. 9, a schematic structural diagram of a real-time positioning system for target essential points of an unmanned aerial vehicle in this embodiment is provided.
The real-time positioning system for the target vital point of the unmanned aerial vehicle comprises a monitoring unit 11, a downsampling processing unit 12, an operation unit 13, a binarization processing unit 14 and a searching unit 15;
the monitoring unit 11 is configured to acquire a monitoring image of the unmanned aerial vehicle;
the downsampling processing unit 12 is configured to select a current frame image according to the monitoring image, and downsample the current frame image to obtain a first gray level image;
The operation unit 13 is configured to obtain a current frame prediction position of a target center of the unmanned aerial vehicle according to the first gray level image, and calculate a current frame accurate position of the target center according to the current frame prediction position;
the binarization processing unit 14 is configured to perform binarization processing on the current frame image to obtain a second gray level image;
the searching unit 15 is configured to determine, in the second gray level image, a suspicious domain of the current frame of the target key point in the current frame image according to the accurate position of the current frame;
the operation unit 13 is further configured to determine a plurality of target vital point positions of the unmanned aerial vehicle according to the suspicious domain of the current frame and the accurate position of the current frame.
According to the real-time positioning system for the target essential points of the unmanned aerial vehicle disclosed by the invention, the current frame prediction position of the current frame image is obtained through the monitoring unit, the downsampling processing unit and the operation unit, so that the operation unit can conveniently obtain the current frame accurate position of the target center from the current frame prediction position; this reduces algorithm time and occupied resources and yields higher positioning precision. The suspicious domain of the current frame is searched through the binarization processing unit, the searching unit and the operation unit, and the positions of the target essential points are determined by the operation unit from the current frame accurate position and the suspicious domain of the current frame. This image-processing-based target essential point positioning system can identify and position the target essential points of the unmanned aerial vehicle in real time, with high accuracy and a low false alarm rate; it meets the requirements of high frame-rate processing and complex-background recognition of multi-rotor aircraft, can satisfy certain special military application requirements, can be widely applied in the technical field of unmanned aerial vehicle countermeasures, and safeguards the lives and property of the nation and its people.
In the third embodiment, based on the first and second embodiments, the invention further discloses a real-time positioning system for the target essential points of an unmanned aerial vehicle. As shown in fig. 10, a schematic structural diagram of another real-time positioning system for target essential points of an unmanned aerial vehicle according to the invention, the system comprises a processor 100, a memory 200, and a computer program 300 stored in the memory 200 and runnable on the processor 100; when run, the computer program 300 implements the following specific steps:
s1: acquiring a monitoring image of the unmanned aerial vehicle, selecting a current frame image according to the monitoring image, and performing downsampling on the current frame image to obtain a first gray level image;
s2: acquiring a current frame prediction position of a target center of the unmanned aerial vehicle according to the first gray level image, and calculating a current frame accurate position of the target center according to the current frame prediction position;
s3: performing binarization processing on the current frame image to obtain a second gray level image;
s4: in the second gray level image, determining a suspicious domain of the current frame of a target key point in the current frame image according to the accurate position of the current frame; and determining a plurality of target vital point positions of the unmanned aerial vehicle according to the suspicious domain of the current frame and the accurate position of the current frame.
By storing the computer program in the memory and running it on the processor, the real-time positioning system for the target essential points of the unmanned aerial vehicle can identify and position the target essential points of the unmanned aerial vehicle in real time, with high accuracy and a low false alarm rate; it meets the requirements of high frame-rate processing and complex-background recognition of multi-rotor aircraft, can satisfy certain special military application requirements, can be widely applied in the technical field of unmanned aerial vehicle countermeasures, and safeguards the lives and property of the nation and its people.
The invention also provides a storage medium having stored thereon at least one instruction which when executed implements the specific steps of S1-S4.
By executing the storage medium containing at least one instruction, the target essential points of the unmanned aerial vehicle can be identified and positioned in real time, with high accuracy and a low false alarm rate; the requirements of high frame-rate processing and complex-background recognition of multi-rotor aircraft are met, certain special military application requirements can be satisfied, the method can be widely applied in the technical field of unmanned aerial vehicle countermeasures, and the lives and property of the nation and its people are safeguarded.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (6)

1. The real-time positioning method for the target vital point of the unmanned aerial vehicle is characterized by comprising the following steps of:
step 1: acquiring a monitoring image of the unmanned aerial vehicle, selecting a current frame image according to the monitoring image, and performing downsampling on the current frame image to obtain a first gray level image;
step 2: acquiring a current frame prediction position of a target center of the unmanned aerial vehicle according to the first gray level image, and calculating a current frame accurate position of the target center according to the current frame prediction position;
step 3: performing binarization processing on the current frame image to obtain a second gray level image;
step 4: in the second gray level image, determining a suspicious domain of the current frame of a target key point in the current frame image according to the accurate position of the current frame; determining a plurality of target vital point positions of the unmanned aerial vehicle according to the suspicious domain of the current frame and the accurate position of the current frame;
In the step 1, selecting a current frame image according to the monitoring image, and performing downsampling processing on the current frame image to obtain a first gray level image specifically includes:
selecting any frame of the monitoring image as the current frame image I_mk, the size of the current frame image I_mk being M × N, and downsampling the rows and columns of the current frame image I_mk respectively by a point-separation method to obtain the first gray level image I_mk1, wherein M and N respectively denote the numbers of pixel points in the rows and columns of the current frame image I_mk;
the specific steps of the step 2 include:
step 21: respectively carrying out differential and absolute value processing on gray values of two adjacent pixel points in each row in the first gray image to obtain a two-dimensional matrix A;
step 22: traversing the two-dimensional matrix A row by row, counting the number A_i of elements in the two-dimensional matrix A larger than a preset empirical threshold, and selecting the row corresponding to the maximum of the element counts A_i as the current frame target row i_0 of the first gray level image;

step 23: determining the current frame target column (j_0 + j_1)/2 according to the columns j_0 and j_1 corresponding to the first and last elements of the current frame target row i_0 that are larger than the preset empirical threshold;

step 24: determining the current frame prediction position as B(H × i_0, H × (j_0 + j_1)/2) according to the current frame target row i_0 and the current frame target column (j_0 + j_1)/2, wherein H is the magnification of the downsampling process;
step 25: in the current frame image I_mk, selecting a square region of preset side length centered at the current frame prediction position B, and calculating the current frame accurate position C_k(x_Ck, y_Ck), the specific calculation formula of the current frame accurate position C_k(x_Ck, y_Ck) being:
x_Ck = [M_xk / m_k],  y_Ck = [M_yk / m_k]

wherein

m_k = Σ_{(i,j)∈W} I_mk(i, j),  M_xk = Σ_{(i,j)∈W} j · I_mk(i, j),  M_yk = Σ_{(i,j)∈W} i · I_mk(i, j),

and W is the square region of preset side length L centered at the current frame prediction position B;
[·] denotes the rounding operation, i is a row in the current frame image, j is a column in the current frame image, L is the preset side length, I_mk(i, j) is the gray value of the pixel point at row i and column j of the current frame image, H denotes the magnification of the downsampling process, j_0 and j_1 denote the columns corresponding to the first and last elements of the current frame target row i_0 that are larger than the preset empirical threshold, m_k denotes the 0th moment of the current frame position within the square region of preset side length, i.e., the gray sum of the target, M_yk and M_xk are respectively the 1st moments of the current frame position in the row and column directions within the square region of preset side length, i.e., the gray centers, i_0 and (j_0 + j_1)/2 are the centroid on the downsampled image, and x_Ck and y_Ck are the centroid on the current frame image;
the specific steps of the step 3 include:
step 31: calculating the average gray value T_k of all pixel points of the current frame image I_mk, the specific calculation formula of the average gray value T_k being:

T_k = (1 / (M × N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} I_mk(i, j)
wherein M and N denote the size of the current frame image I_mk, i is a row in the current frame image, and j is a column in the current frame image;
step 32: binarizing the current frame image I_mk with a preset gray threshold S to obtain the second gray level image I_mk2, namely:

I_mk2(i, j) = 255, if I_mk(i, j) < S;  I_mk2(i, j) = 0, if I_mk(i, j) ≥ S
wherein S denotes the preset gray threshold, I_mk denotes the current frame image, I_mk2 denotes the second gray level image, i is a row in the current frame image, and j is a column in the current frame image;
in the step 4, when the current frame image is the first frame image of the monitoring image, the specific steps include:
step 41: in the second gray level image I_mk2, dividing the circle domain centered at the current frame accurate position C_k(x_Ck, y_Ck) with the preset radius length as the search radius into four equal parts handled by four threads; taking the current frame accurate position as the starting point and its right direction as the starting direction, casting a ray every 1°, and carrying out a parallel search in the four threads to obtain the intersection points of each ray with the second gray level image in each of the four threads;
Step 42: traversing the intersection points on each ray, taking the intersection point on each ray farthest from the current frame accurate position as an outer boundary point of the second gray level image I_mk2, and obtaining the distance and direction between each outer boundary point and the current frame accurate position, denoted D_rk[θ_k]; the distances and directions between the outer boundary points and the current frame accurate position constitute the suspicious domain of the current frame, where D_rk[θ_k] is the distance between the outer boundary point in direction θ_k and the current frame accurate position, 0° ≤ θ_k < 360°, and θ_k is an integer;
step 43: calculating the maximum values of the distance between the outer boundary points in the suspicious domain of the current frame and the current frame accurate position, and determining the pixel points corresponding to at least four maximum values as suspicious target essential points Z_k(D_x, D_y), the specific calculation formula of the suspicious target essential point Z_k(D_x, D_y) being:

D_x = x_Ck + D_rk[θ_k] · cos θ_k

D_y = y_Ck + D_rk[θ_k] · sin θ_k
wherein D_rk[θ_k] denotes the distance between the outer boundary point in direction θ_k and the current frame accurate position, x_Ck denotes the abscissa of the current frame accurate position, y_Ck denotes the ordinate of the current frame accurate position, cos θ_k denotes the cosine of direction θ_k, and sin θ_k denotes the sine of direction θ_k;
Step 44: comparing the included angles between the lines connecting each pair of adjacent suspicious target essential points to the current frame accurate position, determining the two adjacent suspicious target essential points corresponding to the maximum included angle as the target essential points, and obtaining the positions corresponding to the target essential points as Z_k1(D_x1, D_y1) and Z_k2(D_x2, D_y2) and the directions corresponding to the target essential points as θ_k1 and θ_k2.
2. The method for positioning the target vital point of the unmanned aerial vehicle according to claim 1, wherein in the step 4, when the current frame image is not the first frame image of the monitoring image, the specific steps further include:
step 45: according to steps 41–44, obtaining the binarized third gray level image I_m(k-1)2 of the previous frame image I_m(k-1) of the current frame image in the monitoring image, the previous frame accurate position C_(k-1)(x_C(k-1), y_C(k-1)) of the target center, and the directions θ_(k-1)1 and θ_(k-1)2 of the previous frame target essential points in the previous frame image;
Step 46: the second gray level image I mk2 And the third gray level image I m(k-1)2 Performing AND operation to obtain an intersecting image P of the second gray level image and the third gray level image k The method comprises the steps of carrying out a first treatment on the surface of the And at the intersection image P k In the steps 41-42, the accurate position of the current frame is used as a circle center, the preset radius length is used as a searching radius, and the direction theta of the target key point of the previous frame is used as a search radius (k-1)1 and θ(k-1)2 The angle formed is a preset fan angle theta k ' sector search is carried out to obtain the suspicious domain D of the current frame of the target key point in the current frame image rkk ′];
Step 47: according to steps 43-44, obtaining the positions of a plurality of corresponding target key points in the current frame image as Z respectively k1 ′(D x1 ,D y1) and Zk2 ′(D x2 ,D y2 ) And the directions corresponding to the target essential points are respectively theta' k1 and θ′k2
3. The real-time positioning method for the target essential points of the unmanned aerial vehicle according to claim 2, wherein the preset radius length is R_1 = M/4, and the preset fan angle θ_k′ satisfies θ_(k-1)1 − 15° ≤ θ_k′ ≤ θ_(k-1)1 + 15° and θ_(k-1)2 − 15° ≤ θ_k′ ≤ θ_(k-1)2 + 15°, θ_k′ being an integer.
4. The real-time positioning system for the target vital point of the unmanned aerial vehicle is characterized by comprising a monitoring unit, a downsampling processing unit, an operation unit, a binarization processing unit and a searching unit;
the monitoring unit is used for acquiring a monitoring image of the unmanned aerial vehicle;
the downsampling processing unit is used for selecting a current frame image according to the monitoring image, and downsampling the current frame image to obtain a first gray level image;
The operation unit is used for acquiring a current frame prediction position of a target center of the unmanned aerial vehicle according to the first gray level image, and calculating a current frame accurate position of the target center according to the current frame prediction position;
the binarization processing unit is used for performing binarization processing on the current frame image to obtain a second gray level image;
the searching unit is used for determining a current frame suspicious domain of a target key point in the current frame image according to the accurate position of the current frame in the second gray level image;
the operation unit is also used for determining a plurality of target essential point positions of the unmanned aerial vehicle according to the suspicious domain of the current frame and the accurate position of the current frame;
the downsampling processing unit is further configured to:
selecting any frame of the monitoring image as the current frame image I_mk, the size of the current frame image I_mk being M × N, and downsampling the rows and columns of the current frame image I_mk respectively by a point-separation method to obtain the first gray level image I_mk1, wherein M and N respectively denote the numbers of pixel points in the rows and columns of the current frame image I_mk;
the arithmetic unit is further configured to:
Respectively carrying out differential and absolute value processing on gray values of two adjacent pixel points in each row in the first gray image to obtain a two-dimensional matrix A;
traversing the two-dimensional matrix A row by row, counting the number A_i of elements in the two-dimensional matrix A larger than a preset empirical threshold, and selecting the row corresponding to the maximum of the element counts A_i as the current frame target row i_0 of the first gray level image;

determining the current frame target column (j_0 + j_1)/2 according to the columns j_0 and j_1 corresponding to the first and last elements of the current frame target row i_0 that are larger than the preset empirical threshold;

determining the current frame prediction position as B(H × i_0, H × (j_0 + j_1)/2) according to the current frame target row i_0 and the current frame target column (j_0 + j_1)/2, wherein H is the magnification of the downsampling process;

in the current frame image I_mk, selecting a square region of preset side length centered at the current frame prediction position B, and calculating the current frame accurate position C_k(x_Ck, y_Ck), the specific calculation formula of the current frame accurate position C_k(x_Ck, y_Ck) being:

x_Ck = [M_xk / m_k],  y_Ck = [M_yk / m_k]

wherein

m_k = Σ_{(i,j)∈W} I_mk(i, j),  M_xk = Σ_{(i,j)∈W} j · I_mk(i, j),  M_yk = Σ_{(i,j)∈W} i · I_mk(i, j),

W is the square region of preset side length L centered at the current frame prediction position B, [·] denotes the rounding operation, i is a row in the current frame image, j is a column in the current frame image, L is the preset side length, I_mk(i, j) is the gray value of the pixel point at row i and column j of the current frame image, H denotes the magnification of the downsampling process, j_0 and j_1 denote the columns corresponding to the first and last elements of the current frame target row i_0 that are larger than the preset empirical threshold, m_k denotes the 0th moment of the current frame position within the square region of preset side length, i.e., the gray sum of the target, M_yk and M_xk are respectively the 1st moments of the current frame position in the row and column directions within the square region of preset side length, i.e., the gray centers, i_0 and (j_0 + j_1)/2 are the centroid on the downsampled image, and x_Ck and y_Ck are the centroid on the current frame image;
the binarization processing unit is also used for:
calculating the average gray value T_k of all pixel points of the current frame image I_mk, the specific calculation formula of the average gray value T_k being:

T_k = (1 / (M × N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} I_mk(i, j)

wherein M and N denote the size of the current frame image I_mk, i is a row in the current frame image, and j is a column in the current frame image;

binarizing the current frame image I_mk with a preset gray threshold S to obtain the second gray level image I_mk2, namely:

I_mk2(i, j) = 255, if I_mk(i, j) < S;  I_mk2(i, j) = 0, if I_mk(i, j) ≥ S

wherein S denotes the preset gray threshold, I_mk denotes the current frame image, I_mk2 denotes the second gray level image, i is a row in the current frame image, and j is a column in the current frame image;
The search unit is further configured to:
in the second gray level image I_mk2, dividing the circle domain centered at the current frame accurate position C_k(x_Ck, y_Ck) with the preset radius length as the search radius into four equal parts handled by four threads; taking the current frame accurate position as the starting point and its right direction as the starting direction, casting a ray every 1°, and carrying out a parallel search in the four threads to obtain the intersection points of each ray with the second gray level image in each of the four threads;

traversing the intersection points on each ray, taking the intersection point on each ray farthest from the current frame accurate position as an outer boundary point of the second gray level image I_mk2, and obtaining the distance and direction between each outer boundary point and the current frame accurate position, denoted D_rk[θ_k]; the distances and directions between the outer boundary points and the current frame accurate position constitute the suspicious domain of the current frame, where D_rk[θ_k] is the distance between the outer boundary point in direction θ_k and the current frame accurate position, 0° ≤ θ_k < 360°, and θ_k is an integer;

calculating the maximum values of the distance between the outer boundary points in the suspicious domain of the current frame and the current frame accurate position, and determining the pixel points corresponding to at least four maximum values as suspicious target essential points Z_k(D_x, D_y), the specific calculation formula of the suspicious target essential point Z_k(D_x, D_y) being:

D_x = x_Ck + D_rk[θ_k] · cos θ_k

D_y = y_Ck + D_rk[θ_k] · sin θ_k

wherein D_rk[θ_k] denotes the distance between the outer boundary point in direction θ_k and the current frame accurate position, x_Ck denotes the abscissa of the current frame accurate position, y_Ck denotes the ordinate of the current frame accurate position, cos θ_k denotes the cosine of direction θ_k, and sin θ_k denotes the sine of direction θ_k;

comparing the included angles between the lines connecting each pair of adjacent suspicious target essential points to the current frame accurate position, determining the two adjacent suspicious target essential points corresponding to the maximum included angle as the target essential points, and obtaining the positions corresponding to the target essential points as Z_k1(D_x1, D_y1) and Z_k2(D_x2, D_y2) and the directions corresponding to the target essential points as θ_k1 and θ_k2.
5. A real-time positioning system for the target essential points of an unmanned aerial vehicle, comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor, wherein the computer program, when run, implements the steps of the method according to any one of claims 1–3.
6. A storage medium, comprising: at least one instruction which, when executed, implements the steps of the method according to any one of claims 1–3.
CN201811184403.0A 2018-10-11 2018-10-11 Real-time positioning method, system and storage medium for target essential points of unmanned aerial vehicle Active CN109684909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811184403.0A CN109684909B (en) 2018-10-11 2018-10-11 Real-time positioning method, system and storage medium for target essential points of unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811184403.0A CN109684909B (en) 2018-10-11 2018-10-11 Real-time positioning method, system and storage medium for target essential points of unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN109684909A CN109684909A (en) 2019-04-26
CN109684909B true CN109684909B (en) 2023-06-09

Family

ID=66185679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811184403.0A Active CN109684909B (en) 2018-10-11 2018-10-11 Real-time positioning method, system and storage medium for target essential points of unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN109684909B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949401B * 2021-02-01 2024-03-26 Zhejiang Dahua Technology Co., Ltd. Image analysis method, device, equipment and computer storage medium


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8139115B2 (en) * 2006-10-30 2012-03-20 International Business Machines Corporation Method and apparatus for managing parking lots
US20170161961A1 (en) * 2015-12-07 2017-06-08 Paul Salsberg Parking space control method and system with unmanned paired aerial vehicle (uav)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105698762A * 2016-01-15 2016-06-22 National University of Defense Technology, PLA Rapid target positioning method based on observation points at different times on a single aircraft flight path
WO2018028546A1 * 2016-08-10 2018-02-15 Tencent Technology (Shenzhen) Co., Ltd. Key point positioning method, terminal, and computer storage medium
CN106981071A * 2017-03-21 2017-07-25 Industrial Technology Research Institute of Guangdong HUST A target tracking method applied on the basis of an unmanned boat
CN107862241A * 2017-06-06 2018-03-30 Harbin Institute of Technology Shenzhen Graduate School A clothing fashion mining method and visual perception system based on celebrity identification
CN108205655A * 2017-11-07 2018-06-26 Beijing SenseTime Technology Development Co., Ltd. A key point prediction method, device, electronic equipment and storage medium
CN108122247A * 2017-12-25 2018-06-05 Beihang University A video object detection method based on visual saliency and feature prior models

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A local contrast method for small infrared target detection; CHEN L P et al.; IEEE Transactions on Geoscience & Remote Sensing; 2013-12-31 *
Improvement of a three-dimensional thinning algorithm based on a re-detection process; HONG Hanyu et al.; Computer Science (计算机科学); 2018-05-15 *

Also Published As

Publication number Publication date
CN109684909A (en) 2019-04-26

Similar Documents

Publication Publication Date Title
Zhao et al. Polardet: A fast, more precise detector for rotated target in aerial images
CN104715471B (en) Target locating method and its device
CN104301630B (en) A kind of video image joining method and device
CN113591968A (en) Infrared weak and small target detection method based on asymmetric attention feature fusion
Jeong et al. Fast horizon detection in maritime images using region-of-interest
Zhang et al. Multi-target tracking of surveillance video with differential YOLO and DeepSort
AU2018282347A1 (en) Method and apparatus for monitoring vortex-induced vibration of wind turbine
CN111126116A (en) Unmanned ship river channel garbage identification method and system
CN103679694A (en) Ship small target detection method based on panoramic vision
Zhu et al. Arbitrary-oriented ship detection based on retinanet for remote sensing images
Mattoccia et al. Near real-time fast bilateral stereo on the GPU
CN109684909B (en) Real-time positioning method, system and storage medium for target essential points of unmanned aerial vehicle
Zhang et al. An internal-external optimized convolutional neural network for arbitrary orientated object detection from optical remote sensing images
CN108229281B (en) Neural network generation method, face detection device and electronic equipment
EP3044734B1 (en) Isotropic feature matching
Tian et al. Ship detection in visible remote sensing image based on saliency extraction and modified channel features
Shi et al. RAOD: refined oriented detector with augmented feature in remote sensing images object detection
CN113869163B (en) Target tracking method and device, electronic equipment and storage medium
Yang et al. Method for building recognition from FLIR images
CN112651351B (en) Data processing method and device
Elassal et al. Unsupervised crowd counting
CN105205826B (en) A kind of SAR image azimuth of target method of estimation screened based on direction straight line
CN112215036B (en) Cross-mirror tracking method, device, equipment and storage medium
CN109410274B (en) Method for positioning typical non-cooperative target key points in real time under high frame frequency condition
CN108898573B (en) Infrared small target rapid extraction method based on multidirectional annular gradient method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant