CN103942786A - Self-adaptation block mass target detecting method of unmanned aerial vehicle visible light and infrared images - Google Patents


Info

Publication number
CN103942786A
Authority
CN
China
Prior art keywords
target
image
moving window
size
unmanned plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410141183.9A
Other languages
Chinese (zh)
Other versions
CN103942786B (en)
Inventor
丁文锐
刘硕
李红光
袁永显
黎盈婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing northern sky long hawk UAV Technology Co. Ltd.
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201410141183.9A priority Critical patent/CN103942786B/en
Publication of CN103942786A publication Critical patent/CN103942786A/en
Application granted granted Critical
Publication of CN103942786B publication Critical patent/CN103942786B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a global block (blob) target detection method suitable for unmanned aerial vehicle (UAV) visible light and infrared images, and belongs to the field of image processing. Starting from the intrinsic nature of image targets, the method uses the aggregation (agglomeration) property of targets to detect targets automatically over the whole image, so it meets the requirements of the various image payloads carried by a UAV. The method does not rely on target motion information and is therefore suitable for both moving and static targets. In addition, the parameters of the UAV onboard laser ranging system and of the imaging equipment are used to build a size table of interest-target templates, which further increases processing speed. The method makes full use of the UAV's multiple payload resources and improves the efficiency and accuracy of target detection.

Description

Adaptive block target detection method for UAV visible-light and infrared images
Technical field
The invention belongs to the technical field of UAV image processing, and specifically relates to an adaptive block target detection method suitable for UAV visible-light and infrared images.
Background technology
With the development of science and technology, a UAV can carry multiple image payloads (for example visible-light and infrared sensors), and performing automatic target detection on the image data returned by the different image sensors is the basis for giving the UAV an all-weather operational capability. UAV images are acquired at high altitude and in motion, and therefore have the following characteristics: targets are small relative to the image, targets are diverse and their outlines change easily, and the background is complex and changeable. Combining these characteristics of UAV images with the application demands of multiple payloads, a target detection method is needed that is simple and effective and suits both UAV visible-light images and UAV infrared images.
Automatic target detection has always been an important and challenging research direction in machine vision. The main research approaches to this problem can currently be divided into the following classes: 1. target modeling methods, which extract target features, characterize the target, build a target template and search for matches over the whole image; 2. background modeling methods, which model the background and obtain targets by subtracting the background from the image; 3. image segmentation methods, which partition the image into homogeneous regions and thereby separate background and targets.
The characteristics of UAV images limit traditional target detection methods. For small targets, target features are unstable and are not sufficient for accurate matching-based detection; moreover, the same target shows different features in infrared and visible-light images, so target modeling cannot meet the demands of multiple image payloads. Background modeling methods usually require the camera to be fixed in order to obtain a reasonably stable background model, whereas UAV images are captured during rapid motion; if background modeling is applied, the successive images must first be registered, which on the one hand is time-consuming and strongly affects real-time detection, and on the other hand no mature registration method exists for infrared images. Image segmentation clusters the image iteratively in some feature space to find homogeneous regions and thus separate target from background; it is suitable when targets are large and target and background differ greatly in the feature space, but in UAV images the targets are small, and it is difficult to find a feature space that is consistent for both infrared and visible-light images.
From the above analysis, the main problems currently faced in target detection for multi-payload UAV images are as follows: the method must suit both infrared and visible-light images, targets are small, variable and unknown, and the background is complex and changeable; existing target detection methods do not fully meet these practical requirements.
Summary of the invention
To solve the problems faced by automatic target detection in multi-payload UAV images, the present invention proposes an automatic target detection method that starts from the essence of the target and does not depend on a particular image type; at the same time it combines the UAV flight height and the imaging parameters to build a target size table, so that it adapts to targets of multiple sizes.
Whether in a visible-light image or in an infrared image, a UAV target usually appears as a whole block and shows a fairly obvious difference from its surrounding background; this property is called agglomeration. Starting from this intrinsic property of targets in UAV images, the present invention exploits the agglomeration they exhibit and, according to the UAV imaging characteristics, proposes a target detection method that is suitable for both UAV visible-light images and UAV infrared images.
The adaptive block target detection method for UAV visible-light and infrared images of the present invention comprises the following steps:
First step: build the interest-target size template table.
Second step: determine the moving window and traverse the image.
Third step: perform agglomeration detection on targets.
Fourth step: combine the targets detected under the multiple sizes.
The present invention has the following advantages:
1) it adapts to both UAV visible-light images and infrared images;
2) it makes full use of the UAV's other payloads and parameter information to build an interest-target size table and adapt to targets of multiple sizes;
3) it detects targets by using their agglomeration and their difference from the surrounding background, so the method is simple and effective.
Brief description of the drawings
Fig. 1 is a schematic diagram of the general structure of the present invention;
Fig. 2 is a schematic diagram of sliding-window selection over the image;
Fig. 3 is a schematic diagram of pixel summation over an image region;
Fig. 4 is a schematic diagram of target detection;
Fig. 5 is the flow chart of target detection at a single size;
Fig. 6 is a schematic diagram of combining targets across multiple sizes.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings.
The present invention is a target detection method based on agglomeration that is suitable for UAV visible-light and infrared images; its general structure is shown schematically in Fig. 1, and the specific implementation comprises the following steps:
First step: build the size table of the interest targets according to the flight parameters of the UAV.
For the interest targets that may appear in the UAV reconnaissance video, for example moving vehicles, military buildings and landmark buildings, the physical sizes of the targets are collected; combined with the UAV flight height measured by the onboard laser ranging unit and the imaging parameters of the UAV imaging payload, the pixel size of a target in the image can be roughly estimated, and the pixel size table of the interest targets is then formed. The pixel size of a target is not an exact value, but since the subsequent detection method does not require a perfectly accurate target pixel size, the purpose of designing the target pixel size table is mainly to determine the detection window size quickly and improve detection efficiency.
Specifically:
While the UAV is performing a reconnaissance mission, it is usually directed at specific targets, for example moving vehicles, military buildings or landmark buildings, whose approximate physical sizes can be collected in advance; this step can be completed before the flight. The physical target sizes (length and width) are shown in Table 1.
Table 1 Physical size table of interest targets

Target      Length (unit: m)    Width (unit: m)
Target 1    L_1                 W_1
Target 2    L_2                 W_2
...         ...                 ...
Target n    L_n                 W_n
According to the flight height of the UAV and the imaging parameters of the airborne imaging payload, the actual length Δx (m) represented by each pixel of the UAV image can be calculated. With the pixel pitch Δu (m), the focal length f (m) and the flight height H (m) known (m denotes the unit, meters), formula (1) gives the actual length represented by each pixel in the image:

Δx = (H / f) · Δu    (1)
According to Δ x (m) and target physical size can obtain target long and wide in image shared number of pixels, respectively by l iand w irepresent, specific formula for calculation is as follows:
l i = L i Δx w i = W i Δx - - - ( 2 )
Wherein: L irepresent the physical length of i target, W ithe developed width of i target, l irepresent that i target length is at shared number of pixels in image, w irepresent that i target width is in shared number of pixels in image, unit is rice, 1≤i≤n, and n is the quantity of obtaining interesting target;
Table 2 is the pixel size table of the interest targets in the UAV image (a code sketch of this first step follows the table).

Table 2 Interest-target pixel size table

Target      Length (unit: pixels)    Width (unit: pixels)
Target 1    l_1                      w_1
Target 2    l_2                      w_2
...         ...                      ...
Target n    l_n                      w_n
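As a brief illustration of this first step, the sketch below evaluates formulas (1) and (2) in Python; the target list, flight height, focal length and pixel pitch used in the example are made-up demonstration values, not taken from the patent.

```python
def pixel_footprint(H, f, du):
    """Actual ground length represented by one pixel, formula (1): dx = (H / f) * du (meters)."""
    return H / f * du


def target_pixel_sizes(targets_m, H, f, du):
    """Convert physical target sizes (L_i, W_i) in meters into pixel sizes (l_i, w_i), formula (2)."""
    dx = pixel_footprint(H, f, du)
    return {name: (L / dx, W / dx) for name, (L, W) in targets_m.items()}


# Example with assumed parameters: 1000 m flight height, 50 mm focal length, 10 um pixel pitch.
targets_m = {"vehicle": (5.0, 2.0), "building": (30.0, 20.0)}
print(target_pixel_sizes(targets_m, H=1000.0, f=0.05, du=10e-6))
# -> {'vehicle': (25.0, 10.0), 'building': (150.0, 100.0)}
```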
Second step: select a moving window of suitable size according to the target pixel size.
According to the target pixel size table formed in the first step, the corresponding sliding-window size is determined. Since detection relies on the agglomeration of the target and the difference between the target and its surrounding background, the sliding-window size is chosen larger than the pixel size of the target. A traversal rule is then chosen and the whole image is traversed with the moving window. The traversal rule can be chosen flexibly; it mainly trades off detection accuracy against real-time performance: the higher the overlap ratio between successive moving windows during traversal, the better the detection effect, but correspondingly the more time is consumed.
Specifically:
The moving window is chosen as a square window of p × p. Targets in UAV images are small and exhibit agglomeration; under normal circumstances l_i and w_i (1 ≤ i ≤ n) do not differ much, and different targets may also have similar sizes. p is computed as follows:

p_i = max(l_i, w_i),              if |l_i − w_i| > 15
p_i = 1.25 * (l_i + w_i) / 2,     if |l_i − w_i| ≤ 15        (3)

For 1 ≤ i ≤ n, the p_i are sorted in ascending order, p_1 ≤ … ≤ p_i ≤ … ≤ p_n; then, taking 15 as the step, the templates are divided into m levels, and the templates of each level are averaged to obtain the new window sizes q_k, as listed in Table 3 (a code sketch of this grading follows the table).
Table 3 Window size correspondence table

Size number    Moving-window side (unit: pixels)
1              q_1
2              q_2
...            ...
m              q_m
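The sketch below is one possible reading of formula (3) and of the grading into m levels; the text does not spell out exactly how the 15-pixel step partitions the sorted p_i, so the bucketing rule used here is an assumption.

```python
def window_size(l, w):
    """Per-target window size p_i, formula (3) (all quantities in pixels)."""
    if abs(l - w) > 15:
        return max(l, w)
    return 1.25 * (l + w) / 2.0


def window_levels(pixel_sizes, step=15):
    """Sort the p_i, group them into levels no wider than `step` pixels,
    and average each level to obtain the window sizes q_1 ... q_m (Table 3)."""
    p = sorted(window_size(l, w) for l, w in pixel_sizes)
    levels, bucket = [], [p[0]]
    for v in p[1:]:
        if v - bucket[0] <= step:
            bucket.append(v)
        else:
            levels.append(sum(bucket) / len(bucket))
            bucket = [v]
    levels.append(sum(bucket) / len(bucket))
    return levels


# Example: pixel sizes (l_i, w_i) computed in the first step.
print(window_levels([(25, 10), (30, 28), (150, 100)]))
```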
For size k (1 ≤ k ≤ m), the image is traversed with a moving window of size q_k × q_k, as shown in the schematic diagram of Fig. 2. The specific window traversal strategy is as follows (a code sketch follows the list):
(1) the moving window takes the upper-left corner of the image as its starting position;
(2) the window scans to the right along the row with a fixed step;
(3) after a row has been scanned, the moving window moves down from the starting position of the previous row and continues scanning to the right along the new row with the same step;
(4) the above steps are repeated until the moving window has traversed the entire image.
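A minimal sketch of this traversal is given below; the step length is not stated explicitly in the text, so it is left as a parameter here.

```python
def window_positions(img_h, img_w, q, step):
    """Yield the top-left corners of a q x q moving window that scans the image
    left to right and top to bottom with the given step (traversal rule above)."""
    y = 0
    while y + q <= img_h:
        x = 0
        while x + q <= img_w:
            yield x, y
            x += step
        y += step


# Example: traverse a 480 x 640 image with a 30 x 30 window and an assumed step of 15 pixels.
positions = list(window_positions(480, 640, q=30, step=15))
```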
Third step: perform agglomeration detection on the target inside the moving window.
The integral image of the image to be detected is calculated; the integral image has the same size as the original image. While the moving window traverses the image, the agglomeration inside each moving window is calculated and it is judged whether a target is present in the window. For a window that may contain a target, information such as the corresponding moving-window size, position and agglomeration is recorded and compared with all targets determined before, so that targets detected repeatedly under the same sliding-window size are excluded; this guarantees that the targets detected at each size do not repeat. When the moving window at one size has traversed the whole image, the moving windows of the other sizes apply the same traversal and detection method, until the moving windows of all sizes have traversed the image.
Specifically:
The agglomeration of a UAV target refers to the property that the target, because its energy is concentrated, shows a significant difference from its surrounding background. Agglomeration is an essential attribute of the target and does not depend on the features formed by a particular image type; it is therefore applicable to both UAV infrared images and UAV visible-light images. The image to be detected is denoted i(x, y) and its integral image ii(x, y); the value of the integral image ii at position (x, y) is the sum of the gray values of all pixels above and to the left of (x, y) in image i, where i(x', y') denotes the gray value of image i at (x', y'). As shown in Fig. 3, with the positions of the four corner points of region R in the image being (x_A, y_A), (x_B, y_B), (x_C, y_C) and (x_D, y_D), the sum of the pixels in region R, S_R, can be calculated by formula (4),
S_R = ii(x_D, y_D) + ii(x_A, y_A) − ii(x_B, y_B) − ii(x_C, y_C)    (4)
Fig. 4 is the target detection schematic diagram. The side of the moving window is q_k; the target window and the moving window share the same center, and the side of the target window is denoted t_k, where t_k = 0.75 q_k; the agglomeration of the target is denoted Agglomeration_i^k. With reference to the target agglomeration detection flow chart of Fig. 5, the concrete steps are as follows (a code sketch follows the step list):
(1) if the image is a visible-light image, it is first converted into a gray-level image i(x, y); if the image is a UAV infrared image, proceed directly to the next step;
(2) compute the integral image ii(x, y) of the image, where s(x, −1) = 0 and ii(−1, y) = 0;
s(x, y) = s(x, y − 1) + i(x, y)    (5)
ii(x, y) = ii(x − 1, y) + s(x, y)    (6)
(3) compute the sum of all pixel grays in the moving window, denoted S_si^k:
S_si^k = ii(x_Dk, y_Dk) + ii(x_Ak, y_Ak) − ii(x_Bk, y_Bk) − ii(x_Ck, y_Ck)
wherein: the positions of the four corner points of moving window k in the image are (x_Ak, y_Ak), (x_Bk, y_Bk), (x_Ck, y_Ck) and (x_Dk, y_Dk) respectively;
(4) according to formula (4), compute the sum of all pixel grays in the target window, denoted S_ti^k:
S_ti^k = ii(x_Dkt, y_Dkt) + ii(x_Akt, y_Akt) − ii(x_Bkt, y_Bkt) − ii(x_Ckt, y_Ckt)
wherein (x_Akt, y_Akt), (x_Bkt, y_Bkt), (x_Ckt, y_Ckt) and (x_Dkt, y_Dkt) denote the positions in the image of the four corner points of the target window inside moving window k, and
(x_Akt, y_Akt) = ([x_Ak + q_k/8], [y_Ak + q_k/8])
(x_Bkt, y_Bkt) = ([x_Bk − q_k/8], [y_Bk + q_k/8])
(x_Ckt, y_Ckt) = ([x_Ck + q_k/8], [y_Ck − q_k/8])
(x_Dkt, y_Dkt) = ([x_Dk − q_k/8], [y_Dk − q_k/8])
wherein: [A] denotes rounding A to an integer;
The sum of the gray values of all background pixels in the moving window is then S_bi^k = S_si^k − S_ti^k;
(5) compute the agglomeration according to formula (7):
Agglomeration_i^k = | S_ti^k / (t_k × t_k) − S_bi^k / (q_k × q_k − t_k × t_k) |    (7)
(6) if the window is judged to contain a target, record the moving-window size k and the window position (x_i^k, y_i^k);
(7) compare (x_i^k, y_i^k) with (x_j^k, y_j^k): if (|x_i^k − x_j^k| + |y_i^k − y_j^k|) < 10, where 1 ≤ j ≤ i − 1, then targets i and j are the same target under size k, and the one with the smaller agglomeration is removed;
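A compact sketch of the integral-image computation and of the agglomeration score of formula (7) is given below, using NumPy arrays indexed as (row, column); the decision threshold for step (6) is not specified in the text and is therefore left to the caller.

```python
import numpy as np


def integral_image(img):
    """Integral image ii: cumulative sum over rows and columns, formulas (5)-(6)."""
    return img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)


def region_sum(ii, top, left, bottom, right):
    """Sum of the grays in rows [top, bottom) and columns [left, right), formula (4)."""
    s = ii[bottom - 1, right - 1]
    if top > 0:
        s -= ii[top - 1, right - 1]
    if left > 0:
        s -= ii[bottom - 1, left - 1]
    if top > 0 and left > 0:
        s += ii[top - 1, left - 1]
    return s


def agglomeration(ii, top, left, q):
    """Agglomeration score of formula (7) for a q x q moving window at (top, left).
    The inner target window shares the window centre and has side t_k = 3/4 q_k."""
    t = int(round(0.75 * q))
    m = (q - t) // 2                                  # margin of roughly q/8 on each side
    s_win = region_sum(ii, top, left, top + q, left + q)
    s_tgt = region_sum(ii, top + m, left + m, top + m + t, left + m + t)
    s_bg = s_win - s_tgt                              # background sum S_bi inside the window
    return abs(s_tgt / (t * t) - s_bg / (q * q - t * t))
```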
Fourth step: combine the targets detected under the moving windows of different sizes.
During the target agglomeration detection of the third step, a target chain is obtained under each sliding-window size; detection at different sizes is mutually independent, so it can only be guaranteed that there are no repeated targets within the target chain of each size, and it cannot be excluded that the target chains of moving windows of different sizes intersect, i.e. that a target is detected repeatedly under moving windows of different sizes, as shown in Fig. 6. All targets detected under the moving windows of different sizes are therefore verified, repeated targets are excluded, and the final targets are output. For two moving windows of sizes k and s (1 ≤ k, s ≤ m), if two detections are judged to be the same target, the one with the smaller agglomeration is removed. In practical applications the number of targets in a UAV image is small, so verifying targets across multiple sizes does not noticeably affect real-time performance.
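A possible sketch of this cross-size verification is given below; the exact criterion by which two detections from different window sizes are judged to be the same target is not stated, so the within-size city-block distance test (|Δx| + |Δy| < 10) is reused here as an assumption.

```python
def merge_detections(detections, dist_thresh=10):
    """Keep, among detections judged to be the same target, only the one with the
    larger agglomeration.  `detections` is a list of (x, y, q, agglomeration) tuples
    gathered from all window sizes."""
    kept = []
    for det in sorted(detections, key=lambda d: d[3], reverse=True):
        if all(abs(det[0] - k[0]) + abs(det[1] - k[1]) >= dist_thresh for k in kept):
            kept.append(det)
    return kept


# Example: two overlapping detections of the same target at different sizes.
print(merge_detections([(100, 80, 30, 0.42), (104, 83, 45, 0.38), (300, 200, 30, 0.5)]))
```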

Claims (2)

1. A target detection method based on agglomeration suitable for UAV visible-light and infrared images, comprising the following steps:
First step: obtain the sizes of the interest targets according to the flight parameters of the UAV;
according to the UAV mission, obtain the physical sizes of the interest targets, the physical length of the i-th target being L_i and its width W_i, in meters, 1 ≤ i ≤ n, n being the number of interest targets;
obtain the actual length represented by each pixel of the UAV image:
Δx = (H / f) · Δu
wherein: Δu denotes the pixel pitch, H the flight height and f the focal length, all in meters;
According to Δ x, L i, W iobtain target length and width shared number of pixels in image:
l i = L i &Delta;x w i = W i &Delta;x
Wherein: l irepresent that i target length is at shared number of pixels in image, w irepresent that i target width is in shared number of pixels in image;
Second step: obtain the size of the moving window according to the target pixel size;
let the moving window be a square window of p × p; p is computed as follows:
p_i = max(l_i, w_i),              if |l_i − w_i| > 15
p_i = 1.25 * (l_i + w_i) / 2,     if |l_i − w_i| ≤ 15        (3)
the p_i are sorted in ascending order, p_1 ≤ … ≤ p_i ≤ … ≤ p_n; taking 15 as the step, the templates are divided into m levels, and the templates of each level are averaged to obtain the k-th new window, of size q_k, 1 ≤ k ≤ m;
for the k-th new window, the image is traversed with a moving window of size q_k × q_k;
Third step: perform agglomeration detection on the target inside the moving window;
in the moving window k of the second step, the target is subjected to agglomeration detection, specifically:
(1) if the image is a visible-light image, convert it into a gray-level image; if the image is a UAV infrared image, proceed directly to the next step; let i(x, y) denote the gray value of image i at position (x, y);
(2) compute the integral image ii(x, y) of image i; the value of the integral image ii at position (x, y) is the sum of the gray values of all pixels above and to the left of (x, y) in image i, where i(x', y') denotes the gray value of image i at (x', y');
let s(x, y) be the sum of the grays of the first y pixels of row x of the image; ii(x, y) is then obtained as:
ii(x, y) = ii(x − 1, y) + s(x, y)
s(x, y) = s(x, y − 1) + i(x, y)
s(x, −1) = 0,  ii(−1, y) = 0
(3) obtain the sum of all pixel grays in moving window k:
S_si^k = ii(x_Dk, y_Dk) + ii(x_Ak, y_Ak) − ii(x_Bk, y_Bk) − ii(x_Ck, y_Ck)
wherein: the positions of the four corner points of moving window k in the image are (x_Ak, y_Ak), (x_Bk, y_Bk), (x_Ck, y_Ck) and (x_Dk, y_Dk) respectively;
(4) obtain the sum of the target pixel grays within moving window k:
S_ti^k = ii(x_Dkt, y_Dkt) + ii(x_Akt, y_Akt) − ii(x_Bkt, y_Bkt) − ii(x_Ckt, y_Ckt)
wherein (x_Akt, y_Akt), (x_Bkt, y_Bkt), (x_Ckt, y_Ckt) and (x_Dkt, y_Dkt) denote the positions in the image of the four corner points of the target within moving window k, and
(x_Akt, y_Akt) = ([x_Ak + q_k/8], [y_Ak + q_k/8])
(x_Bkt, y_Bkt) = ([x_Bk − q_k/8], [y_Bk + q_k/8])
(x_Ckt, y_Ckt) = ([x_Ck + q_k/8], [y_Ck − q_k/8])
(x_Dkt, y_Dkt) = ([x_Dk − q_k/8], [y_Dk − q_k/8])
wherein: [A] denotes rounding A to an integer;
the sum of the gray values of all background pixels in the window is S_bi^k = S_si^k − S_ti^k;
(5) compute the agglomeration according to the following formula:
Agglomeration_i^k = | S_ti^k / (t_k × t_k) − S_bi^k / (q_k × q_k − t_k × t_k) |
wherein t_k = (3/4) q_k;
(6) if the window is judged to contain a target, record the position (x_i^k, y_i^k) of moving window k;
(7) compare (x_i^k, y_i^k) with (x_j^k, y_j^k): if (|x_i^k − x_j^k| + |y_i^k − y_j^k|) < 10, where 1 ≤ j ≤ i − 1, then targets i and j are the same target under moving window k, and the one with the smaller agglomeration is removed;
Fourth step: combine the targets detected under the moving windows of different sizes;
for two moving windows of sizes k and s, 1 ≤ k, s ≤ m, if two detections are judged to be the same target, remove the one with the smaller agglomeration.
2. The agglomeration-based target detection method suitable for UAV visible-light and infrared images according to claim 1, wherein in the second step the concrete method of traversing the image with a moving window of size q_k × q_k is:
(1) the moving window takes the upper-left corner of the image as its starting position;
(2) the window scans to the right along the row with a fixed step;
(3) after a row has been scanned, the moving window moves down from the starting position of the previous row and continues scanning to the right along the new row with the same step;
(4) the above steps are repeated until the moving window has traversed the entire image.
CN201410141183.9A 2014-04-09 2014-04-09 The self adaptation block objects detection method of unmanned plane visible ray and infrared image Active CN103942786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410141183.9A CN103942786B (en) 2014-04-09 2014-04-09 The self adaptation block objects detection method of unmanned plane visible ray and infrared image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410141183.9A CN103942786B (en) 2014-04-09 2014-04-09 The self adaptation block objects detection method of unmanned plane visible ray and infrared image

Publications (2)

Publication Number Publication Date
CN103942786A true CN103942786A (en) 2014-07-23
CN103942786B CN103942786B (en) 2016-08-31

Family

ID=51190437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410141183.9A Active CN103942786B (en) 2014-04-09 2014-04-09 The self adaptation block objects detection method of unmanned plane visible ray and infrared image

Country Status (1)

Country Link
CN (1) CN103942786B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101173987A (en) * 2007-10-31 2008-05-07 北京航空航天大学 Multi-module and multi-target accurate tracking apparatus and method thereof
CN101676744A (en) * 2007-10-31 2010-03-24 北京航空航天大学 Method for tracking small target with high precision under complex background and low signal-to-noise ratio
EP2575104A1 (en) * 2011-09-27 2013-04-03 The Boeing Company Enhancing video using super-resolution

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463179A (en) * 2014-12-30 2015-03-25 中国人民解放军国防科学技术大学 Unmanned-aerial-vehicle automatic landing target detection method based on BRISK detector maximum value response
CN104463179B (en) * 2014-12-30 2018-08-31 中国人民解放军国防科学技术大学 Unmanned plane independent landing object detection method based on the response of BRISK detector maximum values
CN107886534A (en) * 2017-11-07 2018-04-06 北京市路兴公路新技术有限公司 A kind of method and device of recognition target image size
CN109767417A (en) * 2017-11-10 2019-05-17 太豪生医股份有限公司 Lesion detection device and its method
CN109767417B (en) * 2017-11-10 2023-11-03 太豪生医股份有限公司 Focus detection device and method thereof
CN108200406A (en) * 2018-02-06 2018-06-22 王辉 Safety monitoring device
CN109767442A (en) * 2019-01-15 2019-05-17 上海海事大学 A kind of remote sensing images Aircraft Targets detection method based on invariable rotary feature
CN109767442B (en) * 2019-01-15 2020-09-04 上海海事大学 Remote sensing image airplane target detection method based on rotation invariant features
CN111178371A (en) * 2019-12-17 2020-05-19 深圳市优必选科技股份有限公司 Target detection method, apparatus and computer storage medium
CN111178371B (en) * 2019-12-17 2023-12-01 深圳市优必选科技股份有限公司 Target detection method, device and computer storage medium

Also Published As

Publication number Publication date
CN103942786B (en) 2016-08-31

Similar Documents

Publication Publication Date Title
CN111862126B (en) Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm
CN107576960B (en) Target detection method and system for visual radar space-time information fusion
CN109086668B (en) Unmanned aerial vehicle remote sensing image road information extraction method based on multi-scale generation countermeasure network
CN111126399B (en) Image detection method, device and equipment and readable storage medium
CN102426019B (en) Unmanned aerial vehicle scene matching auxiliary navigation method and system
CN103942786A (en) Self-adaptation block mass target detecting method of unmanned aerial vehicle visible light and infrared images
CN110032949A (en) A kind of target detection and localization method based on lightweight convolutional neural networks
CN108681718B (en) Unmanned aerial vehicle low-altitude target accurate detection and identification method
CN104834915B (en) A kind of small infrared target detection method under complicated skies background
CN103605978A (en) Urban illegal building identification system and method based on three-dimensional live-action data
CN103455797A (en) Detection and tracking method of moving small target in aerial shot video
CN105654507A (en) Vehicle outer contour dimension measuring method based on image dynamic feature tracking
CN104574393A (en) Three-dimensional pavement crack image generation system and method
CN106056625B (en) A kind of Airborne IR moving target detecting method based on geographical same place registration
CN106096207B (en) A kind of rotor wing unmanned aerial vehicle wind resistance appraisal procedure and system based on multi-vision visual
Cepni et al. Vehicle detection using different deep learning algorithms from image sequence
CN104268880A (en) Depth information obtaining method based on combination of features and region matching
CN114089330B (en) Indoor mobile robot glass detection and map updating method based on depth image restoration
CN113111727A (en) Method for detecting rotating target in remote sensing scene based on feature alignment
CN114078209A (en) Lightweight target detection method for improving small target detection precision
CN103679740A (en) ROI (Region of Interest) extraction method of ground target of unmanned aerial vehicle
CN114266805A (en) Twin region suggestion network model for unmanned aerial vehicle target tracking
CN106778504A (en) A kind of pedestrian detection method
CN111353481A (en) Road obstacle identification method based on laser point cloud and video image
Khosravi et al. Vehicle speed and dimensions estimation using on-road cameras by identifying popular vehicles

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20170412

Address after: 100191 Haidian District, Xueyuan Road, No. 37,

Patentee after: Beijing northern sky long hawk UAV Technology Co. Ltd.

Address before: 100191 Haidian District, Xueyuan Road, No. 37,

Patentee before: Beihang University