CN103942786B - Adaptive blob-based target detection method for UAV visible-light and infrared images - Google Patents

Adaptive blob-based target detection method for UAV visible-light and infrared images Download PDF

Info

Publication number
CN103942786B
CN103942786B CN201410141183.9A CN201410141183A CN103942786B
Authority
CN
China
Prior art keywords
target
image
sliding window
unmanned plane
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410141183.9A
Other languages
Chinese (zh)
Other versions
CN103942786A (en
Inventor
丁文锐
刘硕
李红光
袁永显
黎盈婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing northern sky long hawk UAV Technology Co. Ltd.
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN201410141183.9A priority Critical patent/CN103942786B/en
Publication of CN103942786A publication Critical patent/CN103942786A/en
Application granted granted Critical
Publication of CN103942786B publication Critical patent/CN103942786B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The invention discloses a global blob-based target detection method applicable to UAV visible-light and infrared images, belonging to the field of image processing. Starting from the intrinsic nature of image targets, the method uses the blob-like character of a target, its agglomeration, to detect targets automatically over the whole image, meeting the needs of UAVs carrying multiple imaging payloads. The method does not rely on target motion information and applies equally to stationary and moving targets; at the same time it uses the parameters of the UAV's onboard laser ranging system and imaging devices to build a size table of targets of interest, which further accelerates processing. With the method of the invention, the multi-payload resources of a UAV can be fully exploited, improving both the efficiency and the accuracy of target detection.

Description

Adaptive blob-based target detection method for UAV visible-light and infrared images
Technical field
The invention belongs to the technical field of UAV image processing, and specifically relates to an adaptive blob-based target detection method applicable to UAV visible-light and infrared images.
Background technology
With the development of science and technology, UAVs can carry multiple imaging payloads (e.g. visible-light and infrared sensors). Performing automatic target detection on the image data returned by the different image sensors is the foundation for giving a UAV all-weather combat capability. UAV images are shot from high altitude in a moving environment and have the following characteristics: targets are small relative to the image, targets are varied and prone to profile changes, and backgrounds are complex and changeable. Combining these characteristics of UAV imagery with the application demands of multiple payloads, a simple and effective target detection method is needed that suits both UAV visible-light images and UAV infrared images.
Automatic target detection has always been an important and challenging research direction in machine vision. The main research approaches to this problem currently fall into the following classes: 1. target modeling, which extracts target features, characterizes the target, builds a target template, and searches for matches over the whole image; 2. background modeling, which models the background and subtracts it from the image to obtain the target; 3. image segmentation, which partitions the image into homogeneous regions, thereby separating background from target.
The characteristics of UAV imagery limit traditional target detection methods. When targets are small, target features are unstable and insufficient for accurate match-based detection, and the same target exhibits different features in infrared and visible-light images, so target modeling does not suit the needs of multiple imaging payloads. Background modeling usually requires a fixed camera to obtain a relatively stable background model, whereas UAV images are shot during rapid movement; applying background modeling would require registering successive frames, which is computationally costly and significantly affects real-time detection, and there is as yet no mature registration method for infrared images. Image segmentation clusters the image iteratively in some feature space to find homogeneous regions and thereby separate target from background; it suits larger targets whose feature-space difference from the background is large, but in UAV images targets are small, and infrared and visible-light images rarely share a consistent feature space.
From the above analysis, the main problems currently faced in target detection for UAV multi-payload imagery are as follows: the method must suit both infrared and visible-light images; targets are small, changeable, and unknown; backgrounds are complex and changeable; and existing target detection methods cannot fully meet these practical demands.
Summary of the invention
To solve the problems encountered in automatic target detection for UAV multi-payload images, the present invention proposes an automatic target detection method that starts from the essence of the target and is independent of the particular image type; it further combines UAV flight altitude and imaging parameters to build a target size table, adapting to targets of multiple sizes.
Whether in visible-light or infrared images, a UAV target generally appears as a compact blob that presents a marked difference from its surrounding background; this property is called agglomeration. Starting from this intrinsic property of targets in UAV images, the present invention exploits the agglomeration they present and, in accordance with UAV imaging characteristics, proposes a target detection method suitable for both UAV visible-light images and UAV infrared images.
The adaptive blob-based target detection method for UAV visible-light and infrared images of the present invention includes the following steps:
Step 1: establish a size template table of the targets of interest;
Step 2: determine the sliding windows and traverse the image;
Step 3: perform agglomeration detection on the targets;
Step 4: combine the targets detected at multiple sizes.
The invention has the following advantages:
1) it suits both UAV visible-light images and UAV infrared images;
2) it makes full use of the UAV's other payloads and parameter information to build a size table of targets of interest, adapting to targets of multiple sizes;
3) it detects using the target's agglomeration and its difference from the surrounding background, so the method is simple and effective.
Accompanying drawing explanation
Fig. 1 is the overall structure diagram of the present invention;
Fig. 2 is a diagram of sliding-window selection over the image;
Fig. 3 is a diagram of pixel summation over an image region;
Fig. 4 is a diagram of target detection;
Fig. 5 is the flow chart of target detection at a single size;
Fig. 6 is a diagram of combining targets across sizes.
Detailed description of the invention
The present invention is described in detail below with reference to the accompanying drawings.
The present invention is a blob-based target detection method applicable to UAV visible-light and infrared images; its overall structure diagram is shown in Fig. 1, and the specific implementation comprises the following steps:
Step 1: establish the size table of the targets of interest according to the UAV's flight parameters.
For the targets of interest likely to appear in UAV reconnaissance video, such as moving vehicles, military structures, and landmarks, the actual target sizes are tallied; combined with the UAV flight altitude measured by the onboard laser ranging unit and the imaging parameters of the UAV's imaging payload, the pixel size of each target in the image can be roughly estimated, forming the pixel size table of the targets of interest. The pixel size of a target is not an exact value, but since the subsequent detection method does not require an exact target pixel size, the target pixel size table is designed mainly to determine the detection window size quickly and to improve detection efficiency.
Specifically:
During a reconnaissance mission, a UAV generally looks for specific targets, such as moving vehicles, military structures, and landmark buildings, whose approximate actual sizes can be tallied; this step can be completed before flight. The actual target sizes (length and width) are shown in Table 1.
Table 1 Actual size table of targets of interest
Target Length (unit: m) Width (unit: m)
Target 1 L1 W1
Target 2 L2 W2
Target n Ln Wn
According to the UAV's flight altitude and the imaging parameters of the airborne imaging payload, the real-world length Δx (m) represented by each pixel of the UAV image can be calculated. Given pixel pitch Δu (m), focal length f (m), and flight altitude H (m), where m denotes metres, formula (1) calculates the real length represented by each pixel in the image:
Δx = (H / f) · Δu    (1)
According to Δx (m) and the actual target sizes, the number of pixels occupied in the image by each target's length and width can be obtained, denoted l_i and w_i respectively; the specific formulas are:
l_i = L_i / Δx,  w_i = W_i / Δx    (2)
where L_i denotes the actual length of the i-th target and W_i its actual width, in metres; l_i denotes the number of pixels occupied in the image by the length of the i-th target, and w_i the number of pixels occupied by its width; 1 ≤ i ≤ n, where n is the number of targets of interest;
Table 2 is the pixel size table of the targets of interest in the UAV image.
Table 2 Pixel size table of targets of interest
Target Length (unit: pixels) Width (unit: pixels)
Target 1 l1 w1
Target 2 l2 w2
Target n ln wn
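As an illustration of formulas (1) and (2), the table-building step can be sketched in Python; the altitude, focal length, pixel pitch, and target dimensions below are hypothetical example values, not figures from the invention.

```python
def pixel_size_table(targets_m, H, f, du):
    """targets_m: list of (length_m, width_m); H, f, du in metres.
    Returns per-target (l_i, w_i) in pixels via formulas (1) and (2)."""
    dx = H / f * du                      # formula (1): ground metres per pixel
    return [(L / dx, W / dx) for L, W in targets_m]

# Hypothetical values: 1000 m altitude, 0.1 m focal length, 10 um pixel pitch,
# a car-sized and a building-sized target.
table = pixel_size_table([(4.5, 1.8), (30.0, 12.0)], H=1000.0, f=0.1, du=1e-5)
# dx = 0.1 m/pixel, so the car maps to roughly 45 x 18 pixels
```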
Step 2: select sliding windows of suitable size according to the target pixel sizes.
According to the target pixel size table formed in Step 1, the corresponding sliding window sizes are determined. Since detection will rely on the target's agglomeration and its difference from the surrounding background, the sliding window size must be larger than the target's pixel size. A traversal rule is then chosen and the whole image is traversed with the sliding window. The choice of traversal rule is flexible and mainly balances detection precision against real-time performance: the higher the overlap between successive window positions, the better the detection, but correspondingly the more time is consumed.
Specifically:
The sliding window is chosen as a square window of p × p. Since targets in UAV images are small and blob-like, l_i and w_i (1 ≤ i ≤ n) usually differ little, and different targets may have similar sizes. The formula for p is:
p_i = max(l_i, w_i) if |l_i − w_i| > 15;  p_i = 1.25 · (l_i + w_i) / 2 if |l_i − w_i| ≤ 15    (3)
For 1 ≤ i ≤ n, the p_i are sorted in ascending order, p_1 ≤ … ≤ p_i ≤ … ≤ p_n; then, with a step of 15, the templates are divided into m grades, and averaging the templates in each grade gives the new window sizes q_k, as follows:
Table 3 Window size correspondence table
Size index Sliding window width (unit: pixels)
1 q1
2 q2
m qm
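The grading above can be sketched as follows. The source does not spell out the exact rule for the number of grades m, so binning the sorted p_i in steps of 15 from the smallest value is one plausible reading, not the patent's definitive rule.

```python
def window_sizes(pixel_sizes, step=15):
    """pixel_sizes: list of (l_i, w_i). Applies formula (3), sorts the p_i,
    bins them into grades of width `step`, and averages each grade -> q_k."""
    ps = []
    for l, w in pixel_sizes:
        if abs(l - w) > step:          # elongated target: take the larger side
            ps.append(max(l, w))
        else:                          # near-square target: inflated mean side
            ps.append(1.25 * (l + w) / 2)
    ps.sort()
    grades = {}                        # grade index -> the p_i in that grade
    for p in ps:
        grades.setdefault(int((p - ps[0]) // step), []).append(p)
    return [sum(g) / len(g) for g in grades.values()]   # one q_k per grade

qs = window_sizes([(45, 18), (20, 22), (90, 80)])
```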
For size k (1 ≤ k ≤ m), the image is traversed with a sliding window of size q_k × q_k, as shown in Fig. 2. The specific window traversal strategy is as follows:
(1) the sliding window starts at the top-left corner of the image;
(2) it scans rightward along the row with a fixed step;
(3) after a row has been scanned, the sliding window moves down by a fixed step from the row's starting position and continues scanning rightward along the row;
(4) the above steps are repeated until the sliding window has traversed the entire image.
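A minimal sketch of the traversal strategy (1)-(4); the exact step lengths are not legible in the source, so the step is left as a free parameter (a fixed fraction of q_k would be a typical choice).

```python
def window_positions(height, width, q, step):
    """Yield the top-left (row, col) of each q x q window, scanning
    left-to-right then top-to-bottom, starting at the image's top-left."""
    positions = []
    y = 0
    while y + q <= height:             # window must fit vertically
        x = 0
        while x + q <= width:          # window must fit horizontally
            positions.append((y, x))
            x += step                  # rightward scan along the row
        y += step                      # move down to the next row of windows
    return positions

pos = window_positions(height=64, width=64, q=32, step=16)
```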
Step 3: within the sliding window, perform agglomeration detection on the target.
The integral image of the image under detection is calculated; it has the same size as the original image. While the sliding window traverses the image, the agglomeration within each window is computed and used to judge whether the window contains a target. For each window that may contain a target, the corresponding window size, position, agglomeration, and related information are recorded, and the candidate is compared with all previously confirmed targets so that duplicate detections at the same window size are excluded; this ensures that the targets detected at each size do not repeat. Once the sliding window at one size has traversed the image, the same traversal and detection procedure is applied to the sliding windows of the other sizes, until the windows of all sizes have traversed the image.
Specifically:
The agglomeration of a UAV target refers to the property that, because its energy is concentrated, the target differs markedly from its surrounding background. Agglomeration is an essential attribute of the target and does not depend on the particular image type, so it applies to UAV infrared images as well as UAV visible-light images. The image under detection is denoted i(x, y) and its integral image ii(x, y). The value of the integral image at position (x, y) is the sum of the gray values of all pixels above and to the left of (x, y) in image i, i.e. ii(x, y) = Σ_{x′ ≤ x, y′ ≤ y} i(x′, y′), where i(x′, y′) denotes the gray value of image i at (x′, y′). As shown in Fig. 3, the four corner points of region R are located in the image at (x_A, y_A), (x_B, y_B), (x_C, y_C), (x_D, y_D); the sum S_R of the pixels in region R can be calculated by formula (4):
S_R = ii(x_D, y_D) + ii(x_A, y_A) − ii(x_B, y_B) − ii(x_C, y_C)    (4)
Fig. 4 is the target detection diagram. The side length of the sliding window is q_k; the target window and the sliding window share a common center, and the side length of the target window is denoted t_k, where t_k = 0.75 q_k. The agglomeration of the target is denoted Agglomeration_i^k. With reference to the target agglomeration detection flow chart of Fig. 5, the specific steps are as follows:
(1) if the image is a visible-light image, it is first converted into a gray-level image i(x, y); if it is a UAV infrared image, proceed directly to the next step;
(2) calculate the integral image ii(x, y) of the image, where s(x, −1) = 0 and ii(−1, y) = 0:
s(x, y) = s(x, y − 1) + i(x, y)    (5)
ii(x, y) = ii(x − 1, y) + s(x, y)    (6)
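The recurrences (5)-(6) and the region sum of formula (4) can be sketched with plain Python lists; `integral_image` and `region_sum` are illustrative names, not from the source.

```python
def integral_image(img):
    """ii[y][x] = sum of img over all pixels up and to the left of (x, y),
    inclusive, built with the column prefix sum s(x, y) of formula (5)."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for x in range(w):
        s = 0                                   # s(x, -1) = 0
        for y in range(h):
            s += img[y][x]                      # s(x,y) = s(x,y-1) + i(x,y)
            ii[y][x] = (ii[y][x - 1] if x else 0) + s   # ii(x-1,y) + s(x,y)
    return ii

def region_sum(ii, top, left, bottom, right):
    """Inclusive rectangle sum via formula (4), with the corner terms taken
    just outside the rectangle."""
    total = ii[bottom][right]
    if top:
        total -= ii[top - 1][right]
    if left:
        total -= ii[bottom][left - 1]
    if top and left:
        total += ii[top - 1][left - 1]
    return total

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
ii = integral_image(img)
```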
(3) calculate the sum of all pixel gray values in the sliding window, denoted S_si^k:
S_si^k = ii(x_Dk, y_Dk) + ii(x_Ak, y_Ak) − ii(x_Bk, y_Bk) − ii(x_Ck, y_Ck)
where the four corner points of sliding window k are located in the image at (x_Ak, y_Ak), (x_Bk, y_Bk), (x_Ck, y_Ck), (x_Dk, y_Dk);
(4) according to formula (4), calculate the sum of all pixel gray values in the target window, denoted S_ti^k:
S_ti^k = ii(x_Dkt, y_Dkt) + ii(x_Akt, y_Akt) − ii(x_Bkt, y_Bkt) − ii(x_Ckt, y_Ckt)
where (x_Akt, y_Akt), (x_Bkt, y_Bkt), (x_Ckt, y_Ckt), (x_Dkt, y_Dkt) denote the positions in the image of the four corner points of the target within sliding window k, and
(x_Akt, y_Akt) = ([x_Ak + q_k/8], [y_Ak + q_k/8])
(x_Bkt, y_Bkt) = ([x_Bk − q_k/8], [y_Bk + q_k/8])
(x_Ckt, y_Ckt) = ([x_Ck + q_k/8], [y_Ck − q_k/8])
(x_Dkt, y_Dkt) = ([x_Dk − q_k/8], [y_Dk − q_k/8])
where [A] denotes rounding A to an integer;
then the sum of the gray values of the background pixels in the sliding window is S_bi^k = S_si^k − S_ti^k;
(5) calculate the agglomeration according to formula (7):
Agglomeration_i^k = | S_ti^k / (t_k × t_k) − S_bi^k / (q_k × q_k − t_k × t_k) |    (7)
(6) if the agglomeration exceeds a preset threshold, the window is judged to contain a target, and the size k and position (x_i^k, y_i^k) of the sliding window are recorded;
(7) (x_i^k, y_i^k) is compared with each (x_j^k, y_j^k), 1 ≤ j ≤ i − 1; if (|x_i^k − x_j^k| + |y_i^k − y_j^k|) < 10, the i-th and j-th detections are the same target at size k, and the one with the smaller agglomeration is removed.
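Steps (3)-(5) can be sketched as follows; `agglomeration` and `make_region_sum` are hypothetical helper names, and a brute-force region sum stands in for the integral-image lookup purely to keep the sketch short.

```python
def make_region_sum(img):
    """Brute-force inclusive rectangle sum; stands in for the integral image."""
    def rs(top, left, bottom, right):
        return sum(img[y][x] for y in range(top, bottom + 1)
                             for x in range(left, right + 1))
    return rs

def agglomeration(region_sum_fn, y, x, q):
    """Formula (7) for the q x q window with top-left corner (y, x); the
    centred target window of side t = 0.75 q is inset q/8 on every side."""
    t = int(round(0.75 * q))
    d = q // 8
    s_si = region_sum_fn(y, x, y + q - 1, x + q - 1)                  # window
    s_ti = region_sum_fn(y + d, x + d, y + q - 1 - d, x + q - 1 - d)  # target
    s_bi = s_si - s_ti                        # background = window - target
    return abs(s_ti / (t * t) - s_bi / (q * q - t * t))

flat = [[10] * 8 for _ in range(8)]           # uniform image: no blob
bright = [[0] * 8 for _ in range(8)]          # bright 6x6 blob, centred
for yy in range(1, 7):
    for xx in range(1, 7):
        bright[yy][xx] = 20
score_flat = agglomeration(make_region_sum(flat), 0, 0, 8)
score_bright = agglomeration(make_region_sum(bright), 0, 0, 8)
```

A uniform window scores zero, while a window whose centre is brighter than its border scores high, which is exactly the cue the threshold test in step (6) relies on.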
Step 4: combine the targets detected at the different sliding window sizes.
During the agglomeration detection of Step 3, a target chain is obtained at each sliding window size. Detection at the different sizes is mutually independent, so repeats can only be excluded within the target chain of each size; an intersection between the target chains of different window sizes, i.e. one target being detected repeatedly at different window sizes, cannot be excluded, as shown in Fig. 6. All targets detected at the different window sizes are therefore checked against one another, repeated targets are excluded, and the final targets are output. For two sliding windows of sizes k and s (1 ≤ k, s ≤ m), if the detected positions are sufficiently close, the two detections are judged to be the same target, and the one with the smaller agglomeration is removed. In practical application the number of targets in a single UAV image is small, so verifying targets across sizes does not significantly affect real-time performance.
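The cross-size merging can be sketched as below; the closeness criterion for this step is not legible in the source, so the sketch reuses the Manhattan-distance test of step (7) with a hypothetical threshold, keeping the detection with the larger agglomeration.

```python
def merge_across_sizes(detections, thresh=10):
    """detections: list of (x, y, size_k, agglomeration). Returns the list
    with near-duplicate positions removed, keeping, for each cluster, the
    detection with the largest agglomeration."""
    kept = []
    for det in sorted(detections, key=lambda d: -d[3]):   # strongest first
        x, y = det[0], det[1]
        # keep only if far (Manhattan distance) from everything already kept
        if all(abs(x - kx) + abs(y - ky) >= thresh for kx, ky, _, _ in kept):
            kept.append(det)
    return kept

dets = [(50, 50, 1, 8.0), (53, 48, 2, 5.0), (120, 40, 1, 6.0)]
final = merge_across_sizes(dets)
```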

Claims (2)

1. A blob-based target detection method applicable to UAV visible-light and infrared images, comprising the following steps:
Step 1: obtain the sizes of the targets of interest according to the UAV's flight parameters;
according to the UAV's mission, obtain the actual sizes of the targets of interest; the actual length of the i-th target is L_i and its width is W_i, in metres, 1 ≤ i ≤ n, where n is the number of targets of interest;
obtain the real length represented by each pixel of the UAV image:
Δx = (H / f) · Δu    (1)
where Δu denotes the pixel pitch, H the flight altitude, and f the focal length, all in metres;
according to Δx, L_i and W_i, obtain the number of pixels occupied in the image by the target's length and width:
l_i = L_i / Δx,  w_i = W_i / Δx    (2)
where l_i denotes the number of pixels occupied in the image by the length of the i-th target, and w_i the number of pixels occupied in the image by its width;
Step 2: obtain the size of the sliding window according to the target pixel sizes;
let the sliding window be a square window of p_i × p_i, where p_i is calculated as follows:
p_i = max(l_i, w_i) if |l_i − w_i| > 15;  p_i = 1.25 · (l_i + w_i) / 2 if |l_i − w_i| ≤ 15    (3)
sort the p_i in ascending order, p_1 ≤ … ≤ p_i ≤ … ≤ p_n; with a step of 15, divide the templates into m grades; averaging the templates of each grade yields new windows whose sizes are q_k, 1 ≤ k ≤ m;
for the k-th new window, traverse the image with a sliding window of size q_k × q_k;
Step 3: within the sliding window, perform agglomeration detection on the target;
agglomeration detection on the target in sliding window k of Step 2 is specifically:
(1) if the image is a visible-light image, convert it into a gray-level image; if it is a UAV infrared image, proceed directly to the next step; let i(x, y) denote the gray value of image i at position (x, y);
(2) calculate the integral image ii(x, y) of image i; the value of the integral image at position (x, y) is the sum of the gray values of all pixels above and to the left of (x, y) in image i, i.e. ii(x, y) = Σ_{x′ ≤ x, y′ ≤ y} i(x′, y′), where i(x′, y′) denotes the gray value of image i at (x′, y′);
letting s(x, y) be the cumulative sum of the gray values of the first y + 1 pixels of column x, ii(x, y) is obtained as:
ii(x, y) = ii(x − 1, y) + s(x, y)    (4)
s(x, y) = s(x, y − 1) + i(x, y)    (5)
s(x, −1) = 0,  ii(−1, y) = 0    (6)
(3) obtain the sum of all pixel gray values in sliding window k:
S_si^k = ii(x_Dk, y_Dk) + ii(x_Ak, y_Ak) − ii(x_Bk, y_Bk) − ii(x_Ck, y_Ck)    (7)
where the four corner points of sliding window k are located in the image at (x_Ak, y_Ak), (x_Bk, y_Bk), (x_Ck, y_Ck), (x_Dk, y_Dk);
(4) obtain the sum of the gray values of the target pixels within sliding window k:
S_ti^k = ii(x_Dkt, y_Dkt) + ii(x_Akt, y_Akt) − ii(x_Bkt, y_Bkt) − ii(x_Ckt, y_Ckt)    (8)
where (x_Akt, y_Akt), (x_Bkt, y_Bkt), (x_Ckt, y_Ckt), (x_Dkt, y_Dkt) denote the positions in the image of the four corner points of the target within sliding window k, and
(x_Akt, y_Akt) = ([x_Ak + q_k/8], [y_Ak + q_k/8]); (x_Bkt, y_Bkt) = ([x_Bk − q_k/8], [y_Bk + q_k/8]); (x_Ckt, y_Ckt) = ([x_Ck + q_k/8], [y_Ck − q_k/8]); (x_Dkt, y_Dkt) = ([x_Dk − q_k/8], [y_Dk − q_k/8])    (9)
where [A] denotes rounding A to an integer;
then the sum of the gray values of the background pixels is S_bi^k = S_si^k − S_ti^k;
(5) calculate the agglomeration according to the following formula:
Agglomeration_i^k = | S_ti^k / (t_k × t_k) − S_bi^k / (q_k × q_k − t_k × t_k) |    (10)
where t_k = 0.75 q_k;
(6) if the agglomeration exceeds a preset threshold, the window is judged to contain a target, and the position (x_i^k, y_i^k) of sliding window k is recorded;
(7) compare (x_i^k, y_i^k) with (x_j^k, y_j^k), where 1 ≤ j ≤ i − 1; if (|x_i^k − x_j^k| + |y_i^k − y_j^k|) < 10, then the i-th and j-th detections are the same target in sliding window k, and the one with the smaller agglomeration is removed;
Step 4: combine the targets detected at the different sliding window sizes;
for two sliding windows of sizes k and s, 1 ≤ k, s ≤ m, if their detected positions are sufficiently close, the two detections are judged to be the same target, and the one with the smaller agglomeration is removed.
2. The blob-based target detection method applicable to UAV visible-light and infrared images according to claim 1, wherein in Step 2 the specific method of traversing the image with a sliding window of size q_k × q_k is:
(1) the sliding window starts at the top-left corner of the image;
(2) it scans rightward along the row with a fixed step;
(3) after a row has been scanned, the sliding window moves down by a fixed step from the row's starting position and continues scanning rightward along the row;
(4) the above steps are repeated until the sliding window has traversed the entire image.
CN201410141183.9A 2014-04-09 2014-04-09 Adaptive blob-based target detection method for UAV visible-light and infrared images Active CN103942786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410141183.9A CN103942786B (en) 2014-04-09 2014-04-09 Adaptive blob-based target detection method for UAV visible-light and infrared images


Publications (2)

Publication Number Publication Date
CN103942786A CN103942786A (en) 2014-07-23
CN103942786B true CN103942786B (en) 2016-08-31

Family

ID=51190437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410141183.9A Active CN103942786B (en) 2014-04-09 2014-04-09 Adaptive blob-based target detection method for UAV visible-light and infrared images

Country Status (1)

Country Link
CN (1) CN103942786B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463179B (en) * 2014-12-30 2018-08-31 中国人民解放军国防科学技术大学 Unmanned plane independent landing object detection method based on the response of BRISK detector maximum values
CN107886534A (en) * 2017-11-07 2018-04-06 北京市路兴公路新技术有限公司 A kind of method and device of recognition target image size
TWI683276B (en) * 2017-11-10 2020-01-21 太豪生醫股份有限公司 Focus detection apparatus and method therof
CN108200406A (en) * 2018-02-06 2018-06-22 王辉 Safety monitoring device
CN109767442B (en) * 2019-01-15 2020-09-04 上海海事大学 Remote sensing image airplane target detection method based on rotation invariant features
CN111178371B (en) * 2019-12-17 2023-12-01 深圳市优必选科技股份有限公司 Target detection method, device and computer storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101173987A (en) * 2007-10-31 2008-05-07 北京航空航天大学 Multi-module and multi-target accurate tracking apparatus and method thereof
CN101676744A (en) * 2007-10-31 2010-03-24 北京航空航天大学 Method for tracking small target with high precision under complex background and low signal-to-noise ratio
EP2575104A1 (en) * 2011-09-27 2013-04-03 The Boeing Company Enhancing video using super-resolution


Also Published As

Publication number Publication date
CN103942786A (en) 2014-07-23

Similar Documents

Publication Publication Date Title
CN103942786B (en) Adaptive blob-based target detection method for UAV visible-light and infrared images
CN104062973B (en) A kind of mobile robot based on logos thing identification SLAM method
WO2020151109A1 (en) Three-dimensional target detection method and system based on point cloud weighted channel feature
CN104536009B (en) Above ground structure identification that a kind of laser infrared is compound and air navigation aid
CN103324936B (en) A kind of vehicle lower boundary detection method based on Multi-sensor Fusion
CN102426019B (en) Unmanned aerial vehicle scene matching auxiliary navigation method and system
CN101996401B (en) Target analysis method and apparatus based on intensity image and depth image
CN103353988B (en) Allos SAR scene Feature Correspondence Algorithm performance estimating method
CN101826157B (en) Ground static target real-time identifying and tracking method
CN107506729B (en) Visibility detection method based on deep learning
CN104794737B (en) A kind of depth information Auxiliary Particle Filter tracking
CN110163275B (en) SAR image target classification method based on deep convolutional neural network
JP6397379B2 (en) CHANGE AREA DETECTION DEVICE, METHOD, AND PROGRAM
CN103605978A (en) Urban illegal building identification system and method based on three-dimensional live-action data
CN106384079A (en) RGB-D information based real-time pedestrian tracking method
CN104268880A (en) Depth information obtaining method based on combination of features and region matching
CN103996027B (en) Space-based space target recognizing method
US20220315243A1 (en) Method for identification and recognition of aircraft take-off and landing runway based on pspnet network
CN101770583B (en) Template matching method based on global features of scene
CN104680151B (en) A kind of panchromatic remote sensing image variation detection method of high-resolution for taking snow covering influence into account
CN114089330B (en) Indoor mobile robot glass detection and map updating method based on depth image restoration
CN103065320A (en) Synthetic aperture radar (SAR) image change detection method based on constant false alarm threshold value
CN116071424A (en) Fruit space coordinate positioning method based on monocular vision
CN105631849B (en) The change detecting method and device of target polygon
Li et al. Road Damage Evaluation via Stereo Camera and Deep Learning Neural Network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20170412

Address after: 100191 Haidian District, Xueyuan Road, No. 37,

Patentee after: Beijing northern sky long hawk UAV Technology Co. Ltd.

Address before: 100191 Haidian District, Xueyuan Road, No. 37,

Patentee before: Beihang University