CN104408446B - UAV autonomous-landing target detection method based on visual saliency - Google Patents

UAV autonomous-landing target detection method based on visual saliency

Info

Publication number
CN104408446B
Authority
CN
China
Prior art keywords
unmanned aerial vehicle
image
saliency
flight
Prior art date
Legal status
Active
Application number
CN201410796642.7A
Other languages
Chinese (zh)
Other versions
CN104408446A (en)
Inventor
胡天江
马兆伟
沈镒峰
赵搏欣
孔维炜
王祥科
张代兵
相晓嘉
李杰
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN201410796642.7A
Publication of CN104408446A
Application granted
Publication of CN104408446B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/60 Static or dynamic means for assisting the user to position a body part for biometric acquisition
    • G06V40/67 Static or dynamic means for assisting the user to position a body part for biometric acquisition by interactive indications to the user
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The invention discloses a UAV autonomous-landing target detection method based on visual saliency. Its steps are: (1) UAV saliency detection: for each acquired UAV flight image I(i, j), salient-region detection is performed from the frequency-domain point of view using the discrete cosine transform; that is, input: UAV flight image sequence I; output: UAV saliency map I_num_SIG. (2) UAV image-position acquisition: the UAV is localized in the saliency map; that is, input: UAV saliency map sequence I_num_SIG and threshold δ; output: UAV image coordinates (x_num_out, y_num_out). The invention achieves fast detection and localization of the UAV with high accuracy and good reliability.

Description

UAV autonomous-landing target detection method based on visual saliency
Technical field
The present invention relates generally to the field of UAV design, and in particular to a UAV autonomous-landing target detection method based on visual saliency.
Background art
UAVs offer distinctive advantages such as no risk to personnel, low cost, light weight, good maneuverability and strong concealment, so their development has received great attention in many countries. Autonomous landing is therefore one of the inevitable trends in the development of future UAV systems, and high-precision navigation and positioning is the key to autonomous landing and recovery. With the cross-development of computer vision and videometrics, computer vision has been applied to UAV landing, and guidance based on visual information has become a new research hotspot. Vision-guided autonomous landing addresses the urgent problem of recovering UAVs safely, and UAV detection plays an important role in visual guidance. To achieve autonomous landing, the image sequence of the UAV must first be captured by a camera; designing a detection and recognition algorithm that is robust, accurate and fast is therefore the key issue that determines the efficiency of UAV autonomous landing.
Summary of the invention
The technical problem to be solved by the present invention is: in view of the shortcomings of the prior art, the present invention provides a UAV autonomous-landing target detection method based on visual saliency that achieves fast detection and localization of the UAV with high accuracy and good reliability.
In order to solve the above technical problem, the present invention adopts the following technical scheme:
A UAV autonomous-landing target detection method based on visual saliency, whose steps are:
(1) UAV saliency detection: for each acquired UAV flight image I(i, j), salient-region detection is performed from the frequency-domain point of view using the discrete cosine transform; that is, input: UAV flight image sequence I; output: UAV saliency map I_num_SIG.
(2) UAV image-position acquisition: the UAV is localized in the saliency map; that is, input: UAV saliency map sequence I_num_SIG and threshold δ; output: UAV image coordinates (x_num_out, y_num_out).
As a further improvement of the present invention, step (1) comprises the following concrete steps:
(1.1) Perform the discrete cosine transform (DCT): apply the DCT separately to the three channels of the num-th UAV flight image I_num, obtaining the DCT result of each channel, I_num_a_DCT, a = 1, 2, 3, via the standard two-dimensional DCT
I_{num\_a\_DCT}(u,v) = c(u)\,c(v)\sum_{i=0}^{m-1}\sum_{j=0}^{n-1} I_{num\_a}(i,j)\cos\frac{(2i+1)u\pi}{2m}\cos\frac{(2j+1)v\pi}{2n}
where I_num_a(i, j) is the m × n discrete digital image of the a-th channel of the UAV flight image, c(0) = \sqrt{1/m} and c(u) = \sqrt{2/m} for u > 0 (c(v) is defined analogously with n).
Then take the average over the three channels,
I_{num\_DCT}(u,v) = \frac{1}{3}\sum_{a=1}^{3} I_{num\_a\_DCT}(u,v),
which gives the DCT result I_num_DCT of the num-th UAV flight image I_num.
(1.2) Apply the sign operation to the transformed image to screen out the region of interest; that is, the sign operation
I_{num\_SIGN}(u,v) = \operatorname{sign}\big(I_{num\_DCT}(u,v)\big)
is applied to the image I_num_DCT, yielding the sign map I_num_SIGN, where sign(x) equals 1 for x > 0, 0 for x = 0 and -1 for x < 0.
(1.3) Perform the inverse discrete cosine transform (IDCT) to bring the screened image back to the spatial domain; that is, the IDCT
I_{num\_a\_IDCT}(i,j) = \sum_{u=0}^{m-1}\sum_{v=0}^{n-1} c(u)\,c(v)\,I_{num\_SIGN}(u,v)\cos\frac{(2i+1)u\pi}{2m}\cos\frac{(2j+1)v\pi}{2n}
is applied separately to the three channels of the sign map I_num_SIGN, yielding the IDCT result I_num_a_IDCT.
(1.4) Perform Gaussian convolution to smooth the transformed image; that is, each of the three channels of I_num_IDCT is smoothed with a Gaussian of variance σ,
I_num_a_SIG(i, j, σ) = I_num_a_IDCT(i, j) * G(i, j, σ),
yielding the saliency map I_num_SIG, where σ is the Gaussian kernel variance parameter, G(i, j, σ) is the two-dimensional Gaussian kernel and * denotes convolution.
As a further improvement of the present invention, step (2) comprises the following concrete steps:
(2.1) Pixel screening: the pixel screening operation with threshold δ,
I_{num\_\delta}(i,j) = \begin{cases} I_{num\_SIG}(i,j), & I_{num\_SIG}(i,j) > \delta \\ 0, & \text{otherwise}, \end{cases}
is applied to the saliency map sequence, yielding the screened image I_num_δ.
(2.2) Image-position computation: the UAV image-position computation
x_{num\_out} = \frac{1}{a}\sum_i x_i, \qquad y_{num\_out} = \frac{1}{a}\sum_j y_j,
taken over the coordinates (x_i, y_j) of the a non-zero pixels of the image I_num_δ, gives the UAV image coordinates (x_num_out, y_num_out).
Compared with the prior art, the advantage of the invention is that the UAV autonomous-landing target detection method based on visual saliency can detect the UAV in the acquired flight image sequence, obtain the UAV position and guide the UAV to land accurately, thereby achieving fast detection and localization of the UAV with high accuracy and good reliability.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the invention.
Fig. 2 shows the UAV saliency detection result in the air in a concrete application example of the invention.
Fig. 3 shows the UAV saliency detection result against a complex background in the concrete application example.
Fig. 4 shows the UAV saliency detection result near the ground in the concrete application example.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings and a specific embodiment.
As shown in Fig. 1, the UAV autonomous-landing target detection method based on visual saliency of the present invention comprises the following steps:
(1) UAV saliency detection.
The UAV detection of the present invention is based on the visual attention mechanism and mainly uses saliency detection. The human visual system possesses a bottom-up visual attention mechanism driven by scene saliency, which allows the eye to rapidly notice salient targets in complex scenes. Selective visual attention is a key link of information processing in the human visual pathway: it allows only a small part of the perceived information to enter the short-term memory and visual awareness stages. The visual saliency mechanism guarantees the efficiency with which the human eye acquires information and enables effective acquisition and processing of image information.
For each acquired UAV flight image I(i, j), the present invention performs salient-region detection from the frequency-domain point of view using the discrete cosine transform. That is, input: UAV flight image sequence I and operator parameters; output: UAV saliency map I_num_SIG.
The concrete procedure is as follows:
(1.1) Perform the discrete cosine transform (Discrete Cosine Transform, DCT).
The two-dimensional DCT is defined as
I_{DCT}(u,v) = c(u)\,c(v)\sum_{i=0}^{m-1}\sum_{j=0}^{n-1} I(i,j)\cos\frac{(2i+1)u\pi}{2m}\cos\frac{(2j+1)v\pi}{2n}, \quad u = 0,\dots,m-1,\ v = 0,\dots,n-1,   (1)
where the flight image I(i, j) is an m × n discrete digital image of a UAV flight frame, c(0) = \sqrt{1/m} and c(u) = \sqrt{2/m} for u > 0 (c(v) is defined analogously with n).
Image after conversion is variable signal from low to high from the upper left corner to the lower right corner, and the absolute value of coefficient is gradually Diminish, energy is concentrated mainly on low frequency part.It is then less sensitive to high fdrequency component and human eye is more sensitive to low frequency component.It is low Frequency coefficient embodies the profile and gray-scale watermark of target, and high frequency coefficient embodies the details of target shape.Salient region is examined The profile characteristic that survey embodies mainly for the low frequency part of target is interested.In addition, DCT quantizing process is actually pair One optimization process of DCT coefficient, it is the characteristic that make use of human eye insensitive to HFS to realize the significantly simple of data Change.
According to formula (1), the present invention applies the DCT separately to the three channels of the num-th UAV flight image I_num, obtaining the DCT result of each channel, I_num_a_DCT, a = 1, 2, 3.
Then the average over the three channels is taken,
I_{num\_DCT}(u,v) = \frac{1}{3}\sum_{a=1}^{3} I_{num\_a\_DCT}(u,v),
which gives the DCT result I_num_DCT of the num-th UAV flight image I_num.
(1.2) Apply the sign operation to the transformed image to screen out the region of interest. The sign operation is defined as
\operatorname{sign}(x) = \begin{cases} 1, & x > 0 \\ 0, & x = 0 \\ -1, & x < 0. \end{cases}   (2)
That is, according to formula (2), the sign operation is applied elementwise to the image I_num_DCT, yielding the sign map I_num_SIGN.
(1.3) Perform the inverse discrete cosine transform (Inverse Discrete Cosine Transform, IDCT) to bring the screened image back to the spatial domain, using
I_{IDCT}(i,j) = \sum_{u=0}^{m-1}\sum_{v=0}^{n-1} c(u)\,c(v)\,I_{SIGN}(u,v)\cos\frac{(2i+1)u\pi}{2m}\cos\frac{(2j+1)v\pi}{2n}.   (3)
That is, according to formula (3), the IDCT is applied separately to the three channels of the sign map I_num_SIGN, yielding the IDCT result I_num_IDCT.
(1.4) Perform Gaussian convolution to smooth the transformed image:
L(i, j, σ) = I(i, j) * G(i, j, σ)   (4)
where σ is the Gaussian kernel variance parameter, G(i, j, σ) is the two-dimensional Gaussian kernel and * denotes convolution.
That is, according to formula (4), each of the three channels of I_num_IDCT is smoothed with a Gaussian of variance σ, yielding the saliency map I_num_SIG.
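The four sub-steps (1.1)-(1.4) can be summarized in a short sketch. The code below is only an illustration, not the patent's reference implementation: it assumes SciPy is available, the function name compute_saliency_map and the default sigma are illustrative choices, and the per-channel IDCT of the shared sign map is collapsed into a single transform because the three channels coincide.

import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def compute_saliency_map(frame, sigma=5.0):
    """frame: (m, n, 3) UAV flight image; returns an (m, n) saliency map."""
    # (1.1) per-channel two-dimensional DCT, then the average over the three channels
    dct_channels = [dctn(frame[:, :, a].astype(float), type=2, norm='ortho')
                    for a in range(3)]
    dct_mean = np.mean(dct_channels, axis=0)
    # (1.2) sign operation: keep only the signs of the averaged DCT coefficients
    sign_map = np.sign(dct_mean)
    # (1.3) inverse DCT of the sign map back to the spatial domain
    recon = idctn(sign_map, type=2, norm='ortho')
    # (1.4) Gaussian smoothing with kernel parameter sigma gives the saliency map
    return gaussian_filter(recon, sigma=sigma)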
(2) UAV image-position acquisition.
After the saliency map of the UAV has been obtained, the UAV must be localized in the saliency map. How the image coordinates of the UAV are computed from the saliency-detected images, as shown in Fig. 1, is the second step of the invention. That is, input: UAV saliency map sequence I_num_SIG and threshold δ; output: UAV image coordinates (x_num_out, y_num_out).
This process mainly consists of two steps, pixel screening and position computation, defined as follows:
(2.1) Pixel screening.
Let S be the obtained saliency map and δ a given threshold. The salient UAV region is screened to obtain the screened map S′ that retains the UAV position; the threshold decision operation used is
S'(i,j) = \begin{cases} S(i,j), & S(i,j) > \delta \\ 0, & \text{otherwise}. \end{cases}   (6)
That is, according to formula (6), the pixel screening operation with threshold δ is applied to the saliency map sequence, yielding the screened image I_num_δ.
(2.2) Image-position computation.
From the screened map S′, a relatively definite image coordinate (x_out, y_out) of the UAV is to be obtained. The present invention uses an averaging operator, defined as
x_{out} = \frac{1}{a}\sum_i x_i, \qquad y_{out} = \frac{1}{a}\sum_j y_j,   (7)
where a is the number of non-zero pixels in the screened map S′ and (x_i, y_j) are the image coordinates of those pixels in S′.
That is, according to formula (7), the UAV image-position computation is applied to the image I_num_δ, giving the UAV image coordinates (x_num_out, y_num_out).
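A matching sketch of the two localization sub-steps is given below; it is again only an illustration, and the function name locate_uav and the example threshold are not taken from the patent.

import numpy as np

def locate_uav(saliency_map, delta):
    """Threshold the saliency map and return the mean coordinate of the surviving pixels."""
    # (2.1) pixel screening: keep only pixels whose saliency exceeds the threshold delta
    screened = np.where(saliency_map > delta, saliency_map, 0.0)
    # (2.2) position computation: average the column/row indices of the non-zero pixels
    rows, cols = np.nonzero(screened)
    if cols.size == 0:
        return None                      # no pixel survived the threshold
    return float(np.mean(cols)), float(np.mean(rows))

# Example use on one frame (frame is a hypothetical (m, n, 3) array from the sequence):
# x_out, y_out = locate_uav(compute_saliency_map(frame), delta=0.1)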
By applying the two steps above identically to the whole image sequence, the image coordinates (x_out, y_out) of the UAV over the whole flight are obtained.
In a concrete application example of the invention, three typical positions in the UAV landing image sequence are chosen: in the air, entering a complex-background region, and close to the ground before landing. The processed images are 480 × 640 color images, and the processing results are shown in Fig. 2, Fig. 3 and Fig. 4. The sub-figures in Fig. 2, Fig. 3 and Fig. 4 are: (a) original image; (b) saliency detection map; (c) threshold-processed map; (d) image-pixel computation, where the marked point is the computed UAV image pixel coordinate; (e) the UAV image pixel coordinate shown on the original image.
The test set contains 182 images; the total processing time is 23.044426 s, i.e. an average of 0.1252 s per image. The processing platform is a Pentium Dual-Core CPU with 4 GB of memory and a 32-bit operating system, and the processing environment is Matlab 2010.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiment; all technical schemes under the idea of the present invention belong to the protection scope of the present invention. It should be pointed out that, for those of ordinary skill in the art, improvements and modifications that do not depart from the principles of the present invention shall also be regarded as falling within the protection scope of the present invention.

Claims (1)

1. A UAV autonomous-landing target detection method based on visual saliency, characterized in that its steps are:
(1) UAV saliency detection: for each acquired UAV flight image I(i, j), salient-region detection is performed from the frequency-domain point of view using the discrete cosine transform; that is, input: UAV flight image sequence I; output: UAV saliency map I_num_SIG;
(2) UAV image-position acquisition: the UAV is localized in the saliency map; that is, input: UAV saliency map sequence I_num_SIG and threshold δ; output: UAV image coordinates (x_num_out, y_num_out);
step (1) comprises the following concrete steps:
(1.1) perform the discrete cosine transform: apply the DCT separately to the three channels of the num-th UAV flight image I_num, obtaining the DCT result of each channel, I_num_a_DCT, a = 1, 2, 3, via
I_{num\_a\_DCT}(u,v) = c(u)\,c(v)\sum_{i=0}^{m-1}\sum_{j=0}^{n-1} I_{num\_a}(i,j)\cos\frac{(2i+1)u\pi}{2m}\cos\frac{(2j+1)v\pi}{2n},
where I_num_a(i, j) is the m × n discrete digital image of the a-th channel of the UAV flight image, c(0) = \sqrt{1/m} and c(u) = \sqrt{2/m} for u > 0 (c(v) is defined analogously with n);
then take the average over the three channels,
I_{num\_DCT}(u,v) = \frac{1}{3}\sum_{a=1}^{3} I_{num\_a\_DCT}(u,v),
obtaining the DCT result I_num_DCT of the num-th UAV flight image I_num;
(1.2) apply the sign operation to the transformed image to screen out the region of interest, i.e. apply I_{num\_SIGN}(u,v) = \operatorname{sign}(I_{num\_DCT}(u,v)) to the image I_num_DCT to obtain the sign map I_num_SIGN, where sign(x) equals 1 for x > 0, 0 for x = 0 and -1 for x < 0;
(1.3) perform the inverse discrete cosine transform to bring the screened image back to the spatial domain, i.e. apply the IDCT
I_{num\_a\_IDCT}(i,j) = \sum_{u=0}^{m-1}\sum_{v=0}^{n-1} c(u)\,c(v)\,I_{num\_SIGN}(u,v)\cos\frac{(2i+1)u\pi}{2m}\cos\frac{(2j+1)v\pi}{2n}
separately to the three channels of the sign map I_num_SIGN, obtaining the IDCT result I_num_a_IDCT;
(1.4) perform Gaussian convolution to smooth the transformed image, i.e. smooth each of the three channels of I_num_IDCT with a Gaussian of variance σ,
I_num_a_SIG(i, j, σ) = I_num_a_IDCT(i, j) * G(i, j, σ),
obtaining the saliency map I_num_SIG, where σ is the Gaussian kernel variance parameter;
step (2) comprises the following concrete steps:
(2.1) pixel screening: apply to the saliency map sequence I_num_SIG the pixel screening operation with threshold δ,
I_{num\_\delta}(i,j) = \begin{cases} I_{num\_SIG}(i,j), & I_{num\_SIG}(i,j) > \delta \\ 0, & \text{otherwise}, \end{cases}
obtaining the screened image I_num_δ;
(2.2) image-position computation: apply the UAV image-position computation to the coordinates (x_i, y_j) of the image I_num_δ,
x_{num\_out} = \frac{1}{a_1}\sum_i x_i, \qquad y_{num\_out} = \frac{1}{a_1}\sum_j y_j,
obtaining the UAV image coordinates (x_num_out, y_num_out), where a_1 is the number of non-zero pixels in the screened image I_num_δ and (x_i, y_j) are the image coordinates of those pixels.
CN201410796642.7A 2014-12-19 2014-12-19 UAV autonomous-landing target detection method based on visual saliency Active CN104408446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410796642.7A CN104408446B (en) 2014-12-19 2014-12-19 UAV autonomous-landing target detection method based on visual saliency

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410796642.7A CN104408446B (en) 2014-12-19 2014-12-19 UAV autonomous-landing target detection method based on visual saliency

Publications (2)

Publication Number Publication Date
CN104408446A CN104408446A (en) 2015-03-11
CN104408446B (en) 2017-10-03

Family

ID=52646077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410796642.7A Active CN104408446B (en) 2014-12-19 2014-12-19 UAV autonomous-landing target detection method based on visual saliency

Country Status (1)

Country Link
CN (1) CN104408446B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DK3485462T3 (en) * 2016-07-12 2021-02-01 Sz Dji Technology Co Ltd PROCESSING IMAGES TO OBTAIN ENVIRONMENTAL INFORMATION
CN108737821B (en) * 2018-04-25 2020-09-04 中国人民解放军军事科学院军事医学研究院 Video interest area quick preselection method and system based on multi-channel shallow feature
CN109543561B (en) * 2018-10-31 2020-09-18 北京航空航天大学 Method and device for detecting salient region of aerial video
US11932394B2 (en) 2021-09-14 2024-03-19 Honeywell International Inc. System and method for localization of safe zones in dense depth and landing quality heatmaps

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070031054A1 (en) * 2005-08-08 2007-02-08 Neomagic Israel Ltd. Encoding DCT coordinates

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1916801A (en) * 2005-10-28 2007-02-21 南京航空航天大学 Method for identifying cooperated object for self-landing pilotless aircraft
CN101216938A (en) * 2007-12-28 2008-07-09 深圳市蓝韵实业有限公司 An automatic positioning method of multi-sequence images
CN102509290A (en) * 2011-10-25 2012-06-20 西安电子科技大学 Saliency-based synthetic aperture radar (SAR) image airfield runway edge detection method
CN102968793A (en) * 2012-11-20 2013-03-13 百年金海安防科技有限公司 Method for identifying natural image and computer generated image based on DCT (Discrete Cosine Transformation)-domain statistic characteristics

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
视觉注意模型及其在目标检测中的应用 [Visual attention model and its application in target detection]; 范娜 (Fan Na); 《中国优秀硕士学位论文全文数据库》 [China Excellent Master's Theses Full-text Database]; 2013-05-15 (No. 05, 2013); pp. I138-1955 *

Also Published As

Publication number Publication date
CN104408446A (en) 2015-03-11

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant