CN107194941A - Monocular-vision-based UAV autonomous landing method, system and electronic device - Google Patents

Monocular-vision-based UAV autonomous landing method, system and electronic device

Info

Publication number
CN107194941A
CN107194941A (application CN201710367835.4A)
Authority
CN
China
Prior art keywords
straight line
parameter
image
UAV
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710367835.4A
Other languages
Chinese (zh)
Inventor
张俊勇
伍世虔
宋运莲
陈鹏
张琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Science and Engineering WUSE
Wuhan University of Science and Technology WHUST
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE
Priority to CN201710367835.4A
Publication of CN107194941A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01PMEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P13/00Indicating or recording presence, absence, or direction, of movement
    • G01P13/02Indicating direction only, e.g. by weather vane
    • G01P13/025Indicating direction only, e.g. by weather vane indicating air data, i.e. flight variables of an aircraft, e.g. angle of attack, side slip, shear, yaw
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20061Hough transform

Abstract

The invention discloses a monocular-vision-based method, system and electronic device for UAV autonomous landing. The method comprises: obtaining a Gaussian pyramid image from a pre-acquired aerial image; performing edge detection on the Gaussian pyramid image to obtain an edge image; extracting a first line and a second line from the edge image using a coarse-scale Hough transform; refining the first line and the second line into a third line and a fourth line using a selective-iteration random sample consensus (RANSAC) algorithm; and computing the landing parameters of the UAV from the third line and the fourth line, so that the UAV can land autonomously. The invention solves the technical problem that prior-art UAV landing methods suffer from low positioning accuracy.

Description

Monocular-vision-based UAV autonomous landing method, system and electronic device
Technical field
The present invention relates to the technical field of visual navigation, and in particular to a monocular-vision-based UAV autonomous landing method, system and electronic device.
Background technology
In the field of military aviation, UAVs with autonomous landing capability are a current research focus, and autonomous landing places very high demands on the accuracy, speed and reliability of navigation. Current navigation for UAV autonomous landing falls broadly into two classes: satellite (GPS) based navigation and vision-based navigation. GPS-based navigation is easy to use but can fail completely in wartime. Vision-based autonomous navigation reduces the UAV's dependence on external signals during landing, giving the landing process greater autonomy.
In the prior art, UAV autonomous landing technology is mainly based on detecting the white lines on both sides of the runway, using methods such as the Hough transform, the Radon transform and line-segment clustering. Line-segment clustering is easily disturbed by noise and is not reliable enough for UAV navigation. The Hough transform and the Radon transform are very similar: both map points of the image plane into a parameter space. The difference is that the former is the discrete form of the line-parameter transform and is applied directly to a binary image, while the latter is the continuous form of the transform and is applied directly to a grayscale image.
However, the applicant has found through long-term practice that among the prior-art methods above, the Hough transform is fast but not very accurate, while the Radon transform is more accurate but has poor real-time performance and cannot meet the navigation-speed requirements of a UAV.
It follows that prior-art UAV landing methods suffer from low positioning accuracy, so providing a method for UAV autonomous landing is of particular importance.
Summary of the invention
Embodiments of the present invention provide a monocular-vision-based UAV autonomous landing method, system and electronic device, to solve the technical problem that prior-art UAV landing methods suffer from low positioning accuracy.
The one or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages:

In the monocular-vision-based UAV autonomous landing method disclosed by the invention, a Gaussian pyramid image is first obtained from a pre-acquired aerial image, and edge detection is performed on the Gaussian pyramid image to obtain an edge image. A first line and a second line are then extracted from the edge image using a coarse-scale Hough transform and refined into a third line and a fourth line using a selective-iteration random sample consensus (RANSAC) algorithm. The landing parameters of the UAV are computed from the third line and the fourth line, so that the UAV can land autonomously. In this method, obtaining a Gaussian pyramid image from the pre-acquired aerial image via the Gaussian pyramid acceleration algorithm improves real-time performance for the UAV; extracting the first and second lines with the coarse-scale Hough transform speeds up the Hough transform; and refining the Hough lines with the selective-iteration RANSAC algorithm improves the positioning accuracy of the lines, yielding accurate landing parameters for autonomous landing. This solves the technical problem that prior-art UAV landing methods suffer from low positioning accuracy.
The above is only an overview of the technical solution of the present invention. To make the technical means of the invention easier to understand and practicable according to the content of the specification, and to make the above and other objects, features and advantages of the invention more apparent, specific embodiments of the invention are set forth below.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a monocular-vision-based UAV autonomous landing method in an embodiment of the present invention;
Fig. 2 is a structural diagram of a monocular-vision-based UAV autonomous landing system in an embodiment of the present invention;
Fig. 3 is a geometric interpretation of the relevant parameters of the coarse-scale Hough transform in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the principle of obtaining the third line and the fourth line using the coarse-scale Hough transform method in an embodiment of the present invention;
Fig. 5 is a schematic diagram of the UAV rotating and translating to a position directly above the runway in an embodiment of the present invention;
Fig. 6 is a structural schematic diagram of the electronic device provided in an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the invention provide a monocular-vision-based UAV autonomous landing method, system and electronic device, to solve the technical problem that prior-art UAV landing methods suffer from low positioning accuracy.
The general idea of the technical solution in the embodiments of the present application is as follows:
First, a Gaussian pyramid image is obtained from the pre-acquired aerial image by the Gaussian pyramid acceleration algorithm, which improves real-time performance for the UAV. A first line and a second line are extracted from the edge image using a coarse-scale Hough transform, which speeds up the Hough transform, and the lines obtained by the Hough transform are refined using a selective-iteration RANSAC algorithm, which improves the positioning accuracy of the lines. Accurate landing parameters are thus obtained so that the UAV can land autonomously, solving the technical problem that prior-art UAV landing methods suffer from low positioning accuracy.
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Embodiment one
This embodiment provides a monocular-vision-based UAV autonomous landing method. Referring to Fig. 1, the method includes:
Step S101: obtaining a Gaussian pyramid image from a pre-acquired aerial image;
Step S102: performing edge detection on the Gaussian pyramid image to obtain an edge image;
Step S103: extracting a first line and a second line from the edge image using a coarse-scale Hough transform;
Step S104: refining the first line and the second line into a third line and a fourth line using a selective-iteration random sample consensus (RANSAC) algorithm;
Step S105: computing the landing parameters of the UAV from the third line and the fourth line, so that the UAV can land autonomously.
It should be noted that in the present application, obtaining a Gaussian pyramid image from the pre-acquired aerial image improves computational efficiency and meets the real-time navigation requirement of the UAV; extracting the first and second lines from the edge image by the coarse-scale Hough transform speeds up the Hough transform; and refining the Hough lines with the selective-iteration random sample consensus algorithm improves the positioning accuracy of the lines, so that accurate landing parameters are obtained, the UAV can land autonomously, and the technical problem of low positioning accuracy in prior-art UAV landing methods is solved.
The monocular-vision-based UAV autonomous landing method provided by the present application is described in detail below with reference to Fig. 1:
Step S101: obtaining a Gaussian pyramid image from a pre-acquired aerial image.
Obtaining the Gaussian pyramid image from the pre-acquired aerial image includes:
converting the aerial image to grayscale to obtain a first image;
taking the first image as layer 0 of the Gaussian pyramid;
filtering layer 0 with a Gaussian convolution kernel to obtain layer 1, the Gaussian convolution kernel being of size 5*5;
filtering again with the Gaussian convolution kernel until layer L is obtained, the size of the layer-L image being 1/2^L of the size of the layer-0 image, and taking the layer-L image as the Gaussian pyramid image.
In a specific implementation, the aerial image may be captured by the UAV. The aerial image is converted to grayscale, denoted F(x, y), and processed by the Gaussian pyramid acceleration algorithm. Let layer 0 of the Gaussian pyramid be G0(x, y) = F(x, y), and define layer l of the Gaussian pyramid as

Gl(x, y) = Σ(m = −2..2) Σ(n = −2..2) w(m, n) · G(l−1)(2x + m, 2y + n),

where w(m, n) is a 5 × 5 Gaussian convolution kernel. For the layers G0(x, y), G1(x, y) … Gl(x, y), each layer of the pyramid is 1/2 the size of the previous layer. In this embodiment the layer-L image GL(x, y) is used; to guarantee detection accuracy, the size of GL is not less than 128 × 128. The relation between the sizes of the layer-L image and the layer-0 image is GL = G0/2^L.
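The pyramid construction described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the function names (`pyramid_reduce`, `build_pyramid`) are the editor's, and a 5-tap binomial kernel is assumed as the 5 × 5 Gaussian convolution kernel w(m, n), with border clamping.

```python
def gaussian_kernel_1d():
    # 5-tap binomial approximation of a 1-D Gaussian; the separable
    # outer product gives the 5x5 kernel w(m, n). Taps sum to 1.
    return [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]

def pyramid_reduce(img):
    """One REDUCE step: 5x5 separable Gaussian smoothing plus 2x
    downsampling. img is a list of rows of grayscale values."""
    w = gaussian_kernel_1d()
    h, wd = len(img), len(img[0])
    out_h, out_w = h // 2, wd // 2
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            s = 0.0
            for m in range(-2, 3):
                for n in range(-2, 3):
                    # clamp indices at the image border
                    yy = min(max(2 * y + m, 0), h - 1)
                    xx = min(max(2 * x + n, 0), wd - 1)
                    s += w[m + 2] * w[n + 2] * img[yy][xx]
            out[y][x] = s
    return out

def build_pyramid(img, min_size=128):
    """Reduce repeatedly; stop before the next layer would drop
    below min_size on either axis (the text requires GL >= 128x128)."""
    levels = [img]
    while (len(levels[-1]) // 2 >= min_size
           and len(levels[-1][0]) // 2 >= min_size):
        levels.append(pyramid_reduce(levels[-1]))
    return levels
```

Each level is half the previous one per axis, so the layer-L image is 1/2^L the size of layer 0, matching the GL = G0/2^L relation in the text.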
Step S102 is then performed: performing edge detection on the Gaussian pyramid image to obtain an edge image.
In a specific implementation, edge detection may be performed on the Gaussian pyramid image using the Canny operator or an edge-focusing method to obtain the edge image.
Step S103 is performed next: extracting a first line and a second line from the edge image using the coarse-scale Hough transform.
Extracting the first line and the second line from the edge image using the coarse-scale Hough transform includes:
obtaining the collinear points in the edge image, the collinear points being able to form a plurality of Hough lines;
obtaining the normal equation of a Hough line, x cos θ + y sin θ = ρ, where x is the abscissa of a point on the line, y is the ordinate of that point, the first parameter ρ is the length of the perpendicular from the origin to the line, and the second parameter θ is the angle between that perpendicular and the positive x-axis;
varying the second parameter θ from its minimum to its maximum in steps of a predetermined amplitude, the minimum being -90 degrees and the maximum 90 degrees, to yield a plurality of distinct values of the second parameter;
obtaining a first target parameter and a second target parameter from the values of the second parameter;
obtaining the corresponding third target parameter and fourth target parameter from the first target parameter and the second target parameter, the third and fourth target parameters being distinct values of the first parameter;
taking the line determined by the first target parameter and the third target parameter as the first line, and taking the line determined by the second target parameter and the fourth target parameter as the second line.
In a specific implementation, the normal equation of the Hough line l is x cos θ + y sin θ = ρ. The geometric interpretation of the first parameter ρ and the second parameter θ is shown in Fig. 3: ρ is the perpendicular distance from the origin to the line, and θ is the angle between the perpendicular to line l and the positive x-axis. The scheme above transforms each edge point from the xy coordinate system into the ρθ coordinate system through the formula x cos θ + y sin θ = ρ; that is, a point in the xy coordinate system becomes a sinusoidal curve in the ρθ coordinate system. The ρθ parameter space is divided into accumulator cells, as shown in Fig. 4; the cell at coordinate (i, j) holds the accumulated value A(i, j), corresponding to the square associated with coordinates (ρi, θj) in the parameter space. The cells are first set to zero, i.e. A(i, j) = 0, and the parameter ranges are set: θmin ≤ θ < θmax, ρmin ≤ ρ < ρmax. Since the orientation of the white runway sidelines in the UAV image cannot be determined in advance, here θmin = -90°, θmax = 90°, ρmin = 0 and ρmax = D, where D is the maximal diagonal pixel distance of the Gaussian pyramid image. To increase computation speed, a coarse-scale Hough transform is used, i.e. the predetermined step Δθ along the θ direction of the ρθ parameter space is given a relatively large value: in the traditional Hough transform Δθ is small, typically Δθ = 0.1°, whereas in this embodiment Δθ = 2° is taken. The transform is applied to the non-background points (xk, yk) of the edge map E(x, y); these non-background points are the edge points obtained by edge detection, some of which lie on lines and some of which do not, and Hough line detection finds the collinear points among them.
Specifically, by varying the value of θ, θ is made equal to each allowed subdivision value on the θ axis, i.e. θj = θmin, θmin + Δθ, θmin + 2Δθ, … θmax, and the corresponding ρ is solved from the equation ρ = xk cos θ + yk sin θ. The resulting ρ value is then rounded to the nearest subdivision value ρi along the ρ axis, i.e. ρi ∈ (ρmin, ρmin + 1, ρmin + 2, … ρmax). For every pair (ρi, θj) the corresponding cell A(i, j) is located and incremented by 1. Among the accumulator cells, the (ρi, θj) corresponding to the cell A(i, j) with the maximal value are the parameters of the detected line, i.e. the first target parameter and the third target parameter; the slope-intercept equation of the line is then y = kx + b, where k = −cos θj/sin θj and b = ρi/sin θj. Further, since the process above always operates on the layer-L image GL(x, y) of the Gaussian pyramid, the obtained line parameters (ρi, θj) and the line parameters (ρ0, θ0) in the original image G0(x, y) satisfy the transformation θj = θ0, ρi = ρ0/2^L. Moreover, to improve the detection accuracy and robustness of the landing-runway sidelines, both runway sidelines are detected.
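The accumulator voting described above can be sketched as follows. This is an editor's sketch under the parameter ranges stated in the text (θ from -90° to 90° in steps of Δθ = 2°, ρ in integer bins from 0 to D); the function name `coarse_hough` and the simple peak selection are assumptions, not the patent's code.

```python
import math

def coarse_hough(points, diag, dtheta_deg=2.0):
    """Vote in a (rho, theta) accumulator with a coarse theta step
    (2 degrees here, versus ~0.1 degrees in a conventional Hough
    transform) and return the (rho, theta-in-degrees) of the peak cell."""
    thetas = [math.radians(-90 + i * dtheta_deg)
              for i in range(int(180 / dtheta_deg))]
    n_rho = diag + 1                 # integer rho subdivisions 0 .. D
    acc = [[0] * len(thetas) for _ in range(n_rho)]
    for (x, y) in points:
        for j, th in enumerate(thetas):
            rho = x * math.cos(th) + y * math.sin(th)
            i = int(round(rho))      # round to the nearest rho bin
            if 0 <= i < n_rho:
                acc[i][j] += 1
    # the cell with the maximal count gives the dominant line
    _, i, j = max((acc[i][j], i, j)
                  for i in range(n_rho) for j in range(len(thetas)))
    return i, math.degrees(thetas[j])
```

A coarse Δθ shrinks the accumulator by a factor of 20 relative to Δθ = 0.1°, which is the speed-up the text relies on; the lost angular precision is recovered later by the RANSAC refinement step.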
Step S104 is then performed: refining the first line and the second line into a third line and a fourth line using the selective-iteration random sample consensus algorithm.
Refining the first line and the second line into the third line and the fourth line using the selective-iteration random sample consensus algorithm includes:
selecting the data points within distance T of the first line as the iteration point set, where T = D*tan(Δθ/2), D is the maximal diagonal pixel distance of the Gaussian pyramid image, and Δθ is the predetermined amplitude;
selecting any 2 points from the iteration point set to determine a first preliminary line, and obtaining the first parameter ρ1 and the second parameter θ1 of the first preliminary line, until the difference between the parameter values (ρ1, θ1) of the first preliminary line and the parameter values (ρl, θl) of the first line lies within a preset range;
according to a preset threshold, taking the points of the iteration point set whose distance to the first preliminary line is within the threshold as the inliers of the first preliminary line, and counting the number of inliers;
using an iterative procedure, selecting the inlier set with the largest number of inliers according to the inlier counts;
fitting the inlier set by the least-squares method to obtain a first fitted line, which is taken as the third line; a second fitted line is obtained in the same way and taken as the fourth line.
In a specific implementation, taking the first fitted line as an example, the flow of obtaining the third line using the selective-iteration random sample consensus algorithm is as follows. First, the data points within distance T of the first line obtained by the coarse-scale Hough transform are selected as the RANSAC iteration point set, where T is computed from the predetermined amplitude and the maximal diagonal pixel distance of the Gaussian pyramid image as T = D*tan(Δθ/2). Then 2 points are selected at random from the iteration point set to determine line parameters (ρ1, θ1), and it is checked whether the differences between the parameters (ρ1, θ1) and the parameters (ρl, θl) of the sideline l lie within a preset range [δ1, δ2], where for example δ1 = Δθ and δ2 = 2 may be taken. If the requirement is met, the next step is carried out; otherwise 2 points are chosen again to determine a new line, until the requirement is met.
Specifically, a threshold t may be preset, e.g. in [1, 3]; the points of the iteration point set whose distance to the first preliminary line is within the threshold are then taken as the inliers of that line, and the number of inliers is counted. This is repeated for N iterations in total, where N can be determined adaptively, and the inlier set with the largest number of inliers is selected and marked. Finally the marked inlier set is fitted by the least-squares method to obtain the optimal sideline, i.e. the first fitted line, which is taken as the third line. The third line is thus a refinement of the first line obtained by the coarse-scale Hough transform, with higher precision. Similarly, the other sideline of the landing runway, i.e. the fourth line, can be obtained by the same method: first, the data points within distance T of the second line are taken as the iteration point set, where T = D*tan(Δθ/2), D is the maximal diagonal pixel distance of the Gaussian pyramid image, and Δθ is the predetermined amplitude; then any 2 points are selected from the iteration point set to determine a second preliminary line, and the first parameter ρ2 and the second parameter θ2 of the second preliminary line are obtained, until the difference between the parameter values (ρ2, θ2) of the second preliminary line and the parameter values (ρl2, θl2) of the second line lies within the preset range; according to the preset threshold, the points of the iteration point set near the second preliminary line are taken as the inliers of the second preliminary line and the number of inliers is counted; an iterative procedure then selects the inlier set with the largest number of inliers; finally the inlier set is fitted by the least-squares method to obtain a second fitted line, which is taken as the fourth line.
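The selective-iteration RANSAC refinement can be sketched as follows. This is an editor's sketch, not the patent's implementation: the function names are hypothetical, the "selective" acceptance window on (ρ, θ) and the fallback to all candidates when no hypothesis is accepted are assumptions, and the final fit is a total-least-squares line fit expressed in the same (ρ, θ) normal form used in the text.

```python
import math
import random

def fit_line_lsq(pts):
    """Total-least-squares line fit returned as (rho, theta), with
    x*cos(theta) + y*sin(theta) = rho and rho >= 0."""
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts)
    syy = sum((p[1] - my) ** 2 for p in pts)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
    # normal direction = perpendicular to the principal axis
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy) + math.pi / 2
    rho = mx * math.cos(theta) + my * math.sin(theta)
    if rho < 0:
        rho, theta = -rho, theta - math.pi
    return rho, theta

def selective_ransac(points, hough_rho, hough_theta, T,
                     t=2.0, iters=50, seed=0):
    """Refine a coarse Hough line: restrict candidates to points within
    T of the Hough line, sample 2-point hypotheses, keep only hypotheses
    near the Hough parameters (the selective step), take the largest
    inlier set (distance < t), and least-squares fit it."""
    rng = random.Random(seed)

    def dist(p, rho, th):
        return abs(p[0] * math.cos(th) + p[1] * math.sin(th) - rho)

    cand = [p for p in points if dist(p, hough_rho, hough_theta) <= T]
    best_inliers = []
    for _ in range(iters):
        a, b = rng.sample(cand, 2)
        # normal angle of the line through a and b
        th = math.atan2(b[0] - a[0], a[1] - b[1])
        rho = a[0] * math.cos(th) + a[1] * math.sin(th)
        if rho < 0:
            rho, th = -rho, th - math.pi
        # selective step: discard hypotheses far from the Hough estimate
        # (windows here: 4 degrees on theta, 2 pixels on rho -- assumed)
        if (abs(th - hough_theta) > math.radians(4)
                or abs(rho - hough_rho) > 2.0):
            continue
        inliers = [p for p in cand if dist(p, rho, th) < t]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    if not best_inliers:          # safety fallback, not in the text
        best_inliers = cand
    return fit_line_lsq(best_inliers)
```

Restricting both the candidate points and the accepted hypotheses to a neighborhood of the coarse Hough line is what makes the iteration "selective": far fewer samples are needed than in plain RANSAC, while outliers outside the band are never considered.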
Finally step S105 is performed: computing the landing parameters of the UAV from the third line and the fourth line, so that the UAV can land autonomously.
Specifically, computing the landing parameters of the UAV from the third line and the fourth line so that the UAV can land autonomously includes:
obtaining the actual first sideline and the actual second sideline of the UAV's landing runway from the third line and the fourth line;
obtaining the centerline of the landing runway from the actual first sideline and the actual second sideline;
obtaining the horizontal rotation angle and the translation distance of the UAV from the centerline of the landing runway, and taking the horizontal rotation angle and the translation distance as the landing parameters, so that the UAV can land autonomously.
In a specific implementation, because the third line and the fourth line obtained by the selective-iteration random sample consensus algorithm were computed on the image processed by the Gaussian pyramid acceleration principle, they must be scaled back to obtain the actual sidelines of the landing runway. Specifically, let the parameters of the third line l1 and the fourth line l2 be l1: (ρ1, θ1) and l2: (ρ2, θ2), and denote the actual first and second sidelines of the landing runway by l1r: (ρ1r, θ1r) and l2r: (ρ2r, θ2r). The correspondence is ρ1r = ρ1·2^L, ρ2r = ρ2·2^L, θ1r = θ1, θ2r = θ2, from which the actual first sideline and the actual second sideline of the landing runway are obtained.
Then the centerline of the landing runway is obtained from the actual first sideline and the actual second sideline. Let the parameters of the runway centerline l be θl, ρl; then θl = (θ1r + θ2r)/2 and ρl = (ρ1r + ρ2r)/2. From this centerline, the angle θ by which the UAV must rotate horizontally and the distance d by which it must translate in the air to reach the position directly above the runway can be obtained: θ = θl, and d = (ρl − (L/2)·cos α)·Ls/(ρ2r − ρ1r), where L is the diagonal pixel distance of the image, Ls is the actual width of the runway, and α is the angle between the image diagonal and the centerline normal parameter ρl, given by α = γ − θl, where γ = arctan(b/a) and a, b are respectively the pixel lengths of the long and short sides of the image.
Referring to Fig. 5, the rectangular frame shown in Fig. 5 is the field of view when the camera on the UAV shoots vertically downward (i.e. the aerial image captured by the UAV). The image is determined by the UAV's camera; for example, if the camera resolution is 1920x1080, the length and width of the frame, i.e. the size of the image, are a = 1920 pixels and b = 1080 pixels respectively. The angle between the image diagonal (the dotted line in Fig. 5) and the long side a of the image is γ = arctan(b/a). The UAV is located at the center O of the image, heading perpendicular to the long side a. The angle between the UAV's flight direction and the runway direction is the angle by which the UAV must rotate, and this angle is exactly equal to the centerline parameter θl. The distance d1 between the UAV and the runway after rotation is the image distance to be translated; multiplying it by a scale factor between actual distance and image distance gives the actual distance d to be offset. The scale factor equals the actual width of the runway divided by its pixel width, i.e. Ls/(ρ2r − ρ1r), where Ls is the actual width of the runway and (ρ2r − ρ1r) is its pixel width. There are two cases for θl, θl greater than zero and θl less than zero, and the formula is the same for both: when the slope of the runway centerline in the image is positive, θl > 0; when it is negative, θl < 0. Fig. 5 represents the case θl > 0.
For example, if θ is positive, the UAV rotates clockwise by θ degrees; if θ is negative, the UAV rotates counterclockwise by θ degrees. If d is positive, the UAV flies d meters to the left after rotating; if d is negative, it flies d meters to the right. At this point the UAV has been adjusted to be directly above the runway centerline.
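The landing-parameter computation above can be sketched as follows. This is an editor's sketch under stated assumptions: the function name is hypothetical, the two sidelines are given in the actual-image (ρ in pixels, θ in degrees) normal form, and the pixel distance from the image center to the centerline is written in the direct point-to-line form ρl − (a/2)cos θl − (b/2)sin θl, which is geometrically equivalent to the (L/2)·cos α construction in the text since the center lies halfway along the diagonal.

```python
import math

def landing_parameters(sideline1, sideline2, a, b, runway_width_m):
    """Compute (rotation angle in degrees, lateral offset in meters)
    from the two actual runway sidelines. a, b are the image width and
    height in pixels; the UAV sits at the image center (a/2, b/2)."""
    rho1, th1 = sideline1
    rho2, th2 = sideline2
    # runway centerline: average of the two sideline parameters
    th_l = (th1 + th2) / 2.0
    rho_l = (rho1 + rho2) / 2.0
    theta = th_l                        # heading correction, degrees
    # signed pixel distance from the image center to the centerline
    t = math.radians(th_l)
    d1 = rho_l - (a / 2.0) * math.cos(t) - (b / 2.0) * math.sin(t)
    # meters-per-pixel scale from the known runway width Ls/(rho2r-rho1r)
    scale = runway_width_m / abs(rho2 - rho1)
    return theta, d1 * scale
```

With vertical sidelines at ρ = 500 and ρ = 700 pixels (θ = 0) in a 1920x1080 image and a 50 m runway, the centerline sits at ρ = 600, the center is 360 px away, and the offset is 360 · 50/200 = 90 m, with the sign indicating the direction per the convention above.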
Embodiment two
Based on the same inventive concept as embodiment one, embodiment two of the present invention provides a monocular-vision-based UAV autonomous landing system. Referring to Fig. 2, the system includes:
a first obtaining module 201, configured to obtain a Gaussian pyramid image from a pre-acquired aerial image;
a second obtaining module 202, configured to perform edge detection on the Gaussian pyramid image to obtain an edge image;
a third obtaining module 203, configured to extract a first line and a second line from the edge image using a coarse-scale Hough transform;
a fourth obtaining module 204, configured to refine the first line and the second line into a third line and a fourth line using a selective-iteration random sample consensus algorithm;
a fifth obtaining module 205, configured to compute the landing parameters of the UAV from the third line and the fourth line, so that the UAV can land autonomously.
In the system provided by the embodiment of the present invention, the first obtaining module 201 is further configured to:
Gray processing processing is carried out to the Aerial Images, the first image is obtained;
Using described first image as gaussian pyramid the 0th tomographic image;
The 0th tomographic image is checked using Gaussian convolution to be handled, and the 1st tomographic image is obtained, wherein the Gaussian convolution The size of core is 5*5;
Handled using the Gaussian convolution and to the 1st tomographic image, obtain L tomographic images, wherein, the L The size of image is the 1/2 of the 0th tomographic image sizeLTimes, the gaussian pyramid image is used as using the L tomographic images.
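As a minimal sketch of the pyramid construction just described (pure Python, using one-dimensional rows and a 5-tap binomial kernel for brevity; a faithful implementation would apply the 5×5 kernel to a 2-D image, e.g. via OpenCV's pyrDown):

```python
def gaussian_blur_1d(row):
    """Blur one image row with the 5-tap binomial kernel [1,4,6,4,1]/16,
    a 1-D stand-in for the patent's 5x5 Gaussian convolution kernel."""
    k = [1, 4, 6, 4, 1]
    n = len(row)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - 2, 0), n - 1)  # replicate the border pixels
            acc += w * row[idx]
        out.append(acc / 16.0)
    return out

def pyramid_layer(row, L):
    """Layer 0 is the grayscale input; each step blurs, then keeps every
    second sample, so layer L is about 1/2**L the size of layer 0."""
    for _ in range(L):
        row = gaussian_blur_1d(row)[::2]
    return row

layer0 = [float(v % 256) for v in range(1920)]  # one 1920-pixel image row
layer2 = pyramid_layer(layer0, 2)               # 1920 -> 960 -> 480 samples
```

Running the Hough transform on the much smaller layer-L image is what buys the real-time speedup claimed for the method.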
In the system provided by this embodiment of the present invention, the third acquisition module 203 is further configured to:
Obtain the line points in the edge image, wherein the line points may form a plurality of Hough lines;
Obtain the Hough line normal equation x·cosθ + y·sinθ = ρ, wherein x is the abscissa of a point on the line, y is the ordinate of that point, the first parameter ρ corresponds to the perpendicular distance from the origin to the line, and the second parameter θ corresponds to the angle between that perpendicular and the positive x-axis;
Vary the second parameter θ from a minimum value of −90 degrees to a maximum value of 90 degrees in steps of a predetermined amplitude, yielding a plurality of different values of the second parameter;
Obtain a first target parameter and a second target parameter from the values of the second parameter;
Obtain a corresponding third target parameter and fourth target parameter from the first target parameter and the second target parameter, wherein the third target parameter and the fourth target parameter are first parameters with different values;
Use the line determined by the first target parameter and the third target parameter as the first line, and the line determined by the second target parameter and the fourth target parameter as the second line.
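The parameter sweep above amounts to a standard Hough accumulator over (θ, ρ), with θ stepped at the coarse predetermined amplitude. A minimal pure-Python sketch (the 5-degree step and one-pixel ρ quantization are illustrative assumptions, not values fixed by the patent):

```python
import math

def hough_votes(points, step_deg=5.0):
    """Accumulate votes in (theta, rho) space using the normal equation
    x*cos(theta) + y*sin(theta) = rho, with theta swept from -90 to 90
    degrees at the coarse step; rho is quantized to whole pixels."""
    votes = {}
    theta_deg = -90.0
    while theta_deg <= 90.0:
        t = math.radians(theta_deg)
        for x, y in points:
            rho = round(x * math.cos(t) + y * math.sin(t))
            votes[(theta_deg, rho)] = votes.get((theta_deg, rho), 0) + 1
        theta_deg += step_deg
    return votes

# Ten edge points on the vertical line x = 3 (normal angle 0 deg, rho = 3 px).
votes = hough_votes([(3, y) for y in range(10)])
best = max(votes, key=votes.get)  # the peak accumulator cell identifies the line
```

The two strongest peaks of such an accumulator play the role of the first and second lines; because the coarse step keeps the accumulator small, this stage is fast but only approximately localized, which is why the RANSAC refinement below follows.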
In the system provided by this embodiment of the present invention, the fourth acquisition module 204 is further configured to:
Select the data points within distance T of the first line as the iteration point set, wherein T = D·tan(Δθ/2), D is the maximum pixel distance along the diagonal of the Gaussian pyramid image, and Δθ is the predetermined amplitude;
From the iteration point set, select any two points to determine a first preliminary line, and obtain the first parameter ρ1 and the second parameter θ1 of the first preliminary line, until the difference between the parameter values (ρ1, θ1) of the first preliminary line and the parameter values (ρl, θl) of the first line is within a preset range;
According to a preset threshold, take the points of the iteration point set within distance T of the first preliminary line as the inliers of the first preliminary line, and count the number of inliers;
Using the iterative method, choose the inlier set with the largest number of inliers according to the inlier counts;
Fit the inlier set using the least squares method to obtain a first fitted line, use the first fitted line as the third line, obtain a second fitted line by the same method, and use the second fitted line as the fourth line.
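A compact sketch of the inlier-selection-and-fit step (a generic two-point RANSAC line fit in pure Python; the gating against the coarse Hough parameters is omitted, and the threshold, iteration count, and test data are illustrative assumptions):

```python
import math
import random

def line_through(p, q):
    """(theta, rho) of the line through two points, in the normal form
    x*cos(theta) + y*sin(theta) = rho."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    theta = math.atan2(dx, -dy)  # unit normal is (-dy, dx) normalized
    rho = p[0] * math.cos(theta) + p[1] * math.sin(theta)
    return theta, rho

def point_line_dist(pt, theta, rho):
    return abs(pt[0] * math.cos(theta) + pt[1] * math.sin(theta) - rho)

def ransac_line(points, T=1.0, iters=200, seed=0):
    """Repeatedly pick two points, count the inliers within distance T,
    keep the largest inlier set, then least-squares-fit that set."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        p, q = rng.sample(points, 2)
        if p == q:
            continue
        theta, rho = line_through(p, q)
        inliers = [pt for pt in points if point_line_dist(pt, theta, rho) <= T]
        if len(inliers) > len(best):
            best = inliers
    # least-squares fit y = a*x + b on the winning inlier set
    n = len(best)
    sx = sum(x for x, _ in best)
    sy = sum(y for _, y in best)
    sxx = sum(x * x for x, _ in best)
    sxy = sum(x * y for x, y in best)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b, best

# Points near y = 2x + 1 plus two gross outliers that must be rejected.
pts = [(x, 2 * x + 1) for x in range(10)] + [(0, 50), (9, -40)]
a, b, inliers = ransac_line(pts)
```

The selective-iteration variant in the patent differs in that the candidate pairs are drawn only from points already near the coarse Hough line and the sampling stops once the preliminary parameters agree with (ρl, θl), which cuts the number of iterations sharply.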
In the system provided by this embodiment of the present invention, the fifth acquisition module 205 is further configured to:
Obtain the actual first edge line and the actual second edge line of the UAV's landing runway from the third line and the fourth line;
Obtain the centerline of the landing runway from the actual first edge line and the actual second edge line;
Obtain the horizontal rotation angle and translation distance of the UAV from the centerline of the landing runway, and use the horizontal rotation angle and translation distance as the landing parameters, so that the UAV performs autonomous landing.
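In the (θ, ρ) normal form used throughout, the runway centerline of two near-parallel edge lines can be taken as the average of their parameters. This averaging rule is an illustrative assumption on my part rather than a formula stated verbatim in the patent:

```python
def runway_centerline(theta1, rho1, theta2, rho2):
    """Centerline of two near-parallel edge lines in (theta, rho) form:
    average the normal angles and the perpendicular distances.
    (Assumed construction; valid only when theta1 is close to theta2.)"""
    theta_c = (theta1 + theta2) / 2.0
    rho_c = (rho1 + rho2) / 2.0
    return theta_c, rho_c

# Two parallel edges at theta = 0.1 rad with rho = 400 px and 600 px:
# the centerline sits halfway between them at rho = 500 px.
theta_c, rho_c = runway_centerline(0.1, 400.0, 0.1, 600.0)
```

The centerline's θ then supplies the horizontal rotation angle and its ρ offset from the image center supplies the image-space translation that the scale factor converts to meters.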
The variations and specific examples of the monocular-vision-based UAV autonomous landing method in Embodiment one apply equally to the monocular-vision-based UAV autonomous landing system of this embodiment. From the foregoing detailed description of the method, those skilled in the art can clearly understand the system of this embodiment; therefore, for brevity of the specification, it is not described in detail here.
Embodiment three
Based on the same inventive concept as Embodiment one, Embodiment three of the present invention provides an electronic device. Referring to Fig. 6, the device comprises a memory 301, a processor 302, and a computer program stored on the memory 301 and runnable on the processor 302; the processor 302 implements the following steps when executing the program:
Obtaining a Gaussian pyramid image from a pre-acquired aerial image;
Performing edge detection on the Gaussian pyramid image to obtain an edge image;
Obtaining a first line and a second line from the edge image using a coarse-scale Hough transform method;
Obtaining a third line and a fourth line from the first line and the second line using a selective-iteration random sample consensus (RANSAC) algorithm;
Obtaining the landing parameters of the UAV from the third line and the fourth line, so that the UAV performs autonomous landing.
For convenience of description, Fig. 6 shows only the parts related to this embodiment of the present invention; for technical details not disclosed here, refer to the method portion of the present invention. The memory 301 may be used to store software programs and modules, and the processor 302 performs the various functional applications and data processing of the terminal by running the software programs and modules stored in the memory 301.
The memory 301 may mainly comprise a program storage area and a data storage area, wherein the program storage area may store the operating system and the application programs required for at least one function, and the data storage area may store data created according to the use of the terminal. The processor 302 is the control center of the mobile communication terminal; it connects the parts of the whole terminal through various interfaces and lines, and performs the various functions and data processing of the terminal by running or executing the software programs and/or modules stored in the memory 301 and calling the data stored in the memory 301, thereby monitoring the terminal as a whole. Optionally, the processor 302 may comprise one or more processing units.
The variations and specific examples of the monocular-vision-based UAV autonomous landing method in Embodiment one apply equally to the electronic device of this embodiment. From the foregoing detailed description of the method, those skilled in the art can clearly understand the electronic device of this embodiment; therefore, for brevity of the specification, it is not described in detail here.
The one or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages:
The monocular-vision-based UAV autonomous landing method disclosed by the present invention first obtains a Gaussian pyramid image from a pre-acquired aerial image and performs edge detection on the Gaussian pyramid image to obtain an edge image; it then obtains a first line and a second line from the edge image using a coarse-scale Hough transform method, and obtains a third line and a fourth line from the first line and the second line using a selective-iteration random sample consensus (RANSAC) algorithm; finally, it obtains the landing parameters of the UAV from the third line and the fourth line, so that the UAV performs autonomous landing. In this method, building the Gaussian pyramid image from the pre-acquired aerial image accelerates processing and improves the UAV's real-time performance; the coarse-scale Hough transform speeds up line extraction from the edge image; and the selective-iteration RANSAC refinement of the Hough lines improves the positioning precision of the lines, yielding accurate landing parameters for autonomous landing and thereby solving the technical problem of low positioning precision in prior-art UAV landing methods.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, can make further changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. Thus, if these modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.

Claims (10)

1. A method for monocular-vision-based UAV autonomous landing, characterized in that the method comprises:
obtaining a Gaussian pyramid image from a pre-acquired aerial image;
performing edge detection on the Gaussian pyramid image to obtain an edge image;
obtaining a first line and a second line from the edge image using a coarse-scale Hough transform method;
obtaining a third line and a fourth line from the first line and the second line using a selective-iteration random sample consensus (RANSAC) algorithm;
obtaining landing parameters of the UAV from the third line and the fourth line, so that the UAV performs autonomous landing.
2. The method of claim 1, characterized in that obtaining the Gaussian pyramid image from the pre-acquired aerial image comprises:
performing grayscale conversion on the aerial image to obtain a first image;
using the first image as the 0th layer of the Gaussian pyramid;
processing the 0th-layer image with a Gaussian convolution kernel to obtain the 1st-layer image, wherein the size of the Gaussian convolution kernel is 5×5;
continuing the Gaussian convolution processing down to the Lth-layer image, wherein the size of the Lth-layer image is 1/2^L times the size of the 0th-layer image, and using the Lth-layer image as the Gaussian pyramid image.
3. The method of claim 1, characterized in that obtaining the first line and the second line from the edge image using the coarse-scale Hough transform method comprises:
obtaining the line points in the edge image, wherein the line points may form a plurality of Hough lines;
obtaining the Hough line normal equation x·cosθ + y·sinθ = ρ, wherein x is the abscissa of a point on the line, y is the ordinate of that point, the first parameter ρ corresponds to the perpendicular distance from the origin to the line, and the second parameter θ corresponds to the angle between that perpendicular and the positive x-axis;
varying the second parameter θ from a minimum value of −90 degrees to a maximum value of 90 degrees in steps of a predetermined amplitude, yielding a plurality of different values of the second parameter;
obtaining a first target parameter and a second target parameter from the values of the second parameter;
obtaining a corresponding third target parameter and fourth target parameter from the first target parameter and the second target parameter, wherein the third target parameter and the fourth target parameter are first parameters with different values;
using the line determined by the first target parameter and the third target parameter as the first line, and the line determined by the second target parameter and the fourth target parameter as the second line.
4. The method of claim 3, characterized in that obtaining the third line and the fourth line from the first line and the second line using the selective-iteration random sample consensus algorithm comprises:
selecting the data points within distance T of the first line as the iteration point set, wherein
T = D·tan(Δθ/2), D is the maximum pixel distance along the diagonal of the Gaussian pyramid image, and Δθ is the predetermined amplitude;
selecting any two points from the iteration point set to determine a first preliminary line, and obtaining the first parameter ρ1 and the second parameter θ1 of the first preliminary line, until the difference between the parameter values (ρ1, θ1) of the first preliminary line and the parameter values (ρl, θl) of the first line is within a preset range;
according to a preset threshold, taking the points of the iteration point set within distance T of the first preliminary line as the inliers of the first preliminary line, and counting the number of inliers;
using the iterative method, choosing the inlier set with the largest number of inliers according to the inlier counts;
fitting the inlier set using the least squares method to obtain a first fitted line, using the first fitted line as the third line, obtaining a second fitted line by the same method, and using the second fitted line as the fourth line.
5. The method of claim 1, characterized in that obtaining the landing parameters of the UAV from the third line and the fourth line so that the UAV performs autonomous landing comprises:
obtaining the actual first edge line and the actual second edge line of the UAV's landing runway from the third line and the fourth line;
obtaining the centerline of the landing runway from the actual first edge line and the actual second edge line;
obtaining the horizontal rotation angle and translation distance of the UAV from the centerline of the landing runway, and using the horizontal rotation angle and translation distance as the landing parameters, so that the UAV performs autonomous landing.
6. A system for monocular-vision-based UAV autonomous landing, characterized in that the system comprises:
a first acquisition module, configured to obtain a Gaussian pyramid image from a pre-acquired aerial image;
a second acquisition module, configured to perform edge detection on the Gaussian pyramid image to obtain an edge image;
a third acquisition module, configured to obtain a first line and a second line from the edge image using a coarse-scale Hough transform method;
a fourth acquisition module, configured to obtain a third line and a fourth line from the first line and the second line using a selective-iteration random sample consensus (RANSAC) algorithm;
a fifth acquisition module, configured to obtain the landing parameters of the UAV from the third line and the fourth line, so that the UAV performs autonomous landing.
7. The system of claim 6, characterized in that the first acquisition module is further configured to:
perform grayscale conversion on the aerial image to obtain a first image;
use the first image as the 0th layer of the Gaussian pyramid;
process the 0th-layer image with a Gaussian convolution kernel to obtain the 1st-layer image, wherein the size of the Gaussian convolution kernel is 5×5;
continue the Gaussian convolution processing down to the Lth-layer image, wherein the size of the Lth-layer image is 1/2^L times the size of the 0th-layer image, and use the Lth-layer image as the Gaussian pyramid image.
8. The system of claim 6, characterized in that the third acquisition module is further configured to:
obtain the line points in the edge image, wherein the line points may form a plurality of Hough lines;
obtain the Hough line normal equation x·cosθ + y·sinθ = ρ, wherein x is the abscissa of a point on the line, y is the ordinate of that point, the first parameter ρ corresponds to the perpendicular distance from the origin to the line, and the second parameter θ corresponds to the angle between that perpendicular and the positive x-axis;
vary the second parameter θ from a minimum value of −90 degrees to a maximum value of 90 degrees in steps of a predetermined amplitude, yielding a plurality of different values of the second parameter;
obtain a first target parameter and a second target parameter from the values of the second parameter;
obtain a corresponding third target parameter and fourth target parameter from the first target parameter and the second target parameter, wherein the third target parameter and the fourth target parameter are first parameters with different values;
use the line determined by the first target parameter and the third target parameter as the first line, and the line determined by the second target parameter and the fourth target parameter as the second line.
9. The system of claim 8, characterized in that the fourth acquisition module is further configured to:
select the data points within distance T of the first line as the iteration point set, wherein T = D·tan(Δθ/2), D is the maximum pixel distance along the diagonal of the Gaussian pyramid image, and Δθ is the predetermined amplitude;
select any two points from the iteration point set to determine a first preliminary line, and obtain the first parameter ρ1 and the second parameter θ1 of the first preliminary line, until the difference between the parameter values (ρ1, θ1) of the first preliminary line and the parameter values (ρl, θl) of the first line is within a preset range;
according to a preset threshold, take the points of the iteration point set within distance T of the first preliminary line as the inliers of the first preliminary line, and count the number of inliers;
using the iterative method, choose the inlier set with the largest number of inliers according to the inlier counts;
fit the inlier set using the least squares method to obtain a first fitted line, use the first fitted line as the third line, obtain a second fitted line by the same method, and use the second fitted line as the fourth line.
10. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, characterized in that the processor implements the following steps when executing the program:
obtaining a Gaussian pyramid image from a pre-acquired aerial image;
performing edge detection on the Gaussian pyramid image to obtain an edge image;
obtaining a first line and a second line from the edge image using a coarse-scale Hough transform method;
obtaining a third line and a fourth line from the first line and the second line using a selective-iteration random sample consensus (RANSAC) algorithm;
obtaining the landing parameters of the UAV from the third line and the fourth line, so that the UAV performs autonomous landing.
CN201710367835.4A, filed 2017-05-23, priority date 2017-05-23: Method, system and electronic device for monocular-vision-based UAV autonomous landing. Published as CN107194941A (pending).


Publications (1)

Publication Number: CN107194941A; Publication Date: 2017-09-22








Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2017-09-22)