CN104766337A - Aircraft landing vision enhancement method based on runway boundary enhancement - Google Patents


Info

Publication number
CN104766337A
Authority
CN
China
Prior art keywords
runway
boundary
straight line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510205841.0A
Other languages
Chinese (zh)
Other versions
CN104766337B (en)
Inventor
李晖晖 (Li Huihui)
燕攀登 (Yan Pandeng)
郭雷 (Guo Lei)
胡秀华 (Hu Xiuhua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201510205841.0A priority Critical patent/CN104766337B/en
Publication of CN104766337A publication Critical patent/CN104766337A/en
Application granted granted Critical
Publication of CN104766337B publication Critical patent/CN104766337B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an aircraft landing vision enhancement method based on runway boundary enhancement. The method comprises the following steps: straight-line features are extracted from the first frame of a forward-looking infrared video with the line segment detection (LSD) algorithm, and the line segments belonging to the runway boundaries are screened out according to the intrinsic constraint conditions of those boundaries; two points are selected randomly on each of the two runway boundary segments, and rectangular sampling windows are centered on these points; the gradient distribution features of the sampling windows are extracted, and the parameters of a target classifier are initialized; in the subsequent video frames all sampling points are tracked and located, the runway boundaries are fitted from the tracking results of the sampling points, and the runway area and the runway boundaries are thereby determined; finally, the runway boundaries are enhanced to improve the visual perception of the pilot. The method makes full use of the inter-frame information of the aircraft's forward-looking infrared landing video, tracks and recognizes the airport runway boundaries with a target-tracking approach, and greatly improves the time performance of the vision enhancement algorithm while preserving the recognition accuracy of the runway boundaries.

Description

Airplane landing vision enhancement method based on runway boundary enhancement
Technical Field
The invention belongs to the technical field of computer vision and image processing, relates to an airplane landing vision enhancement method based on runway boundary enhancement, and can be widely applied in fields such as pilot enhanced vision systems (EVS) and vehicle vision navigation.
Background
During landing, weather is one of the main factors preventing a pilot from landing normally: in severe weather such as fog, rain, snow and sand, the visibility of the runway and of the indication signals around it is poor, so the visual information about the runway and its surroundings available to the pilot is insufficient for a normal landing. In addition, night landings suffer from dim light and low visibility. Improving the visibility of the airport runway environment and enhancing the pilot's visual perception therefore has important practical significance.
Pilot vision enhancement uses various sensors and advanced technologies to improve what the pilot can see under severe weather and dim-light conditions. The conventional "vision system" concept was proposed to solve this kind of problem. Its basic idea is to use a forward-looking detection sensor to obtain high-resolution images of the airport runway and its surroundings in real time and, through appropriate information and image processing and fusion, to form a real-scene image that a pilot can easily understand, so that the pilot can see the runway clearly through cloud, fog and other severe weather and operate the aircraft correctly through approach and landing. A vision system meeting these requirements can be realized by either synthetic vision or enhanced vision. The physical imaging characteristics of the thermal infrared band and the visible band are completely different. Under suitable illumination, visible-light imaging gives relatively high contrast and much ground detail, but in severe weather or at night the imaging result deteriorates greatly and ground targets become hard to distinguish and identify. Infrared imaging obtains detail from the thermal radiation of objects, so it is only slightly affected by climate and illumination, and a target of interest is often bright and easy to distinguish in the image.
Much research has been done on the identification and positioning of airport runways, but most of it applies some new theory or mathematical tool, with little attention to specific application requirements and practically effective methods. These methods share some inherent drawbacks. First, previous research is mostly directed at one or a few static images, whereas the landing process produces a continuous video stream, which carries much more information in the time dimension than static images. If a static-image airport detection method is applied frame by frame, the information in the time dimension (such as inter-frame correlation) cannot be exploited to guide identification and positioning; moreover, detecting the runway in isolation in every frame is computationally huge and slow. Furthermore, pilot vision enhancement requires good real-time performance from the processing algorithm, which imposes strict requirements on computational efficiency and storage space while still guaranteeing the algorithm's effect.
Disclosure of Invention
Technical problem to be solved
In order to avoid the defects of the prior art, the invention provides an airplane landing vision enhancement method based on runway boundary enhancement, which is used for positioning and tracking a runway in a forward-looking image of pilot landing vision enhancement.
Technical scheme
An airplane landing visual enhancement method based on runway boundary enhancement is characterized by comprising the following steps:
step 1, detecting the runway boundary in the first frame video image: perform noise-reduction preprocessing on the first frame, then process it with the LSD (Line Segment Detector) algorithm to obtain the set of all line segments L = {l1, l2, l3, ...}; using the slope ki, midpoint position (xim, yim) and length si of each segment li in a constraint function linei = fi(ki, (xim, yim), si), screen the segments to obtain the two boundary straight lines of the runway in the first frame image, linei1, linei2;
Step 2, selecting tracking points on the runway boundary lines: the four sampling points (x1, y1), (x2, y2) and (x3, y3), (x4, y4) are selected randomly within the intervals determined by the following rules:
(x1, y1): (2/5)Lxi1 < |x1 - xi1| < (1/2)Lxi1 ; (2/5)Lyi1 < |y1 - yi1| < (1/2)Lyi1
(x2, y2): (1/4)Lxi1 < |x2 - xi1| < (1/3)Lxi1 ; (1/4)Lyi1 < |y2 - yi1| < (1/3)Lyi1
(x3, y3): (1/4)Lxi2 < |x3 - xi2| < (1/3)Lxi2 ; (1/4)Lyi2 < |y3 - yi2| < (1/3)Lyi2
(x4, y4): (2/5)Lxi2 < |x4 - xi2| < (1/2)Lxi2 ; (2/5)Lyi2 < |y4 - yi2| < (1/2)Lyi2

Lxi1 = |x0 - xi1|, Lyi1 = |y0 - yi1| ; Lxi2 = |x0 - xi2|, Lyi2 = |y0 - yi2|
wherein: (x0, y0) is the intersection point of the two runway boundaries; Li1, Li2 are the lengths of the two boundary lines linei1, linei2 respectively; the intersection points of the lower ends of the runway boundaries with the lower image border are (xi1, yi1), (xi2, yi2); Lxi1, Lxi2 and Lyi1, Lyi2 are the differences between the horizontal and vertical coordinates of the endpoints of the two runway boundary segments;
the matrix of the two runway boundary line equations is represented as:
Y=KX+B
wherein: k is the slope matrix and B is the intercept matrix. From a matrix of sampling points x1 x2 x3 x4]T,[y1 y2 y3 y4]TDetermining two runway boundary straight lines;
step 3, tracking the runway sampling points in the next frame: set a sampling window Zi, i = 1, 2, 3, 4, for each of the 4 sampling points obtained in step 2;
Extract straight lines in each sampling window with the LSD (Line Segment Detector) algorithm, then screen the detected line set with the following formula to obtain the accurate target line lti inside the sampling window:
lti = { l | √[(xi + Wi/2 - X̄)² + (yi + Hi/2 - Ȳ)²] < γi ,
            √[(xleft - xright)² + (yleft - yright)²] > λi ,
            |(yleft - yright)/(xleft - xright) - k(t-1)i| < ηi }
wherein (xi, yi) are the coordinates of the top-left vertex of the current rectangular sampling window; Wi, Hi are the width and height of the current rectangular sampling window; (X̄, Ȳ) are the midpoint coordinates of a line segment in the candidate set; γi is the threshold on the distance between a candidate segment and the center of the rectangular sampling window, a constraint on the local optimum within the window; λi is the length threshold for candidate lines; (xleft, yleft), (xright, yright) are the coordinates of the two endpoints of a candidate segment; k(t-1)i is the slope of the line on which the sampling point lay in the previous frame; ηi is the threshold on the slope difference between the two frames, a further constraint using the global optimum of the line;
then extract the midpoint of the target line segment as the tracking point of the runway boundary in this frame image:
(xt, yt) = ((xleft + xright)/2, (yleft + yright)/2)

wherein: (xt, yt) is the final tracking result of the sampling point in the rectangular sampling window;
the rectangular sampling window Zi has height Hi and width Wi, related by Wi = (1 + θi)Hi/|ki|, wherein: θi is a proportional margin, and ki is the slope of the line on which the sampling point of that window lies;
the tracking results for the 4 sampling points of this frame are therefore: (x1, y1), (x2, y2), (x3, y3), (x4, y4);
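A minimal Python sketch of this screening step, assuming the window geometry above (Wi = (1 + θi)Hi/|ki|) and taking a plain list of endpoint pairs in place of a line detector's output; all names are hypothetical:

```python
import math

def screen_candidates(cands, win_xy, H_i, theta_i, k_prev,
                      gamma_i, lambda_i, eta_i):
    """Apply the three screening constraints to candidate segments found
    inside one sampling window.  `cands` holds ((xl, yl), (xr, yr))
    endpoint pairs (vertical candidates would need a guard)."""
    x_i, y_i = win_xy                        # top-left vertex of the window
    W_i = (1 + theta_i) * H_i / abs(k_prev)  # window width from the slope
    cx, cy = x_i + W_i / 2, y_i + H_i / 2    # window center
    kept = []
    for (xl, yl), (xr, yr) in cands:
        mx, my = (xl + xr) / 2, (yl + yr) / 2
        near = math.hypot(mx - cx, my - cy) < gamma_i           # local constraint
        long_enough = math.hypot(xl - xr, yl - yr) > lambda_i   # length constraint
        slope_ok = abs((yl - yr) / (xl - xr) - k_prev) < eta_i  # global constraint
        if near and long_enough and slope_ok:
            kept.append(((xl, yl), (xr, yr)))
    return kept
```

The midpoint of a kept segment then becomes the frame's tracking point, as in the formula above.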
Step 4, fitting a boundary straight line of the runway: establishing a boundary straight line equation according to the tracking results of the 4 sampling points:
k1 = (y2 - y1)/(x2 - x1),  k2 = (y4 - y3)/(x4 - x3),  b1 = y1 - k1x1,  b2 = y3 - k2x3

l1: y = k1x + b1 ;  l2: y = k2x + b2
l1 and l2 are the resulting boundary line equations; the intersection point of l1 and l2, together with their intersections with the image border, determines the runway area (ROI);
wherein: k1, k2 are the slopes of the two boundary lines; b1, b2 are their intercepts on the y axis;
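The two-point fits and the intersection used for the runway area can be sketched as follows (a hypothetical helper; it also returns the intersection of l1 and l2, i.e. the vanishing point):

```python
def fit_boundaries(p1, p2, p3, p4):
    """Fit the two boundary lines from the four tracked points and
    return their (slope, intercept) pairs plus the mutual intersection."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    k1 = (y2 - y1) / (x2 - x1)       # left boundary slope
    k2 = (y4 - y3) / (x4 - x3)       # right boundary slope
    b1 = y1 - k1 * x1                # intercepts on the y axis
    b2 = y3 - k2 * x3
    # intersection of l1: y = k1*x + b1 and l2: y = k2*x + b2
    x0 = (b2 - b1) / (k1 - k2)
    y0 = k1 * x0 + b1
    return (k1, b1), (k2, b2), (x0, y0)
```

Together with the image border, the returned lines bound the ROI of the step above.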
step 5, runway boundary enhancement: mark the two runway boundary lines l1 and l2 obtained in step 4 on the current frame image to enhance the runway boundary of the original image;
step 6: and (5) repeating the steps 2 to 5 aiming at the next frame of video image until the flying landing.
The initial height Hi of the rectangular sampling windows on the two boundary lines is 48, and the proportional margin θi is 1.5; the position-difference threshold γi between a candidate segment and the sampling-window center, the length threshold λi of candidate lines, and the inter-frame slope-difference threshold ηi are set correspondingly.
The parameter values in the constraint function are chosen such that the deviation threshold from the center position of the rectangular sampling window is 8 and the length selection parameter is si < 10.
Advantageous effects
The invention provides an airplane landing vision enhancement method based on runway boundary enhancement. Simulation experiments show that the algorithm can track the runway effectively and in real time in the aircraft's forward-looking infrared landing video, and can effectively enhance the pilot's visual perception so that the aircraft can be brought in to land smoothly.
The invention has the following advantages. First: applying target-tracking theory to identify and mark the airport runway markedly improves the real-time performance of runway-area identification and the robustness of the algorithm under interference (such as occlusion or loss). Second: when tracking the runway boundary sampling points, local and global constraints are combined, which markedly reduces the runway positioning deviation caused by a tracking error at any single sampling point. Third: the runway boundary and runway area are enhanced simply and effectively into relatively salient regions of the image that are easy for the pilot to recognize. The algorithm can therefore effectively enhance the pilot's visual perception when landing in the low visibility caused by severe weather.
Drawings
FIG. 1: a flow chart of the method of the invention;
FIG. 2: target video image selected for the simulation experiment and a series of processing results:
(a) A video first frame image; (b) a runway area schematic diagram; (c) the LSD algorithm is used for preliminarily detecting a straight line; (d) determining a runway boundary straight line after screening; (e) sampling points and rectangular sampling windows are selected on the boundary of the runway; (f) a runway boundary is fitted by a sampling point of a first frame; (g) fitting a runway boundary straight line in real time by sampling points in a video frame; (h) tracking and calibrating effect graphs of runway areas;
FIG. 3: schematic diagram of the position of the selected four sampling points.
Detailed Description
The invention will now be further described with reference to the following examples and drawings:
the method of the invention is characterized by comprising the following steps:
step 1, runway boundary detection in the first frame video image: runway identification mainly exploits the salient boundary features of the airport runway in the infrared image, and its core is an accurate boundary detection algorithm. The runway is located precisely using constraints such as boundary length, boundary saliency, slope, the positional relation of the two boundaries, and the number of detected boundaries. The LSD (Line Segment Detector) algorithm is used to extract straight lines, giving a candidate set of runway boundary lines L = {l1, l2, l3, ...}. The two boundaries of the runway are then obtained from constraints such as the length, width and slope of the runway boundary. The specific process is as follows:
a) LSD line detection
The detection of points on the same straight line is mainly carried out according to the gradient direction of the image. The directional gradient formula of the image is:
gx(x, y) = (i(x+1, y) + i(x+1, y+1) - i(x, y) - i(x, y+1))/2
gy(x, y) = (i(x, y+1) + i(x+1, y+1) - i(x, y) - i(x+1, y))/2      (1)

g(x, y) = sqrt(gx(x, y)² + gy(x, y)²)      (2)

ang(x, y) = arctan(gx(x, y)/(-gy(x, y)))      (3)
wherein i(x, y) is the pixel value at coordinate point (x, y) in the image; gx(x, y) is the x-direction gradient at (x, y); gy(x, y) is the y-direction gradient at (x, y); g(x, y) is the gradient magnitude at (x, y); and ang(x, y) is the gradient direction angle at (x, y).
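Equations (1)-(3) can be sketched with numpy, assuming the image array is indexed [y, x]; np.arctan2 stands in for the raw arctan quotient of equation (3) so that a zero gy needs no special case (an implementation assumption):

```python
import numpy as np

def lsd_gradients(img):
    """Per-2x2-block gradients, magnitude and angle of equations (1)-(3).

    Outputs live on the (H-1) x (W-1) grid of 2x2 pixel blocks.
    """
    i = np.asarray(img, dtype=float)
    gx = (i[:-1, 1:] + i[1:, 1:] - i[:-1, :-1] - i[1:, :-1]) / 2  # eq. (1), x direction
    gy = (i[1:, :-1] + i[1:, 1:] - i[:-1, :-1] - i[:-1, 1:]) / 2  # eq. (1), y direction
    g = np.hypot(gx, gy)                                          # eq. (2)
    ang = np.arctan2(gx, -gy)                                     # eq. (3)
    return gx, gy, g, ang
```

For a vertical step edge the angle comes out as π/2, i.e. the level-line direction runs along the edge.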
b) Screening the detected lines
Screen the line set obtained in step a) according to the length, slope and number of the lines, with the following constraints:
linei=fi(ki,li,si) (4)
Lf=F(linei,linej) (5)
wherein linei is a straight line satisfying the constraints on the runway boundary slope ki, length li and position si; Lf is the final runway result (the two boundaries) obtained by further screening the set of lines satisfying constraint (4), the screening condition being that the line pair linei, linej satisfies the constraint function F(linei, linej). The runway boundary is thereby detected accurately.
c) Determining runway area
Extend the boundary lines detected in step b) and determine the runway area from the intersection point of the two runway boundary lines and their intersection points with the image border. The runway region satisfies the following formula:
ROI=S1∩S2 (6)
S1 = {(x, y) | fl1(x, y) ≤ 0, 0 ≤ x ≤ N, 0 ≤ y ≤ M}
S2 = {(x, y) | fl2(x, y) ≥ 0, 0 ≤ x ≤ N, 0 ≤ y ≤ M}      (7)
wherein ROI (Region Of Interest) denotes the final runway area; S1, S2 denote the candidate runway areas constrained by the two boundary lines; fl1(x, y) = 0 and fl2(x, y) = 0 denote the equations of the lines on which the two boundaries lie; M, N denote the height and width of the image respectively.
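A numpy sketch of equations (6)-(7); the patent does not spell out fl1, fl2, so the sign convention f_l(x, y) = y - k·x - b is an assumption here:

```python
import numpy as np

def runway_roi_mask(shape, l1, l2):
    """Binary ROI mask per ROI = S1 ∩ S2, with each boundary line given
    as l = (k, b) for y = k*x + b and f_l(x, y) = y - k*x - b."""
    M, N = shape                      # image height and width
    y, x = np.mgrid[0:M, 0:N]         # pixel coordinate grids
    f1 = y - l1[0] * x - l1[1]        # signed side of boundary l1
    f2 = y - l2[0] * x - l2[1]        # signed side of boundary l2
    return (f1 <= 0) & (f2 >= 0)      # S1 ∩ S2
```

With horizontal test lines y = 2 and y = 1 on a 4x4 image, the mask keeps exactly the two rows between them.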
Step 2: on the basis of the first-frame boundaries detected in step 1, randomly select two sampling points on each of the two boundaries, (x1, y1), (x2, y2), (x3, y3), (x4, y4). These four points are selected as follows. First, from the runway boundary determined in step 1, obtain the line equation y = kx + b, where k is the slope and b the intercept. Second, since the runway boundary is less distinct in the upper half of the image, select sampling points near the bottom of the image (the end far from the vanishing point of the airport runway). The selected sampling points satisfy the following conditions:
Point_Set = {(x, y) | y = kx + b, θy- < y < θy+, θx- < x < θx+}      (8)
wherein Point_Set is the set of sampling points (x, y) satisfying the condition; θy-, θy+ are the lower and upper bounds on the vertical coordinate of a qualifying sampling point; θx-, θx+ are the lower and upper bounds on its horizontal coordinate. By choosing different values of θy-, θy+, θx-, θx+, the 4 sampling points can be selected correctly. Once the sampling points are chosen, they are tracked in step 3.
Step 3: when processing the next frame, select a rectangular sampling window Zi, i = 1, 2, 3, 4, at each sampling point obtained in step 2 to track the sampling points. Let the rectangular sampling window Zi have height Hi and width Wi; then the width and height are related as:
Wi=(1+θi)Hi/|ki| (9)
wherein θi is a proportional margin, and ki is the slope of the line on which the sampling point of that window lies. By choosing a suitable Hi, the size of the rectangular sampling window can thus be updated in every frame according to the slope of the line on which each sampling point lies. For each sampling window, straight lines are extracted from the gradient orientation features, i.e. LSD line detection. The detected line set is screened according to slope and position, finally giving the accurate target line lti inside the sampling window, namely:
lti = { l | √[(xi + Wi/2 - X̄)² + (yi + Hi/2 - Ȳ)²] < γi ,
            √[(xleft - xright)² + (yleft - yright)²] > λi ,
            |(yleft - yright)/(xleft - xright) - k(t-1)i| < ηi }      (10)
wherein lti is the target line; (xi, yi) are the coordinates of the top-left vertex of the current rectangular sampling window; Wi, Hi are the width and height of the current rectangular sampling window; (X̄, Ȳ) are the midpoint coordinates of a line segment in the candidate set; γi is the threshold on the distance between a candidate segment and the window center, a constraint keeping the local optimum within the sampling window; λi is the length threshold for candidate lines; (xleft, yleft), (xright, yright) are the coordinates of the two endpoints of a segment in the candidate set; k(t-1)i is the slope of the line on which the sampling point lay in the previous frame; ηi is the threshold on the slope difference between the two frames, a constraint using the global optimum of the line. The most suitable line segment is selected by combining the optimum within the rectangular window with the global optimum of the frame, and the midpoint of the target line segment is extracted as the tracking point of the runway boundary in this frame, namely
(xt, yt) = ((xleft + xright)/2, (yleft + yright)/2)      (11)
wherein (xt, yt) is the final tracking result of the sampling point in the rectangular sampling window.
Step 4: after the tracking result for each sampling point of the frame is obtained in step 3, fit the runway-area boundary lines from the tracked positions. Suppose the tracking results obtained on the two boundary lines are (x1, y1), (x2, y2) and (x3, y3), (x4, y4) respectively. The boundary line equations are then:

k1 = (y2 - y1)/(x2 - x1),  k2 = (y4 - y3)/(x4 - x3),  b1 = y1 - k1x1,  b2 = y3 - k2x3      (12)

l1: y = k1x + b1 ;  l2: y = k2x + b2      (13)
wherein k is1,k2The slopes of the two boundary lines are respectively; b1,b2The intercept of the two straight lines on the y axis respectively; l1,l2To finally fit the resulting linear equation. The runway area, the ROI, is determined by the intersection between the runway boundary lines and its intersection with the image boundary.
Step 5: on the basis of the runway area and its boundaries obtained in step 4, enhance the detected runway boundary. The runway boundary is enhanced by marking the calibrated straight lines, strengthening the pilot's visual perception of the airport runway during landing.
Step 6: repeat steps 2 to 5 until the aircraft has landed.
The specific embodiment is as follows:
the hardware environment for the implementation is: Intel(R) Xeon(R) E5504, 6 GB RAM, 2.0 GHz; the software environment is Matlab 2014a under Win 7. The new algorithm proposed by the invention is implemented in mixed Matlab and C++ programming. Experiments were conducted on a straight-road video with prominent boundary features similar to an airport runway; the video lasts 10 seconds (s) for a total of 150 frames, with frame size 488 × 191.
Step 1, runway boundary detection in a first frame video image: firstly, the first frame video image is processedAnd noise reduction preprocessing is carried out to improve the definition of the image. And then processing the whole image by using an LSD (least squares) line detection algorithm to obtain all line segment sets L ═ L1,l2,l3,.., using the slope k of the line, the midpoint position (x)m,ym) Length s, etc., i.e. linei=fi(ki,(xm,ym),si). Considering the characteristics of the track in the image, we select the parameter value asThe deviation threshold value from the center position of the rectangular sampling window is 8, and the length selection parameter is siLess than 10; then, the primary selection straight line is further screened according to the position relation between the two runway boundaries, and the straight line of the two runway boundaries is recorded as lm、lnSlope of km、knIntercept of bm、bn. Then first kmAnd k isnIs a pair of positive and negative opposite sign values; secondly, the lengths s of the two boundary lines do not differ much. And finally, in the process from the lower boundary to the upper boundary of the image, the distance between the two runway boundaries presents a descending situation.
Step 2, selecting tracking points on the runway boundary lines. On the basis of the runway boundaries of the first frame image obtained in step 1, the runway is tracked: two sampling points are selected randomly on each of the two runway boundary lines, each point is tracked, and the two runway boundaries are then determined by the rule that two points determine a line. Considering the distinctness of the runway boundary, the sampling points are chosen by the following rules. Suppose the lengths of the two runway boundary segments in the image are Li1, Li2 respectively, and the intersection points of the lower ends of the boundary lines with the lower image border are (xi1, yi1), (xi2, yi2). Points satisfying the following conditions are then selected on the boundary lines, explained with the help of FIG. 3. In conjunction with equation (8), the four sampling points (x1, y1), (x2, y2) and (x3, y3), (x4, y4) are selected randomly within the intervals determined by the following rules:
(x1, y1): (2/5)Lxi1 < |x1 - xi1| < (1/2)Lxi1 ; (2/5)Lyi1 < |y1 - yi1| < (1/2)Lyi1
(x2, y2): (1/4)Lxi1 < |x2 - xi1| < (1/3)Lxi1 ; (1/4)Lyi1 < |y2 - yi1| < (1/3)Lyi1
(x3, y3): (1/4)Lxi2 < |x3 - xi2| < (1/3)Lxi2 ; (1/4)Lyi2 < |y3 - yi2| < (1/3)Lyi2
(x4, y4): (2/5)Lxi2 < |x4 - xi2| < (1/2)Lxi2 ; (2/5)Lyi2 < |y4 - yi2| < (1/2)Lyi2      (14)

Lxi1 = |x0 - xi1|, Lyi1 = |y0 - yi1| ; Lxi2 = |x0 - xi2|, Lyi2 = |y0 - yi2|      (15)
wherein (x0, y0) is the intersection point of the two runway boundaries; Lxi1, Lxi2 and Lyi1, Lyi2 are the differences between the horizontal and vertical coordinates of the endpoints of the two runway boundary segments. Based on the above rules, suitable sampling points can be obtained. Meanwhile, suppose the matrix form of the two runway boundary line equations is:
Y=KX+B (16)
where K is the slope matrix and B is the intercept matrix. The runway is tracked by tracking the two points selected on each of the two boundary straight lines; that is, the two runway boundary lines are determined from the sampling point matrices [x_1 x_2 x_3 x_4]^T and [y_1 y_2 y_3 y_4]^T. Step 3 describes the tracking of these sampling points.
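As a minimal sketch (not part of the patent text; the helper name and example coordinates are illustrative), the sampling-interval rule of equations (14) and (15) can be written as follows. Drawing one random fraction t and applying it to both coordinates keeps the point on the boundary line while satisfying both the x- and y-offset bounds:

```python
import random

def pick_sample_point(x0, y0, xi, yi, lo, hi):
    """Pick a random point on the boundary segment from (xi, yi) toward the
    boundary intersection (x0, y0), with coordinate offsets from (xi, yi)
    between lo and hi times the spans L_x = |x0-xi|, L_y = |y0-yi| (eq. 15)."""
    t = random.uniform(lo, hi)  # one fraction for both axes keeps the point on the line
    return (xi + t * (x0 - xi), yi + t * (y0 - yi))

# Hypothetical example: lower-border intersection (40, 190), boundary intersection (244, 20)
p1 = pick_sample_point(244, 20, 40, 190, 2 / 5, 1 / 2)  # interval for (x1, y1) in eq. (14)
p2 = pick_sample_point(244, 20, 40, 190, 1 / 4, 1 / 3)  # interval for (x2, y2) in eq. (14)
```

Each boundary thus yields two points in distinct, non-overlapping intervals, which keeps the later two-point line fit well conditioned.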
Step 3, tracking the runway sampling points. In the next frame, a rectangular sampling window Z_i, i = 1, 2, 3, 4, is placed at each of the 4 sampling points obtained in step 2 to track them. Let the height and width of window Z_i be H_i and W_i respectively; their relationship is:
$$
W_i = (1 + \theta_i) H_i / |k_i| \tag{17}
$$
wherein θ_i is a proportional margin and k_i is the slope of the straight line on which the window's sampling point lies. Thus, by selecting a suitable H_i, the size of each rectangular sampling window can be updated in every frame according to the slope of the line its sampling point lies on. A straight line is then extracted in each sampling window from the gradient orientation map features, i.e. by LSD line detection. The detected line set is screened by slope and position, finally yielding the accurate target straight line l_{t_i} within the sampling window, namely:
$$
l_{t_i} = \left\{ l \;\middle|\;
\begin{cases}
\sqrt{\left(x_i + \frac{W_i}{2} - \bar{X}\right)^2 + \left(y_i + \frac{H_i}{2} - \bar{Y}\right)^2} < \gamma_i\\[4pt]
\sqrt{(x_{left} - x_{right})^2 + (y_{left} - y_{right})^2} > \lambda_i\\[4pt]
\left| \dfrac{y_{left} - y_{right}}{x_{left} - x_{right}} - k_{(t-1)i} \right| < \eta_i
\end{cases}
\right\} \tag{18}
$$
wherein l_{t_i} is the target straight line; (x_i, y_i) are the coordinates of the top-left vertex of the current rectangular sampling window; W_i, H_i are its width and height; (X̄, Ȳ) are the midpoint coordinates of a line segment in the candidate line set; γ_i is the position-difference threshold between a candidate segment and the window center, constraining for the local optimum within the rectangular sampling window; λ_i is the length threshold of a candidate line; (x_left, y_left), (x_right, y_right) are the coordinates of the two end points of a candidate segment; k_{(t-1)i} is the slope of the line the sampling point lay on in the previous frame; and η_i is the inter-frame slope-difference threshold, constraining for the global optimum of the line. The most suitable line segment is selected by combining the within-window optimum with the frame-global optimum, and the midpoint of the target segment is extracted as the tracking point of the runway boundary in this frame image, namely:
$$
(x_t, y_t) = \left(\frac{x_{left} + x_{right}}{2}, \frac{y_{left} + y_{right}}{2}\right) \tag{19}
$$
wherein (x_t, y_t) is the final tracking result for the sampling point in that rectangular sampling window. The remaining sampling points on the runway boundaries are tracked and located in the same way. Step 4 uses the position information of these runway boundary tracking points to determine the runway boundary in the next video frame.
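The window sizing of equation (17) and the three screening conditions of equation (18), window-local position, minimum length, and inter-frame slope continuity, can be sketched as below. The function names and data layout are hypothetical; the LSD detector that produces the candidate segments is assumed to be available elsewhere:

```python
import math

def window_width(H_i, k_i, theta_i=1.5):
    """Window width per eq. (17): flatter lines get wider windows.
    theta_i = 1.5 follows the value stated in claim 2."""
    return (1 + theta_i) * H_i / abs(k_i)

def screen_candidates(lines, win_x, win_y, W, H, k_prev, gamma, lam, eta):
    """Filter LSD segments detected inside a sampling window, eq. (18).
    Each segment is ((x_left, y_left), (x_right, y_right)) in image coordinates."""
    cx, cy = win_x + W / 2, win_y + H / 2            # window center
    kept = []
    for (xl, yl), (xr, yr) in lines:
        mx, my = (xl + xr) / 2, (yl + yr) / 2        # segment midpoint (X̄, Ȳ)
        if math.hypot(cx - mx, cy - my) >= gamma:    # too far from the window center
            continue
        if math.hypot(xl - xr, yl - yr) <= lam:      # too short
            continue
        if xl == xr:                                 # vertical segment: slope undefined
            continue
        if abs((yl - yr) / (xl - xr) - k_prev) >= eta:  # slope jumped between frames
            continue
        kept.append(((xl, yl), (xr, yr)))
    return kept

def tracking_point(seg):
    """Midpoint of the selected target segment, eq. (19)."""
    (xl, yl), (xr, yr) = seg
    return ((xl + xr) / 2, (yl + yr) / 2)
```

In a full tracker, the best survivor of `screen_candidates` would be chosen per window and its `tracking_point` carried into the next frame.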
Step 4, after the tracking result of each sampling point in the current frame is obtained in step 3, the runway region boundary straight lines are fitted from the tracked positions and used for runway boundary calibration in the next frame. Suppose the tracking results obtained on the two boundary lines are (x_1, y_1), (x_2, y_2) and (x_3, y_3), (x_4, y_4). The boundary line equations are then:

$$
k_1 = \frac{y_2 - y_1}{x_2 - x_1},\quad k_2 = \frac{y_4 - y_3}{x_4 - x_3},\quad b_1 = y_1 - k_1 x_1,\quad b_2 = y_3 - k_2 x_3
$$

$$
l_1: y = k_1 x + b_1,\qquad l_2: y = k_2 x + b_2
$$

wherein k_1, k_2 are the slopes of the two boundary lines; b_1, b_2 are their intercepts on the y axis; and l_1, l_2 are the finally fitted line equations. The runway region, the ROI, is determined by the intersection between the runway boundary lines and their intersections with the image border.
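A sketch of the two-point line fit and the ROI corner computation in step 4 (function names are illustrative; non-vertical boundaries are assumed, which holds for a forward-looking landing view):

```python
def fit_line(p, q):
    """Slope-intercept line through two tracked points on one boundary."""
    (x1, y1), (x2, y2) = p, q
    k = (y2 - y1) / (x2 - x1)   # assumes a non-vertical boundary line
    b = y1 - k * x1
    return k, b

def line_intersection(l1, l2):
    """Intersection of y = k1*x + b1 and y = k2*x + b2 (the far runway corner)."""
    (k1, b1), (k2, b2) = l1, l2
    x = (b2 - b1) / (k1 - k2)
    return x, k1 * x + b1

def lower_border_x(line, img_height):
    """x-coordinate where a boundary line meets the lower image border y = H - 1;
    together with the lines' mutual intersection this bounds the runway ROI."""
    k, b = line
    return (img_height - 1 - b) / k
```

For example, `fit_line` applied to the two tracked points of each boundary gives (k_1, b_1) and (k_2, b_2), and the three corner routines together delimit the triangular runway region.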
Step 5, on the basis of the runway region and its boundary straight lines obtained in step 4, the detected runway boundaries are enhanced by straight-line calibration, i.e. marking the fitted lines on the image, which strengthens the pilot's visual perception of the airport runway during landing.
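The straight-line calibration of step 5 amounts to overlaying the fitted lines on the frame. A minimal hypothetical rasterizer (a real system would use an image library) might look like:

```python
def overlay_line(img, k, b, value=255):
    """Brighten the pixels of y = k*x + b on a grayscale image stored as a
    list of rows, img[y][x]. Iterates over columns, so very steep lines
    would be marked sparsely; adequate for near-horizontal runway boundaries."""
    h, w = len(img), len(img[0])
    for x in range(w):
        y = round(k * x + b)
        if 0 <= y < h:
            img[y][x] = value
    return img
```

Calling `overlay_line` once per fitted boundary line produces the enhanced frame shown to the pilot.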
Step 6, repeating steps 2 to 5 until the aircraft landing is complete.
To further illustrate the effectiveness of the method in pilot landing vision enhancement applications, it is compared with a detection-based vision enhancement algorithm in terms of runway boundary tracking accuracy and vision enhancement real-time performance. Visual enhancement is performed on a simulated airplane landing video of 150 frames with frame size 488×191, and the enhancement effect of the two methods in a low-visibility scene is considered. The comparative results are shown in table 1 below. It can be seen that the method not only improves runway boundary tracking accuracy but also achieves processing time an order of magnitude better than the detection-based runway boundary enhancement algorithm.
TABLE 1 comparison of this method with detection-based visual enhancement algorithms under low visibility

Claims (3)

1. An airplane landing visual enhancement method based on runway boundary enhancement is characterized by comprising the following steps:
step 1, detecting the runway boundary in the first frame video image: carrying out noise-reduction preprocessing on the first frame video image, and processing with the LSD line segment detection algorithm to obtain the set of all line segments L = {l_1, l_2, l_3, ...}; for each segment l_i, with slope k_i, midpoint position (x_{im}, y_{im}) and length s_i, applying the constraint function line_i = f_i(k_i, (x_{im}, y_{im}), s_i) to obtain the two runway boundary straight lines line_{i1}, line_{i2} in the first frame image;
Step 2, selecting tracking points on the runway boundary straight line: for four sample points (x)1,y1),(x2,y2) And (x)3,y3),(x4,y4) Randomly selecting the selected area within the interval determined by the following rules:
$$
\begin{aligned}
(x_1, y_1):&\ \tfrac{2}{5}L_{xi1} < |x_1 - x_{i1}| < \tfrac{1}{2}L_{xi1};\quad \tfrac{2}{5}L_{yi1} < |y_1 - y_{i1}| < \tfrac{1}{2}L_{yi1}\\
(x_2, y_2):&\ \tfrac{1}{4}L_{xi1} < |x_2 - x_{i1}| < \tfrac{1}{3}L_{xi1};\quad \tfrac{1}{4}L_{yi1} < |y_2 - y_{i1}| < \tfrac{1}{3}L_{yi1}\\
(x_3, y_3):&\ \tfrac{1}{4}L_{xi2} < |x_3 - x_{i2}| < \tfrac{1}{3}L_{xi2};\quad \tfrac{1}{4}L_{yi2} < |y_3 - y_{i2}| < \tfrac{1}{3}L_{yi2}\\
(x_4, y_4):&\ \tfrac{2}{5}L_{xi2} < |x_4 - x_{i2}| < \tfrac{1}{2}L_{xi2};\quad \tfrac{2}{5}L_{yi2} < |y_4 - y_{i2}| < \tfrac{1}{2}L_{yi2}
\end{aligned}
$$
$$
L_{xi1} = |x_0 - x_{i1}|,\quad L_{yi1} = |y_0 - y_{i1}|,\quad L_{xi2} = |x_0 - x_{i2}|,\quad L_{yi2} = |y_0 - y_{i2}|
$$
wherein: (x_0, y_0) is the intersection point of the two runway boundaries; L_{i1}, L_{i2} are the lengths of the two boundary straight lines line_{i1}, line_{i2} respectively; the intersection points of the lower ends of the runway boundaries with the lower image border are (x_{i1}, y_{i1}), (x_{i2}, y_{i2}); and L_{xi1}, L_{xi2} and L_{yi1}, L_{yi2} are the horizontal and vertical coordinate differences of the end points of the two runway boundary line segments, respectively;
the matrix of the two runway boundary line equations is represented as:
Y=KX+B
wherein: k is the slope matrix and B is the intercept matrix. From a matrix of sampling points x1 x2 x3 x4]T,[y1 y2 y3 y4]TDetermining two runway boundary straight lines;
step 3, tracking the runway sampling points in the next frame: respectively setting a sampling window Z_i, i = 1, 2, 3, 4, at each of the 4 sampling points obtained in step 2;
extracting a straight line in each sampling window by using the LSD line segment detection algorithm, and screening the detected straight line set with the following formula to obtain the accurate target straight line l_{t_i} in the sampling window:
$$
l_{t_i} = \left\{ l \;\middle|\;
\begin{cases}
\sqrt{\left(x_i + \frac{W_i}{2} - \bar{X}\right)^2 + \left(y_i + \frac{H_i}{2} - \bar{Y}\right)^2} < \gamma_i\\[4pt]
\sqrt{(x_{left} - x_{right})^2 + (y_{left} - y_{right})^2} > \lambda_i\\[4pt]
\left| \dfrac{y_{left} - y_{right}}{x_{left} - x_{right}} - k_{(t-1)i} \right| < \eta_i
\end{cases}
\right\}
$$
wherein (x_i, y_i) are the coordinates of the top-left vertex of the current rectangular sampling window; W_i, H_i are its width and height; (X̄, Ȳ) are the midpoint coordinates of a line segment in the candidate line set; γ_i is the position-difference threshold between a candidate segment and the window center, constraining for the local optimum within the rectangular sampling window; λ_i is the length threshold of a candidate line; (x_left, y_left), (x_right, y_right) are the coordinates of the two end points of a candidate segment; k_{(t-1)i} is the slope of the line the sampling point lay on in the previous frame; η_i is the inter-frame slope-difference threshold, further constraining for the global optimum of the line;
then extracting the midpoint of the target straight-line segment as the tracking point of the runway boundary in this frame image:
$$
(x_t, y_t) = \left(\frac{x_{left} + x_{right}}{2}, \frac{y_{left} + y_{right}}{2}\right)
$$
wherein: (x_t, y_t) is the final tracking result of the sampling point in the rectangular sampling window;
the rectangular sampling window Z_i has height H_i and width W_i, related by W_i = (1 + θ_i)H_i / |k_i|, wherein θ_i is a proportional margin and k_i is the slope of the straight line corresponding to the sampling point of the rectangular sampling window;
the tracking results of the 4 sampling points of this frame are therefore respectively: (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4);
Step 4, fitting a boundary straight line of the runway: establishing a boundary straight line equation according to the tracking results of the 4 sampling points:
$$
k_1 = \frac{y_2 - y_1}{x_2 - x_1},\quad k_2 = \frac{y_4 - y_3}{x_4 - x_3},\quad b_1 = y_1 - k_1 x_1,\quad b_2 = y_3 - k_2 x_3
$$
$$
l_1: y = k_1 x + b_1,\qquad l_2: y = k_2 x + b_2
$$
l_1 and l_2 are the resulting equations of the two boundary lines, wherein k_1, k_2 are their slopes and b_1, b_2 their intercepts on the y axis; the intersection point between l_1 and l_2 and their intersection points with the image border determine the runway region (ROI);
step 5, runway boundary enhancement: marking the two runway boundary straight lines l_1 and l_2 obtained in step 4 on the current frame image, thereby enhancing the runway boundary of the original image;
step 6: repeating steps 2 to 5 for each subsequent video frame until the aircraft has landed.
2. The runway boundary enhancement-based visual enhancement method for aircraft landing according to claim 1, further comprising: the initial length H_i of the rectangular sampling windows on the two boundary straight lines is 48; the proportional margin θ_i is 1.5; the position-difference threshold γ_i between a candidate straight-line segment and the center of the rectangular sampling window is …; the length threshold λ_i of a candidate straight line is …; the inter-frame slope-difference threshold η_i is ….
3. The runway boundary enhancement-based visual enhancement method for aircraft landing according to claim 1, further comprising: the parameter values in the constraint function are …; the deviation threshold from the center position of the rectangular sampling window is 8; and the length selection parameter is s_i < 10.
CN201510205841.0A 2015-04-27 2015-04-27 Aircraft landing vision enhancement method based on runway boundary enhancement Active CN104766337B (en)


Publications (2)

Publication Number Publication Date
CN104766337A true CN104766337A (en) 2015-07-08
CN104766337B CN104766337B (en) 2017-10-20




