CN104766337B - Aircraft landing vision enhancement method based on runway boundary enhancement - Google Patents

Aircraft landing vision enhancement method based on runway boundary enhancement

Info

Publication number
CN104766337B
CN104766337B (application CN201510205841.0A; also published as CN104766337A)
Authority
CN
China
Prior art keywords
runway
boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510205841.0A
Other languages
Chinese (zh)
Other versions
CN104766337A (en)
Inventor
李晖晖
燕攀登
郭雷
胡秀华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN201510205841.0A
Publication of CN104766337A
Application granted
Publication of CN104766337B
Active legal status (Current)
Anticipated expiration legal status


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to an aircraft landing vision enhancement method based on runway boundary enhancement. Straight-line features are first extracted from the first frame of a forward-looking infrared (FLIR) video using the LSD line detection algorithm, and the runway boundary lines are screened out using constraints intrinsic to runway boundaries. Two points are then randomly selected on each of the two runway boundary lines, and a rectangular sampling window is centred on each point. Gradient distributions are extracted from the sampling windows and the target classifier parameters are initialized. Each sampled point is then tracked and located in the subsequent video frames, the runway boundaries are fitted from the tracking results of the sampled points, and the runway region and its boundaries are finally determined. Finally, the runway boundaries are enhanced to improve the pilot's visual perception. The method makes full use of the inter-frame information in the forward-looking infrared landing video and applies target tracking to identify and track the airport runway boundaries, which greatly improves the time performance of the vision enhancement algorithm while maintaining the runway boundary recognition accuracy.

Description

Aircraft landing vision enhancement method based on runway boundary enhancement
Technical field
The invention belongs to the field of computer vision and image processing. It relates to an aircraft landing vision enhancement method based on runway boundary enhancement, which can be widely applied in fields such as enhanced vision systems (EVS) for pilots and vehicle vision-based navigation.
Background technology
During landing, weather is one of the main factors that prevent a pilot from landing normally. In bad weather such as fog, rain, snow, or dust, the visibility of the runway and its surrounding indication signals deteriorates, so the runway and peripheral information the pilot obtains visually is insufficient for a normal descent. Night landings likewise suffer from darkness and low visibility. Improving the visibility of the airport runway environment and enhancing the pilot's visual perception therefore has significant practical value.
Pilot vision enhancement uses various sensors and advanced processing to improve what the pilot sees under severe weather and low-light conditions. The conventional "vision system" concept was proposed to solve exactly this problem. Its basic idea is to use a forward-looking sensor to acquire high-definition imagery of the runway and its surroundings in real time, and to form, through appropriate information and image processing and fusion, a real-scene image that is easy for the pilot to understand, so that the pilot can see the runway clearly through cloud, fog, and other bad weather and operate the aircraft correctly through the approach. A system meeting these requirements can be built either by synthetic vision or by vision enhancement. Thermal infrared and visible-light imaging have entirely different physical characteristics. With suitable illumination, visible-light images have relatively high contrast and contain abundant ground detail, but in bad weather or in very dark conditions at night the imaging result is strongly degraded and ground targets become hard to distinguish. Infrared imaging, by contrast, captures detail from the thermal radiation of objects, so it is little affected by weather and illumination, and targets of interest often appear bright and are easy to distinguish in the image.
There has been much research on airport runway recognition and localization, but most of it applies some new theory or mathematical tool, and few methods are effective for the specific application. These methods have some inherent shortcomings. First, previous research mostly studies two or a few still images, whereas aircraft landing produces a continuous video that, compared with still images, carries additional information along the time dimension. If a still-image airfield detection method is applied frame by frame, the temporal information (such as inter-frame correlation) cannot be exploited to guide the localization, and isolated frame-by-frame runway detection is computationally expensive and slow. Furthermore, pilot vision enhancement requires the processing algorithm to run in real time to meet the application's needs, which imposes strict requirements on computational efficiency and memory while the quality of the result must be maintained.
The content of the invention
The technical problem to be solved
In order to avoid the shortcomings of the prior art, the present invention proposes an aircraft landing vision enhancement method based on runway boundary enhancement, used for runway localization and tracking in the forward-looking imagery of a pilot landing vision enhancement system.
Technical scheme
An aircraft landing vision enhancement method based on runway boundary enhancement, characterised by the following steps:
Step 1, detect the runway boundaries in the first video frame: apply noise-reduction preprocessing to the first frame, process it with the LSD line detection algorithm to obtain the set of all line segments L={l1,l2,l3,...}, then constrain each line using its slope ki, midpoint (xim,yim) and length si through the constraint function linei=fi(ki,(xim,yim),si), obtaining the two runway boundary lines linei1, linei2 in the first frame.
Step 2, select trace points on the runway boundary lines: the four sampled points (x1,y1), (x2,y2) and (x3,y3), (x4,y4) are randomly selected within the intervals determined by the following rule:
(x1,y1): (2/5)Lxi1 < |x1 - xi1| < (1/2)Lxi1;  (2/5)Lyi1 < |y1 - yi1| < (1/2)Lyi1
(x2,y2): (1/4)Lxi1 < |x2 - xi1| < (1/3)Lxi1;  (1/4)Lyi1 < |y2 - yi1| < (1/3)Lyi1
(x3,y3): (1/4)Lxi2 < |x3 - xi2| < (1/3)Lxi2;  (1/4)Lyi2 < |y3 - yi2| < (1/3)Lyi2
(x4,y4): (2/5)Lxi2 < |x4 - xi2| < (1/2)Lxi2;  (2/5)Lyi2 < |y4 - yi2| < (1/2)Lyi2
Wherein: (x0,y0) is the intersection point of the two runway boundaries; Li1, Li2 are the lengths of the two boundary lines linei1, linei2; the lower ends of the runway boundaries intersect the lower image border at (xi1,yi1) and (xi2,yi2); Lxi1, Lxi2 and Lyi1, Lyi2 are the horizontal and vertical coordinate differences between the endpoints of the two runway boundary segments, with Lxi1 = |x0 - xi1|, Lyi1 = |y0 - yi1|, Lxi2 = |x0 - xi2|, Lyi2 = |y0 - yi2|;
The two runway boundary lines are expressed in matrix form as:
Y=KX+B
Wherein: K is the slope matrix and B is the intercept matrix. The sampling-point matrices [x1 x2 x3 x4]T and [y1 y2 y3 y4]T determine the two runway boundary lines;
Step 3, track the runway sampled points in the next frame: for each of the 4 sampled points obtained in step 2, set a sampling window Zi, i=1,2,3,4;
Extract lines in each sampling window with the LSD line detection algorithm, and screen the detected line set with the following conditions to obtain the accurate target line l_ti in the sampling window:
l_ti = { l | sqrt((xi + Wi/2 - X̄)^2 + (yi + Hi/2 - Ȳ)^2) < γi,
             sqrt((xleft - xright)^2 + (yleft - yright)^2) > λi,
             |(yleft - yright)/(xleft - xright) - k(t-1)i| < ηi }
Wherein, (xi,yi) is the top-left vertex of the current rectangular sampling window; Wi, Hi are the width and length of the current sampling window; (X̄, Ȳ) is the midpoint of a candidate line segment in the candidate set; γi is the threshold on the distance between a candidate segment and the window centre, a constraint using the local optimum within the rectangular sampling window; λi is the length threshold for candidate lines; (xleft,yleft), (xright,yright) are the two endpoints of the candidate segment; k(t-1)i is the slope of the line containing the sampled point in the previous frame; ηi is the slope-difference threshold between two frames, a further constraint using the global optimum of the containing line;
The midpoint of the target line segment is then extracted as the trace point of the runway boundary in this frame:
(xt, yt) = ((xleft + xright)/2, (yleft + yright)/2)
Wherein: (xt,yt) is the final tracking result of the sampled point in this rectangular sampling window;
The rectangular sampling window Zi has length Hi and width Wi, related by Wi = (1 + θi)Hi/|ki|, wherein: θi is a ratio margin; ki is the slope of the line containing the sampled point corresponding to the window;
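As an illustration only (not part of the claimed method), the window-size relation above can be sketched in Python as follows, using Hi = 48 and θi = 1.5 as given later and interpreting Hi as the window's vertical size; the helper name and the guard against a near-zero slope are assumptions:

```python
import numpy as np

def sampling_window(point, k, H=48, theta=1.5):
    """Rectangular sampling window centred on a sampled point, with width
    derived from the local boundary slope k via W = (1 + theta) * H / |k|."""
    x_c, y_c = point
    W = (1.0 + theta) * H / max(abs(k), 1e-6)   # guard against near-zero slope
    # return the window as (x_topleft, y_topleft, width, height)
    return (int(x_c - W / 2), int(y_c - H / 2), int(round(W)), H)

# example: a sampled point on a boundary whose local slope is -1.2
print(sampling_window((220, 150), k=-1.2))
```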
Thus the tracking results of the 4 sampled points in this frame are: (x1,y1), (x2,y2), (x3,y3), (x4,y4);
Step 4, fit the runway boundary lines: establish the boundary line equations from the tracking results of the 4 sampled points:
k1 = (y2 - y1)/(x2 - x1), b1 = y1 - k1x1
k2 = (y4 - y3)/(x4 - x3), b2 = y3 - k2x3
l1: y = k1x + b1
l2: y = k2x + b2
l1 and l2 are the two resulting boundary line equations; the intersection between l1 and l2 together with their intersections with the image border determines the runway region (ROI);
Wherein: k1, k2 are the slopes of the two boundary lines; b1, b2 are their intercepts on the y axis;
Step 5, runway boundary enhancement: mark the two runway boundary lines l1 and l2 obtained in step 4 on this frame, so that the runway boundaries of the original image are enhanced;
Step 6: for the next video frame, repeat steps 2 to 5 until the aircraft has landed.
The initial length Hi of the rectangular sampling windows on the two boundary lines is 48; the ratio margin θi is 1.5; the position-difference threshold γi between a candidate segment and the rectangular sampling window centre, the length threshold λi of candidate lines, and the slope-difference threshold ηi between two frames take fixed values.
The parameter values in the constraint function are: the position deviation threshold from the rectangular sampling window centre is 8, and the length selection parameter is si < 10.
Beneficial effect
The proposed aircraft landing vision enhancement method based on runway boundary enhancement applies target tracking theory to locate and track the runway boundaries accurately, and finally fits and enhances the boundary positions. Simulation results show that the algorithm can track the runway in real time and effectively in forward-looking infrared landing video, which can effectively enhance the pilot's visual perception and help the pilot land the aircraft smoothly.
The invention has the following advantages. First, using target tracking theory to identify and mark the airport runway significantly improves the real-time performance of runway region recognition and significantly improves the robustness of the algorithm under interference (such as occlusion or loss). Second, tracking the runway boundary sampled points with combined local and global constraints significantly reduces runway deviations caused by a single sampled point being tracked incorrectly. Third, the runway boundaries and the runway region are enhanced in a simple and effective way, making the relevant region in the image more salient and easier for the pilot to recognize. The algorithm can therefore effectively enhance the pilot's visual ability when landing under low visibility caused by bad weather.
Brief description of the drawings
Fig. 1: flow chart of the method of the invention;
Fig. 2: the target video image selected for the simulation experiment and a series of processing results:
(a) the first video frame; (b) schematic diagram of the runway region; (c) preliminary line detection result of the LSD algorithm; (d) the runway boundary lines determined after screening; (e) the sampled points and rectangular sampling windows chosen on the runway boundaries; (f) the runway boundaries fitted from the first-frame sampled points; (g) runway boundary lines fitted in real time from the sampled points in a later video frame; (h) the runway region tracking and marking result;
Fig. 3: schematic diagram of the positions of the four chosen sampled points.
Embodiment
The invention is further described below with reference to the embodiments and the accompanying drawings.
The method comprises the following steps:
Step 1, detect the runway boundaries in the first video frame: this step mainly uses the salient boundary features of the airport runway in the infrared image to recognize the runway, and its core is an accurate boundary detection algorithm. The runway is located accurately using constraints such as boundary length, boundary saliency, slope, the positional relationship of the two boundaries, and the number of detected boundary lines. The LSD line detection algorithm is used for line detection, giving a set of candidate runway boundary lines L={l1,l2,l3,...}. Finally, the two runway boundaries are obtained according to constraints such as runway boundary length, boundary width and slope. The specific procedure is as follows:
a) LSD line detection
Points on the same line are detected mainly from the gradient direction of the image. The gradient direction formula of the image is:
ang(x, y) = arctan(gx(x, y)/(-gy(x, y)))    (3)
Wherein, i(x, y) is the pixel value at coordinate (x, y); gx(x, y) is the x-direction gradient at (x, y); gy(x, y) is the y-direction gradient at (x, y); g(x, y) is the gradient magnitude at (x, y); ang(x, y) is the gradient direction angle at (x, y).
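For illustration, a minimal NumPy sketch of such a gradient field follows; the 2x2 finite-difference scheme is an assumption (the LSD algorithm uses a similar one) and is not a formula taken from this patent:

```python
import numpy as np

def gradient_field(img):
    """Per-pixel gradient magnitude g(x, y) and direction angle
    ang(x, y) = arctan(gx / (-gy)) for a grayscale image."""
    i = img.astype(np.float64)
    gx = (i[:-1, 1:] + i[1:, 1:] - i[:-1, :-1] - i[1:, :-1]) / 2.0
    gy = (i[1:, :-1] + i[1:, 1:] - i[:-1, :-1] - i[:-1, 1:]) / 2.0
    g = np.hypot(gx, gy)          # gradient magnitude
    ang = np.arctan2(gx, -gy)     # gradient direction angle
    return g, ang
```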
b) Screening the detected lines
The line set obtained in a) is screened according to the length, slope and quantity features of the lines, using the following constraints:
linei=fi(ki,li,si) (4)
Lf=F (linei,linej) (5)
Wherein, linei is a line satisfying the runway boundary constraints on slope ki, length li and position si; Lf is the final runway result (the two boundaries) obtained by further screening the line set that satisfies the constraint of formula (4), the screening condition being that the line pair linei, linej satisfies the constraint function F(linei, linej). In this way the runway boundaries are obtained by accurate detection.
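For illustration, a Python sketch of this kind of pairwise screening follows, using the pairwise constraints described in the embodiment (opposite-sign slopes, similar segment lengths); the threshold values and the scoring are assumptions, not the patent's parameters:

```python
import numpy as np

def screen_runway_pair(segments, min_len=60.0, k_min=0.2):
    """Pick the pair of detected segments most likely to be the runway edges.
    `segments` is a list of ((x1, y1), (x2, y2)) tuples from a line detector."""
    cands = []
    for (x1, y1), (x2, y2) in segments:
        length = np.hypot(x2 - x1, y2 - y1)
        if length < min_len or x1 == x2:
            continue                          # too short, or vertical
        k = (y2 - y1) / (x2 - x1)
        if abs(k) < k_min:                    # near-horizontal, cannot be an edge
            continue
        cands.append((k, length, ((x1, y1), (x2, y2))))

    best, best_score = None, np.inf
    for a in range(len(cands)):
        for b in range(a + 1, len(cands)):
            ka, la, sa = cands[a]
            kb, lb, sb = cands[b]
            if ka * kb >= 0:                  # the two edges have opposite-sign slopes
                continue
            score = abs(la - lb) / max(la, lb)   # prefer similar lengths
            if score < best_score:
                best, best_score = (sa, sb), score
    return best                                # None if no valid pair exists
```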
c) Determining the runway region
The boundary lines obtained in b) are extended and otherwise processed; the runway region can then be determined from the intersection of the two runway boundary lines and the intersections of the boundary lines with the image border. The runway region satisfies the following equation:
ROI=S1∩S2 (6)
Wherein, ROI (Region Of Interest) denotes the final runway region; S1, S2 denote the candidate runway regions constrained by the two boundary lines; fl1(x, y)=0 and fl2(x, y)=0 denote the line equations of the two boundaries; M, N denote the height and width of the image.
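For illustration, a NumPy sketch of building such a region mask from the two boundary lines; it assumes the runway interior lies below both lines in image coordinates (y grows downward) and that the lines are not parallel:

```python
import numpy as np

def runway_roi_mask(shape, k1, b1, k2, b2):
    """Boolean mask of the runway region bounded by y = k1*x + b1,
    y = k2*x + b2 and the lower image border."""
    M, N = shape                                # image height and width
    ys, xs = np.mgrid[0:M, 0:N]
    s1 = ys - (k1 * xs + b1)                    # signed side w.r.t. boundary 1
    s2 = ys - (k2 * xs + b2)                    # signed side w.r.t. boundary 2
    x0 = (b2 - b1) / (k1 - k2)                  # intersection (vanishing point)
    y0 = k1 * x0 + b1
    return (s1 >= 0) & (s2 >= 0) & (ys >= y0)   # inside both edges, below the apex

# example: mask = runway_roi_mask((191, 488), k1=-0.9, b1=300.0, k2=1.1, b2=-250.0)
```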
Step 2: on the basis of the first-frame boundaries detected in step 1, two sampled points are randomly selected on each of the two boundaries: (x1,y1), (x2,y2), (x3,y3), (x4,y4). The four sampled points are chosen as follows. First, the line equation y=kx+b is obtained from the runway boundary determined in step 1, where k is the slope and b the intercept. Second, considering that the runway boundaries in the upper half of the image are poorly defined, sampled points close to the lower part of the image (the end away from the runway vanishing point) are chosen. The chosen sampled points satisfy the following restriction:
Point_Set = {(x, y) | y = kx + b, θy- < y < θy+, θx- < x < θx+}    (8)
Wherein, Point_Set is the set of sampled points (x, y) satisfying the condition; θy-, θy+ are the lower and upper bounds of the ordinate of qualifying sampled points; θx-, θx+ are the lower and upper bounds of the abscissa. By choosing suitable values of θy-, θy+, θx-, θx+, the 4 sampled points can be chosen correctly. After the sampled points are chosen, they are tracked in step 3.
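For illustration, a Python sketch of picking two sampled points on one runway edge; as bounds it reuses the fractional intervals of step 2 of the technical scheme above (2/5 to 1/2 and 1/4 to 1/3 of the edge length measured from the lower endpoint), and the parametrization along the edge is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def pick_edge_points(lower_pt, apex, fracs=((2/5, 1/2), (1/4, 1/3))):
    """Randomly pick one sampled point per fractional interval on the segment
    from the lower endpoint `lower_pt` toward the vanishing point `apex`."""
    (xi, yi), (x0, y0) = lower_pt, apex
    points = []
    for lo, hi in fracs:
        t = rng.uniform(lo, hi)                       # fractional distance from lower end
        points.append((xi + t * (x0 - xi), yi + t * (y0 - yi)))
    return points

# example: left edge running from (40, 190) up to a vanishing point at (250, 60)
print(pick_edge_points((40, 190), (250, 60)))
```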
Step 3: in the next frame, for each sampled point obtained in step 2, a rectangular sampling window Zi, i=1,2,3,4, is chosen in order to track the sampled point. Let the length and width of the rectangular sampling window Zi be Hi and Wi; then they are related by:
Wi=(1+ θi)Hi/|ki| (9)
Wherein, θi is a ratio margin; ki is the slope of the line containing the sampled point corresponding to the window. By choosing a suitable Hi, the size of the rectangular sampling window can be updated in every frame according to the slope of the line containing each sampled point. Lines are extracted in each sampling window from its gradient direction map, i.e. by LSD line detection, and the detected line set is screened according to slope and position (using the position, length and slope conditions given in step 3 of the technical scheme), finally giving the accurate target line l_ti in the sampling window:
Wherein, l_ti is the target line; (xi,yi) is the top-left vertex of the current rectangular sampling window; Wi, Hi are the width and length of the current sampling window; (X̄, Ȳ) is the midpoint of a candidate line segment in the candidate set; γi is the threshold on the distance between a candidate segment and the window centre, i.e. a constraint using the local optimum within the sampling window; λi is the length threshold for candidate lines; (xleft,yleft), (xright,yright) are the two endpoints of a candidate segment; k(t-1)i is the slope of the line containing the sampled point in the previous frame; ηi is the slope-difference threshold between two frames, i.e. a constraint using the global optimum of the containing line. Combining the local optimum within the rectangular window and the global optimum of this frame, the most suitable line segment is chosen, and the midpoint of the target segment is extracted as the trace point of the runway boundary in this frame, i.e.
Wherein, (xt,yt) is the final tracking result of the sampled point in this rectangular sampling window.
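For illustration, a Python sketch of the per-window screening and trace-point extraction described above; the segment input format and the threshold values used here for γ, λ and η are assumptions:

```python
import numpy as np

def track_point_in_window(segments, win_center, k_prev,
                          gamma=16.0, lam=24.0, eta=0.4):
    """Pick the new trace point inside one sampling window.
    `segments` are ((x1, y1), (x2, y2)) pairs detected inside the window
    (e.g. by LSD); the surviving segment's midpoint is the trace point."""
    cx, cy = win_center
    best, best_d = None, np.inf
    for (xl, yl), (xr, yr) in segments:
        if xl == xr:
            continue                                   # skip vertical segments
        k = (yr - yl) / (xr - xl)
        mx, my = (xl + xr) / 2.0, (yl + yr) / 2.0
        length = np.hypot(xr - xl, yr - yl)
        d = np.hypot(mx - cx, my - cy)
        # local constraint (midpoint near the window centre), length constraint,
        # and global constraint (slope close to the previous frame's slope)
        if d < gamma and length > lam and abs(k - k_prev) < eta and d < best_d:
            best, best_d = (mx, my), d
    return best                                        # None if nothing survives
```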
Step 4: after step 3 gives the tracking result of each sampled point in this frame, the runway boundary lines are fitted from the tracked positions. Suppose the tracking results on the two boundary lines are (x1,y1), (x2,y2) and (x3,y3), (x4,y4) respectively. The boundary line equations are then obtained as l1: y = k1x + b1 and l2: y = k2x + b2, with k1 = (y2 - y1)/(x2 - x1), b1 = y1 - k1x1 and k2 = (y4 - y3)/(x4 - x3), b2 = y3 - k2x3.
Wherein, k1, k2 are the slopes of the two boundary lines; b1, b2 are their intercepts on the y axis; l1, l2 are the finally fitted line equations. The intersection between the runway boundary lines and their intersections with the image border determine the runway region, i.e. the ROI.
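For illustration, a Python sketch of the two-point boundary fitting and intersection of step 4; the helper names are assumptions, and the edges are assumed to be neither vertical nor parallel:

```python
def fit_boundaries(p1, p2, p3, p4):
    """Rebuild the two boundary lines y = k*x + b from four tracked points
    (p1, p2 on one edge, p3, p4 on the other) and return their intersection."""
    def through(a, b):
        k = (b[1] - a[1]) / (b[0] - a[0])              # two-point slope
        return k, a[1] - k * a[0]                      # slope, intercept
    k1, b1 = through(p1, p2)
    k2, b2 = through(p3, p4)
    x0 = (b2 - b1) / (k1 - k2)                         # vanishing-point abscissa
    return (k1, b1), (k2, b2), (x0, k1 * x0 + b1)

# example with four tracked points
print(fit_boundaries((60, 180), (120, 130), (420, 175), (360, 128)))
```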
Step 5: on the basis of the runway region and its boundaries obtained in step 4, the detected runway boundaries are enhanced. The runway boundaries are enhanced by marking the fitted boundary lines on the image, which enhances the pilot's ability to see the airport runway during landing.
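For illustration, a minimal OpenCV sketch of marking the two fitted boundary lines on the current frame; the colour and line width are arbitrary choices, not values specified by the patent:

```python
import cv2

def overlay_runway(frame_bgr, k1, b1, k2, b2, color=(0, 255, 0), thickness=2):
    """Draw the two fitted boundary lines y = k*x + b on a copy of the frame
    so the runway edges stand out for the pilot display."""
    h, w = frame_bgr.shape[:2]
    out = frame_bgr.copy()
    for k, b in ((k1, b1), (k2, b2)):
        p1 = (0, int(round(b)))                        # point at x = 0
        p2 = (w - 1, int(round(k * (w - 1) + b)))      # point at x = w - 1
        cv2.line(out, p1, p2, color, thickness)        # OpenCV clips to the image
    return out
```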
Step 6: repeat steps 2 to 5 until the aircraft has landed.
Specific embodiment:
The hardware environment for the implementation is: Intel(R) Xeon(R) E5504, 2.0 GHz, 6 GB RAM. The software environment is Matlab 2014a under Windows 7. The proposed algorithm is implemented with mixed Matlab and C++ programming. The test uses a straight highway video with salient boundary features similar to an airport runway; the video lasts 10 seconds (150 frames in total), with frame size 488 × 191.
Step 1, detect the runway boundaries in the first video frame: noise-reduction preprocessing is first applied to the first frame to improve the clarity of the image. The whole image is then processed with the LSD line detection algorithm to obtain the set of all line segments L={l1,l2,l3,...}, and the lines are constrained using the slope k, midpoint (xm,ym) and length s of each line, i.e. linei=fi(ki,(xm,ym),si). In view of the runway features in the image, the parameter values chosen here are: the position deviation threshold from the rectangular sampling window centre is 8, and the length selection parameter is si < 10. The preliminarily selected lines are then further screened according to the positional relationship between the two runway boundaries. Denote the two runway boundary lines lm, ln, with slopes km, kn and intercepts bm, bn. First, km and kn must be a pair of values of opposite sign; second, the lengths s of the two boundary segments must not differ greatly; finally, going from the lower image border to the upper border, the distance between the two runway boundaries decreases.
Step 2, selection of trace points on the runway boundary lines. On the basis of the first-frame runway boundaries obtained in step 1, the runway is tracked. Two sampled points can be chosen arbitrarily on each of the two runway boundary lines; each point is then tracked, and the two runway boundaries are finally determined by the rule that two points determine a straight line. Considering the clarity of the runway boundaries, the sampled points are chosen by the following rule. Suppose the lengths of the two runway boundary segments in the image are Li1 and Li2, and the lower ends of the runway boundary lines intersect the lower image border at (xi1,yi1) and (xi2,yi2). Points on the boundary lines satisfying the following conditions are selected, as illustrated in Fig. 3. With reference to formula (8), the four sampled points (x1,y1), (x2,y2) and (x3,y3), (x4,y4) are randomly selected within the intervals given by the rule of step 2 in the technical scheme above.
Wherein, (x0,y0) is the intersection point of the two runway boundaries; Lxi1, Lxi2 and Lyi1, Lyi2 are the horizontal and vertical coordinate differences between the endpoints of the two runway boundary segments. Following the rule above, suitable sampled points are obtained. Meanwhile, suppose the two runway boundary line equations are expressed in matrix form as:
Y=KX+B (16)
Wherein, K is the slope matrix and B is the intercept matrix. The runway is tracked by tracking the two points chosen on each of the two boundary lines; that is, the sampling-point matrices [x1 x2 x3 x4]T and [y1 y2 y3 y4]T determine the two runway boundary lines.
Step 3, tracking of the runway sampled points. In the next frame, for each of the 4 sampled points obtained in step 2, a rectangular sampling window Zi, i=1,2,3,4, is chosen in order to track the sampled point. Let the length and width of the rectangular sampling window Zi be Hi and Wi; then they are related by:
Wi=(1+ θi)Hi/|ki| (17)
Wherein, θi is a ratio margin; ki is the slope of the line containing the sampled point corresponding to the window. By choosing a suitable Hi, the size of the rectangular sampling window can be updated in every frame according to the slope of the line containing each sampled point. Lines are extracted in each sampling window from its gradient direction map, i.e. by LSD line detection, and the detected line set is screened according to slope and position (using the position, length and slope conditions given in step 3 of the technical scheme), finally giving the accurate target line l_ti in the sampling window:
Wherein, l_ti is the target line; (xi,yi) is the top-left vertex of the current rectangular sampling window; Wi, Hi are the width and length of the current sampling window; (X̄, Ȳ) is the midpoint of a candidate line segment in the candidate set; γi is the threshold on the distance between a candidate segment and the window centre, i.e. a constraint using the local optimum within the rectangular sampling window; λi is the length threshold for candidate lines; (xleft,yleft), (xright,yright) are the two endpoints of a candidate segment; k(t-1)i is the slope of the line containing the sampled point in the previous frame; ηi is the slope-difference threshold between two frames, i.e. a constraint using the global optimum of the containing line. Combining the local optimum within the rectangular window and the global optimum of this frame, the most suitable line segment is chosen, and the midpoint of the target segment is extracted as the trace point of the runway boundary in this frame, i.e.
Wherein, (xt,yt) is the final tracking result of the sampled point in this rectangular sampling window. The same method is used to complete the tracking and localization of the remaining sampled points on the runway boundaries. Step 4 is then carried out, and the determined runway boundary trace point positions are used to determine the runway boundaries of the next video frame.
Step 4: after step 3 gives the tracking result of each sampled point in this frame, the runway boundary lines are fitted from the tracked positions and marked as the runway boundaries for the next frame. Suppose the tracking results on the two boundary lines are (x1,y1), (x2,y2) and (x3,y3), (x4,y4) respectively. The boundary line equations are then obtained as in step 4 of the technical scheme, i.e. l1: y = k1x + b1 and l2: y = k2x + b2.
Wherein, k1, k2 are the slopes of the two boundary lines; b1, b2 are their intercepts on the y axis; l1, l2 are the finally fitted line equations. The intersection between the runway boundary lines and their intersections with the image border determine the runway region, i.e. the ROI.
Step 5: on the basis of the runway region and its boundary lines obtained in step 4, the detected runway boundaries are enhanced. The runway boundaries are enhanced by marking the fitted lines, which enhances the pilot's ability to see the airport runway during landing.
Step 6: repeat steps 2 to 5 until the aircraft landing is complete.
To further illustrate the validity of this method in pilot landing vision enhancement applications, it is compared with a detection-based vision enhancement algorithm in terms of runway boundary tracking accuracy and vision enhancement real-time performance. Vision enhancement is performed on a simulated aircraft landing video of 150 frames with frame size 488 × 191, and the vision enhancement effect of the two methods is considered in a low-visibility scene. The comparison results are shown in Table 1 below. It can be seen that this method not only improves the runway boundary tracking accuracy, but also that its processing time is an order of magnitude better than that of the detection-based runway boundary enhancement algorithm.
Table 1. Comparison of this method with a detection-based vision enhancement algorithm under low visibility

Claims (3)

1. An aircraft landing vision enhancement method based on runway boundary enhancement, characterised by the following steps:
Step 1, detect the runway boundaries in the first video frame: apply noise-reduction preprocessing to the first frame, process it with the LSD line detection algorithm to obtain the set of all line segments L={l1,l2,l3,...}, then constrain each line using its slope ki, midpoint (xim,yim) and length si through the constraint function linei=fi(ki,(xim,yim),si), obtaining the two runway boundary lines linei1, linei2 in the first frame;
Step 2, select trace points on the runway boundary lines: the four sampled points (x1,y1), (x2,y2) and (x3,y3), (x4,y4) are randomly selected within the intervals determined by the following rule:
<mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>1</mn> </msub> <mo>)</mo> <mo>:</mo> <mfrac> <mn>2</mn> <mn>5</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>x</mi> <mi>i</mi> <mn>1</mn> </mrow> </msub> <mo>&lt;</mo> <mo>|</mo> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>x</mi> <mrow> <mi>i</mi> <mn>1</mn> </mrow> </msub> <mo>|</mo> <mo>&lt;</mo> <mfrac> <mn>1</mn> <mn>2</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>x</mi> <mi>i</mi> <mn>1</mn> </mrow> </msub> <mo>;</mo> <mfrac> <mn>2</mn> <mn>5</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>y</mi> <mi>i</mi> <mn>1</mn> </mrow> </msub> <mo>&lt;</mo> <mo>|</mo> <msub> <mi>y</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>y</mi> <mrow> <mi>i</mi> <mn>1</mn> </mrow> </msub> <mo>|</mo> <mo>&lt;</mo> <mfrac> <mn>1</mn> <mn>2</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>y</mi> <mi>i</mi> <mn>1</mn> </mrow> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>2</mn> </msub> <mo>)</mo> <mo>:</mo> <mfrac> <mn>1</mn> <mn>4</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>x</mi> <mi>i</mi> <mn>1</mn> </mrow> </msub> <mo>&lt;</mo> <mo>|</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>-</mo> <msub> <mi>x</mi> <mrow> <mi>i</mi> <mn>1</mn> </mrow> </msub> <mo>|</mo> <mo>&lt;</mo> <mfrac> <mn>1</mn> <mn>3</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>x</mi> <mi>i</mi> <mn>1</mn> </mrow> </msub> <mo>;</mo> <mfrac> <mn>1</mn> <mn>4</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>y</mi> <mi>i</mi> <mn>1</mn> </mrow> </msub> <mo>&lt;</mo> <mo>|</mo> <msub> <mi>y</mi> <mn>2</mn> </msub> <mo>-</mo> <msub> <mi>y</mi> <mrow> <mi>i</mi> <mn>1</mn> </mrow> </msub> <mo>|</mo> <mo>&lt;</mo> <mfrac> <mn>1</mn> <mn>3</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>y</mi> <mi>i</mi> <mn>1</mn> </mrow> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>3</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>3</mn> </msub> <mo>)</mo> <mo>:</mo> <mfrac> <mn>1</mn> <mn>4</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>x</mi> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>&lt;</mo> <mo>|</mo> <msub> <mi>x</mi> <mn>3</mn> </msub> <mo>-</mo> <msub> <mi>x</mi> <mrow> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>|</mo> <mo>&lt;</mo> <mfrac> <mn>1</mn> <mn>3</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>x</mi> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>;</mo> <mfrac> <mn>1</mn> <mn>4</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>y</mi> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>&lt;</mo> <mo>|</mo> <msub> <mi>y</mi> <mn>3</mn> </msub> <mo>-</mo> <msub> <mi>y</mi> <mrow> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>|</mo> <mo>&lt;</mo> <mfrac> <mn>1</mn> <mn>3</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>y</mi> <mi>i</mi> <mn>2</mn> </mrow> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>4</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>4</mn> </msub> <mo>)</mo> <mo>:</mo> <mfrac> <mn>2</mn> <mn>5</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>x</mi> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>&lt;</mo> <mo>|</mo> <msub> <mi>x</mi> <mn>4</mn> </msub> <mo>-</mo> <msub> <mi>x</mi> <mrow> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>|</mo> <mo>&lt;</mo> <mfrac> <mn>1</mn> <mn>2</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>x</mi> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>;</mo> <mfrac> <mn>2</mn> <mn>5</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>y</mi> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>&lt;</mo> <mo>|</mo> <msub> 
<mi>y</mi> <mn>4</mn> </msub> <mo>-</mo> <msub> <mi>y</mi> <mrow> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>|</mo> <mo>&lt;</mo> <mfrac> <mn>1</mn> <mn>2</mn> </mfrac> <msub> <mi>L</mi> <mrow> <mi>y</mi> <mi>i</mi> <mn>2</mn> </mrow> </msub> </mrow> </mtd> </mtr> </mtable> </mfenced>
<mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <msub> <mi>L</mi> <mrow> <mi>x</mi> <mi>i</mi> <mn>1</mn> </mrow> </msub> <mo>=</mo> <mo>|</mo> <msub> <mi>x</mi> <mn>0</mn> </msub> <mo>-</mo> <msub> <mi>x</mi> <mrow> <mi>i</mi> <mn>1</mn> </mrow> </msub> <mo>|</mo> <mo>,</mo> <msub> <mi>L</mi> <mrow> <mi>y</mi> <mi>i</mi> <mn>1</mn> </mrow> </msub> <mo>=</mo> <mo>|</mo> <msub> <mi>y</mi> <mn>0</mn> </msub> <mo>-</mo> <msub> <mi>y</mi> <mrow> <mi>i</mi> <mn>1</mn> </mrow> </msub> <mo>|</mo> </mtd> </mtr> <mtr> <mtd> <msub> <mi>L</mi> <mrow> <mi>x</mi> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>=</mo> <mo>|</mo> <msub> <mi>x</mi> <mn>0</mn> </msub> <mo>-</mo> <msub> <mi>x</mi> <mrow> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>|</mo> <mo>,</mo> <msub> <mi>L</mi> <mrow> <mi>y</mi> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>=</mo> <mo>|</mo> <msub> <mi>y</mi> <mn>0</mn> </msub> <mo>-</mo> <msub> <mi>y</mi> <mrow> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>|</mo> </mtd> </mtr> </mtable> </mfenced>
Wherein: (x0,y0) is the intersection point of the two runway boundaries; the lower ends of the runway boundaries intersect the lower image border at (xi1,yi1) and (xi2,yi2); Lxi1, Lxi2 and Lyi1, Lyi2 are the horizontal and vertical coordinate differences between the endpoints of the two runway boundary segments;
The two runway boundary lines are expressed in matrix form as:
Y=KX+B
Wherein: K is the slope matrix and B is the intercept matrix; the sampled points (x1,y1), (x2,y2) determine one runway boundary line, and the sampled points (x3,y3), (x4,y4) determine the other runway boundary line;
Step 3, track the runway sampled points in the next frame: for each of the 4 sampled points obtained in step 2, set a sampling window Zi, i=1,2,3,4;
Extract lines in each sampling window with the LSD line detection algorithm, and screen the detected line set with the following conditions to obtain the accurate target line l_ti in the sampling window:
l_ti = { l | sqrt((xi + Wi/2 - X̄)^2 + (yi + Hi/2 - Ȳ)^2) < γi,
             sqrt((xleft - xright)^2 + (yleft - yright)^2) > λi,
             |(yleft - yright)/(xleft - xright) - k(t-1)i| < ηi }
Wherein, (xi,yi) is the top-left vertex of the current rectangular sampling window; Wi, Hi are the width and length of the current sampling window; (X̄, Ȳ) is the midpoint of a candidate line segment in the candidate set; γi is the threshold on the distance between a candidate segment and the window centre, a constraint using the local optimum within the rectangular sampling window; λi is the length threshold for candidate lines; (xleft,yleft), (xright,yright) are the two endpoints of the candidate segment; k(t-1)i is the slope of the line containing the sampled point in the previous frame; ηi is the slope-difference threshold between two frames, a further constraint using the global optimum of the containing line;
The midpoint of the target line segment is then extracted as the trace point of the runway boundary in this frame:
(xt, yt) = ((xleft + xright)/2, (yleft + yright)/2)
Wherein: (xt,yt) is the final tracking result of the sampled point in this rectangular sampling window;
The current rectangular sampling window Zi has length Hi and width Wi, related by Wi = (1 + θi)Hi/|ki|, wherein: θi is a ratio margin; ki is the slope of the line containing the sampled point corresponding to the window;
Thus the tracking results of the 4 sampled points in this frame are: (x1′,y1′), (x2′,y2′), (x3′,y3′), (x4′,y4′);
Step 4, fit the runway boundary lines: establish the boundary line equations from the tracking results of the 4 sampled points:
k1 = (y2′ - y1′)/(x2′ - x1′), k2 = (y4′ - y3′)/(x4′ - x3′); b1 = y1′ - k1x1′, b2 = y3′ - k2x3′
<mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <msub> <mi>l</mi> <mn>1</mn> </msub> <mo>:</mo> <mi>y</mi> <mo>=</mo> <msub> <mi>k</mi> <mn>1</mn> </msub> <mi>x</mi> <mo>+</mo> <msub> <mi>b</mi> <mn>1</mn> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>l</mi> <mn>2</mn> </msub> <mo>:</mo> <mi>y</mi> <mo>=</mo> <msub> <mi>k</mi> <mn>2</mn> </msub> <mi>x</mi> <mo>+</mo> <msub> <mi>b</mi> <mn>2</mn> </msub> </mrow> </mtd> </mtr> </mtable> </mfenced>
l1 and l2 are the two resulting boundary line equations; the intersection between l1 and l2 together with their intersections with the image border determines the runway region;
Wherein: k1, k2 are the slopes of the two boundary lines; b1, b2 are their intercepts on the y axis;
Step 5, runway boundary enhancement: mark the two runway boundary lines l1 and l2 obtained in step 4 on this frame, so that the runway boundaries of the original image are enhanced;
Step 6: for the next video frame, repeat steps 2 to 5 until the aircraft has landed.
2. The aircraft landing vision enhancement method based on runway boundary enhancement according to claim 1, characterised in that: the initial length Hi of the rectangular sampling windows on the two boundary lines is 48; the ratio margin θi is 1.5; the position-difference threshold γi between a candidate segment and the rectangular sampling window centre, the length threshold λi of candidate lines, and the slope-difference threshold ηi between two frames take fixed values.
3. The aircraft landing vision enhancement method based on runway boundary enhancement according to claim 1, characterised in that:
The parameter values in the constraint function are: the position deviation threshold from the rectangular sampling window centre is 8, and the length selection parameter is si < 10.
CN201510205841.0A 2015-04-27 2015-04-27 Aircraft landing vision enhancement method based on runway boundary enhancement Active CN104766337B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510205841.0A CN104766337B (en) 2015-04-27 2015-04-27 Aircraft landing vision enhancement method based on runway boundary enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510205841.0A CN104766337B (en) 2015-04-27 2015-04-27 Aircraft landing vision enhancement method based on runway boundary enhancement

Publications (2)

Publication Number Publication Date
CN104766337A CN104766337A (en) 2015-07-08
CN104766337B (en) 2017-10-20

Family

ID=53648142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510205841.0A Active CN104766337B (en) 2015-04-27 2015-04-27 Aircraft landing vision enhancement method based on runway boundary enhancement

Country Status (1)

Country Link
CN (1) CN104766337B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3042882B1 (en) * 2015-10-22 2018-09-21 Thales SYSTEM PROVIDED TO PROVIDE OPERATOR WITH INCREASED VISIBILITY AND ASSOCIATED METHOD
US9936191B2 (en) * 2016-01-27 2018-04-03 Honeywell International Inc. Cockpit display systems and methods for generating cockpit displays including enhanced flight visibility indicators
CN106611414B (en) * 2016-12-06 2019-08-20 中国航空工业集团公司洛阳电光设备研究所 A kind of enhancing display methods enhancing runway in visual system and enhancing display
CN106934832B (en) * 2017-03-23 2019-07-09 电子科技大学 A kind of simple straight line automatic positioning method towards vision line walking
CN112836587A (en) * 2021-01-08 2021-05-25 中国商用飞机有限责任公司北京民用飞机技术研究中心 Runway identification method and device, computer equipment and storage medium
CN114037637B (en) * 2022-01-10 2022-04-19 苏州浪潮智能科技有限公司 Image data enhancement method and device, computer equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008387A (en) * 2014-05-19 2014-08-27 山东科技大学 Lane line detection method based on feature point piecewise linear fitting

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8750567B2 (en) * 2012-04-09 2014-06-10 GM Global Technology Operations LLC Road structure detection and tracking

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008387A (en) * 2014-05-19 2014-08-27 山东科技大学 Lane line detection method based on feature point piecewise linear fitting

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Detection of Airport Runway Edges using Line Detection Techniques; Naidu V et al.; Computers & Graphics; 2011-12-31; Vol. 31, No. 3; pp. 493-500 *
一个机场跑道的自动识别系统 (An automatic recognition system for airport runways); 张会章 et al.; 《人工智能及识别技术》; 2001-12-31; Vol. 27, No. 12; pp. 77-78 *

Also Published As

Publication number Publication date
CN104766337A (en) 2015-07-08

Similar Documents

Publication Publication Date Title
CN104766337B (en) Aircraft landing vision enhancement method based on runway boundary enhancement
CN105488454B (en) Front vehicles detection and ranging based on monocular vision
CN105718870B (en) Based on the preceding roadmarking extracting method to camera in automatic Pilot
CN102708356B (en) Automatic license plate positioning and recognition method based on complex background
CN102682292B (en) Method based on monocular vision for detecting and roughly positioning edge of road
CN103324930B (en) A kind of registration number character dividing method based on grey level histogram binaryzation
CN101334836B (en) License plate positioning method incorporating color, size and texture characteristic
CN103324913B (en) A kind of pedestrian event detection method of Shape-based interpolation characteristic sum trajectory analysis
CN100544446C (en) The real time movement detection method that is used for video monitoring
CN107862290A (en) Method for detecting lane lines and system
CN105005771A (en) Method for detecting full line of lane based on optical flow point locus statistics
CN108549864A (en) Area-of-interest filter method towards vehicle-mounted thermal imaging pedestrian detection and device
CN104063882B (en) Vehicle video speed measuring method based on binocular camera
CN104809433A (en) Zebra stripe detection method based on maximum stable region and random sampling
CN109190483B (en) Lane line detection method based on vision
CN103632363A (en) Object-level high-resolution remote sensing image change detection method based on multi-scale fusion
CN105740809A (en) Expressway lane line detection method based on onboard camera
CN104021378A (en) Real-time traffic light recognition method based on space-time correlation and priori knowledge
CN106887004A (en) A kind of method for detecting lane lines based on Block- matching
CN105205785A (en) Large vehicle operation management system capable of achieving positioning and operation method thereof
CN102902983B (en) A kind of taxi identification method based on support vector machine
CN107038411A (en) A kind of Roadside Parking behavior precise recognition method based on vehicle movement track in video
CN103679704A (en) Video motion shadow detecting method based on lighting compensation
CN109635737A (en) Automobile navigation localization method is assisted based on pavement marker line visual identity
CN106919939B (en) A kind of traffic signboard tracks and identifies method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant