CN105335985A - Real-time capture method and system of docking airplane on the basis of machine vision - Google Patents


Info

Publication number
CN105335985A
CN105335985A (application CN201410377269.1A; granted publication CN105335985B)
Authority
CN
China
Prior art keywords
aircraft
region
background
real
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410377269.1A
Other languages
Chinese (zh)
Other versions
CN105335985B (en)
Inventor
邓览
程建
王帅
李鸿升
习友宝
王海彬
王龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen CIMC Tianda Airport Support Ltd
Original Assignee
China International Marine Containers Group Co Ltd
Shenzhen CIMC Tianda Airport Support Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China International Marine Containers Group Co Ltd and Shenzhen CIMC Tianda Airport Support Ltd
Priority to CN201410377269.1A
Publication of CN105335985A
Application granted
Publication of CN105335985B
Legal status: Active


Abstract

The invention discloses a machine-vision-based method and system for capturing a docking aircraft in real time. The method comprises the following steps: dividing the monitored scene into different information-processing functional zones to improve processing efficiency; simulating the dynamic distribution of the scene background with a median-filter background model, a Gaussian-mixture background model, or a kernel-density-estimation background model to build a background model, and then differencing the current frame against the background model to eliminate the background; collecting statistics on the gray values of the foreground region extracted by background elimination in order to remove shadows; and establishing a standard front-view aircraft region template, detecting and extracting the target region, computing the vertical projection curve of that region, and computing the correlation coefficient between this curve and the vertical projection curve of the standard template to judge whether an aircraft is docking, with further verification by detecting the captured aircraft's engines and nose wheel. The invention also provides a real-time docking-aircraft capture system that implements the above method.

Description

Machine-vision-based real-time capture method and system for a docking aircraft
Technical field
The present invention relates to aircraft stand positioning and guidance technology, and in particular to a machine-vision-based method and system for real-time capture of a docking aircraft, covering moving-target detection, feature recognition and verification for aircraft docking guidance.
Background technology
Aircraft docking guidance is the process of leading an arriving aircraft from the end of the taxiway to its stand and parking it accurately. Its purpose is to ensure that the docking aircraft parks safely and precisely, to facilitate accurate connection of the aircraft with the various ground service interfaces, and to allow the boarding bridge to mate effectively with the aircraft door, thereby improving airport operating efficiency and safety. Automated docking guidance systems fall into three categories according to the sensors they use: (1) buried induction coils; (2) laser scanning and ranging; (3) visual perception. Because laser-based and vision-based systems can effectively acquire visual information about the docking aircraft, these two categories are also called visual docking guidance systems. Buried-coil systems determine the position of the docking aircraft by detecting whether a metal object passes over or stops above the coil. They respond quickly, are inexpensive, and are insensitive to weather and illumination, but their error is large and their interference immunity low; moreover, the leads and electronic components buried in the ground are easily crushed, reliability and measurement accuracy are low, the aircraft type cannot be identified, and maintainability is poor. Laser scanning and ranging systems determine the aircraft's position, speed, type and other information by laser ranging and scanning; they are unaffected by ambient illumination, only slightly affected by weather, highly accurate and easy to maintain, but they are expensive, and the limited laser scanning frequency restricts the real-time performance and stability of the guidance. Visual-perception systems acquire images of the docking process through optical imaging and then determine the position, speed, type and other information of the docking aircraft with intelligent information-processing technology; their architecture is simple, their cost is low, and they offer a high level of intelligence and good maintainability, but they are demanding with respect to weather and illumination and adapt to them poorly.
With the continuing development of visual imaging, intelligent information processing and computer technology, visual docking guidance can obtain accurate docking information quickly and has been applied in airport docking guidance systems. The visual docking guidance system (VDGS) developed by Honeywell (USA) and the video docking guidance system (VDOCKS) developed by Siemens represent the leading international level of visual guidance equipment and have been deployed at some airports, but these systems are demanding with respect to weather and illumination, adapt poorly, and lack intelligent information-processing capability. Throughout the docking guidance process, aircraft tracking and positioning, type recognition and verification are all performed only after the docking aircraft has been captured; if the system fails to capture the aircraft, none of the subsequent operations can proceed. Fast and accurate capture of the docking aircraft is therefore the foundation and precondition for the guidance system to complete its task, and a fast, accurate capture method provides more accurate information and more processing time for subsequent type recognition, tracking and guidance.
Summary of the invention
The technical problem to be solved by the present invention is to provide a machine-vision-based real-time capture method and system that can capture a docking aircraft quickly and accurately.
To achieve this goal, the invention provides a machine-vision-based real-time method for capturing a docking aircraft, comprising the following steps:
S1, aircraft docking scene configuration: dividing the monitored scene into different information-processing functional zones so as to narrow the image region to be processed and improve processing efficiency;
S2, aircraft capture, comprising:
S21, background elimination: simulating the dynamic distribution of the scene background with a median-filter background model, a Gaussian-mixture background model, or a kernel-density-estimation background model to build a background model, and then differencing the current frame against the background model to eliminate the background;
S22, shadow elimination: collecting statistics on the gray values of the foreground region extracted by background elimination, finding the maximum gray value gmax and minimum gray value gmin, and performing shadow elimination in the region whose gray value is less than T = gmin + (gmax - gmin) * 0.5;
S23, region classification: establishing a standard front-view aircraft region template, extracting the target region by change detection, computing the vertical projection curve of that region, and computing the correlation coefficient between this curve and the vertical projection curve of the standard template; if the correlation coefficient is greater than or equal to 0.9, the target is an aircraft;
S24, feature verification: further verifying that the target is an aircraft by detecting the engines and nose wheel of the captured aircraft.
In the above machine-vision-based real-time capture method, after step S1 the following step may also be included:
S10, video image preprocessing: performing gamma correction and denoising on the image to improve its visual quality and sharpness.
In the above method, in the background elimination step S21, the single-Gaussian background model is established as follows:
S211, background model establishment: initialize the background image by computing, over a period of T frames of the video sequence f(x, y), the mean gray value μ₀ and variance σ₀² of each pixel, which together define the initial background image B₀ with Gaussian distribution η(x, μ₀, σ₀):

$$\mu_0(x,y)=\frac{1}{T}\sum_{i=0}^{T-1}f_i(x,y),\qquad \sigma_0^2(x,y)=\frac{1}{T}\sum_{i=0}^{T-1}\bigl[f_i(x,y)-\mu_0(x,y)\bigr]^2$$

A Gaussian model η(x_i, μ_i, σ_i) is then maintained for each pixel of every frame, where i is the frame index, x_i is the current pixel value, μ_i is the mean of the pixel's Gaussian model, and σ_i is its standard deviation. If η(x_i, μ_i, σ_i) ≤ Tp, where Tp is a probability threshold, the pixel is judged to be a foreground point; otherwise it is a background point.
S212, background model update: if the scene changes, the background model is updated with the real-time information provided by successive frames from the camera device:

$$\mu_{i+1}=(1-\alpha)\mu_i+\alpha x_i$$
$$\sigma_{i+1}^2=(1-\alpha)\sigma_i^2+\alpha d_i^2,\qquad d_i=x_i-\mu_i$$

where α is the update rate, with a value between 0 and 1.
In the above method, α is set to 0.05 if the pixel is background and 0.0025 if it is foreground.
In the above method, the feature verification step S24 comprises:
S241, extremely-dark region extraction: compute the gray histogram of the image; within the middle 1% ~ 99% of the gray levels, find the maximum and minimum gray values whose pixel counts are nonzero, and use a preset extremely-dark decision threshold to extract the darkest part of the image, yielding an extremely-dark region;
S242, quasi-circle detection: extract all outer boundaries of the extremely-dark region and, for each boundary, compute its centroid from the boundary moments. The (j, i)-th moment of a boundary is defined as

$$m_{ji}=\sum_{x,y}f(x,y)\,x^j y^i$$

and the centroid coordinates are

$$\bar{x}=\frac{m_{10}}{m_{00}},\qquad \bar{y}=\frac{m_{01}}{m_{00}}$$

For all pixels of the current boundary, compute their distance to this centroid; if the ratio of the maximum to the minimum distance exceeds a circle decision threshold, the region is judged non-circular and the next region is examined; otherwise the centroid coordinates and radius of the quasi-circular region are recorded;
S243, detecting the aircraft engines among the quasi-circular regions by a similarity judgment;
S244, detecting the aircraft nose wheel; if the engines and the nose wheel are both confirmed, the capture succeeds.
In the above method, in the engine detection step S243, suppose M quasi-circular regions have been detected; the similarity between the i-th and j-th regions is computed as

$$\mathrm{Similarity}_{ij}=\lvert \mathrm{Height}_i-\mathrm{Height}_j\rvert \times \lvert \mathrm{Radius}_i-\mathrm{Radius}_j\rvert$$

where Height is the centroid height and Radius is the radius; when Similarity_ij is less than a preset similarity threshold, regions i and j are judged to be the aircraft engines.
In the above method, in step S243, if no engine is detected, iterative detection is performed: the extremely-dark decision threshold, circle decision threshold and similarity threshold are each increased, and steps S241-S243 are repeated. If the engines are still not detected, a 7*7 circular template is used to apply a morphological opening to all extremely-dark regions, and steps S242-S243 are repeated.
If the engines are still not detected, the above iterative detection is performed twice more.
If the engines are still not detected, the image is judged to contain no engine.
In the above method, the increments of the extremely-dark decision threshold, circle decision threshold and similarity threshold are 0.05, 0.5 and 20 respectively.
In the above method, in step S244, the region below the midpoint between the detected engines, with a height of 4 engine radii, is taken as the search region for the nose wheel. Within the search region, the 256 gray levels are quantized to 64, and the first peak and first valley of the 64-level gray histogram are located; the optimal peak position BestPeak and optimal valley position BestValley in the original 256-level gray histogram are then defined as

$$\mathrm{BestPeak}=\underset{peak*4-4\,\le\, i\,\le\, peak*4+3}{\arg\max}\{hist_{256}(i)\}$$
$$\mathrm{BestValley}=\underset{\mathrm{BestPeak}\,<\, i\,\le\, valley*4+3}{\arg\min}\{hist_{256}(i)\}$$

where hist256(i) is the number of pixels with gray level i in the 256-level gray histogram.
The gray levels are split at BestValley: in the part below BestValley, small noise patches are removed, and a flat elliptical structuring element is used to apply a morphological closing to the image.
The seven Hu moment invariants of all remaining boundaries are then computed and compared with the moment features of a preset standard nose-wheel model; when the difference measure is below a set threshold, the middle candidate region is judged to be the nose wheel.
To better achieve the above purpose, the present invention also provides a docking-aircraft real-time capture system that implements the above machine-vision-based capture method.
The technical effects of the present invention are:
The present invention has effective intelligent visual information-processing capability; it can realize aircraft capture, tracking and positioning, type recognition and verification during the docking process, provides intelligent visual monitoring of the apron, and can effectively raise the level of automation, intelligence and operational management of civil aviation airports.
The present invention is described below with reference to the drawings and specific embodiments, which are not intended to limit the invention.
Accompanying drawing explanation
Fig. 1 is a schematic diagram of the docking-aircraft real-time capture system of an embodiment of the invention;
Fig. 2 is a working diagram of real-time docking-aircraft capture of an embodiment of the invention;
Fig. 3 is a flowchart of the docking-aircraft real-time capture method of an embodiment of the invention;
Fig. 4 is a scene definition schematic diagram of an embodiment of the invention;
Fig. 5 is a background elimination flowchart of an embodiment of the invention;
Fig. 6 is the front-view vertical projection curve of an aircraft in an embodiment of the invention;
Fig. 7 is a schematic diagram of typical extremely-dark regions in an embodiment of the invention;
Fig. 8 is a flowchart of the similarity judgment of an embodiment of the invention;
Fig. 9 is the 256-level gray histogram of an embodiment of the invention (abscissa: gray level; ordinate: number of pixels at that gray level);
Fig. 10 is the 64-level gray histogram of an embodiment of the invention (abscissa: gray level; ordinate: number of pixels at that gray level);
Fig. 11 is an example of the effect of the closing operation of an embodiment of the invention.
Reference numerals:
1 camera device
2 central processing device
3 display device
4 aircraft stand apron
41 stop line
42 guide line
5 aircraft
6 capture zone
7 tracking and positioning zone
8 ground service equipment zone
9 marker points
91 first marker point
10 first peak
11 first valley
S1-S24 steps
Embodiment
The structural and working principles of the present invention are described in detail below with reference to the drawings:
Referring to Fig. 1 and Fig. 2 (Fig. 1 is a schematic diagram of the docking-aircraft real-time capture system of an embodiment of the invention; Fig. 2 is a working diagram of real-time docking-aircraft capture of the same embodiment), the machine-vision-based docking-aircraft tracking and positioning system of the invention consists mainly of a camera device 1, a central processing device 2 and a display device 3. The camera device 1 is connected to the central processing device 2, and the central processing device 2 to the display device 3; the camera device 1 sends the captured images to the central processing device 2, which sends display content, including guidance information, to the display device 3. The camera device 1 is installed behind the stop line 41 of the aircraft stand apron 4, preferably facing the guide line 42, at a height above the fuselage of the aircraft 5, about 8 m being suitable. The central processing device 2 may be a computer capable of receiving, processing and storing data, generating display image data, and transmitting data; the functional modules for aircraft docking scene configuration, video image preprocessing, aircraft capture, aircraft tracking, aircraft positioning, aircraft type recognition and verification, as well as the module generating the information display, are all installed in the central processing device 2 as software. The display device 3 is preferably a large information display screen in the airport that pilots can watch; in addition, airport staff may carry handheld display devices to observe the aircraft's status.
Referring to Fig. 3, the flowchart of the docking-aircraft real-time capture method of an embodiment of the invention, the machine-vision-based real-time capture method of the invention comprises the following steps:
Step S1, aircraft docking scene configuration: the monitored scene is divided into different information-processing functional zones so as to narrow the image region to be processed and improve processing efficiency.
Scene definition must first be carried out in the actual scene: on a computer, the monitored scene is divided into different information-processing functional zones to narrow the processed image region and improve efficiency, and information closely related to aircraft positioning, such as the guide line and stop line, is marked. In the actual scene, a black-and-white interval scale is laid out immediately beside the guide line, with equal black and white intervals of at most 1 m; depending on the resolution of the camera device, finer intervals such as 0.5 m or 0.25 m may be used. The total length of the scale need not exceed the range over which aircraft distance is resolved, generally 50 m. The remaining work is performed by pre-written software, which opens and displays the picture captured by the camera device and allows lines, boxes and points to be drawn by hand to mark the relevant regions, and the records are saved.
An image of the aircraft docking scene is captured and displayed when no aircraft is present; the scene definition schematic is shown in Fig. 4 (the scene definition schematic of an embodiment of the invention). The picture frame represents the picture shown during the marking operation and the regions available for marking; the dashed boxes in the figure are positions that may be marked manually. Lines are drawn by hand on the displayed image to mark the guide line 42 and the stop line 41, and their positions in the picture are recorded. Boxes are then drawn by hand to mark the capture zone 6, the tracking and positioning zone 7 and the relevant ground service equipment zone 8, and the positions of the capture zone 6 and the tracking and positioning zone 7 in the picture are recorded. Then, according to the scale laid out in the scene, points are marked by hand: the marker points 9, spaced at most 1 m apart immediately beside the guide line, are marked, their positions in the picture are recorded, together with the distance of each marker point 9 from the first marker point 91 in the actual scene. When marking the guide line 42, stop line 41 and marker points 9, the image portion to be marked may be magnified; when it is magnified to tens of pixels wide, the middle of the feature is marked by hand to improve marking precision. The positions of the capture zone 6 and tracking and positioning zone 7 need not be very strict: in the actual scene the upper edge of the capture zone 6 is about 100 m from the stop line 41, its lower edge about 50 m from the stop line 41, the upper edge of the tracking and positioning zone 7 about 50 m from the stop line 41, and the lower edge of the tracking and positioning zone 7 below the stop line 41.
After step S1, a video image preprocessing step S10 may also be included, in which gamma correction and denoising are applied to the image to improve its visual quality and sharpness. That is, conventional image processing methods, including gamma correction and denoising, are used to improve the visual effect of the image and make it clearer or more amenable to computer processing.
Step S2, aircraft capture, comprising:
Step S21, background elimination: the dynamic distribution of the scene background is simulated with a median-filter background model, a Gaussian-mixture background model, or a kernel-density-estimation background model to build a background model, and the current frame is then differenced against the background model to eliminate the background.
Referring to Fig. 5, a background elimination flowchart of an embodiment of the invention: the single-Gaussian background model simulates the dynamic distribution of the scene background and builds the background model, after which the current frame is differenced against the background model to eliminate the background. In a scene without aircraft, i.e. the required background, N frames are continuously acquired by the camera, and these N background frames are used to train the background model, determining the mean and variance of the Gaussian distribution. The procedure comprises the following steps:
Step S211, background model establishment: initialize the background image by computing, over a period of T frames of the video sequence f(x, y), the mean gray value μ₀ and variance σ₀² of each pixel, which together define the initial background image B₀ with Gaussian distribution η(x, μ₀, σ₀):

$$\mu_0(x,y)=\frac{1}{T}\sum_{i=0}^{T-1}f_i(x,y),\qquad \sigma_0^2(x,y)=\frac{1}{T}\sum_{i=0}^{T-1}\bigl[f_i(x,y)-\mu_0(x,y)\bigr]^2$$

A Gaussian model η(x_i, μ_i, σ_i) is then maintained for each pixel of every frame, where the subscript i is the frame index, x_i is the current pixel value, μ_i is the mean of the pixel's Gaussian model, and σ_i is its standard deviation. If η(x_i, μ_i, σ_i) ≤ Tp, where Tp is a probability threshold, the pixel is judged to be a foreground point; otherwise it is a background point (x_i is then said to match η(x_i, μ_i, σ_i)). In practice the probability threshold can be replaced by an equivalent threshold: let d_i = |x_i - μ_i|; in the common one-dimensional case the foreground detection threshold is set on d_i/σ_i: if d_i/σ_i > T (with T between 2 and 3), the point is judged foreground, otherwise background.
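The model training and the equivalent d_i/σ_i foreground test above can be sketched in Python with NumPy (a minimal illustration; the function names and the small `eps` guard are assumptions, not part of the patent):

```python
import numpy as np

def build_background(frames):
    """Train a per-pixel single-Gaussian background model from T
    aircraft-free frames of shape (T, H, W), as in step S211."""
    frames = np.asarray(frames, dtype=np.float64)
    mu0 = frames.mean(axis=0)       # per-pixel mean gray value
    sigma0 = frames.std(axis=0)     # per-pixel standard deviation
    return mu0, sigma0

def foreground_mask(frame, mu, sigma, T=2.5, eps=1e-6):
    """Equivalent-threshold test: a pixel with d_i / sigma_i > T
    (T between 2 and 3) is judged foreground (True)."""
    d = np.abs(frame.astype(np.float64) - mu)
    return d / (sigma + eps) > T
```

A pixel far from its learned mean, relative to its learned spread, is flagged as part of the arriving aircraft; the rest of the frame is suppressed as background.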
Other background elimination approaches use other background models, such as the median-filter background model, the Gaussian-mixture background model and the kernel-density-estimation background model. The median-filter model takes the median of N frames as the background; the algorithm is simple but its effect is poor. The Gaussian-mixture model simulates changes in a dynamic scene by maintaining several Gaussian models; the algorithm is complex and its real-time performance poor. The kernel-density-estimation model is a powerful non-parametric background model that simulates the distribution of a dynamic scene well and adapts to some illumination variation, but the algorithm is very complex, demands much memory, and has very poor real-time performance.
Step S212, background model update: if the scene changes, the background model must respond to these changes and is updated with the real-time information provided by successive frames from the camera device:

$$\mu_{i+1}=(1-\alpha)\mu_i+\alpha x_i$$
$$\sigma_{i+1}^2=(1-\alpha)\sigma_i^2+\alpha d_i^2$$

where α is the update rate, with a value between 0 and 1. If the pixel is background, α is preferably 0.05; if it is foreground, α is preferably 0.0025.
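The running update above, with the slower rate on foreground pixels so that a briefly stationary object is absorbed only gradually, can be sketched as follows (function name is an assumption):

```python
import numpy as np

def update_background(mu, sigma, frame, fg_mask, a_bg=0.05, a_fg=0.0025):
    """Step S212: mu' = (1-a)mu + a*x and sigma'^2 = (1-a)sigma^2 + a*d^2,
    with a = 0.05 on background pixels and 0.0025 on foreground pixels."""
    x = frame.astype(np.float64)
    a = np.where(fg_mask, a_fg, a_bg)   # per-pixel update rate
    d = x - mu
    mu_new = (1.0 - a) * mu + a * x
    var_new = (1.0 - a) * sigma ** 2 + a * d ** 2
    return mu_new, np.sqrt(var_new)
```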
Step S22, shadow elimination: statistics are collected on the gray values of the foreground region extracted by background elimination, the maximum gray value gmax and minimum gray value gmin are found, and shadow elimination is performed in the region whose gray value is less than T = gmin + (gmax - gmin) * 0.5.
In this low-gray region, the gray-level ratio between each pixel and the corresponding background pixel is computed; if this ratio lies, preferably, between 0.3 and 0.9, the pixel is considered a shadow point. Morphological image processing is then applied: erosion followed by dilation eliminates small regions. Morphological processing normally moves a structuring element over the image in a convolution-like operation, applying a specific logical operation between the structuring element and the corresponding image pixels at each position; it can remove noise and interference and improve the signal-to-noise ratio of the image, with dilation and erosion as its basic operations. Repeated morphological erosion and dilation thus remove the small non-shadow regions among the shadow points, yielding the detected shadow region, which is eliminated; finally, repeated morphological dilation and erosion fill the holes in the required target region and connect its parts.
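The shadow-point selection described above (before the morphological clean-up) can be sketched as follows; the function name and the guard against division by zero are assumptions:

```python
import numpy as np

def shadow_candidates(fg_mask, background, frame, lo=0.3, hi=0.9):
    """Step S22 sketch: restrict to the low-gray half of the foreground
    (gray < gmin + 0.5*(gmax-gmin)) and mark pixels whose gray-level
    ratio to the background lies in (lo, hi) as shadow points."""
    gray = frame.astype(np.float64)
    vals = gray[fg_mask]
    gmin, gmax = vals.min(), vals.max()
    T = gmin + (gmax - gmin) * 0.5
    ratio = gray / np.maximum(background.astype(np.float64), 1.0)
    return fg_mask & (gray < T) & (ratio > lo) & (ratio < hi)
```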
Step S23, region classification: a standard front-view aircraft region template is established; because the aircraft region is characteristically wide at both sides and narrow in the middle, this template distinguishes aircraft from non-aircraft well. The target region is extracted by change detection and its vertical projection curve is computed (see Fig. 6, the front-view vertical projection curve of an aircraft in an embodiment of the invention); the correlation coefficient between this curve and the vertical projection curve of the standard front-view aircraft region template is then computed. If the correlation coefficient is large, for example greater than or equal to 0.9, the target is an aircraft; otherwise it is not.
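The projection-and-correlation test above can be sketched as follows (a minimal illustration; the crude truncation used to align curve lengths is an assumption, not the patent's method):

```python
import numpy as np

def vertical_projection(mask):
    """Column-wise count of foreground pixels: the vertical projection
    curve of step S23."""
    return mask.astype(np.float64).sum(axis=0)

def is_aircraft(mask, template_curve, thresh=0.9):
    """Compare the target's projection with the standard front-view
    template by Pearson correlation coefficient."""
    curve = vertical_projection(mask)
    tpl = np.asarray(template_curve, dtype=np.float64)
    n = min(len(curve), len(tpl))       # crude length alignment (assumption)
    r = np.corrcoef(curve[:n], tpl[:n])[0, 1]
    return bool(r >= thresh)
```

A wide-sides, narrow-middle target correlates strongly with the template; a target with the opposite profile does not.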
Step S24, feature verification: whether the target is an aircraft is further verified by detecting the engines and nose wheel of the captured aircraft.
The feature verification step S24 further comprises:
Step S241, extremely-dark region extraction: the gray histogram of the image is computed; within the middle 1% ~ 99% of the gray levels (i.e. usually gray levels 2 ~ 253), the maximum gray value (gmax) and minimum gray value (gmin) with nonzero pixel counts are obtained, and a preset threshold is used to extract the darkest part of the image, yielding an extremely-dark region.
In this embodiment, an extremely-dark decision threshold (BlackestJudge) preset to 0.05 is used; this threshold means the darkest 5% of the image and should be adjusted to the actual scene until the outlines of the two engines are just segmented out. The region of the image with gray values between gmin and (gmax - gmin) * BlackestJudge + gmin, i.e. the darkest part of the image, is extracted, yielding an extremely-dark region; a typical example is shown in Fig. 7 (the typical extremely-dark region schematic of an embodiment of the invention), in which the interior of each shape is an extremely-dark region.
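The extraction described above can be sketched as follows (function name is an assumption; the 2~253 band follows the text):

```python
import numpy as np

def extract_blackest(gray, blackest_judge=0.05):
    """Step S241 sketch: find gmin/gmax among occupied gray levels in the
    middle 1%-99% band (levels 2..253), then keep pixels with gray value
    up to gmin + (gmax - gmin) * blackest_judge."""
    hist = np.bincount(gray.ravel(), minlength=256)
    levels = np.arange(256)
    occupied = levels[(levels >= 2) & (levels <= 253) & (hist > 0)]
    gmin, gmax = occupied.min(), occupied.max()
    cut = gmin + (gmax - gmin) * blackest_judge
    return gray <= cut
```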
Step S242, quasi-circle detection: all outer boundaries of the extremely-dark region are extracted and, for each boundary, its centroid is computed from the boundary moments. The (j, i)-th moment of a boundary is defined as

$$m_{ji}=\sum_{x,y}f(x,y)\,x^j y^i$$

and the centroid coordinates are

$$\bar{x}=\frac{m_{10}}{m_{00}},\qquad \bar{y}=\frac{m_{01}}{m_{00}}$$

For all pixels of the current boundary, their distance to this centroid is computed; if the ratio of the maximum to the minimum distance exceeds a preset value (for example a circle decision threshold circleJudge preset to 1.5), the region is judged non-circular and the next region is examined. For regions that pass the judgment, the centroid coordinates and radius (the mean distance from the boundary to the centroid) of the quasi-circular region are recorded, and the similarity judgment is entered.
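The centroid-from-moments computation and the max/min distance test can be sketched as follows (a minimal illustration over a list of boundary points, with f(x, y) = 1 on the boundary; function names are assumptions):

```python
import numpy as np

def boundary_centroid(points):
    """Centroid of a boundary from its moments m00, m10, m01 (step S242)."""
    pts = np.asarray(points, dtype=np.float64)
    m00 = len(pts)                  # sum of f(x, y) = 1 over boundary pixels
    m10 = pts[:, 0].sum()
    m01 = pts[:, 1].sum()
    return m10 / m00, m01 / m00

def quasi_circle(points, circle_judge=1.5):
    """Accept the region when max/min boundary-to-centroid distance stays
    below the circle decision threshold; return (ok, centroid, radius)."""
    pts = np.asarray(points, dtype=np.float64)
    cx, cy = boundary_centroid(pts)
    d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    ok = bool(d.max() / d.min() <= circle_judge)
    return ok, (cx, cy), d.mean()   # radius = mean boundary-to-centroid distance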
Step S243, detecting the aircraft engines among the quasi-circular regions by similarity judgment;
See Fig. 8, which is a flowchart of the similarity judgment of one embodiment of the invention. In the present embodiment, suppose M quasi-circular regions are detected in total; the similarity of the i-th and the j-th regions is computed as:
Similarity_ij = |Height_i - Height_j| * |Radius_i - Radius_j|
where Height is the height of the barycenter and Radius is the radius (i.e. the mean distance from the boundary to the barycenter). When the similarity Similarity_ij is less than the threshold similarThresh, preset to 40, regions i and j are considered to be the aircraft engines.
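The pairing criterion can be sketched as a simple scan over the detected regions. This is an assumed reading (first matching pair wins; the patent does not specify a tie-breaking rule), and the function name and tuple input format are illustrative.

```python
def pair_engines(regions, similar_thresh=40):
    """Sketch of step S243: among M quasi-circular regions, find a pair
    (i, j) whose Similarity_ij = |Height_i - Height_j| * |Radius_i - Radius_j|
    falls below the preset threshold (40 in the embodiment).

    regions: list of (height_of_centroid, radius) tuples.
    Returns the first matching index pair, or None if no pair qualifies.
    """
    for i in range(len(regions)):
        for j in range(i + 1, len(regions)):
            hi, ri = regions[i]
            hj, rj = regions[j]
            if abs(hi - hj) * abs(ri - rj) < similar_thresh:
                return i, j  # regions i and j are taken as the two engines
    return None
```

Note the product form: two engines at nearly the same height with nearly the same radius give a small product even if one factor is moderate, which matches the symmetric placement of engines on the aircraft's front view.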
If no engine is detected, iterative detection is performed: the extreme-black decision threshold (BlackestJudge), the circular decision threshold (circleJudge) and the similarity threshold (similarThresh) are each increased (in the present embodiment the preferred increments are 0.05, 0.5 and 20 respectively), and steps S241-S243 are repeated. If the engines are still not detected, an opening operation with a 7*7 circular template is applied to all extreme-black regions, and steps S242-S243 are repeated.
If the engines are still not detected, the above iterative detection is performed twice more.
If the engines are still not detected, it is judged that no engine exists in the image. When a subsequent frame is processed, if n iteration steps were used for the previous frame, detection starts directly from iteration step n-1.
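The threshold-relaxation loop can be sketched as follows. This is a simplified illustration: `detect_once` is a hypothetical stand-in for steps S241-S243, the 7*7 opening-operation fallback is omitted, and the initial values and increments are the preferred ones from the text.

```python
def detect_engines_iteratively(detect_once, max_rounds=3):
    """Sketch of the fallback loop after step S243: if no engines are found,
    relax the three thresholds (BlackestJudge +0.05, circleJudge +0.5,
    similarThresh +20) and retry, for up to max_rounds rounds in total.

    detect_once(b, c, s): stand-in for S241-S243; returns an engine pair
    or None.  Also returns the number of rounds used, so the next frame
    can start its iteration at step n-1 as described in the text.
    """
    b, c, s = 0.05, 1.5, 40  # initial thresholds from the embodiment
    for step in range(max_rounds):
        result = detect_once(b, c, s)
        if result is not None:
            return result, step + 1
        b += 0.05
        c += 0.5
        s += 20
    return None, max_rounds
```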
Step S244, detecting the aircraft nose wheel; once the engines and the nose wheel are confirmed, the capture succeeds.
In step S244, the region below the middle of the detected engines, with a height of 4 engine radii, may be taken as the search region for the aircraft nose wheel. Within the search region, the 256 gray levels are quantized to 64 levels; see Fig. 9 and Fig. 10, where Fig. 9 is the 256-level gray histogram of one embodiment of the invention and Fig. 10 is the 64-level gray histogram, the abscissa being the gray level and the ordinate the number of pixels at that level. The first peak 10 and the first trough 11 are searched for in the 64-level histogram. If the first peak position after quantization is peak and the trough position is valley, the optimal peak position BestPeak and the optimal trough position BestValley in the original 256-level gray histogram are defined as follows:
BestPeak = argmax_{peak*4-4 <= i <= peak*4+3} { hist256(i) }

BestValley = argmin_{BestPeak < i <= valley*4+3} { hist256(i) }

where hist256(i) is the total number of pixels with gray level i in the 256-level gray histogram;
The gray levels are segmented at this optimal trough BestValley. For the part below BestValley, small noise spots are removed by area, and a closing operation is applied to the image with a flat elliptical structuring element; an example of the effect is shown in Fig. 11, which is an exemplary diagram of the effect of the closing operation of one embodiment of the invention.
Then the 7 Hu moment features of all boundaries in the image are computed and compared with the Hu moment features of a preset standard nose-wheel model. (On Hu moment features: geometric moment invariants were proposed by Hu in 1962 ("Visual pattern recognition by moment invariants") and are invariant to translation, rotation and scale; Hu constructed 7 invariants from second- and third-order central moments, so the 7 Hu moment features are uniquely determined.) When the dissimilarity is below a set threshold (a preferred value is 1), the region is judged to be a wheel. In this way the positions of three wheels in total can be obtained, the middle and lower one being the nose wheel.
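The peak/valley refinement from the 64-level histogram back to the 256-level histogram can be sketched as below. This is an illustrative implementation of the two argmax/argmin formulas above; the function name, the clamping to [0, 255], and the half-open search interval (BestPeak, valley*4+3] are stated assumptions.

```python
import numpy as np

def best_peak_valley(hist256, peak, valley):
    """Sketch of step S244's refinement: given the first peak and first
    trough found in the 64-level quantized histogram, recover the optimal
    positions in the original 256-level histogram.  Each quantized level q
    covers original levels 4q..4q+3; per the text, the peak is searched in
    peak*4-4..peak*4+3 and the valley in (BestPeak, valley*4+3].
    """
    lo, hi = max(0, peak * 4 - 4), min(255, peak * 4 + 3)
    best_peak = lo + int(np.argmax(hist256[lo:hi + 1]))
    v_hi = min(255, valley * 4 + 3)
    best_valley = best_peak + 1 + int(np.argmin(hist256[best_peak + 1:v_hi + 1]))
    return best_peak, best_valley
```

The threshold BestValley would then separate the dark wheel pixels from the brighter tarmac in the search region before the morphological cleanup described above.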
In the aircraft capture method and system for an intelligent aircraft docking guidance system of the present invention, video image information of the aircraft docking process is collected by the visual imaging subsystem; the collected video images are transmitted to the central processing device for real-time processing and analysis, and guidance information is finally shown on the display device. To capture the docking aircraft quickly and accurately and to obtain a stable target region, the whole docking-aircraft capture procedure is carried out only within the docking-aircraft capture region defined in the scene, which reduces the processing range of the picture, improves processing efficiency, and facilitates fast aircraft capture. Within the docking-aircraft capture region, change detection is first performed, including background elimination, shadow removal and region classification, to extract moving-object regions; the extracted moving-object regions are then classified to determine whether they are the docking aircraft, so as to capture the docking aircraft accurately.
Of course, the present invention may also have various other embodiments. Without departing from the spirit and essence of the invention, those of ordinary skill in the art can make various corresponding changes and variations according to the invention, but all such corresponding changes and variations shall fall within the protection scope of the claims appended hereto.

Claims (10)

1. A real-time capture method of a docking aircraft based on machine vision, characterized by comprising the steps of:
S1, aircraft docking scene setting: dividing the monitored scene into different information processing function areas, so as to reduce the processing range of the picture and improve processing efficiency;
S2, aircraft capture, comprising:
S21, background elimination: simulating the dynamic distribution of the background in the scene and performing background modeling based on a median-filtering background model, a Gaussian mixture background model, or a background model of kernel density probability estimation, and then differencing the current frame against the background model to eliminate the background;
S22, shadow removal: computing statistics of the gray values in the foreground region extracted by background elimination, finding the maximum gray value gmax and the minimum gray value gmin, and then performing shadow removal in the region whose gray values are less than T = gmin + (gmax - gmin) * 0.5;
S23, region classification: establishing a standard front-view aircraft region template, extracting the target region through change detection and obtaining the vertical projection curve of the region, then obtaining the correlation coefficient between this vertical projection curve and the vertical projection curve of the standard front-view aircraft region template; if the correlation coefficient is greater than or equal to 0.9, the target is an aircraft;
S24, feature verification: further verifying whether the target is an aircraft by detecting the engines and nose wheel of the captured aircraft.
2. The real-time capture method of a docking aircraft based on machine vision as claimed in claim 1, characterized in that after step S1 the method may further comprise the step of:
S10, video image preprocessing: performing gamma correction and denoising on the image, so as to improve the visual effect and the sharpness of the image.
3. The real-time capture method of a docking aircraft based on machine vision as claimed in claim 1 or 2, characterized in that in the background elimination step S21, the establishment of a single Gaussian background model comprises the steps of:
S211, establishment of the background model: initializing the background image by computing, for each pixel of the video sequence image f(x, y) over a period of time, the gray mean μ0 and the gray variance σ0², and forming from μ0 and σ0² an initial background image B0 with Gaussian distribution η(x, μ0, σ0),

where: μ0(x, y) = (1/T) Σ_{i=0}^{T-1} f_i(x, y),  σ0²(x, y) = (1/T) Σ_{i=0}^{T-1} [f_i(x, y) - μ0(x, y)]²
then a Gaussian model η(x_i, μ_i, σ_i) is established for each pixel of each frame,
where i is the frame number, x_i is the current pixel value of the pixel, μ_i is the mean of the current pixel's Gaussian model, and σ_i is the standard deviation of the current pixel's Gaussian model; if η(x_i, μ_i, σ_i) ≤ Tp, where Tp is a probability threshold, the point is judged to be a foreground point, otherwise a background point;
S212, updating of the background model:
if the scene changes, the background model is updated using the real-time information provided by the successive images captured by the imaging device, as in the following formulas:
μ_{i+1} = (1 - α) μ_i + α x_i

σ²_{i+1} = (1 - α) σ²_i + α d²_i
where α is the update rate, with a value between 0 and 1.
4. The real-time capture method of a docking aircraft based on machine vision as claimed in claim 3, characterized in that if the pixel is background, α is taken as 0.05, and if the pixel is foreground, α is taken as 0.0025.
5. The real-time capture method of a docking aircraft based on machine vision as claimed in claim 1, 2 or 4, characterized in that the feature verification step S24 comprises:
S241, extreme-black region extraction: computing a histogram of the image; obtaining, within the middle 1% to 99% of the gray-level range, the maximum and minimum gray values whose pixel counts are non-zero; and using a preset extreme-black decision threshold to extract the darkest part of the image, obtaining an extreme-black region;
S242, quasi-circle detection: extracting all outer boundaries of the extreme-black region, and computing for each boundary the barycentric coordinates from the boundary's moments, the (j, i)-order moment of a boundary being defined as:
m_ji = Σ_{x,y} f(x, y) x^j y^i

with barycentric coordinates:

x̄ = m10 / m00,  ȳ = m01 / m00
computing, for all pixels on the current boundary, the distance to the barycenter; if the ratio of the computed maximum distance to the minimum distance is greater than a circular decision threshold, the region is considered non-circular and the next region is judged; the barycentric coordinates and radius of each quasi-circular region that passes the judgment are recorded;
S243, detecting the aircraft engines among the quasi-circular regions by similarity judgment;
S244, detecting the aircraft nose wheel; once the engines and the nose wheel are confirmed, the capture succeeds.
6. The real-time capture method of a docking aircraft based on machine vision as claimed in claim 5, characterized in that in the step S243 of detecting aircraft engines among the quasi-circular regions, supposing M quasi-circular regions are detected in total, the similarity of the i-th and the j-th regions is computed as:
Similarity_ij = |Height_i - Height_j| * |Radius_i - Radius_j|
where Height is the height of the barycenter and Radius is the radius; when the similarity Similarity_ij is less than a preset similarity threshold, regions i and j are considered to be the aircraft engines.
7. The real-time capture method of a docking aircraft based on machine vision as claimed in claim 6, characterized in that in step S243, if no engine is detected, iterative detection is performed: the extreme-black decision threshold, the circular decision threshold and the similarity threshold are each increased, and steps S241-S243 are repeated; if the engines are still not detected, an opening operation with a 7*7 circular template is applied to all extreme-black regions, and steps S242-S243 are repeated;
if the engines are still not detected, the above iterative detection is performed twice more;
if the engines are still not detected, it is judged that no engine exists in the image.
8. The real-time capture method of a docking aircraft based on machine vision as claimed in claim 7, characterized in that the increments of the extreme-black decision threshold, the circular decision threshold and the similarity threshold are 0.05, 0.5 and 20, respectively.
9. The real-time capture method of a docking aircraft based on machine vision as claimed in claim 5, characterized in that in step S244, the region below the middle of the detected engines with a height of 4 engine radii is taken as the search region for the aircraft nose wheel; within the search region, the 256 gray levels are quantized to 64 levels, the first peak and trough in the 64-level gray histogram are searched for, and the optimal peak position BestPeak and the optimal trough position BestValley in the original 256-level gray histogram are defined as follows:
BestPeak = argmax_{peak*4-4 <= i <= peak*4+3} { hist256(i) }

BestValley = argmin_{BestPeak < i <= valley*4+3} { hist256(i) }
where hist256(i) is the total number of pixels with gray level i in the 256-level gray histogram;
segmenting the gray levels at this optimal trough BestValley; for the part below BestValley, removing small noise spots by area and applying a closing operation to the image with a flat elliptical structuring element;
then computing the 7 Hu moment features of all boundaries in the image and comparing them with the moment features of a preset standard nose-wheel model; when the dissimilarity is below a set threshold, the middle one is judged to be the nose wheel.
10. A real-time capture system of a docking aircraft, implementing the real-time capture method of a docking aircraft based on machine vision as claimed in any one of claims 1-9.
CN201410377269.1A 2014-08-01 2014-08-01 A kind of real-time capturing method and system of docking aircraft based on machine vision Active CN105335985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410377269.1A CN105335985B (en) 2014-08-01 2014-08-01 A kind of real-time capturing method and system of docking aircraft based on machine vision

Publications (2)

Publication Number Publication Date
CN105335985A true CN105335985A (en) 2016-02-17
CN105335985B CN105335985B (en) 2019-03-01

Family

ID=55286490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410377269.1A Active CN105335985B (en) 2014-08-01 2014-08-01 A kind of real-time capturing method and system of docking aircraft based on machine vision

Country Status (1)

Country Link
CN (1) CN105335985B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108683865A (en) * 2018-04-24 2018-10-19 长沙全度影像科技有限公司 A kind of background replacement system and method for bullet time special efficacy
CN108921891A (en) * 2018-06-21 2018-11-30 南通西塔自动化科技有限公司 A kind of machine vision method for rapidly positioning that can arbitrarily rotate
CN109785357A (en) * 2019-01-28 2019-05-21 北京晶品特装科技有限责任公司 A method of the robot automtion panorama photoelectronic reconnaissance suitable for battlefield surroundings
CN109887343A (en) * 2019-04-04 2019-06-14 中国民航科学技术研究院 It takes to a kind of flight and ensures node automatic collection monitoring system and method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1399767A (en) * 1999-10-29 2003-02-26 安全门国际股份公司 Aircraft identification and docking guidance systems
KR20090082546A (en) * 2008-01-28 2009-07-31 국방과학연구소 Method for recognizing a target in images
CN101739694A (en) * 2010-01-07 2010-06-16 北京智安邦科技有限公司 Image analysis-based method and device for ultrahigh detection of high voltage transmission line
CN102509101A (en) * 2011-11-30 2012-06-20 昆山市工业技术研究院有限责任公司 Background updating method and vehicle target extracting method in traffic video monitoring
CN103049788A (en) * 2012-12-24 2013-04-17 南京航空航天大学 Computer-vision-based system and method for detecting number of pedestrians waiting to cross crosswalk
CN103177586A (en) * 2013-03-05 2013-06-26 天津工业大学 Machine-vision-based urban intersection multilane traffic flow detection method

Also Published As

Publication number Publication date
CN105335985B (en) 2019-03-01


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210618

Address after: 518103 No.9, Fuyuan 2nd Road, Fuyong street, Bao'an District, Shenzhen City, Guangdong Province

Patentee after: SHENZHEN CIMC-TIANDA AIRPORT SUPPORT Co.,Ltd.

Address before: Four No. four industrial road, Shekou Industrial Zone, Guangdong, Shenzhen 518067, China

Patentee before: SHENZHEN CIMC-TIANDA AIRPORT SUPPORT Co.,Ltd.

Patentee before: China International Marine Containers (Group) Co.,Ltd.

TR01 Transfer of patent right