CN105335688B - Aircraft model recognition method based on visual images - Google Patents

Aircraft model recognition method based on visual images

Info

Publication number
CN105335688B
CN105335688B (application CN201410377473.3A)
Authority
CN
China
Prior art keywords
aircraft
engine
image
cabin
moving target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410377473.3A
Other languages
Chinese (zh)
Other versions
CN105335688A (en)
Inventor
邓览
程建
李鸿升
王峰
周圣云
马莹
张敬献
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China International Marine Containers Group Co Ltd
Shenzhen CIMC Tianda Airport Support Ltd
Original Assignee
China International Marine Containers Group Co Ltd
Shenzhen CIMC Tianda Airport Support Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China International Marine Containers Group Co Ltd and Shenzhen CIMC Tianda Airport Support Ltd
Priority to CN201410377473.3A
Publication of CN105335688A
Application granted
Publication of CN105335688B
Legal status: Active
Anticipated expiration


Abstract

An aircraft model recognition method based on visual images, comprising: extracting moving-target edges with a frame-difference method to obtain a foreground target mask; exploiting the fact that an aircraft engine is circular and almost non-reflective to extract the position and size of the engines; fitting the line along the wing with a Hough line transform, and taking the point where the longitudinal gradient drops sharply, without exceeding the moving-target edge extent, as the wingtip location; determining the left and right edge positions of the cabin by drawing a horizontal line at the mean height of the two wingtips and taking the lateral-gradient peaks on the two sides of the cabin, without exceeding the moving-target edge extent, as the cabin's left and right edges; and, using the engine radius of the aircraft to be identified as a scaling factor, computing the ratios of the aircraft's engine spacing, wingspan, and cabin width to the engine radius, matching them against preset standard model data, and taking the model with the maximum matching value as the model of the aircraft to be identified.

Description

Aircraft model recognition method based on visual images
Technical field
The present invention relates to a docking aircraft positioning and guidance technique, and in particular to an aircraft model recognition method based on visual images, applied to front-view images of aircraft.
Background art
Feature-region extraction from visual images is widely used in tasks such as industrial inspection, camera calibration, and object detection. More complex scenes require object detection based on machine learning, handing the decision entirely to the computer; such techniques are still at the laboratory stage and their reliability cannot be guaranteed. For some simple applications, however, such as detecting targets of simple shape and solid color, they can already meet industrial needs.
Simple object detection methods divide into region-based and edge-based approaches. Region detection exploits some color or texture characteristic of the target to separate it from the background, screens candidate regions by shape, and finally extracts the target region using information such as the region's shape, size, and spatial position. Edge detection focuses instead on places where the color shows a sharp drop, which are fairly stable under different illumination; the positions of certain stable target boundaries and their relative geometry determine the target's location. The Chinese invention patent application No. 200510016267, entitled "Automatic airplane berth model recognition and indication system", discloses a recognition method for docking aircraft that must identify the registration number on the aircraft's side and combine data from position and speed detectors for recognition and indication; both its recognition efficiency and its accuracy have certain shortcomings.
Summary of the invention
The technical problem to be solved by the invention is to provide an aircraft model recognition method based on visual images, applied to front-view images of aircraft, so as to identify the model of a docking aircraft quickly and accurately.
To achieve the above goal, the present invention provides an aircraft model recognition method based on visual images, comprising the following steps:
S1, moving-target extraction: extract moving-target edges with a frame-difference method to obtain a foreground target mask, establishing a constraint and reducing the influence of the background;
S2, aircraft engine extraction: exploit the fact that an aircraft engine is circular and almost non-reflective to extract the position and size of the engines;
S3, wingtip position detection: fit the line along the wing with a Hough line transform, and take the point where the longitudinal gradient drops sharply, without exceeding the moving-target edge extent, as the wingtip location;
S4, cabin width detection: determine the left and right edge positions of the cabin by drawing a horizontal line at the mean height of the two wingtips and taking the lateral-gradient peaks on the two sides of the cabin, without exceeding the moving-target edge extent, as the cabin's left and right edges; the cabin width is the distance between these edges;
S5, aircraft model identification: using the engine radius of the aircraft to be identified as a scaling factor, compute the ratios of the aircraft's engine spacing, wingspan, and cabin width to the engine radius, match them against preset standard model data, and take the model with the maximum matching value as the model of the aircraft to be identified.
In the above aircraft model recognition method based on visual images, extracting the moving-target edges with the frame-difference method comprises:
S11, capture a series of front-view images of the aircraft to be identified, compute the absolute difference between frame t and frame t-1, compute the standard deviation of frame t, and use 1/4 of the standard deviation as the threshold separating the moving target from noise, obtaining the binary image of the moving target:
Mx,y = 1 if abs(I(t)(x, y) - I(t-1)(x, y)) > std(I(t))/4, and Mx,y = 0 otherwise,
where I(t)(x, y) is the gray value at coordinate (x, y) of frame t, std(I(t)) is the standard deviation of the gray values of frame t, and Mx,y is the binary image of the moving target.
In the above aircraft model recognition method based on visual images, extracting the moving-target edges with the frame-difference method further comprises:
S12, close the holes in the binary image of the moving target with a morphological closing: dilate 3 times and erode 2 times with a circular template of radius 5, obtaining a corrected foreground target mask.
In the above aircraft model recognition method based on visual images, the aircraft engine extraction step S2 comprises:
S21, compute the cumulative gray-level histogram of the visual image under the corrected foreground target mask, and record the gray levels at 1% and 99% of the cumulative distribution as the darkest and brightest gray levels of the image;
S22, divide the image into extremely dark regions and other regions with a separation threshold, the separation threshold being the proportion the extremely dark regions occupy within the foreground target mask;
S23, perform quasi-circle detection on the extremely dark regions with a circle decision threshold: extract all outer boundaries of all extremely dark regions, and compute the centroid of each boundary; with the image moments mji = Σx,y x^j · y^i taken over the boundary points (j and i each taking 0 and 1), the centroid is (m10/m00, m01/m00).
Enumerate all pixel points {x, y} of the current region's boundary, compute their distances to the centroid, and keep updating the maximum and minimum distances; once the maximum distance divided by the minimum distance exceeds the circle decision threshold, judge the region to be non-circular and move on to the next extremely dark region;
S24, paired engine detection: screen the extremely dark regions judged quasi-circular. Suppose M quasi-circular regions are detected; generate an M*M upper triangular matrix S, each element of which is computed as:
S(i, j) = abs(Wi - Wj - Tij) * abs(Hi - Hj) * abs(Ri - Rj)
Tij = 3 * (Ri + Rj)
where Wi is the abscissa of the center of the i-th extremely dark region, Hi its ordinate, Ri the radius of the i-th quasi-circular region, and Tij the minimum spacing of two engines; the subscripts i and j of the minimum element of S give the detected engine pair.
In the above aircraft model recognition method based on visual images, the aircraft engine extraction step S2 further comprises:
S25, re-detection: if steps S22-S24 fail to find the engines, enlarge the separation threshold and the circle decision threshold each by one grade, and repeat steps S22-S24.
In the above aircraft model recognition method based on visual images, step S3 comprises:
S31, compute the longitudinal gradient Gy in a region of interest which, according to the model database, extends 1.5 engine heights above the engine and spans 8 engine diameters in width; the gradient is:
Gy(x, y) = 2*I(x, y) - I(x, y-1) - I(x, y+1)
where Gy(x, y) is the longitudinal gradient at coordinate (x, y), and I(x, y) the gray value there;
S32, fit the line along the wing in point-slope form;
S33, scan along the wing line from the fuselage toward the wingtip, through the gradients it passes; compute the overall average gradient along the way and the average gradient of the most recent 5 pixels. If the average gradient of the most recent 5 pixels falls below 1/3 of the overall average, the scan position has passed the wingtip; move the scan position back 5 pixels and take it as the wingtip location of the current line.
In the above aircraft model recognition method based on visual images, step S3 further comprises:
S34, in night images a signal lamp is lit at the wingtip; if the scan passes through 2 consecutive clearly highlighted pixels, the position of those highlighted pixels is the wingtip location.
In the above aircraft model recognition method based on visual images, step S3 further comprises:
S35, if the wingtip on one side of the aircraft is occluded by the boarding bridge, use the symmetry of the wings: taking the perpendicular bisector of the line joining the two engine centers as the axis of symmetry, mirror a virtual wing onto the occluded side of the aircraft and compute its parameters.
In the above aircraft model recognition method based on visual images, step S4 comprises:
S41, take the region between the two engine centers, 7 engine radii in height, as the region of interest, and compute the lateral gradient Gx within it:
Gx(x, y) = 2*I(x, y) - I(x-1, y) - I(x+1, y)
where Gx(x, y) is the lateral gradient at coordinate (x, y), and I(x, y) the gray value there;
S42, within the moving-target mask of step S1, compute the histogram of Gx(x, y); set the largest 30% of the gradients to 1 and the rest to 0, forming a binary map;
count the area of every connected region of 1-valued pixels, and filter out regions of area less than 50;
then apply a morphological closing to the denoised binary map, to ensure the cabin edges are fully joined;
S43, at the average wingtip height extracted in step S3, scan pixel by pixel inward from both sides; when a pixel has value 1 both in the moving-target mask and in the binary map, stop scanning, and take the two scanned points as the left and right edges of the cabin.
In the above aircraft model recognition method based on visual images, step S5 comprises:
S51, store the engine radius, engine spacing, wingspan, and cabin width of all target models;
S52, from the results of the aircraft engine extraction, wingtip location detection, and cabin width detection, obtain the mean engine radius, the engine spacing, the two-wingtip spacing, and the cabin width; divide the engine spacing, wingtip spacing, and cabin width each by the mean engine radius to generate a three-dimensional feature vector F, and compare it with the information stored in step S51 by a comparison formula,
where St is the similarity between model t and the aircraft in the image, Fi is the i-th feature extracted from the image, and Fi(t) the i-th feature of model t in the database; the model with the maximum similarity is the identified model.
The technical effects of the invention are:
The present invention extracts information such as engine radius, engine spacing, wing length, and cabin width from the front-view image of an aircraft, compares it with the corresponding information pre-stored in a database, and finally determines the model of the photographed aircraft. It makes full use of the relative geometry among the aircraft's parts and constrains their positions with a moving-target extraction based on the frame-difference method, so that the relevant information of the aircraft is obtained from the image simply, effectively, and stably, allowing accurate identification.
The present invention is described in detail below with reference to the drawings and specific embodiments, which are not intended as limitations of the invention.
Description of the drawings
Fig. 1 is the flow chart of one embodiment of the invention;
Fig. 2 is the frame-difference result without any further processing in one embodiment of the invention;
Fig. 3 is the result of applying morphological processing to Fig. 2;
Fig. 4 is the aircraft engine detection flow chart of one embodiment of the invention;
Fig. 5A is the gray-level histogram under night conditions in one embodiment of the invention;
Fig. 5B is the gray-level histogram under morning conditions in one embodiment of the invention;
Fig. 6 is the extremely-dark-region extraction result of one embodiment of the invention;
Fig. 7 shows the extracted engine boundaries and the shape of the left wing in one embodiment of the invention.
Among them, reference numerals S1-S5 denote steps.
Detailed description of the embodiments
The structure and operating principle of the present invention are described in detail below with reference to the drawings:
Referring to Fig. 1, the flow chart of one embodiment of the invention, the aircraft model recognition method based on visual images of the present invention comprises the following steps:
Step S1, moving-target extraction: extract moving-target edges with a frame-difference method to obtain a foreground target mask, establishing a constraint and reducing the influence of the background;
Step S2, aircraft engine detection: exploit the fact that an aircraft engine is circular and almost non-reflective to detect the position and size of the engines;
Step S3, wingtip position detection: fit the line along the wing with a Hough line transform, and take the point where the longitudinal gradient drops sharply, without exceeding the moving-target edge extent, as the wingtip location;
Step S4, cabin width detection: determine the left and right edge positions of the cabin by drawing a horizontal line at the mean height of the two wingtips and taking the lateral-gradient peaks on the two sides of the cabin, without exceeding the moving-target edge extent, as the cabin's left and right edges; the cabin width is the distance between these edges;
Step S5, aircraft model identification: using the engine radius of the aircraft to be identified as a scaling factor, compute the ratios of the aircraft's engine spacing, wingspan, and cabin width to the engine radius, match them against preset standard model data, and take the model with the maximum matching value as the model of the aircraft to be identified.
Here, extracting the moving-target edges with the frame-difference method comprises:
Step S11, capture a series of front-view images of the aircraft to be identified with a camera, compute the absolute difference (i.e., the absolute value of the difference) between frame t and frame t-1, compute the standard deviation of frame t, and use 1/4 of the standard deviation as the threshold separating the moving target from noise, obtaining the binary image of the moving target:
Mx,y = 1 if abs(I(t)(x, y) - I(t-1)(x, y)) > std(I(t))/4, and Mx,y = 0 otherwise,
where I(t)(x, y) is the gray value at coordinate (x, y) of frame t, std(I(t)) is the standard deviation of the gray values of frame t, and Mx,y is the binary image of the moving target.
Although the Mx,y obtained in step S11 filters out the noise, it also loses part of the useful information; in the image it appears as a white region riddled with holes (see Fig. 2). These holes can be closed with the morphological closing operation, which can be regarded as one dilation of the white region followed by one erosion. Therefore, in this embodiment, extracting the moving-target edges with the frame-difference method further comprises:
Step S12, close the holes in the binary image of the moving target with a closing operation: dilate M times and erode N times with a circular template of radius R, obtaining the corrected foreground target mask; preferably R is 5, M is 3, and N is 2. The corrected foreground target mask is a slightly widened foreground mask with fewer holes. Too large a radius or too many dilations would make the mask lose its original shape; too few would leave many holes inside. The number of erosions is smaller than the number of dilations, to ensure the resulting mask covers the target to be detected (i.e., the aircraft, see Fig. 3) as completely as possible.
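Steps S11-S12 can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the patent's circular template is approximated by a square one for brevity, and all function names are invented for illustration, not taken from the patent.

```python
import numpy as np

def frame_diff_mask(frame_t, frame_t1):
    """Step S11: binary moving-target mask, |I_t - I_{t-1}| > std(I_t)/4."""
    diff = np.abs(frame_t.astype(float) - frame_t1.astype(float))
    return (diff > frame_t.std() / 4.0).astype(np.uint8)

def dilate(mask, r):
    """Naive dilation with a (2r+1)x(2r+1) square template
    (the patent uses a circular template of radius r)."""
    out = np.zeros_like(mask)
    for y, x in zip(*np.nonzero(mask)):
        out[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1] = 1
    return out

def erode(mask, r):
    """Erosion expressed as the complement of dilating the background."""
    return 1 - dilate(1 - mask, r)

def foreground_mask(frame_t, frame_t1, r=5, n_dilate=3, n_erode=2):
    """Step S12: close holes, dilating more often than eroding so the
    corrected mask stays slightly widened, as the text prescribes."""
    m = frame_diff_mask(frame_t, frame_t1)
    for _ in range(n_dilate):
        m = dilate(m, r)
    for _ in range(n_erode):
        m = erode(m, r)
    return m
```

In practice a library morphology routine with an elliptical structuring element would replace the naive loops; the sketch only mirrors the dilate-3-times, erode-2-times schedule of the text.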
Referring to Fig. 4, the aircraft engine detection flow chart of one embodiment of the invention: in this embodiment, the aircraft engine extraction step S2 comprises:
Step S21, compute the cumulative gray-level histogram of the visual image under the corrected foreground target mask, and record the gray levels at 1% and 99% of the cumulative distribution as the darkest and brightest gray levels of the image (see Figs. 5A and 5B, the gray-level histograms under night and morning conditions in one embodiment of the invention); if the brightest gray level is below 60, the image is judged to be a night image.
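The cumulative-histogram extremes of step S21 can be sketched as below; a minimal NumPy sketch assuming 8-bit gray levels, with invented function names.

```python
import numpy as np

def gray_extremes(pixels):
    """Step S21: darkest/brightest gray levels, read off at 1% and 99%
    of the cumulative gray-level histogram of the masked pixels."""
    hist = np.bincount(pixels.ravel(), minlength=256)
    cdf = np.cumsum(hist) / hist.sum()
    darkest = int(np.searchsorted(cdf, 0.01))    # first level reaching 1%
    brightest = int(np.searchsorted(cdf, 0.99))  # first level reaching 99%
    return darkest, brightest
```

Per the text, a brightest level below 60 flags the frame as a night image.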
Step S22, divide the image into extremely dark regions and other, brighter regions with a fixed separation threshold; the physical meaning of the separation threshold is the proportion the extremely dark regions occupy within the foreground target mask (the frontal silhouette of the aircraft) (see Fig. 6, the extremely-dark-region extraction result of one embodiment of the invention). In night images the background turns black, so the threshold is raised by 0.4 before detection.
Step S23, perform quasi-circle detection on the extremely dark regions with a circle decision threshold: extract all outer boundaries of all extremely dark regions, and compute the centroid of each boundary; with the image moments mji = Σx,y x^j · y^i taken over the boundary points (j and i each taking 0 and 1), the centroid is (m10/m00, m01/m00).
Enumerate all pixel points {x, y} of the current region's boundary, compute their distances to the centroid, and keep updating the maximum and minimum distances; once the maximum distance divided by the minimum distance exceeds the circle decision threshold (preferably preset to 1.2), judge the region to be non-circular and proceed directly to the next extremely dark region;
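The circle test of step S23 can be sketched as follows; a minimal sketch using the boundary-point mean as the centroid (equivalent to m10/m00 and m01/m00 over the boundary), with the 1.2 threshold the text prefers.

```python
import math

def is_circle_like(boundary_pts, ratio_thresh=1.2):
    """Step S23: a region counts as circle-like when the max/min distance
    from its boundary points to its centroid stays within the threshold."""
    n = len(boundary_pts)
    cx = sum(p[0] for p in boundary_pts) / n   # m10 / m00 over the boundary
    cy = sum(p[1] for p in boundary_pts) / n   # m01 / m00 over the boundary
    dists = [math.hypot(x - cx, y - cy) for x, y in boundary_pts]
    return max(dists) / min(dists) <= ratio_thresh
```

A circle gives a ratio near 1; a square boundary gives about 1.41 (corner vs. edge midpoint) and is rejected.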
Step S24, paired engine detection: screen the regions judged quasi-circular, using the fact that engines always appear in pairs (for models with 3 engines, the third engine at the tail is ignored at this point and only the two engines hung under the wings are detected). Suppose M quasi-circular regions are detected; generate an M*M upper triangular matrix S, each element of which is computed as:
S(i, j) = abs(Wi - Wj - Tij) * abs(Hi - Hj) * abs(Ri - Rj)
Tij = 3 * (Ri + Rj)
where S(i, j) is the element in row i, column j of the upper triangular matrix, abs denotes the absolute value, Wi and Wj are the abscissas of the centers of the i-th and j-th extremely dark regions, Hi and Hj their ordinates, and Ri and Rj the radii of the i-th and j-th quasi-circular regions; Tij is the minimum spacing of two engines, defaulting to 3 engine diameters, a value that should be set differently for different models of docking aircraft. The subscripts i and j of the minimum element of S give the detected engine pair.
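The pairing of step S24 can be sketched as below, implementing the published formula S(i, j) = abs(Wi - Wj - Tij) * abs(Hi - Hj) * abs(Ri - Rj) verbatim over the upper triangle; function and variable names are invented for illustration.

```python
def pair_engines(circles):
    """Step S24: pick the pair (i, j), i < j, minimising
    S(i, j) = |Wi - Wj - Tij| * |Hi - Hj| * |Ri - Rj|, Tij = 3*(Ri + Rj).
    `circles` is a list of (W, H, R) = (centre x, centre y, radius)."""
    best_score, best_pair = None, None
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            wi, hi, ri = circles[i]
            wj, hj, rj = circles[j]
            tij = 3 * (ri + rj)
            s = abs(wi - wj - tij) * abs(hi - hj) * abs(ri - rj)
            if best_score is None or s < best_score:
                best_score, best_pair = s, (i, j)
    return best_pair
```

As a product of terms, the score rewards similar heights, similar radii, and a horizontal spacing near Tij all at once, so two engines at the same height dominate any engine-plus-clutter pairing.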
The aircraft engine extraction step S2 may further comprise:
Step S25, re-detection: if steps S22-S24 fail to find the engines, the current parameters are presumed slightly too strict; enlarge the two parameters by one grade each, i.e., enlarge the separation threshold and the circle decision threshold by one grade each, and repeat steps S22-S24. The parameters are enlarged no more than 2 times.
Referring to Fig. 7, the extracted engine boundaries and the shape of the left wing in one embodiment of the invention: step S3 comprises:
Step S31, compute the longitudinal gradient Gy in a region of interest which, according to the model database, extends 1.5 engine heights above the engine and spans 8 engine diameters in width; this height and width are the parameters used for the common Boeing 737 and Airbus A320 airliners, and other models should use height and width ranges set from the parameters in the model database. The gradient is:
Gy(x, y) = 2*I(x, y) - I(x, y-1) - I(x, y+1)
where Gy(x, y) is the longitudinal gradient at coordinate (x, y), and I(x, y), I(x, y-1), I(x, y+1) are the gray values at coordinates (x, y), (x, y-1), and (x, y+1) respectively;
Step S32, fit the line along the wing in point-slope form: enumerate all pixels on the rightmost side of the region of interest and, through each pixel, a line at every 1° of slope from 0° to 15°; draw all the lines on a black background with an anti-aliasing algorithm. Supposing the rightmost side has 50 pixels and 0°-15° gives 16 angles, 50*16 = 800 lines must be drawn in total.
Letting the line image corresponding to the i-th line be Li, compute the vote count Vi of the i-th line:
Vi = Σx,y Li(x, y) * Gy(x, y)
where Li(x, y) is the gray value of line image Li at coordinate (x, y), Gy(x, y) is the longitudinal gradient at (x, y), and x and y range over all points of the region of interest. The line corresponding to the maximum Vi is then the line along the wing. Note that the scan positions never exceed the moving-target edge of step S1, to avoid interference from high gradient values in a complex background.
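Steps S31-S32 can be sketched as follows. Instead of drawing anti-aliased line images, this sketch walks each candidate line pixel by pixel and sums the longitudinal gradient it crosses as its vote, which preserves the same max-vote selection; the image coordinate convention (rows = y, anchor at the rightmost column) and the function names are assumptions for illustration.

```python
import math
import numpy as np

def longitudinal_gradient(img):
    """Step S31: Gy(x, y) = 2*I(x, y) - I(x, y-1) - I(x, y+1), in magnitude."""
    g = np.zeros(img.shape, dtype=float)
    g[1:-1, :] = 2.0 * img[1:-1, :] - img[:-2, :] - img[2:, :]
    return np.abs(g)

def fit_wing_line(grad, angles_deg=range(16)):
    """Step S32 sketch: anchor a candidate line at every row of the rightmost
    column, try slopes 0-15 degrees, and keep the line whose path collects
    the largest summed gradient (the vote count Vi)."""
    h, w = grad.shape
    best_votes, best_line = -1.0, None
    for y0 in range(h):
        for a in angles_deg:
            slope = math.tan(math.radians(a))
            votes = 0.0
            for dx in range(w):          # walk leftward from the anchor
                x = w - 1 - dx
                y = int(round(y0 + slope * dx))
                if not 0 <= y < h:
                    break
                votes += grad[y, x]
            if votes > best_votes:
                best_votes, best_line = votes, (y0, a)
    return best_line                     # (anchor row, angle in degrees)
```
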
Step S33, scan along the wing line from the fuselage toward the wingtip, through the gradients it passes; compute the overall average gradient along the way and the average gradient of the most recent 5 pixels. If the average gradient of the most recent 5 pixels falls below 1/3 of the overall average, the scan position has passed the wingtip; move the scan position back 5 pixels and take it as the wingtip location of the current line.
In this embodiment, step S3 may further comprise:
Step S34, in night images a signal lamp is lit at the wingtip; whether an image is a night image is determined by the method of S21. In a night image, if the scan passes through 2 consecutive clearly highlighted pixels (a highlighted pixel's gray value exceeds twice the average gray value of the pixels scanned before it), then 10000 is added to the vote Vi of S32, to distinguish the line from others that pass through no highlighted pixels. Night images need not compute the wingtip location by the method of S33; the position of the highlighted pixels can directly be taken as the wingtip location.
Step S3 may further comprise:
Step S35, if the wingtip on one side of the aircraft is occluded by the boarding bridge, use the symmetry of the wings: taking the perpendicular bisector of the line joining the two engine centers as the axis of symmetry, mirror a virtual wing onto the occluded side of the aircraft, compute its parameters, and supply them to the S4 process. The extracted wing shape is as shown in Fig. 7.
In this embodiment, step S4 comprises:
Step S41, take the region between the two engine centers, 7 engine radii in height, as the region of interest, and compute the lateral gradient Gx within it:
Gx(x, y) = 2*I(x, y) - I(x-1, y) - I(x+1, y)
where Gx(x, y) is the lateral gradient at coordinate (x, y), and I(x, y), I(x-1, y), I(x+1, y) are the gray values at coordinates (x, y), (x-1, y), and (x+1, y);
Step S42, within the moving-target mask of step S1, compute the histogram of Gx(x, y); set the largest 30% of the gradients to 1 and the rest to 0, forming a binary map;
count the area of every connected region of 1-valued pixels, and filter out regions of area less than 50;
then, similarly to step S12, apply a closing operation with a 5*5 rectangular template to the denoised binary map, to ensure the cabin edges are fully joined;
Step S43, at the average wingtip height extracted in step S3, scan pixel by pixel inward from both sides; when a pixel has value 1 both in the moving-target mask and in the binary map of step S42, stop scanning, and take the two scanned points as the left and right edges of the cabin. The cabin width is the distance between these two points.
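The inward scan of step S43 can be sketched as below; a minimal pure-Python sketch over the single row at the average wingtip height, with both masks supplied as 0/1 lists and the function name invented for illustration.

```python
def cabin_edges(move_mask_row, grad_map_row):
    """Step S43: scan one row inward from both ends; the first pixel on each
    side that is 1 in BOTH the moving-target mask and the lateral-gradient
    binary map becomes a cabin edge. Returns (left, right, width)."""
    hits = [i for i, (m, g) in enumerate(zip(move_mask_row, grad_map_row))
            if m == 1 and g == 1]
    if not hits:
        return None            # no cabin edge found on this row
    left, right = hits[0], hits[-1]
    return left, right, right - left   # cabin width = edge-to-edge distance
```
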
In this embodiment, step S5 comprises:
Step S51, store the engine radius, engine spacing, wingspan, and cabin width of all target models in an information processing center;
Step S52, from the results of the aircraft engine detection, wingtip location detection, and cabin width detection of steps S2, S3, and S4, obtain the mean engine radius, the engine spacing, the two-wingtip spacing, and the cabin width; divide the engine spacing, wingtip spacing, and cabin width each by the mean engine radius to generate a three-dimensional feature vector F, and compare it with the information stored in step S51 by a comparison formula,
where St is the similarity between model t and the aircraft in the image, Fi is the i-th feature extracted from the image, and Fi(t) the i-th feature of model t in the database; the model with the maximum St is the identified model.
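The comparison of step S52 can be sketched as below. The published text does not make the exact similarity formula legible, so a reciprocal sum-of-absolute-differences score is assumed here; the three features are the engine spacing, wingtip spacing, and cabin width, each already divided by the mean engine radius, and the database values are invented for illustration.

```python
def identify_model(features, database):
    """Step S52 sketch: score each model t by a similarity St that grows as
    the feature vector F gets closer to the stored vector F(t), and return
    the model with the maximum St. The 1/(1 + L1 distance) score is an
    assumption, not the patent's formula."""
    def similarity(f, g):
        return 1.0 / (1.0 + sum(abs(a - b) for a, b in zip(f, g)))
    return max(database, key=lambda t: similarity(features, database[t]))
```

Any monotone-decreasing function of the feature distance would select the same model; only the ranking of St matters.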
The method of the present invention detects the engines, wings, and cabin in the front-view image of an aircraft, compares the detected parameters with the aircraft model data in the database, and finally determines the model of the aircraft in the image.
Of course, the present invention may have various other embodiments; without departing from the spirit and substance of the present invention, those skilled in the art may make various corresponding changes and variations according to the present invention, and all such corresponding changes and variations shall fall within the protection scope of the appended claims of the present invention.

Claims (10)

1. An aircraft model recognition method based on visual images, characterized by comprising the following steps:
S1, moving-target extraction: extract moving-target edges with a frame-difference method to obtain a foreground target mask, establishing a constraint and reducing the influence of the background;
S2, aircraft engine extraction: exploit the fact that an aircraft engine is circular and almost non-reflective to extract the position and size of the engines;
S3, wingtip position detection: fit the line along the wing with a Hough line transform, and take the point where the longitudinal gradient drops sharply, without exceeding the moving-target edge extent, as the wingtip location;
S4, cabin width detection: determine the left and right edge positions of the cabin by drawing a horizontal line at the mean height of the two wingtips and taking the lateral-gradient peaks on the two sides of the cabin, without exceeding the moving-target edge extent, as the cabin's left and right edges; the cabin width is the distance between these edges;
S5, aircraft model identification: using the engine radius of the aircraft to be identified as a scaling factor, compute the ratios of the aircraft's engine spacing, wingspan, and cabin width to the engine radius, match them against preset standard model data, and take the model with the maximum matching value as the model of the aircraft to be identified.
2. The aircraft model recognition method based on visual images of claim 1, characterized in that extracting the moving-target edges with the frame-difference method comprises:
S11, capture a series of front-view images of the aircraft to be identified, compute the absolute difference between frame t and frame t-1, compute the standard deviation of frame t, and use 1/4 of the standard deviation as the threshold separating the moving target from noise, obtaining the binary image of the moving target:
Mx,y = 1 if abs(I(t)(x, y) - I(t-1)(x, y)) > std(I(t))/4, and Mx,y = 0 otherwise,
where I(t)(x, y) is the gray value at coordinate (x, y) of frame t, std(I(t)) is the standard deviation of the gray values of frame t, Mx,y is the binary image of the moving target, and abs denotes the absolute value.
3. The visual-image-based aircraft model recognition method of claim 2, characterized in that extracting the moving-target edge with the frame-difference method further comprises:
S12, closing the holes in the binary moving-target image with a morphological closing operation: dilate P times with a circular template of radius R, then erode N times, which yields the corrected foreground-target mask.
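Step S12 is a standard morphological closing. The sketch below implements binary dilation and erosion with plain NumPy (a production system would typically call a library such as OpenCV's `morphologyEx`); the template radius and iteration counts are free parameters, here chosen arbitrarily for the example.

```python
import numpy as np

def disk(radius):
    """Circular (disk) template of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y <= radius * radius).astype(np.uint8)

def dilate(img, se):
    """Binary dilation: OR of shifted copies selected by the template."""
    r = se.shape[0] // 2
    padded = np.pad(img, r)
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if se[dy + r, dx + r]:
                out |= padded[r + dy:r + dy + h, r + dx:r + dx + w]
    return out

def erode(img, se):
    """Binary erosion via duality (padding treats outside as foreground)."""
    return 1 - dilate(1 - img, se)

def close_holes(mask, radius, p, n):
    """Claim 3, S12: dilate p times then erode n times with a circular
    template of the given radius (a morphological closing when p == n)."""
    se = disk(radius)
    for _ in range(p):
        mask = dilate(mask, se)
    for _ in range(n):
        mask = erode(mask, se)
    return mask

# A 1-pixel hole inside a solid blob is sealed by the closing.
m = np.ones((7, 7), dtype=np.uint8)
m[3, 3] = 0
closed = close_holes(m, radius=1, p=1, n=1)
```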
4. The visual-image-based aircraft model recognition method of claim 3, characterized in that the engine extraction step S2 comprises:
S21, computing the cumulative gray-level histogram of the visual image under the corrected foreground-target mask, and recording the gray levels at 1% and 99% of the cumulative distribution as the darkest and brightest gray levels of the image;
S22, splitting the image into extremely-dark regions and other regions with a separation threshold, the separation threshold being the proportion of the foreground-target mask occupied by the extremely-dark regions;
S23, performing circle-likeness detection on the extremely-dark regions with a circle decision threshold: extract all outer boundaries of all extremely-dark regions and compute the barycentric coordinates of each boundary from the image moments
m_ji = Σ_(x,y) x^j · y^i,
where (j, i) is set in turn to (0, 0), (1, 0) and (0, 1), giving the barycenter (m_10/m_00, m_01/m_00);
then enumerate all pixels on the boundary of the current extremely-dark region, compute their distances to the barycenter while updating the maximum/minimum distance, and as soon as the maximum distance divided by the minimum distance exceeds the circle decision threshold, judge the region non-circular and proceed to the next extremely-dark region;
S24, paired-engine detection: screen the extremely-dark regions judged circle-like; assuming M circle-like regions are detected, build an M×M upper-triangular matrix S whose elements are
S(i, j) = abs(W_i − W_j − T_ij) * abs(H_i − H_j) * abs(R_i − R_j),
T_ij = 3 * (R_i + R_j),
where S(i, j) is the element in row i, column j of the upper-triangular matrix, abs denotes the absolute value, W_i and W_j are the abscissas of the centers of the i-th and j-th extremely-dark regions, H_i and H_j are their ordinates, R_i and R_j are the radii of the i-th and j-th circle-like regions, and T_ij is the minimum spacing of two engines; the subscripts i and j of the minimum element of S index the detected engine pair.
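The circle-likeness test of S23 and the pairing matrix of S24 can be sketched as below. This is a minimal illustration: the barycenter is computed as the mean of the boundary points (the boundary-moment form of m_10/m_00, m_01/m_00), the circle decision threshold is a free parameter, and the input regions are assumed to be already extracted.

```python
import numpy as np

def circularity_ok(boundary_pts, circle_thresh):
    """Claim 4, S23: a region is circle-like when the ratio of the
    maximum to the minimum distance from its outer boundary to its
    barycenter stays below the circle decision threshold."""
    pts = np.asarray(boundary_pts, dtype=float)
    centroid = pts.mean(axis=0)     # (m10/m00, m01/m00) over the boundary
    d = np.linalg.norm(pts - centroid, axis=1)
    return bool(d.max() / d.min() <= circle_thresh)

def pair_engines(centers, radii):
    """Claim 4, S24: fill the upper-triangular matrix
    S(i, j) = |Wi - Wj - Tij| * |Hi - Hj| * |Ri - Rj|, Tij = 3*(Ri + Rj);
    the indices of the minimum element are the detected engine pair."""
    m = len(centers)
    S = np.full((m, m), np.inf)
    for i in range(m):
        for j in range(i + 1, m):
            (wi, hi), (wj, hj) = centers[i], centers[j]
            tij = 3.0 * (radii[i] + radii[j])
            S[i, j] = (abs(wi - wj - tij)
                       * abs(hi - hj) * abs(radii[i] - radii[j]))
    return np.unravel_index(int(np.argmin(S)), S.shape)
```

Intuitively, the minimum of S picks the two regions with the most similar heights and radii whose horizontal separation is closest to the expected engine spacing.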
5. The visual-image-based aircraft model recognition method of claim 4, characterized in that the engine extraction step S2 further comprises:
S25, re-detection: if steps S22-S24 fail to find the engines, enlarge the separation threshold and the circle decision threshold by one step each and repeat steps S22-S24.
6. The visual-image-based aircraft model recognition method of claim 1, 2, 3 or 5, characterized in that step S3 comprises:
S31, computing the longitudinal gradient G in a region of interest taken, according to the type database, as the region extending 1.5 engine heights above the engine tops and 8 engine diameters in width; the gradient formula is
G_y(x, y) = 2*I(x, y) − I(x, y−1) − I(x, y+1),
where G_y(x, y) is the longitudinal gradient at coordinate (x, y) and I(x, y) is the gray value at that coordinate;
S32, fitting the straight line along the wing in point-slope form;
S33, scanning the gradients crossed by the wing line from the aircraft body outward toward the wingtip, computing the overall mean gradient along the way and the mean gradient of the 5 most recent pixels; if the mean of the 5 most recent pixels falls below 1/3 of the overall mean, the scan position has passed the wingtip, so back up 5 pixels and take that position as the tip of the current line.
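The gradient operator of S31 and the outward tip scan of S33 can be sketched as follows. A minimal illustration under assumptions: the image rows are the y axis, the gradient samples along the fitted line are supplied as a 1-D sequence, and the Hough fit of S32 is taken as given.

```python
import numpy as np

def longitudinal_gradient(img):
    """Claim 6, S31: Gy(x, y) = 2*I(x, y) - I(x, y-1) - I(x, y+1),
    with y as the row index (border rows are left at 0)."""
    I = img.astype(np.float64)
    g = np.zeros_like(I)
    g[1:-1, :] = 2 * I[1:-1, :] - I[:-2, :] - I[2:, :]
    return g

def find_wingtip(grad_along_line):
    """Claim 6, S33: scan gradient samples taken along the fitted wing
    line from the fuselage outward; when the mean of the 5 most recent
    samples drops below 1/3 of the running overall mean, the scan has
    passed the tip, so back up 5 samples."""
    total = 0.0
    for k, g in enumerate(grad_along_line):
        total += g
        if k >= 4:
            recent = sum(grad_along_line[k - 4:k + 1]) / 5.0
            if recent < (total / (k + 1)) / 3.0:
                return max(k - 5, 0)
    return len(grad_along_line) - 1   # tip never passed within the samples
```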
7. The visual-image-based aircraft model recognition method of claim 6, characterized in that step S3 further comprises:
S34, in night images the wingtip carries a lit signal lamp; if the scan passes through 2 consecutive markedly highlighted pixels, that highlighted position is taken as the wingtip position.
8. The visual-image-based aircraft model recognition method of claim 6, characterized in that step S3 further comprises:
S35, if the wingtip on one side of the aircraft is occluded by the boarding bridge, exploiting the symmetry of the wings: with the perpendicular bisector of the line joining the two engine centers as the axis of symmetry, construct a virtual wing on the occluded side of the aircraft and compute its parameters.
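The reflection used in S35 reduces to mirroring a point across the perpendicular bisector of the engine-to-engine segment. A minimal sketch, assuming 2-D image coordinates and that the two engine centers are already known:

```python
import numpy as np

def mirror_tip(visible_tip, engine_a, engine_b):
    """Claim 8, S35: reflect the visible wingtip across the perpendicular
    bisector of the segment joining the two engine centers (the
    aircraft's axis of symmetry)."""
    p = np.asarray(visible_tip, dtype=float)
    a = np.asarray(engine_a, dtype=float)
    b = np.asarray(engine_b, dtype=float)
    mid = (a + b) / 2.0
    u = (b - a) / np.linalg.norm(b - a)   # unit vector engine_a -> engine_b
    # Reflection across the bisector negates the component along u.
    v = p - mid
    return tuple(p - 2.0 * np.dot(v, u) * u)
```

With the engines at (-2, 0) and (2, 0), the bisector is the vertical axis, so a visible tip at (10, 3) maps to a virtual tip at (-10, 3).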
9. The visual-image-based aircraft model recognition method of claim 1, 2, 3, 5, 7 or 8, characterized in that step S4 comprises:
S41, taking as region of interest the region between the two engine centers with a height of 7 engine radii, and computing the transverse gradient G_x within it:
G_x(x, y) = 2*I(x, y) − I(x−1, y) − I(x+1, y),
where G_x(x, y) is the transverse gradient at coordinate (x, y) and I(x, y) is the gray value at that coordinate;
S42, within the moving-target mask of step S1, computing the histogram of G_x(x, y), setting the largest 30% of the gradients to 1 and the rest to 0 to form a binary map; counting the areas of all connected regions of 1-pixels and filtering out any region with area below 50; and applying a closing operation to the de-noised binary map so that the cabin edges join up fully;
S43, at the mean wingtip height extracted in step S3, scanning pixels inward from both sides; when a pixel lies within the moving-target mask and its value in the binary map is 1, stop scanning; the two stopping points are the left and right cabin edges.
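The core of S41-S43 can be sketched as below. This is a simplified illustration: the connected-component area filter and the closing of S42 are omitted, the 30% cut is taken as a quantile of the gradient magnitude, and the inward scan operates on a single pre-extracted row at the wingtip height.

```python
import numpy as np

def transverse_gradient(img):
    """Claim 9, S41: Gx(x, y) = 2*I(x, y) - I(x-1, y) - I(x+1, y),
    with x as the column index (border columns are left at 0)."""
    I = img.astype(np.float64)
    g = np.zeros_like(I)
    g[:, 1:-1] = 2 * I[:, 1:-1] - I[:, :-2] - I[:, 2:]
    return g

def strong_edge_map(gx, keep_frac=0.30):
    """Claim 9, S42: keep the strongest `keep_frac` of |Gx| as 1-pixels."""
    mag = np.abs(gx)
    cut = np.quantile(mag, 1.0 - keep_frac)
    return (mag > cut).astype(np.uint8)

def cabin_edges(edge_row, left_tip, right_tip):
    """Claim 9, S43: scan inward from each wingtip along the row at the
    mean wingtip height; the first edge pixel met on each side is taken
    as a cabin edge."""
    left = next(x for x in range(left_tip, right_tip + 1) if edge_row[x])
    right = next(x for x in range(right_tip, left_tip - 1, -1) if edge_row[x])
    return left, right
```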
10. The visual-image-based aircraft model recognition method of claim 9, characterized in that step S5 comprises:
S51, storing the engine radius, engine spacing, wingspan and cabin width of all target types;
S52, from the results of the engine extraction, wingtip position detection and cabin width detection, obtaining the mean engine radius together with the engine spacing, the distance between the two wingtips and the cabin width; dividing the engine spacing, wingtip distance and cabin width each by the mean engine radius to form a three-dimensional feature vector F, which is compared with the information stored in step S51; in the comparison formula, S_t is the similarity between type t and the aircraft in the image, F_i is the i-th feature extracted from the image, and F_i^t is the i-th feature of type t in the database; the type with the maximum similarity is the identified type.
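The normalization and matching of S51-S52 can be sketched as follows. Note the patent's exact similarity formula is not reproduced in this text, so the per-feature ratio score below is a hypothetical stand-in, and the two-entry type database is invented for the example.

```python
import numpy as np

def identify_type(engine_radius, engine_gap, wingspan, cabin_width, database):
    """Claim 10: normalize engine spacing, wingspan and cabin width by
    the mean engine radius and compare the 3-feature vector F with each
    stored type; a per-feature ratio score (in (0, 1], 1 = identical)
    substitutes for the patent's unreproduced similarity formula."""
    F = np.array([engine_gap, wingspan, cabin_width]) / engine_radius
    best_type, best_score = None, -np.inf
    for name, feats in database.items():
        Ft = np.asarray(feats, dtype=float)
        score = float(np.mean(np.minimum(F, Ft) / np.maximum(F, Ft)))
        if score > best_score:
            best_type, best_score = name, score
    return best_type, best_score

# Hypothetical dimensionless type database: (spacing, span, cabin) / radius.
db = {"A320-like": (8.0, 24.0, 2.8), "B747-like": (12.0, 30.0, 3.2)}
```

Dividing by the engine radius makes the features dimensionless, so the match is insensitive to the camera's distance from the aircraft.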
CN201410377473.3A 2014-08-01 2014-08-01 Visual-image-based aircraft model recognition method Active CN105335688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410377473.3A CN105335688B (en) 2014-08-01 2014-08-01 Visual-image-based aircraft model recognition method

Publications (2)

Publication Number Publication Date
CN105335688A CN105335688A (en) 2016-02-17
CN105335688B true CN105335688B (en) 2018-07-13

Family

ID=55286205

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921219B (en) * 2018-07-03 2020-06-30 中国人民解放军国防科技大学 Model identification method based on target track

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509091A (en) * 2011-11-29 2012-06-20 北京航空航天大学 Airplane tail number recognition method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Feature selection and extraction of aircraft targets in visible-light images; Wang Shuguo et al.; Journal of Harbin Institute of Technology; 2010-07-31; Vol. 42, No. 7; 1056-1059 *
Aircraft recognition based on closed-contour extraction and partial feature matching; Zhang Mingcheng et al.; Computer Simulation; 2006-11-30; Vol. 23, No. 11; 193-197 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant