CN107392963A - Eagle-eye-inspired moving target localization method for autonomous hose-and-drogue aerial refueling - Google Patents

Eagle-eye-inspired moving target localization method for autonomous hose-and-drogue aerial refueling

Info

Publication number
CN107392963A
CN107392963A (application CN201710506141.4A; granted as CN107392963B)
Authority
CN
China
Prior art keywords: image, point, region, gray level, texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710506141.4A
Other languages
Chinese (zh)
Other versions
CN107392963B (en)
Inventor
段海滨 (Duan Haibin)
王晓华 (Wang Xiaohua)
邓亦敏 (Deng Yimin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN201710506141.4A
Publication of CN107392963A
Application granted
Publication of CN107392963B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G06T 7/90: Determination of colour characteristics

Abstract

The present invention is an eagle-eye-inspired moving target localization method for autonomous hose-and-drogue aerial refueling. The implementation steps are: Step 1: computation of eagle-inspired optic tectum cell responses; Step 2: eagle-inspired nuclei-based texture suppression and saliency map extraction; Step 3: color threshold segmentation; Step 4: region-of-interest extraction; Step 5: acquisition of drogue marker point coordinates; Step 6: marker point matching; Step 7: camera parameter calibration; Step 8: drogue pose measurement. The proposed method accurately extracts the refueling drogue during hose-and-drogue refueling of an unmanned aerial vehicle and accurately determines the drogue position, with high accuracy and robustness.

Description

Eagle-eye-inspired moving target localization method for autonomous hose-and-drogue aerial refueling
I. Technical Field
The present invention is an eagle-eye-inspired moving target localization method for autonomous hose-and-drogue aerial refueling, and belongs to the field of computer vision technology.
II. Background
Autonomous aerial refueling is a focus of research on aircraft autonomy and intelligence. The technology not only extends the combat radius and mission effectiveness of unmanned aerial vehicles (UAVs), but also improves the safety and operability of aerial refueling for manned aircraft. Under severe weather conditions in particular, autonomous aerial refueling can substantially reduce the technical difficulty and workload faced by a pilot during refueling. In April 2011, Northrop Grumman, the U.S. Defense Advanced Research Projects Agency, and the NASA Dryden Flight Research Center completed a partner-style aerial refueling test at an altitude of 13,716 m between a modified unmanned demonstrator and a Global Hawk, opening the era of automated aerial refueling for UAVs. In April 2015, the U.S. Navy X-47B accomplished the first autonomous aerial refueling docking in history, successfully inserting its probe into the tanker's hose-and-drogue assembly.
Precision navigation between the tanker and the receiver aircraft is a key technology and research focus of autonomous aerial refueling. The aerial refueling navigation technologies currently studied at home and abroad mainly include inertial navigation systems, the Global Positioning System (GPS), and vision navigation systems. The position error of an inertial navigation system accumulates over time and must be corrected with other navigation systems. GPS navigation is widely used, relatively mature, and simple to apply, but because it depends entirely on the reception of satellite signals, it relies heavily on external signals. Moreover, in hose-and-drogue refueling the hose and the drogue it carries swing under aerodynamic disturbance, so the tanker-receiver position obtained from differential GPS and inertial navigation cannot accurately give the relative position between the refueling drogue and the receiver's refueling receptacle. Relative navigation based on bionic vision can directly measure the relative position between the drogue and the receiver aircraft, providing accurate navigation information for UAV aerial refueling.
Many biological systems in nature possess extraordinary environmental perception; applying their mechanisms to relative navigation for autonomous aerial refueling can simultaneously ensure high accuracy, real-time performance, and robustness. The eagle eye is an excellent biological visual information processing system. The present invention applies the visual processing mechanisms of the eagle eye to bionic visual localization to improve the accuracy of target detection and, in turn, the precision of visual localization, providing accurate relative navigation for hose-and-drogue autonomous aerial refueling. The visual system of eagles and falcons exhibits a pop-out mechanism that locks visual attention onto the most valuable target region, greatly increasing the image analysis speed and target acquisition accuracy of the visual system. In the eagle retina, mutual inhibition exists among the retinal ganglion cells; this widespread lateral inhibition limits the response range of each cell while efficiently encoding and integrating information, and the resulting code is passed to the next layer. A tectal pathway runs from the retina to the brain; the nucleus isthmi and tectal cells along this pathway receive stimuli from the ganglion cells and other lower-level cells and further integrate them, filtering out invalid information and noise and extracting target information for subsequent target detection and localization.
Starting from the visual processing mechanism of the eagle and the interactions among the nuclei of the eagle brain, the present invention simulates the eagle's visual attention mechanism to extract the approximate region containing the refueling drogue, then applies color segmentation to the drogue region for effective feature extraction, and finally uses a pose estimation algorithm to measure the relative position between the tanker's drogue and the receiver's vision system. In addition, an aerial verification platform has been built to validate the proposed bionic visual localization method for hose-and-drogue autonomous aerial refueling.
III. Summary of the Invention
1. Objective:
The present invention proposes an eagle-eye-inspired moving target localization method for autonomous hose-and-drogue aerial refueling. Its purpose is to provide an accurate relative navigation scheme for hose-and-drogue aerial refueling, supply reliable relative position measurements to the autonomous refueling system, improve the autonomy of aerial refueling relative navigation, reduce the dependence on external signals such as satellites, lower the accident rate during docking, and advance current aerial refueling technology.
2. Technical solution:
Aiming at the relative navigation requirements of the close-range docking phase of hose-and-drogue aerial refueling, the present invention proposes a robust and highly accurate eagle-eye-inspired moving target localization method and designs an aerial verification platform system, whose composition is shown in Fig. 1. Starting from the visual processing mechanism of the eagle and the interactions among the nuclei of the eagle brain, the method simulates the eagle's visual attention mechanism to extract the approximate region containing the refueling drogue, then applies color segmentation to the drogue region for effective feature extraction, and finally uses a pose estimation algorithm to measure the relative position between the tanker's drogue and the receiver's vision system. The method proceeds as follows:
Step 1: Computation of eagle-inspired optic tectum cell responses
An efficient coding mechanism modeled on the optic tectum cells of the eagle is established to reproduce the efficiency and sparseness of their coding and to obtain effective saliency information from the image. Assume that any image I can be represented by a linear combination of a series of image bases Bk:

I = Σk ak·Bk (1)

where the image bases Bk are trained from a large number of natural images and capture the information shared among them, and ak is the coefficient of base Bk. These coefficients are sparse and can be obtained from:

ak = Ck ∗ I (2)

where Ck, called the coding filter, is the inverse (or pseudoinverse) of the image base Bk, and ∗ denotes a filtering operation. Filtering any image with these filters yields the optic tectum cell responses, which are sparse: most responses are 0 and only a small fraction are large. This behavior agrees with physiological studies of optic tectum cells.
In the hose-and-drogue refueling scene, the optic tectum cell responses corresponding to background regions are similar to one another, while the responses of the drogue region differ greatly from those of the background. A cell's response is jointly determined by the input image and the receptive field, and optic tectum receptive fields are strongly selective for orientation and edge information. A large response for a given receptive field indicates that the orientation and edge content of the image block match that receptive field's selectivity, so the maximum response and its corresponding receptive field describe the principal information of the block. The maximum responses of background blocks are much alike and differ markedly from those of the drogue region. Background regions are generally flat with weak edges, so no single receptive field response greatly exceeds the others. By contrast, the drogue region is rich in edge information with strong directionality, and its response is typically concentrated on one receptive field that greatly exceeds the rest. The present invention therefore uses the maximum response of each image block to describe the probability that the block belongs to the drogue region.
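The per-block maximum response described above can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the filter bank here is random noise standing in for the trained image bases, while the 128-kernel, 14 × 14 dimensions follow the figures given later in the embodiment.

```python
import numpy as np

def max_patch_responses(image, filters):
    """For each non-overlapping patch, return the largest absolute
    filter response, used here as a per-patch 'drogue likelihood'."""
    k = filters.shape[1]                      # patch side length
    h, w = image.shape
    rows, cols = h // k, w // k
    flat_bank = filters.reshape(len(filters), -1)
    responses = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = image[r*k:(r+1)*k, c*k:(c+1)*k].ravel()
            responses[r, c] = np.max(np.abs(flat_bank @ patch))
    return responses

rng = np.random.default_rng(0)
bank = rng.standard_normal((128, 14, 14))     # stand-in for trained coding filters
img = rng.random((140, 140))                  # stand-in for a camera frame
resp = max_patch_responses(img, bank)
print(resp.shape)  # (10, 10)
```

In the real pipeline the bank would come from sparse coding on natural images, so that flat background patches yield uniformly moderate responses while the edge-rich drogue patch dominates.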
Step 2: Eagle-inspired nuclei-based texture suppression and saliency map extraction
Extensive lateral inhibition exists among the receptive fields of cells in the eagle's visual system: when a stimulus arrives, a central cell is inhibited by the surrounding cells, and this inhibition translates into improved interference rejection for the algorithm. The present invention also considers the texture homogeneity of background regions. When the texture of an image block is highly consistent with that of the surrounding blocks, the block is likely background, so it is assigned a large texture suppression coefficient and suppressed. Conversely, when the consistency between the block and its surroundings is weak, the block is likely part of the drogue region, so it is assigned a small suppression coefficient.
Texture homogeneity is computed with grey-level co-occurrence matrices, defined as follows. Suppose the image to be analyzed has Nx pixels horizontally and Ny pixels vertically, and the number of grey levels is G. Let X = {1, 2, ..., Nx} be the horizontal pixel coordinates, Y = {1, 2, ..., Ny} the vertical pixel coordinates, and N = {0, 1, ..., G} the quantized grey levels. The original image can then be expressed as a mapping f: X × Y → N from horizontal and vertical coordinates to grey level. The statistics of pixel pairs separated by a given distance along a given direction reflect the texture characteristics of the image, so a matrix describing the grey-level statistics of each pixel pair can be formed; this is the grey-level co-occurrence matrix, denoted W.
Any point (x, y) in the image, together with a pixel (x + a, y + b) at a fixed offset from it, forms a pixel pair with grey values (i, j); that is, pixel (x, y) has grey value i and pixel (x + a, y + b) has grey value j. Fixing a and b and moving (x, y) over the entire image yields all possible (i, j) values. If the number of grey levels is G, there are G² combinations of i and j. Counting the frequency of each combination over the whole image gives P(i, j, d, θ), and the square matrix [P(i, j, d, θ)]G×G is the grey-level co-occurrence matrix. The co-occurrence matrix is essentially the joint histogram of two pixels; choosing different offsets (a, b) yields the co-occurrence matrices of the image for different separations d along different directions θ.
In the present invention, a = b = 2, θ = [0°, 45°, 90°, 135°], and the quantized grey levels number 8, so the co-occurrence matrices form an 8×8×4 array. The co-occurrence matrix reflects information such as the grey-level variation of the image over direction, adjacent interval, and amplitude of change. The raw co-occurrence matrix is generally not used directly to analyze the local patterns and arrangement rules of the image; instead, second-order statistics are derived from it. Before the characteristic parameters are computed, the matrix is normalized by:

P(i, j, d, θ) ← P(i, j, d, θ) / R (3)

where R is the normalization constant, equal to the sum of all elements of the co-occurrence matrix.
The second-order statistics used in the present invention are contrast and entropy. Contrast is defined as:

W1 = Σi Σj [(i − j)² × P²(i, j, d, θ)] (4)

Contrast is the moment of inertia about the main diagonal of W; it measures the distribution of the matrix values and the local variation of the image. A larger W1 indicates stronger texture contrast, a clearer image, and a more pronounced texture effect.
Entropy is defined as:

W2 = −Σi Σj P(i, j, d, θ) × log10 P(i, j, d, θ) (5)

Entropy represents the information content of the image; it measures the randomness of the image content and characterizes the complexity of the texture. Entropy is 0 when the image has no texture and is maximal when the image is full of texture.
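Formulas (3) to (5) can be sketched directly. This is a small illustrative implementation for one offset (a, b) = (2, 2), assuming 8 quantized grey levels as stated above; the printed checks mirror the zero-entropy property of a texture-free patch.

```python
import numpy as np

def glcm(gray, a, b, levels):
    """Grey-level co-occurrence matrix for pixel pairs offset by (a, b),
    normalised so its entries sum to 1 (the constant R in formula (3))."""
    P = np.zeros((levels, levels))
    h, w = gray.shape
    for y in range(h - b):
        for x in range(w - a):
            P[gray[y, x], gray[y + b, x + a]] += 1
    return P / P.sum()

def contrast(P):
    i, j = np.indices(P.shape)
    return np.sum((i - j) ** 2 * P ** 2)      # formula (4) as printed in the text

def entropy(P):
    nz = P[P > 0]                             # skip zero entries: 0*log 0 := 0
    return -np.sum(nz * np.log10(nz))         # formula (5)

flat = np.zeros((16, 16), dtype=int)          # texture-free patch
noisy = np.random.default_rng(1).integers(0, 8, (16, 16))
print(entropy(glcm(flat, 2, 2, 8)) == 0)      # True: no texture, zero entropy
print(entropy(glcm(noisy, 2, 2, 8)) > 0)      # True: random texture has entropy
```

In the full method this would be evaluated at all four directions θ, stacking an 8×8×4 array per image block.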
To compute the texture suppression coefficient of the image, two windows of different sizes are sampled around each pixel, and the grey-level co-occurrence matrix of each of the two resulting image blocks is computed and normalized using formula (3). The second-order statistics of each matrix are then computed, and the distance between the two sets of statistics describes the texture consistency between the image block and the blocks around it. If the distance between the two sets of second-order statistics is large, the texture of the block differs substantially from its surroundings, so its suppression coefficient is assigned a small value; if the distance is small, the block and its surroundings have similar texture, so the coefficient is assigned a large value. The eagle-inspired texture suppression and saliency extraction designed in the present invention uses the distance between the second-order statistics of the two co-occurrence matrices of each image block as its texture suppression coefficient. The reciprocal of the suppression coefficient is then used as a texture enhancement coefficient and multiplied by the maximum response of each image block to obtain the saliency map of the image; the region of maximal saliency is the drogue region. The saliency map extracted in this eagle-eye-inspired manner filters out part of the background information and reduces the computational load of subsequent processing.
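The suppression coefficient step above can be sketched as follows. The exact mapping from statistic distance to coefficient is not specified in the text, so the reciprocal form 1/(d + ε) used here, the [contrast, entropy] statistic vector, and the ε guard are assumptions; only the ordering (dissimilar texture gets a small suppression coefficient) follows the description.

```python
import numpy as np

def texture_stats(patch, levels=8):
    """[contrast, entropy] of a patch's co-occurrence matrix, offset (2, 2)."""
    P = np.zeros((levels, levels))
    h, w = patch.shape
    for y in range(h - 2):
        for x in range(w - 2):
            P[patch[y, x], patch[y + 2, x + 2]] += 1
    P /= P.sum()
    i, j = np.indices(P.shape)
    c = np.sum((i - j) ** 2 * P ** 2)         # contrast, formula (4)
    nz = P[P > 0]
    e = -np.sum(nz * np.log10(nz))            # entropy, formula (5)
    return np.array([c, e])

def suppression_coefficient(inner, outer, eps=1e-6):
    """Large inner/outer texture distance -> small suppression coefficient
    (assumed reciprocal form; eps avoids division by zero)."""
    d = np.linalg.norm(texture_stats(inner) - texture_stats(outer))
    return 1.0 / (d + eps)

rng = np.random.default_rng(2)
textured = rng.integers(0, 8, (24, 24))       # edge-rich, drogue-like block
flat = np.zeros((24, 24), dtype=int)          # homogeneous background block
# A flat patch inside a flat surround is background: heavily suppressed.
assert suppression_coefficient(flat, flat) > suppression_coefficient(textured, flat)
```

The saliency map would then be each block's maximum filter response divided by this coefficient (i.e. multiplied by its reciprocal).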
Step 3: Color threshold segmentation
In the verification platform designed here, the drogue region is a red annulus. As shown in Fig. 2, an annulus represents the drogue region, and markers pasted on the annulus in green and blue are used for visual localization. The topmost point is blue and serves as the first marker point; the remaining points are green and are numbered in sequence. Saliency map extraction yields the approximate region containing the drogue; on that basis, the present invention applies color threshold segmentation to the salient region to obtain a more precise drogue region. Compared with the RGB (red-green-blue) color space, the HSV (hue-saturation-value) space conforms better to human color perception, separating color into the three components the human eye perceives asymmetrically: hue, saturation, and brightness. The three HSV components express the lightness, tone, and vividness of a color more intuitively. On the basis of the saliency map, the image is thresholded on the two channels H (hue) and S (saturation) to obtain the region containing red objects such as the drogue, and the segmented image is binarized to obtain the segmentation binary map.
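The H/S thresholding above can be sketched with the standard library's `colorsys`. The specific hue tolerance and saturation floor are illustrative assumptions (red hue wraps around 0, so both ends of the hue circle are accepted); a real implementation would tune them to the drogue paint and lighting.

```python
import colorsys

def is_red(r, g, b, h_tol=0.05, s_min=0.5):
    """Hue/saturation threshold for 'drogue red'; RGB inputs in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return (h < h_tol or h > 1.0 - h_tol) and s >= s_min

def segment_red(rgb_rows):
    """Binary map: 1 where the pixel passes the red H/S threshold."""
    return [[1 if is_red(*px) else 0 for px in row] for row in rgb_rows]

row = [(0.9, 0.1, 0.1),   # red   -> 1
       (0.1, 0.8, 0.1),   # green -> 0
       (0.5, 0.5, 0.5)]   # grey, low saturation -> 0
print(segment_red([row]))  # [[1, 0, 0]]
```

Thresholding H and S while ignoring V gives some robustness to the brightness changes expected in flight.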
Step 4: Region-of-interest extraction
Because the region extracted by thresholding may be incomplete and may contain noise, morphological operations are first applied to the binary image obtained from the initial HSV thresholding, and the outer contour of each red region is extracted, in order to obtain the region of interest of the original image. Let the contour point set of the i-th region be qi, with the image coordinates of its m-th contour point denoted accordingly. Sorting the two coordinate dimensions of each region's contour points gives the maximum and minimum contour point coordinates of each region and hence its bounding rectangle, which is taken as the region of interest (ROI) and expressed as ROIi = (ui, vi, wi, hi), where ui and vi are the image coordinates of the top-left corner of the rectangular ROI and wi and hi are its width and height; these uniquely determine the bounding rectangle of each region.
The red regions occupy only a small fraction of the pixels in the image, so further processing of the extracted ROIs consumes far fewer computing resources than operating on the full image, increasing the computation speed. Because the designed drogue contains green and blue marker points inside the red region, holes appear in the drogue region of the binary map produced by red thresholding, so the holes of each contour region must be filled while the ROI is extracted. As shown in Fig. 2, the drogue is an annulus; to avoid also filling the inner ring, each hole region inside every segmented red region is examined, and contours whose area exceeds an area threshold are left unfilled, yielding the correct red ROI.
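The bounding-rectangle construction of the ROI reduces to taking coordinate extremes over the contour points, as in this minimal sketch (pure Python; contour points are assumed already extracted as (u, v) pairs):

```python
def bounding_roi(contour_points):
    """Axis-aligned bounding rectangle (u, v, w, h) of one region's contour
    points, with (u, v) the top-left corner in image coordinates."""
    us = [p[0] for p in contour_points]
    vs = [p[1] for p in contour_points]
    u, v = min(us), min(vs)
    return (u, v, max(us) - u, max(vs) - v)

ring = [(40, 10), (10, 40), (40, 70), (70, 40)]   # contour of a ring-like blob
print(bounding_roi(ring))  # (10, 10, 60, 60)
```

All later processing (hole filling, marker thresholding) would then index only this rectangle instead of the full frame.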
Step 5: Acquisition of drogue marker point coordinates
The present invention pastes blue and green circular marker points on the drogue region. When the drogue is close to the receiver aircraft, whether each ROI contains blue and green discs determines whether the region is the drogue region. HSV thresholding of the blue channel and the green channel is applied to each ROI to determine whether the red region contains blue or green discs, thereby eliminating non-target red distractors and finding the drogue region of interest.
After the drogue region is detected, the center point of each circular marker must be extracted. First, the grey-level image is segmented into a set of binary images by successive thresholds of constant step: if the threshold range is [T1, T2] with step t, the thresholds are T1, T1 + t, T1 + 2t, ..., T2. Second, the boundary of each binary image is extracted, its connected regions are detected, and the center image coordinates of each connected region are extracted. Third, the connected-region centers of all the binary images are collected; if the center distance between connected regions of different binary images is below a threshold, those binary-image blobs belong to the same grey-image blob. Finally, the image coordinates and size of each grey-image blob are determined. The position of a blob in the grey-level image is obtained by weighting the center coordinates of all its corresponding binary-image blobs, as in formula (6):

c = Σi qi·ci / Σi qi (6)

where qi is the inertia ratio of the i-th binary-image blob (a shape measure that is maximal for a circular blob) and ci its center; the closer a binary-image blob is to circular, the larger its contribution to the grey-image blob position. The size of the grey-image blob is the median of the radii of all its binary-image blobs.
During blob extraction from the binary images, spurious points can be filtered out by constraining the shape, area, and color of the blobs. Because the markers on the drogue are circular, the present invention sets area and circularity thresholds on the blobs to reject spurious points. Each blob corresponds to one green or blue marker point on the red annulus of the drogue. In the marker extraction procedure of the present invention, the input is the image after HSV thresholding, so no further binarization is needed; connected-region detection is applied directly to the input image, non-circular spurious points are rejected by the circularity and area thresholds, and the center image coordinates of each marker point are output.
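The shape-weighted center of formula (6) can be sketched in isolation, assuming the per-slice blob matching has already produced (weight, center) pairs; the connected-component and radius-median machinery is omitted here.

```python
def weighted_blob_center(slice_blobs):
    """slice_blobs: list of (q, (x, y)) pairs, the inertia-ratio weight q and
    centre of the matching blob in each threshold slice; returns the
    grey-image blob centre as the weighted mean of formula (6)."""
    total = sum(q for q, _ in slice_blobs)
    x = sum(q * c[0] for q, c in slice_blobs) / total
    y = sum(q * c[1] for q, c in slice_blobs) / total
    return (x, y)

slices = [(1.0, (10.0, 10.0)),   # near-circular slice: large weight
          (0.5, (13.0, 10.0))]   # ragged slice: small weight
print(weighted_blob_center(slices))  # (11.0, 10.0)
```

The more circular slice pulls the final center toward its own estimate, which is the behavior the text attributes to the inertia-ratio weighting.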
Step 6: Marker point matching
Before pose measurement, the one-to-one correspondence between the extracted image coordinate points and the actual circular marker points must be determined. The extraction method of Step 5 yields the image coordinates of the blue marker and the green markers, but it cannot distinguish markers of different numbers; a feature point matching algorithm is therefore required to solve the correspondence problem.
The present invention identifies the feature points with a simple method: taking the blue circular marker as the first point, the green dot closest in Euclidean distance to the first marker on the imaging plane is the second marker. Excluding markers already numbered, the point closest to the second marker is the third marker, and so on, until all markers are numbered.
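The greedy nearest-neighbor numbering described above can be sketched as follows (pure Python; points are image-plane coordinates, with the blue marker assumed already identified by its color):

```python
import math

def order_markers(blue_point, green_points):
    """Number the markers: the blue dot is marker 1; each subsequent marker
    is the unvisited green dot nearest to the previously numbered one."""
    ordered, current = [blue_point], blue_point
    remaining = list(green_points)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(p, current))
        remaining.remove(nxt)
        ordered.append(nxt)
        current = nxt
    return ordered

blue = (0.0, 0.0)
greens = [(5.0, 0.0), (1.0, 0.0), (3.0, 0.0)]
print(order_markers(blue, greens))
# [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0), (5.0, 0.0)]
```

This works because the markers lie on a ring, so walking to the nearest unvisited neighbor traverses them in angular order; a PnP solver can then pair each image point with its known 3D marker position.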
Step 7: Camera parameter calibration
A black-and-white checkerboard calibration board is made, with squares of known side length. The checkerboard is photographed with the vision sensor from different angles and depths, which reduces calibration error and yields more accurate camera intrinsics. In the calibration experiment, the camera captures images of the checkerboard at different angles; after the checkerboard corners of each calibration image are extracted, the camera model can be computed. Because the lens distortion is small, the present invention considers only the radial distortion of the camera; the camera intrinsics and distortion coefficients are obtained by calibrating the camera with the MATLAB 2015a toolbox.
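The camera model recovered by this calibration, a pinhole with radial distortion, can be sketched as follows. The intrinsic values and two-term distortion polynomial here are illustrative assumptions, not the calibrated values of the platform's camera.

```python
import numpy as np

def project(points_cam, fx, fy, cx, cy, k1, k2):
    """Pinhole projection with two radial distortion terms, the camera
    model whose parameters the checkerboard calibration recovers."""
    X, Y, Z = points_cam.T
    x, y = X / Z, Y / Z                       # normalised image coordinates
    r2 = x**2 + y**2
    d = 1 + k1 * r2 + k2 * r2**2              # radial distortion factor
    return np.stack([fx * x * d + cx, fy * y * d + cy], axis=1)

pts = np.array([[0.0, 0.0, 2.0],              # on the optical axis
                [0.2, 0.1, 2.0]])             # off-axis: distortion applies
uv = project(pts, fx=800, fy=800, cx=640, cy=360, k1=-0.1, k2=0.01)
print(uv[0])  # maps to the principal point: [640. 360.]
```

Calibration fits (fx, fy, cx, cy, k1, k2) so that projected checkerboard corners match the ones detected in the calibration images.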
Step 8: Drogue pose measurement
For the hose-and-drogue refueling problem, the present invention assumes the camera is mounted at a specific location on the receiver aircraft. To obtain the relative position of the drogue with respect to the refueling receptacle, the relative position is solved from the marker position information and the camera imaging model. The present invention measures the refueling drogue pose with the Robust Perspective-n-Point (RPnP) algorithm, which obtains the solution of the PnP problem by constructing a seventh-order polynomial as the cost function. The overall flow of the invention is shown in Fig. 3.
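RPnP itself is involved, but the quantity every PnP solver minimizes, the reprojection error between observed markers and markers reprojected through a candidate pose (cf. the re-projection error curve of Fig. 7), can be sketched compactly. The pose, intrinsics, and marker layout below are illustrative assumptions.

```python
import numpy as np

def reprojection_error(pts_world, pts_image, R, t, K):
    """Mean pixel distance between observed marker points and the markers
    reprojected through a candidate pose (R, t); a PnP solver such as
    RPnP drives this towards zero."""
    cam = (R @ pts_world.T).T + t             # world -> camera frame
    proj = (K @ (cam / cam[:, 2:3]).T).T[:, :2]
    return np.mean(np.linalg.norm(proj - pts_image, axis=1))

K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])   # drogue 2 m ahead of the camera
world = np.array([[0.3, 0.0, 0.0], [0.0, 0.3, 0.0], [-0.3, 0.0, 0.0]])
obs = (K @ (((R @ world.T).T + t) / 2.0).T).T[:, :2]   # noise-free observations
print(reprojection_error(world, obs, R, t, K))  # 0.0 for the true pose
```

With real, noisy marker detections the true pose gives a small but nonzero error, and the solved translation t is exactly the drogue-to-receptacle relative position the navigation loop needs.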
3. Advantages and effects:
The present invention proposes an eagle-eye-inspired moving target localization method for autonomous hose-and-drogue aerial refueling, providing a practical solution for drogue detection and pose measurement in the visual navigation of autonomous hose-and-drogue refueling. The method emulates the eagle's visual processing system: it computes saliency on the image obtained by the vision sensor, extracts the region of interest with color segmentation, then performs feature point extraction and matching, and finally achieves accurate localization of the refueling drogue through pose estimation. The method has strong autonomy, little dependence on external signals, and high robustness and accuracy, greatly improving the safety and reliability of autonomous hose-and-drogue aerial refueling.
IV. Brief Description of the Drawings
Fig. 1: Architecture of the hose-and-drogue autonomous aerial refueling verification platform.
Fig. 2: Relation between the refueling drogue marker points and world coordinates.
Fig. 3: Flow of the eagle-eye-inspired moving target localization method for hose-and-drogue autonomous aerial refueling.
Fig. 4: X-axis position measurements.
Fig. 5: Y-axis position measurements.
Fig. 6: Z-axis position measurements.
Fig. 7: Re-projection error curve.
V. Detailed Description
The validity of the method designed in the present invention is verified below through a drogue localization example on a concrete aerial verification platform. Two UAVs are used for the test in this example, one as the tanker and one as the receiver; the main composition is shown in Fig. 1. The tanker is a DJI S900 six-rotor aircraft carrying the open-source APM 2.5 flight control system from 3DR; the wireless data link uses the 3DR APM telemetry module, and the onboard processor is a Raspberry Pi. The receiver is also a DJI S900 six-rotor aircraft, equipped with a binocular vision sensor, an onboard vision processor, and digital wireless image transmission equipment. Its flight controller is the open-source Pixhawk from 3DR, the vision processor is an Axiomtek industrial computer, the camera is an acA-series industrial camera made by Basler (Germany), and the wireless image transmission uses the DJI Lightbridge high-definition digital wireless video link. The main configuration of the vision measurement and navigation system is as follows:
(1) Onboard vision processor: PICO880; i7-4650U processor, 1.7 GHz base frequency; 8 GB RAM; 120 GB solid-state drive; size 100 × 72 × 40.3 mm; total weight about 450 g; 4 USB 3.0 interfaces.
(2) Onboard color vision sensor: Basler acA1920-155uc color camera; USB 3.0 interface; resolution 1920 × 1200 pixels; maximum frame rate 164 fps; CCD physical size 2/3 inch; pixel size 3.75 µm × 3.75 µm.
Step 1: Computation of optic tectum cell responses
When a visual scene is input to the optic tectum cells, only a small fraction of the cells respond, i.e., most cells are not activated; an efficient coding mechanism can therefore be established to obtain effective saliency information from the image. Assume that any image I can be represented by a linear combination of a series of image bases Bk:

I = Σk ak·Bk (1)

where the image bases Bk are trained from a large number of natural images and capture the information shared among them, and ak is the coefficient of base Bk. These coefficients are sparse and can be obtained from:

ak = Ck ∗ I (2)

where Ck, called the coding filter, is the inverse (or pseudoinverse) of the image base Bk. The present invention uses 128 filter kernels with clear orientation selectivity as the receptive fields of the eagle optic tectum cells; each receptive-field kernel has size 14 × 14 = 196. Filtering any image with these filters yields the optic tectum cell responses, which are sparse: most responses are 0 and only a small fraction are large, consistent with physiological studies of optic tectum cells.
Under the probe-and-drogue refueling scene, the hawk-eye optic-tectum responses of background regions are similar to one another, while those of the drogue region differ markedly from the background. A cell response is determined jointly by the input image and the receptive field, and optic-tectum receptive fields are strongly selective for orientation and edge information. A large response at a given receptive field indicates that the orientation and edges of the image block match that receptive field's selectivity, so the peak response and its corresponding receptive field describe the main information of the block. Likewise, the peak responses of background blocks resemble each other, whereas the responses of the drogue region differ from them considerably. Background regions are usually flat with weak edges, so their responses do not show one receptive field far exceeding the others. In contrast, the drogue region contains rich, strongly oriented edge information, and its response typically concentrates on one receptive field that greatly exceeds the rest. The present invention therefore uses the peak response of each image block to describe the probability that the block belongs to the drogue region.
Step 2: Hawk-brain-nuclei-inspired texture suppression and saliency map extraction
The hawk visual system contains abundant lateral inhibition between cell receptive fields: when a stimulus arrives, a central cell is inhibited by its surrounding cells, and this inhibition strengthens the algorithm's robustness to interference. The present invention also exploits the texture homogeneity of the background. When an image block's texture is highly homogeneous with the blocks around it, the block is likely background, so it is assigned a large texture-suppression coefficient and suppressed. Conversely, when its texture differs from the surrounding blocks, the block is more likely part of the drogue and is assigned a small suppression coefficient.
Texture homogeneity is computed from the gray-level co-occurrence matrix (GLCM), defined as follows. Suppose the image to be analyzed has Nx pixels horizontally and Ny pixels vertically, with G gray levels. Let X = {1, 2, ..., Nx} be the horizontal pixel coordinates, Y = {1, 2, ..., Ny} the vertical pixel coordinates, and N = {0, 1, ..., G} the quantized gray levels; the image can then be expressed as a mapping f from horizontal and vertical coordinates to gray level, f: X × Y → N. The statistics of pixel gray-level pairs at a given offset and direction reflect the texture characteristics of the image; describing these statistics for every pixel pair with a matrix yields the gray-level co-occurrence matrix, denoted W.
Any pixel (x, y) and the pixel (x+a, y+b) at a fixed offset from it form a pixel pair; let its gray values be (i, j), i.e., pixel (x, y) has gray value i and pixel (x+a, y+b) has gray value j. Fixing a and b and moving (x, y) over the whole image produces all (i, j) values. If the image has G gray levels, there are G² combinations of i and j. Counting the frequency of each combination over the whole image gives P(i, j, d, θ), and the square matrix [P(i, j, d, θ)]G×G is the gray-level co-occurrence matrix. The GLCM is essentially the joint histogram of pixel pairs; choosing different offsets (a, b) gives the GLCM of the image along direction θ at distance d = √(a²+b²).
In the present invention a = b = 2 and θ = [0°, 45°, 90°, 135°], with 8 quantized gray levels, so the GLCM is an 8*8*4 matrix. The GLCM reflects how image gray levels vary with direction, adjacent spacing, and amplitude. To analyze local patterns and arrangement rules, the GLCM obtained above is generally not used directly; second-order statistics are derived from it instead. Before computing these characteristic parameters, the GLCM is normalized using the following formula:
P (i, j, d, θ)=P (i, j, d, θ)/R (3)
where R is the normalization constant, equal to the sum of all elements of the GLCM.
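A minimal sketch of the GLCM computation and the normalization of formula (3), assuming an image already quantized to 8 gray levels. The function name `glcm` and the offset table are illustrative; the four offsets are one possible realization of the a = b = 2 step along the four directions used here.

```python
import numpy as np

def glcm(img, di, dj, levels=8):
    """Normalized gray-level co-occurrence matrix for one displacement
    (di, dj); img is assumed to be quantized to `levels` gray levels."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            i2, j2 = i + di, j + dj
            if 0 <= i2 < h and 0 <= j2 < w:
                g[img[i, j], img[i2, j2]] += 1
    return g / max(g.sum(), 1)   # formula (3): divide by the element sum R

# One possible realization of the a = b = 2 step at the four directions
offsets = {0: (0, 2), 45: (-2, 2), 90: (-2, 0), 135: (-2, -2)}
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
g0 = glcm(img, *offsets[0])   # horizontal direction
```

After normalization the matrix sums to 1 and can be read as the joint histogram of pixel-pair gray levels described above.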
The second-order statistics used in the present invention are contrast and entropy; contrast is defined as follows:
W1 = ∑∑[(i−j)² × P²(i, j, d, θ)] (4)
Contrast is the moment of inertia about the main diagonal of W; it measures how the matrix values are distributed and how the image varies locally. A larger W1 indicates stronger texture contrast: the image is clearer and the texture effect more pronounced.
Entropy is defined as follows:
W2 = −∑∑ P(i, j, d, θ) × log10 P(i, j, d, θ) (5)
Entropy represents the information content of the image and measures the randomness of its content, characterizing the complexity of the texture. The entropy is 0 when the image has no texture and is maximal when the image is full of texture.
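Formulas (4) and (5) can be checked with a short sketch (illustrative function names; `contrast` keeps the squared term P² exactly as written in formula (4)):

```python
import numpy as np

def contrast(P):
    """Formula (4): W1 = sum over i,j of (i - j)^2 * P(i, j)^2."""
    i, j = np.indices(P.shape)
    return float(np.sum((i - j) ** 2 * P ** 2))

def entropy(P):
    """Formula (5): W2 = -sum over i,j of P(i, j) * log10 P(i, j),
    with the convention 0 * log 0 = 0."""
    nz = P[P > 0]
    return float(-np.sum(nz * np.log10(nz)))

# A texture-free patch (all co-occurrence mass in one cell) has zero entropy,
# matching the statement in the text
P_flat = np.zeros((8, 8))
P_flat[0, 0] = 1.0
```

A uniform co-occurrence matrix, the "full texture" extreme, gives the maximal entropy log10 of the number of cells.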
To compute the texture-suppression coefficient of the image, the present invention samples two windows of different sizes around each pixel and computes the GLCM of each of the two resulting image blocks, normalized with formula (3). The second-order statistics of each GLCM are then computed, and the distance between the two statistics describes the texture homogeneity between the block and the blocks around it. A larger distance between the two second-order statistics means the texture of the block differs strongly from its surroundings, so its texture-suppression coefficient should be assigned a small value; conversely, a smaller distance means the block's texture is similar to its surroundings, and its suppression coefficient should be large. The hawk-brain-nuclei texture suppression and saliency extraction designed here uses this distance between the second-order statistics of each block's two GLCMs as its texture-suppression coefficient. The reciprocal of the suppression coefficient is then taken as a texture-enhancement coefficient and multiplied with the peak response of each image block to obtain the saliency map of the image; the region of maximal saliency is the drogue region. The saliency map obtained with this hawk-eye-inspired extraction filters out part of the background and reduces the computation of the subsequent processing.
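The combination of the step-1 peak response with the texture term can be sketched as below. This is a hedged illustration with hypothetical names: `saliency_map` takes precomputed [contrast, entropy] pairs from the small and large windows; the enhancement coefficient multiplied in is the reciprocal of the suppression coefficient, i.e. the statistic distance itself.

```python
import numpy as np

def saliency_map(peak_resp, stats_small, stats_large, eps=1e-6):
    """Peak response times texture-enhancement coefficient.

    The texture-suppression coefficient of a block is the reciprocal of the
    distance between the [contrast, entropy] statistics computed from a small
    and a large window around it, so homogeneous (background-like) blocks are
    suppressed strongly; the enhancement coefficient used here is the
    reciprocal of that suppression, i.e. the statistic distance itself.
    """
    dist = np.linalg.norm(stats_small - stats_large, axis=-1)
    return peak_resp * (dist + eps)

# Two blocks with equal peak responses: block 1, whose texture differs from
# its surroundings, should come out more salient than background-like block 0
resp = np.array([1.0, 1.0])
s_small = np.array([[0.2, 0.3], [0.2, 0.3]])
s_large = np.array([[0.2, 0.3], [0.9, 0.8]])
sal = saliency_map(resp, s_small, s_large)
```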
Step 3: Color threshold segmentation
In the verification platform designed here, the drogue region is a red ring. As shown in Figure 2, the ring represents the drogue region, and green and blue markers pasted on the ring are used for vision positioning. The topmost point is blue and serves as the first marker; the remaining points are green and are numbered in sequence. The hawk-eye saliency extraction yields the approximate region containing the drogue; on the basis of the saliency extraction, color threshold segmentation is applied to the salient region to obtain a more accurate drogue region. Compared with the RGB (red-green-blue) color space, the HSV (hue-saturation-value) space expresses color in a way closer to human perception, separating color into the three components that are perceptually uneven to the human eye: hue, saturation, and value. The three HSV components express brightness, tone, and vividness more intuitively. On the basis of the saliency map, the image is thresholded on the two channels H (hue) and S (saturation) to obtain the region containing red objects such as the drogue, and the segmented image is binarized to produce the segmentation binary map.
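A rough sketch of the H/S thresholding follows. The actual thresholds are not given in the text; the ranges below are illustrative OpenCV-style values (H in [0, 180), S in [0, 255]), and the wrap-around of red hue at H = 0 is handled explicitly.

```python
import numpy as np

def red_mask(hsv, h_lo=170, h_hi=10, s_min=100):
    """Binary mask of saturated red pixels from an HSV image, using
    OpenCV-style ranges; red wraps around H = 0, so the hue test
    accepts H >= h_lo or H <= h_hi."""
    h, s = hsv[..., 0].astype(int), hsv[..., 1].astype(int)
    return ((h >= h_lo) | (h <= h_hi)) & (s >= s_min)

hsv = np.zeros((2, 2, 3), dtype=np.uint8)
hsv[0, 0] = (5, 200, 200)    # saturated red: kept
hsv[1, 1] = (60, 200, 200)   # green: rejected by the hue test
mask = red_mask(hsv)
```

The saturation test also rejects unsaturated (gray) pixels whose hue happens to fall in the red band.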
Step 4: Region-of-interest extraction
Because the region extracted by threshold segmentation may be incomplete and may contain noise, in order to obtain the region of interest of the original image, morphological operations are first applied to the binary image obtained by the first HSV thresholding, and the outer contour of each red region is extracted. Let the contour point set of the i-th region be qi, with the image coordinate of its m-th contour point (um,i, vm,i). Sorting the two coordinate dimensions of each region's contour points gives the maximum and minimum contour coordinates of each region, and hence its bounding rectangle, which serves as the region of interest (ROI), expressed as ROIi = (ui, vi, wi, hi), where ui and vi are the image coordinates of the top-left corner of the ROI rectangle and wi and hi are its width and height, uniquely determining the bounding rectangle of each region.
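The min/max bounding-rectangle construction can be sketched in a few lines (`bounding_roi` is a hypothetical helper name):

```python
def bounding_roi(contour):
    """Axis-aligned bounding rectangle ROI = (u, v, w, h) of a contour,
    where (u, v) is the top-left corner in image coordinates and
    (w, h) the width and height."""
    us = [p[0] for p in contour]
    vs = [p[1] for p in contour]
    u, v = min(us), min(vs)
    return (u, v, max(us) - u, max(vs) - v)

roi = bounding_roi([(10, 5), (30, 5), (30, 25), (10, 25)])
```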
The red regions occupy a very small fraction of the image pixels, so further processing the extracted ROIs occupies far fewer computing resources than operating on the whole image, achieving the goal of faster computation. Because the designed drogue contains green and blue markers inside its red region, holes appear in the drogue region of the binary map produced by red thresholding; therefore the holes of the contour regions must be filled while extracting the ROIs. As shown in Figure 2, the drogue is a ring, and to avoid also filling the inner disc of the ring, each hole inside every segmented red region is checked, and contours exceeding an area threshold are left unfilled, yielding the correct red ROI.
Step 5: Drogue marker coordinate extraction
The present invention pastes blue and green circular markers on the drogue region. When the drogue is close to the receiver, whether an ROI is the drogue region can be judged by whether it contains the blue and green discs. Each ROI region is thresholded in HSV on the blue channel and the green channel respectively, to decide whether the red region contains blue or green discs, eliminating non-target red distractors and locating the drogue region of interest.
After the drogue region is detected, the centers of the circular markers must be extracted. First, the gray image is segmented into a set of binary images using consecutive thresholds with a uniform step: with threshold range [T1, T2] and step t, the thresholds are T1, T1+t, T1+2t, ..., T2. Second, the boundary of every binary image is extracted, its connected regions are detected, and the center image coordinates of the connected regions of each binary image are extracted. Third, the connected-region centers of all binary images are collected; if the center distance between connected regions of different binary images is below a threshold, those binary-image blobs belong to the same gray-image blob. Finally, the image coordinate and size of each gray-image blob are determined. The coordinate of a blob in the gray image is obtained by weighting the centers of all its corresponding binary-image blobs, as shown in formula (6), where qi is the inertia ratio (circularity) of the i-th binary-image blob, so that binary-image blobs closer to circular contribute more to the gray-image blob position. The size of the gray-image blob is the median radius of all its binary-image blobs.
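Formula (6) is not reproduced in the text, but the weighting it describes, each binary-image blob center contributing in proportion to its circularity, can be sketched as follows (illustrative helper name):

```python
def gray_blob_center(binary_centers, circularities):
    """Weighted gray-image blob center: each binary-image blob center
    contributes in proportion to its circularity qi, so rounder binary
    blobs pull the estimate harder."""
    total = sum(circularities)
    u = sum(q * c[0] for q, c in zip(circularities, binary_centers)) / total
    v = sum(q * c[1] for q, c in zip(circularities, binary_centers)) / total
    return (u, v)

# Two binary-image blobs of the same marker; the rounder one (q = 3) dominates
center = gray_blob_center([(10.0, 10.0), (14.0, 10.0)], [1.0, 3.0])
```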
During blob extraction from the binary images, spurious points can be filtered out by constraining blob shape, area, and color. Since the markers on the drogue are circular, the present invention filters spurious points by setting area and circularity thresholds on the blobs. Each blob corresponds to one green or blue marker on the red ring of the drogue. In the marker extraction of the present invention, the input is the image after HSV thresholding, so no extra binarization is needed: connected regions are detected directly on the input image, non-circular spurious points are removed by the circularity and area thresholds, and the center image coordinate of each marker is output.
Step 6: Marker point matching
Before pose measurement, the one-to-one correspondence between the extracted image coordinate points and the actual circular markers must be determined. The extraction method of step 5 yields the image coordinates of the blue and green markers but cannot distinguish markers with different numbers; therefore, a feature-point matching algorithm is required to solve the correspondence problem of the feature points.
The present invention identifies the feature points with a simple rule: the green circular marker is taken as the first point, and the blue point closest to it in Euclidean distance on the imaging plane is the second marker. Excluding the first marker, the point closest to the second marker is the third marker, and so on, until all markers are numbered.
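The chaining rule can be sketched generically; the uniquely colored marker is passed in as the first point, and the names are illustrative:

```python
import math

def order_markers(first_pt, other_pts):
    """Chaining rule of step 6: the uniquely colored marker is point 1;
    each subsequent marker is the not-yet-numbered point nearest, in
    Euclidean distance, to the previously numbered one."""
    ordered, rest = [first_pt], list(other_pts)
    while rest:
        nxt = min(rest, key=lambda p: math.dist(p, ordered[-1]))
        rest.remove(nxt)
        ordered.append(nxt)
    return ordered

ids = order_markers((0.0, 1.0), [(2.0, 0.5), (3.0, 0.0), (1.0, 0.8)])
```

Because markers sit on a ring, each marker's nearest unnumbered neighbor is its successor along the ring, so the chain recovers the circular ordering.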
Step 7: Camera parameter calibration
A black-and-white checkerboard calibration target is first made, with each square 5.8 mm on a side. The Basler vision sensor photographs the checkerboard at different angles and depths, which reduces calibration error and yields more accurate camera intrinsics. In the calibration experiment, images of the checkerboard at different angles are collected, and after the checkerboard corners of each calibration image are extracted, the camera model can be computed. Because the lens distortion is small, the present invention considers only the radial distortion of the camera; the intrinsics and distortion coefficients are obtained by calibrating the camera with the MATLAB 2015a toolbox.
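A one-coefficient radial-distortion projection can serve as a hedged sketch of the camera model being calibrated; fx, fy, cx, cy are the intrinsics and k1 the single radial coefficient, and the numeric values below are made up for illustration:

```python
def project(pt_cam, fx, fy, cx, cy, k1):
    """Pinhole projection with a single radial-distortion coefficient k1.
    pt_cam = (X, Y, Z) in the camera frame, Z > 0."""
    x, y = pt_cam[0] / pt_cam[2], pt_cam[1] / pt_cam[2]
    d = 1.0 + k1 * (x * x + y * y)          # radial distortion factor
    return (fx * x * d + cx, fy * y * d + cy)

# A point on the optical axis projects to the principal point for any k1
u0, v0 = project((0.0, 0.0, 1.0), 1000.0, 1000.0, 640.0, 360.0, -0.1)
```

Calibration fits fx, fy, cx, cy, k1 by minimizing the discrepancy between projected and detected checkerboard corners.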
Camera parameters:
Step 8: Drogue pose measurement
The designed drogue has an outer diameter of 35 cm and an inner diameter of 20 cm; six blue filled circles and one green filled circle, each with a radius of about 2 cm, are arranged on the ring of the drogue. The centers of the seven circular markers on the drogue are coplanar. According to this geometric relation, this plane is taken as the X-Y plane, the center of the drogue as the origin, and the upward normal of the X-Y plane as the positive Z axis, establishing the world coordinate system. Once the world coordinate system is established, the world coordinates of the seven marker centers are available, as shown in Figure 2. To obtain the relative position of the drogue with respect to the refueling probe, the relative position is solved using the marker positions and the camera imaging model. The present invention performs drogue pose measurement with the existing Robust Perspective-n-Point (RPnP) pose estimation algorithm.
Pose solving is performed on successive images of the sequence; the solved displacements in the three directions are shown in Figures 4 to 6. Re-projecting the markers with the solved pose and differencing the result against the marker center pixels obtained in step 5 yields the re-projection error; the error curve is shown in Figure 7. The test results show that the present invention can accurately measure the pose of the drogue in a simulated probe-and-drogue aerial refueling scenario.
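The re-projection error check can be sketched as an RMS pixel distance (illustrative helper; the text specifies only differencing re-projected and extracted centers, so the exact aggregation is an assumption):

```python
import math

def reprojection_rmse(projected, detected):
    """Root-mean-square pixel distance between marker centers re-projected
    with the solved pose and the centers extracted from the image."""
    sq = [(p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2
          for p, d in zip(projected, detected)]
    return math.sqrt(sum(sq) / len(sq))

err = reprojection_rmse([(0.0, 0.0), (3.0, 4.0)], [(0.0, 0.0), (0.0, 0.0)])
```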

Claims (1)

  1. A hawk-eye-inspired moving target localization method for probe-and-drogue (soft) autonomous aerial refueling, characterized in that: starting from the information-processing mechanism of the hawk visual system and the interactions between hawk-brain nuclei, the visual attention mechanism of the hawk is simulated to extract the approximate region containing the fuel drogue; effective features are then extracted from the drogue region by color segmentation; and the relative position between the tanker drogue and the receiver vision system is measured with a pose estimation algorithm; the method comprises the following specific steps:
    Step 1: Hawk-eye optic-tectum cell response computation
    An efficient coding mechanism imitating hawk optic-tectum cells is established, simulating the efficiency and sparsity of optic-tectum coding to obtain effective invariant information from the image; assume that any image I can be represented by a linear combination of a series of image bases Bk:
    I = ∑k akBk (1)
    where the image bases Bk must be trained from a large set of natural images and capture information shared across natural images; ak is the coefficient corresponding to base Bk; the coefficients have a certain sparsity and can be obtained by the following equation:
    ak = CkI (2)
    where Ck, called the coding filter, is the inverse or pseudo-inverse of the image base Bk; filtering any image with this filter bank yields the optic-tectum cell responses, which exhibit a certain sparsity: most responses are 0 and only a small fraction are large, in agreement with physiological findings on optic-tectum cells;
    Under the probe-and-drogue refueling scene, the hawk-eye optic-tectum responses of background regions are similar to one another, while those of the drogue region differ markedly from the background; a cell response is determined jointly by the input image and the receptive field, and optic-tectum receptive fields are strongly selective for orientation and edge information; a large response at a given receptive field indicates that the orientation and edges of the image block match that receptive field's selectivity, so the peak response and its corresponding receptive field describe the main information of the block; likewise, the peak responses of background blocks resemble each other, whereas the responses of the drogue region differ from them considerably; background regions are usually flat with weak edges, so their responses do not show one receptive field far exceeding the others; in contrast, the drogue region contains rich, strongly oriented edge information, and its response typically concentrates on one receptive field that greatly exceeds the rest; therefore the peak response of each image block is used to describe the probability that the block belongs to the drogue region;
    Step 2: Hawk-brain-nuclei-inspired texture suppression and saliency map extraction
    The hawk visual system contains abundant lateral inhibition between cell receptive fields: when a stimulus arrives, a central cell is inhibited by its surrounding cells, and this inhibition strengthens the algorithm's robustness to interference; the texture homogeneity of the background is also exploited: when an image block's texture is highly homogeneous with the blocks around it, the block is likely background and is assigned a large texture-suppression coefficient; conversely, when its texture differs from the surrounding blocks, the block is more likely part of the drogue and is assigned a small suppression coefficient; texture homogeneity is computed from the gray-level co-occurrence matrix (GLCM), defined as follows: suppose the image to be analyzed has Nx pixels horizontally and Ny pixels vertically, with G gray levels; let X = {1, 2, ..., Nx} be the horizontal pixel coordinates, Y = {1, 2, ..., Ny} the vertical pixel coordinates, and N = {0, 1, ..., G} the quantized gray levels; the image can then be expressed as a mapping f from horizontal and vertical coordinates to gray level, f: X × Y → N; the statistics of pixel gray-level pairs at a given offset and direction reflect the texture characteristics of the image, and describing these statistics for every pixel pair with a matrix yields the gray-level co-occurrence matrix, denoted W;
    Any pixel (x, y) and the pixel (x+a, y+b) at a fixed offset from it form a pixel pair; let its gray values be (i, j), i.e., pixel (x, y) has gray value i and pixel (x+a, y+b) has gray value j; fixing a and b and moving (x, y) over the whole image produces all (i, j) values; if the image has G gray levels, there are G² combinations of i and j; counting the frequency of each combination over the whole image gives P(i, j, d, θ), and the square matrix [P(i, j, d, θ)]G×G is the gray-level co-occurrence matrix; the GLCM is essentially the joint histogram of pixel pairs, and choosing different offsets (a, b) gives the GLCM of the image along direction θ at distance d = √(a²+b²);
    With a = b = 2, θ = [0°, 45°, 90°, 135°], and 8 quantized gray levels, the GLCM is an 8*8*4 matrix; the GLCM reflects how image gray levels vary with direction, adjacent spacing, and amplitude; to analyze local patterns and arrangement rules, the GLCM obtained above is generally not used directly; second-order statistics are derived from it instead; before computing these characteristic parameters, the GLCM is normalized using the following formula:
    P (i, j, d, θ)=P (i, j, d, θ)/R (3)
    where R is the normalization constant, equal to the sum of all elements of the GLCM;
    The second-order statistics are contrast and entropy; contrast is defined as follows:
    W1 = ∑∑[(i−j)² × P²(i, j, d, θ)] (4)
    Contrast is the moment of inertia about the main diagonal of W; it measures how the matrix values are distributed and how the image varies locally; a larger W1 indicates stronger texture contrast: the image is clearer and the texture effect more pronounced;
    Entropy is defined as follows:
    W2 = −∑∑ P(i, j, d, θ) × log10 P(i, j, d, θ) (5)
    Entropy represents the information content of the image and measures the randomness of its content, characterizing the complexity of the texture; the entropy is 0 when the image has no texture and is maximal when the image is full of texture; to compute the texture-suppression coefficient of the image, two windows of different sizes are sampled around each pixel, and the GLCM of each of the two resulting image blocks is computed and normalized with formula (3); the second-order statistics of each GLCM are then computed, and the distance between the two statistics describes the texture homogeneity between the block and the blocks around it; a larger distance between the two second-order statistics means the texture of the block differs strongly from its surroundings, so its texture-suppression coefficient should be assigned a small value; conversely, a smaller distance means the block's texture is similar to its surroundings, and the block's suppression coefficient should be large; the present invention uses the distance between the second-order statistics of each block's two GLCMs as its texture-suppression coefficient, then multiplies the reciprocal of the suppression coefficient, taken as a texture-enhancement coefficient, with the peak response of each image block to obtain the saliency map of the image; the region of maximal saliency is the drogue region; the saliency map obtained with this hawk-eye-inspired extraction filters out part of the background and reduces the computation of the subsequent processing;
    Step 3: Color threshold segmentation
    In the designed verification platform, the drogue region is a red ring; the ring represents the drogue region, and green and blue markers pasted on the ring are used for vision positioning; the topmost point is blue and serves as the first marker, the remaining points are green, and all points are numbered in sequence; the hawk-eye saliency extraction yields the approximate region containing the drogue; on the basis of the saliency extraction, color threshold segmentation is applied to the salient region to obtain a more accurate drogue region; compared with the RGB (red-green-blue) color space, the HSV (hue-saturation-value) space expresses color in a way closer to human perception, separating color into the three components that are perceptually uneven to the human eye: hue, saturation, and value; the three HSV components express brightness, tone, and vividness more intuitively; on the basis of the saliency map, the image is thresholded on the two channels H (hue) and S (saturation) to obtain the region containing red objects such as the drogue, and the segmented image is binarized to produce the segmentation binary map;
    Step 4: Region-of-interest extraction
    Because the region extracted by threshold segmentation may be incomplete and may contain noise, in order to obtain the region of interest of the original image, morphological operations are first applied to the binary image obtained by the first HSV thresholding, and the outer contour of each red region is extracted; let the contour point set of the i-th region be qi, with the image coordinate of its m-th contour point (um,i, vm,i); sorting the two coordinate dimensions of each region's contour points gives the maximum and minimum contour coordinates of each region, and hence its bounding rectangle, which serves as the region of interest (ROI), expressed as ROIi = (ui, vi, wi, hi), where ui and vi are the image coordinates of the top-left corner of the ROI rectangle and wi and hi are its width and height, uniquely determining the bounding rectangle of each region;
    The red regions occupy a very small fraction of the image pixels, so further processing the extracted ROIs occupies far fewer computing resources than operating on the whole image, achieving the goal of faster computation; because the designed drogue contains green and blue markers inside its red region, holes appear in the drogue region of the binary map produced by red thresholding; therefore the holes of the contour regions must be filled while extracting the ROIs; the drogue is a ring, and to avoid also filling the inner disc of the ring, each hole inside every segmented red region is checked, and contours exceeding an area threshold are left unfilled, yielding the correct red ROI;
    Step 5:Tapered sleeve mark point coordinates obtains
    The circular index point of blueness and green has been pasted in tapered sleeve region, can be according to every when tapered sleeve and refueled aircraft are closer to the distance Individual ROI whether includes blueness and green disc pellet judges whether this region is tapered sleeve region;Blueness is carried out respectively to each ROI region The HSV Threshold segmentations of passage and green channel, judge whether comprising blueness or green disk in red area, so as to eliminate Non-targeted red interfering object, finds tapered sleeve area-of-interest;
    Detect behind tapered sleeve region, it is necessary to be extracted to the central point of circular index point;First, gray level image is used continuous The Threshold segmentation of unique step is binary image set, if segmentation threshold scope is [T1,T2], step-length t, then all threshold values be: T1, T1+ t, T1+ 2t ..., T2;Secondly, the border of every width bianry image is extracted, detects its connected region, and extract binary map As the center image coordinate of connected region;Again, the connected domain centre coordinate of all bianry images is counted, if different bianry images Connected domain centre distance be less than a threshold value, then these bianry image spots belong to a gray level image spot;Finally, really Determine the image coordinate and size of gray level image spot;Coordinate position of the spot in gray level image passes through all corresponding binary maps The coordinate weighting of image patch dot center is tried to achieve, as shown in calculation formula (6), qiFor the inertial rate of i-th of bianry image spot, therefore two It is worth Image Speckle shape closer to circular, its contribution to gray level image speckle displacement is bigger;The size of gray level image spot It is then the radius length intermediate value of all bianry image spots;
    In the blob extraction process for binary images, impurity points can be filtered by limiting the shape, area, and color of the blobs. Since the marker points on the drogue are circular, impurity points are filtered by setting area and circularity thresholds on the blobs. Each blob corresponds to one green or blue marker point on the red annulus of the drogue. In the marker-point extraction process, the input image is the image after HSV threshold segmentation, so no further binarization is needed: connected-region detection is applied directly to the input image, non-circular noise points are filtered out according to the circularity and area thresholds, and the center image coordinate of each marker point is output.
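A common circularity measure for such a filter is 4&#960;A/P&#178;, which equals 1 for a perfect disc and decreases for elongated or ragged shapes. The patent does not state its exact formula, so this standard definition is an assumption.

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect disc, smaller otherwise.
    A standard blob-filter measure (assumed; the patent's exact
    circularity definition is not given in this text)."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def keep_blob(area, perimeter, min_area, min_circularity):
    """Accept a blob only if it is large enough and round enough."""
    return area >= min_area and \
        circularity(area, perimeter) >= min_circularity
```

A disc of radius 10 scores 1.0 and passes, while a square of the same area scores about 0.785 and is rejected at a 0.8 threshold.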
    Step 6: Marker point matching
    Before pose measurement is carried out, the one-to-one correspondence between the extracted image coordinate points and the actual circular marker points must also be determined. The image coordinates of the blue and green marker points can be obtained by the extraction method of Step 5, but marker points with different numbers cannot yet be distinguished; therefore, a feature-point matching algorithm is needed to solve the correspondence problem.
    The feature points are identified as follows: the green circular marker point is taken as the first point; the blue point with the smallest Euclidean distance in the imaging plane to the first marker point is the second marker point; excluding the first marker point, the point closest to the second marker point is the third marker point; and so on, until all marker points are numbered.
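The numbering rule above is a greedy nearest-neighbor chain, which can be sketched directly; the point representation is an assumption.

```python
def number_markers(green_pt, blue_pts):
    """Order marker points as described in the text: the green point
    is #1; the blue point nearest to #1 is #2; thereafter the unused
    point nearest to the previously numbered one comes next."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    ordered = [green_pt]
    remaining = list(blue_pts)
    current = green_pt
    while remaining:
        nxt = min(remaining, key=lambda p: d2(current, p))
        remaining.remove(nxt)
        ordered.append(nxt)
        current = nxt
    return ordered
```

This resolves the correspondence as long as the markers are spaced so that each point's nearest unnumbered neighbor is its true successor on the ring.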
    Step 7: Camera parameter calibration
    A black-and-white checkerboard calibration board is made, with each grid of known side length. The checkerboard is photographed with the vision sensor at different angles and different depths, which reduces calibration error and yields more accurate camera intrinsics. In the calibration experiment, the camera collects images of the checkerboard at different angles; after the checkerboard corners of each calibration image are extracted, the camera model can be computed. Because the lens distortion is very small, the present invention considers only the radial distortion of the camera; the intrinsics and distortion coefficients of the camera are obtained after calibrating with the MATLAB 2015a toolbox.
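The camera model that calibration estimates can be illustrated with a pinhole projection plus a radial distortion term. The intrinsic values below are illustrative assumptions, not the patent's calibration results.

```python
def project_point(X, Y, Z, fx, fy, cx, cy, k1, k2=0.0):
    """Pinhole projection with radial distortion of a point given in
    the camera frame. fx, fy, cx, cy and coefficients k1, k2 are the
    quantities a calibration toolbox estimates; the numbers used in
    any example are illustrative, not the patent's."""
    x, y = X / Z, Y / Z                  # normalised image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2     # radial distortion factor
    u = fx * x * d + cx
    v = fy * y * d + cy
    return u, v
```

A point on the optical axis lands on the principal point regardless of distortion, while off-axis points are pushed outward (or inward) by the k1, k2 terms.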
    Step 8: Drogue pose measurement
    For the soft (probe-and-drogue) aerial refueling problem, the camera is assumed to be mounted at a specific location on the receiver aircraft. To obtain the relative position of the refueling drogue with respect to the refueling port, the relative position is solved using the marker point position information and the camera imaging model. The drogue pose measurement is carried out with a robust pose estimation method: by establishing a seventh-order polynomial as the cost function, the solution of the robust pose estimation problem is obtained.
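The quantity such a method minimizes can be illustrated with a reprojection-error cost over the marker points. This is a deliberately simplified, translation-only sketch; the patent's actual seventh-order-polynomial formulation and rotation handling are not reproduced here, and the focal length and model points are assumptions.

```python
import math

def project(p, t, f=800.0, c=(320.0, 240.0)):
    """Simple pinhole projection of camera-frame point p shifted by
    translation t (illustrative intrinsics)."""
    X, Y, Z = (p[i] + t[i] for i in range(3))
    return (f * X / Z + c[0], f * Y / Z + c[1])

def reprojection_cost(t, model_pts, image_pts):
    """Sum of squared reprojection errors -- the kind of cost whose
    minimiser a robust pose algorithm seeks (simplified sketch)."""
    cost = 0.0
    for p, (u, v) in zip(model_pts, image_pts):
        pu, pv = project(p, t)
        cost += (pu - u) ** 2 + (pv - v) ** 2
    return cost
```

For markers on a ring observed from the true pose, the cost is zero at the true translation and grows for any perturbed candidate, which is what the pose solver exploits.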
CN201710506141.4A 2017-06-28 2017-06-28 Eagle eye-imitated moving target positioning method for soft autonomous aerial refueling Active CN107392963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710506141.4A CN107392963B (en) 2017-06-28 2017-06-28 Eagle eye-imitated moving target positioning method for soft autonomous aerial refueling

Publications (2)

Publication Number Publication Date
CN107392963A true CN107392963A (en) 2017-11-24
CN107392963B CN107392963B (en) 2019-12-06

Family

ID=60333918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710506141.4A Active CN107392963B (en) 2017-06-28 2017-06-28 Eagle eye-imitated moving target positioning method for soft autonomous aerial refueling

Country Status (1)

Country Link
CN (1) CN107392963B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825505A (en) * 2016-03-14 2016-08-03 北京航空航天大学 Vision measurement method facing boom air refueling
CN106875403A (en) * 2017-01-12 2017-06-20 北京航空航天大学 A kind of imitative hawkeye visual movement object detection method for air refuelling

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HAIBIN DUAN ET AL: "A binocular vision-based UAVs autonomous aerial refueling platform", Science China Information Sciences *
DUAN Haibin et al.: "Autonomous aerial refueling for UAVs based on eagle-eye-inspired vision", Chinese Journal of Scientific Instrument *
WANG Xiaohua et al.: "UAV target detection based on a bio-inspired visual attention mechanism", Aeronautical Science &amp; Technology *
ZHAO Guozhi et al.: "Research progress on eagle-eye-inspired vision technology", Science China *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107640292A (en) * 2017-08-07 2018-01-30 吴金伟 The autonomous oiling method of unmanned boat and system
CN108960026A (en) * 2018-03-10 2018-12-07 王洁 Unmanned plane during flying Orientation system
CN108536132A (en) * 2018-03-20 2018-09-14 南京航空航天大学 A kind of fixed-wing unmanned plane air refuelling platform and its oiling method
CN110599507A (en) * 2018-06-13 2019-12-20 中国农业大学 Tomato identification and positioning method and system
CN110599507B (en) * 2018-06-13 2022-04-22 中国农业大学 Tomato identification and positioning method and system
CN109085845B (en) * 2018-07-31 2020-08-11 北京航空航天大学 Autonomous air refueling and docking bionic visual navigation control system and method
CN109085845A (en) * 2018-07-31 2018-12-25 北京航空航天大学 A kind of bionical vision navigation control system and method for autonomous air refuelling docking
CN108983815A (en) * 2018-08-03 2018-12-11 北京航空航天大学 A kind of anti-interference autonomous docking control method based on the control of terminal iterative learning
CN109446892A (en) * 2018-09-14 2019-03-08 杭州宇泛智能科技有限公司 Human eye notice positioning method and system based on deep neural network
CN109446892B (en) * 2018-09-14 2023-03-24 杭州宇泛智能科技有限公司 Human eye attention positioning method and system based on deep neural network
CN109360240A (en) * 2018-09-18 2019-02-19 华南理工大学 A kind of small drone localization method based on binocular vision
CN109360240B (en) * 2018-09-18 2022-04-22 华南理工大学 Small unmanned aerial vehicle positioning method based on binocular vision
CN109557944A (en) * 2018-11-30 2019-04-02 南通大学 A kind of moving target position detection system and method
CN110969603A (en) * 2019-11-26 2020-04-07 联博智能科技有限公司 Relative positioning method and device for lesion position and terminal equipment
CN112101099A (en) * 2020-08-04 2020-12-18 北京航空航天大学 Eagle eye self-adaptive mechanism-simulated unmanned aerial vehicle sea surface small target identification method
CN112101099B (en) * 2020-08-04 2022-09-06 北京航空航天大学 Eagle eye self-adaptive mechanism-simulated unmanned aerial vehicle sea surface small target identification method
CN112232181A (en) * 2020-10-14 2021-01-15 北京航空航天大学 Eagle eye color cognitive antagonism mechanism-simulated unmanned aerial vehicle marine target detection method
CN112232181B (en) * 2020-10-14 2022-08-16 北京航空航天大学 Eagle eye color cognitive antagonism mechanism-simulated unmanned aerial vehicle marine target detection method
CN114953700A (en) * 2021-12-06 2022-08-30 黄河水利职业技术学院 Method for manufacturing ultrahigh-precision cooperative target for industrial photogrammetry
CN115393352A (en) * 2022-10-27 2022-11-25 浙江托普云农科技股份有限公司 Crop included angle measuring method based on image recognition and application thereof

Also Published As

Publication number Publication date
CN107392963B (en) 2019-12-06

Similar Documents

Publication Publication Date Title
CN107392963A (en) A kind of imitative hawkeye moving target localization method for soft autonomous air refuelling
CN104536009B (en) Above ground structure identification that a kind of laser infrared is compound and air navigation aid
CN112818988B (en) Automatic identification reading method and system for pointer instrument
CN104598908B (en) A kind of crops leaf diseases recognition methods
CN103927741B (en) SAR image synthesis method for enhancing target characteristics
CN107330376A (en) A kind of Lane detection method and system
CN103035013B (en) A kind of precise motion shadow detection method based on multi-feature fusion
CN104951799B (en) A kind of SAR remote sensing image oil spilling detection recognition method
CN104835175B (en) Object detection method in a kind of nuclear environment of view-based access control model attention mechanism
CN107392130A (en) Classification of Multispectral Images method based on threshold adaptive and convolutional neural networks
CN102096824B (en) Multi-spectral image ship detection method based on selective visual attention mechanism
CN107392885A (en) A kind of method for detecting infrared puniness target of view-based access control model contrast mechanism
CN110232389A (en) A kind of stereoscopic vision air navigation aid based on green crop feature extraction invariance
CN107705288A (en) Hazardous gas spillage infrared video detection method under pseudo- target fast-moving strong interferers
CN106934795A (en) The automatic testing method and Forecasting Methodology of a kind of glue into concrete beam cracks
CN101908153B (en) Method for estimating head postures in low-resolution image treatment
CN103927758B (en) Saliency detection method based on contrast ratio and minimum convex hull of angular point
CN107967474A (en) A kind of sea-surface target conspicuousness detection method based on convolutional neural networks
CN104504675B (en) A kind of active vision localization method
CN103528568A (en) Wireless channel based target pose image measuring method
CN104599288A (en) Skin color template based feature tracking method and device
CN105139401A (en) Depth credibility assessment method for depth map
CN106228130A (en) Remote sensing image cloud detection method of optic based on fuzzy autoencoder network
CN106295657A (en) A kind of method extracting human height's feature during video data structure
CN110110618A (en) A kind of SAR target detection method based on PCA and global contrast

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant