CN112862862B - Aircraft autonomous oil receiving device based on artificial intelligence visual tracking and application method - Google Patents
- Publication number: CN112862862B
- Application number: CN202110184390A
- Authority: CN (China)
- Prior art keywords: image, taper sleeve, oil receiving, oiling, characteristic
- Legal status: Active (an assumption; Google has not performed a legal analysis)
Classifications
- G06T7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T5/20 — Image enhancement or restoration using local operators
- G06T7/11 — Region-based segmentation
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/194 — Segmentation involving foreground-background segmentation
- G06T7/277 — Analysis of motion involving stochastic approaches, e.g. using Kalman filters
- G06T7/292 — Multi-camera tracking
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
- G06T2207/10024 — Color image
- G06T2207/10048 — Infrared image
- G06T2207/20024 — Filtering details
Abstract
The invention provides an aircraft autonomous oil receiving device based on artificial intelligence visual tracking and a corresponding oil receiving method. The method comprises the following steps: first, optical and infrared image acquisition devices are respectively installed on the two sides of the machine head of the oil receiving machine, and each acquires images of the refueling taper sleeve in different postures; second, the images under different postures are processed and resolved with an image processing method based on artificial intelligence and an image processing method based on visual tracking, obtaining the center position coordinates of the oiling taper sleeve and the relative position and posture information of the oiling taper sleeve and the oil receiving plug; finally, the control computer commands the three-dimensional adjusting device to move so that the oil receiving plug is aligned with the center of the oiling taper sleeve for refueling, and after the refueling process is finished the three-dimensional adjusting device returns the oil receiving plug to its initial position. The invention adopts a binocular visual tracking mechanism, fuses image information from different viewing angles, can accurately identify and track the pose of the oiling taper sleeve, and improves calculation and tracking precision.
Description
Technical Field
The invention relates to the technical field of aerial refueling and artificial intelligence application of aircrafts, in particular to an autonomous oil receiving device of an aircraft based on artificial intelligence visual tracking and an application method thereof.
Background
Currently, air force and navy aircraft mainly adopt the hose-and-drogue aerial refueling mode, i.e. a taper sleeve/plug structure. During docking of the oiling machine and the oil receiving machine, the position and posture of the oiling taper sleeve are difficult to control because of the engine wake vortex field, atmospheric turbulence, and the relative positions and postures of the two aircraft. Having the pilot manually fly the oil receiving machine so that the oil receiving plug meets and docks with the oiling taper sleeve is slow, inefficient, and failure-prone, and imposes a heavy workload on the pilot.
Disclosure of Invention
Aiming at the technical problems of high failure rate and low efficiency in the aerial refueling docking process of large aircraft such as bombers and early warning aircraft, the invention provides an aircraft autonomous oil receiving device based on artificial intelligence visual tracking and an application method thereof.
The technical scheme of the invention is realized as follows:
an aircraft autonomous oil receiving method based on artificial intelligence visual tracking comprises the following steps:
step one: the optical image acquisition device and the infrared image acquisition device are respectively arranged on the two sides of the machine head of the oil receiving machine, with an installation baseline length in the range [0.5 m, 1.5 m];
step two: the method comprises the steps that an optical image acquisition device and an infrared image acquisition device are used for acquiring images of the oiling taper sleeve in different postures respectively, and the images are transmitted to a control computer;
step three: processing the image by using an image processing method based on artificial intelligence to obtain an image recognition model of the oiling taper sleeve;
step four: when the oil receiving machine needs to be filled with oil, respectively utilizing an optical image acquisition device and an infrared image acquisition device to acquire real-time images of the oil filling taper sleeve, and transmitting the real-time images to a control computer;
step five: processing the real-time image by using an image processing method based on visual tracking and an image recognition model of the oiling taper sleeve respectively to obtain a characteristic image I and a characteristic image II;
step six: when characteristic image I is obtained earlier than characteristic image II, characteristic image I is taken as the characteristic image; otherwise, characteristic image II is taken as the characteristic image; the center position coordinates of the oiling taper sleeve are then calculated from the characteristic image;
step seven: estimating the relative pose of the oiling taper sleeve and the oil receiving plug based on a binocular vision tracking mechanism according to the real-time image obtained in the step four, and obtaining the relative position and pose information of the oiling taper sleeve and the oil receiving plug;
step eight: the control computer controls the three-dimensional adjusting device to move through instructions according to the relative position and gesture information of the oil filling taper sleeve and the oil receiving plug and the central position coordinate of the oil filling taper sleeve, so that the oil receiving plug is aligned to the center of the oil filling taper sleeve for filling oil;
step nine: after the oiling process is completed, the three-dimensional adjusting device enables the oil receiving plug to be adjusted to an initial position.
The method for processing the image by using the image processing method based on artificial intelligence comprises the following steps:
s3.1, extracting and marking a characteristic region in the image by adopting a visual saliency method of target guidance to obtain a characteristic region image;
s3.2, intensively extracting SIFT features of the feature area image according to a fixed step length, performing sparse coding on the SIFT features by using an overcomplete dictionary, and establishing a feature recognition library;
and S3.3, establishing a basic algorithm model with randomly changed characteristic conditions, and inputting sparse characteristics in a characteristic recognition library into the basic algorithm model for training by using a maximum interval learning method to obtain an image recognition model of the oiling taper sleeve.
The method for extracting and labeling the characteristic region in the image by adopting the visual saliency method of the target guidance in the step S3.1 comprises the following steps:
s3.1.1, randomly selecting M rectangular windows in the image, wherein the size and the position of the rectangular windows are randomly set;
s3.1.2, respectively calculating the linear density and FT contrast of the rectangular window, and linearly combining the linear density and the FT contrast to obtain the objectivity measurement of the rectangular window;
the linear density in the rectangular window is expressed as:
the FT contrast of a rectangular window is expressed as:
the objectivity metric for a rectangular window is expressed as:
Score(w)=αLD(w)+(1-α)FTC(w),
wherein LD(w) represents the linear density of rectangular window w, Line_b(p) represents the straight-line-segment binary image, Area(w) represents the rectangular window area, FTC(w) represents the rectangular window contrast, S_0(p) represents the interval-specific salient feature value, S_0'(p) denotes the total salient feature value, Score(w) denotes the objectness metric value, and α denotes a weight coefficient;
s3.1.3, screening m rectangular windows as characteristic region images by adopting non-maximum suppression according to the object performance measurement: and sequencing all the rectangular windows from large to small according to the object performance measurement, sequentially judging whether the rectangular windows overlap with more than 50% of the space areas in the selected rectangular windows, if so, discarding the rectangular windows, otherwise, judging the next rectangular window until m rectangular windows are screened out.
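The screening procedure in S3.1.3 is greedy non-maximum suppression over the scored windows. The sketch below is illustrative only; the function name and the choice of measuring overlap against the candidate window's own area are assumptions, since the patent fixes neither:

```python
import numpy as np

def nms_windows(windows, scores, m, overlap_thresh=0.5):
    """Greedy non-maximum suppression over candidate rectangles.

    windows: (M, 4) array of (x, y, w, h); scores: (M,) objectness values
    Score(w). Keeps up to m windows, discarding any candidate whose
    intersection with an already-kept window exceeds overlap_thresh of
    the candidate's own area (assumed overlap criterion).
    """
    order = np.argsort(scores)[::-1]          # sort by Score(w), descending
    kept = []
    for i in order:
        x, y, w, h = windows[i]
        ok = True
        for j in kept:
            xj, yj, wj, hj = windows[j]
            ix = max(0, min(x + w, xj + wj) - max(x, xj))   # overlap width
            iy = max(0, min(y + h, yj + hj) - max(y, yj))   # overlap height
            if ix * iy > overlap_thresh * (w * h):
                ok = False                     # >50% overlap: discard
                break
        if ok:
            kept.append(int(i))
            if len(kept) == m:
                break
    return kept
```

A window is kept only if it overlaps every previously selected window by at most 50%, matching the "discard if more than 50% overlap" rule above.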
The method for establishing the feature recognition library in step S3.2 comprises the following steps: extracting SIFT features from the feature region image with a rectangular window of size a×a and step length q, cascading the extracted SIFT features to form a sample feature set X, learning a training dictionary D from X under a sparsity constraint, computing the sparse features S_1 of X under the training dictionary D, and adding the sparse features S_1 to the feature recognition library.
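Sparse coding over an overcomplete dictionary, as used in S3.2, can be illustrated with a small orthogonal-matching-pursuit routine. This is a generic sketch, not the patent's actual coder; the function name, the greedy OMP strategy, and the sparsity level k are assumptions:

```python
import numpy as np

def sparse_code_omp(D, x, k):
    """Sparse-code feature vector x over an overcomplete dictionary D
    (columns are atoms) via orthogonal matching pursuit, using <= k atoms."""
    residual = x.astype(float).copy()
    support = []
    s = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # jointly re-fit the coefficients of all selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
        s[support] = coef
        if np.linalg.norm(residual) < 1e-10:
            break
    return s
```

Each feature vector's sparse code s would then be what the text above calls S_1 and stores in the recognition library.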
The method for processing the real-time image by using the image processing method based on visual tracking comprises the following steps:
s5.1, performing image preprocessing on the acquired real-time image:
s5.1.1, calibrating and correcting an image: performing internal reference calibration on the optical image acquisition device and the infrared image acquisition device respectively, measuring an internal reference matrix and a distortion vector, and correcting the acquired real-time image according to the internal reference matrix and the distortion vector;
s5.1.2, image thresholding: threshold segmentation is carried out on the corrected real-time image by using a maximum inter-class variance method, and the target is separated from the complex background;
when the real-time image is an optical image, converting the real-time image from an RGB color space to an HSV color space, and performing color phase filtering and binarization processing to obtain a binarized image;
the conversion matrix from RGB color space to HSV color space is:
wherein R represents a red mode value, G represents a green mode value, B represents a blue mode value, V represents a luminance value, S represents saturation, and H represents hue;
when the real-time image is an infrared image, converting the real-time image from a gray level image to a binary image;
s5.1.3, image filtering: filtering the binarized image to obtain a filtered binary image;
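The maximum inter-class variance thresholding of step S5.1.2 (Otsu's method) can be sketched directly from the image histogram. A minimal NumPy version, assuming an 8-bit grayscale input (the function name is illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    """Maximum inter-class variance (Otsu) threshold for an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                    # gray-level probabilities
    omega = np.cumsum(p)                     # class-0 probability at each t
    mu = np.cumsum(p * np.arange(256))       # class-0 cumulative mean
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)         # undefined at empty classes
    return int(np.argmax(sigma_b))           # t maximizing between-class variance
```

Pixels above the returned threshold would form the binarized foreground (the taper sleeve) separated from the complex background.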
s5.2, detecting and identifying characteristic points of the filtered binary image:
s5.2.1, edge fitting: performing edge fitting on a target area in the binary image to respectively obtain an inner ring and an outer ring of an edge ring belt of the oiling taper sleeve, and judging the target area as the ring belt of the end surface of the oiling taper sleeve when the center distance between the inner ring and the outer ring is smaller than a set threshold gamma;
s5.2.2, major-axis ratio matching identification: when the ratio of the major axes of the inner and outer rings of the edge ring belt of the refueling taper sleeve lies within the threshold range [δ_min, δ_max], the end surface area of the oiling taper sleeve is matched with the ring belt area, thereby obtaining a characteristic image of the oiling taper sleeve.
The method for calculating the center position coordinate of the oiling taper sleeve according to the characteristic diagram comprises the following steps: the end face of the oiling taper sleeve is positioned on a space circle, projections of the space circle under different postures are elliptic, the circle center is the center of the end face of the oiling taper sleeve, and the center coordinates of the end face of the taper sleeve can be obtained by carrying out least square parameter estimation fitting on the ring belt area of the end face of the oiling taper sleeve obtained through detection;
the least square parameter fitting estimation method comprises the following steps:
the plane equation for any ellipse can be expressed as:
f(x, y) = x² + Axy + By² + Cx + Dy + E = 0,
wherein f(x, y) represents the elliptic plane function, (x, y) is a point on the ellipse, and A, B, C, D, E are the parameters;
the objective function is:
F(A, B, C, D, E) = Σ_{i=1}^{N} [f(x_i, y_i)]²,
wherein i = 1, 2, …, N, N represents the total number of sample points, and (x_i, y_i) represents the coordinates of point i;
according to the least squares principle, each parameter is determined for the minimum value of the objective function, and it is possible to obtain:
[A B C D E]^T = P⁻¹Q;
the boundary points found by the multidirectional search from the center are screened, the minimum and maximum values caused by the most frequent distortion of the search step length are removed, and five parameters are obtained: the abscissa x and ordinate y of the ellipse center, the major-axis radius a, the minor-axis radius b, and the included angle θ between the major axis and the x-axis:
the accurate detection and identification of the taper sleeve targets in different image sequences can be realized through the multidirectional search and least square fitting of the edge center of the oiling taper sleeve.
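The least-squares estimation of the conic parameters in f(x, y) = x² + Axy + By² + Cx + Dy + E = 0 reduces to one linear solve, and the center follows by setting the conic's gradient to zero. A sketch under those assumptions (the normal-equations form [A B C D E]^T = P⁻¹Q is solved here via lstsq, which is numerically equivalent; names are illustrative):

```python
import numpy as np

def fit_ellipse(pts):
    """Least-squares fit of x^2 + A*xy + B*y^2 + C*x + D*y + E = 0 to
    edge points; returns (A, B, C, D, E) and the ellipse center."""
    x, y = pts[:, 0], pts[:, 1]
    # move the fixed x^2 term to the right-hand side: M @ [A..E] = -x^2
    M = np.column_stack([x * y, y ** 2, x, y, np.ones_like(x)])
    params, *_ = np.linalg.lstsq(M, -x ** 2, rcond=None)
    A, B, C, D, E = params
    # center: the gradient (2x + Ay + C, Ax + 2By + D) vanishes there
    center = np.linalg.solve(np.array([[2.0, A], [A, 2.0 * B]]),
                             np.array([-C, -D]))
    return params, center
```

Applied to the detected end-face ring belt points, the solved center gives the taper sleeve center coordinates used for tracking.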
The method for obtaining the relative position and posture information of the oiling taper sleeve and the oil receiving plug comprises the following steps:
binocular visual tracking converts the front-and-back frame feature matching of a monocular camera into feature matching between left and right views using triangulation. The specific method is as follows: the left and right image points together with the camera optical centers determine two rays, and the spatial intersection of the two rays is the position of the actual object point P. Coordinate systems are established for the optical camera and the infrared camera respectively; P1 and P2 are the image points of P on the left and right image planes, and the coordinates of P are solved from the resulting system of equations;
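For a rectified stereo pair, the two-ray intersection described above reduces to closed-form triangulation from disparity. This sketch assumes an idealized pinhole pair with identical focal length and a horizontal baseline, a simplification of the patent's optical/infrared setup:

```python
import numpy as np

def triangulate_point(p_left, p_right, f, baseline):
    """Recover the 3-D position of point P from its projections in a
    rectified stereo pair (pinhole model, identical focal length f in
    pixels, optical axes parallel, baseline along X in meters)."""
    xl, yl = p_left
    xr, _ = p_right
    disparity = xl - xr                 # horizontal pixel shift between views
    Z = f * baseline / disparity        # depth from similar triangles
    X = xl * Z / f                      # back-project through the left camera
    Y = yl * Z / f
    return np.array([X, Y, Z])
```

With the two calibrated acquisition devices, this yields the relative position of the oiling taper sleeve with respect to the oil receiving plug.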
in the refueling state, a Kalman filtering equation in a continuous state is established:
wherein I is the 3×3 identity matrix, W(t) is the model error, Z = [X_pd, Y_pd, Z_pd] represents the relative position vector, X_pd, Y_pd and Z_pd denote the relative positions along the X, Y and Z axes, Ẑ represents the relative position matrix, X̂ represents the posterior state variable, F_KF represents the state matrix, G represents the covariance matrix, and X(t) represents the prior state variable;
discretizing the Kalman filtering equation with sampling time Δt gives:
X_{k+1} = Φ X_k + W_k,
wherein X_{k+1} represents the posterior state estimate at step k+1, Φ represents the discretized state matrix, X_k represents the posterior state estimate at step k, and W_k represents the model error;
the system noise variance matrix Q_k is:
the measurement equation is:
wherein Y_{k+1} represents the measurement variable and V_k represents the measurement noise;
the measurement noise covariance matrix R_k is:
therefore, a Kalman filtering algorithm is adopted to estimate the relative motion state of the oil receiving plug and the oil filling taper sleeve.
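The discretized prediction X_{k+1} = Φ X_k + W_k and the measurement update can be sketched per axis with a constant-velocity Kalman filter. This is a generic illustration rather than the patent's exact matrices; the state layout, the noise parameters q and r, and the function name are assumptions:

```python
import numpy as np

def kalman_step(x, P, z, dt, q, r):
    """One predict/update cycle of a constant-velocity Kalman filter
    tracking a single relative-position axis; state x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # discretized state matrix Phi
    H = np.array([[1.0, 0.0]])                     # only position is measured
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],  # process-noise covariance
                      [dt ** 2 / 2, dt]])
    R = np.array([[r]])                            # measurement-noise covariance
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the triangulated relative-position measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Running one such filter per axis (X_pd, Y_pd, Z_pd) smooths the relative motion estimate between the oil receiving plug and the oiling taper sleeve.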
The aircraft autonomous oil receiving device based on the artificial intelligent visual tracking comprises an oil receiving machine and an oil filling machine, wherein the oil receiving machine is provided with an oil receiving pipe, the oil filling machine is provided with an oil filling taper sleeve, and the oil filling taper sleeve is matched with the oil receiving pipe; the oil receiving machine is characterized in that an image acquisition device and a control computer are further arranged on the oil receiving machine, and the image acquisition device is used for acquiring image information of the oiling taper sleeve and sending the image information to the control computer.
The oil receiving pipe comprises an oil receiving plug, a three-dimensional adjusting device, a bent pipe and a straight pipe; one end of the oil receiving plug is matched with the oiling taper sleeve, the other end of the oil receiving plug is connected with the three-dimensional adjusting device, the three-dimensional adjusting device is connected with one end of the bent pipe, the other end of the bent pipe is connected with one end of the straight pipe, and the other end of the straight pipe is communicated with the inlet of the fuel passage of the oil receiving machine.
The oil receiving machine is provided with a GPS positioning system and an inertial navigation system, and the positioning acquired by the GPS positioning system and the pose information acquired by the inertial navigation system are transmitted to a control computer for processing.
Compared with the prior art, the invention has the beneficial effects that:
1) The invention adopts image acquisition, preprocessing, feature point detection and pose resolving algorithms based on a feature recognition library and artificial intelligence visual tracking; it is strongly robust to noise, brightness, contrast and other interference, realizes rapid acquisition, processing and high-precision resolving and matching of refueling taper sleeve images during refueling, and improves the docking efficiency and success rate of aerial refueling;
2) The binocular vision tracking mechanism is adopted, the image information of the two image acquisition devices under different visual angles is fully utilized to carry out fusion operation, the oiling taper sleeve can be accurately identified and tracked, the pose of the oiling taper sleeve can be solved in real time, and the calculation and tracking accuracy is improved;
3) The invention designs an azimuth-adjustable oil receiving pipe whose azimuth is adjusted in real time according to the pose of the oiling taper sleeve; the adjustment direction of the oil receiving plug is controlled according to the relative pose, so the pilot no longer needs to dock by maneuvering the aircraft, repeated docking adjustments caused by the relative poses of the oiling machine and oil receiving machine and by atmospheric turbulence are avoided, and docking efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of the operation of the present invention.
Fig. 2 is a schematic structural diagram of an oil receiving plug according to the present invention.
Fig. 3 is a front view of fig. 2.
Fig. 4 is a right side view of fig. 2.
In the figure, 1 is an oil receiving plug connector, 2 is an adjusting flange, 3 is a piston rod, 4 is an oil cylinder oil delivery pipe, 5 is an oil receiving straight pipe connector, and 6 is an oil cylinder oil delivery pipe connector.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without any inventive effort, are intended to be within the scope of the invention.
Embodiment 1: as shown in fig. 2-4, an aircraft autonomous oil receiving device based on artificial intelligence visual tracking comprises an oil receiving machine and an oiling machine; the oil receiving machine is provided with an oil receiving pipe, the oiling machine is provided with an oiling taper sleeve, and the oiling taper sleeve is matched with the oil receiving pipe. The oil receiving machine is also provided with an image acquisition device and a control computer; the image acquisition device acquires image information of the oiling taper sleeve and sends it to the control computer. The image acquisition device comprises an optical image acquisition device and an infrared image acquisition device, respectively arranged at unobstructed positions on the front of the oil receiving machine body with a baseline length of 0.5 m to 1.5 m; both sets acquire characteristic image data of the oiling taper sleeve of the oiling machine. The control computer integrates image processing software based on visual tracking; an embedded artificial intelligence image processing algorithm extracts feature information from a large number of collected images of the oiling taper sleeve under different relative postures to form a feature recognition library, which is used during real-time docking to estimate the relative postures of the oil receiving machine and the oiling taper sleeve and to rapidly determine the center position of the oiling taper sleeve, guiding the oil receiving plug in center-point tracking.
The oil receiving pipe is an azimuth-adjustable oil receiving pipe and comprises an oil receiving plug, a three-dimensional adjusting device, a bent pipe and a straight pipe; one end of the oil receiving plug is matched with the oiling taper sleeve, the other end of the oil receiving plug is connected with the three-dimensional adjusting device, the three-dimensional adjusting device is connected with one end of the bent pipe, the other end of the bent pipe is connected with one end of the straight pipe, and the other end of the straight pipe is communicated with the inlet of the fuel passage of the oil receiving machine. The oil receiving machine is provided with a GPS positioning system and an Inertial Navigation System (INS), and the positioning acquired by the GPS positioning system and the pose information acquired by the inertial navigation system are transmitted to a control computer for processing. The control computer fuses the acquired multisource information such as a Global Positioning System (GPS), an Inertial Navigation System (INS), relative pose and the like, gives out the relative position and the relative pose of the oil receiving machine and the oiling taper sleeve, controls the actuator of the oil receiving pipe, makes the oil receiving plug turn, and accurately butts against the oiling taper sleeve.
Embodiment 2: as shown in fig. 1, an aircraft autonomous oil receiving method based on artificial intelligence visual tracking specifically comprises the following steps:
step one: the optical image acquisition device and the infrared image acquisition device are respectively arranged at two sides of a machine head of the oil receiving machine, so that no shielding object is left in front of a lens of the image acquisition device, and the installation base line length of the optical image acquisition device and the infrared image acquisition device is in the range of [0.5m and 1.5m ].
Step two: and respectively utilizing an optical image acquisition device and an infrared image acquisition device to acquire images of the oiling taper sleeve in different postures, and transmitting the images to a control computer.
Step three: and processing the image by using an image processing method based on artificial intelligence to obtain an image recognition model of the oiling taper sleeve.
The artificial-intelligence-based image processing method works on the collected optical and infrared images. First, a target-guided visual saliency method extracts and marks the feature region of interest (the oiling taper sleeve) in the image, completing the coarse detection process. Second, fine detection is performed inside the marked region of interest: a large number of SIFT features of the oiling taper sleeve are densely extracted at a fixed step length, sparse-coded with an overcomplete dictionary, and used to build a feature recognition library. On this basis, a basic algorithm model with randomly varied feature conditions is established and its parameters are learned; the feature parameters of the algorithm are optimized according to result-matching consistency, the optimal parameter values are solved with a maximum margin learning method, and cyclic iteration on the training set gradually converges the model parameters to their optima. Finally, model inference is performed, taking the label with maximum confidence as the final inference result, forming a stable image recognition model. The sparse-coding features of the sub-image blocks and the contextual information of the image blocks are further used to estimate the probability that each image block belongs to the target, and the target probability maps at multiple scales are finally synthesized and adaptively segmented to obtain the oiling taper sleeve detection result. The specific method comprises the following steps:
S3.1, extracting and marking the feature region (the oiling taper sleeve) in the image by adopting the target-guided visual saliency method, completing the coarse detection pass and obtaining a feature region image.
S3.1.1, randomly selecting M rectangular windows in the image, wherein the size and the position of the rectangular windows are randomly set.
S3.1.2, respectively calculating the linear density and FT contrast of the rectangular window, and linearly combining the linear density and the FT contrast to obtain the objectivity measurement of the rectangular window.
The linear density of a rectangular window w is expressed as:

LD(w) = Σ_{p∈w} Line_b(p) / Area(w),

the FT contrast of the window is expressed as:

FTC(w) = Σ_{p∈w} S_0(p) / Σ_p S_0'(p),

and the objectness measure of the window is expressed as:

Score(w) = α·LD(w) + (1−α)·FTC(w),

wherein LD(w) denotes the linear density of the rectangular window, Line_b(p) the straight-line-segment binary image, Area(w) the window area, FTC(w) the window contrast, S_0(p) the salient feature value within the given interval, S_0'(p) the total salient feature value, Score(w) the objectness measure, and α a weight coefficient.
S3.1.3, using non-maximum suppression, screening m rectangular windows as feature region images according to the objectness measure: sort all rectangular windows by objectness measure in descending order, then check each window in turn for more than 50% area overlap with any already-selected window; if it overlaps, discard it, otherwise keep it, and continue until m windows have been screened out.
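The window screening in S3.1.3 can be sketched as follows. This is an illustrative non-maximum-suppression sketch, not the patent's implementation; the names `iou_overlap` and `select_windows` and the (x, y, width, height) window format are assumptions:

```python
import numpy as np

def iou_overlap(w1, w2):
    """Fraction of w2's area that is overlapped by w1 (windows as x, y, width, height)."""
    x1, y1 = max(w1[0], w2[0]), max(w1[1], w2[1])
    x2 = min(w1[0] + w1[2], w2[0] + w2[2])
    y2 = min(w1[1] + w1[3], w2[1] + w2[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / (w2[2] * w2[3])

def select_windows(windows, scores, m, thresh=0.5):
    """Keep up to m windows by descending objectness score, discarding any
    window that overlaps an already-kept window by more than thresh."""
    order = np.argsort(scores)[::-1]          # highest score first
    kept = []
    for idx in order:
        if all(iou_overlap(windows[k], windows[idx]) <= thresh for k in kept):
            kept.append(idx)
        if len(kept) == m:
            break
    return [windows[k] for k in kept]
```

A heavily overlapped lower-scoring window is dropped, while distant windows survive regardless of score order.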
S3.2, intensively extracting SIFT features of the feature area image according to a fixed step length, performing sparse coding on the SIFT features by using an overcomplete dictionary, and establishing a feature recognition library.
The method for establishing the feature recognition library in step S3.2 is as follows: extract SIFT features over rectangular windows of size a×a at step length q from the feature region image, cascade the extracted SIFT features into a sample feature set X, then, under a sparsity constraint, learn a training dictionary d from X together with the sparse features S_1 of X under d, and add the sparse features S_1 to the feature recognition library.
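The sparse-coding step can be illustrated with a greedy orthogonal matching pursuit against a given overcomplete dictionary. This numpy sketch is a stand-in for the patent's dictionary-learning pipeline (dense SIFT extraction and dictionary training are assumed done elsewhere), and `omp_sparse_code` is an illustrative name:

```python
import numpy as np

def omp_sparse_code(D, x, k):
    """Greedy orthogonal matching pursuit: approximate signal x with at
    most k atoms of the column-normalized overcomplete dictionary D."""
    residual = x.copy()
    support = []
    code = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-solve least squares on the selected support
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code[support] = coef
    return code
```

The resulting sparse codes, one per descriptor, are what would be stored in the feature recognition library.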
S3.3, establishing a base algorithm model with randomly varied feature conditions, and training it with the maximum-margin learning method on the sparse features in the feature recognition library to obtain the image recognition model of the oiling taper sleeve. The maximum-margin learning method requires that the energy function value at the model's optimal solution be larger than the energy function value obtained by substituting the manually labeled values into the model; cyclic iteration over the feature recognition library gradually converges the model parameters to their optimal values.
Step four: when the oil receiving machine needs to be filled, the optical image acquisition device and the infrared image acquisition device are respectively utilized to acquire real-time images of the filling taper sleeve, and the real-time images are transmitted to the control computer.
Step five: and processing the real-time image by using an image processing method based on visual tracking and an image recognition model of the oiling taper sleeve to obtain a characteristic image I and a characteristic image II.
The visual-tracking-based image processing method performs image preprocessing, feature point detection and recognition, and end-face center point calculation on the acquired feature images, forming a feature recognition library under different poses. The specific steps are as follows:
s5.1, performing image preprocessing on the acquired real-time image:
S5.1.1, image calibration and correction: because the image acquisition devices exhibit radial and tangential lens distortion, the optical image acquisition device and the infrared image acquisition device are each calibrated for intrinsic parameters; the intrinsic matrix and distortion vector are measured, and the acquired real-time image is corrected accordingly.
S5.1.2, image thresholding: threshold segmentation is performed on the corrected real-time image using the maximum between-class variance (Otsu) method, separating the target from the complex background.
When the real-time image is an optical image, converting the real-time image from an RGB color space to an HSV color space, and performing color phase filtering and binarization processing to obtain a binarized image;
the conversion from RGB color space to HSV color space is given by:

V = max(R, G, B),
S = (V − min(R, G, B)) / V (with S = 0 when V = 0),
H = 60° × (G − B) / (V − min(R, G, B)) when V = R,
H = 60° × (2 + (B − R) / (V − min(R, G, B))) when V = G,
H = 60° × (4 + (R − G) / (V − min(R, G, B))) when V = B,
wherein R represents a red mode value, G represents a green mode value, B represents a blue mode value, V represents a luminance value, S represents saturation, and H represents hue.
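The color-space conversion above can be sketched per pixel; `rgb_to_hsv` is an illustrative name and the components are assumed normalized to [0, 1]:

```python
import numpy as np

def rgb_to_hsv(r, g, b):
    """Convert one RGB pixel (components in [0, 1]) to HSV,
    with H in degrees [0, 360) and S, V in [0, 1]."""
    v = max(r, g, b)
    c = v - min(r, g, b)              # chroma
    s = 0.0 if v == 0 else c / v
    if c == 0:
        h = 0.0                       # hue undefined for gray; use 0
    elif v == r:
        h = 60.0 * (((g - b) / c) % 6)
    elif v == g:
        h = 60.0 * ((b - r) / c + 2)
    else:
        h = 60.0 * ((r - g) / c + 4)
    return h, s, v
```

Hue filtering then reduces to a range test on H (plus S and V floors), which is why the HSV representation is convenient for isolating the drogue's color band before binarization.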
When the real-time image is an infrared image, it is converted from a grayscale image to a binary image in a way that minimizes the loss of target detail and preserves the taper sleeve information.
S5.1.3, image filtering: filter the binarized image to obtain a filtered binary image. The binary image is denoised by image morphology operations, including erosion, dilation, opening, closing, and morphological gradient, finally yielding the oiling taper sleeve information with the background removed.
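The morphological denoising can be sketched with a pure-numpy 3×3 binary opening; a real system would typically use an image-processing library, and the function names here are illustrative:

```python
import numpy as np

def dilate(img):
    """Binary dilation with a 3x3 square structuring element (uint8 0/1 image)."""
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            shifted = np.zeros_like(img)
            ys_dst = slice(max(dy, 0), h + min(dy, 0))
            xs_dst = slice(max(dx, 0), w + min(dx, 0))
            ys_src = slice(max(-dy, 0), h + min(-dy, 0))
            xs_src = slice(max(-dx, 0), w + min(-dx, 0))
            shifted[ys_dst, xs_dst] = img[ys_src, xs_src]
            out |= shifted                # union over all nine shifts
    return out

def erode(img):
    """Binary erosion via duality: complement of the dilated complement."""
    return 1 - dilate(1 - img)

def open_op(img):
    """Opening (erosion then dilation) removes speckle smaller than the
    structuring element while preserving larger targets."""
    return dilate(erode(img))
```

Opening removes isolated noise pixels while restoring the shape of blobs at least as large as the structuring element, which is the effect the filtering step relies on to keep the drogue annulus intact.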
S5.2, detecting and identifying characteristic points of the filtered binary image:
S5.2.1, edge fitting: perform edge fitting on the target region in the binary image to obtain the inner ring and outer ring of the edge annulus of the oiling taper sleeve, and judge the target region to be the end-face annulus of the taper sleeve when the distance between the centers of the inner and outer rings is smaller than the set threshold γ = 15 pixels.
S5.2.2, major-axis ratio matching: when the ratio of the major axes of the inner and outer rings of the taper sleeve's edge annulus lies within the threshold range [δ_min = 0.7, δ_max = 0.9], the end-face region of the oiling taper sleeve matches the annulus region, i.e., the taper sleeve is effectively recognized, and the characteristic image of the oiling taper sleeve is obtained.
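The two acceptance tests of S5.2.1 and S5.2.2 (center distance below γ = 15 pixels, major-axis ratio within [0.7, 0.9]) can be combined into one predicate. This sketch assumes each fitted ellipse is given as (cx, cy, major, minor); the name `is_drogue_annulus` is illustrative:

```python
import numpy as np

def is_drogue_annulus(inner, outer, gamma=15.0, delta_min=0.7, delta_max=0.9):
    """Decide whether two fitted ellipses form the drogue end-face annulus:
    near-concentric centers and an inner/outer major-axis ratio in range."""
    (cx1, cy1, a1, _), (cx2, cy2, a2, _) = inner, outer
    center_dist = np.hypot(cx1 - cx2, cy1 - cy2)   # pixels
    axis_ratio = a1 / a2                           # inner over outer major axis
    return center_dist < gamma and delta_min <= axis_ratio <= delta_max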
Step six: and when the acquisition time of the characteristic image I is smaller than that of the characteristic image II, taking the characteristic image I as a characteristic image, otherwise, taking the characteristic image II as the characteristic image, and calculating the central position coordinate of the oiling taper sleeve according to the characteristic image.
The method for calculating the center position coordinate of the oiling taper sleeve from the feature image is as follows: the end face of the taper sleeve lies on a spatial circle whose projections under different postures are ellipses and whose center is the center of the taper sleeve end face; the center coordinates of the end face are obtained by least-squares parameter-estimation fitting of the detected end-face annulus region.
The least square parameter fitting estimation method comprises the following steps:
the plane equation for any ellipse can be expressed as:
f(x, y) = x² + Axy + By² + Cx + Dy + E = 0,

where f(x, y) denotes the elliptical plane function, (x, y) is a point on the ellipse, and A, B, C, D, E are the conic parameters.
The objective function is:

F(A, B, C, D, E) = Σ_{i=1}^{N} f(x_i, y_i)²,

where N denotes the total number of sample points and (x_i, y_i) denotes the coordinates of point i.
According to the least-squares principle, the parameters are obtained by minimizing the objective function (setting its partial derivatives with respect to each parameter to zero), which yields:

[A B C D E]ᵀ = P⁻¹Q
The boundary points found by the multi-directional search from the center are screened: the minimum and maximum values, where distortion of the search step most often occurs, are removed, with the parameter N not exceeding 10. The five ellipse parameters can then be obtained, namely the center abscissa x, the center ordinate y, the major-axis radius a, the minor-axis radius b, and the angle θ between the major axis and the x axis:
the accurate detection and identification of the taper sleeve targets in different image sequences can be realized through the multidirectional search and least square fitting of the edge center of the oiling taper sleeve.
The feature recognition library under different poses is mainly used to achieve fast and effective recognition of the refueling taper sleeve target. To eliminate the influence on imaging of factors such as the relative position of the tanker and the receiver and atmospheric turbulence, the images are scale-normalized with a Gabor filter function; a feature sample set is established containing the tanker number, the taper sleeve number, the optical/infrared taper sleeve images, the binarized taper sleeve image, and the ellipse center coordinates, major axis, minor axis, and inclination angle; from it the feature recognition library is constructed, on which the algorithm is trained and evolved into an autonomous fast image processing and matching algorithm.
Step seven: and (3) estimating the relative pose of the oiling taper sleeve and the oil receiving plug based on a binocular vision tracking mechanism according to the real-time image obtained in the step (IV) to obtain the relative position and pose information of the oiling taper sleeve and the oil receiving plug.
The method for obtaining the relative position and posture information of the oiling taper sleeve and the oil receiving plug comprises the following steps:
the binocular vision tracking is to convert front and back frame feature matching of a monocular camera into feature matching between left and right views by using a triangulation method, so that positioning and resolving precision in a detection process is improved, the problem of matching of targets under severe disturbance is solved by using an epipolar geometry relation between the binocular cameras, and stability and high efficiency of an algorithm are ensured, and the method comprises the following steps: determining two rays by the left and right image points and the optical center of the camera, wherein the space intersection point of the two rays is the position of an actual object point P; and respectively establishing coordinate systems of the optical camera and the infrared camera, wherein P1 and P2 are image points of the P point on the left and right image planes, and solving the coordinates of the P point after establishing an equation set.
Relative pose estimation is mainly used to obtain the relative positions of the oil receiving plug and the oiling taper sleeve so that the two can dock successfully. In the refueling state, the approach velocity and acceleration between the oil receiving plug and the taper sleeve remain within a small range, so the relative acceleration can be modeled as noise. A Kalman filtering equation in the continuous state is established:

Ẋ(t) = F_KF · X(t) + G · W(t),

wherein I is the 3×3 identity matrix, W(t) is the model error, Z = [X_pd, Y_pd, Z_pd]ᵀ is the relative position vector with X_pd, Y_pd, Z_pd the relative positions along the X, Y, and Z axes, Ẋ(t) is the state derivative, F_KF is the state matrix, G is the noise input matrix, and X(t) is the state variable.
Discretizing the Kalman filtering equation with sampling time Δt gives:

X_{k+1} = Φ · X_k + W_k,

wherein X_{k+1} is the posterior state estimate at step k+1, Φ is the discrete state-transition matrix, X_k is the posterior state estimate at step k, and W_k is the model error.
The system noise variance matrix Q_k is:
the measurement equation is:
wherein Y is k+1 Representing posterior state variables, V k Representing the measured value.
The measurement noise covariance matrix R_k is:
therefore, a Kalman filtering algorithm is adopted to estimate the relative motion state of the oil receiving plug and the oil filling taper sleeve.
Step eight: the control computer controls the three-dimensional adjusting device to move through instructions according to the relative position and gesture information of the oil filling taper sleeve and the oil receiving plug and the central position coordinate of the oil filling taper sleeve, so that the oil receiving plug is aligned to the center of the oil filling taper sleeve for filling oil; the three-dimensional adjusting device comprises a multi-degree-of-freedom adjusting seat, a piston rod inner sleeve, a piston rod outer sleeve and an oil cylinder; the multi-degree-of-freedom adjusting seat can adjust the relative posture of the oil receiving plug in a three-dimensional space according to control, and the adjusting angle range is larger than +/-25 degrees; the inner sleeve of the piston rod is fixedly connected to one end of the bent pipe, and the inner sleeve can perform linear telescopic movement according to control to adjust the relative position of the oil receiving plug.
Step nine: after the oiling process is completed, the three-dimensional adjusting device enables the oil receiving plug to be adjusted to an initial orientation.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.
Claims (10)
1. An aircraft autonomous application method based on artificial intelligence vision tracking is characterized by comprising the following steps:
step one: the optical image acquisition device and the infrared image acquisition device are respectively arranged on the two sides of the nose of the oil receiving machine, with an installation baseline length in the range of 0.5 m to 1.5 m;
step two: the method comprises the steps that an optical image acquisition device and an infrared image acquisition device are used for acquiring images of the oiling taper sleeve in different postures respectively, and the images are transmitted to a control computer;
step three: processing the image by using an image processing method based on artificial intelligence to obtain an image recognition model of the oiling taper sleeve;
step four: when the oil receiving machine needs to be filled with oil, respectively utilizing an optical image acquisition device and an infrared image acquisition device to acquire real-time images of the oil filling taper sleeve, and transmitting the real-time images to a control computer;
step five: processing the real-time image by using an image processing method based on visual tracking and an image recognition model of the oiling taper sleeve respectively to obtain a characteristic image I and a characteristic image II;
step six: when the obtaining time of the characteristic image I is smaller than that of the characteristic image II, taking the characteristic image I as a characteristic image, otherwise taking the characteristic image II as the characteristic image, and calculating the central position coordinate of the oiling taper sleeve according to the characteristic image;
step seven: estimating the relative pose of the oiling taper sleeve and the oil receiving plug based on a binocular vision tracking mechanism according to the real-time image obtained in the step four, and obtaining the relative position and pose information of the oiling taper sleeve and the oil receiving plug;
step eight: the control computer, according to the relative position and attitude information of the oiling taper sleeve and the oil receiving plug and the center position coordinate of the oiling taper sleeve, commands the three-dimensional adjusting device to move so that the oil receiving plug is aligned with the center of the oiling taper sleeve for refueling;
step nine: after the oiling process is completed, the three-dimensional adjusting device enables the oil receiving plug to be adjusted to an initial position.
2. The aircraft autonomous application method based on artificial intelligence vision tracking according to claim 1, wherein the method for processing the image by using the image processing method based on artificial intelligence is as follows:
s3.1, extracting and marking a characteristic region in the image by adopting a visual saliency method of target guidance to obtain a characteristic region image;
s3.2, intensively extracting SIFT features of the feature area image according to a fixed step length, performing sparse coding on the SIFT features by using an overcomplete dictionary, and establishing a feature recognition library;
and S3.3, establishing a basic algorithm model with randomly changed characteristic conditions, and inputting sparse characteristics in a characteristic recognition library into the basic algorithm model for training by using a maximum interval learning method to obtain an image recognition model of the oiling taper sleeve.
3. The aircraft autonomous application method based on artificial intelligence visual tracking according to claim 2, wherein the method for extracting and labeling the feature area in the image by using the visual saliency method of the target guidance in step S3.1 is as follows:
s3.1.1, randomly selecting M rectangular windows in the image, wherein the size and the position of the rectangular windows are randomly set;
s3.1.2, respectively calculating the linear density and FT contrast of the rectangular window, and linearly combining the linear density and the FT contrast to obtain the objectivity measurement of the rectangular window;
the linear density in the rectangular window is expressed as:
the FT contrast of a rectangular window is expressed as:
the objectivity metric for a rectangular window is expressed as:
Score(w)=αLD(w)+(1-α)FTC(w),
wherein LD(w) denotes the linear density of the rectangular window, Line_b(p) the straight-line-segment binary image, Area(w) the rectangular window area, FTC(w) the rectangular window contrast, S_0(p) the salient feature value within the given interval, S_0'(p) the total salient feature value, Score(w) the objectness measure, and α a weight coefficient;
s3.1.3, using non-maximum suppression, screening m rectangular windows as feature region images according to the objectness measure: sorting all rectangular windows by objectness measure in descending order, checking each in turn for more than 50% area overlap with any already-selected window, discarding it if so and otherwise keeping it, until m windows have been screened out.
4. The aircraft autonomous application method based on artificial intelligence visual tracking according to claim 2, wherein the method for establishing the feature recognition library in step S3.2 is as follows: extracting SIFT features over rectangular windows of size a×a at step length q from the feature region image, cascading the extracted SIFT features into a sample feature set X, then, under a sparsity constraint, learning a training dictionary d from X together with the sparse features S_1 of X under d, and adding the sparse features S_1 to the feature recognition library.
5. The aircraft autonomous application method based on artificial intelligence visual tracking according to claim 1, wherein the method for processing the real-time image by using the image processing method based on visual tracking is as follows:
s5.1, performing image preprocessing on the acquired real-time image:
s5.1.1, calibrating and correcting an image: performing internal reference calibration on the optical image acquisition device and the infrared image acquisition device respectively, measuring an internal reference matrix and a distortion vector, and correcting the acquired real-time image according to the internal reference matrix and the distortion vector;
s5.1.2, image thresholding: threshold segmentation is carried out on the corrected real-time image by using a maximum inter-class variance method, and the target is separated from the complex background;
when the real-time image is an optical image, converting the real-time image from an RGB color space to an HSV color space, and performing color phase filtering and binarization processing to obtain a binarized image;
the conversion matrix from RGB color space to HSV color space is:
wherein R represents a red mode value, G represents a green mode value, B represents a blue mode value, V represents a luminance value, S represents saturation, and H represents hue;
when the real-time image is an infrared image, converting the real-time image from a gray level image to a binary image;
s5.1.3, image filtering: filtering the binarized image to obtain a filtered binary image;
s5.2, detecting and identifying characteristic points of the filtered binary image:
s5.2.1, edge fitting: performing edge fitting on a target area in the binary image to respectively obtain an inner ring and an outer ring of an edge ring belt of the oiling taper sleeve, and judging the target area as the ring belt of the end surface of the oiling taper sleeve when the center distance between the inner ring and the outer ring is smaller than a set threshold gamma;
s5.2.2, major-axis ratio matching: when the ratio of the major axes of the inner and outer rings of the edge annulus of the refueling drogue lies within the threshold range [δ_min, δ_max], the end-face region of the oiling taper sleeve matches the annulus region, and the characteristic image of the oiling taper sleeve is obtained.
6. The aircraft autonomous application method based on artificial intelligence visual tracking according to claim 1, wherein the method for calculating the center position coordinate of the refueling drogue according to the feature map is as follows: the end face of the oiling taper sleeve is positioned on a space circle, projections of the space circle under different postures are elliptic, the circle center is the center of the end face of the oiling taper sleeve, and the center coordinates of the end face of the taper sleeve can be obtained by carrying out least square parameter estimation fitting on the ring belt area of the end face of the oiling taper sleeve obtained through detection;
the least square parameter fitting estimation method comprises the following steps:
the plane equation for any ellipse can be expressed as:
f(x, y) = x² + Axy + By² + Cx + Dy + E = 0,

wherein f(x, y) denotes the elliptical plane function, (x, y) is a point on the ellipse, and A, B, C, D, E are the conic parameters;
the objective function is:
where i = 1, 2, …, N, N denotes the total number of sample points, and (x_i, y_i) denotes the coordinates of point i;
according to the least-squares principle, the parameters are obtained by minimizing the objective function, which yields:

[A B C D E]ᵀ = P⁻¹Q;
screening the boundary points found by the multi-directional search from the center, removing the minimum and maximum values where distortion of the search step most often occurs, and obtaining the five ellipse parameters: the center abscissa x, the center ordinate y, the major-axis radius a, the minor-axis radius b, and the angle θ between the major axis and the x axis:
the accurate detection and identification of the taper sleeve targets in different image sequences can be realized through the multidirectional search and least square fitting of the edge center of the oiling taper sleeve.
7. The aircraft autonomous application method based on artificial intelligence visual tracking according to claim 1, wherein the method for obtaining the relative position and posture information of the refueling drogue and the oil receiving plug is as follows:
the binocular vision tracking is to convert the front and back frame characteristic matching of a monocular camera into characteristic matching between left and right views by using a triangulation method, and the specific method is as follows: determining two rays by the left and right image points and the optical center of the camera, wherein the space intersection point of the two rays is the position of an actual object point P; respectively establishing coordinate systems of an optical camera and an infrared camera, wherein P1 and P2 are image points of P points on left and right image planes, and solving coordinates of the P points after establishing an equation set;
in the refueling state, a Kalman filtering equation in a continuous state is established:
wherein I is the 3×3 identity matrix, W(t) is the model error, Z = [X_pd, Y_pd, Z_pd]ᵀ is the relative position vector with X_pd, Y_pd, Z_pd the relative positions along the X, Y, and Z axes, Ẋ(t) is the state derivative, F_KF is the state matrix, G is the noise input matrix, and X(t) is the state variable;
discretizing the Kalman filtering equation with sampling time Δt gives:

X_{k+1} = Φ · X_k + W_k,

wherein X_{k+1} is the posterior state estimate at step k+1, Φ is the state-transition matrix, X_k is the posterior state estimate at step k, and W_k is the model error;
the system noise variance matrix Q_k is:
the measurement equation is:

Y_{k+1} = H · X_{k+1} + V_k,

wherein Y_{k+1} is the measurement vector, H is the measurement matrix, and V_k is the measurement noise;
the measurement noise covariance matrix R_k is:
therefore, a Kalman filtering algorithm is adopted to estimate the relative motion state of the oil receiving plug and the oil filling taper sleeve.
8. The autonomous aircraft oil receiving device based on artificial intelligence visual tracking according to any one of claims 1-7, comprising an oil receiving machine and an oil filling machine, wherein the oil receiving machine is provided with an oil receiving pipe, the oil filling machine is provided with an oil filling taper sleeve, and the oil filling taper sleeve is matched with the oil receiving pipe; the oil receiving machine is characterized in that an image acquisition device and a control computer are further arranged on the oil receiving machine, and the image acquisition device is used for acquiring image information of the oiling taper sleeve and sending the image information to the control computer.
9. The autonomous oil receiving device for an aircraft based on artificial intelligence visual tracking according to claim 8, wherein the oil receiving pipe comprises an oil receiving plug, a three-dimensional adjusting device, a bent pipe and a straight pipe; one end of the oil receiving plug is matched with the oiling taper sleeve, the other end of the oil receiving plug is connected with the three-dimensional adjusting device, the three-dimensional adjusting device is connected with one end of the bent pipe, the other end of the bent pipe is connected with one end of the straight pipe, and the other end of the straight pipe is communicated with the inlet of the fuel passage of the oil receiving machine.
10. The autonomous oil receiving device of the aircraft based on the artificial intelligence visual tracking according to claim 8, wherein a GPS positioning system and an inertial navigation system are arranged on the oil receiving machine, and the positioning obtained through the GPS positioning system and the pose information obtained through the inertial navigation system are transmitted to a control computer for processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110184390.2A CN112862862B (en) | 2021-02-10 | 2021-02-10 | Aircraft autonomous oil receiving device based on artificial intelligence visual tracking and application method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112862862A CN112862862A (en) | 2021-05-28 |
CN112862862B true CN112862862B (en) | 2023-11-17 |
Family
ID=75988022
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110184390.2A Active CN112862862B (en) | 2021-02-10 | 2021-02-10 | Aircraft autonomous oil receiving device based on artificial intelligence visual tracking and application method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112862862B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114359391A (en) * | 2022-01-10 | 2022-04-15 | 北京雷神博峰信息技术有限责任公司 | Automobile fuel filler port space positioning method based on geometric modeling |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104133480A (en) * | 2014-04-17 | 2014-11-05 | 中国航空工业集团公司沈阳飞机设计研究所 | Aerial oil receiving guide control method based on machine vision |
CN108665499A (en) * | 2018-05-04 | 2018-10-16 | 北京航空航天大学 | A kind of low coverage aircraft pose measuring method based on parallax method |
CN108955685A (en) * | 2018-05-04 | 2018-12-07 | 北京航空航天大学 | A kind of tanker aircraft tapered sleeve pose measuring method based on stereoscopic vision |
CN109636853A (en) * | 2018-11-23 | 2019-04-16 | 中国航空工业集团公司沈阳飞机设计研究所 | Air refuelling method based on machine vision |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109085845B (en) * | 2018-07-31 | 2020-08-11 | 北京航空航天大学 | Autonomous air refueling and docking bionic visual navigation control system and method |
Non-Patent Citations (1)
Title |
---|
Machine vision-assisted autonomous aerial refueling simulation for probe-and-drogue UAVs; Wang Xufeng; Dong Xinmin; Kong Xingwei; Science Technology and Engineering (Issue 18); full text *
Also Published As
Publication number | Publication date |
---|---|
CN112862862A (en) | 2021-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111856448A (en) | Marine obstacle identification method and system based on binocular vision and radar | |
CN103149939B (en) | A kind of unmanned plane dynamic target tracking of view-based access control model and localization method | |
CN111968128B (en) | Unmanned aerial vehicle visual attitude and position resolving method based on image markers | |
CN101609504B (en) | Method for detecting, distinguishing and locating infrared imagery sea-surface target | |
CN113627473B (en) | Multi-mode sensor-based water surface unmanned ship environment information fusion sensing method | |
CN111709968B (en) | Low-altitude target detection tracking method based on image processing | |
CN111126116A (en) | Unmanned ship river channel garbage identification method and system | |
CN109584264B (en) | Unmanned aerial vehicle vision guiding aerial refueling method based on deep learning | |
Zhang et al. | Robust method for measuring the position and orientation of drogue based on stereo vision | |
CN115546741A (en) | Binocular vision and laser radar unmanned ship marine environment obstacle identification method | |
CN112184765A (en) | Autonomous tracking method of underwater vehicle based on vision | |
CN112862862B (en) | Aircraft autonomous oil receiving device based on artificial intelligence visual tracking and application method | |
Wu et al. | Autonomous UAV landing system based on visual navigation | |
CN113740864B (en) | Laser three-dimensional point cloud-based detector soft landing end-segment autonomous pose estimation method | |
Li et al. | Vision-based target detection and positioning approach for underwater robots | |
CN113933828A (en) | Unmanned ship environment self-adaptive multi-scale target detection method and system | |
Sun et al. | Automatic targetless calibration for LiDAR and camera based on instance segmentation | |
CN105447431A (en) | Docking airplane tracking and positioning method and system based on machine vision | |
CN117710458A (en) | Binocular vision-based carrier aircraft landing process relative position measurement method and system | |
CN112560922A (en) | Vision-based foggy-day airplane autonomous landing method and system | |
Zhang et al. | Tracking and position of drogue for autonomous aerial refueling | |
CN115797397B (en) | Method and system for all-weather autonomous following of robot by target personnel | |
CN116185049A (en) | Unmanned helicopter autonomous landing method based on visual guidance | |
CN115511853A (en) | Remote sensing ship detection and identification method based on direction variable characteristics | |
CN112648998A (en) | Unmanned aerial vehicle cooperative target autonomous guidance measurement method based on shape and color |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||