CN106570820A - Monocular visual 3D feature extraction method based on four-rotor unmanned aerial vehicle (UAV) - Google Patents


Info

Publication number: CN106570820A (granted as CN106570820B)
Application number: CN201610901957.2A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 陈朋, 陈志祥, 党源杰, 朱威, 梁荣华
Assignee: Zhejiang University of Technology (ZJUT); application filed by Zhejiang University of Technology ZJUT
Legal status: Granted; Active
Prior art keywords: image, point, camera, coordinate, coordinate system

Classifications

    • G06T3/06
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; photographic image

Abstract

A monocular visual 3D feature extraction method based on a four-rotor UAV comprises the following steps: 1) an image is obtained and preprocessed; 2) 2D image feature points are extracted and feature descriptors are established; 3) airborne GPS coordinates, altitude data and IMU sensor parameters are obtained; and 4) a coordinate system is established for the 2D feature descriptors according to airframe parameters, and 3D coordinate information is obtained. This monocular-camera 3D feature extraction method, aimed at the motion-tracking problem of the four-rotor UAV, is simple, has a low computational cost, and greatly simplifies the implementation of motion tracking on a four-rotor UAV.

Description

A monocular vision three-dimensional feature extraction method based on a four-rotor UAV
Technical field
The present invention relates to the field of monocular vision for four-rotor unmanned aerial vehicles (UAVs), and in particular to a three-dimensional object feature extraction method for the scenario of monocular-vision moving-object recognition and tracking on a four-rotor UAV.
Background technology
In recent years, the rapid development of computer technology, automatic control theory, embedded development, chip design and sensor technology has allowed unmanned aerial vehicles to become smaller while gaining greater processing power, and UAV-related technology has attracted increasing attention. Small UAVs are agile to operate and have strong endurance, so they can handle complex tasks in confined environments. Militarily, they can carry out strikes, search under adverse conditions, and gather intelligence in place of soldiers in high-risk environments; in civilian use, they provide aerial photography, remote equipment inspection, environmental monitoring, disaster rescue and other services for practitioners in many industries.
The four-rotor craft is a common rotary-wing UAV that realizes the pitch, roll and yaw maneuvers of the aircraft by regulating motor speeds. Compared with fixed-wing UAVs, rotary-wing UAVs have clear advantages. First, the airframe structure is simple and compact, and produces greater lift per unit volume. Second, the propulsion system is simple: aerial attitude control is completed merely by adjusting the speed of each rotor motor, enabling distinctive flight modes such as vertical take-off and landing and hovering; the system is highly intelligent, and the aircraft holds its aerial attitude well.
Carrying a high-definition camera on a UAV and running machine vision algorithms in real time has become a hot research field in recent years. A UAV offers a flexible viewpoint and can help people capture images that ground-based moving cameras find difficult to obtain; embedding a lightweight camera on a small four-rotor UAV can also provide abundant, inexpensive information. Target tracking means that a UAV in low-altitude flight computes the relative displacement between the target and itself from the visual information acquired by the camera, then automatically adjusts its attitude and position so that the tracked ground target stays near the center of the camera's field of view, thereby following the target's motion to complete the tracking task. However, owing to the technical limitations of a monocular camera, it is extremely difficult to obtain the three-dimensional coordinate information of a moving object; realizing moving-target tracking therefore requires a simple and efficient three-dimensional feature extraction method.
Summary of the invention
To overcome the inability of existing monocular vision feature extraction methods on four-rotor UAV platforms to extract three-dimensional features effectively, and to realize tracking of ground moving objects with a monocular camera, the motion of the aircraft can be simplified to two-dimensional planar motion at a certain height. The two-dimensional feature plane observed by the monocular camera can then be regarded as perpendicular to the plane of motion, so realizing motion tracking additionally requires the relative distance between the feature plane and the aircraft, i.e. the depth-of-field information of the feature plane. Two-dimensional features augmented with depth information approximate three-dimensional feature information. Based on this line of thought, the present invention proposes a monocular vision three-dimensional feature extraction method based on a four-rotor UAV platform.
The technical solution adopted by the present invention to solve the technical problem is:
A monocular vision three-dimensional feature extraction method based on a four-rotor UAV, comprising the following steps:
1) obtaining an image and preprocessing it;
2) extracting two-dimensional image feature points and building feature descriptors;
3) obtaining airborne GPS coordinates, altitude data and IMU sensor parameters;
4) establishing a coordinate system for the two-dimensional feature descriptors according to airframe parameters and obtaining three-dimensional coordinate information, as follows:
First, an intrinsic matrix is built from the camera parameters; using this matrix, the two-dimensional feature coordinates obtained in step 2) are transformed into the image-plane coordinate system I and, using the known focal length, into the camera coordinate system C. Next, the coordinates are further transformed into the body coordinate system B according to the fixed installation error angles and the relative position of the camera with respect to the airframe. Finally, using the IMU attitude angles and fusing the aircraft's GPS coordinates and altitude information, the two-dimensional feature descriptor with depth-of-field information is obtained in the world coordinate system E.
Further, in step 4), the three-dimensional coordinate information of the two-dimensional features is obtained from the airframe parameters in the following steps:
4.1) Conversion between the image coordinate system and the image-plane coordinate system
The image coordinate system is the pixel coordinate system [u, v]T with the top-left corner as origin; it has no physical units, so an image-plane coordinate system I = [xI, yI]T with origin OI on the optical axis is introduced. The image plane is the physically meaningful plane constructed by the camera according to the pinhole imaging model. Let dx and dy be the physical size of each pixel along the u and v axes, i.e. the actual pixel size on the sensor chip; they bridge the image coordinate system and the physical-size coordinate system and are related to the camera focal length f. A point (x1, y1) in the image-plane coordinate system then corresponds to a point (u1, v1) in the pixel coordinate system as follows:

[u1]   [1/dx    0   u0] [x1]        [x1]
[v1] = [  0   1/dy  v0] [y1] = Kin  [y1]        (1)
[ 1]   [  0     0    1] [ 1]        [ 1]

where (u0, v0) is the central point of the image coordinate system, i.e. the pixel corresponding to the origin of the image-plane coordinate system. The matrix Kin contains four parameters related to the camera's internal structure and is called the intrinsic (internal reference) matrix of the camera.
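As a numeric illustration of the pixel/image-plane relation above, the following sketch applies an intrinsic matrix to an image-plane point; all intrinsic values here are assumed for illustration and are not taken from the patent:

```python
import numpy as np

# Illustrative intrinsic parameters (assumed values, not from the patent):
dx, dy = 0.5, 0.5          # physical pixel size along u and v (arbitrary units)
u0, v0 = 320.0, 240.0      # principal point in pixel coordinates

# Intrinsic ("internal reference") matrix relating image-plane and pixel coordinates
K_in = np.array([[1 / dx, 0.0,    u0],
                 [0.0,    1 / dy, v0],
                 [0.0,    0.0,    1.0]])

def image_plane_to_pixel(xI, yI):
    """Map image-plane coordinates (physical units) to pixel coordinates."""
    u, v, w = K_in @ np.array([xI, yI, 1.0])
    return u / w, v / w

u1, v1 = image_plane_to_pixel(0.5, -1.0)   # a point slightly off the optical axis
```

Each image-plane offset is divided by the pixel size and shifted by the principal point, which is exactly the role the text assigns to dx, dy, u0 and v0.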
4.2) Conversion between the image-plane coordinate system and the camera coordinate system
Assume a point PC1 = (xC, yC, zC) in the camera coordinate system whose projection through the optical center onto the image plane is PI1 = (xI, yI). The coordinate transformation between the two points is:

xI = f·xC / zC,  yI = f·yC / zC        (2)

which is written in matrix form as:

     [xI]   [f  0  0  0] [xC]
zC · [yI] = [0  f  0  0] [yC]        (3)
     [ 1]   [0  0  1  0] [zC]
                         [ 1]

where f is the camera focal length.
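The pinhole projection from camera coordinates onto the image plane described above can be sketched directly; the focal length value is an assumption for illustration:

```python
def project_pinhole(xC, yC, zC, f):
    """Pinhole projection of a camera-frame point onto the image plane:
    xI = f*xC/zC, yI = f*yC/zC."""
    if zC <= 0:
        raise ValueError("point must lie in front of the camera (zC > 0)")
    return f * xC / zC, f * yC / zC

# A point 2 m in front of the camera, 0.4 m to the right, with f = 0.004 m (assumed)
xI, yI = project_pinhole(0.4, 0.0, 2.0, 0.004)
```

The division by zC is what discards depth, which is why the method must recover depth separately from the aircraft's own altitude and position data.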
4.3) Conversion between the camera coordinate system and the world coordinate system
First, since there is an installation error between the aircraft and the camera, let [α, β, γ]T denote the fixed three-dimensional installation error angles and [xe, ye, ze]T the spatial offset of the camera from the origin of the body coordinate system. The relation between the camera coordinate system and the body coordinate system is then expressed by the transform T built from the rotation R(α, β, γ) and the translation [xe, ye, ze]T, i.e.

C = T B        (4)

where C denotes the camera coordinate system and B the body coordinate system.
Second, for a point PE = (xE, yE, zE) in space, the corresponding camera coordinates are related to the attitude angles and position of the camera, and the UAV obtains attitude angles and position information in real time during flight. A four-rotor UAV is a system with six degrees of freedom; its attitude is described by the pitch angle φ, the roll angle θ and the yaw angle ψ, whose rotation axes are defined as the X, Y and Z axes respectively, with the coordinate origin at the aircraft's center of gravity. Multiplying the three single-axis rotation matrices yields the rotation matrix of the body:

R = Rz(ψ) Ry(θ) Rx(φ)        (5)

The attitude angles are obtained by quaternion resolution of the three-axis accelerometer and gyroscope components measured by the IMU sensor carried on the four-rotor craft. Let M = [x, y, z]T, where (x, y, z) is the UAV's position in space and z is its flight altitude; the position (x, y, z) is obtained from GPS and the barometer. The point (xC, yC, zC) in the camera coordinate system corresponding to PE is then computed by:

[xC, yC, zC]T = T R (PE − M)        (6)

where T is the transformation matrix between the camera and body coordinate systems, R is the body rotation matrix, and M is the aircraft's world coordinate point; [xE, yE, zE]T is the required three-dimensional coordinate of the feature point.
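The world/body/camera chain described above can be sketched as follows. The Z-Y-X Euler composition and the direction of each transform are assumptions (the patent's exact matrix forms did not survive extraction), so this is a minimal sketch of the idea, not the patent's definitive formulation:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def world_to_camera(P_E, M, pitch, roll, yaw, err=(0.0, 0.0, 0.0), lever=(0.0, 0.0, 0.0)):
    """Chain a world point through the body rotation R, then the fixed
    install-error rotation and lever-arm offset, into the camera frame.
    Euler order and transform direction are assumed conventions."""
    R = rot_z(yaw) @ rot_y(roll) @ rot_x(pitch)                 # airframe rotation matrix
    P_B = R @ (np.asarray(P_E, float) - np.asarray(M, float))   # world offset -> body frame
    R_err = rot_z(err[2]) @ rot_y(err[1]) @ rot_x(err[0])       # install error [alpha, beta, gamma]
    return R_err @ (P_B - np.asarray(lever, float))             # body -> camera

# Level flight at 10 m altitude, perfectly aligned camera: the camera frame
# sees the world point offset by the aircraft position only.
P_C = world_to_camera([1.0, 2.0, 0.0], M=[0.0, 0.0, 10.0], pitch=0.0, roll=0.0, yaw=0.0)
```

With zero attitude and zero installation error both rotations collapse to the identity, so the camera point is simply the world point minus the aircraft position, which matches the intuition behind adding the aircraft's GPS/altitude data as depth.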
Further, in step 1), the image is obtained and preprocessed as follows:
1.1) Image acquisition
Based on the Linux development environment of the four-rotor aircraft platform, images are obtained by subscribing to an image topic with the robot operating system ROS; the camera driver is implemented through ROS and OpenCV.
1.2) Image preprocessing
The captured color image is first converted to grayscale to remove useless color information. The method used here takes the weighted mean of the R, G and B components of each pixel as that pixel's gray value; the channel weights are optimized for computational efficiency so that floating-point arithmetic is avoided. The formula is:

Gray = (R × 30 + G × 59 + B × 11 + 50) / 100        (7)

where Gray is the gray value of the pixel and R, G, B are the values of the red, green and blue channels respectively.
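The integer-weighted conversion of equation (7) can be checked directly; note that the +50 term implements rounding under integer division, and all arithmetic stays in integers:

```python
def gray_value(r, g, b):
    """Integer-only grayscale conversion of equation (7): weighted mean of
    R, G, B with weights 30/59/11 percent; +50 rounds rather than truncates."""
    return (r * 30 + g * 59 + b * 11 + 50) // 100

pixel = gray_value(100, 150, 200)
```

Because the weights sum to 100, pure white maps to 255 and pure black to 0, so the full dynamic range is preserved without any floating-point operation.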
Further, in step 2), the process of extracting two-dimensional image feature points and building feature descriptors is:
2.1) Extracting feature points with ORB
ORB first detects corners with the Harris corner detector, then measures the rotation direction using the intensity centroid. Assuming that a corner's intensity is offset from its center, the direction of the corner is computed by synthesizing the directional intensity of the surrounding points. The following intensity moments are defined:

mpq = Σx,y x^p y^q I(x, y)        (8)

where x, y are coordinates within the image block, I(x, y) is the gray value at that point, and the powers x^p, y^q weight the offset of each point from the center. The direction of the corner is then given by the centroid

C = (m10/m00, m01/m00)        (9)

Constructing the vector from the corner center to this centroid, the direction angle θ of the image block is expressed as:

θ = tan⁻¹(m01, m10)        (10)

Because the keypoints extracted by ORB carry a direction, the feature points extracted with ORB are rotation invariant.
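The intensity-centroid orientation of equations (8) and (10) can be sketched as a brute-force computation over a small patch; real ORB restricts the sums to a circular patch, which this sketch omits:

```python
import math

def patch_orientation(patch):
    """Orientation of an image patch from its intensity moments:
    m_pq = sum over (x, y) of x^p * y^q * I(x, y), with coordinates taken
    relative to the patch centre; theta = atan2(m01, m10)."""
    h, w = len(patch), len(patch[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    m10 = m01 = 0.0
    for yi, row in enumerate(patch):
        for xi, I in enumerate(row):
            m10 += (xi - cx) * I
            m01 += (yi - cy) * I
    return math.atan2(m01, m10)

# All intensity mass to the right of centre -> orientation of 0 rad
patch = [[0, 0, 0],
         [0, 0, 9],
         [0, 0, 0]]
theta = patch_orientation(patch)
```

Rotating the patch rotates the centroid with it, which is exactly why attaching this angle to each keypoint makes the subsequent descriptor rotation invariant.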
2.2) Building the LDB feature descriptor
After the keypoints of the image are obtained, the feature descriptor of the image is built using LDB. The LDB pipeline consists, in order, of building a Gaussian pyramid, building integral images, binary testing, and bit selection and concatenation.
To give LDB scale invariance, a Gaussian pyramid is built and the LDB descriptor of each feature point is computed at the corresponding pyramid level:

Pyri = G(x, y, σi) ⊗ I(x, y), i = 1, …, L        (11)

where I(x, y) is the given image and G(x, y, σi) is a Gaussian filter whose σi gradually increases, used to build the L-level Gaussian pyramid Pyri. For feature extraction without significant scale estimation, such as ORB, the LDB description of each feature point must be computed at every pyramid level.
LDB computes rotated coordinates and generates an oriented patch on the fly using nearest-neighbor interpolation.
After the upright or rotated integral image is built and the intensity and gradient information extracted, τ binary tests are performed between pairs of grid cells:

τ(Func(i), Func(j)) = 1 if Func(i) − Func(j) > 0, else 0        (12)

where Func(·) ∈ {Iavg, dx, dy} extracts the description information of each grid cell.
Given an image block, LDB first divides it into n × n equally sized grid cells and extracts the average intensity and gradient information of each cell; the intensity and gradient information are compared between pairs of grid cells, and the corresponding bit is set to 1 wherever the difference is greater than 0. The average intensity and the gradients along the x and y directions of different grid cells effectively discriminate images, so Func(i) is defined as:

Func(i) ∈ {Iintensity(i), dx(i), dy(i)}        (13)

where Iintensity(i) = (1/m) Σk Intensity(k), summed over the pixels of grid cell i, is the average intensity of grid cell i, dx(i) = Gradientx(i) and dy(i) = Gradienty(i), and m is the total number of pixels in grid cell i; because LDB uses equally sized grids, m is constant within the same Gaussian pyramid level. Gradientx(i) and Gradienty(i) are the gradients of grid cell i along the x and y directions respectively.
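A minimal, dependency-free sketch of the grid-cell comparison stage described above. The forward-difference gradient and the cell ordering are assumptions, and real LDB additionally uses integral images for speed and a bit-selection stage, both omitted here:

```python
def ldb_bits(patch, n=2):
    """Binary-test stage of LDB, sketched: split the patch into n*n cells,
    compute each cell's mean intensity and x/y gradients (mean forward
    differences, an assumption), then compare every cell pair on all three
    quantities; each comparison contributes one bit (1 if difference > 0)."""
    h, w = len(patch), len(patch[0])
    ch, cw = h // n, w // n
    feats = []
    for gy in range(n):
        for gx in range(n):
            cell = [row[gx * cw:(gx + 1) * cw] for row in patch[gy * ch:(gy + 1) * ch]]
            m = ch * cw                                   # pixels per cell
            avg = sum(sum(r) for r in cell) / m
            dx = sum(r[x + 1] - r[x] for r in cell for x in range(cw - 1))
            dy = sum(cell[y + 1][x] - cell[y][x] for y in range(ch - 1) for x in range(cw))
            feats.append((avg, dx, dy))
    bits = []
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            for k in range(3):                            # I_avg, dx, dy
                bits.append(1 if feats[i][k] - feats[j][k] > 0 else 0)
    return bits

bits = ldb_bits([[1, 2, 5, 6],
                 [3, 4, 7, 8],
                 [0, 0, 9, 9],
                 [0, 0, 9, 9]])
```

With n = 2 there are 4 cells, 6 cell pairs and 3 comparisons per pair, giving an 18-bit descriptor for this toy patch.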
2.3) Matching the feature descriptors
After the LDB descriptors of the two images are obtained, the descriptors of the two images are matched using the K-nearest-neighbor method. For each feature point in the target template image, the two nearest-neighbor matches of that point are searched in the input image and their distances are compared: if the distance of the best match is less than 0.8 times the distance of the second-best match, the template point and the corresponding input-image point are regarded as a valid match, and the corresponding coordinate values are recorded. When there are more than 4 matched points between the two images, the target object is considered found in the input image, and the corresponding coordinate information is the two-dimensional feature information.
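The two-nearest-neighbor matching with the 0.8 ratio test described above can be sketched as follows; a plain-Python Hamming distance stands in for whatever binary-descriptor distance an implementation would use:

```python
def ratio_test_match(desc_tmpl, desc_input, ratio=0.8):
    """For each template descriptor, find its two nearest neighbours in the
    input set and keep the best match only if it is clearly better than the
    runner-up (distance ratio test). Descriptors are lists of 0/1 bits."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    matches = []
    for i, d in enumerate(desc_tmpl):
        dists = sorted((hamming(d, q), j) for j, q in enumerate(desc_input))
        (d1, j1), (d2, _) = dists[0], dists[1]
        if d1 < ratio * d2:            # best match clearly better than runner-up
            matches.append((i, j1))
    return matches

tmpl = [[0, 0, 0, 0], [1, 1, 1, 1]]
query = [[0, 0, 0, 1], [1, 1, 1, 1], [1, 0, 1, 0]]
matches = ratio_test_match(tmpl, query)
```

The ratio test discards ambiguous matches whose two nearest neighbors are nearly equidistant; per the text, at least four surviving matches are required before the target is declared found.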
Further, in step 3), the airborne GPS coordinates, altitude data and IMU sensor parameters are obtained as follows:
MAVROS is a ROS package developed by a third-party team for MAVLink. Once MAVROS is started and connected to the aircraft's flight controller, the node begins publishing the aircraft's sensor parameters and flight data; by subscribing to the messages of the aircraft's GPS coordinate topic, GPS altitude topic and IMU attitude-angle topic, the corresponding data are obtained.
The technical concept of the present invention is as follows: as four-rotor aircraft technology has matured, stabilized and been widely promoted on the civilian market, more and more people have turned their attention to vision systems that can be carried on four-rotor aircraft, and the present invention was proposed against this research background of realizing moving-target tracking on a four-rotor aircraft.
For a four-rotor aircraft to track a moving target, the three-dimensional feature information of the target must first be extracted, and such information is difficult to extract with a monocular camera. However, if the tracking motion of the aircraft is reduced to two-dimensional planar motion at a certain height, the required three-dimensional feature information can be reduced to two-dimensional feature information plus depth-of-field information. The present invention therefore proposes adding depth information to the two-dimensional features according to the aircraft's spatial coordinates, so as to realize approximate three-dimensional feature extraction.
The monocular vision three-dimensional feature extraction method based on a four-rotor UAV mainly comprises: obtaining an image and converting it to grayscale, extracting the two-dimensional feature information in the image, obtaining the spatial coordinates and IMU angle information of the aircraft, and finally establishing a coordinate system for the two-dimensional features according to the airframe parameters to obtain the three-dimensional feature information.
The beneficial effect of this method is mainly that it provides a simple, low-cost monocular-camera three-dimensional feature extraction method for the motion-tracking problem of the four-rotor aircraft, greatly simplifying the implementation of four-rotor motion tracking.
Description of the drawings
Fig. 1 is a flow chart of the monocular vision three-dimensional feature extraction method based on a four-rotor UAV;
Fig. 2 shows the relationships among the coordinate systems in the three-dimensional feature extraction process, where [xc, yc, zc]T is the camera coordinate system, [xI, yI, zI]T is the image-plane coordinate system, and [xE, yE, zE]T is the world coordinate system.
Specific embodiment
The present invention is further described below with reference to the accompanying drawings:
Referring to Fig. 1 and Fig. 2, a monocular vision three-dimensional feature extraction method based on a four-rotor UAV comprises the following steps:
1) Obtain the image and preprocess it:
1.1) Image acquisition
In general, there are many ways to capture images. The present invention is based on the Linux development environment of the four-rotor aircraft platform: images are obtained by subscribing to an image topic with the robot operating system ROS, and the camera driver is implemented through ROS and OpenCV.
1.2) Image preprocessing
Because the feature extraction method used in the present invention is based on the texture intensity and gradient information of the image, the captured color image is first converted to grayscale to remove useless color information. The method used here takes the weighted mean of the R, G and B components of each pixel as that pixel's gray value; the channel weights can be optimized for computational efficiency, and floating-point arithmetic is avoided. The formula is:

Gray = (R × 30 + G × 59 + B × 11 + 50) / 100        (7)

where Gray is the gray value of the pixel and R, G, B are the values of the red, green and blue channels respectively.
2) Extract two-dimensional image feature points and build feature descriptors:
2.1) Extracting feature points with ORB
ORB, also called rBRIEF, extracts locally invariant features and is an improvement on the BRIEF algorithm: BRIEF is fast but has no rotation invariance and is relatively sensitive to noise, and ORB resolves both of these shortcomings. To make the algorithm rotation invariant, ORB first detects corners with the Harris corner detector, then measures the rotation direction using the intensity centroid (Intensity Centroid). Assuming that a corner's intensity is offset from its center, the direction of the corner can be computed by synthesizing the directional intensity of the surrounding points. The following intensity moments are defined:

mpq = Σx,y x^p y^q I(x, y)        (8)

where x, y are coordinates within the image block, I(x, y) is the gray value at that point, and the powers x^p, y^q weight the offset of each point from the center. The direction of the corner can then be expressed as the centroid

C = (m10/m00, m01/m00)        (9)

Constructing the vector from the corner center to this centroid, the direction angle θ of the image block can be expressed as:

θ = tan⁻¹(m01, m10)        (10)

Because the keypoints extracted by ORB carry a direction, the feature points extracted with ORB are rotation invariant.
2.2) Building the LDB feature descriptor
After the keypoints of the image are obtained, the feature descriptor of the image can be built using LDB. LDB has 5 main steps, performed in order: building a Gaussian pyramid, principal-direction estimation, building integral images, binary testing, and bit selection and concatenation. Since ORB has been chosen here to extract the feature points, which already carry directionality, the principal-direction estimation can be omitted.
To give LDB scale invariance, a Gaussian pyramid is built and the LDB descriptor of each feature point is computed at the corresponding pyramid level:

Pyri = G(x, y, σi) ⊗ I(x, y), i = 1, …, L        (11)

where I(x, y) is the given image and G(x, y, σi) is a Gaussian filter whose σi gradually increases, used to build the L-level Gaussian pyramid Pyri. For feature extraction without significant scale estimation, such as ORB, the LDB description of each feature point must be computed at every pyramid level.
LDB uses integral-image techniques to compute the average intensity and gradient information of grid cells efficiently. If the image is rotated, an upright integral image is not sufficient and a rotated integral image must be built by accumulating pixels along the principal direction. The two main computational costs of the rotated integral image are computing the rotated coordinates and interpolating the oriented image block. To reduce these costs, the orientation could be quantized and a rotated-coordinate lookup table built in advance; however, fine orientation quantization requires a large lookup table, and slow memory reads would in turn lengthen the running time. LDB therefore computes the rotated coordinates and generates an oriented patch on the fly using nearest-neighbor interpolation.
After the upright or rotated integral image is built and the intensity and gradient information extracted, τ binary tests can be performed between pairs of grid cells:

τ(Func(i), Func(j)) = 1 if Func(i) − Func(j) > 0, else 0        (12)

where Func(·) ∈ {Iavg, dx, dy} extracts the description information of each grid cell.
Given an image block, LDB first divides it into n × n equally sized grid cells and extracts the average intensity and gradient information of each cell; the intensity and gradient information are compared between pairs of grid cells, and the corresponding bit is set to 1 wherever the difference is greater than 0. The matching process combining intensity and gradient has a significantly high matching accuracy. The average intensity and the gradients along the x and y directions of different grid cells can effectively discriminate images, so Func(i) is defined as:

Func(i) ∈ {Iintensity(i), dx(i), dy(i)}        (13)

where Iintensity(i) = (1/m) Σk Intensity(k), summed over the pixels of grid cell i, is the average intensity of grid cell i, dx(i) = Gradientx(i) and dy(i) = Gradienty(i), and m is the total number of pixels in grid cell i; because LDB uses equally sized grids, m is constant within the same Gaussian pyramid level. Gradientx(i) and Gradienty(i) are the gradients of grid cell i along the x and y directions respectively.
2.3) Matching the feature descriptors
After the LDB descriptors of the two images are obtained, the descriptors of the two images can be matched. The present invention matches the two descriptor sets using the K-nearest-neighbor method (k Nearest Neighbors). The idea of KNN is that each class contains multiple samples, each carrying a unique class label indicating which class it belongs to; the distance from each sample to the datum to be classified is computed, the K samples nearest the datum are taken, and the datum is assigned to the class that holds the majority among those K samples. For each feature point in the target template image, the two nearest-neighbor matches of that point are searched in the input image and their distances compared: if the distance of the best match is less than 0.8 times the distance of the second-best match, the template point and the corresponding input-image point are regarded as a valid match, and the corresponding coordinate values are recorded. When there are more than 4 matched points between the two images, the target object is considered found in the input image, and the corresponding coordinate information is the two-dimensional feature information.
3) The process of obtaining the airborne GPS coordinates, altitude data and IMU sensor parameters is:
MAVROS is a ROS package developed by a third-party team for MAVLink. Once MAVROS is started and connected to the aircraft's flight controller, the node begins publishing the aircraft's sensor parameters and flight data; by subscribing to the messages of the aircraft's GPS coordinate topic, GPS altitude topic and IMU attitude-angle topic, the corresponding data can be obtained.
4) Obtain the three-dimensional coordinate information of the two-dimensional features from the airframe parameters, as follows:
4.1) Conversion between the image coordinate system and the image-plane coordinate system
The image coordinate system is the pixel coordinate system [u, v]T with the top-left corner as origin; it has no physical units, so an image-plane coordinate system I = [xI, yI]T with origin OI on the optical axis is introduced. The image plane is the physically meaningful plane constructed by the camera according to the pinhole imaging model. Let dx and dy be the physical size of each pixel along the u and v axes, i.e. the actual pixel size on the sensor chip; they bridge the image coordinate system and the physical-size coordinate system and are related to the camera focal length f. A point (x1, y1) in the image-plane coordinate system then corresponds to a point (u1, v1) in the pixel coordinate system as follows:

[u1]   [1/dx    0   u0] [x1]        [x1]
[v1] = [  0   1/dy  v0] [y1] = Kin  [y1]        (1)
[ 1]   [  0     0    1] [ 1]        [ 1]

where (u0, v0) is the central point of the image coordinate system, i.e. the pixel corresponding to the origin of the image-plane coordinate system. The matrix Kin contains four parameters related to the camera's internal structure and is called the intrinsic (internal reference) matrix of the camera.
4.2) Conversion between the image-plane coordinate system and the camera coordinate system
Assume a point PC1 = (xC, yC, zC) in the camera coordinate system whose projection through the optical center onto the image plane is PI1 = (xI, yI). The coordinate transformation between the two points is:

xI = f·xC / zC,  yI = f·yC / zC        (2)

which can be converted into matrix form as:

     [xI]   [f  0  0  0] [xC]
zC · [yI] = [0  f  0  0] [yC]        (3)
     [ 1]   [0  0  1  0] [zC]
                         [ 1]

where f is the camera focal length.
4.3) Conversion between the camera coordinate system and the world coordinate system
First, since there is an installation error between the aircraft and the camera, let [α, β, γ]T denote the fixed three-dimensional installation error angles and [xe, ye, ze]T the spatial offset of the camera from the origin of the body coordinate system; the relation between the camera coordinate system and the body coordinate system can then be expressed by the transform T built from the rotation R(α, β, γ) and the translation [xe, ye, ze]T, i.e.

C = T B        (4)

where C denotes the camera coordinate system and B the body coordinate system.
Second, for a point PE = (xE, yE, zE) in space, the corresponding camera coordinates are related to the attitude angles and position of the camera, which the UAV can obtain in real time during flight. A four-rotor UAV is a system with six degrees of freedom; its attitude can be described by the pitch angle φ, the roll angle θ and the yaw angle ψ, whose rotation axes are defined as the X, Y and Z axes respectively, with the coordinate origin at the aircraft's center of gravity. Multiplying the three single-axis rotation matrices yields the rotation matrix of the body:

R = Rz(ψ) Ry(θ) Rx(φ)        (5)

The attitude angles can be obtained by quaternion resolution of the three-axis accelerometer and gyroscope components measured by the IMU sensor carried on the four-rotor craft. Let M = [x, y, z]T, where (x, y, z) is the UAV's position in space and z is its flight altitude; the position (x, y, z) can be obtained from GPS and the barometer. The point (xC, yC, zC) in the camera coordinate system corresponding to PE can then be computed by:

[xC, yC, zC]T = T R (PE − M)        (6)

where T is the transformation matrix between the camera and body coordinate systems, R is the body rotation matrix, and M is the aircraft's world coordinate point; [xE, yE, zE]T is the required three-dimensional coordinate of the feature point.

Claims (5)

1. A monocular vision three-dimensional feature extraction method based on a four-rotor UAV, characterized in that the method comprises the following steps:
1) obtaining an image and preprocessing it;
2) extracting two-dimensional image feature points and building feature descriptors;
3) obtaining airborne GPS coordinates, altitude data and IMU sensor parameters;
4) establishing a coordinate system for the two-dimensional feature descriptors according to airframe parameters and obtaining three-dimensional coordinate information, as follows:
First, an intrinsic matrix is built from the camera parameters; using this matrix, the two-dimensional feature coordinates obtained in step 2) are transformed into the image-plane coordinate system I and, using the known focal length, into the camera coordinate system C. Next, the coordinates are further transformed into the body coordinate system B according to the fixed installation error angles and the relative position of the camera with respect to the airframe. Finally, using the IMU attitude angles and fusing the aircraft's GPS coordinates and altitude information, the two-dimensional feature descriptor with depth-of-field information is obtained in the world coordinate system E.
2. The monocular vision 3D feature extraction method based on a quadrotor UAV according to claim 1, characterized in that: in step 4), the 3D coordinate information of the 2D features is obtained according to the airframe parameters by the following steps:
4.1) conversion between the image coordinate system and the image plane coordinate system
The image coordinate system is the pixel coordinate system [u, v]^T with its origin at the upper-left corner of the image; since this coordinate system has no physical unit, an image plane coordinate system I = [x_I, y_I]^T with its origin O_I on the optical axis is introduced. The image plane is a physically meaningful plane constructed by the camera according to the pinhole imaging model. Assume that the physical sizes of each pixel along the u and v axes are dx and dy respectively; they represent the actual size of a pixel on the sensor chip, form the bridge between the image coordinate system and the real-size coordinate system, and are related to the camera focal length f. The correspondence between a point (x_1, y_1) in the image plane coordinate system and the point (u_1, v_1) in the pixel coordinate system is then as follows:
$$\begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} = K_{in} \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} \qquad (1)$$
where (u_0, v_0) is the central point in the image coordinate system, i.e. the pixel corresponding to the origin of the image plane coordinate system. Let $K_{in} = \begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix}$, which contains the four parameters related to the camera's internal structure and is called the intrinsic matrix of the camera;
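As a quick illustration of Eq. (1), the pixel/image-plane conversion is just a scale by 1/dx and 1/dy plus a shift by (u_0, v_0); the numeric values used below (pixel pitch, principal point) are hypothetical.

```python
def image_plane_to_pixel(x1, y1, dx, dy, u0, v0):
    # Eq. (1): [u, v, 1]^T = K_in [x, y, 1]^T
    return x1 / dx + u0, y1 / dy + v0

def pixel_to_image_plane(u1, v1, dx, dy, u0, v0):
    # Inverse mapping: recover physical image-plane coordinates
    # from pixel coordinates.
    return (u1 - u0) * dx, (v1 - v0) * dy
```

The origin of the image plane coordinate system maps exactly onto the principal point (u_0, v_0), and the two functions invert each other.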
4.2) conversion between the image plane coordinate system and the camera coordinate system
Assume a point P_C1 = (x_C, y_C, z_C) in the camera coordinate system; its projection through the optical center onto the image plane coordinate system is P_I1 = (x_I, y_I). The coordinate transformation relation between these two points is then as follows:
$$x_I = \frac{f x_C}{z_C}, \qquad y_I = \frac{f y_C}{z_C} \qquad (2)$$
It is converted into matrix form as follows:
$$\begin{bmatrix} f x_C \\ f y_C \\ z_C \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_C \\ y_C \\ z_C \end{bmatrix} \qquad (3)$$
where f is the camera focal length;
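Eq. (2) is the standard pinhole projection: dividing by the depth z_C scales the point onto the focal plane. A minimal sketch (the focal length and point values below are made up for illustration):

```python
def camera_to_image_plane(xc, yc, zc, f):
    # Eq. (2): x_I = f * x_C / z_C, y_I = f * y_C / z_C
    if zc == 0:
        raise ValueError("point lies in the plane of the optical centre")
    return f * xc / zc, f * yc / zc
```

Doubling the depth of a point halves its image-plane coordinates, which is exactly the perspective foreshortening the projection encodes.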
4.3) conversion between the camera coordinate system and the world coordinate system
First, since there is an installation error between the aircraft and the camera, [α, β, γ]^T is used here to represent the fixed three-dimensional installation error angles and [x_e, y_e, z_e]^T represents the spatial distance from the camera to the origin of the fuselage coordinate system; the relation between the camera coordinate system and the body coordinate system is then represented by the transformation matrix T, i.e.
C=TB (4)
where C represents the camera coordinate system and B represents the body coordinate system;
Secondly, for a point P_E = (x_E, y_E, z_E) in space, the corresponding camera coordinate system is related to the attitude angle and position of the camera, and during flight the UAV obtains its attitude angle and position information in real time; a quadrotor UAV is a system with six degrees of freedom whose attitude angles are divided into the pitch angle φ, the roll angle θ and the yaw angle ψ, whose rotation axes are defined as the X, Y and Z axes respectively, with the coordinate origin at the aircraft's center of gravity; multiplying the rotation matrices obtained for the three axes yields the rotation matrix of the body:
The attitude angles can be obtained by quaternion solution of the x-, y- and z-axis acceleration components and the gyroscope components measured by the quadrotor's IMU sensors; let M denote the aircraft's position in the world coordinate system, where (x, y, z) is the UAV's position in space and z is the UAV's flight altitude, the UAV position (x, y, z) being obtainable from GPS and the barometer; the point (x_C, y_C, z_C) in the camera coordinate system corresponding to P_E can then be calculated by the following relation:
where T is the transformation matrix between the camera coordinate system and the body coordinate system, R is the body rotation matrix, and M is the world-coordinate point of the aircraft; [x_E, y_E, z_E]^T is the three-dimensional coordinate of the required feature point.
3. The monocular vision 3D feature extraction method based on a quadrotor UAV according to claim 1 or 2, characterized in that: in step 1), the steps of acquiring and preprocessing the image are as follows:
1.1) image acquisition
Based on the Linux development environment of the quadrotor platform, images are acquired by subscribing to the image topic using the Robot Operating System (ROS), the camera driver being implemented through ROS and OpenCV;
1.2) image preprocessing
The collected color image must first be converted to grayscale to remove useless image color information; the method used here takes the weighted mean of the R, G and B components of each pixel as the gray value of that pixel, the weights of the different channels being optimized for computational efficiency so as to avoid floating-point operations; the computing formula is:
Gray=(R × 30+G × 59+B × 11+50)/100 (7)
where Gray is the gray value of the pixel, and R, G and B are respectively the values of the red, green and blue color channels.
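The integer-only weighting of Eq. (7) can be checked directly; the `+50` term rounds to the nearest integer before the division by 100, so no floating-point operation is needed:

```python
def gray_value(r, g, b):
    # Eq. (7): Gray = (R*30 + G*59 + B*11 + 50) / 100,
    # computed entirely in integer arithmetic.
    return (r * 30 + g * 59 + b * 11 + 50) // 100
```

Since the weights sum to 100, a uniform gray input maps to itself, and pure white and black are preserved exactly.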
4. The monocular vision 3D feature extraction method based on a quadrotor UAV according to claim 1 or 2, characterized in that: in step 2), the process of extracting 2D image feature points and establishing feature descriptors is:
2.1) ORB feature point extraction
ORB first detects corners using the Harris corner detection method, and then measures the rotation direction using the intensity centroid; it is assumed that the intensity of a corner is offset from its center, and the direction of the corner is computed by synthesizing the direction intensities of the surrounding points, defining the following intensity moments:
$$m_{pq} = \sum_{x,y} x^p y^q I(x, y) \qquad (8)$$
where x and y are coordinates relative to the center of the image block, I(x, y) is the gray value at that point, and x^p and y^q weight the offset of the point from the center; the centroid of the corner is then expressed as:
$$C = \left( \frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}} \right) \qquad (9)$$
A vector is constructed from the corner center to this centroid; the orientation angle θ of the image block can then be expressed as:
$$\theta = \operatorname{atan2}(m_{01}, m_{10}) \qquad (10)$$
Because the key points extracted by ORB carry an orientation, the feature points extracted with ORB are rotation invariant;
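A small sketch of the intensity-centroid computation of Eqs. (8)-(10), with offsets taken relative to the patch centre; the patch values below are made up for illustration.

```python
import math

def patch_orientation(patch):
    # Intensity-centroid orientation (Eqs. 8-10) of a square patch.
    # m10 and m01 are the first-order moments about the patch centre;
    # theta = atan2(m01, m10) is the direction of the centroid vector.
    n = len(patch)
    c = n // 2
    m10 = m01 = 0.0
    for row in range(n):
        for col in range(n):
            x, y = col - c, row - c   # offsets from the centre
            m10 += x * patch[row][col]
            m01 += y * patch[row][col]
    return math.atan2(m01, m10)
```

A patch whose intensity is concentrated toward the +x side yields θ = 0; concentrating it toward +y (larger row index) yields θ = π/2.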
2.2) establishing LDB feature descriptors
After the key points of the image are obtained, the feature descriptor of the image is established using LDB; the LDB processing pipeline consists, in order, of building a Gaussian pyramid, building integral images, binary tests, and bit selection and concatenation;
To give LDB scale invariance, a Gaussian pyramid is built and the LDB descriptor of each feature point is computed at the corresponding pyramid level:
$$Pyr_i = I(x, y) * G(x, y, \sigma_i), \qquad G(x, y, \sigma_i) = \frac{1}{2\pi\sigma_i^2} e^{-\frac{x^2 + y^2}{2\sigma_i^2}}, \qquad 1 \le i \le L \qquad (11)$$
where I(x, y) is the given image and G(x, y, σ_i) is a Gaussian filter whose σ_i increases gradually, building levels 1 to L of the Gaussian pyramid Pyr_i; for feature extraction methods without significant scale estimation, such as ORB, the LDB description of each feature point must be computed at every layer of the pyramid;
LDB computes rotated coordinates and generates an oriented image patch on the fly using nearest-neighbor interpolation;
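The Gaussian filter of Eq. (11), sampled discretely, looks as follows; the normalisation 1/(2πσ²) is the standard 2D Gaussian and is an assumption here, since the printed formula is ambiguous about the square on σ.

```python
import math

def gaussian_kernel(size, sigma):
    # Discrete sample of G(x, y, sigma) from Eq. (11) on a
    # size x size grid centred at the origin; size should be odd.
    c = size // 2
    return [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2.0 * sigma ** 2))
             / (2.0 * math.pi * sigma ** 2)
             for x in range(size)]
            for y in range(size)]
```

For a 7×7 kernel with σ = 1 the samples sum to almost exactly 1, the peak sits at the centre, and the kernel is symmetric, which is why convolving with it only smooths (and never brightens or darkens) the image before the pyramid levels are built.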
After the upright or rotated integral images have been established and the intensity and gradient information extracted, the binary test τ is performed between pairs of grids; the test is defined as follows:
$$\tau(Func(i), Func(j)) = \begin{cases} 1, & Func(i) - Func(j) > 0,\ i \neq j \\ 0, & \text{otherwise} \end{cases} \qquad (12)$$
where Func(·) ∈ {I_avg, d_x, d_y} extracts the description information of each grid;
Given an image block, LDB first divides it into n × n equally sized grid cells, extracts the average intensity and gradient information of each grid cell, compares the intensity and gradient information between pairs of grid cells, and sets the corresponding bit to 1 when the result is greater than 0; the average intensity and the gradients along the x and y directions can efficiently discriminate between images in different grid cells, so Func(i) is defined as follows:
$$Func(i) \in \{ I_{Intensity}(i),\ d_x(i),\ d_y(i) \} \qquad (13)$$
where $I_{Intensity}(i) = \frac{1}{m} \sum_{k=1}^{m} I(k)$ is the average intensity of grid cell i, $d_x(i) = Gradient_x(i)$ and $d_y(i) = Gradient_y(i)$, and m is the total number of pixels in grid cell i; because LDB uses equally sized grids, m is the same within each layer of the Gaussian pyramid; $Gradient_x(i)$ and $Gradient_y(i)$ are respectively the gradients of grid cell i along the x and y directions;
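A toy version of the grid-cell comparison in Eqs. (12)-(13). The half-cell differences used for Gradient_x/Gradient_y and the i < j pair ordering are illustrative choices of this sketch; the patent fixes neither.

```python
def cell_features(block, r0, r1, c0, c1):
    # Average intensity plus simple x/y gradients of one grid cell.
    # The gradient is a half-cell difference (an illustrative choice).
    m = (r1 - r0) * (c1 - c0)
    avg = sum(block[r][c] for r in range(r0, r1) for c in range(c0, c1)) / m
    cm, rm = (c0 + c1) // 2, (r0 + r1) // 2
    dx = sum(block[r][c] for r in range(r0, r1) for c in range(cm, c1)) - \
         sum(block[r][c] for r in range(r0, r1) for c in range(c0, cm))
    dy = sum(block[r][c] for r in range(rm, r1) for c in range(c0, c1)) - \
         sum(block[r][c] for r in range(r0, rm) for c in range(c0, c1))
    return avg, dx, dy

def ldb_descriptor(block, n):
    # Bit string per Eq. (12): tau = 1 iff Func(i) - Func(j) > 0, taken
    # over every unordered cell pair and each of the three features.
    size = len(block)
    step = size // n
    feats = [cell_features(block, gy * step, (gy + 1) * step,
                           gx * step, (gx + 1) * step)
             for gy in range(n) for gx in range(n)]
    bits = []
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            for k in range(3):
                bits.append(1 if feats[i][k] - feats[j][k] > 0 else 0)
    return bits
```

With n = 2 there are 4 cells, hence 6 cell pairs and 18 bits; a constant block produces the all-zero descriptor, while a block with one bright cell sets exactly the average-intensity bit of each pair that involves it.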
2.3) matching of feature descriptors
After the LDB descriptors of the two images are obtained, the descriptors of the two images are matched using the K-nearest-neighbor method; for each feature point in the target template image, the two nearest matches of that point are searched for in the input image and their distances are compared; if, for a point in the template image, the nearest matching distance is less than 0.8 times the second matching distance in the input image, the point in the template and the corresponding point in the input image are considered an effective match and the corresponding coordinate values are recorded; when there are more than 4 matched points between the two images, the target object is considered to have been found in the input image, and the corresponding coordinate information is the 2D feature information.
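The 0.8 ratio test described in step 2.3) can be sketched as follows. The Hamming distance and the toy 4-bit descriptors are assumptions for illustration; real LDB descriptors are binary strings, for which Hamming distance is the natural metric.

```python
def ratio_test_match(template_descs, input_descs, ratio=0.8):
    # K = 2 nearest-neighbour matching with the ratio test: accept a
    # match only when the best distance is less than `ratio` times the
    # second-best distance.
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    matches = []
    for ti, td in enumerate(template_descs):
        dists = sorted((hamming(td, d), ii) for ii, d in enumerate(input_descs))
        (d1, i1), (d2, _) = dists[0], dists[1]
        if d1 < ratio * d2:
            matches.append((ti, i1))   # (template index, input index)
    return matches
```

When the best and second-best candidates are equally distant the match is rejected as ambiguous, which is exactly what the ratio test is for.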
5. The monocular vision 3D feature extraction method based on a quadrotor UAV according to claim 1 or 2, characterized in that: in step 3), the method of acquiring the on-board GPS coordinates, altitude data and IMU sensor parameters is:
MAVROS is a ROS package developed by a third-party team for MAVLink; after MAVROS is started and connected to the aircraft's flight controller, the node starts publishing the aircraft's sensor parameters and flight data; here the aircraft's GPS coordinate topic, GPS altitude topic and IMU attitude angle topic are subscribed to in order to get the corresponding data.
CN201610901957.2A 2016-10-18 2016-10-18 A kind of monocular vision three-dimensional feature extracting method based on quadrotor drone Active CN106570820B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610901957.2A CN106570820B (en) 2016-10-18 2016-10-18 A kind of monocular vision three-dimensional feature extracting method based on quadrotor drone


Publications (2)

Publication Number Publication Date
CN106570820A true CN106570820A (en) 2017-04-19
CN106570820B CN106570820B (en) 2019-12-03

Family

ID=58532962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610901957.2A Active CN106570820B (en) 2016-10-18 2016-10-18 A kind of monocular vision three-dimensional feature extracting method based on quadrotor drone

Country Status (1)

Country Link
CN (1) CN106570820B (en)

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107966112A (en) * 2017-12-03 2018-04-27 中国直升机设计研究所 A kind of large scale rotor movement parameter measurement method
CN108255187A (en) * 2018-01-04 2018-07-06 北京科技大学 A kind of micro flapping wing air vehicle vision feedback control method
CN108335329A (en) * 2017-12-06 2018-07-27 腾讯科技(深圳)有限公司 Applied to the method for detecting position and device, aircraft in aircraft
CN108681324A (en) * 2018-05-14 2018-10-19 西北工业大学 Mobile robot trace tracking and controlling method based on overall Vision
CN108711166A (en) * 2018-04-12 2018-10-26 浙江工业大学 A kind of monocular camera Scale Estimation Method based on quadrotor drone
CN108759826A (en) * 2018-04-12 2018-11-06 浙江工业大学 A kind of unmanned plane motion tracking method based on mobile phone and the more parameter sensing fusions of unmanned plane
CN109117690A (en) * 2017-06-23 2019-01-01 百度在线网络技术(北京)有限公司 Drivable region detection method, device, equipment and storage medium
CN109242779A (en) * 2018-07-25 2019-01-18 北京中科慧眼科技有限公司 A kind of construction method, device and the automatic vehicle control system of camera imaging model
CN109344846A (en) * 2018-09-26 2019-02-15 联想(北京)有限公司 Image characteristic extracting method and device
CN109709977A (en) * 2017-10-26 2019-05-03 广州极飞科技有限公司 The method, apparatus and mobile object of motion track planning
CN109754420A (en) * 2018-12-24 2019-05-14 深圳市道通智能航空技术有限公司 A kind of object distance estimation method, device and unmanned plane
CN109753079A (en) * 2017-11-03 2019-05-14 南京奇蛙智能科技有限公司 A kind of unmanned plane precisely lands in mobile platform method
CN109753076A (en) * 2017-11-03 2019-05-14 南京奇蛙智能科技有限公司 A kind of unmanned plane vision tracing implementing method
CN109839945A (en) * 2017-11-27 2019-06-04 北京京东尚科信息技术有限公司 Unmanned plane landing method, unmanned plane landing-gear and computer readable storage medium
CN109895099A (en) * 2019-03-28 2019-06-18 哈尔滨工业大学(深圳) A kind of flight mechanical arm visual servo grasping means based on physical feature
CN110032983A (en) * 2019-04-22 2019-07-19 扬州哈工科创机器人研究院有限公司 A kind of track recognizing method based on ORB feature extraction and FLANN Rapid matching
CN110254258A (en) * 2019-06-13 2019-09-20 暨南大学 A kind of unmanned plane wireless charging system and method
CN110297498A (en) * 2019-06-13 2019-10-01 暨南大学 A kind of rail polling method and system based on wireless charging unmanned plane
CN110516531A (en) * 2019-07-11 2019-11-29 广东工业大学 A kind of recognition methods of the dangerous mark based on template matching
WO2020014909A1 (en) * 2018-07-18 2020-01-23 深圳市大疆创新科技有限公司 Photographing method and device and unmanned aerial vehicle
CN110942473A (en) * 2019-12-02 2020-03-31 哈尔滨工程大学 Moving target tracking detection method based on characteristic point gridding matching
CN111126450A (en) * 2019-11-29 2020-05-08 上海宇航系统工程研究所 Modeling method and device of cuboid spacecraft based on nine-line configuration
CN111524182A (en) * 2020-04-29 2020-08-11 杭州电子科技大学 Mathematical modeling method based on visual information analysis
CN111583093A (en) * 2020-04-27 2020-08-25 西安交通大学 Hardware implementation method for ORB feature point extraction with good real-time performance
CN111754603A (en) * 2020-06-23 2020-10-09 自然资源部四川测绘产品质量监督检验站(四川省测绘产品质量监督检验站) Unmanned aerial vehicle image connection diagram construction method and system
CN111784731A (en) * 2020-06-19 2020-10-16 哈尔滨工业大学 Target attitude estimation method based on deep learning
CN112116651A (en) * 2020-08-12 2020-12-22 天津(滨海)人工智能军民融合创新中心 Ground target positioning method and system based on monocular vision of unmanned aerial vehicle
CN112197766A (en) * 2020-09-29 2021-01-08 西安应用光学研究所 Vision attitude measuring device for mooring rotor platform
CN112797912A (en) * 2020-12-24 2021-05-14 中国航天空气动力技术研究院 Binocular vision-based wing tip deformation measurement method for large flexible unmanned aerial vehicle
CN112907662A (en) * 2021-01-28 2021-06-04 北京三快在线科技有限公司 Feature extraction method and device, electronic equipment and storage medium
CN113403942A (en) * 2021-07-07 2021-09-17 西北工业大学 Label-assisted bridge detection unmanned aerial vehicle visual navigation method
CN114281096A (en) * 2021-11-09 2022-04-05 中时讯通信建设有限公司 Unmanned aerial vehicle tracking control method, device and medium based on target detection algorithm

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2849150A1 (en) * 2013-09-17 2015-03-18 Thomson Licensing Method for capturing the 3D motion of an object, unmanned aerial vehicle and motion capture system
CN105809687A (en) * 2016-03-08 2016-07-27 清华大学 Monocular vision ranging method based on edge point information in image
CN105928493A (en) * 2016-04-05 2016-09-07 王建立 Binocular vision three-dimensional mapping system and method based on UAV
CN105953796A (en) * 2016-05-23 2016-09-21 北京暴风魔镜科技有限公司 Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone


Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117690A (en) * 2017-06-23 2019-01-01 百度在线网络技术(北京)有限公司 Drivable region detection method, device, equipment and storage medium
CN109709977B (en) * 2017-10-26 2022-08-16 广州极飞科技股份有限公司 Method and device for planning movement track and moving object
CN109709977A (en) * 2017-10-26 2019-05-03 广州极飞科技有限公司 The method, apparatus and mobile object of motion track planning
CN109753076A (en) * 2017-11-03 2019-05-14 南京奇蛙智能科技有限公司 A kind of unmanned plane vision tracing implementing method
CN109753076B (en) * 2017-11-03 2022-01-11 南京奇蛙智能科技有限公司 Unmanned aerial vehicle visual tracking implementation method
CN109753079A (en) * 2017-11-03 2019-05-14 南京奇蛙智能科技有限公司 A kind of unmanned plane precisely lands in mobile platform method
CN109839945B (en) * 2017-11-27 2022-04-26 北京京东乾石科技有限公司 Unmanned aerial vehicle landing method, unmanned aerial vehicle landing device and computer readable storage medium
CN109839945A (en) * 2017-11-27 2019-06-04 北京京东尚科信息技术有限公司 Unmanned plane landing method, unmanned plane landing-gear and computer readable storage medium
CN107966112A (en) * 2017-12-03 2018-04-27 中国直升机设计研究所 A kind of large scale rotor movement parameter measurement method
CN108335329A (en) * 2017-12-06 2018-07-27 腾讯科技(深圳)有限公司 Applied to the method for detecting position and device, aircraft in aircraft
CN108335329B (en) * 2017-12-06 2021-09-10 腾讯科技(深圳)有限公司 Position detection method and device applied to aircraft and aircraft
CN108255187A (en) * 2018-01-04 2018-07-06 北京科技大学 A kind of micro flapping wing air vehicle vision feedback control method
CN108759826A (en) * 2018-04-12 2018-11-06 浙江工业大学 A kind of unmanned plane motion tracking method based on mobile phone and the more parameter sensing fusions of unmanned plane
CN108759826B (en) * 2018-04-12 2020-10-27 浙江工业大学 Unmanned aerial vehicle motion tracking method based on multi-sensing parameter fusion of mobile phone and unmanned aerial vehicle
CN108711166A (en) * 2018-04-12 2018-10-26 浙江工业大学 A kind of monocular camera Scale Estimation Method based on quadrotor drone
CN108711166B (en) * 2018-04-12 2022-05-03 浙江工业大学 Monocular camera scale estimation method based on quad-rotor unmanned aerial vehicle
CN108681324A (en) * 2018-05-14 2018-10-19 西北工业大学 Mobile robot trace tracking and controlling method based on overall Vision
WO2020014909A1 (en) * 2018-07-18 2020-01-23 深圳市大疆创新科技有限公司 Photographing method and device and unmanned aerial vehicle
CN110799921A (en) * 2018-07-18 2020-02-14 深圳市大疆创新科技有限公司 Shooting method and device and unmanned aerial vehicle
CN109242779A (en) * 2018-07-25 2019-01-18 北京中科慧眼科技有限公司 A kind of construction method, device and the automatic vehicle control system of camera imaging model
CN109344846A (en) * 2018-09-26 2019-02-15 联想(北京)有限公司 Image characteristic extracting method and device
CN109344846B (en) * 2018-09-26 2022-03-25 联想(北京)有限公司 Image feature extraction method and device
CN109754420A (en) * 2018-12-24 2019-05-14 深圳市道通智能航空技术有限公司 A kind of object distance estimation method, device and unmanned plane
CN109895099A (en) * 2019-03-28 2019-06-18 哈尔滨工业大学(深圳) A kind of flight mechanical arm visual servo grasping means based on physical feature
CN110032983A (en) * 2019-04-22 2019-07-19 扬州哈工科创机器人研究院有限公司 A kind of track recognizing method based on ORB feature extraction and FLANN Rapid matching
CN110297498A (en) * 2019-06-13 2019-10-01 暨南大学 A kind of rail polling method and system based on wireless charging unmanned plane
CN110254258A (en) * 2019-06-13 2019-09-20 暨南大学 A kind of unmanned plane wireless charging system and method
CN110516531B (en) * 2019-07-11 2023-04-11 广东工业大学 Identification method of dangerous goods mark based on template matching
CN110516531A (en) * 2019-07-11 2019-11-29 广东工业大学 A kind of recognition methods of the dangerous mark based on template matching
CN111126450A (en) * 2019-11-29 2020-05-08 上海宇航系统工程研究所 Modeling method and device of cuboid spacecraft based on nine-line configuration
CN111126450B (en) * 2019-11-29 2024-03-19 上海宇航系统工程研究所 Modeling method and device for cuboid space vehicle based on nine-line configuration
CN110942473A (en) * 2019-12-02 2020-03-31 哈尔滨工程大学 Moving target tracking detection method based on characteristic point gridding matching
CN111583093A (en) * 2020-04-27 2020-08-25 西安交通大学 Hardware implementation method for ORB feature point extraction with good real-time performance
CN111524182B (en) * 2020-04-29 2023-11-10 杭州电子科技大学 Mathematical modeling method based on visual information analysis
CN111524182A (en) * 2020-04-29 2020-08-11 杭州电子科技大学 Mathematical modeling method based on visual information analysis
CN111784731A (en) * 2020-06-19 2020-10-16 哈尔滨工业大学 Target attitude estimation method based on deep learning
CN111754603B (en) * 2020-06-23 2024-02-13 自然资源部四川测绘产品质量监督检验站(四川省测绘产品质量监督检验站) Unmanned aerial vehicle image connection diagram construction method and system
CN111754603A (en) * 2020-06-23 2020-10-09 自然资源部四川测绘产品质量监督检验站(四川省测绘产品质量监督检验站) Unmanned aerial vehicle image connection diagram construction method and system
CN112116651B (en) * 2020-08-12 2023-04-07 天津(滨海)人工智能军民融合创新中心 Ground target positioning method and system based on monocular vision of unmanned aerial vehicle
CN112116651A (en) * 2020-08-12 2020-12-22 天津(滨海)人工智能军民融合创新中心 Ground target positioning method and system based on monocular vision of unmanned aerial vehicle
CN112197766A (en) * 2020-09-29 2021-01-08 西安应用光学研究所 Vision attitude measuring device for mooring rotor platform
CN112797912A (en) * 2020-12-24 2021-05-14 中国航天空气动力技术研究院 Binocular vision-based wing tip deformation measurement method for large flexible unmanned aerial vehicle
CN112907662B (en) * 2021-01-28 2022-11-04 北京三快在线科技有限公司 Feature extraction method and device, electronic equipment and storage medium
CN112907662A (en) * 2021-01-28 2021-06-04 北京三快在线科技有限公司 Feature extraction method and device, electronic equipment and storage medium
CN113403942A (en) * 2021-07-07 2021-09-17 西北工业大学 Label-assisted bridge detection unmanned aerial vehicle visual navigation method
CN114281096A (en) * 2021-11-09 2022-04-05 中时讯通信建设有限公司 Unmanned aerial vehicle tracking control method, device and medium based on target detection algorithm

Also Published As

Publication number Publication date
CN106570820B (en) 2019-12-03

Similar Documents

Publication Publication Date Title
CN106570820B (en) A kind of monocular vision three-dimensional feature extracting method based on quadrotor drone
Rohan et al. Convolutional neural network-based real-time object detection and tracking for parrot AR drone 2
CN108711166B (en) Monocular camera scale estimation method based on quad-rotor unmanned aerial vehicle
CN106774386B (en) Unmanned plane vision guided navigation landing system based on multiple dimensioned marker
CN110058602A (en) Multi-rotor unmanned aerial vehicle autonomic positioning method based on deep vision
CN109029433A (en) Join outside the calibration of view-based access control model and inertial navigation fusion SLAM on a kind of mobile platform and the method for timing
CN109949361A (en) A kind of rotor wing unmanned aerial vehicle Attitude estimation method based on monocular vision positioning
Huang et al. Structure from motion technique for scene detection using autonomous drone navigation
CN107087427A (en) Control method, device and the equipment and aircraft of aircraft
CN108428255A (en) A kind of real-time three-dimensional method for reconstructing based on unmanned plane
CN108759826A (en) A kind of unmanned plane motion tracking method based on mobile phone and the more parameter sensing fusions of unmanned plane
CN106529538A (en) Method and device for positioning aircraft
Haroon et al. Multisized object detection using spaceborne optical imagery
Xu et al. A cascade adaboost and CNN algorithm for drogue detection in UAV autonomous aerial refueling
Liu et al. Visual Object Tracking and Servoing Control of a Nano-Scale Quadrotor: System, Algorithms, and Experiments.
Masselli et al. A novel marker based tracking method for position and attitude control of MAVs
Yu et al. Multi-resolution visual fiducial and assistant navigation system for unmanned aerial vehicle landing
Aposporis Object detection methods for improving UAV autonomy and remote sensing applications
CN114581516A (en) Monocular vision-based multi-unmanned aerial vehicle intelligent identification and relative positioning method
Zhai et al. Target detection of low-altitude uav based on improved yolov3 network
Zhou et al. Real-time object detection and pose estimation using stereo vision. An application for a Quadrotor MAV
Montanari et al. Ground vehicle detection and classification by an unmanned aerial vehicle
Rudol Increasing autonomy of unmanned aircraft systems through the use of imaging sensors
Zarei et al. Indoor UAV object detection algorithms on three processors: implementation test and comparison
Yubo et al. Survey of UAV autonomous landing based on vision processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant