WO2018227576A1 - Ground form detection method and system, drone landing method, and drone - Google Patents

Ground form detection method and system, drone landing method, and drone (地面形态检测方法及系统、无人机降落方法和无人机)

Info

Publication number
WO2018227576A1
WO2018227576A1 PCT/CN2017/088710 CN2017088710W WO2018227576A1 WO 2018227576 A1 WO2018227576 A1 WO 2018227576A1 CN 2017088710 W CN2017088710 W CN 2017088710W WO 2018227576 A1 WO2018227576 A1 WO 2018227576A1
Authority
WO
WIPO (PCT)
Prior art keywords
structured light
ground
image
drone
coordinate system
Prior art date
Application number
PCT/CN2017/088710
Other languages
English (en)
French (fr)
Inventor
崔健 (Cui Jian)
Original Assignee
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority to CN201780004468.XA (granted as CN108474658B)
Priority to PCT/CN2017/088710
Priority to CN202011528344.1A (published as CN112710284A)
Publication of WO2018227576A1


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G01C11/025Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures by scanning the object
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/30Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • G01C11/06Interpretation of pictures by comparison of two or more pictures of the same area
    • G01C11/08Interpretation of pictures by comparison of two or more pictures of the same area the pictures not being supported in the same relative position as when they were taken
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • G01C11/30Interpretation of pictures by triangulation
    • G01C11/34Aerial triangulation
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/04Control of altitude or depth
    • G05D1/042Control of altitude or depth specially adapted for aircraft

Definitions

  • The present disclosure relates to the field of aircraft technology, and more particularly to a drone-based ground form detection method and system, a drone autonomous landing method, and a drone.
  • When returning to land, an existing drone is generally operated by an operator, via a controller, to descend to a designated position. With the continuous development of technology, drones capable of autonomous return and landing have gradually emerged. Current autonomous return-and-landing methods mainly use GPS satellite positioning, assisted by modules such as an inertial measurement unit and a compass.
  • However, current autonomous return-and-landing methods cannot detect the form of the ground on which the drone is to land, and in particular cannot accurately detect whether that ground is flat. As a result, under the current autonomous return-and-landing approach, the drone may land on ground that is unsuitable for landing.
  • According to a first aspect, a drone-based ground form detection method is provided, wherein a projection device for projecting structured light and an image capture device for acquiring images are mounted on the drone. The method includes: the projection device projecting predetermined structured light onto the ground on which the drone is to land; the image capture device collecting the structured light image modulated on the ground; and determining the form of the ground based on the structured light image.
  • According to another aspect, a drone-based ground form detection system is provided, comprising: a projection device mounted on the drone for projecting predetermined structured light toward the ground; an image capture device mounted on the drone for collecting the structured light image modulated on the ground; and a data processing device for determining the form of the ground based on the structured light image.
  • According to another aspect, a drone autonomous landing method is provided, wherein a projection device for projecting structured light and an image capture device for acquiring images are mounted on the drone. The method comprises: the projection device projecting predetermined structured light onto the ground on which the drone is to land; the image capture device collecting the structured light image modulated on the ground; determining the form of the ground based on the structured light image; and, when the form of the ground does not satisfy a predetermined flatness requirement, prohibiting the drone from landing on the ground and/or autonomously determining the form of adjacent ground.
  • According to another aspect, a drone is provided, comprising: a projection device mounted on the drone for projecting predetermined structured light toward the ground; an image capture device mounted on the drone for collecting the structured light image modulated on the ground; and a control device for determining the form of the ground based on the structured light image and, when the form of the ground does not satisfy a predetermined flatness requirement, prohibiting the drone from landing on the ground and/or autonomously determining the form of adjacent ground.
  • According to another aspect, an electronic device for a drone is provided, comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to: acquire a structured light image; and determine the form of the ground based on the structured light image.
  • FIG. 1 is a schematic diagram of a system for detecting a ground shape by a drone based on binocular stereo vision measurement technology according to an embodiment of the present disclosure
  • FIG. 2 is a flowchart of a method for detecting a ground shape by a drone based on binocular stereo vision measurement technology according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a system for detecting a ground form by a drone based on a structured light 3D vision measurement technique, in accordance with another embodiment of the present disclosure
  • FIG. 4 is a flow chart of a method for detecting a ground form by a drone based on a structured light 3D vision measurement technique, in accordance with another embodiment of the present disclosure
  • FIGS. 5A to 5D are schematic views respectively showing point structured light, line structured light, multi-line structured light, and grid structured light;
  • FIG. 6 is a schematic diagram of a modulated structured light image, in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 7 is a schematic view showing the ground form detection system of FIG. 3 detecting the ground;
  • FIG. 8 is a schematic diagram showing the relationship between an image pixel coordinate system and an imaging coordinate system according to an embodiment of the present disclosure
  • FIG. 9 is a schematic diagram showing the relationship between the image capture device coordinate system and the UAV coordinate system according to an embodiment of the present disclosure;
  • FIG. 10 is a schematic diagram of an imaging model of an image capture device in accordance with an embodiment of the present disclosure.
  • FIGS. 11A and 11B are flowcharts of a drone autonomous landing method based on structured light 3D vision measurement technology, in accordance with an embodiment of the present disclosure;
  • FIG. 12 is a schematic diagram of a drone capable of autonomous landing, in accordance with an embodiment of the present disclosure.
  • FIG. 13 is a block diagram showing an example hardware arrangement of a data processing device or a control device according to an embodiment of the present disclosure.
  • In establishing the computational model for three-dimensional reconstruction and calibrating its parameters, five coordinate systems need to be established: the drone coordinate system (also referred to as the "aircraft coordinate system"), the image capture device coordinate system, the projection device coordinate system, the image pixel coordinate system, and the imaging coordinate system.
  • In some embodiments of the present disclosure, a camera or camera module is taken as an example of the image capture device; accordingly, the image capture device coordinate system may be described as a camera coordinate system. Likewise, a projector is taken as an example of the projection device, and the projection device coordinate system may be described as a projector coordinate system.
  • The aircraft coordinate system, or UAV coordinate system, uses the center of the drone as its origin and the axis perpendicular to the ground as its Z axis, and represents the relative spatial position between any point in space (for example, a point on the three-dimensional ground) and the drone.
  • the image acquisition device coordinate system is centered on the optical center of the image acquisition device, and the main optical axis of the image acquisition device is the Z axis, which is used to indicate the relative spatial positional relationship between any point in the space and the image acquisition device.
  • the projection device coordinate system is centered on the optical center of the projection device, and the projection light plane is the plane where the X and Y coordinate axes are located, and is used to indicate the relative spatial positional relationship between any point in the space and the projection device.
  • The image pixel coordinate system is an image coordinate system in which the row and column indices of a pixel in the array serve as its coordinates.
  • the imaging coordinate system is also an image coordinate system, and the unit is usually expressed in millimeters. Since the image pixel coordinate system is not associated with the physical length, an imaging coordinate system in units of actual length is established.
  • each coordinate system will be further described in conjunction with the drawings.
  • According to an exemplary embodiment of the present disclosure, when the drone returns to land, the ground form may be detected based on binocular stereo vision measurement technology to detect whether the ground is flat, thereby determining whether it is suitable for drone landing.
  • FIG. 1 is a schematic diagram of a system for detecting a ground form by a drone based on binocular stereo vision measurement technology in accordance with an embodiment of the present disclosure.
  • The drone 10 is equipped with two image capture devices 12 and 14, each of which may be, for example, a video camera, a camera module, a digital camera, or any other device having an imaging or photographing function.
  • the two image capture devices 12, 14 are spaced apart by a predetermined distance.
  • the binocular stereo vision measurement technique can utilize two image capture devices 12, 14 to simulate human eyes for depth measurements to obtain a depth map of the ground.
  • In step S202, the image capture devices 12 and 14 are calibrated.
  • In this embodiment, the two image capture devices 12 and 14 are both cameras.
  • the internal parameters of the image capturing devices 12 and 14 can be calibrated, and the external parameters of the image capturing devices 12 and 14 can also be calibrated.
  • the internal parameters may include parameters related to the characteristics of the image capture device 12, 14, such as the focal length, pixel size, and/or lens distortion parameters of the image capture device.
  • the external parameters may include parameters of the image capture devices 12, 14 in the UAV coordinate system, such as the spatial position and/or rotational direction of the image capture devices 12, 14 relative to the drone 10.
  • In step S204, images of the same position on the measured ground are acquired from different orientations by the two image capture devices 12 and 14, respectively.
  • the left image capturing device 12 and the right image capturing device 14 respectively record images of the same position P of the measured ground from different orientations to obtain two two-dimensional images.
  • In step S206, feature points are extracted.
  • the method of extracting feature points includes an edge extraction method, an interest operator method, a minimum gray level difference method, and the like.
  • the acquired image may also be pre-processed prior to extracting the feature points to make the feature points in the image more prominent.
  • In step S208, the feature points in the two two-dimensional images are matched.
  • In some embodiments, matching the feature points in the two two-dimensional images includes finding, for each feature point in the image acquired by the left image capture device 12, the corresponding point in the image acquired by the right image capture device 14.
  • matching algorithms such as region matching, feature matching, or phase matching may be used to match feature points in two two-dimensional images according to different matching primitives.
  • In step S210, based on the internal and external parameters of the two image capture devices obtained by calibration, the three-dimensional coordinates of a matched point in the UAV coordinate system are calculated.
  • In some embodiments, after the steps of calibrating the image capture devices, acquiring images, extracting feature points, and matching feature points are completed, the corresponding homogeneous coordinates of any measured point in the two images and the parameter matrices of the two image capture devices are obtained, from which the spatial point can be three-dimensionally reconstructed. For example, the position of the spatial point relative to the drone 10, i.e., the three-dimensional coordinates of the spatial point in the UAV coordinate system, is calculated.
  • From the calculated three-dimensional coordinates of a series of points in the UAV coordinate system, the form of the ground may be determined, so as to determine whether the ground is suitable for landing.
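As an illustration of step S210, the following is a minimal sketch, assuming OpenCV's triangulation API and already-calibrated matrices; the function name and arguments are ours, not the patent's implementation.

```python
import cv2
import numpy as np

def triangulate(pts_left, pts_right, K1, K2, R, T):
    """Triangulate matched pixel coordinates (2xN float arrays) from the left
    and right image capture devices into 3D points in the left-device frame.
    K1, K2 are the internal parameter matrices; R, T give the pose of the
    right device relative to the left, from the calibration of step S202."""
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])       # left projection matrix
    P2 = K2 @ np.hstack([R, T.reshape(3, 1)])                # right projection matrix
    X_h = cv2.triangulatePoints(P1, P2, pts_left, pts_right)  # 4xN homogeneous
    return X_h[:3] / X_h[3]                                  # 3xN Euclidean points
```

The resulting points are in the left device's frame; a further rigid transform using the external parameters would map them into the UAV coordinate system.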
  • In this binocular method, the "matching" step is a relatively critical one. When the drone returns to land, the form of the ground to be landed on may be complicated: the ground may have no texture, or it may have repeated texture. In such cases, because there are no sufficiently distinct feature points, matching the feature points in the two two-dimensional images becomes more difficult, which makes it extremely difficult to accurately detect whether the ground to be landed on is flat and to accurately determine whether it is suitable for landing.
  • To detect the ground form more accurately, embodiments of the present disclosure also provide a system and method for detecting the ground form by a drone based on structured light 3D vision measurement technology.
  • Structured light 3D vision measurement technology is an active optical measurement technology.
  • A structured light projection device projects a controllable light spot, light stripe, or light plane onto the surface of the object to be measured; an image sensor (for example, an image capture device) collects images; and the three-dimensional coordinates of the measured object are calculated by triangulation using the geometric relationships of the system.
  • FIG. 3 is a schematic diagram of a system for detecting a ground form by a drone based on a structured light 3D vision measurement technique in accordance with another embodiment of the present disclosure.
  • the drone 30 is equipped with a projection device 32 and an image capture device 34.
  • the projection device 32 and the image capturing device 34 are separated by a certain distance.
  • Projection device 32 is used to project predetermined structured light onto the ground to be landed on, and may include a projector.
  • the image capture device 34 is configured to acquire a modulated structured light image on the ground.
  • The image capture device 34 may include a video camera, a camera module, a camera such as a digital camera, or any other device having an imaging or photographing function.
  • In some embodiments, the projection device 32 can project light of various wavelength ranges, such as visible light and infrared light, onto the ground; correspondingly, the image capture device 34 can be an image capture device that senses light of the corresponding wavelength ranges. The wavelength range of the light projected by the projection device 32 needs to match the wavelength range of the light sensed by the image capture device 34; for example, the projection device 32 can be a projector that projects infrared light, and the image capture device 34 can be an infrared camera.
  • the drone 30 includes a data processing device 36 for processing images acquired by the image capture device 34, calculating three-dimensional information, and performing three-dimensional reconstruction.
  • the flash frequency of the projection device 32 can be the same or substantially the same as the capture frequency of the image capture device 34.
  • In this way, the images obtained by the image capture device 34 do not flicker or change color, which helps the image capture device acquire clear and stable images.
  • FIG. 4 is a flow chart of a method of detecting a ground form by a drone based on a structured light 3D vision measurement technique in accordance with another embodiment of the present disclosure.
  • In step S402, the projection device 32 and the image capture device 34 are calibrated. In this embodiment, the projection device 32 may be a projector, and the image capture device 34 may be a camera.
  • the internal parameters of the image capture device 34 can be calibrated and the external parameters of the image capture device 34 can also be calibrated.
  • The internal parameters may include parameters related to the characteristics of the image capture device, such as its focal length, pixel size, and lens distortion parameters.
  • The external parameters may include parameters of the image capture device in the UAV coordinate system, such as its position and rotational direction.
  • Calibrating the projection device 32 can include calibrating the positional relationship between the structured light plane of the structured light projected by the projection device and the image capture device 34, for example, the plane equation of the structured light plane in the image capture device coordinate system.
  • In step S404, the projection device 32 projects the predetermined structured light onto the ground P, below the drone 30, on which the drone is to land.
  • Depending on its form, structured light can be divided into point structured light, line structured light, multi-line structured light, and grid structured light.
  • For point structured light, as shown in FIG. 5A, a beam of structured light is projected onto the surface of the object to be measured and produces a light spot on that surface.
  • For example, the projection device 32 can be a laser head that projects point structured light.
  • The optical axis of the image capture device and the beam intersect at the light spot in space; the line connecting the projection device and the image capture device forms a baseline, and the intersection point and the baseline form a triangle. The constraint relations of this triangle can be obtained by calibration, and the spatial position of the light spot in the UAV coordinate system can then be uniquely determined.
  • Because one shot captures the coordinate information of only one measurable point, point structured light measurement needs to scan the object to be measured point by point, and the measurement time is long; it may not be able to meet the requirement that the drone detect the ground form in real time.
  • For line structured light, as shown in FIG. 5B, the point beam is replaced by a line beam, and the image capture device can obtain the spatial position information of all points on the light stripe from a single image. When the line beam is projected onto an object whose surface is uneven, the light stripe is distorted or discontinuous; after a specific calibration, the three-dimensional coordinates in the UAV coordinate system of each point on the light stripe can be obtained. This increases the amount of information obtained in one shot without increasing complexity, so the speed at which the drone detects the ground form can be increased.
  • For multi-line structured light, as shown in FIG. 5C, a fringe projector, such as a grating, is added to the optical path, so the light projected onto the surface of the object to be measured becomes multiple stripes, and multiple stripes can be captured in one image collected by the image capture device.
  • For grid structured light, as shown in FIG. 5D, a grid mask is added to the optical path, so the light projected onto the surface of the object to be measured becomes a grid; one image collected by the image capture device then captures the three-dimensional position information of all points in the region of the object surface covered by the grid.
  • In step S406, the image capture device 34 collects the structured light image modulated on the ground P.
  • A modulated structured light image is formed on the ground P because the ground P may be undulating or uneven.
  • The structured light image is collected by the image capture device 34 at another location, yielding the modulated structured light image, i.e., a distorted image of the structured light, as shown in FIG. 6.
  • The degree of distortion of the light stripe in the structured light image depends on the relative positional relationship between the projection device 32 and the image capture device 34 and on the form of the ground.
  • the morphology of the ground P may be determined based on the structured light image.
  • the step of determining the morphology of the ground P based on the structured light image may include the following steps S408 and S410.
  • In step S408, the three-dimensional information is solved.
  • In some embodiments, solving the three-dimensional information includes calculating, based on the structured light image and the computational model for three-dimensional reconstruction, the three-dimensional coordinates in the UAV coordinate system of the point on the ground (P1' in FIG. 7) corresponding to a point in the structured light image (P1 in FIG. 6). For example, the calculation may be based on the structured light image collected in step S406, the image capture device parameters of the image capture device 34 calibrated in step S402, and the calibrated positional relationship between the line structured light plane projected by the projection device 32 and the image capture device 34.
  • In step S410, the three-dimensional form of the ground is reconstructed.
  • In some embodiments, reconstructing the three-dimensional form of the ground comprises determining the three-dimensional form of the ground based on the three-dimensional coordinates of a series of points on the ground in the UAV coordinate system.
  • the method of detecting a ground form by a drone based on structured light 3D vision measurement techniques can further include the step of processing an image.
  • Image processing can reduce the noise signal in the image, increase the signal-to-noise ratio in the digital image data, and suppress the background. This can facilitate subsequent data processing and meet the requirements of real-time.
  • the structured light image acquired in step S406 may be processed by methods such as image enhancement, image smoothing, and image sharpening.
  • The representation of an image is in fact a two-dimensional matrix: each pixel in the image maps to an element of the matrix, and if the image is a grayscale image, each element corresponds to the pixel value at that point. As shown in FIG. 8, a Cartesian coordinate system O0uv is defined, generally with the top-left corner O0 of the image as the origin; this is the image pixel coordinate system. The row and column indices of a pixel in the array are its coordinates, i.e., (u, v) are the coordinates, in pixels, in the image pixel coordinate system.
  • On the basis of the image pixel coordinate system, the imaging plane coordinate system O1xy is set as shown in FIG. 8. Since the pixel coordinate system is not associated with physical length, an imaging coordinate system in units of actual length is to be established. Let dx and dy be the physical pitch of a unit pixel along the x and y axes, respectively. The origin of the imaging coordinate system is the intersection O1(u0, v0) of the imaging plane with the central optical axis of the image capture device, and the relationship between coordinates in the image pixel coordinate system and coordinates in the imaging coordinate system can be expressed as

$$u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0 \quad (1)$$

so that the conversion between the image pixel coordinate system and the imaging coordinate system is

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (2)$$
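As a small illustration, a sketch of the conversion of equations (1) and (2); the function name is assumed for the example:

```python
def pixel_to_imaging(u, v, u0, v0, dx, dy):
    """Convert pixel coordinates (u, v) to imaging-plane coordinates (x, y)
    in physical units, inverting u = x/dx + u0 and v = y/dy + v0."""
    return (u - u0) * dx, (v - v0) * dy
```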
  • The optical center OC of the image capture device is set as the origin, the ZC axis corresponds to the optical axis of the image capture device, and XC and YC are parallel to the x and y axes of the imaging coordinate system, respectively. The focal length f represents the distance between the optical center of the image capture device and the imaging plane; the image capture device coordinate system OCXCYCZC is thereby established, as shown in FIG. 9.
  • the position of the image acquisition device in the space is arbitrary. It is necessary to set a certain standard to describe its posture uniformly. Therefore, we introduce the aircraft coordinate system or the UAV coordinate system as the final benchmark for all coordinate systems.
  • The UAV coordinate system OWXWYWZW is established as shown in FIG. 9.
  • The imaging model of the image capture device can be approximated as a pinhole imaging model, as shown in FIG. 10. Suppose the coordinates of any point in space in the image capture device coordinate system are P(XC, YC, ZC) and the coordinates of its image point on the image plane, in the imaging coordinate system, are P'(x, y), with f denoting the focal length of the image capture device. From the projection relationship and the similar triangle theorem,

$$x = f\frac{X_C}{Z_C}, \qquad y = f\frac{Y_C}{Z_C} \quad (3)$$

or, in matrix form,

$$Z_C \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} \quad (4)$$

  • Formula (4) above relates coordinates in the image capture device coordinate system to coordinates in the imaging coordinate system. To convert coordinates in any imaging coordinate system to the UAV coordinate system, the relationship between the image capture device coordinate system and the UAV coordinate system, i.e., the pose of the image capture device in the UAV coordinate system, must also be known.
  • The pose of the image capture device in the UAV coordinate system comprises its placement angle and the distance of its position from the origin. From the viewpoint of matrix transformations, an orthogonal rotation transformation matrix R and a translation transformation matrix T can be used to represent the positional relationship between the two:

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + T \quad (5)$$

Combining this with equations (2) and (4) gives

$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \quad (7)$$
  • In equation (7), u0 and v0 represent the coordinates, in the image pixel coordinate system, of the intersection between the optical axis of the image capture device and the image plane; fx = f/dx and fy = f/dy are expressed in pixels and represent the scale factors of the u axis and the v axis, respectively.
  • All parameters of the matrix

$$A = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

are related to the internal structure of the image capture device, so it is called the internal parameter matrix of the image capture device; for convenience, it can be recorded as matrix A.
  • The matrix [R T] is related only to the pose of the image capture device in the UAV coordinate system, so it is called the external parameter matrix of the image capture device.
  • A point on the light stripe not only satisfies the projection transformation of the image capture device but also lies on the plane of the line structured light. With (u, v) known, equation (7) gives three equations in the four unknowns ZC and (XW, YW, ZW), so one more constraint is needed. Let the plane equation of the structured light plane be aXW + bYW + cZW + d = 0, where a, b, c, and d respectively represent the coefficients of the plane equation; combining it with the three equations obtained from (7), the coordinates (XW, YW, ZW) in the UAV coordinate system are the solution of the linear system

$$\begin{bmatrix} A\,[R \ \ T] \\ a \ \ b \ \ c \ \ d \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = \begin{bmatrix} u Z_C \\ v Z_C \\ Z_C \\ 0 \end{bmatrix} \quad (8)$$

Once the parameters have been calibrated, equation (8) can be used to compute the coordinates (XW, YW, ZW) in the UAV coordinate system from the coordinates (u, v) in the image pixel coordinate system, thereby achieving three-dimensional reconstruction.
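As an illustration, the following is a minimal sketch (ours, not the patent's implementation) of solving the linear system of equation (8) numerically with NumPy, given the calibrated parameters:

```python
import numpy as np

def pixel_to_uav(u, v, A, R, T, plane):
    """Back-project pixel (u, v) onto the calibrated light plane.
    A: 3x3 internal parameter matrix; R (3x3), T (3,): pose of the image
    capture device in the UAV frame; plane: (a, b, c, d) in the UAV frame.
    Returns (X_W, Y_W, Z_W) in the UAV coordinate system."""
    a, b, c, d = plane
    M = A @ np.hstack([R, np.reshape(T, (3, 1))])    # 3x4 projection matrix
    # Unknowns: X_W, Y_W, Z_W, Z_C. Equation (7): M @ [P_W; 1] = Z_C * [u, v, 1].
    lhs = np.zeros((4, 4))
    lhs[:3, :3] = M[:, :3]
    lhs[:3, 3] = -np.array([u, v, 1.0])              # move the Z_C terms left
    lhs[3, :3] = [a, b, c]                           # plane: a*X + b*Y + c*Z = -d
    rhs = np.array([-M[0, 3], -M[1, 3], -M[2, 3], -d])
    X_W, Y_W, Z_W, _ = np.linalg.solve(lhs, rhs)
    return X_W, Y_W, Z_W
```

Applying this to every pixel on the detected light stripe yields the series of ground points whose coordinates describe the ground form.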
  • It should be noted that the calibration method described below is only an exemplary method proposed to explain how the parameters of the above computational model may be calibrated; the present disclosure is not limited to this calibration method, and those skilled in the art should understand that the disclosed embodiments may also calibrate the parameters of the computational model for three-dimensional reconstruction using any other suitable calibration method.
  • In one example, the image capture device is calibrated against a 2D planar target using the Zhang Zhengyou plane calibration method.
  • Suppose a point has homogeneous coordinates PW(xW, yW, zW, 1) in the UAV coordinate system and homogeneous coordinates p(u, v, 1) in the image pixel coordinate system; then, according to formula (7),

$$s\,p = A\,[R \ \ T]\,P_W \quad (9)$$

where s is a depth factor, A is the internal parameter matrix, R is the 3×3 rotation matrix, and T is the 3×1 translation matrix. Writing the homography from the target plane to the image as H = [h1 h2 h3] and exploiting the orthonormality of R yields, for each target image, two linear constraints on the symmetric matrix B = A^(-T)A^(-1), represented by the six-vector b. Stacking the constraints from n images of the target at different positions gives a homogeneous system V·b = 0, where V is a 2n×6 matrix.
  • When n > 3, the solution for b can be uniquely determined: it is the eigenvector corresponding to the minimum eigenvalue of the matrix V^T V.
  • After b is found, the matrix A is solved according to the Cholesky matrix decomposition algorithm, the matrix B is then constructed, and the above internal and external parameters can be further solved.
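For illustration, a minimal sketch of this calibration using OpenCV's implementation of Zhang's method; the chessboard geometry and square size are assumed values for the example:

```python
import cv2
import numpy as np

def calibrate_camera(images, board=(9, 6), square=0.025):
    """Calibrate from chessboard images; returns the internal matrix A,
    distortion coefficients, and per-view external parameters (R_i, T_i)."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    return A, dist, rvecs, tvecs
```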
  • For the line structured light, the plane equation can be fitted from the coordinates of several known feature points in the UAV coordinate system. However, since the structured light equation is unknown at this stage, the problem can be converted into solving the structured light plane equation in the image capture device coordinate system. The coordinates of feature points determined in the image capture device coordinate system are then converted into coordinates in the UAV coordinate system, and fitting those points yields the plane equation of the line structured light in the UAV coordinate system.
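A minimal sketch of the plane fit (least squares via SVD); `points` is an Nx3 array of feature-point coordinates already expressed in the target frame:

```python
import numpy as np

def fit_plane(points):
    """Fit a*x + b*y + c*z + d = 0 to an Nx3 point array in the least-squares
    sense; the normal is the singular vector of the centered points that has
    the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    a, b, c = vt[-1]
    d = -vt[-1] @ centroid
    return a, b, c, d
```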
  • steps other than steps S404, S406 above may be performed by the data processing device 36.
  • By having the projection device emit predetermined structured light, collecting the structured light image, and performing three-dimensional reconstruction, the points in the structured light image can be accurately matched with the points on the ground; the three-dimensional form of the ground can thus be determined accurately, and the accuracy and reliability of ground form detection can be improved. Further, whether the ground is flat can be judged, which is advantageous for accurately determining whether the ground is suitable for landing.
  • a drone autonomous landing method based on structured light 3D vision measurement technology is also provided.
  • the UAV is equipped with a projection device for projecting structured light and an image capture device for acquiring images.
  • the projection device and the image capture device are separated by a certain distance.
  • FIG. 11A is a flowchart of the drone autonomous landing method. Referring to FIG. 11A, the autonomous landing method may include the following steps:
  • S1102. The projection device projects predetermined structured light onto the ground on which the drone is to land; in some embodiments, the predetermined structured light may be line structured light;
  • S1104. The image capture device collects the structured light image modulated on the ground;
  • S1106. Based on the structured light image and the computational model for three-dimensional reconstruction, the three-dimensional coordinates in the UAV coordinate system of the points on the ground corresponding to points in the structured light image are calculated;
  • S1108. The form of the ground is determined from the three-dimensional coordinates of a series of points on the ground in the UAV coordinate system; and
  • S1110. When the form of the ground does not satisfy a predetermined flatness requirement, the drone is prohibited from landing on the ground, and/or the form of adjacent ground is determined autonomously.
  • the autonomous landing method may further include landing the drone on the ground when the form of the ground meets a predetermined flatness requirement.
  • In some embodiments, when the form of the ground does not satisfy the predetermined flatness requirement, the drone may be prohibited from landing on the ground, and/or the drone may be controlled to fly above adjacent ground; the above steps S1102 to S1108 are then repeated for the adjacent ground until ground satisfying the predetermined flatness requirement is found, whereupon the drone is controlled to land autonomously on the ground that satisfies the flatness requirement.
  • the prohibiting the drone from landing on the ground may include controlling the drone to hover above the ground.
  • Referring to FIG. 11B, the UAV autonomous landing method may include the following steps:
  • S1102. The projection device projects predetermined structured light onto the ground on which the drone is to land; in some embodiments, the predetermined structured light may be line structured light;
  • S1104. The image capture device collects the structured light image modulated on the ground;
  • S1106. The three-dimensional coordinates in the UAV coordinate system of the points on the ground corresponding to points in the structured light image are calculated;
  • S1108. The form of the ground is determined from those three-dimensional coordinates;
  • S1109. Whether the determined form of the ground satisfies the predetermined flatness requirement is judged; when it does, step S1112 is performed, i.e., the drone is controlled to land autonomously on the ground; otherwise, the following step S1114 and/or step S1116 is performed.
  • In step S1114, the drone is prohibited from landing on the ground, for example by controlling the drone to hover above the ground.
  • In step S1116, the drone is controlled to fly above adjacent ground.
  • The above steps S1102 to S1108 may then be repeated to determine the form of the adjacent ground and to judge whether it satisfies the predetermined flatness requirement, until ground satisfying the predetermined flatness requirement is found; the drone is then controlled to land on that ground.
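A minimal control-flow sketch of the landing logic of FIGS. 11A and 11B; all helper names here, such as `reconstruct_ground_points` and the `drone` methods, are hypothetical, and `is_flat` is sketched after the flatness requirement below:

```python
def autonomous_landing(drone):
    while True:
        drone.projector.project_line_structured_light()   # S1102
        image = drone.camera.capture()                    # S1104
        points_w = reconstruct_ground_points(image)       # S1106, via eq. (8)
        if is_flat(points_w):                             # S1108 / S1109
            drone.land()                                  # S1112
            return
        drone.hover()                                     # S1114
        drone.fly_to_adjacent_ground()                    # S1116, then repeat
```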
  • the autonomous landing method can further include: calibrating the projection device and the image capture device to determine parameters of the computational model.
  • In one example, calibrating the projection device and the image capture device to determine the parameters of the computational model may include: calibrating internal parameters and/or external parameters of the image capture device; calibrating the positional relationship between the structured light projected by the projection device and the image capture device; and determining the computational model based on the internal parameters, the external parameters, and the positional relationship.
  • For example, the internal parameters include parameters related to the characteristics of the image capture device itself, and/or the external parameters include parameters of the image capture device in the UAV coordinate system.
  • In some embodiments, the internal parameters include a focal length, a pixel size, and/or lens distortion parameters of the image capture device, and the external parameters include a spatial position and/or rotational direction of the image capture device relative to the drone.
  • For example, the positional relationship between the structured light and the image capture device includes the plane equation of the line structured light plane in the image capture device coordinate system.
  • In some embodiments, the computational model may be the model represented by formula (8) above.
  • In these embodiments, calibrating the internal parameters of the image capture device includes calibrating u0, v0, f, dx, and dy; and/or calibrating the external parameters of the image capture device includes calibrating the translation transformation matrix T and the orthogonal rotation transformation matrix R; and/or calibrating the positional relationship between the structured light projected by the projection device and the image capture device includes calibrating the coefficients a, b, c, and d.
  • the autonomous landing method may further include processing the structured light image after the image acquisition device acquires the structured light image modulated on the ground.
  • Step S1102 may include: when the height of the drone is less than a predetermined height, the projection device projects the predetermined structured light onto the ground on which the drone is to land.
  • the predetermined height may be a height within a range of 2 to 3 meters from the ground.
  • The predetermined flatness requirement may include: the maximum difference among the Z coordinate values of the three-dimensional coordinates, in the UAV coordinate system, of the series of points on the ground is less than a predetermined threshold, for example, 10 cm. It should be understood that the predetermined threshold may be related to the drone model and/or the landing gear used by the drone.
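A minimal sketch of that flatness test; the 0.10 m default mirrors the 10 cm example, and in practice the threshold would depend on the drone model and landing gear:

```python
import numpy as np

def is_flat(points_w, threshold_m=0.10):
    """points_w: Nx3 array of ground points in the UAV coordinate system.
    The ground passes when the spread of Z values stays below the threshold."""
    z = np.asarray(points_w)[:, 2]
    return float(z.max() - z.min()) < threshold_m
```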
  • According to an embodiment of the present disclosure, a drone that can land autonomously when returning is also provided.
  • the drone 1200 can include:
  • a projection device 1202 mounted on the drone for projecting predetermined structured light toward the ground
  • An image capture device 1204 mounted on the drone for acquiring a structured light image modulated on the ground, and
  • the control device 1206 can be configured to determine, at least based on the structured light image, a form of the ground.
  • In some embodiments, determining the form of the ground based on the structured light image may include: calculating, based on the structured light image and the computational model for three-dimensional reconstruction, the three-dimensional coordinates in the UAV coordinate system of the points on the ground corresponding to points in the structured light image; and determining the form of the ground from the three-dimensional coordinates of a series of points on the ground in the UAV coordinate system.
  • In the drone autonomous landing method and drone based on structured light 3D vision measurement technology, because the points in the structured light image can be accurately matched with the points on the ground, the three-dimensional form of the ground can be determined reliably, so that whether the ground is flat can be judged accurately. In this way, the drone can be controlled precisely to land on ground of high flatness, improving the safety of the drone's autonomous return and landing.
  • steps of the autonomous landing method other than steps S1102, S1104 described above may be performed by the control device 1206.
  • FIG. 13 is a block diagram showing an example hardware arrangement 1300 of the data processing device 36 or the control device 1206, in accordance with an embodiment of the disclosure.
  • Hardware arrangement 1300 can include a processor 1306 (eg, a central processing unit (CPU), a digital signal processor (DSP), a microcontroller unit (MCU), etc.).
  • Processor 1306 can be a single processing unit or a plurality of processing units for performing different acts of the flows described herein.
  • Hardware arrangement 1300 may also include an input unit 1302 for receiving signals from other entities and an output unit 1304 for providing signals to other entities.
  • Input unit 1302 and output unit 1304 can be arranged as a single entity or as separate entities.
  • hardware arrangement 1300 can include at least one readable storage medium 1308 in the form of a non-volatile or volatile memory, such as an electrically erasable programmable read only memory (EEPROM), flash memory, and/or a hard drive.
  • The readable storage medium 1308 includes computer program instructions 1310, which comprise code/computer-readable instructions that, when executed by the processor 1306 in the hardware arrangement 1300, cause the hardware arrangement 1300, and/or the data processing device or control device that includes the hardware arrangement 1300, to perform, for example, the flows of the methods described above and any variations thereof.
  • The computer program instructions 1310 can be configured as computer program instruction code with, for example, an architecture of computer program instruction modules 1310A-1310B.
  • In an example embodiment in which the hardware arrangement 1300 is used in, e.g., the data processing device, the code in the computer program instructions of the arrangement 1300 includes: a module 1310A for calculating, based on the structured light image and the computational model for three-dimensional reconstruction, the three-dimensional coordinates in the UAV coordinate system of the points on the ground corresponding to points in the structured light image.
  • The code in the computer program instructions further includes: a module 1310B for determining the form of the ground based on the three-dimensional coordinates of the series of points on the ground in the UAV coordinate system.
  • Although the code means in the embodiment disclosed above in conjunction with FIG. 13 are implemented as computer program instruction modules that, when executed in the processor 1306, cause the hardware arrangement 1300 to perform the flows or actions of the above-described methods, in alternative embodiments at least one of the code means may be implemented at least partly as hardware circuits.
  • the processor may be a single CPU (Central Processing Unit), but may also include two or more processing units.
  • For example, the processor can include a general-purpose microprocessor, an instruction set processor, and/or a related chipset, and/or a special-purpose microprocessor (e.g., an application-specific integrated circuit (ASIC)).
  • the processor may also include an onboard memory for caching purposes.
  • The computer program instructions may be carried by a computer program product connected to the processor.
  • The computer program product can comprise a computer-readable medium on which the computer program instructions are stored.
  • For example, the computer program product can be a flash memory, a random access memory (RAM), a read-only memory (ROM), or an EEPROM, and in alternative embodiments the computer program instruction modules described above can be distributed, in the form of memory within the UE, among different computer program products.
  • Functions described herein as being implemented by pure hardware, pure software, and/or firmware may also be implemented by means of dedicated hardware, or by a combination of general-purpose hardware and software, and the like. For example, functions described as being implemented by dedicated hardware (e.g., a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) may instead be implemented by general-purpose hardware (e.g., a central processing unit (CPU) or a digital signal processor (DSP)) combined with software, and vice versa.
  • In the above embodiments, the three-dimensional form of the ground is determined mainly based on the structured light image collected by the image capture device and the calibrated computational model for three-dimensional reconstruction; however, those skilled in the art should understand that this is an exemplary implementation presented to elaborate the inventive concept of the present disclosure and is not the only implementation. In other embodiments of the present disclosure, other implementations may be employed to determine the form of the ground based on the structured light image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A drone-based ground form detection method and system, a drone autonomous landing method, a drone, and an electronic device for a drone. A projection device (32) for projecting structured light and an image capture device (34) for acquiring images are mounted on the drone (30). The method includes: (S1102) the projection device (32) projecting predetermined structured light onto the ground on which the drone (30) is to land; (S1104) the image capture device (34) collecting the structured light image modulated on the ground; and (S1106, S1108) determining the form of the ground based on the structured light image.

Description

Ground form detection method and system, drone landing method, and drone
Copyright Notice
The disclosure of this patent document contains material subject to copyright protection. The copyright belongs to the copyright owner. The copyright owner does not object to the reproduction by anyone of the patent document or the patent disclosure as it appears in the official records and files of the Patent and Trademark Office.
Technical Field
The present disclosure relates to the field of aircraft technology, and more particularly to a drone-based ground form detection method and system, a drone autonomous landing method, and a drone.
Background
With the development of technology, a wide variety of aircraft have been built to meet different user needs. Aircraft with imaging functions, such as rotary-wing drones, have been widely used to perform aerial photography, geographic surveying and mapping, and other work.
When returning to land, an existing drone is generally operated by an operator, via a controller, to descend to a designated position. With the continuous development of technology, drones capable of autonomous return and landing have gradually emerged. Current autonomous return-and-landing methods mainly use GPS satellite positioning, assisted by modules such as an inertial measurement unit and a compass.
However, current autonomous return-and-landing methods cannot detect the form of the ground on which the drone is to land, and in particular cannot accurately detect whether that ground is flat; as a result, under the current autonomous return-and-landing approach, the drone may land on ground that is unsuitable for landing.
Summary
According to a first aspect of the present disclosure, a drone-based ground form detection method is provided, wherein a projection device for projecting structured light and an image capture device for acquiring images are mounted on the drone. The method includes: the projection device projecting predetermined structured light onto the ground on which the drone is to land; the image capture device collecting the structured light image modulated on the ground; and determining the form of the ground based on the structured light image.
According to a second aspect of the present disclosure, a drone-based ground form detection system is provided, including: a projection device mounted on the drone for projecting predetermined structured light toward the ground; an image capture device mounted on the drone for collecting the structured light image modulated on the ground; and a data processing device configured to determine the form of the ground based on the structured light image.
According to a third aspect of the present disclosure, a drone autonomous landing method is provided, wherein a projection device for projecting structured light and an image capture device for acquiring images are mounted on the drone. The method includes: the projection device projecting predetermined structured light onto the ground on which the drone is to land; the image capture device collecting the structured light image modulated on the ground; determining the form of the ground based on the structured light image; and, when the form of the ground does not satisfy a predetermined flatness requirement, prohibiting the drone from landing on the ground and/or autonomously determining the form of adjacent ground.
According to a fourth aspect of the present disclosure, a drone is provided, including: a projection device mounted on the drone for projecting predetermined structured light toward the ground; an image capture device mounted on the drone for collecting the structured light image modulated on the ground; and a control device configured to: determine the form of the ground based on the structured light image; and, when the form of the ground does not satisfy a predetermined flatness requirement, prohibit the drone from landing on the ground and/or autonomously determine the form of adjacent ground.
According to a fifth aspect of the present disclosure, an electronic device for a drone is provided, including: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to: acquire a structured light image; and determine the form of the ground based on the structured light image.
In embodiments according to the above aspects of the present disclosure, the form of the ground can be determined accurately based on the structured light image, improving the accuracy and reliability of ground form detection.
Brief Description of the Drawings
For a more complete understanding of the embodiments of the present disclosure and their advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a system for detecting the ground form by a drone based on binocular stereo vision measurement technology according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of a method for detecting the ground form by a drone based on binocular stereo vision measurement technology according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a system for detecting the ground form by a drone based on structured light 3D vision measurement technology according to another embodiment of the present disclosure;
FIG. 4 is a flowchart of a method for detecting the ground form by a drone based on structured light 3D vision measurement technology according to another embodiment of the present disclosure;
FIGS. 5A-5D are schematic diagrams respectively showing point structured light, line structured light, multi-line structured light, and grid structured light;
FIG. 6 is a schematic diagram of a modulated structured light image according to an exemplary embodiment of the present disclosure;
FIG. 7 is a schematic diagram showing the ground form detection system of FIG. 3 detecting the ground;
FIG. 8 is a schematic diagram of the relationship between the image pixel coordinate system and the imaging coordinate system according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of the relationship between the image capture device coordinate system and the UAV coordinate system according to an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of the imaging model of the image capture device according to an embodiment of the present disclosure;
FIGS. 11A and 11B are flowcharts of a drone autonomous landing method based on structured light 3D vision measurement technology according to an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of a drone capable of autonomous landing according to an embodiment of the present disclosure; and
FIG. 13 is a block diagram showing an example hardware arrangement of a data processing device or a control device according to an embodiment of the present disclosure.
The drawings are not necessarily drawn to scale but are shown in a schematic manner that does not affect the reader's understanding.
Detailed Description
Other aspects, advantages, and salient features of the present disclosure will become apparent to those skilled in the art from the following detailed description of exemplary embodiments of the present disclosure taken in conjunction with the accompanying drawings.
In the present disclosure, the terms "include" and "contain" and their derivatives mean inclusion without limitation.
In the present disclosure, the following embodiments used to describe the principles of the present disclosure are illustrative only and should not be construed in any way as limiting the scope of the disclosure. The following description with reference to the drawings is intended to assist a comprehensive understanding of exemplary embodiments of the present disclosure as defined by the claims and their equivalents. The description includes various specific details to assist understanding, but these details are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Moreover, descriptions of well-known functions and structures are omitted for clarity and conciseness. Throughout the drawings, the same reference numerals are used for the same or similar functions and operations. In addition, although solutions with different features may be described in different embodiments, those skilled in the art should appreciate that all or some features of different embodiments may be combined to form new embodiments without departing from the spirit and scope of the present disclosure.
In the present disclosure, in establishing the computational model for three-dimensional reconstruction and calibrating its parameters, five coordinate systems are established for convenience of explanation: the UAV coordinate system (also called the "aircraft coordinate system"), the image capture device coordinate system, the projection device coordinate system, the image pixel coordinate system, and the imaging coordinate system. It should be noted that, in some embodiments of the present disclosure, a camera or camera module is taken as an example of the image capture device; accordingly, the image capture device coordinate system may be described as a camera coordinate system. Moreover, in some embodiments, a projector is taken as an example of the projection device; accordingly, the projection device coordinate system may be described as a projector coordinate system. The aircraft coordinate system, or UAV coordinate system, takes the center of the drone as its origin and the axis perpendicular to the ground as its Z axis, and represents the relative spatial position between any point in space (for example, a point on the three-dimensional ground) and the drone. The image capture device coordinate system is centered on the optical center of the image capture device, with the main optical axis of the device as the Z axis, and represents the relative spatial position between any point in space and the image capture device. The projection device coordinate system is centered on the optical center of the projection device, with the projected light plane as the plane of the X and Y axes, and represents the relative spatial position between any point in space and the projection device. The image pixel coordinate system is an image coordinate system in which the row and column indices of a pixel in the array serve as its coordinates. The imaging coordinate system is also an image coordinate system, usually expressed in millimeters; since the image pixel coordinate system is not associated with physical length, an imaging coordinate system in units of actual length is established. Each coordinate system is further described below in conjunction with the drawings.
According to an exemplary embodiment of the present disclosure, when the drone returns to land, the ground form may be detected based on binocular stereo vision measurement technology to detect whether the ground is flat and thus determine whether it is suitable for drone landing.
FIG. 1 is a schematic diagram of a system for detecting the ground form by a drone based on binocular stereo vision measurement technology according to an embodiment of the present disclosure. As shown in FIG. 1, two image capture devices 12 and 14, each of which may be, for example, a video camera, a camera module, a digital camera, or any other device having an imaging or photographing function, are mounted on the drone 10 and spaced apart by a predetermined distance.
FIG. 2 is a flowchart of a method for detecting the ground form by a drone based on binocular stereo vision measurement technology according to an embodiment of the present disclosure. In the embodiment shown in FIGS. 1 and 2, the binocular stereo vision measurement technology can use the two image capture devices 12 and 14 to simulate human eyes for depth measurement, thereby obtaining a depth map of the ground.
In step S202, the image capture devices 12 and 14 are calibrated.
In this embodiment, the two image capture devices 12 and 14 are both cameras. In this step, the internal parameters of the image capture devices 12 and 14 can be calibrated, and their external parameters can also be calibrated. In some embodiments, the internal parameters may include parameters related to the characteristics of the image capture devices 12 and 14 themselves, such as focal length, pixel size, and/or lens distortion parameters. The external parameters may include parameters of the image capture devices 12 and 14 in the UAV coordinate system, such as their spatial position and/or rotational direction relative to the drone 10.
In step S204, images of the same position on the measured ground are acquired from different orientations by the two image capture devices 12 and 14, respectively.
In some embodiments, referring to FIG. 1, the left image capture device 12 and the right image capture device 14 record images of the same position P on the measured ground from different orientations, yielding two two-dimensional images.
In step S206, feature points are extracted.
In some embodiments, methods of extracting feature points include edge extraction, interest operators, the minimum gray-level difference method, and the like. In some embodiments, the acquired images may also be pre-processed before feature points are extracted, to make the feature points in the images more prominent.
In step S208, the feature points in the two two-dimensional images are matched.
In some embodiments, matching the feature points in the two two-dimensional images includes finding, for each feature point in the image acquired by the left image capture device 12, the corresponding point in the image acquired by the right image capture device 14. For example, depending on the matching primitive, matching algorithms such as region matching, feature matching, or phase matching may be used.
In step S210, based on the internal and external parameters of the two image capture devices obtained by calibration, the three-dimensional coordinates of a matched point in the UAV coordinate system are calculated.
In some embodiments, after the steps of calibrating the image capture devices, acquiring images, extracting feature points, and matching feature points are completed, the corresponding homogeneous coordinates of any measured point in the two images and the parameter matrices of the two image capture devices are obtained, from which the spatial point can be three-dimensionally reconstructed; for example, the position of the spatial point relative to the drone 10, i.e., its three-dimensional coordinates in the UAV coordinate system, is calculated.
In embodiments of the present disclosure, the form of the ground can be determined from the calculated three-dimensional coordinates of a series of points in the UAV coordinate system, so as to determine whether the ground is suitable for landing.
In the method for detecting the ground form by a drone based on binocular stereo vision measurement technology according to embodiments of the present disclosure, the "matching" step is a relatively critical step. When the drone returns to land, the form of the ground to be landed on may be complicated; for example, the ground may have no texture or may have repeated texture. In such cases, because there are no sufficiently distinct feature points, matching the feature points in the two two-dimensional images becomes more difficult, which makes it extremely difficult to accurately detect whether the ground to be landed on is flat and to accurately determine whether it is suitable for landing.
To detect the ground form more accurately and thus determine more reliably whether the ground is suitable for landing, embodiments of the present disclosure further provide a system and method for detecting the ground form by a drone based on structured light 3D vision measurement technology. Structured light 3D vision measurement is an active optical measurement technology: a structured light projection device projects a controllable light spot, light stripe, or light plane onto the surface of the object to be measured, an image sensor (for example, an image capture device) collects images, and the three-dimensional coordinates of the measured object are calculated by triangulation using the geometric relationships of the system.
FIG. 3 is a schematic diagram of a system for detecting the ground form by a drone based on structured light 3D vision measurement technology according to another embodiment of the present disclosure. As shown in FIG. 3, a projection device 32 and an image capture device 34, separated by a certain distance, are mounted on the drone 30. The projection device 32 is used to project predetermined structured light onto the ground to be landed on and may include a projector. The image capture device 34 is used to collect the modulated structured light image on the ground and may include a video camera, a camera module, a camera such as a digital camera, or any other device having an imaging or photographing function. In some embodiments, the projection device 32 can project light of various wavelength ranges, such as visible light and infrared light, onto the ground; correspondingly, the image capture device 34 can be an image capture device that senses light of various wavelength ranges such as visible light and infrared light. The wavelength range of the light projected by the projection device 32 needs to match the wavelength range of the light sensed by the image capture device 34; for example, the projection device 32 can be a projector that projects infrared light, and the image capture device 34 can be an infrared camera. The drone 30 includes a data processing device 36 for processing the images collected by the image capture device 34, calculating three-dimensional information, and performing three-dimensional reconstruction.
In some embodiments, the flash frequency of the projection device 32 can be the same or substantially the same as the capture frequency of the image capture device 34. In this way, the images obtained by the image capture device 34 do not flicker or change color, which helps the image capture device acquire clear and stable images.
FIG. 4 is a flowchart of a method for detecting the ground form by a drone based on structured light 3D vision measurement technology according to another embodiment of the present disclosure.
In step S402, the projection device 32 and the image capture device 34 are calibrated.
In this embodiment, the projection device 32 may be a projector, and the image capture device 34 may be a camera.
In some embodiments, the internal parameters of the image capture device 34 can be calibrated, and its external parameters can also be calibrated. The internal parameters may include parameters related to the characteristics of the image capture device itself, such as its focal length, pixel size, and lens distortion parameters. The external parameters may include parameters of the image capture device in the UAV coordinate system, such as its position and rotational direction. In some embodiments, calibrating the projection device 32 may include calibrating the positional relationship between the structured light plane of the structured light projected by the projection device and the image capture device 34, for example, the plane equation of the structured light plane in the image capture device coordinate system. By calibrating the above parameters of the projection device 32 and the image capture device 34, a mapping can be established between a predetermined three-dimensional coordinate system, such as the UAV coordinate system, and the image capture device coordinate system. The calibrated parameters and the detailed calibration steps are described in more detail below.
In step S404, the projection device 32 projects the predetermined structured light onto the ground P to be landed on below the drone 30.
Depending on its form, structured light can be divided into point structured light, line structured light, multi-line structured light, and grid structured light.
For point structured light, as shown in FIG. 5A, a beam of structured light is projected onto the surface of the object to be measured and produces a light spot on that surface. For example, the projection device 32 may be a laser head that projects point structured light. The optical axis of the image capture device and the beam intersect at the light spot in space; the line connecting the projection device and the image capture device forms a baseline, and the intersection point and the baseline form a triangle. The constraint relations of this triangle can be obtained by calibration, and the spatial position of the light spot in the UAV coordinate system can be uniquely determined. In this point structured light embodiment, one shot captures the coordinate information of only one measurable point on the object surface, so point structured light measurement requires scanning the object point by point; the measurement time is long and may not satisfy the requirement that the drone detect the ground form in real time.
For line structured light, as shown in FIG. 5B, the point beam is replaced by a line beam, and the image capture device can obtain the spatial position information of all points on the light stripe from a single image. Specifically, when the line beam is projected onto an object with an uneven surface, the light stripe is distorted or discontinuous; after a specific calibration, the three-dimensional coordinates in the UAV coordinate system of each point on the light stripe can be obtained. Using line structured light increases the amount of information obtained in one shot without adding complexity, so the speed at which the drone detects the ground form can be increased.
For multi-line structured light, as shown in FIG. 5C, a fringe projector, such as a grating, is added to the optical path, so the light projected onto the surface of the object to be measured becomes multiple stripes, and multiple stripes can be captured in one image collected by the image capture device.
For grid structured light, as shown in FIG. 5D, a grid mask is added to the optical path, so the light projected onto the surface of the object to be measured becomes a grid; one image collected by the image capture device then captures the three-dimensional position information of all points in the region of the object surface covered by the grid.
In the following description, unless otherwise stated, line structured light is used as an example to explain the embodiments of the present disclosure.
In step S406, the image capture device 34 collects the structured light image modulated on the ground P.
According to embodiments of the present disclosure, when the projection device 32 projects the line structured light onto the ground P, a modulated structured light image is formed on the ground P because the ground P may be undulating or uneven. This structured light image is collected by the image capture device 34 at another location, yielding the modulated structured light image, i.e., a distorted image of the structured light, as shown in FIG. 6. The degree of distortion of the light stripe in the structured light image depends on the relative positional relationship between the projection device 32 and the image capture device 34 and on the form of the ground.
After the structured light image is acquired, the form of the ground P can be determined based on it. According to some embodiments of the present disclosure, determining the form of the ground P based on the structured light image may include the following steps S408 and S410.
In step S408, the three-dimensional information is solved.
In some embodiments, solving the three-dimensional information includes calculating, based on the structured light image and the computational model for three-dimensional reconstruction, the three-dimensional coordinates in the UAV coordinate system of the point on the ground (P1' in FIG. 7) corresponding to a point in the structured light image (P1 in FIG. 6). For example, the calculation may be based on the structured light image collected in step S406, the image capture device parameters of the image capture device 34 calibrated in step S402, and the calibrated positional relationship between the line structured light plane projected by the projection device 32 and the image capture device 34.
In step S410, the three-dimensional form of the ground is reconstructed.
In some embodiments, reconstructing the three-dimensional form of the ground includes determining the three-dimensional form of the ground from the three-dimensional coordinates of a series of points on the ground in the UAV coordinate system.
In addition, in some embodiments, the method for detecting the ground form by a drone based on structured light 3D vision measurement technology may further include a step of processing the image. Image processing can reduce the noise in the image, increase the signal-to-noise ratio of the digital image data, and suppress the background, which facilitates subsequent data processing and better satisfies real-time requirements. For example, the structured light image collected in step S406 may be processed by methods such as image enhancement, image smoothing, and image sharpening.
The main steps of the method for detecting the ground form by a drone based on structured light 3D vision measurement technology according to embodiments of the present disclosure have been described above with reference to the drawings. In the following, the computational model for three-dimensional reconstruction that is calibrated and used in this method is described in more detail.
[Establishing the computational model for three-dimensional reconstruction]
First, the coordinate systems need to be established.
An image is in fact represented as a two-dimensional matrix: each pixel of the image maps to an element of the matrix, and if the image is a grayscale image, each element corresponds to the pixel value at that point of the image. As shown in FIG. 8, a Cartesian coordinate system O0uv is defined, generally with the top-left starting point O0 of the image as the origin; this coordinate system is the image pixel coordinate system. The row and column indices of a pixel in the array are the coordinates of that pixel, i.e., (u, v) are the coordinates, in pixels, in the image pixel coordinate system.
On the basis of the image pixel coordinate system, the imaging plane coordinate system O1xy is set as shown in FIG. 8. Since the pixel coordinate system is not associated with physical length, an imaging coordinate system in units of actual length is to be established. Let dx and dy be the physical pitch of a unit pixel along the x axis and the y axis, respectively. The origin of the imaging coordinate system is the intersection O1(u0, v0) of the imaging plane with the central optical axis of the image capture device; the relationship between coordinates in the image pixel coordinate system and coordinates in the imaging coordinate system can then be expressed as

$$u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0 \quad (1)$$

From the above definitions, the conversion between the image pixel coordinate system and the imaging coordinate system is obtained as

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (2)$$

The optical center position OC of the image capture device is set as the origin, the ZC axis corresponds to the optical axis of the image capture device, and XC and YC are parallel to the x and y axes of the imaging coordinate system, respectively. The focal length f represents the distance between the optical center of the image capture device and the imaging plane; the image capture device coordinate system OCXCYCZC is thereby established, as shown in FIG. 9.
The placement of the image capture device in space is arbitrary, and a standard must be set to describe its pose uniformly; therefore the aircraft coordinate system, or UAV coordinate system, is introduced as the final reference for all coordinate systems. The UAV coordinate system OWXWYWZW is established as shown in FIG. 9.
Second, the imaging model of the image capture device needs to be established.
The imaging model of the image capture device can be approximated as a pinhole imaging model, as shown in FIG. 10. Suppose the coordinates of any point in space in the image capture device coordinate system are P(XC, YC, ZC), and the coordinates of its image point on the image plane, in the imaging coordinate system, are P'(x, y), with f denoting the focal length of the image capture device. From the projection relationship and the similar triangle theorem, it follows that

$$x = f\frac{X_C}{Z_C}, \qquad y = f\frac{Y_C}{Z_C} \quad (3)$$

which, converted to matrix form, is

$$Z_C \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} \quad (4)$$

Formula (4) above relates coordinates in the image capture device coordinate system to coordinates in the imaging coordinate system. To convert coordinates in any imaging coordinate system to the UAV coordinate system, the relationship between the image capture device coordinate system and the UAV coordinate system, i.e., the placement of the image capture device in the UAV coordinate system, must also be known.
Next, the relative positional relationship between the image capture device and the drone needs to be established.
The positional transformation of the image capture device in the UAV coordinate system includes its placement angle and the distance of its position from the origin. From the viewpoint of matrix transformations, an orthogonal rotation transformation matrix R and a translation transformation matrix T can be used to represent the positional relationship between the two, i.e.,

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + T \quad (5)$$

where R is a 3×3 rotation matrix, written as

$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}$$

and T is the translation matrix

$$T = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}$$

This can be arranged into the following formula:

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \quad (6)$$

Combining formulas (2), (4), and (6) gives

$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \quad (7)$$

where u0 and v0 represent the coordinates, in the image pixel coordinate system, of the intersection between the optical axis of the image capture device and the image plane, and where

$$f_x = \frac{f}{d_x}, \qquad f_y = \frac{f}{d_y}$$

The units of fx and fy are pixels; they represent the scale factors of the u axis and the v axis, respectively. All parameters of the matrix

$$A = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

are related to the internal structure of the image capture device, so it is called the internal parameter matrix of the image capture device; for convenience of description, this matrix can be recorded as matrix A. The matrix [R T] is related only to the placement of the image capture device in the UAV coordinate system, so it is called the external parameter matrix of the image capture device.
Finally, the computational model for three-dimensional reconstruction is established.
From the final model derived from formula (7), to obtain coordinates in the UAV coordinate system from coordinates in the image, (u, v) is known while ZC and (XW, YW, ZW) are all unknown: with four unknowns and three equations, the corresponding coordinates (XW, YW, ZW) in the UAV coordinate system cannot be solved, so one more constraint is needed.
A point on the light stripe not only satisfies the projection transformation relationship of the image capture device but also lies on the plane of the line structured light. Therefore, determining the plane equation of the plane of the structured light and combining it with the three equations obtained from (7) allows the coordinates (XW, YW, ZW) in the UAV coordinate system to be found. Let the plane equation of the plane of the line structured light be aXW + bYW + cZW + d = 0, where a, b, c, and d respectively represent the coefficients of the plane equation; then (XW, YW, ZW) is obtained by solving

$$\begin{bmatrix} A\,[R \ \ T] \\ a \ \ b \ \ c \ \ d \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = \begin{bmatrix} u Z_C \\ v Z_C \\ Z_C \\ 0 \end{bmatrix} \quad (8)$$

With the parameters (for example, the matrix A, the matrix R, the matrix T, and the coefficients a, b, c, and d of the plane equation) calibrated, the coordinates (XW, YW, ZW) in the UAV coordinate system can be computed from the coordinates (u, v) in the image pixel coordinate system through equation (8), thereby achieving three-dimensional reconstruction.
[Calibration of the parameters of the computational model]
First, it should be noted that the calibration method described below is only an exemplary calibration method proposed to explain how the parameters of the above computational model can be calibrated; the present disclosure is not limited to this calibration method, and those skilled in the art should understand that embodiments of the present disclosure may also use any other suitable calibration method to calibrate the parameters of the computational model for three-dimensional reconstruction.
(1) Calibrating the internal and external parameters of the image capture device
In one example, the image capture device is calibrated against a 2D planar target using the Zhang Zhengyou plane calibration method. Suppose there is a point P in space whose homogeneous coordinates in the UAV coordinate system are PW(xW, yW, zW, 1) and whose homogeneous coordinates in the image pixel coordinate system are p(u, v, 1). From formula (7),

$$s\,p = A\,[R \ \ T]\,P_W \quad (9)$$

where s is a depth factor, A is the internal parameter matrix, R is the 3×3 rotation matrix, and T is the 3×1 translation matrix.
Suppose the 2D target lies in the xW-yW plane; then

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A\,[r_1 \ \ r_2 \ \ T] \begin{bmatrix} x_W \\ y_W \\ 1 \end{bmatrix} \quad (10)$$

Let H = [h1 h2 h3] and let λ be a constant factor; then

$$H = [h_1 \ \ h_2 \ \ h_3] = \lambda A\,[r_1 \ \ r_2 \ \ T] \quad (11)$$

From the orthonormality of R,

$$h_1^T A^{-T} A^{-1} h_2 = 0, \qquad h_1^T A^{-T} A^{-1} h_1 = h_2^T A^{-T} A^{-1} h_2 \quad (12)$$

Let

$$B = A^{-T} A^{-1} = \begin{bmatrix} B_{11} & B_{12} & B_{13} \\ B_{12} & B_{22} & B_{23} \\ B_{13} & B_{23} & B_{33} \end{bmatrix} \quad (13)$$

Since B is a symmetric matrix, it can be represented by the six-dimensional vector b = [B11 B12 B22 B13 B23 B33]^T. Writing the i-th column vector of the H matrix as hi = [hi1 hi2 hi3]^T, it follows that

$$h_i^T B h_j = v_{ij}^T\,b \quad (14)$$

where

$$v_{ij} = [h_{i1}h_{j1},\ h_{i1}h_{j2} + h_{i2}h_{j1},\ h_{i2}h_{j2},\ h_{i3}h_{j1} + h_{i1}h_{j3},\ h_{i3}h_{j2} + h_{i2}h_{j3},\ h_{i3}h_{j3}]^T \quad (15)$$

Using formula (14), formula (12) can be written as the homogeneous equations

$$\begin{bmatrix} v_{12}^T \\ (v_{11} - v_{22})^T \end{bmatrix} b = 0$$

For n images of the target at different positions, the following system of equations is obtained:

$$\begin{bmatrix} v_{12}^{(1)T} \\ (v_{11}^{(1)} - v_{22}^{(1)})^T \\ \vdots \\ v_{12}^{(n)T} \\ (v_{11}^{(n)} - v_{22}^{(n)})^T \end{bmatrix} b = 0 \quad (16)$$

abbreviated as

$$V \cdot b = 0 \quad (17)$$

In the above formula, V is a 2n×6 matrix. When n > 3, the solution for b can be uniquely determined: it is the eigenvector corresponding to the minimum eigenvalue of the matrix V^T V. After b is found, the matrix A is solved according to the Cholesky matrix decomposition algorithm, the matrix B is then constructed, and the above internal and external parameters can be further solved.
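As a small illustration, solving V b = 0 of equation (17) numerically (a sketch; `V` is the stacked 2n×6 matrix built from the homographies):

```python
import numpy as np

def solve_b(V):
    """Return b, defined up to scale, as the right singular vector of V for
    the smallest singular value (i.e., the eigenvector of V^T V with the
    smallest eigenvalue)."""
    _, _, vt = np.linalg.svd(V)
    return vt[-1]
```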
(2) Calibrating the plane equation of the line structured light
Calibrating the line structured light means solving the plane equation of the line structured light in the UAV coordinate system, whose expression is

$$a_W x_W + b_W y_W + c_W z_W + d_W = 0 \quad (18)$$

This plane equation can be fitted from the coordinates of several known feature points in the UAV coordinate system. However, since the structured light equation is unknown at this stage, the problem can be converted into solving the structured light plane equation in the image capture device coordinate system, see the following formula:

$$a_C x_C + b_C y_C + c_C z_C + d_C = 0 \quad (19)$$

Using the coordinates of feature points determined in the image capture device coordinate system, the external parameter matrix is used to convert the feature points in the image capture device coordinate system into coordinates of feature points in the UAV coordinate system; fitting at this point yields the plane equation of the line structured light in the UAV coordinate system.
在上述实施例中,描述了根据本公开的实施例的基于结构光3D视觉测量技术的无人机检测地面形态的系统和方法。在某些实施例中,除上述步骤S404、S406之外的其它步骤都可以由所述数据处理装置36执行。
在根据本公开的上述实施例的基于结构光3D视觉测量技术的无人机检测地面心态的系统和方法中,通过使投影装置发出预定的结构光、采集结构光图像并且进行三维重构,能够准确匹配结构光图像中的点和地面上的点,这样,可以准确确定出地面的三维形态,提高地面形态检测的准确性和可靠性。进一步地,可以判断地面是否平整,有利于准确判断所述地面是否适合降落。
根据本公开的又一实施例,还提供一种基于结构光3D视觉测量技术的无人机自主降落方法。所述无人机上搭载用于投射结构光的投影装置和用于获取图像的影像采集装置。其中,所述投影装置和所述影像采集装置间隔一定的距离。图11A示出了所述无人机自主降落方法的流程图,参照图11A,所述自主降落方法可以包括如下步骤:
S1102.所述投影装置将预定的结构光投射到所述无人机待降落的地面上;在某些实施例中,所述预定的结构光可以为线结构光;
S1104.所述影像采集装置采集所述地面上调制的结构光图像;
S1106.基于所述结构光图像和三维重构的计算模型,计算与所述结构光图像中的点对应的所述地面上的点在无人机坐标系中的三维坐标;
S1108.根据所述地面上的一系列点在无人机坐标系中的三维坐标,确定所述地面的形态;以及
S1110.当所述地面的形态不满足预定的平整度要求时,禁止所述无人机降落到所述地面,和/或自主确定与所述地面邻近的邻近地面的形态。
在某些实施例中,所述自主降落方法还可以包括:当所述地面的形态满足预定的平整度要求时,所述无人机降落在所述地面上。
在某些实施例中,当所述地面的形态不满足预定的平整度要求时,可以禁止所述无人机降落在所述地面上,和/或控制无人机飞行至与所述地面邻近的邻近地面上方,然后针对所述邻近地面重复执行上述步骤S1102至步骤S1108,直至寻找到满足所述预定的平整度要求的地面,然后控制所述无人机自主降落在所述满足所述预 定的平整度要求的地面上。作为一个示例,所述禁止所述无人机降落在所述地面上可以包括控制所述无人机悬停在所述地面上方。
参照图11B所示,所述无人机自主降落方法可以包括如下步骤:
S1102.所述投影装置将预定的结构光投射到所述无人机待降落的地面上;在某些实施例中,所述预定的结构光可以为线结构光;
S1104.所述影像采集装置采集所述地面上调制的结构光图像;
S1106.基于所述结构光图像和三维重构的计算模型,计算与所述结构光图像中的点对应的所述地面上的点在无人机坐标系中的三维坐标;
S1108.根据所述地面上的一系列点在无人机坐标系中的三维坐标,确定所述地面的形态;
S1109.判断确定出的所述地面的形态是否满足预定的平整度要求;以及
当所述地面的形态满足预定的平整度要求时,执行步骤S1112,即控制所述无人机自主降落在所述地面上;
或者,当所述地面的形态不满足预定的平整度要求时,执行下面的步骤S1114和/或步骤S1116。
在步骤S1114中,禁止所述无人机降落在所述地面上,例如,控制所述无人机悬停在所述地面上方。
在步骤S1116中,控制无人机飞行至与所述地面邻近的邻近地面上方。然后可以重复执行上述步骤S1102至步骤S1108,以确定所述邻近地面的形态,并判断所述邻近地面的形态是否满足预定的平整度要求。直至寻找到满足所述预定的平整度要求的地面,然后控制所述无人机自主降落在所述满足所述预定的平整度要求的地面上。
在某些实施例中,所述自主降落方法还可以包括:标定所述投影装置和所述影像采集装置,以确定所述计算模型的参数。
在一个示例中,所述标定所述投影装置和所述影像采集装置,以确定所述计算模型的参数可以包括:标定所述影像采集装置的内部参数和/或外部参数;标定所述投影装置投射的结构光与所述影像采集装置的位置关系;以及基于所述内部参数、所述外部参数和所述位置关系,确定所述计算模型。例如,所述内部参数包括与所述影像采集装置的自身特性相关的参数,和/或所述外部参数包括所述影像采集装置 在无人机坐标系中的参数。在某些实施例中,所述内部参数包括所述影像采集装置的焦距、像素大小和/或镜头畸变参数,所述外部参数包括所述影像采集装置相对于所述无人机的空间位置和/或旋转方向。例如,所述结构光与所述影像采集装置的位置关系包括线结构光平面在影像采集装置坐标系下的平面方程。
在某些实施例中,所述计算模型可以是由上式(8)表示的模型。在这些实施例中,所述标定所述影像采集装置的内部参数包括标定所述u0,v0,f,dx和dy,和/或标定所述影像采集装置的外部参数包括标定所述平移变换矩阵T和正交旋转变换矩阵R,和/或标定所述投影装置投射的结构光与所述影像采集装置的位置关系包括标定所述系数a、b、c和d。
In some embodiments, the autonomous landing method may further include: after the image acquisition device acquires the structured-light image modulated on the ground, processing the structured-light image.

In some embodiments, step S1102 may include: when the altitude of the UAV is less than a predetermined altitude, the projection device projects the predetermined structured light onto the ground where the UAV is to land. In one example, the predetermined altitude may be within 2-3 metres above the ground.
In one example, the predetermined flatness requirement may include: the maximum difference among the Z-coordinate values, in the UAV coordinate system, of the series of points on the ground is less than a predetermined threshold, e.g., 10 centimetres. It should be understood that this predetermined threshold may depend on the UAV model and/or the landing gear used by the UAV.
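Under this example criterion, the flatness test reduces to a range check on the Z coordinates of the reconstructed points, as in the sketch below (the 0.10 m default follows the 10 cm example above and would in practice depend on the airframe and landing gear):

```python
import numpy as np

def is_flat(points_uav, threshold=0.10):
    """Flatness test: the spread of Z values (in metres, UAV frame)
    must stay below the threshold."""
    z = np.asarray(points_uav)[:, 2]
    return float(z.max() - z.min()) < threshold
```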
According to an embodiment of the present disclosure, a UAV is also provided that can land autonomously when returning home. As shown in Fig. 12, the UAV 1200 may include:

a projection device 1202 carried on the UAV and configured to project predetermined structured light towards the ground;

an image acquisition device 1204 carried on the UAV and configured to acquire the structured-light image modulated on the ground; and

a control device 1206 configured at least to: determine the form of the ground based on the structured-light image. In some embodiments, determining the form of the ground based on the structured-light image may include: based on the structured-light image and the computation model for three-dimensional reconstruction, computing the three-dimensional coordinates, in the UAV coordinate system, of the ground points corresponding to the points in the structured-light image; and determining the form of the ground from the three-dimensional coordinates, in the UAV coordinate system, of a series of points on the ground.

In the autonomous UAV landing method and the UAV based on structured-light 3D vision measurement according to the above embodiments of the present disclosure, because points in the structured-light image can be accurately matched to points on the ground, the three-dimensional form of the ground can be accurately determined, and it can thus be accurately judged whether the ground is level. In this way, the UAV can be reliably controlled to land on sufficiently level ground, improving the safety of autonomous return and landing.

In some embodiments, all steps of the autonomous landing method other than steps S1102 and S1104 above may be performed by the control device 1206.
Fig. 13 is a block diagram showing an example hardware arrangement 1300 of the data processing device 36 or the control device 1206 according to an embodiment of the present disclosure. The hardware arrangement 1300 may include a processor 1306 (e.g., a central processing unit (CPU), a digital signal processor (DSP), a microcontroller unit (MCU), etc.). The processor 1306 may be a single processing unit or multiple processing units for executing the different actions of the flows described herein. The hardware arrangement 1300 may also include an input unit 1302 for receiving signals from other entities and an output unit 1304 for providing signals to other entities. The input unit 1302 and the output unit 1304 may be arranged as a single entity or as separate entities.

In addition, the hardware arrangement 1300 may include at least one readable storage medium 1308 in the form of non-volatile or volatile memory, e.g., an electrically erasable programmable read-only memory (EEPROM), a flash memory, and/or a hard disk drive. The readable storage medium 1308 includes computer program instructions 1310, which include code/computer-readable instructions that, when executed by the processor 1306 in the hardware arrangement 1300, cause the hardware arrangement 1300, and/or a data processing device or control device including the hardware arrangement 1300, to perform, for example, the flows of the methods described above and any variations thereof.

In some embodiments, the computer program instructions 1310 may be configured as computer program instruction code having, for example, the architecture of computer program instruction modules 1310A-1310B. Accordingly, in an example embodiment in which the hardware arrangement 1300 is used, for example, in a data processing device, the code in the computer program instructions of the arrangement 1300 includes: a module 1310A for computing, based on the structured-light image and the computation model for three-dimensional reconstruction, the three-dimensional coordinates, in the UAV coordinate system, of the ground points corresponding to the points in the structured-light image; and a module 1310B for determining the form of the ground from the three-dimensional coordinates, in the UAV coordinate system, of a series of points on the ground.

Although the code means in the embodiment disclosed above in conjunction with Fig. 13 are implemented as computer program instruction modules that, when executed in the processor 1306, cause the hardware arrangement 1300 to perform the flows or actions of the methods described above, in alternative embodiments at least one of the code means may be at least partially implemented as a hardware circuit.

The processor may be a single CPU (central processing unit), but may also include two or more processing units. For example, the processor may include a general-purpose microprocessor, an instruction-set processor and/or a related chipset, and/or a special-purpose microprocessor (e.g., an application-specific integrated circuit (ASIC)). The processor may also include on-board memory for caching purposes. The computer program instructions may be carried by a computer program instruction product connected to the processor. The computer program instruction product may include a computer-readable medium on which the computer program instructions are stored. For example, the computer program instruction product may be a flash memory, a random access memory (RAM), a read-only memory (ROM) or an EEPROM, and in alternative embodiments the computer program instruction modules described above may be distributed, in the form of memory within the UE, into different computer program instruction products.

It should be noted that functions described herein as being implemented by pure hardware, pure software and/or firmware may also be implemented by special-purpose hardware, by a combination of general-purpose hardware and software, and so on. For example, functions described as being implemented by special-purpose hardware (e.g., a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), etc.) may be implemented by a combination of general-purpose hardware (e.g., a central processing unit (CPU), a digital signal processor (DSP)) and software, and vice versa.
In the above embodiments of the present disclosure, the three-dimensional form of the ground is determined mainly on the basis of the structured-light image acquired by the image acquisition device and the calibrated computation model for three-dimensional reconstruction. However, those skilled in the art will appreciate that this approach is an exemplary implementation intended to elaborate the inventive concept of the present disclosure in detail, and is not the only implementation. In other embodiments of the present disclosure, other implementations may be adopted to determine the form of the ground based on the structured-light image.

Although the present disclosure has been shown and described with reference to specific exemplary embodiments thereof, those skilled in the art will understand that various changes in form and detail may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. Therefore, the scope of the present disclosure should not be limited to the above embodiments, but should be determined not only by the appended claims but also by their equivalents.

Claims (50)

  1. A ground form detection method based on an unmanned aerial vehicle (UAV), the UAV carrying a projection device for projecting structured light and an image acquisition device for acquiring images, wherein the method comprises:
    the projection device projecting predetermined structured light onto the ground where the UAV is to land;
    the image acquisition device acquiring the structured-light image modulated on the ground; and
    determining the form of the ground based on the structured-light image.
  2. The method according to claim 1, wherein the flashing frequency of the projection device is the same as the shooting frequency of the image acquisition device.
  3. The method according to claim 1, wherein determining the form of the ground based on the structured-light image comprises:
    determining the form of the ground based on the structured-light image and a computation model for three-dimensional reconstruction.
  4. The method according to claim 3, wherein the method further comprises:
    calibrating intrinsic parameters and/or extrinsic parameters of the image acquisition device;
    calibrating a positional relationship between the structured light projected by the projection device and the image acquisition device; and
    determining the computation model based on the intrinsic parameters, the extrinsic parameters and the positional relationship.
  5. The method according to claim 4, wherein the intrinsic parameters comprise a focal length, a pixel size and/or lens distortion parameters of the image acquisition device, and/or the extrinsic parameters comprise a spatial position and/or a rotational orientation of the image acquisition device relative to the UAV.
  6. The method according to claim 4 or 5, wherein the predetermined structured light comprises line structured light.
  7. The method according to claim 6, wherein the positional relationship between the structured light and the image acquisition device comprises a plane equation of the line-structured-light plane in the coordinate system of the image acquisition device.
  8. The method according to any one of claims 3-7, wherein the computation model is expressed by the following formula:
    $$
    \begin{cases}
    Z_C\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=
    \begin{bmatrix}f/dx & 0 & u_0\\ 0 & f/dy & v_0\\ 0 & 0 & 1\end{bmatrix}
    \begin{bmatrix}\mathbf{R} & \mathbf{T}\end{bmatrix}
    \begin{bmatrix}X_W\\ Y_W\\ Z_W\\ 1\end{bmatrix}\\[1ex]
    aX_W+bY_W+cZ_W+d=0
    \end{cases}
    $$
    where u, v denote the coordinates, in the pixel coordinate system, of a point in the structured-light image; u0, v0 denote the coordinates, in the pixel coordinate system, of the intersection of the optical axis of the image acquisition device with the structured-light image plane; f denotes the focal length of the image acquisition device; dx, dy denote the physical spacing of a unit pixel along the x-axis and the y-axis of the imaging coordinate system, respectively; the translation matrix T denotes the relative position of the image acquisition device in the UAV coordinate system; the orthogonal rotation matrix R denotes the relative rotational orientation of the image acquisition device in the UAV coordinate system; a, b, c, d denote the coefficients of the structured-light plane equation in the UAV coordinate system; ZC denotes the Z-axis coordinate of a ground point in the coordinate system of the image acquisition device; and (XW, YW, ZW) denote the coordinates of the ground point in the UAV coordinate system.
  9. The method according to any one of claims 1-8, wherein the method further comprises:
    after the image acquisition device acquires the structured-light image modulated on the ground, processing the structured-light image.
  10. A ground form detection system based on a UAV, wherein the ground form detection system comprises:
    a projection device carried on the UAV and configured to project predetermined structured light towards the ground;
    an image acquisition device carried on the UAV and configured to acquire the structured-light image modulated on the ground; and
    a data processing device configured to: determine the form of the ground based on the structured-light image.
  11. The system according to claim 10, wherein the flashing frequency of the projection device is the same as the shooting frequency of the image acquisition device.
  12. The system according to claim 10, wherein determining the form of the ground based on the structured-light image comprises:
    determining the form of the ground based on the structured-light image and a computation model for three-dimensional reconstruction.
  13. The system according to claim 12, wherein the data processing device is further configured to:
    calibrate intrinsic parameters and/or extrinsic parameters of the image acquisition device;
    calibrate a positional relationship between the structured light projected by the projection device and the image acquisition device; and
    determine the computation model based on the intrinsic parameters, the extrinsic parameters and the positional relationship.
  14. The system according to claim 13, wherein the intrinsic parameters comprise a focal length, a pixel size and/or lens distortion parameters of the image acquisition device, and/or the extrinsic parameters comprise a spatial position and/or a rotational orientation of the image acquisition device relative to the UAV.
  15. The system according to claim 13 or 14, wherein the predetermined structured light comprises line structured light.
  16. The system according to claim 15, wherein the positional relationship between the structured light and the image acquisition device comprises a plane equation of the line-structured-light plane in the coordinate system of the image acquisition device.
  17. The system according to any one of claims 12-16, wherein the computation model is expressed by the following formula:
    $$
    \begin{cases}
    Z_C\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=
    \begin{bmatrix}f/dx & 0 & u_0\\ 0 & f/dy & v_0\\ 0 & 0 & 1\end{bmatrix}
    \begin{bmatrix}\mathbf{R} & \mathbf{T}\end{bmatrix}
    \begin{bmatrix}X_W\\ Y_W\\ Z_W\\ 1\end{bmatrix}\\[1ex]
    aX_W+bY_W+cZ_W+d=0
    \end{cases}
    $$
    where u, v denote the coordinates, in the pixel coordinate system, of a point in the structured-light image; u0, v0 denote the coordinates, in the pixel coordinate system, of the intersection of the optical axis of the image acquisition device with the structured-light image plane; f denotes the focal length of the image acquisition device; dx, dy denote the physical spacing of a unit pixel along the x-axis and the y-axis of the imaging coordinate system, respectively; the translation matrix T denotes the relative position of the image acquisition device in the UAV coordinate system; the orthogonal rotation matrix R denotes the relative rotational orientation of the image acquisition device in the UAV coordinate system; a, b, c, d denote the coefficients of the structured-light plane equation in the UAV coordinate system; ZC denotes the Z-axis coordinate of a ground point in the coordinate system of the image acquisition device; and (XW, YW, ZW) denote the coordinates of the ground point in the UAV coordinate system.
  18. The system according to any one of claims 10-17, wherein the data processing device is further configured to:
    after the image acquisition device acquires the structured-light image modulated on the ground, process the structured-light image.
  19. An autonomous landing method for a UAV, the UAV carrying a projection device for projecting structured light and an image acquisition device for acquiring images, wherein the method comprises:
    the projection device projecting predetermined structured light onto the ground where the UAV is to land;
    the image acquisition device acquiring the structured-light image modulated on the ground;
    determining the form of the ground based on the structured-light image; and
    when the form of the ground does not satisfy a predetermined flatness requirement, prohibiting the UAV from landing on the ground, and/or autonomously determining the form of adjacent ground neighbouring the ground.
  20. The method according to claim 19, wherein the method further comprises:
    when the form of the ground satisfies the predetermined flatness requirement, landing the UAV on the ground.
  21. The method according to claim 19, wherein the method further comprises:
    when the form of the adjacent ground satisfies the predetermined flatness requirement, landing the UAV on the adjacent ground.
  22. The method according to claim 19, wherein the flashing frequency of the projection device is the same as the shooting frequency of the image acquisition device.
  23. The method according to claim 19, wherein determining the form of the ground based on the structured-light image comprises:
    determining the form of the ground based on the structured-light image and a computation model for three-dimensional reconstruction.
  24. The method according to claim 23, wherein the method further comprises:
    calibrating intrinsic parameters and/or extrinsic parameters of the image acquisition device;
    calibrating a positional relationship between the structured light projected by the projection device and the image acquisition device; and
    determining the computation model based on the intrinsic parameters, the extrinsic parameters and the positional relationship.
  25. The method according to claim 24, wherein the intrinsic parameters comprise a focal length, a pixel size and/or lens distortion parameters of the image acquisition device, and/or the extrinsic parameters comprise a spatial position and/or a rotational orientation of the image acquisition device relative to the UAV.
  26. The method according to claim 24 or 25, wherein the predetermined structured light comprises line structured light.
  27. The method according to claim 26, wherein the positional relationship between the structured light and the image acquisition device comprises a plane equation of the line-structured-light plane in the coordinate system of the image acquisition device.
  28. The method according to any one of claims 19-27, wherein the computation model is expressed by the following formula:
    $$
    \begin{cases}
    Z_C\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=
    \begin{bmatrix}f/dx & 0 & u_0\\ 0 & f/dy & v_0\\ 0 & 0 & 1\end{bmatrix}
    \begin{bmatrix}\mathbf{R} & \mathbf{T}\end{bmatrix}
    \begin{bmatrix}X_W\\ Y_W\\ Z_W\\ 1\end{bmatrix}\\[1ex]
    aX_W+bY_W+cZ_W+d=0
    \end{cases}
    $$
    where u, v denote the coordinates, in the pixel coordinate system, of a point in the structured-light image; u0, v0 denote the coordinates, in the pixel coordinate system, of the intersection of the optical axis of the image acquisition device with the structured-light image plane; f denotes the focal length of the image acquisition device; dx, dy denote the physical spacing of a unit pixel along the x-axis and the y-axis of the imaging coordinate system, respectively; the translation matrix T denotes the relative position of the image acquisition device in the UAV coordinate system; the orthogonal rotation matrix R denotes the relative rotational orientation of the image acquisition device in the UAV coordinate system; a, b, c, d denote the coefficients of the structured-light plane equation in the UAV coordinate system; ZC denotes the Z-axis coordinate of a ground point in the coordinate system of the image acquisition device; and (XW, YW, ZW) denote the coordinates of the ground point in the UAV coordinate system.
  29. The method according to any one of claims 19-28, wherein the method further comprises:
    after the image acquisition device acquires the structured-light image modulated on the ground, processing the structured-light image.
  30. The method according to any one of claims 19-29, wherein the projection device projecting the predetermined structured light onto the ground where the UAV is to land comprises:
    when the altitude of the UAV is less than a predetermined altitude, the projection device projecting the predetermined structured light onto the ground where the UAV is to land.
  31. A UAV, wherein the UAV comprises:
    a projection device carried on the UAV and configured to project predetermined structured light towards the ground;
    an image acquisition device carried on the UAV and configured to acquire the structured-light image modulated on the ground; and
    a control device configured to:
    determine the form of the ground based on the structured-light image; and
    when the form of the ground does not satisfy a predetermined flatness requirement, prohibit the UAV from landing on the ground, and/or autonomously determine the form of adjacent ground neighbouring the ground.
  32. The UAV according to claim 31, wherein the control device is further configured to:
    when the form of the ground satisfies the predetermined flatness requirement, control the UAV to land on the ground.
  33. The UAV according to claim 31, wherein the control device is further configured to:
    when the form of the adjacent ground satisfies the predetermined flatness requirement, control the UAV to land on the adjacent ground.
  34. The UAV according to claim 31, wherein the flashing frequency of the projection device is the same as the shooting frequency of the image acquisition device.
  35. The UAV according to claim 31, wherein determining the form of the ground based on the structured-light image comprises:
    determining the form of the ground based on the structured-light image and a computation model for three-dimensional reconstruction.
  36. The UAV according to claim 35, wherein the control device is further configured to:
    calibrate intrinsic parameters and/or extrinsic parameters of the image acquisition device;
    calibrate a positional relationship between the structured light projected by the projection device and the image acquisition device; and
    determine the computation model based on the intrinsic parameters, the extrinsic parameters and the positional relationship.
  37. The UAV according to claim 36, wherein the intrinsic parameters comprise a focal length, a pixel size and/or lens distortion parameters of the image acquisition device, and/or the extrinsic parameters comprise a spatial position and/or a rotational orientation of the image acquisition device relative to the UAV.
  38. The UAV according to claim 36 or 37, wherein the predetermined structured light comprises line structured light.
  39. The UAV according to claim 38, wherein the positional relationship between the structured light and the image acquisition device comprises a plane equation of the line-structured-light plane in the coordinate system of the image acquisition device.
  40. The UAV according to any one of claims 35-39, wherein the computation model is expressed by the following formula:
    $$
    \begin{cases}
    Z_C\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=
    \begin{bmatrix}f/dx & 0 & u_0\\ 0 & f/dy & v_0\\ 0 & 0 & 1\end{bmatrix}
    \begin{bmatrix}\mathbf{R} & \mathbf{T}\end{bmatrix}
    \begin{bmatrix}X_W\\ Y_W\\ Z_W\\ 1\end{bmatrix}\\[1ex]
    aX_W+bY_W+cZ_W+d=0
    \end{cases}
    $$
    where u, v denote the coordinates, in the pixel coordinate system, of a point in the structured-light image; u0, v0 denote the coordinates, in the pixel coordinate system, of the intersection of the optical axis of the image acquisition device with the structured-light image plane; f denotes the focal length of the image acquisition device; dx, dy denote the physical spacing of a unit pixel along the x-axis and the y-axis of the imaging coordinate system, respectively; the translation matrix T denotes the relative position of the image acquisition device in the UAV coordinate system; the orthogonal rotation matrix R denotes the relative rotational orientation of the image acquisition device in the UAV coordinate system; a, b, c, d denote the coefficients of the structured-light plane equation in the UAV coordinate system; ZC denotes the Z-axis coordinate of a ground point in the coordinate system of the image acquisition device; and (XW, YW, ZW) denote the coordinates of the ground point in the UAV coordinate system.
  41. The UAV according to any one of claims 31-40, wherein the control device is further configured to:
    after the image acquisition device acquires the structured-light image modulated on the ground, process the structured-light image.
  42. The UAV according to any one of claims 31-41, wherein the control device is further configured to:
    detect the altitude of the UAV, and
    when the altitude of the UAV is less than a predetermined altitude, control the projection device to project the predetermined structured light onto the ground where the UAV is to land.
  43. An electronic device for a UAV, comprising:
    a processor; and
    a memory storing instructions that, when executed by the processor, cause the processor to:
    acquire a structured-light image; and
    determine the form of the ground based on the structured-light image.
  44. The electronic device according to claim 43, wherein determining the form of the ground based on the structured-light image comprises:
    determining the form of the ground based on the structured-light image and a computation model for three-dimensional reconstruction.
  45. The electronic device according to claim 44, wherein the instructions, when executed by the processor, further cause the processor to:
    calibrate intrinsic parameters and/or extrinsic parameters of the image acquisition device;
    calibrate a positional relationship between the structured light projected by the projection device and the image acquisition device; and
    determine the computation model based on the intrinsic parameters, the extrinsic parameters and the positional relationship.
  46. The electronic device according to claim 45, wherein the intrinsic parameters comprise a focal length, a pixel size and/or lens distortion parameters of the image acquisition device, and/or the extrinsic parameters comprise a spatial position and/or a rotational orientation of the image acquisition device relative to the UAV.
  47. The electronic device according to claim 45 or 46, wherein the predetermined structured light comprises line structured light.
  48. The electronic device according to claim 47, wherein the positional relationship between the structured light and the image acquisition device comprises a plane equation of the line-structured-light plane in the coordinate system of the image acquisition device.
  49. The electronic device according to any one of claims 44-48, wherein the computation model is expressed by the following formula:
    $$
    \begin{cases}
    Z_C\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=
    \begin{bmatrix}f/dx & 0 & u_0\\ 0 & f/dy & v_0\\ 0 & 0 & 1\end{bmatrix}
    \begin{bmatrix}\mathbf{R} & \mathbf{T}\end{bmatrix}
    \begin{bmatrix}X_W\\ Y_W\\ Z_W\\ 1\end{bmatrix}\\[1ex]
    aX_W+bY_W+cZ_W+d=0
    \end{cases}
    $$
    where u, v denote the coordinates, in the pixel coordinate system, of a point in the structured-light image; u0, v0 denote the coordinates, in the pixel coordinate system, of the intersection of the optical axis of the image acquisition device with the structured-light image plane; f denotes the focal length of the image acquisition device; dx, dy denote the physical spacing of a unit pixel along the x-axis and the y-axis of the imaging coordinate system, respectively; the translation matrix T denotes the relative position of the image acquisition device in the UAV coordinate system; the orthogonal rotation matrix R denotes the relative rotational orientation of the image acquisition device in the UAV coordinate system; a, b, c, d denote the coefficients of the structured-light plane equation in the UAV coordinate system; ZC denotes the Z-axis coordinate of a ground point in the coordinate system of the image acquisition device; and (XW, YW, ZW) denote the coordinates of the ground point in the UAV coordinate system.
  50. The electronic device according to any one of claims 43-49, wherein the instructions, when executed by the processor, further cause the processor to: after acquiring the structured-light image, process the structured-light image.
PCT/CN2017/088710 2017-06-16 2017-06-16 Ground form detection method and system, unmanned aerial vehicle landing method, and unmanned aerial vehicle WO2018227576A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201780004468.XA CN108474658B (zh) 2017-06-16 2017-06-16 Ground form detection method and system, unmanned aerial vehicle landing method, and unmanned aerial vehicle
PCT/CN2017/088710 WO2018227576A1 (zh) 2017-06-16 2017-06-16 Ground form detection method and system, unmanned aerial vehicle landing method, and unmanned aerial vehicle
CN202011528344.1A CN112710284A (zh) 2017-06-16 2017-06-16 Ground form detection method and system, and unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/088710 WO2018227576A1 (zh) 2017-06-16 2017-06-16 Ground form detection method and system, unmanned aerial vehicle landing method, and unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
WO2018227576A1 true WO2018227576A1 (zh) 2018-12-20

Family

ID=63266530

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/088710 WO2018227576A1 (zh) 2017-06-16 2017-06-16 Ground form detection method and system, unmanned aerial vehicle landing method, and unmanned aerial vehicle

Country Status (2)

Country Link
CN (2) CN108474658B (zh)
WO (1) WO2018227576A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11772819B2 (en) 2020-05-18 2023-10-03 Sagar Defence Engineering Private Limited Method and system to ascertain location of drone box for landing and charging drones

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242918B (zh) * 2018-11-15 2022-07-15 China Helicopter Research and Development Institute Helicopter-borne binocular stereo vision calibration method
CN118092500A (zh) * 2018-11-28 2024-05-28 SZ DJI Technology Co., Ltd. Safe landing method and apparatus for an unmanned aerial vehicle, unmanned aerial vehicle, and medium
CN109556578A (zh) * 2018-12-06 2019-04-02 Chengdu Tianruite Technology Co., Ltd. Rotary-sweep measurement and photographing method for an unmanned aerial vehicle
CN111324139A (zh) * 2018-12-13 2020-06-23 SF Technology Co., Ltd. Unmanned aerial vehicle landing method, apparatus, device and storage medium
CN109945847B (zh) * 2019-03-20 2021-01-29 Wuhan Construction Engineering Group Co., Ltd. Wall surface monitoring method and system based on a line-marking instrument
CN110297498B (zh) * 2019-06-13 2022-04-26 Jinan University Rail inspection method and system based on a wirelessly charged unmanned aerial vehicle
CN112306083B (zh) * 2019-07-30 2023-12-05 Guangzhou Xaircraft Technology Co., Ltd. Method and apparatus for determining an unmanned aerial vehicle landing area, unmanned aerial vehicle, and storage medium
CN110502022B (zh) * 2019-09-09 2022-09-13 Xiamen Jingyi Yuanda Intelligent Technology Co., Ltd. Method, apparatus, device and storage medium for achieving stable hovering of an unmanned aerial vehicle
WO2021056432A1 (zh) * 2019-09-27 2021-04-01 SZ DJI Technology Co., Ltd. Landing control method for an unmanned aerial vehicle and related device
CN112344877B (zh) * 2020-11-11 2022-02-01 Northeastern University Device and method for measuring three-dimensional morphology parameters of large rock-mass structural surfaces with an unmanned aerial vehicle

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120106800A1 (en) * 2009-10-29 2012-05-03 Saad Masood Khan 3-d model based method for detecting and classifying vehicles in aerial imagery
CN102538763A (zh) * 2012-02-14 2012-07-04 Tsinghua University Method for measuring three-dimensional terrain in river engineering model tests
CN107690840B (zh) * 2009-06-24 2013-07-31 Institute of Automation, Chinese Academy of Sciences Vision-aided navigation method and system for an unmanned aerial vehicle
CN104296681A (zh) * 2014-10-16 2015-01-21 Zhejiang University Three-dimensional terrain sensing device and method based on laser dot-matrix marking
CN105120257A (zh) * 2015-08-18 2015-12-02 Ningbo Yingxin Information Technology Co., Ltd. Vertical depth sensing device based on structured-light coding
CN105203084A (zh) * 2015-07-02 2015-12-30 Tang Yiping 3D panoramic vision device for an unmanned aerial vehicle
US20160349746A1 (en) * 2015-05-29 2016-12-01 Faro Technologies, Inc. Unmanned aerial vehicle having a projector and being tracked by a laser tracker

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424630A (zh) * 2013-08-20 2015-03-18 Huawei Technologies Co., Ltd. Three-dimensional reconstruction method and apparatus, and mobile terminal
CN104656669B (zh) * 2015-03-10 2016-08-17 Taizhou Titan Automation Equipment Co., Ltd. Image-processing-based landing position search system for an unmanned aerial vehicle
CN106371281A (zh) * 2016-11-02 2017-02-01 Liaoning Zhonglan Electronic Technology Co., Ltd. 3D camera based on multi-module structured-light 360-degree spatial scanning and positioning


Also Published As

Publication number Publication date
CN108474658B (zh) 2021-01-12
CN112710284A (zh) 2021-04-27
CN108474658A (zh) 2018-08-31

Similar Documents

Publication Publication Date Title
WO2018227576A1 (zh) Ground form detection method and system, unmanned aerial vehicle landing method, and unmanned aerial vehicle
CN109961468B (zh) Binocular-vision-based volume measurement method and apparatus, and storage medium
US10237532B2 (en) Scan colorization with an uncalibrated camera
Pusztai et al. Accurate calibration of LiDAR-camera systems using ordinary boxes
US9826217B2 (en) System and method for adjusting a baseline of an imaging system with microlens array
Pandey et al. Extrinsic calibration of a 3d laser scanner and an omnidirectional camera
KR101706093B1 (ko) Three-dimensional coordinate extraction system and method therefor
US8712144B2 (en) System and method for detecting crop rows in an agricultural field
US6915008B2 (en) Method and apparatus for multi-nodal, three-dimensional imaging
US8737720B2 (en) System and method for detecting and analyzing features in an agricultural field
CN111563921B (zh) Underwater point cloud acquisition method based on a binocular camera
WO2021140886A1 (ja) Three-dimensional model generation method, information processing device, and program
JP2012533222A (ja) Image-based surface tracking
EP3049756B1 (en) Modeling arrangement and method and system for modeling the topography of a three-dimensional surface
CN109410234A (zh) Control method and control system based on binocular-vision obstacle avoidance
CN113658241A (zh) Monocular structured-light depth recovery method, electronic device, and storage medium
Bergström et al. Automatic in-line inspection of shape based on photogrammetry
CN110378964A (zh) Camera extrinsic parameter calibration method and apparatus, and storage medium
JP2024501731A (ja) Speed measurement method and speed measurement apparatus using multiple cameras
Agrawal et al. RWU3D: Real World ToF and Stereo Dataset with High Quality Ground Truth
WO2022019128A1 (ja) Information processing device, information processing method, and computer-readable recording medium
CN117804449B (zh) Ground sensing method, apparatus, device and storage medium for a mower
WO2022209166A1 (ja) Information processing device, information processing method, and calibration target
JP6604934B2 (ja) Point-cloud pixel position determination device, method, and program
Fugerth et al. Autonomus—Navigation system for mobile robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17913262

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17913262

Country of ref document: EP

Kind code of ref document: A1