WO2021081958A1 - Terrain detection method, movable platform, control device, system and storage medium - Google Patents


Info

Publication number
WO2021081958A1
Authority
WO
WIPO (PCT)
Prior art keywords
position information, pixel, dimensional, point cloud data
Prior art date
Application number
PCT/CN2019/114890
Other languages
English (en)
Chinese (zh)
Inventor
祝煌剑
高迪
王俊喜
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980033289.8A (publication of CN112154394A)
Priority to PCT/CN2019/114890 (publication of WO2021081958A1)
Publication of WO2021081958A1


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models

Definitions

  • the invention relates to the field of detection technology, in particular to a terrain detection method, a movable platform, a control device, a system and a storage medium.
  • for terrain detection, three-dimensional point cloud data is generally collected by detection equipment such as a radar, a laser device, or a camera.
  • most existing noise removal methods are based on prior information: certain clustering rules and search methods are set, and the original observation points in the point cloud data are clustered.
  • original observation points that do not satisfy the clustering rules and cannot form a point cluster are removed as noise, and the remaining original observation points are divided according to the point cluster to which they belong.
  • this method relies on the choice of clustering rules and search methods. If appropriate clustering rules and search methods are not selected, the final clustering result will deviate greatly from the actual situation, resulting in relatively large errors in the terrain detection results. Moreover, the clustering algorithm places higher requirements on the processor and consumes more computing resources.
  • this application provides a terrain detection method, a movable platform, a control device, a system, and a storage medium to improve the accuracy of terrain detection.
  • this application provides a terrain detection method, which includes:
  • acquiring three-dimensional point cloud data containing terrain information; obtaining a two-dimensional image corresponding to the three-dimensional point cloud data; performing pixel assignment on pixels lacking pixel values in the two-dimensional image to obtain a pixel-complemented two-dimensional image; performing morphological processing on the pixel-complemented two-dimensional image to obtain a processed two-dimensional image; reconstructing three-dimensional point cloud data according to the processed two-dimensional image; and determining the terrain information.
  • the present application also provides a movable platform, which includes a detection device, a memory, and a processor;
  • the detection device is used for terrain detection and collecting three-dimensional point cloud data containing terrain information
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
  • acquire the three-dimensional point cloud data containing terrain information collected by the detection device; obtain a two-dimensional image corresponding to the three-dimensional point cloud data; perform pixel assignment on pixels lacking pixel values in the two-dimensional image to obtain a pixel-complemented two-dimensional image; perform morphological processing on the pixel-complemented two-dimensional image to obtain a processed two-dimensional image; reconstruct three-dimensional point cloud data according to the processed two-dimensional image; and determine the terrain information.
  • the present application also provides a control device, the control device including a memory and a processor;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when executing the computer program, implement the steps of the above-mentioned terrain detection method, and send the determined terrain information to the movable platform.
  • the present application also provides a control system, the control system including a movable platform and the control device as described in the third aspect; wherein the movable platform is used to collect three-dimensional point cloud data containing terrain information and send the three-dimensional point cloud data to the control device.
  • the present application also provides a computer-readable storage medium that stores a computer program, and when the computer program is executed by a processor, the processor implements the above-mentioned terrain detection method.
  • the terrain detection method, movable platform, control device, system and storage medium proposed by the present application can improve the accuracy of terrain detection.
  • FIG. 1 is a schematic block diagram of a control system provided by an embodiment of the present application;
  • FIG. 2 is a schematic structural diagram of an aircraft provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of a radar collecting terrain information according to an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of steps of a terrain detection method provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of a Delaunay triangulation constructed according to an embodiment of the present application;
  • FIG. 6 is a schematic block diagram of a movable platform provided by an embodiment of the present application;
  • FIG. 7 is a schematic block diagram of a control device provided by an embodiment of the present application.
  • the embodiments of the present application provide a terrain detection method, a movable platform, a control device, a system, and a storage medium, which are used to improve the accuracy of terrain detection by the detection device mounted on the movable platform.
  • the three-dimensional point cloud data is converted into a two-dimensional image, the two-dimensional image is processed to remove noise, and the three-dimensional point cloud data is then reconstructed from the processed two-dimensional image, achieving accurate detection of terrain information and improving the accuracy of terrain detection.
  • control system includes a movable platform and control equipment.
  • the movable platform includes an aircraft, a robot, or an autonomous vehicle, etc.
  • the movable platform is equipped with a detection device, and the detection device includes a radar, a ranging sensor, etc.; for ease of description, this application takes the radar as an example of the detection device for detailed introduction.
  • control device includes a ground control platform, mobile phone, tablet computer, notebook computer, PC computer, and the like.
  • control system is a terrain detection system
  • terrain detection system 100 includes an aircraft 110 and a control device 120.
  • the aircraft 110 includes a drone, which may be a rotary-wing drone, such as a four-rotor drone, a six-rotor drone, or an eight-rotor drone; it may also be a fixed-wing drone, or a combination of a rotary-wing and a fixed-wing drone, which is not limited here.
  • FIG. 2 is a schematic structural diagram of an aircraft 110 according to an embodiment of the present application.
  • a rotary wing unmanned aerial vehicle is taken as an example for description.
  • the aircraft 110 may include a power system, a flight control system, and a frame.
  • the aircraft 110 may communicate with the control device 120 wirelessly, and the control device 120 may display flight information of the aircraft.
  • the control device 120 may communicate with the aircraft 110 in a wireless manner for remote control of the aircraft 110.
  • the frame may include a fuselage 111 and a tripod 112 (also referred to as a landing gear).
  • the fuselage 111 may include a center frame 1111 and one or more arms 1112 connected to the center frame 1111, and the one or more arms 1112 extend radially from the center frame.
  • the tripod 112 is connected to the fuselage 111 for supporting the aircraft 110 when the aircraft 110 is landing.
  • the power system may include one or more electronic governors (referred to as ESCs for short), one or more propellers 113, and one or more motors 114 corresponding to the one or more propellers 113, where the motors 114 are connected to the electronic governors.
  • the motor 114 and the propeller 113 are arranged on the arm 1112 of the aircraft 110; the electronic governor is used to receive the driving signal generated by the flight control system, and provide a driving current to the motor according to the driving signal to control the speed of the motor 114.
  • the motor 114 is used to drive the propeller 113 to rotate, so as to provide power for the flight of the aircraft 110, and the power enables the aircraft 110 to realize movement of one or more degrees of freedom.
  • the aircraft 110 may rotate about one or more rotation axes.
  • the aforementioned rotation axis may include a roll axis, a yaw axis, and a pitch axis.
  • the motor 114 may be a DC motor or an AC motor.
  • the motor 114 may be a brushless motor or a brushed motor.
  • the flight control system may include a flight controller and a sensing system.
  • the sensing system is used to measure the attitude information of the unmanned aerial vehicle, that is, the position information and state information of the aircraft 110 in space, such as three-dimensional position, three-dimensional angle, three-dimensional velocity, three-dimensional acceleration, and three-dimensional angular velocity.
  • the sensing system may include, for example, at least one of sensors such as a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (IMU), a vision sensor, a global navigation satellite system, and a barometer.
  • the global navigation satellite system may be the Global Positioning System (GPS).
  • the flight controller is used to control the flight of the aircraft 110, for example, it can control the flight of the aircraft 110 according to the attitude information measured by the sensing system. It should be understood that the flight controller may control the aircraft 110 according to pre-programmed program instructions, or may control the aircraft 110 by responding to one or more control instructions from the control device 120.
  • the tripod 112 of the aircraft 110 is equipped with a radar 115, and the radar 115 is used to realize the function of surveying terrain information.
  • the aircraft 110 may include two or more tripods 112, and the radar 115 is mounted on one of the tripods 112.
  • the radar mainly includes an RF front-end module and a signal processing module.
  • the RF front-end module includes a transmitting antenna and a receiving antenna.
  • the signal processing module is responsible for generating modulated signals and processing and analyzing the collected intermediate frequency signals.
  • the RF front-end module receives the modulated signal to generate a high-frequency signal whose frequency changes linearly with the modulated signal, and radiates it downward through the transmitting antenna.
  • the electromagnetic wave encounters the ground, targets, or obstacles and is reflected back; the echo is then received by the receiving antenna.
  • the received echo is mixed with the transmitted signal to obtain an intermediate frequency signal, and the speed information and distance information can be obtained according to the frequency of the intermediate frequency signal.
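The exact relationship between the intermediate frequency and the measured distance is not spelled out in this text; for a linear FMCW chirp it follows the standard beat-frequency relation. A minimal sketch, with illustrative chirp parameters that are not taken from the patent:

```python
# Sketch of recovering distance from the intermediate (beat) frequency
# of a linear FMCW radar chirp. Parameter values are illustrative.

C = 3.0e8  # speed of light, m/s

def range_from_beat(f_beat_hz: float, bandwidth_hz: float, chirp_time_s: float) -> float:
    """For a linear chirp, the beat frequency is proportional to the
    round-trip delay: f_beat = (B / T) * (2R / c), so
    R = c * f_beat * T / (2 * B)."""
    return C * f_beat_hz * chirp_time_s / (2.0 * bandwidth_hz)

# Example: a 1 GHz sweep over 1 ms; a 100 kHz beat corresponds to 15 m.
r = range_from_beat(100e3, 1e9, 1e-3)
```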
  • the radar detects a target object by radiating electromagnetic waves into space and receiving the echo scattered back from the target object.
  • when flying with the movable platform, the radar continuously collects the coordinates of observation points by radiating electromagnetic waves, and finally obtains three-dimensional point cloud data including terrain information.
  • the movable platform sends the collected three-dimensional point cloud data including terrain information to the control device, and the control device processes the three-dimensional point cloud data to determine the terrain information, and the control device can also send the determined terrain information to the movable platform .
  • FIG. 3 is a schematic diagram of the radar collecting terrain information.
  • the data collected by the radar includes the depth of field detection distance, the horizontal detection distance, and the height position information, that is, the elevation value.
  • the three-dimensional point clouds used for the two-dimensional image construction are all in the geodetic coordinate system.
  • FIG. 4 is a schematic flowchart of steps of a terrain detection method according to an embodiment of the present application. This method can be applied to control equipment to improve the accuracy of terrain detection.
  • the terrain detection method will be introduced in detail below in conjunction with the control system in FIG. 1. It should be understood that the control system in FIG. 1 does not constitute a limitation on the application scenario of the terrain detection method.
  • the terrain detection method includes step S101 to step S106.
  • the three-dimensional point cloud data includes a plurality of observation points, each of the observation points includes first position information, second position information, and height position information, wherein the first position information and the second position information are different.
  • the first position information, the second position information, and the height position information are perpendicular to each other.
  • the first position information may be a depth of field detection distance
  • the second position information may be a horizontal detection distance
  • the height position information may be an elevation value.
  • the radar mounted on the aircraft scans the terrain of the flight area of the aircraft to obtain three-dimensional point cloud data containing terrain information, and sends the obtained three-dimensional point cloud data to the control device.
  • the three-dimensional point cloud data is mapped to a corresponding two-dimensional image.
  • the control device projects the acquired three-dimensional point cloud data into a two-dimensional image to obtain a two-dimensional image corresponding to the three-dimensional point cloud data.
  • the method of obtaining a two-dimensional image corresponding to the three-dimensional point cloud data is specifically: determining a two-dimensional matrix according to the three-dimensional point cloud data, and projecting the three-dimensional point cloud data according to the two-dimensional matrix to obtain the two-dimensional image.
  • the corresponding two-dimensional matrix is determined according to the three-dimensional point cloud data.
  • Each observation point in the three-dimensional point cloud data corresponds to a matrix unit in the two-dimensional matrix, where the two-dimensional matrix is a two-dimensional pixel matrix that constitutes a two-dimensional image.
  • Each matrix unit in the matrix is a pixel in the two-dimensional image.
  • all observation points in the three-dimensional point cloud data are projected one by one to obtain a two-dimensional image corresponding to the three-dimensional point cloud data.
  • one observation point can correspond to one matrix unit, or multiple observation points can correspond to one matrix unit.
  • the manner of determining the two-dimensional matrix according to the three-dimensional point cloud data is specifically: determining first target position information and second target position information according to the first position information and the second position information of the observation points in the three-dimensional point cloud data, and determining the two-dimensional matrix according to the target position information and the range resolution.
  • the range resolution refers to the resolution used by the radar mounted on the aircraft when scanning terrain information.
  • determining the first target position information and the second target position information according to the first position information and the second position information of the observation points may be done by calculating the average value of the first position information of the multiple observation points to determine the first target position information, and calculating the average value of the second position information of the multiple observation points to determine the second target position information.
  • in the specific implementation process, obtain the first position information of all observation points in the three-dimensional point cloud data, that is, the depth-of-field detection distance; calculate the sum of the depth-of-field detection distances of all observation points and divide it by the number of observation points to obtain the average depth-of-field detection distance, which is used as the first target position information.
  • similarly, obtain the second position information of all observation points, that is, the horizontal detection distance; calculate the sum of the horizontal detection distances of all observation points and divide it by the number of observation points to obtain the average horizontal detection distance, which is used as the second target position information.
  • alternatively, the first target position information and the second target position information may be determined by selecting, from the first position information and the second position information of the observation points in the three-dimensional point cloud data, the largest first position information and the largest second position information as the first target position information and the second target position information, respectively.
  • obtain the first position information of all observation points in the three-dimensional point cloud data, that is, the depth-of-field detection distance, and determine the largest depth-of-field detection distance among the multiple observation points as the first target position information.
  • obtain the second position information of all observation points, that is, the horizontal detection distance, and determine the largest horizontal detection distance among the multiple observation points as the second target position information.
  • the length and width of the two-dimensional matrix are determined according to the distance resolution.
  • the length of the two-dimensional matrix may be twice the first target location information divided by the distance resolution
  • the width of the two-dimensional matrix may be twice the second target location information divided by the distance resolution. It is understandable that the length and width of the two-dimensional matrix can be interchanged.
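The sizing rule above can be sketched as follows; the function name and the extra padding row/column are our own assumptions, and the "maximum" variant of the target position information is used:

```python
import numpy as np

# Minimal sketch of sizing the two-dimensional matrix from the point
# cloud: length = 2 * (first target position) / r, width = 2 * (second
# target position) / r, with the target positions taken as maxima.

def matrix_shape(points: np.ndarray, r: float) -> tuple[int, int]:
    """points: (N, 3) array of (depth x, horizontal y, elevation z);
    r: range resolution. The +1 padding is our own assumption so that
    boundary points still fall inside the matrix."""
    lx = np.abs(points[:, 0]).max()  # first target position info (max depth distance)
    ly = np.abs(points[:, 1]).max()  # second target position info (max horizontal distance)
    rows = int(np.ceil(2.0 * lx / r)) + 1
    cols = int(np.ceil(2.0 * ly / r)) + 1
    return rows, cols
```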
  • the method of projecting the three-dimensional point cloud data according to the two-dimensional matrix to obtain a two-dimensional image is specifically: determining that the observation point of the three-dimensional point cloud data corresponds in the two-dimensional matrix The matrix index of the observation point; assign the height position information of the observation point to the matrix element corresponding to the matrix index of the observation point; use the matrix index of the two-dimensional matrix as a pixel and correspond to the matrix element of the two-dimensional matrix The height position information is used as the pixel value of the pixel to obtain a two-dimensional image.
  • determine the matrix index of each observation point in the three-dimensional point cloud data in the two-dimensional matrix; use the matrix index corresponding to the observation point as the pixel point, and use the height position information of the observation point, that is, its elevation value, as the pixel value of that pixel point, to obtain the two-dimensional image.
  • determining the matrix index corresponding to each observation point and using it to project the three-dimensional point cloud data improves the accuracy of the projection and of the resulting two-dimensional image.
  • the step of determining the matrix index corresponding to the observation point of the three-dimensional point cloud data in the two-dimensional matrix is specifically as follows:
  • the first index value and the second index value are used to determine the matrix unit corresponding to the observation point in the two-dimensional matrix.
  • the first index value may be the coordinate of the observation point in the length direction of the two-dimensional matrix, and the second index value may be the coordinate of the observation point in the width direction; alternatively, the first index value may be the coordinate in the width direction and the second index value the coordinate in the length direction.
  • the height position information of the observation point is assigned to the matrix element corresponding to the matrix index of the observation point, thereby obtaining the pixel value of the pixel point corresponding to the observation point .
  • the coordinates of the observation point are (x i , y i ), where x i is the first position information of the observation point, that is, the depth-of-field detection distance, and y i is the second position information of the observation point, that is, the horizontal detection distance.
  • r is the range resolution
  • L x is the maximum first position information, that is, the maximum depth of field detection distance
  • Ly is the maximum second position information, that is, the maximum horizontal detection distance.
  • the step of assigning the height position information of the observation point to the matrix element corresponding to the matrix index of the observation point is specifically as follows:
  • the matrix index corresponding to each observation point is checked; if multiple observation points correspond to the same matrix index, the largest value among the height position information of those observation points is assigned to the matrix element corresponding to that matrix index.
  • using the maximum height position information of the multiple observation points as the value of the corresponding matrix element improves the completeness with which the projected two-dimensional image retains the information of the three-dimensional point cloud data.
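The projection step, combined with the keep-the-maximum rule above, might look like the following sketch. The exact index formula is not reproduced in this text (it appeared as an image in the original), so the centering form i = floor((x + L x ) / r) is an assumption consistent with the matrix sizing described earlier:

```python
import numpy as np

def project_to_image(points, r, lx, ly):
    """Project (x, y, z) observation points into a 2-D elevation image.
    The index formula i = floor((x + lx) / r), j = floor((y + ly) / r)
    is an assumption, chosen to match a matrix of length 2*lx/r.
    Empty cells stay NaN; cells hit by several observation points keep
    the maximum elevation, as described above."""
    rows = int(np.floor(2.0 * lx / r)) + 1
    cols = int(np.floor(2.0 * ly / r)) + 1
    img = np.full((rows, cols), np.nan)
    for x, y, z in points:
        i = int(np.floor((x + lx) / r))
        j = int(np.floor((y + ly) / r))
        # keep the largest height when several observation points share a cell
        if np.isnan(img[i, j]) or z > img[i, j]:
            img[i, j] = z
    return img
```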
  • S103 Perform pixel assignment on pixels lacking pixel values in the two-dimensional image to obtain a two-dimensional image with pixel complementation.
  • when the radar scans terrain information, the scan may have a blind area, or the radar may scan the same area multiple times with different range resolutions, so some pixels in the two-dimensional image lack pixel values. Therefore, pixel values need to be assigned to the pixels lacking pixel values to complement the missing parts of the two-dimensional image and avoid errors in the subsequent morphological processing.
  • the pixel assignment of the pixels lacking pixel values in the two-dimensional image specifically includes: determining the pixels lacking pixel values in the two-dimensional image, and performing interpolation processing on those pixels according to an image interpolation algorithm to complement their pixel values.
  • the image interpolation algorithm includes one of the nearest neighbor interpolation method, linear interpolation method and bilinear interpolation method.
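As an illustration of the interpolation-based completion, the following sketch uses scipy.interpolate.griddata; NaN marks pixels lacking pixel values, and the function name is our own:

```python
import numpy as np
from scipy.interpolate import griddata

def fill_missing(img: np.ndarray, method: str = "nearest") -> np.ndarray:
    """Assign values to pixels lacking pixel values (NaN) by
    interpolating from the known pixels. `method` may be "nearest" or
    "linear", mirroring the interpolation choices mentioned above.
    Illustrative sketch, not the patent's exact algorithm."""
    known = ~np.isnan(img)
    coords = np.argwhere(known)     # (row, col) of pixels with values
    values = img[known]
    missing = np.argwhere(~known)   # pixels lacking pixel values
    filled = img.copy()
    filled[~known] = griddata(coords, values, missing, method=method)
    return filled
```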
  • the steps of assigning pixel values to pixels lacking pixel values in the two-dimensional image may alternatively be as follows:
  • construct a Delaunay triangulation based on the three-dimensional point cloud data; the triangulation includes multiple Delaunay triangles. According to the Delaunay triangulation, the height position information corresponding to the pixels lacking pixel values in the two-dimensional image is determined, and the determined height position information is assigned to those pixels.
  • constructing a Delaunay triangulation based on the three-dimensional point cloud data is specifically: constructing a plurality of triangles with the observation points in the three-dimensional point cloud data as vertices, the plurality of triangles forming the Delaunay triangulation; wherein there are no other observation points within the circumcircle of any triangle in the Delaunay triangulation.
  • Figure 5 is a Delaunay triangulation constructed based on the obtained three-dimensional point cloud data. Take the observation points in the three-dimensional point cloud data as the vertices to construct multiple triangles, where the constructed triangles do not intersect each other and there are no other observation points in the circumcircle of any triangle. The constructed multiple triangles constitute Delaunay Triangulation.
  • the step of determining, according to the Delaunay triangulation, the height position information corresponding to the pixels lacking pixel values is specifically: determining a target triangle from the Delaunay triangulation, where the target triangle is a triangle whose plane contains a pixel lacking a pixel value. After the target triangle is determined, the height position information corresponding to that pixel can be determined according to the target triangle.
  • the step of determining, according to the target triangle, the height position information corresponding to the pixel point lacking pixel value in the two-dimensional image is specifically:
  • the first position information and the second position information corresponding to the pixel lacking a pixel value may be determined by the inverse of the calculation used to obtain the matrix index corresponding to an observation point in the three-dimensional point cloud data, which will not be described in detail here.
  • after determining the target triangle, obtain the first position information, second position information, and height position information of its three vertices, and calculate the plane equation of the target triangle in three-dimensional space from these vertex coordinates. Substituting the first position information and the second position information of the pixel lacking a pixel value into the plane equation solves for its height position information. When the terrain changes drastically, this reduces the smoothing distortion caused by pixel completion and improves the accuracy of the calculated height position information.
  • alternatively, the step of determining the height position information corresponding to the pixel lacking a pixel value according to the target triangle is specifically: using the height position information of the target vertex closest to the pixel as the height position information of that pixel.
  • after determining the target triangle, calculate the distances between the pixel lacking a pixel value and the three vertices of the target triangle, take the vertex with the minimum distance as the target vertex closest to the pixel, and use the height position information of that target vertex as the height position information of the pixel; this reduces the amount of calculation when the terrain changes relatively smoothly.
  • as a further alternative, the average of the height position information of the target triangle's vertices is calculated, and the average value is used as the height position information of the pixel lacking a pixel value.
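The Delaunay-based variants can be sketched with scipy.spatial.Delaunay; evaluating the containing triangle's plane equation is equivalent to barycentric interpolation over its vertices. This is an illustrative sketch, not the patent's exact procedure:

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_fill(xy_known, z_known, xy_query):
    """Fill height values at query positions using the plane equation of
    the Delaunay triangle containing each query point (the "plane
    equation" variant above). Points outside the triangulation fall back
    to the nearest vertex (the "closest vertex" variant)."""
    tri = Delaunay(xy_known)
    simplex = tri.find_simplex(xy_query)
    out = np.empty(len(xy_query))
    for k, (q, s) in enumerate(zip(xy_query, simplex)):
        if s >= 0:
            # barycentric weights == evaluating the triangle's plane equation
            verts = tri.simplices[s]
            b = tri.transform[s, :2] @ (q - tri.transform[s, 2])
            w = np.append(b, 1.0 - b.sum())
            out[k] = w @ z_known[verts]
        else:
            # nearest-vertex fallback for points outside the triangulation
            d = np.linalg.norm(xy_known - q, axis=1)
            out[k] = z_known[d.argmin()]
    return out
```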
  • S104 Perform morphological processing on the two-dimensional image after the pixel complementation to obtain a processed two-dimensional image.
  • the morphological processing includes: a closing operation, an opening operation, a closing operation followed by an opening operation, or an opening operation followed by a closing operation. Morphological processing is performed on the pixel-complemented two-dimensional image to eliminate the noise in it.
  • the closing operation includes first performing a dilation operation and then performing an erosion operation. After the closing operation, the noise in the two-dimensional image is filtered out, that is, the noise in the three-dimensional point cloud data is filtered out.
  • the closing of the image I by the kernel can be written as (I ⊕ kernel) ⊖ kernel, where the kernel is the structuring element (that is, the convolution kernel), ⊕ is the dilation operator, and ⊖ is the erosion operator.
  • I i is the first index value of the matrix index, and J i is the second index value of the matrix index.
  • the opening operation includes first performing an erosion operation and then performing a dilation operation. After the opening operation, the local lowest points are kept while the noise is removed.
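A sketch of the closing-then-opening processing using grey-scale morphology from scipy.ndimage; the 3x3 kernel size and the specific closing-then-opening order are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def denoise(img: np.ndarray, size: int = 3) -> np.ndarray:
    """Grey-scale closing (dilation then erosion) removes low-valued
    noise pits; the following opening (erosion then dilation) removes
    isolated high spikes while keeping local minima, as described
    above. Illustrative sketch with an assumed 3x3 kernel."""
    closed = ndimage.grey_closing(img, size=(size, size))
    return ndimage.grey_opening(closed, size=(size, size))
```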
  • the step of performing morphological processing on the two-dimensional image after the pixel complementation to obtain a processed two-dimensional image is specifically:
  • a plurality of convolution kernels of different sizes are selected to perform morphological processing on the pixels of the pixel-complemented two-dimensional image, and different weights are assigned to the operation results to obtain a two-dimensional image with noise removed.
  • the convolution kernels used are {kernal 1 , kernal 2 , kernal 3 , ..., kernal n }, and different weights {w 1 , w 2 , w 3 , ..., w n } are assigned to the corresponding morphological operation results.
  • the resulting filtered pixel value is the weighted sum of the morphological operation results: pixel = w 1 · f(I, kernal 1 ) + w 2 · f(I, kernal 2 ) + ... + w n · f(I, kernal n ), where f denotes the morphological operation.
  • different convolution kernels are selected for the closing operation and/or the opening operation.
  • alternatively, different convolution kernels are selected for the closing operation and/or the opening operation, and different weights are set for the processing results of the different convolution kernels.
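A minimal sketch of the multi-kernel weighted filtering just described, assuming the per-kernel results are blended by a weighted sum (the kernel sizes, weights, and function name are illustrative):

```python
import numpy as np
from scipy import ndimage

def weighted_closing(img, sizes=(3, 5), weights=(0.6, 0.4)):
    """Apply a grayscale closing with several kernel sizes and blend
    the results by the given weights (assumed to sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    out = np.zeros_like(img, dtype=float)
    for s, w in zip(sizes, weights):
        out += w * ndimage.grey_closing(img, size=(s, s))
    return out
```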
  • the terrain information includes one or more of ground height, ground flatness, and ground slope.
  • the step of determining the terrain information according to the processed two-dimensional image is specifically as follows:
  • S105 Reconstruct 3D point cloud data according to the processed 2D image.
  • the 3D point cloud data is reconstructed according to the processed 2D image, and then the reconstructed 3D point cloud data is fitted to obtain a fitting plane.
  • ground height, ground slope, and ground flatness information can be extracted from the fitted plane.
  • the step of reconstructing three-dimensional point cloud data from a processed two-dimensional image is specifically: acquiring a matrix index corresponding to a pixel in the processed two-dimensional image;
  • the pixel point undergoes coordinate conversion to obtain the first position information and the second position information of the reconstructed point corresponding to the pixel point;
  • the pixel value of the pixel point is used as the height position information of the reconstructed point to complete the reconstruction of the three-dimensional point cloud data.
  • the matrix index and pixel value corresponding to each pixel in the two-dimensional image are obtained, the coordinate conversion of the pixel is performed according to the matrix index to obtain the first position information and the second position information of the reconstructed point corresponding to the pixel, and then the pixel value is used as the height position information of the reconstructed point to complete the reconstruction of the three-dimensional point cloud data.
  • performing coordinate conversion on the pixel point according to the matrix index of the pixel point to obtain the first position information and the second position information of the reconstructed point corresponding to the pixel point includes: calculating the product of the first index value of the pixel point and the range resolution, and taking that product minus the maximum first position information (the maximum depth-of-field detection distance) as the first position information of the reconstructed point corresponding to the pixel point; and calculating the product of the second index value of the pixel point and the range resolution, and taking that product minus the maximum second position information (the maximum horizontal detection distance) as the second position information of the reconstructed point corresponding to the pixel point.
  • the coordinates of the reconstructed point are (x_i, y_i), where x_i is the first position information of the reconstructed point, that is, the depth-of-field detection distance, and y_i is the second position information, that is, the horizontal detection distance.
  • that is, x_i = I_i · r − L_x and y_i = J_i · r − L_y, where:
  • r is the range resolution;
  • L_x is the maximum first position information, that is, the maximum depth-of-field detection distance;
  • L_y is the maximum second position information, that is, the maximum horizontal detection distance.
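The index-to-coordinate conversion described above (x = I·r − L_x, y = J·r − L_y, pixel value as height) can be sketched as follows; the function name and array layout are illustrative:

```python
import numpy as np

def rebuild_points(img, r, L_x, L_y):
    """Rebuild 3-D points from a processed height image: a pixel at
    matrix index (i, j) with value z maps to
        x = i * r - L_x   (depth-of-field direction)
        y = j * r - L_y   (horizontal direction)
    where r is the range resolution and L_x, L_y are the maximum
    detection distances in the two directions."""
    ii, jj = np.indices(img.shape)
    x = ii * r - L_x
    y = jj * r - L_y
    # One reconstructed point (x, y, z) per pixel.
    return np.stack([x.ravel(), y.ravel(), img.ravel()], axis=1)
```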
  • the step of determining terrain information according to the reconstructed three-dimensional point cloud data is specifically: fitting the reconstructed three-dimensional point cloud data to obtain a fitting plane, and determining the terrain information according to the fitting plane.
  • the 3D point cloud data is reconstructed according to the processed 2D image, and then the reconstructed 3D point cloud data is fitted to obtain a fitting plane.
  • ground height, ground slope, and ground flatness information can be extracted from the fitted plane.
  • the average value is calculated according to the height position information of the multiple reconstruction points in the fitting plane, and the ground flatness of the scanning area is determined according to the average value.
  • the slope of the fitting plane is determined according to the height position information of multiple reconstruction points.
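One plausible realization of the fitting step, assuming an ordinary least-squares plane z = a·x + b·y + c, with the slope read off the fitted coefficients and flatness taken as the RMS of the residuals (the patent does not pin down these exact metrics):

```python
import numpy as np

def fit_plane(points):
    """Fit z = a*x + b*y + c to an (N, 3) array of reconstructed
    points. Returns the coefficients, the plane's slope angle in
    degrees, and the flatness as the residual RMS."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, _c = coeffs
    slope_deg = np.degrees(np.arctan(np.hypot(a, b)))  # plane tilt
    residuals = points[:, 2] - A @ coeffs
    flatness = np.sqrt(np.mean(residuals ** 2))        # RMS deviation
    return coeffs, slope_deg, flatness
```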
  • the foregoing embodiments obtain three-dimensional point cloud data containing terrain information; obtain a two-dimensional image corresponding to the three-dimensional point cloud data according to the three-dimensional point cloud data; perform pixel assignment on the pixels in the two-dimensional image that lack pixel values to obtain a pixel-complemented two-dimensional image; perform morphological processing on the pixel-complemented two-dimensional image to obtain a processed two-dimensional image; and determine terrain information according to the processed two-dimensional image. Based on digital image morphology processing, the three-dimensional point cloud data is projected into a two-dimensional image, and the two-dimensional image is morphologically filtered.
  • the ground can be separated from the ground target while removing the noise, so as to realize the accurate estimation of the ground.
  • FIG. 6 is a schematic block diagram of a movable platform provided by an embodiment of the present application.
  • the movable platform 11 includes a processor 111, a memory 112, and a detection device 113.
  • the processor 111, the memory 112, and the detection device 113 are connected by a bus, such as an I2C (Inter-Integrated Circuit) bus; alternatively, the detection device 113 and the processor 111 are connected via a CAN bus.
  • the movable platform includes aircraft, robots or autonomous unmanned vehicles.
  • the processor 111 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • the memory 112 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, or a removable hard disk.
  • the detection device 113 is used for terrain detection and collecting three-dimensional point cloud data containing terrain information.
  • the processor is used to run a computer program stored in a memory, and implement the following steps when executing the computer program:
  • Pixel assignment is performed on pixels lacking pixel values in the two-dimensional image to obtain a two-dimensional image with pixel complementation
  • the terrain information is determined.
  • the processor implementing the step of obtaining a two-dimensional image corresponding to the three-dimensional point cloud data according to the three-dimensional point cloud data includes:
  • a two-dimensional matrix is determined according to the three-dimensional point cloud data, and the three-dimensional point cloud data is projected according to the two-dimensional matrix to obtain a two-dimensional image.
  • the three-dimensional point cloud data includes a plurality of observation points, and each of the observation points includes first position information, second position information, and height position information, wherein the first position information and the second position information The location information is different.
  • the processor implementing the step of determining a two-dimensional matrix based on the three-dimensional point cloud data includes:
  • the processor implementing the step of determining the first target location information and the second target location information based on the first location information and the second location information of the observation points in the three-dimensional point cloud data includes:
  • the maximum first position information and the maximum second position information are determined from the first position information and the second position information of the observation point in the three-dimensional point cloud data, as the first target position information and the second target position information, respectively.
  • the processor implementing the step of projecting the three-dimensional point cloud data according to the two-dimensional matrix to obtain a two-dimensional image includes:
  • the processor implementing the step of determining the matrix index corresponding to the observation point of the three-dimensional point cloud data in the two-dimensional matrix includes:
  • the processor implementing the step of assigning the height position information of the observation point to the matrix element corresponding to the matrix index of the observation point includes:
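The projection steps above (determine the matrix from the maximum position information, compute each observation point's matrix index, assign its height to that element) can be sketched as follows. The index convention is assumed to be the inverse of the reconstruction formula x = i·r − L_x given later; the names are illustrative:

```python
import numpy as np

def project_to_image(points, r, L_x, L_y):
    """Project observation points (an (N, 3) array of x, y, z) into a
    2-D matrix whose pixel values are heights. Cells that no point
    falls into stay NaN, marking pixels that lack a pixel value."""
    i = np.rint((points[:, 0] + L_x) / r).astype(int)
    j = np.rint((points[:, 1] + L_y) / r).astype(int)
    img = np.full((i.max() + 1, j.max() + 1), np.nan)
    img[i, j] = points[:, 2]  # height position info -> matrix element
    return img
```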
  • the processor implementing the step of pixel assignment to pixels lacking pixel values in the two-dimensional image includes:
  • the image interpolation algorithm includes one of: nearest neighbor interpolation, linear interpolation, and bilinear interpolation.
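A sketch of the pixel-assignment step using one of the listed interpolation methods (nearest-neighbour here, via SciPy's griddata; passing method='linear' selects linear interpolation the same way). NaN marks pixels lacking a value:

```python
import numpy as np
from scipy.interpolate import griddata

def fill_missing(img):
    """Fill NaN pixels from the valid pixels by nearest-neighbour
    interpolation over the pixel grid."""
    valid = ~np.isnan(img)
    missing = np.argwhere(~valid)
    if missing.size == 0:
        return img.copy()
    out = img.copy()
    out[~valid] = griddata(np.argwhere(valid), img[valid],
                           missing, method='nearest')
    return out
```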
  • the processor implementing the step of pixel assignment to pixels lacking pixel values in the two-dimensional image includes:
  • the determined height position information is assigned to the pixel point lacking pixel value in the two-dimensional image to complete the pixel value of the pixel point lacking pixel value.
  • the processor implementing the step of constructing Delaunay's triangle based on the three-dimensional point cloud data includes:
  • the processor implementing the step of determining, according to the Delaunay triangulation, the step of determining height position information corresponding to pixels lacking pixel values in the two-dimensional image includes:
  • the height position information corresponding to the pixel point lacking pixel value in the two-dimensional image is determined according to the target triangle.
  • the processor implementing the step of determining height position information corresponding to pixels lacking pixel values in the two-dimensional image according to the target triangle includes:
  • the height position information of the pixel point lacking pixel value is calculated according to the plane equation and the first position information and second position information corresponding to the pixel point lacking pixel value.
  • the processor implementing the step of determining height position information corresponding to pixels lacking pixel values in the two-dimensional image according to the target triangle includes:
  • the vertex closest to the pixel point lacking pixel value is determined as the target vertex according to the distance, and the height position information of the target vertex is used as the height position information corresponding to the pixel point lacking pixel value.
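The Delaunay-based completion described above can be sketched with SciPy: triangulate the valid points, locate the triangle containing a missing pixel, and evaluate that triangle's plane equation there (LinearNDInterpolator performs exactly this barycentric evaluation). The sample points are illustrative:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Valid observation points: (x, y) positions with height values z = y.
xy = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
z = np.array([0.0, 0.0, 4.0, 4.0])

tri = Delaunay(xy)                     # Delaunay triangulation
interp = LinearNDInterpolator(tri, z)  # per-triangle plane equation

# Height for a pixel lacking a value at (2, 1): its containing
# triangle's plane gives z = y = 1 here.
h = float(interp(2.0, 1.0))
```

The nearest-vertex variant mentioned above would instead take the height of the closest triangulation vertex, e.g. via scipy.spatial.cKDTree.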
  • the morphological processing includes one of: a closing operation, an opening operation, a closing operation followed by an opening operation, and an opening operation followed by a closing operation.
  • the closing operation includes: performing a dilation operation first, and then an erosion operation; or, the opening operation includes: performing an erosion operation first, and then a dilation operation.
  • the step of performing morphological processing on the two-dimensional image after pixel complementation by the processor to obtain a processed two-dimensional image includes:
  • a plurality of convolution kernels of different sizes are selected to perform morphological processing on the pixels of the pixel-complemented two-dimensional image, and different weights are assigned to the processing results, so as to obtain a two-dimensional image with the noise removed.
  • a different convolution kernel is selected for the closing operation and/or the opening operation.
  • different convolution kernels are selected for the closing operation and/or the opening operation, and different weights are set for the processing results of the different convolution kernels.
  • the processor implementing the step of reconstructing three-dimensional point cloud data from the processed two-dimensional image includes:
  • the pixel value of the pixel point is used as the height position information of the reconstructed point to complete the reconstruction of the three-dimensional point cloud data.
  • the processor implementing the step of performing coordinate conversion on the pixel point according to the matrix index of the pixel point, to obtain the first position information and the second position information of the reconstructed point corresponding to the pixel point, includes:
  • the processor implementing the step of determining terrain information according to the reconstructed three-dimensional point cloud data includes:
  • the terrain information includes one or more of ground height, ground flatness, and ground slope.
  • FIG. 7 is a schematic block diagram of a control device provided by an embodiment of the present application.
  • the control device 12 includes a processor 121 and a memory 122, and the processor 121 and the memory 122 are connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
  • the processor 121 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • the memory 122 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, etc.
  • the memory 122 is used to store computer programs.
  • the processor is used to run a computer program stored in a memory, and implement the following steps when executing the computer program:
  • Pixel assignment is performed on pixels lacking pixel values in the two-dimensional image to obtain a two-dimensional image with pixel complementation
  • the terrain information is determined, and the determined terrain information is sent to the movable platform.
  • the embodiment of the present application also provides a control system, which may be, for example, the flight control system shown in FIG. 1.
  • the control system includes a movable platform and a control device, and the control device is communicatively connected with the movable platform;
  • the movable platform is used to collect three-dimensional point cloud data and send the three-dimensional point cloud data to the control device.
  • the embodiments of the present application also provide a computer-readable storage medium storing a computer program; the computer program includes program instructions, and a processor executes the program instructions to implement the steps of the terrain detection method provided in the foregoing embodiments.
  • the computer-readable storage medium may be an internal storage unit of the movable platform or the control device described in any of the foregoing embodiments, for example, the hard disk or memory of the control device.
  • the computer-readable storage medium may also be an external storage device of the control device, such as a plug-in hard disk equipped on the control device, a smart media card (SMC), a secure digital (SD) card, a flash card, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Terrain detection method, movable platform, control device, system and storage medium. The method comprises: acquiring point cloud data (S101); obtaining a two-dimensional image according to the point cloud data (S102); performing pixel assignment on the two-dimensional image to obtain a completed two-dimensional image (S103); processing the completed two-dimensional image to obtain a processed two-dimensional image (S104); reconstructing point cloud data according to the processed two-dimensional image (S105); and determining terrain information according to the reconstructed point cloud data (S106).
PCT/CN2019/114890 2019-10-31 2019-10-31 Procédé de détection de terrain, plateforme mobile, dispositif de commande, système et support de stockage WO2021081958A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980033289.8A CN112154394A (zh) 2019-10-31 2019-10-31 地形检测方法、可移动平台、控制设备、系统及存储介质
PCT/CN2019/114890 WO2021081958A1 (fr) 2019-10-31 2019-10-31 Procédé de détection de terrain, plateforme mobile, dispositif de commande, système et support de stockage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/114890 WO2021081958A1 (fr) 2019-10-31 2019-10-31 Procédé de détection de terrain, plateforme mobile, dispositif de commande, système et support de stockage

Publications (1)

Publication Number Publication Date
WO2021081958A1 true WO2021081958A1 (fr) 2021-05-06

Family

ID=73891967

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/114890 WO2021081958A1 (fr) 2019-10-31 2019-10-31 Procédé de détection de terrain, plateforme mobile, dispositif de commande, système et support de stockage

Country Status (2)

Country Link
CN (1) CN112154394A (fr)
WO (1) WO2021081958A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734825A (zh) * 2020-12-31 2021-04-30 深兰人工智能(深圳)有限公司 3d点云数据的深度补全方法和装置
CN112937444B (zh) * 2021-03-15 2023-12-29 上海三一重机股份有限公司 作业机械的辅助影像生成方法、装置和作业机械
CN113192201B (zh) * 2021-05-08 2023-08-01 上海皓桦科技股份有限公司 点云数据的数据拟合方法、装置及介质

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102129680A (zh) * 2010-01-15 2011-07-20 精工爱普生株式会社 实时几何形状感知投影和快速重校准
CN103854320A (zh) * 2012-12-05 2014-06-11 上海海事大学 基于激光雷达的车型自动识别装置及其识别方法
CN107247834A (zh) * 2017-05-31 2017-10-13 华中科技大学 一种基于图像识别的三维环境模型重构方法、设备及系统
CN108428255A (zh) * 2018-02-10 2018-08-21 台州智必安科技有限责任公司 一种基于无人机的实时三维重建方法
CN109345557A (zh) * 2018-09-19 2019-02-15 东南大学 一种基于三维重建成果的前背景分离方法

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CA2950791C (fr) * 2013-08-19 2019-04-16 State Grid Corporation Of China Systeme de navigation visuelle binoculaire et methode fondee sur un robot electrique
CN109410260B (zh) * 2018-09-27 2020-12-29 先临三维科技股份有限公司 点云数据网格化方法、装置、计算机设备和存储介质


Also Published As

Publication number Publication date
CN112154394A (zh) 2020-12-29

Similar Documents

Publication Publication Date Title
US11237572B2 (en) Collision avoidance system, depth imaging system, vehicle, map generator and methods thereof
CN108419446B (zh) 用于激光深度图取样的系统及方法
WO2020135446A1 (fr) Procédé et dispositif de positionnement de cible, et véhicule aérien sans pilote
EP3803273A1 (fr) Techniques de cartographie en temps réel dans un environnement d'objet mobile
WO2021081958A1 (fr) Procédé de détection de terrain, plateforme mobile, dispositif de commande, système et support de stockage
CN113748357A (zh) 激光雷达的姿态校正方法、装置和系统
WO2020103049A1 (fr) Procédé et dispositif de prédiction de terrain d'un radar à micro-ondes rotatif et système et véhicule aérien sans pilote
CN107796384B (zh) 使用地理弧的2d交通工具定位
Eynard et al. Real time UAV altitude, attitude and motion estimation from hybrid stereovision
CN107798695B (zh) 使用地理弧的3d交通工具定位
US11953602B2 (en) Detecting three-dimensional structure models at runtime in vehicles
CN112051575A (zh) 一种毫米波雷达与激光雷达的调整方法及相关装置
US11842440B2 (en) Landmark location reconstruction in autonomous machine applications
CN109978954A (zh) 基于箱体的雷达和相机联合标定的方法和装置
JP2023505891A (ja) 環境のトポグラフィを測定するための方法
WO2020019175A1 (fr) Procédé et dispositif de traitement d'image et dispositif photographique et véhicule aérien sans pilote
WO2021262704A1 (fr) Post-traitement de données cartographiques à des fins de précision et de réduction de bruit améliorées
CN116952229A (zh) 无人机定位方法、装置、系统和存储介质
WO2021056503A1 (fr) Procédé et appareil de positionnement pour plateforme mobile, plateforme mobile et support de stockage
WO2020113417A1 (fr) Procédé et système de reconstruction tridimensionnelle d'une scène cible, et véhicule aérien sans pilote
CN110720025B (zh) 移动物体的地图的选择方法、装置、系统和车辆/机器人
EP3943979A1 (fr) Localisation de dispositif intérieur
WO2021087785A1 (fr) Procédé de détection de terrain, plateforme mobile, dispositif et système de commande, et support d'enregistrement
Šuľaj et al. Examples of real-time UAV data processing with cloud computing
WO2021035749A1 (fr) Procédé et dispositif d'optimisation de modèle de reconstruction en trois dimensions, et plate-forme mobile

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19951004

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19951004

Country of ref document: EP

Kind code of ref document: A1