CN116482711A - Local static environment sensing method and device for autonomous selection of landing zone


Info

Publication number: CN116482711A
Application number: CN202310742397.0A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 高海明, 华炜, 张顺, 邱奇波, 谢天, 史进, 李旭
Applicant / Assignee: Zhejiang Lab (application filed by Zhejiang Lab)
Legal status: Pending


Classifications

    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G01C 9/00: Measuring inclination, e.g. by clinometers, by levels
    • G01S 17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/933: Lidar systems specially adapted for anti-collision purposes of aircraft or spacecraft
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The application relates to a local static environment sensing method for autonomous selection of a landing zone. The method comprises: drawing a point cloud map corresponding to the local ground environment; performing grid discretization on the local ground environment based on the point cloud map to obtain ground grids and, in combination with a preset height threshold, obtaining obstacle point cloud information; traversing all the ground grids and determining ground gradient information from the normal direction at the center of the current grid; performing voxelized segmentation on each obstacle object to obtain obstacle voxel grid information and classifying the obstacles based on that information; and generating, from the ground gradient information and the classified obstacle information, local static environment information for autonomous selection of the landing zone, i.e. local environment information for aircraft landing. The method can efficiently and reliably maintain the environment sensing information of a local area, so that the aircraft can carry out robust and reliable autonomous landing zone selection.

Description

Local static environment sensing method and device for autonomous selection of landing zone
Technical Field
The present disclosure relates to the field of environmental awareness, and in particular, to a local static environmental awareness method and device for autonomous landing zone selection.
Background
With the rapid development of artificial intelligence technology and the continuous enhancement of autonomous behavior and capability, more and more aircraft are active in various military and civilian application scenarios. At the same time, as aircraft applications become more widespread, safety during flight becomes more and more important. Various aircraft may need to perform an emergency landing when facing complex tasks, for example when encountering sudden situations such as complex weather, insufficient energy, reconnaissance requirements or equipment faults, and the landing has to be completed in an unknown, complex environment; robust and reliable autonomous landing zone selection can therefore help aircraft and unmanned aerial vehicles complete the landing safely.
When assisting an aircraft in selecting a landing zone, a zone with flat terrain that avoids various obstacles needs to be chosen. If autonomous landing zone selection cannot be carried out reliably and efficiently in real time, the aircraft may be unable to complete an emergency landing, with serious consequences such as damage to the vehicle. Achieving robust and reliable environment perception over long periods is therefore an important precondition for various aircraft to complete complex tasks.
Oriented towards automatic landing zone identification for aircraft, the development of real-time, reliable local static environment perception is receiving more and more attention from researchers. Laser sensors have advantages such as a large field of view, reliable measurements and resistance to interference; they are widely used in industrial scenarios such as robotics and autonomous driving, and they overcome the limitations of vision sensors under conditions such as shadow, uneven illumination and complex weather. How to effectively combine the advantages of an airborne lidar sensor with the practical application requirements of various aircraft, in particular the complex task of autonomous landing zone selection, while overcoming the computational pressure brought by large numbers of point clouds so as to achieve robust and reliable real-time local environment perception, remains an open problem.
Disclosure of Invention
Based on the above, it is necessary to provide a local static environment sensing method and device for autonomous landing zone selection, which can construct a local static environment based on ground gradient information and obstacle information in a point cloud map.
In a first aspect, the present application provides a local static context awareness method for autonomous landing zone selection, comprising:
drawing a point cloud map corresponding to the local ground environment based on the airborne radar sensor and the integrated navigation information;
performing grid discretization processing on the local ground environment based on the point cloud map to obtain a ground grid storing ground height information, and obtaining obstacle point cloud information in combination with a preset height threshold;
traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining ground gradient information;
clustering the obstacle point cloud information to obtain obstacle objects, performing voxel segmentation on each obstacle object to obtain obstacle voxel grid information, and classifying the obstacles based on the obstacle voxel grid information;
and generating local static environment information for autonomous selection of the landing zone according to the ground gradient information and the classified obstacle information.
In one embodiment, the drawing the point cloud map corresponding to the local ground environment based on the airborne radar sensor and the integrated navigation information includes:
acquiring real-time pose information of the aircraft,
acquiring ground point cloud information by an airborne radar sensor of the aircraft,
and performing splicing processing on the ground point cloud information based on the real-time pose information to obtain a point cloud map corresponding to the local ground environment.
In one embodiment, the performing grid discretization on the local ground environment based on the point cloud map to obtain a ground grid storing ground height information, and combining a preset height threshold to obtain obstacle point cloud information includes:
performing simulation processing on the point cloud map to obtain ground information corresponding to a local ground environment, and performing grid discretization processing on the ground information to obtain a ground grid with ground height information stored therein;
performing bilinear interpolation on the point cloud map to obtain ground height, and dividing the point cloud map by combining a preset height threshold to obtain obstacle point cloud information.
In one embodiment, the traversing all the ground grids, obtaining a normal direction of a current grid center based on plane fitting, and determining ground slope information includes:
traversing all the ground grids, selecting a neighborhood ground grid of a target ground grid, and calculating a normal vector of the target ground grid by using plane fitting;
and calculating based on the normal vector and the unit vector to obtain gradient information corresponding to the target ground grid.
In one embodiment, the clustering of the obstacle point cloud information to obtain obstacle objects, the voxel segmentation of each obstacle object to obtain obstacle voxel grid information, and the classification of the obstacles based on the obstacle voxel grid information include:
constructing a corresponding point cloud KD tree based on the obstacle point cloud information, generating obstacle objects by Euclidean-distance nearest-neighbor clustering, and denoising the obstacle objects;
performing voxel segmentation on the denoised obstacle objects, and computing the distribution characteristics of the point cloud in each voxel grid, including the eigenvalues and eigenvectors;
and classifying the obstacles based on the eigenvalues and eigenvectors.
In one embodiment, the performing of voxel segmentation on the denoised obstacle objects and the computing of the distribution attributes of the point cloud in each voxel grid, including the eigenvalues and eigenvectors, include:
performing voxel segmentation on the denoised obstacle objects to obtain the point cloud information contained in each voxel grid;
and obtaining the distribution attributes of the corresponding voxel grid, including the eigenvalues and eigenvectors, by principal component analysis.
In one embodiment, the classifying of the obstacles based on the eigenvalues and eigenvectors includes:
performing consistency analysis on the obstacles based on the eigenvalues and eigenvectors of the voxel grids;
and classifying the obstacles into cables, houses, vegetation, power transmission towers and other obstacles.
In a second aspect, the present application also provides a local static environment awareness apparatus for autonomous landing zone selection, the apparatus comprising:
the point cloud map construction module is used for drawing a point cloud map corresponding to the local ground environment based on the airborne radar sensor and the combined navigation information;
the point cloud information screening module is used for carrying out grid discretization processing on the local ground environment based on the point cloud map to obtain a ground grid for storing ground height information, and combining a preset height threshold to obtain obstacle point cloud information;
the gradient information calculation module is used for traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining the ground gradient information;
the obstacle classification module is used for clustering the obstacle point cloud information to obtain obstacle objects, carrying out voxel segmentation on each obstacle object to obtain obstacle voxel grid information, and classifying the obstacle based on the obstacle voxel grid information;
and the information generation module is used for generating local static environment information for autonomous selection of the landing zone according to the ground gradient information and the classified obstacle information.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
drawing a point cloud map corresponding to the local ground environment based on the airborne radar sensor and the integrated navigation information;
performing grid discretization processing on the local ground environment based on the point cloud map to obtain a ground grid storing ground height information, and obtaining obstacle point cloud information in combination with a preset height threshold;
traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining ground gradient information;
clustering the obstacle point cloud information to obtain obstacle objects, performing voxel segmentation on each obstacle object to obtain obstacle voxel grid information, and classifying the obstacles based on the obstacle voxel grid information;
and generating local static environment information for autonomous selection of the landing zone according to the ground gradient information and the classified obstacle information.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of:
drawing a point cloud map corresponding to the local ground environment based on the airborne radar sensor and the integrated navigation information;
performing grid discretization processing on the local ground environment based on the point cloud map to obtain a ground grid storing ground height information, and obtaining obstacle point cloud information in combination with a preset height threshold;
traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining ground gradient information;
clustering the obstacle point cloud information to obtain obstacle objects, performing voxel segmentation on each obstacle object to obtain obstacle voxel grid information, and classifying the obstacles based on the obstacle voxel grid information;
and generating local static environment information for autonomous selection of the landing zone according to the ground gradient information and the classified obstacle information.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
drawing a point cloud map corresponding to the local ground environment based on the airborne radar sensor and the integrated navigation information;
performing grid discretization processing on the local ground environment based on the point cloud map to obtain a ground grid storing ground height information, and obtaining obstacle point cloud information in combination with a preset height threshold;
traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining ground gradient information;
clustering the obstacle point cloud information to obtain obstacle objects, performing voxel segmentation on each obstacle object to obtain obstacle voxel grid information, and classifying the obstacles based on the obstacle voxel grid information;
and generating local static environment information for autonomous selection of the landing zone according to the ground gradient information and the classified obstacle information.
In the above solution, a point cloud map of the local ground environment is first acquired based on the aircraft-mounted laser radar sensor and the integrated navigation information; the ground information of the current local area is then extracted, the point cloud is divided into a ground point cloud and an obstacle point cloud, and ground grid information is obtained at the same time; all the ground grids are traversed, the normal direction of the center of the current grid is obtained based on plane fitting, and the current ground gradient information is determined; meanwhile, based on a nearest neighbor clustering method, the obstacle point cloud is clustered into a plurality of obstacle objects and the obstacles are classified; finally, the ground gradient information and the obstacle information are combined to generate local environment information for aircraft landing. The method can efficiently and reliably maintain the environment sensing information of the local area, so as to realize robust and reliable autonomous landing zone selection for the aircraft.
Drawings
FIG. 1 is an application environment diagram of a local static environment awareness method for landing zone autonomous selection in one embodiment;
FIG. 2 is a flow diagram of a local static context awareness method for landing zone autonomous selection in one embodiment;
FIG. 3 is a schematic diagram of a coordinate system in one embodiment;
FIG. 4 is a schematic diagram of the effect of obstacle clustering based on a KD tree in another embodiment;
FIG. 5 is a block diagram of a local static environment awareness apparatus for landing zone autonomous selection in one embodiment;
FIG. 6 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The local static environment sensing method for autonomous selection of a landing zone provided by the embodiments of the present application can be applied to the application environment shown in fig. 1. The terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104, or may be located on a cloud or other network server. A point cloud map of the local ground environment is constructed, the data of the point cloud map are processed to obtain ground gradient information and obstacle information, and finally local environment information for assisting the aircraft to land is generated according to the ground gradient information and the classified obstacle information. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices and portable wearable devices, where the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices and the like. The portable wearable device may be a smart watch, a smart bracelet, a headset or the like. The server 104 may be implemented as a stand-alone server or as a server cluster consisting of multiple servers.
In one embodiment, as shown in fig. 2, a local static environment awareness method for autonomous selection of a landing zone is provided, and the method is applied to the terminal in fig. 1 for illustration, and includes the following steps:
step S202, drawing a point cloud map corresponding to the local ground environment based on the airborne radar sensors and the integrated navigation information.
The point cloud map in this step is obtained by point cloud splicing over a given local region of interest, based on the real-time laser perception information and the integrated navigation information.
Step S204, grid discretization processing is carried out on the local ground environment based on the point cloud map, so as to obtain a ground grid for storing ground height information, and obstacle point cloud information is obtained by combining a preset height threshold.
According to the obtained point cloud map information of the ground environment, extracting the ground information of the current local area, dividing the point cloud into a ground point cloud and an obstacle point cloud, and obtaining ground grid information.
And S206, traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining ground gradient information.
According to the ground grid information, traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining the current ground gradient information.
Step S208, clustering the obstacle point cloud information to obtain obstacle objects, voxel dividing each obstacle object to obtain obstacle voxel grid information, and classifying the obstacles based on the obstacle voxel grid information.
Here, based on a nearest neighbor clustering method, the obstacle point cloud is clustered into m obstacle objects; voxel segmentation is carried out on each obstacle, and the distribution characteristics of the point cloud in each voxel grid are computed based on principal component analysis; based on the eigenvalues and eigenvectors of the voxel grids, the under-segmentation and over-segmentation produced by obstacle clustering are further overcome, and the obstacles are classified.
Step S210, local static environment information for autonomous selection of the landing zone is generated according to the ground gradient information and the classified obstacle information.
In implementation, the current local environment is first rasterized based on steps S204-S206, the corresponding ground information and the gradient information of each ground grid are obtained, and the ground grids meeting the gradient requirement are used as the initial selectable landing areas; note that the initial landable areas are defined as mutually independent sets of ground grids that meet the gradient requirement.
Next, step S208 is executed to perform clustering, consistency analysis and classification of the obstacles present in the local environment, and, in combination with the initial selectable landing zones meeting the requirements, the ground grids in which a corresponding ground obstacle is present are set as non-landable. On the basis of the initial selectable landing zones, region growing is carried out on the landable ground grids, thereby obtaining spatially continuous landable zones; a minimal sketch of this region-growing step is given below. Finally, the landable areas are screened in combination with the appearance information of the aircraft, such as its size, and the landable areas meeting the conditions are retained for the aircraft to select autonomously.
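The following Python sketch illustrates the region-growing step just described, assuming the landability of each ground grid has already been decided (gradient within the threshold and no obstacle present); the grid representation, the 4-connected growth and the minimum-cell footprint check are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np
from collections import deque

def grow_landable_regions(landable, min_cells):
    """4-connected region growing over a boolean landable-grid mask.

    landable : 2-D bool array, True where a ground grid cell is landable.
    min_cells: minimum number of cells a region must contain to be kept
               (stands in for the aircraft-footprint screening; illustrative).
    Returns a label image (0 = not landable or rejected region).
    """
    labels = np.zeros(landable.shape, dtype=int)
    next_label = 1
    for seed in zip(*np.nonzero(landable)):
        if labels[seed] != 0:
            continue
        queue, cells = deque([seed]), [seed]
        labels[seed] = next_label
        while queue:                                  # breadth-first growth
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < landable.shape[0] and 0 <= nc < landable.shape[1]
                        and landable[nr, nc] and labels[nr, nc] == 0):
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
                    cells.append((nr, nc))
        if len(cells) < min_cells:                    # too small for the aircraft
            for cell in cells:
                labels[cell] = 0
        else:
            next_label += 1
    return labels
```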
In addition, although high-altitude obstacles such as cables and power transmission towers do not affect the extraction of the ground information of a landable area, they do affect the path selection during landing; the high-altitude obstacles extracted and classified by the method can therefore further improve safety during the autonomous landing process.
According to the method, firstly, a point cloud map of a local ground environment is obtained according to an aircraft-mounted laser radar sensor and combined navigation information; then extracting ground information of the current local area, dividing the point cloud into a ground point cloud and an obstacle point cloud, and simultaneously obtaining ground grid information; traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining the current ground gradient information; meanwhile, based on a nearest neighbor clustering method, clustering the obstacle point cloud into a plurality of obstacle objects, and classifying the obstacle objects; and finally, combining the ground gradient information and the obstacle information, and generating local environment information for aircraft landing.
In one embodiment, as shown in fig. 3, step S202 includes:
step S302, acquiring real-time pose information of an aircraft;
step S304, acquiring ground point cloud information through an airborne radar sensor of the aircraft;
and step S306, performing splicing processing on the ground point cloud information based on the real-time pose information to obtain a point cloud map corresponding to the local ground environment.
In practice, real-time pose information of the aircraft is provided by on-board integrated navigation; returning ground point cloud information through an airborne laser radar sensor, and further obtaining a point cloud map of the current local ground environment according to the real-time pose information and the ground point cloud information;
specifically, the motion compensation is completed on the single-frame point cloud through the information acquired by the high-frequency inertial measurement unit (Inertial Measurement Unit, IMU), and the point cloud under the laser coordinate system is converted into a unified global coordinate system by using coordinate system conversion, and the coordinate system can be shown in the following fig. 3.
The calculation process of the airborne laser ground point coordinates comprises the following 3 steps:
First, the coordinates (x_L, y_L, z_L) of a measured ground point in the laser radar coordinate system are obtained from the measured range l and the scanning angle β according to the following formulas:
x_L = 0;
y_L = l·sin(β);
z_L = l·cos(β).
Next, the point (x_L, y_L, z_L) in the laser radar coordinate system is converted into the body coordinate system, and the corresponding point (x_B, y_B, z_B) is determined as follows:
x_B = r_11·x_L + r_12·y_L + r_13·z_L + t_1;
y_B = r_21·x_L + r_22·y_L + r_23·z_L + t_2;
z_B = r_31·x_L + r_32·y_L + r_33·z_L + t_3;
where t_1, t_2, t_3 are the translation offsets between the two coordinate systems and r_11, r_12, ..., r_33 are the rotation transformation parameters.
Finally, the point (x_B, y_B, z_B) in the body coordinate system is converted into the global coordinate system, and the corresponding point (x_W, y_W, z_W) is determined as follows:
x_W = r'_11·x_B + r'_12·y_B + r'_13·z_B + t'_1;
y_W = r'_21·x_B + r'_22·y_B + r'_23·z_B + t'_2;
z_W = r'_31·x_B + r'_32·y_B + r'_33·z_B + t'_3;
where t'_1, t'_2, t'_3 are the translation offsets between the two coordinate systems and r'_11, r'_12, ..., r'_33 are the rotation transformation parameters.
As the aircraft keeps moving, the ground observation range also becomes larger; the local region of interest is determined with the center of the aircraft body as a reference, and the point cloud map of the local ground environment can be obtained through point cloud splicing and cropping.
Considering the real-time requirement of aircraft environment perception, this step constructs the local region of interest with the current aircraft pose as a reference, and the region needs to be maintained in real time as the aircraft moves; the historical multi-frame point clouds are accumulated on this basis, the point cloud is cropped according to the range of the region of interest, and only the local point cloud is retained, so as to realize an efficient and reliable local perception process, as sketched below.
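The following Python sketch shows the frame-to-frame stitching and region-of-interest cropping just described; the function names, the square region of interest and the way poses are passed in are illustrative assumptions, not interfaces defined by the patent.

```python
import numpy as np

def lidar_to_world(points_l, R_bl, t_bl, R_wb, t_wb):
    """Transform lidar-frame points into the global frame via the body frame.

    points_l   : (N, 3) points in the laser radar coordinate system.
    R_bl, t_bl : rotation / translation from the lidar frame to the body frame
                 (extrinsic calibration).
    R_wb, t_wb : rotation / translation from the body frame to the world frame
                 (from the integrated navigation pose).
    """
    points_b = points_l @ R_bl.T + t_bl      # lidar frame -> body frame
    return points_b @ R_wb.T + t_wb          # body frame -> world frame

def crop_to_roi(points_w, center_xy, half_size):
    """Keep only points inside a square region of interest around the aircraft."""
    mask = np.all(np.abs(points_w[:, :2] - center_xy) <= half_size, axis=1)
    return points_w[mask]

def build_local_map(frames, poses, R_bl, t_bl, center_xy, half_size):
    """Accumulate several historical frames and crop them to the local region of interest."""
    stitched = [lidar_to_world(f, R_bl, t_bl, R_wb, t_wb)
                for f, (R_wb, t_wb) in zip(frames, poses)]
    return crop_to_roi(np.vstack(stitched), center_xy, half_size)
```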
In one embodiment, step S204 includes:
step S402, performing simulation processing on the point cloud map to obtain ground information corresponding to a local ground environment, and performing grid discretization processing on the ground information to obtain a ground grid with ground height information stored therein;
step S404, performing bilinear interpolation on the point cloud map to obtain the ground height, and dividing the point cloud map in combination with a preset height threshold to obtain obstacle point cloud information.
In implementation, according to the point cloud map of the local ground environment, the ground information of the current local ground environment is extracted efficiently and reliably based on a cloth simulation (Cloth Simulation) method from the vision field.
Traditional ground extraction methods distinguish obstacles from the ground by considering differences in gradient and elevation change. The cloth simulation method instead first inverts the point cloud and then assumes that a piece of cloth falls onto it from above under gravity, so that the finally settled cloth represents the current terrain; this is a new way of thinking about ground extraction. In addition, the ground is finally discretized in grid form, independently of the number of ground points, which improves the efficiency of subsequent algorithms, and each ground grid stores the corresponding ground height information.
In the specific implementation, based on the obtained ground information, all points are traversed; for each laser point, the corresponding ground height is obtained by bilinear interpolation, the corresponding height above ground is calculated, and the points are divided into ground points and obstacle points by comparison with a given height threshold, as sketched below.
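A minimal Python sketch of the bilinear ground-height interpolation and height-threshold split described above; the grid layout (origin plus uniform cell size) and the function names are illustrative assumptions.

```python
import numpy as np

def ground_height_bilinear(ground_grid, origin, cell, xy):
    """Bilinearly interpolate the stored ground height at continuous (x, y) positions.

    ground_grid : 2-D array of per-cell ground heights.
    origin, cell: grid origin (x0, y0) and cell size in meters.
    xy          : (N, 2) query positions.
    """
    gx = (xy[:, 0] - origin[0]) / cell
    gy = (xy[:, 1] - origin[1]) / cell
    i0 = np.clip(np.floor(gx).astype(int), 0, ground_grid.shape[0] - 2)
    j0 = np.clip(np.floor(gy).astype(int), 0, ground_grid.shape[1] - 2)
    fx, fy = gx - i0, gy - j0
    h00 = ground_grid[i0, j0]
    h10 = ground_grid[i0 + 1, j0]
    h01 = ground_grid[i0, j0 + 1]
    h11 = ground_grid[i0 + 1, j0 + 1]
    return ((1 - fx) * (1 - fy) * h00 + fx * (1 - fy) * h10
            + (1 - fx) * fy * h01 + fx * fy * h11)

def split_ground_obstacle(points, ground_grid, origin, cell, height_thresh):
    """Split points into ground and obstacle sets by height above the interpolated ground."""
    h_ground = ground_height_bilinear(ground_grid, origin, cell, points[:, :2])
    above = points[:, 2] - h_ground
    return points[above <= height_thresh], points[above > height_thresh]
```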
In one embodiment, step S206 includes:
step S502, traversing all the ground grids, selecting a neighborhood ground grid of a target ground grid, and calculating by using plane fitting to obtain a normal vector of the target ground grid;
and step S504, calculating based on the normal vector and the unit vector to obtain gradient information corresponding to the target ground grid.
In the generation process of the ground gradient information, all ground grids are first traversed, and the gradient characterization information of the corresponding grid is calculated by plane fitting in combination with its neighborhood grids.
Considering that the number of ground points is large, the ground gradient information is obtained in the implementation from the discretized ground grid information. Specifically, all ground grids are traversed; for the current ground grid, its neighborhood grids are combined and the normal direction n of the current grid is calculated by plane fitting. The gradient characterization information θ of the corresponding grid is then obtained according to the following formula:
θ = arccos( (n · z) / (|n|·|z|) ),
where z is the unit vector perpendicular to the horizontal ground (the vertical direction).
Given a landable-zone gradient threshold θ_t determined from the landing characteristics of the aircraft, the ground is divided into a landable area and a non-landable area by comparing the gradient information θ of the current ground grid with θ_t: grids with θ ≤ θ_t are landable, and grids with θ > θ_t are non-landable.
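A minimal Python sketch of the gradient computation described above: a plane is fitted through the target grid center and its neighborhood grid centers, and the angle between the fitted normal and the vertical unit vector gives the grid's gradient. The use of an SVD-based fit and the threshold parameter are illustrative assumptions.

```python
import numpy as np

def grid_slope(centers):
    """Gradient (in radians) of a ground grid from its own and its neighborhood grid centers.

    centers : (K, 3) array of the 3-D centers of the target grid and its neighbors.
    The plane normal is the direction of least variance of the centered points;
    the gradient is the angle between that normal and the vertical unit vector.
    """
    pts = centers - centers.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    normal = vt[-1] / np.linalg.norm(vt[-1])          # least-variance direction
    cos_theta = abs(normal @ np.array([0.0, 0.0, 1.0]))
    return np.arccos(np.clip(cos_theta, 0.0, 1.0))

def is_landable(centers, slope_threshold):
    """Mark a grid landable when its fitted gradient does not exceed the threshold."""
    return grid_slope(centers) <= slope_threshold
```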
In one embodiment, step S208 includes:
step S602, constructing a corresponding point cloud KD tree based on the obstacle point cloud information, generating obstacle objects by Euclidean-distance nearest-neighbor clustering, and denoising the obstacle objects;
step S604, performing voxel segmentation on the denoised obstacle objects, and computing the distribution characteristics of the point cloud in each voxel grid, including the eigenvalues and eigenvectors;
step S606, classifying the obstacles based on the eigenvalues and eigenvectors.
In implementation, a corresponding point cloud KD tree is constructed from the obstacle point cloud information, and a number of obstacle objects are generated by Euclidean-distance nearest-neighbor clustering. Through this obstacle point cloud clustering, each obstacle object can be constructed; for example, the obstacle point clouds of a building, vegetation, a cable, a power transmission tower and the like are each clustered separately. Organizing the obstacle point cloud as a KD tree accelerates the clustering process. The clustering based on the KD tree constructed here is illustrated in fig. 4.
Given an obstacle point count threshold, only clusters whose number of points exceeds this threshold are retained, so that point cloud noise is removed and m obstacle objects are finally kept.
In the implementation, considering the noise factors present in a complex environment, a point count threshold is given for the obstacles; obstacles whose point count is below this threshold are filtered out, so that m stable and reliable obstacle objects are finally retained.
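A minimal Python sketch of the KD-tree-based Euclidean clustering and noise filtering described above, using SciPy's cKDTree; the clustering radius and minimum point count are illustrative parameters, not values given in the patent.

```python
import numpy as np
from scipy.spatial import cKDTree
from collections import deque

def euclidean_cluster(points, radius, min_points):
    """Euclidean nearest-neighbor clustering over a KD tree of obstacle points.

    Clusters with fewer than min_points members are treated as noise and dropped;
    the remaining clusters are returned as the obstacle objects.
    """
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)      # -1 = unvisited, -2 = noise
    clusters = []
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        queue, members = deque([seed]), [seed]
        labels[seed] = len(clusters)
        while queue:                                   # grow the cluster through radius queries
            idx = queue.popleft()
            for nb in tree.query_ball_point(points[idx], radius):
                if labels[nb] == -1:
                    labels[nb] = len(clusters)
                    queue.append(nb)
                    members.append(nb)
        if len(members) >= min_points:                 # keep only stable, reliable obstacles
            clusters.append(points[members])
        else:
            labels[members] = -2
    return clusters
```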
In one embodiment, step S604 includes:
step S702, performing voxel segmentation on the denoised obstacle objects to obtain the point cloud information contained in each voxel grid;
step S704, obtaining the distribution attributes of the corresponding voxel grid, including the eigenvalues and eigenvectors, by principal component analysis.
In implementation, each of the obtained m obstacle objects is voxel-segmented, so that every point contained in an obstacle lies in a corresponding voxel grid. The point cloud distribution attributes of each voxel grid, including the eigenvalues and eigenvectors, are then obtained using principal component analysis.
In this step, the linearity and planarity of each voxel grid are quantitatively analyzed using principal component analysis (Principal Component Analysis, PCA). Specifically, λ_1, λ_2 and λ_3 (λ_1 ≥ λ_2 ≥ λ_3) denote the three eigenvalues obtained by PCA; the planarity p_i and linearity l_i of the i-th voxel grid are then characterized, in the commonly used eigenvalue-feature form, as
p_i = (λ_2 - λ_3) / λ_1,
l_i = (λ_1 - λ_2) / λ_1.
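A minimal Python sketch of the per-voxel PCA described above, computing the sorted eigenvalues, the principal direction and the planarity/linearity features in the commonly used form given in the text; the voxel size and the dictionary-based voxel bookkeeping are illustrative assumptions.

```python
import numpy as np

def voxel_pca_features(points, voxel_size):
    """Voxelize one obstacle object and compute per-voxel PCA eigen-features.

    Returns a dict keyed by voxel index with, per occupied voxel, the eigenvalues
    sorted in descending order, the planarity and linearity features and the
    principal direction (eigenvector of the largest eigenvalue).
    """
    keys = np.floor(points / voxel_size).astype(int)
    features = {}
    for key in {tuple(k) for k in keys}:
        pts = points[np.all(keys == key, axis=1)]
        if len(pts) < 3:
            continue                                    # too few points for a stable PCA
        eigval, eigvec = np.linalg.eigh(np.cov(pts.T))  # ascending eigenvalues
        l3, l2, l1 = eigval                             # relabel so that l1 >= l2 >= l3
        planarity = (l2 - l3) / l1 if l1 > 0 else 0.0
        linearity = (l1 - l2) / l1 if l1 > 0 else 0.0
        features[key] = {
            "eigenvalues": eigval[::-1],
            "planarity": planarity,
            "linearity": linearity,
            "principal_dir": eigvec[:, -1],
        }
    return features
```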
in one embodiment, step S606 includes:
step S802, performing consistency analysis on the obstacles based on the eigenvalues and eigenvectors of the voxel grids;
step S804, classifying the obstacles into cables, houses, vegetation, power transmission towers and other obstacles.
In implementation, according to the obtained voxel grid information of each obstacle, consistency analysis is performed on the obstacles based on the eigenvalues and eigenvectors of the voxel grids, which overcomes the under-segmentation and over-segmentation produced by obstacle clustering. In the specific implementation, the under-segmentation and over-segmentation of obstacles caused by complex conditions such as occlusion and adhesion can be overcome through this consistency analysis after voxelization.
In this step, accurate identification of obstacles is of great importance for the landing of the aircraft. The specific categories include cables, houses, vegetation, power transmission towers and other obstacles. The obstacle point cloud generated by a cable has high linearity after voxelization, and rod-shaped structures can additionally be distinguished through the principal direction, so this part of the obstacle classification is completed; buildings such as houses, in turn, show high planarity after voxelization and can be clearly distinguished from other obstacles. The remaining obstacles are classified in a similar way, as illustrated by the sketch below.
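Building on the voxel features sketched earlier, the following Python fragment illustrates one possible rule-based classification along the lines described above (high linearity with a near-vertical principal direction suggests a tower or pole, high linearity otherwise a cable, high planarity a house, and the rest vegetation or other obstacles); all thresholds are illustrative placeholders, not values from the patent.

```python
import numpy as np

def classify_obstacle(voxel_features, lin_thresh=0.8, plan_thresh=0.7, vert_thresh=0.8):
    """Classify one obstacle object from its averaged per-voxel eigen-features.

    voxel_features: dict produced by voxel_pca_features() in the earlier sketch.
    Returns one of "cable", "power tower", "house" or "vegetation/other".
    """
    if not voxel_features:
        return "vegetation/other"
    planarity = np.mean([f["planarity"] for f in voxel_features.values()])
    linearity = np.mean([f["linearity"] for f in voxel_features.values()])
    verticality = np.mean([abs(f["principal_dir"][2]) for f in voxel_features.values()])
    if linearity > lin_thresh:
        return "power tower" if verticality > vert_thresh else "cable"
    if planarity > plan_thresh:
        return "house"
    return "vegetation/other"
```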
It should be understood that, although the steps in the flowcharts involved in the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the execution of these steps is not strictly limited to that order, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts involved in the above embodiments may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; the order of execution of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or with sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides a local static environment sensing device for the autonomous selection of the landing zone, which is used for realizing the above related local static environment sensing method for the autonomous selection of the landing zone. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in the embodiments of the local static environment awareness device for autonomous selection of a landing zone provided below may be referred to the limitation of the local static environment awareness method for autonomous selection of a landing zone hereinabove, and will not be described herein.
In one embodiment, as shown in FIG. 5, there is provided a localized static environment awareness apparatus 50 for autonomous landing zone selection, comprising: the system comprises a point cloud map construction module 52, a point cloud information screening module 54, a gradient information calculation module 56, an obstacle classification module 58 and an information generation module 60, wherein:
the point cloud map construction module 52 is configured to draw a point cloud map corresponding to the local ground environment based on the airborne radar sensor and the integrated navigation information;
the cloud map generation in the step is obtained by point cloud splicing on the basis of a given local interest area based on real-time laser perception information and combined navigation information.
The point cloud information screening module 54 is configured to perform grid discretization processing on the local ground environment based on the point cloud map to obtain a ground grid storing ground height information, and combine a preset height threshold to obtain obstacle point cloud information;
according to the obtained point cloud map information of the ground environment, extracting the ground information of the current local area, dividing the point cloud into a ground point cloud and an obstacle point cloud, and obtaining ground grid information.
The gradient information calculation module 56 is configured to traverse all the ground grids, obtain a normal direction of a current grid center based on plane fitting, and determine ground gradient information;
According to the ground grid information, traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining the current ground gradient information.
The obstacle classification module 58 is configured to cluster the obstacle point cloud information to obtain obstacle objects, voxel-split each obstacle object to obtain obstacle voxel grid information, and classify the obstacle based on the obstacle voxel grid information;
Here, based on a nearest neighbor clustering method, the obstacle point cloud is clustered into m obstacle objects; voxel segmentation is carried out on each obstacle, and the distribution characteristics of the point cloud in each voxel grid are computed based on principal component analysis; based on the eigenvalues and eigenvectors of the voxel grids, the under-segmentation and over-segmentation produced by obstacle clustering are further overcome, and the obstacles are classified.
The information generating module 60 is configured to generate local static environment information for autonomous selection of a landing zone according to the ground gradient information and the classified obstacle information.
In implementation, the current local environment is first rasterized by means of the point cloud information screening module 54 and the gradient information calculation module 56, the corresponding ground information and the gradient information of each ground grid are obtained, and the ground grids meeting the gradient requirement are used as the initial selectable landing areas; note that the initial landable areas are defined as mutually independent sets of ground grids that meet the gradient requirement.
Next, clustering, consistency analysis and classification of the obstacles present in the local environment are performed by means of the obstacle classification module 58, and, in combination with the initial selectable landing zones meeting the requirements, the ground grids in which a corresponding ground obstacle is present are set as non-landable. On the basis of the initial selectable landing zones, region growing is carried out on the landable ground grids, thereby obtaining spatially continuous landable zones. Finally, the landable areas are screened in combination with the appearance information of the aircraft, such as its size, and the landable areas meeting the conditions are retained for the aircraft to select autonomously.
In addition, although high-altitude obstacles such as cables and power transmission towers do not affect the extraction of the ground information of a landable area, they do affect the path selection during landing; the high-altitude obstacles extracted and classified by the method can therefore further improve safety during the autonomous landing process.
The local static environment sensing device provided by the embodiment firstly obtains a point cloud map of a local ground environment according to an aircraft airborne laser radar sensor and combined navigation information; then extracting ground information of the current local area, dividing the point cloud into a ground point cloud and an obstacle point cloud, and simultaneously obtaining ground grid information; traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining the current ground gradient information; meanwhile, based on a nearest neighbor clustering method, clustering the obstacle point cloud into a plurality of obstacle objects, and classifying the obstacle objects; and finally, combining the ground gradient information and the obstacle information, and generating local environment information for aircraft landing.
The various modules in the local static context awareness apparatus for landing zone autonomous selection described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 6. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used to store relevant data in the point cloud map. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a local static context awareness method for landing zone autonomous selection.
It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
step S202, drawing a point cloud map corresponding to the local ground environment based on the airborne radar sensors and the integrated navigation information.
The point cloud map in this step is obtained by point cloud splicing over a given local region of interest, based on the real-time laser perception information and the integrated navigation information.
Step S204, grid discretization processing is carried out on the local ground environment based on the point cloud map, so as to obtain a ground grid for storing ground height information, and obstacle point cloud information is obtained by combining a preset height threshold.
According to the obtained point cloud map information of the ground environment, extracting the ground information of the current local area, dividing the point cloud into a ground point cloud and an obstacle point cloud, and obtaining ground grid information.
And S206, traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining ground gradient information.
According to the ground grid information, traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining the current ground gradient information.
Step S208, clustering the obstacle point cloud information to obtain obstacle objects, voxel dividing each obstacle object to obtain obstacle voxel grid information, and classifying the obstacles based on the obstacle voxel grid information.
Here, based on a nearest neighbor clustering method, the obstacle point cloud is clustered into m obstacle objects; voxel segmentation is carried out on each obstacle, and the distribution characteristics of the point cloud in each voxel grid are computed based on principal component analysis; based on the eigenvalues and eigenvectors of the voxel grids, the under-segmentation and over-segmentation produced by obstacle clustering are further overcome, and the obstacles are classified.
Step S210, local static environment information for autonomous selection of the landing zone is generated according to the ground gradient information and the classified obstacle information.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
step S202, drawing a point cloud map corresponding to the local ground environment based on the airborne radar sensors and the integrated navigation information.
The point cloud map in this step is obtained by point cloud splicing over a given local region of interest, based on the real-time laser perception information and the integrated navigation information.
Step S204, grid discretization processing is carried out on the local ground environment based on the point cloud map, so as to obtain a ground grid for storing ground height information, and obstacle point cloud information is obtained by combining a preset height threshold.
According to the obtained point cloud map information of the ground environment, extracting the ground information of the current local area, dividing the point cloud into a ground point cloud and an obstacle point cloud, and obtaining ground grid information.
And S206, traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining ground gradient information.
According to the ground grid information, traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining the current ground gradient information.
Step S208, clustering the obstacle point cloud information to obtain obstacle objects, voxel dividing each obstacle object to obtain obstacle voxel grid information, and classifying the obstacles based on the obstacle voxel grid information.
Here, based on a nearest neighbor clustering method, the obstacle point cloud is clustered into m obstacle objects; voxel segmentation is carried out on each obstacle, and the distribution characteristics of the point cloud in each voxel grid are computed based on principal component analysis; based on the eigenvalues and eigenvectors of the voxel grids, the under-segmentation and over-segmentation produced by obstacle clustering are further overcome, and the obstacles are classified.
Step S210, local static environment information for autonomous selection of the landing zone is generated according to the ground gradient information and the classified obstacle information.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
step S202, drawing a point cloud map corresponding to the local ground environment based on the airborne radar sensors and the integrated navigation information.
The point cloud map in this step is obtained by point cloud splicing over a given local region of interest, based on the real-time laser perception information and the integrated navigation information.
Step S204, grid discretization processing is carried out on the local ground environment based on the point cloud map, so as to obtain a ground grid for storing ground height information, and obstacle point cloud information is obtained by combining a preset height threshold.
According to the obtained point cloud map information of the ground environment, extracting the ground information of the current local area, dividing the point cloud into a ground point cloud and an obstacle point cloud, and obtaining ground grid information.
And S206, traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining ground gradient information.
According to the ground grid information, traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining the current ground gradient information.
Step S208, clustering the obstacle point cloud information to obtain obstacle objects, voxel dividing each obstacle object to obtain obstacle voxel grid information, and classifying the obstacles based on the obstacle voxel grid information.
Here, based on a nearest neighbor clustering method, the obstacle point cloud is clustered into m obstacle objects; voxel segmentation is carried out on each obstacle, and the distribution characteristics of the point cloud in each voxel grid are computed based on principal component analysis; based on the eigenvalues and eigenvectors of the voxel grids, the under-segmentation and over-segmentation produced by obstacle clustering are further overcome, and the obstacles are classified.
Step S210, local static environment information for autonomous selection of the landing zone is generated according to the ground gradient information and the classified obstacle information.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can take many forms, such as static Random access memory (Static Random Access Memory, SRAM) or Dynamic Random access memory (Dynamic Random AccessMemory, DRAM), among others. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The above embodiments represent only a few implementations of the present application; their descriptions are specific and detailed but are not to be construed as limiting the scope of the application. It should be noted that those skilled in the art may make various modifications and improvements without departing from the concept of the present application, and such modifications and improvements fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A localized static environment awareness method for autonomous landing zone selection, the localized static environment awareness method comprising:
drawing a point cloud map corresponding to the local ground environment based on the airborne radar sensor and the integrated navigation information;
performing grid discretization processing on the local ground environment based on the point cloud map to obtain a ground grid for storing ground height information, and obtaining obstacle point cloud information by combining a preset height threshold value;
traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining ground gradient information by combining the normal direction;
clustering the obstacle point cloud information to obtain obstacle objects, carrying out voxelization segmentation on each obstacle object to obtain obstacle voxel grid information, and carrying out obstacle classification based on the obstacle voxel grid information;
and generating local static environment information for autonomous selection of the landing zone according to the ground gradient information and the classified obstacle information.
2. The local static environment awareness method for autonomous landing zone selection of claim 1, wherein the drawing of a point cloud map corresponding to the local ground environment based on the airborne radar sensor and the integrated navigation information comprises:
acquiring real-time pose information of the aircraft;
acquiring ground point cloud information through an airborne radar sensor deployed on the aircraft;
and performing splicing processing on the ground point cloud information based on the real-time pose information to obtain a point cloud map corresponding to the local ground environment.
3. The local static environment sensing method for autonomous landing zone selection according to claim 1, wherein the performing grid discretization on the local ground environment based on the point cloud map to obtain a ground grid storing ground height information, and combining a preset height threshold to obtain obstacle point cloud information comprises:
performing simulation processing on the point cloud map to obtain ground information corresponding to the local ground environment, and performing grid discretization processing on the ground information to obtain a ground grid with ground height information stored therein;
and performing bilinear interpolation operation on the point cloud map to obtain ground height, and dividing the point cloud map by combining a preset height threshold to obtain obstacle point cloud information.
4. The local static environment awareness method for landing zone autonomous selection of claim 1, wherein the traversing all of the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining ground gradient information in conjunction with the normal direction comprises:
traversing all the ground grids, selecting a neighborhood ground grid of a target ground grid, and calculating a normal vector of the target ground grid by using plane fitting;
and calculating based on the normal vector and the unit vector to obtain gradient information corresponding to the target ground grid.
5. The local static environment perception method for autonomous landing zone selection according to claim 1, wherein the clustering of the obstacle point cloud information to obtain obstacle objects, voxel segmentation of each of the obstacle objects to obtain obstacle voxel grid information, and classifying the obstacle based on the obstacle voxel grid information, comprises:
constructing a point cloud KD tree based on the obstacle point cloud information, performing nearest neighbor clustering according to the Euclidean distance to generate an obstacle object, and performing denoising treatment on the obstacle object;
voxel segmentation is carried out on the obstacle object after the denoising treatment, and distribution characteristics of point clouds in each voxel grid including characteristic values and characteristic vectors are counted;
and classifying the obstacle based on the characteristic value and the characteristic vector.
6. The method for local static environment awareness for autonomous selection of a landing zone according to claim 5, wherein the voxel segmentation of the denoised obstacle object, counting distribution characteristics of point clouds in each voxel grid including characteristic values and characteristic vectors, comprises:
voxel segmentation is carried out on the obstacle object after denoising treatment, so that corresponding point cloud information in a voxel grid is obtained;
and carrying out principal component analysis on the point cloud information to obtain distribution attributes of the corresponding voxel grids including the characteristic values and the characteristic vectors.
7. The method for localized static environment awareness for landing zone autonomous selection of claim 5, wherein the classifying the obstacle based on the eigenvalues and the eigenvector comprises:
performing obstacle consistency analysis based on the characteristic values and the characteristic vectors;
and classifying the analyzed obstacles into categories including cables, houses, vegetation and power transmission towers.
8. The local static environment awareness method for landing zone autonomous selection of claim 1, wherein the generating local static environment information for landing zone autonomous selection according to the ground gradient information and the classified obstacle information comprises:
selecting the ground grid meeting the gradient requirement as an initial selectable landing zone by combining the rasterized ground information;
setting a landable ground grid with ground obstacles as non-landable;
and extracting the spatially continuous landable areas by using a region growing method, and screening the landable areas according to parameter information including the size of the aircraft.
9. A localized static environment awareness device for autonomous landing zone selection, the device comprising:
the point cloud map construction module is used for drawing a point cloud map corresponding to the local ground environment based on the airborne radar sensor and the combined navigation information;
the point cloud information screening module is used for carrying out grid discretization processing on the local ground environment based on the point cloud map to obtain a ground grid for storing ground height information, and combining a preset height threshold to obtain obstacle point cloud information;
the gradient information calculation module is used for traversing all the ground grids, obtaining the normal direction of the center of the current grid based on plane fitting, and determining the ground gradient information by combining the normal direction;
the obstacle classification module is used for clustering the obstacle point cloud information to obtain obstacle objects, carrying out voxelization segmentation on each obstacle object to obtain obstacle voxel grid information, and carrying out obstacle classification based on the obstacle voxel grid information;
and the information generation module is used for generating local static environment information for the autonomous selection of the landing zone according to the ground gradient information and the classified obstacle information.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 8 when the computer program is executed.
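The map-building and segmentation steps recited in claims 2 and 3 are specified only at the level of operations (splicing scans with the real-time pose, grid discretization of ground height, and splitting off obstacle points with a preset height threshold). The sketch below illustrates one possible realisation; the function names, the lowest-point-per-cell ground height, and the nearest-cell lookup used in place of the bilinear interpolation recited in claim 3 are simplifying assumptions of this sketch, not the patent's implementation.

```python
# Illustrative sketch for the mapping and segmentation steps (hypothetical names,
# not the patent's implementation): scans are transformed into a common map frame
# with the integrated-navigation pose, a ground height is kept per grid cell, and
# points higher than the ground by more than a threshold become obstacle points.
import numpy as np

def splice_scan(scan_xyz: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Transform one radar scan (N, 3) into the map frame using rotation R (3, 3)
    and translation t (3,) taken from the real-time pose."""
    return scan_xyz @ R.T + t

def ground_height_grid(map_points: np.ndarray, cell: float = 1.0) -> dict:
    """Discretize the map into an x-y grid and keep the lowest z per cell as ground height."""
    keys = np.floor(map_points[:, :2] / cell).astype(int)
    heights = {}
    for key, z in zip(map(tuple, keys), map_points[:, 2]):
        heights[key] = min(z, heights.get(key, np.inf))
    return heights

def obstacle_points(map_points: np.ndarray, heights: dict, cell: float = 1.0,
                    threshold: float = 0.3) -> np.ndarray:
    """Return points more than `threshold` above the ground height of their cell;
    a bilinear interpolation of neighboring cell heights could replace the
    nearest-cell lookup shown here."""
    keys = np.floor(map_points[:, :2] / cell).astype(int)
    ground = np.array([heights.get(tuple(k), -np.inf) for k in keys])
    return map_points[map_points[:, 2] - ground > threshold]
```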
CN202310742397.0A 2023-06-21 2023-06-21 Local static environment sensing method and device for autonomous selection of landing zone Pending CN116482711A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310742397.0A CN116482711A (en) 2023-06-21 2023-06-21 Local static environment sensing method and device for autonomous selection of landing zone

Publications (1)

Publication Number Publication Date
CN116482711A true CN116482711A (en) 2023-07-25

Family

ID=87227238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310742397.0A Pending CN116482711A (en) 2023-06-21 2023-06-21 Local static environment sensing method and device for autonomous selection of landing zone

Country Status (1)

Country Link
CN (1) CN116482711A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130282208A1 (en) * 2012-04-24 2013-10-24 Exelis, Inc. Point cloud visualization of acceptable helicopter landing zones based on 4d lidar
CN109685821A (en) * 2018-12-26 2019-04-26 中国科学院大学 Region growing 3D rock mass point cloud plane extracting method based on high quality voxel
CN111413708A (en) * 2020-04-10 2020-07-14 湖南云顶智能科技有限公司 Unmanned aerial vehicle autonomous landing site selection method based on laser radar
CN111830534A (en) * 2020-06-08 2020-10-27 上海宇航系统工程研究所 Method for selecting optimal landing point by applying laser radar
CN112784799A (en) * 2021-02-01 2021-05-11 三一机器人科技有限公司 AGV (automatic guided vehicle) backward pallet and obstacle identification method and device and AGV
CN113359810A (en) * 2021-07-29 2021-09-07 东北大学 Unmanned aerial vehicle landing area identification method based on multiple sensors
CN114089377A (en) * 2021-10-21 2022-02-25 江苏大学 Point cloud processing and object identification system and method based on laser radar
CN114564042A (en) * 2022-03-01 2022-05-31 中国商用飞机有限责任公司北京民用飞机技术研究中心 Unmanned aerial vehicle landing method based on multi-sensor fusion
CN115980785A (en) * 2022-11-14 2023-04-18 中国航空工业集团公司洛阳电光设备研究所 Point cloud data processing method for helicopter aided navigation
CN116052099A (en) * 2022-12-30 2023-05-02 北京航空航天大学 Small target detection method for unstructured road

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
吴凡: "A Real-Time 3D Semantic Map Generation Method", Computer Engineering and Applications *
李爽; 彭玉明; 刘宇飞: "Autonomous Hazard Detection and Avoidance Technology for Planetary Landing", Acta Aeronautica et Astronautica Sinica, no. 08 *
杨泽鑫: "Object Reconstruction for Indoor Scene Point Clouds", Bulletin of Surveying and Mapping *
王若成: "Research on Extracting Target Terrain Slope and Roughness from Elevation Data", Microelectronics & Computer *
郭保青: "Segmentation and Classification Algorithm for 3D Point Clouds of Railway Scenes", Chinese Journal of Scientific Instrument *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116721118A (en) * 2023-08-11 2023-09-08 之江实验室 Point cloud-based selection method and device for safe landing points of aircraft
CN116721118B (en) * 2023-08-11 2024-01-09 之江实验室 Point cloud-based selection method and device for safe landing points of aircraft
CN117890886A (en) * 2024-03-15 2024-04-16 之江实验室 Scanning equipment and scanning method for realizing two-direction scanning through single-shaft driving

Similar Documents

Publication Publication Date Title
US20220028163A1 (en) Computer Vision Systems and Methods for Detecting and Modeling Features of Structures in Images
US7995055B1 (en) Classifying objects in a scene
CN116482711A (en) Local static environment sensing method and device for autonomous selection of landing zone
AU2015404215A1 (en) Vegetation management for power line corridor monitoring using computer vision
Li et al. A GCN-based method for extracting power lines and pylons from airborne LiDAR data
Song et al. Classifying 3D objects in LiDAR point clouds with a back-propagation neural network
CN114387506A (en) Transmission tower monitoring method and device, computer equipment and storage medium
CN113592891B (en) Unmanned vehicle passable domain analysis method and navigation grid map manufacturing method
CN113705631A (en) 3D point cloud target detection method based on graph convolution
CN113537180B (en) Tree obstacle identification method and device, computer equipment and storage medium
Aguiar et al. Localization and mapping on agriculture based on point-feature extraction and semiplanes segmentation from 3D LiDAR data
GB2610410A (en) Incremental dense 3-D mapping with semantics
Westfechtel et al. Semantic mapping of construction site from multiple daily airborne LiDAR data
Aissou et al. Building roof superstructures classification from imbalanced and low density airborne LiDAR point cloud
Chen et al. Continuous occupancy mapping in dynamic environments using particles
CN116266359A (en) Target tracking method, device, computer equipment and storage medium
Wu et al. A Non-rigid hierarchical discrete grid structure and its application to UAVs conflict detection and path planning
Zhuang et al. Contextual classification of 3D laser points with conditional random fields in urban environments
Sun et al. Automated segmentation of LiDAR point clouds for building rooftop extraction
CN112015199A (en) Flight path planning method and device applied to underground coal mine intelligent inspection unmanned aerial vehicle
Ahmad et al. Resource efficient mountainous skyline extraction using shallow learning
Li et al. [Retracted] PointLAE: A Point Cloud Semantic Segmentation Neural Network via Multifeature Aggregation for Large‐Scale Application
Nedevschi A Critical Evaluation of Aerial Datasets for Semantic Segmentation
Li et al. 3D point cloud multi-target detection method based on PointNet++
Wang et al. 3D building reconstruction from LiDAR data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230725