CN114994706A - Obstacle detection method and device and electronic equipment - Google Patents

Obstacle detection method and device and electronic equipment

Info

Publication number
CN114994706A
CN114994706A
Authority
CN
China
Prior art keywords
voxel
point cloud
point
dimensional
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210820105.6A
Other languages
Chinese (zh)
Inventor
梁康正
Current Assignee (the listed assignees may be inaccurate)
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Autopilot Technology Co Ltd filed Critical Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority to CN202210820105.6A
Publication of CN114994706A
Legal status: Pending

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an obstacle detection method and device and electronic equipment. The method comprises the following steps: acquiring a point cloud of an obstacle; establishing a correspondence between points in the point cloud and voxel units in a three-dimensional voxel grid, wherein the three-dimensional voxel grid comprises S voxel columns and each voxel column comprises Z voxel units; for each voxel column in the three-dimensional voxel grid, dividing the voxel column into n sub-columns and performing feature extraction on the sample points corresponding to the n sub-columns to obtain feature information of the voxel column, wherein n is a positive number greater than 1 and the sample points are points in the point cloud; and determining the detection result of the obstacle according to the feature information of the S voxel columns and a two-dimensional convolutional neural network model. The solution provided by the application can retain point cloud information to a large extent and improve detection accuracy.

Description

Obstacle detection method and device and electronic equipment
Technical Field
The application relates to the technical field of intelligent driving, in particular to a method and a device for detecting obstacles and electronic equipment.
Background
Obstacle detection refers to finding objects present in the perceivable environment and detecting their position and size, and is a key technology for ensuring that complex systems such as intelligent driver assistance and autonomous driving can operate safely.
In the related art, lidar perception schemes deployed at scale for autonomous driving often adopt voxel-based neural network models. However, a voxel-based model requires three-dimensional convolution, so its real-time performance is poor; moreover, voxelization causes serious loss of point cloud information, so detection accuracy is low.
Disclosure of Invention
In order to solve or partially solve the problems in the related art, the application provides an obstacle detection method, an obstacle detection device and electronic equipment, which can improve the detection precision.
A first aspect of the present application provides an obstacle detection method, including:
acquiring a point cloud of an obstacle;
establishing a corresponding relation between points in the point cloud and voxel units in a three-dimensional voxel grid, wherein the three-dimensional voxel grid comprises S voxel columns, and each voxel column comprises Z voxel units;
for each voxel column in the three-dimensional voxel grid, dividing the voxel column into n sub-columns, and performing feature extraction on sample points corresponding to the n sub-columns to obtain feature information of the voxel column, wherein n is a positive number greater than 1, and the sample points are points in the point cloud;
and determining the detection result of the obstacle according to the characteristic information of the S voxel columns and the two-dimensional convolutional neural network model.
In one possible implementation, the establishing a correspondence between points in the point cloud and voxel units in a three-dimensional voxel grid includes:
for a voxel unit in the three-dimensional voxel grid, if a plurality of points exist in the point cloud and correspond to the voxel unit, randomly selecting one point from the plurality of points as a sample point;
and if only one point in the point cloud corresponds to the voxel unit, taking the point as a sample point.
In one possible implementation, after the establishing a correspondence between a point in the point cloud and a voxel unit in a three-dimensional voxel grid, the method includes:
and selecting a sample point from the point cloud by adopting a farthest point sampling algorithm.
In one possible implementation, before the establishing a correspondence between points in the point cloud and voxel units in a three-dimensional voxel grid, the method further includes:
and establishing the three-dimensional voxel grid according to a preset size and the sensing range of the sensor.
The second aspect of the present application provides a detection apparatus, comprising:
the acquisition module is used for acquiring a point cloud of an obstacle;
the first establishing module is used for establishing a corresponding relation between points in the point cloud and voxel units in a three-dimensional voxel grid, wherein the three-dimensional voxel grid comprises S voxel columns, and each voxel column comprises Z voxel units;
a segmentation module for segmenting each voxel column in the three-dimensional voxel grid into n sub-columns;
an extraction module, configured to perform feature extraction on sample points corresponding to the n sub-columns to obtain feature information of the voxel column, where n is a positive number greater than 1, and the sample points are points in the point cloud;
and the first determining module is used for determining the detection result of the obstacle according to the feature information of the S voxel columns and the two-dimensional convolutional neural network model.
In one possible implementation, the apparatus further includes:
a second determining module, configured to, for a voxel unit in the three-dimensional voxel grid, randomly select one point from a plurality of points as a sample point when the plurality of points in the point cloud correspond to the voxel unit;
and the third determining module is used for, for a voxel unit in the three-dimensional voxel grid, taking the point as the sample point when only one point in the point cloud corresponds to the voxel unit.
In one possible implementation, the apparatus further includes:
and the fourth determining module is used for selecting a sample point from the point cloud by adopting a farthest point sampling algorithm.
In one possible implementation, the apparatus further includes:
and the second establishing module is used for establishing the three-dimensional voxel grid according to the preset size and the sensing range of the sensor.
A third aspect of the present application provides an electronic device comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method as described above.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon executable code, which, when executed by a processor of an electronic device, causes the processor to perform the method as described above.
According to the technical scheme, after the point cloud of the obstacle is obtained, the corresponding relation between points in the point cloud and voxel units in the three-dimensional voxel grid can be established, then the voxel column is divided into n sub-columns aiming at each voxel column in the three-dimensional voxel grid, the characteristic extraction is carried out on sample points corresponding to the n sub-columns to obtain the characteristic information of the voxel column, and finally the detection result of the obstacle is determined according to the characteristic information of each voxel column and the two-dimensional convolution neural network model. According to the scheme, the point cloud features are extracted by taking the sub-columns as units, point cloud information can be retained to a large extent, and detection precision is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
Fig. 1 is a schematic structural diagram of an obstacle detection system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an obstacle detection method according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a detection apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While embodiments of the present application are illustrated in the accompanying drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In order to facilitate understanding of the embodiments of the present application, the words related to the embodiments are described below.
Point cloud: the point cloud is a data set, and each point in the data set represents a set of x, y, z geometric coordinates and an intensity value that records the intensity of the return signal as a function of the reflectivity of the object surface. When these points are combined together, a point cloud, i.e., a collection of data points in space representing a three-dimensional shape or object, is formed. The point cloud can also be automatically colored to achieve more realistic visualization.
Voxel: "voxel" is short for volume element (volume pixel). A volume containing voxels can be represented by volume rendering or by extracting a polygonal iso-surface of a given threshold contour. The voxel is the minimum unit of digital data in three-dimensional space segmentation, conceptually similar to the pixel, the minimum unit of two-dimensional space, and can be used in fields such as three-dimensional imaging, scientific data, and medical imaging.
Three-dimensional voxel grid: a data structure representing a three-dimensional object, using fixed-size cubes as the minimum unit. The three-dimensional voxel grid is composed of cubic blocks, namely voxel units; the voxel units stacked in the Z direction form voxel columns.
To facilitate understanding of the embodiments of the present application, the background related to the embodiments is described below.
Academia has proposed a neural network model called point-pillar to alleviate the problems of voxel-based models. Compared with a voxel-based model, point-pillar can largely retain point cloud information and adopts 2D convolution, balancing accuracy and real-time performance. As the name implies, point-pillar extracts point cloud features in columns (corresponding to voxels stacked in the Z direction). However, point-pillar still loses a great deal of point cloud information in the Z direction. On one hand, this comes from its sampling method: point-pillar adopts random sampling, so sparse regions of the point cloud are often undersampled. On the other hand, it comes from its feature extraction method: point cloud sampling and feature extraction are performed over the whole column, so too much information in the Z direction is lost, which is unfavorable for detecting small objects.
In view of the above problems, the embodiments of the present application provide an obstacle detection method, which can retain point cloud information to a greater extent and improve detection accuracy.
To facilitate understanding of the present embodiment, a specific scenario example is described below to describe the obstacle detection method in the present embodiment:
the detection device acquires point cloud of the obstacle, the characteristic shape of the point cloud is (m, 4), wherein m is the number of points contained in the point cloud, 4 dimensions are x, y, z and r respectively, x is the horizontal coordinate of the points, y is the vertical coordinate of the points, z is the vertical coordinate of the points, and r is the reflectivity. The method comprises the steps of inputting point cloud into a voxel sampling-based sub-column (voxel-sampled sub-columns) model, finally generating point cloud features (H, W, 64 n) through the model after the point cloud is input (see the specific flow as shown in steps 202 and 203 in fig. 2), inputting the point cloud features into a BEV convolution model to obtain a detection result of an obstacle, and updating model parameters in the BEV convolution model and the voxel-sampled sub-columns model through a back propagation algorithm, as shown in fig. 1.
It should also be understood that the obstacle detection method in this embodiment may be applied to an intelligent vehicle, and after the intelligent vehicle starts the assistant driving function or the automatic driving function, the intelligent vehicle may detect obstacles around the intelligent vehicle by using the obstacle detection method. The obstacle detection method in this embodiment may also be applied to a robot, an aircraft, and other devices that need to perform obstacle detection, and this embodiment is not limited in particular.
The technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 2 is a schematic flowchart of an obstacle detection method according to an embodiment of the present application.
Referring to fig. 2, the obstacle detection method includes:
201. the detection device acquires a point cloud of an obstacle;
when the obstacle needs to be detected, the detection device acquires the point cloud of the obstacle. Specifically, the detection device may scan the obstacle in the measurement range through the laser radar to obtain a point cloud of the obstacle, may also obtain the point cloud of the obstacle to be detected from other devices, and may also obtain the point cloud of the obstacle through other methods, which is not limited in this embodiment.
It should be understood that the point cloud in this embodiment includes m points corresponding to the obstacle, and a three-dimensional space coordinate value and a reflectivity corresponding to each point, where the reflectivity has a value ranging from 0 to 255, and m is an integer greater than or equal to 0.
202. The detection device establishes a corresponding relation between points in the point cloud and voxel units in the three-dimensional voxel grid;
after the detection device acquires the point cloud of the obstacle, a three-dimensional voxel grid can be acquired, and then the corresponding relation between points in the point cloud and voxel units in the three-dimensional voxel grid is established, wherein the three-dimensional voxel grid comprises S voxel columns, and each voxel column comprises Z voxel units.
In some embodiments, the detection apparatus may acquire the three-dimensional voxel grid by: and establishing a three-dimensional voxel grid according to a preset size and a sensing range of the sensor, wherein the preset size refers to a voxel size preset by a system.
Specifically, the preset size comprises a voxel length h0, a voxel width w0 and a voxel height z0, and the sensing range comprises a sensing range [Xmin, Xmax] in the X direction of a three-dimensional coordinate system, a sensing range [Ymin, Ymax] in the Y direction, and a sensing range [Zmin, Zmax] in the Z direction. The detection device may then determine the length H of the three-dimensional voxel grid from the voxel length h0 and the X-direction sensing range [Xmin, Xmax], the width W from the voxel width w0 and the Y-direction sensing range [Ymin, Ymax], and the height Z (which may also be understood as the thickness of the voxel grid) from the voxel height z0 and the Z-direction sensing range [Zmin, Zmax], where H = (Xmax - Xmin)/h0, W = (Ymax - Ymin)/w0, Z = (Zmax - Zmin)/z0, and S = H * W.
Illustratively, the system presets h0 = 0.1, w0 = 0.1, z0 = 0.05, Xmin = 0, Xmax = 40, Ymin = -20, Ymax = 20, Zmin = -1.6 and Zmax = 3.2; the length H, width W and height Z of the three-dimensional voxel grid are then 400, 400 and 96 respectively, i.e. the three-dimensional voxel grid contains 400 × 400 × 96 voxel units.
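The grid-size computation above can be sketched as follows (a minimal illustration of the stated formulas; the function name and the rounding choice are assumptions, not from the patent):

```python
def grid_dims(x_range, y_range, z_range, h0, w0, z0):
    """Cell counts (H, W, Z) of the 3D voxel grid: sensing range / voxel size."""
    H = round((x_range[1] - x_range[0]) / h0)  # cells along X
    W = round((y_range[1] - y_range[0]) / w0)  # cells along Y
    Z = round((z_range[1] - z_range[0]) / z0)  # cells along Z (grid thickness)
    return H, W, Z

# The worked example from the text:
H, W, Z = grid_dims((0.0, 40.0), (-20.0, 20.0), (-1.6, 3.2), 0.1, 0.1, 0.05)
S = H * W  # number of voxel columns
```

With these presets the grid is 400 × 400 × 96 voxel units, so S = 160 000 voxel columns.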
The detection means may also acquire the three-dimensional voxel grid by: and establishing a three-dimensional voxel grid according to a preset grid size, wherein the preset grid size comprises the length, the width and the height of the three-dimensional voxel grid. The detection device may also obtain the three-dimensional voxel grid in other manners, and this embodiment is not limited in this respect.
In some embodiments, after the detection device acquires the three-dimensional voxel grid, the corresponding relationship between the point in the point cloud and the voxel unit in the three-dimensional voxel grid can be established as follows: and aiming at each point in the point cloud, determining a voxel unit corresponding to the point in the three-dimensional voxel grid according to the three-dimensional space coordinate value corresponding to the point, namely dividing each point in the point cloud into the corresponding voxel unit.
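One natural way to establish this correspondence is to quantize each point's coordinates to an integer voxel index; the floor-division sketch below is an assumed implementation detail, not the patent's code:

```python
def point_to_voxel(point, origin, voxel_size):
    """Map one (x, y, z) point to its integer voxel index (i, j, k)."""
    return tuple(int((p - o) // s) for p, o, s in zip(point, origin, voxel_size))

# A point with the example grid of the text: voxel sizes 0.1/0.1/0.05,
# grid origin at (Xmin, Ymin, Zmin) = (0, -20, -1.6).
idx = point_to_voxel((1.23, 0.33, 0.07), (0.0, -20.0, -1.6), (0.1, 0.1, 0.05))
```

Every point dividing into the same index shares a voxel unit, which is exactly the correspondence step 202 requires.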
In some embodiments, after the detection device establishes the corresponding relationship between the point in the point cloud and the voxel unit in the three-dimensional voxel grid, a sample point to be subjected to feature extraction may be selected from the point cloud.
Specifically, the detection device may select the sample points by: for each voxel unit in the three-dimensional voxel grid, if a plurality of points in the point cloud correspond to the voxel unit, randomly selecting one of those points as the sample point; if only one point in the point cloud corresponds to the voxel unit, taking that point as the sample point. Optionally, if no point in the point cloud corresponds to the voxel unit, a zero point is added as the sample point.
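The per-voxel sampling rule can be sketched as follows (function names and the point layout are illustrative assumptions):

```python
import random
from collections import defaultdict

def sample_per_voxel(points, voxel_of):
    """One sample point per occupied voxel: the sole point, or a random pick."""
    buckets = defaultdict(list)
    for p in points:
        buckets[voxel_of(p)].append(p)   # group points by their voxel unit
    samples = {}
    for vid, pts in buckets.items():
        samples[vid] = pts[0] if len(pts) == 1 else random.choice(pts)
    return samples

# Two points share voxel (0, 0, 0); a third sits alone in voxel (5, 5, 5).
pts = [(0.1, 0.2, 0.3, 50), (0.15, 0.22, 0.31, 60), (5.0, 5.0, 5.0, 70)]
voxel_of = lambda p: (int(p[0]), int(p[1]), int(p[2]))
samples = sample_per_voxel(pts, voxel_of)
```

Empty voxel units would receive a zero point as described above; that branch is omitted here for brevity.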
The detection device may also select the sample points by: selecting sample points from the point cloud by adopting a Farthest Point Sampling (FPS) algorithm.
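Farthest Point Sampling itself is a standard greedy procedure; a plain-Python sketch (not the patent's implementation) repeatedly picks the point farthest from those already selected, which covers sparse regions better than random sampling:

```python
def farthest_point_sampling(points, k):
    """Greedily select k well-spread points (each a tuple of coordinates)."""
    selected = [points[0]]                     # seed with an arbitrary point
    # dist[i] = squared distance from points[i] to its nearest selected point
    dist = [sum((a - b) ** 2 for a, b in zip(p, selected[0])) for p in points]
    while len(selected) < k:
        far = max(range(len(points)), key=lambda i: dist[i])
        selected.append(points[far])
        for i, p in enumerate(points):         # refresh nearest-selected distances
            d = sum((a - b) ** 2 for a, b in zip(p, points[far]))
            dist[i] = min(dist[i], d)
    return selected

pts = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 0.0, 0.0)]
sel = farthest_point_sampling(pts, 3)
```

Note how the near-duplicate point (0.1, 0, 0) is skipped in favor of the distant ones.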
The detection device may also select the sample points by: and selecting sample points from the point cloud by adopting a random algorithm.
The detection device may also select the sample point from the point cloud in other manners, and this embodiment is not limited in this respect.
203. The detection device divides each voxel column in the three-dimensional voxel grid into n sub-columns, and performs feature extraction on sample points corresponding to the n sub-columns to obtain feature information of the voxel column;
after the detection device determines sample points in the point cloud, each voxel column is divided into n sub-columns, feature extraction is carried out on the sample points corresponding to each sub-column to obtain feature information corresponding to each sub-column, and the feature information of the n sub-columns corresponding to each voxel column is stacked to obtain the feature information of S individual voxel columns. Each sub-column comprises Z/n voxel units, n is a preset value, the preset value is a positive number larger than 1, and the sample points corresponding to the n sub-columns refer to the sample points in the points corresponding to the Z voxel units contained in the n sub-columns. Illustratively, n is 3 and Z is 96, each voxel column contains 32 voxel units.
In some embodiments, the detection device may perform feature extraction on the sample points corresponding to each sub-column through a point cloud processing neural network to obtain the feature information corresponding to each sub-column. Specifically, the detection device performs fully connected (FC) and max-pooling operations on each sample point through a neural network of the PointNet architecture, so as to extract a 64-dimensional feature for each sample point.
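A toy numpy sketch of this FC + max-pooling step (random weights and assumed dimensions; a real PointNet-style network would use trained parameters and more layers):

```python
import numpy as np

rng = np.random.default_rng(0)
W_fc = rng.standard_normal((4, 64))   # shared FC weights: (x, y, z, r) -> 64-dim
b_fc = np.zeros(64)

def subcolumn_feature(sample_points):
    """sample_points: (k, 4) array of per-voxel sample points in one sub-column."""
    per_point = np.maximum(sample_points @ W_fc + b_fc, 0.0)  # FC + ReLU, (k, 64)
    return per_point.max(axis=0)                              # max-pool -> (64,)

# One sub-column with Z/n = 32 sample points, each (x, y, z, r).
feat = subcolumn_feature(rng.standard_normal((32, 4)))
```

Max-pooling makes the sub-column feature invariant to the ordering of its sample points, which is the usual motivation for this design.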
In some embodiments, for the Z/n voxel units in each sub-column: if the voxel unit corresponds to exactly one point in the point cloud, that point is the sample point for the voxel unit; if the voxel unit corresponds to multiple points, one point selected from them is the sample point; and if the voxel unit corresponds to no point, the zero point is taken as its sample point. That is, each voxel unit in a sub-column corresponds to one sample point, and each sub-column corresponds to at most Z/n sample points. By selecting sample points on a per-voxel basis before feature extraction, the loss of information in the Z direction can be reduced and the probability that sparse point clouds are sampled is improved, so the detection device can accurately distinguish vehicles and pedestrians in the surrounding environment, improving detection accuracy.
204. The detection device determines the detection result of the obstacle according to the feature information of the S voxel columns and the two-dimensional convolutional neural network.
After the detection device obtains the feature information of the S voxel columns contained in the three-dimensional voxel grid, the feature information of the S voxel columns is input into the two-dimensional convolutional neural network to obtain the detection result of the obstacle. Specifically, the two-dimensional convolutional neural network in this embodiment comprises a point cloud bird's-eye-view (BEV) convolutional network.
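Assembling the 2D network's input can be sketched as follows (a hedged numpy illustration; the (H, W, 64n) layout follows the scenario description earlier, and the toy sizes are assumptions):

```python
import numpy as np

H, W, n = 4, 5, 3                           # toy grid; the text uses H = W = 400, n = 3
subcol_feats = np.random.rand(H, W, n, 64)  # one 64-dim feature per sub-column

# Concatenate the n sub-column features of each voxel column, giving an
# (H, W, 64 * n) feature map that a standard 2D convolution can consume.
bev_input = subcol_feats.reshape(H, W, n * 64)
```

Because the Z dimension is folded into channels here, only 2D convolutions are needed downstream, which is what gives the approach its real-time advantage over 3D convolution.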
In some embodiments, after the detection device inputs the feature information of the S voxel columns into the two-dimensional convolutional neural network, parameters in the BEV convolutional network and the point cloud processing neural network may also be updated through a back propagation algorithm, and the updated BEV convolutional network and point cloud processing neural network are then used to detect subsequently encountered obstacles.
According to the technical scheme, after the point cloud of the obstacle is obtained, the corresponding relation between points in the point cloud and voxel units in the three-dimensional voxel grid can be established, then the voxel column is divided into n sub-columns aiming at each voxel column in the three-dimensional voxel grid, the characteristic extraction is carried out on sample points corresponding to the n sub-columns to obtain the characteristic information of the voxel column, and finally the detection result of the obstacle is determined according to the characteristic information of each voxel column and the two-dimensional convolution neural network model. According to the scheme, the point cloud characteristics are extracted by taking the sub-columns as units, point cloud information can be retained to a large extent, and the detection precision is improved.
Secondly, the point cloud can be sampled in various ways, so that the flexibility of the scheme is improved.
Corresponding to the embodiment of the application function implementation method, the application also provides a detection device, electronic equipment and a corresponding embodiment.
Fig. 3 is a schematic structural diagram of a detection apparatus according to an embodiment of the present application.
Referring to fig. 3, the detecting device 300 includes:
an obtaining module 301, configured to obtain a point cloud of an obstacle;
a first establishing module 302, configured to establish a corresponding relationship between a point in the point cloud and a voxel unit in a three-dimensional voxel grid, where the three-dimensional voxel grid includes S voxel columns, and each voxel column includes Z voxel units;
a segmentation module 303, configured to segment each voxel column in the three-dimensional voxel grid into n sub-columns;
an extraction module 304, configured to perform feature extraction on sample points corresponding to n sub-columns to obtain feature information of the voxel column, where n is a positive number greater than 1, and a sample point is a point in a point cloud;
the first determining module 305 is configured to determine a detection result of the obstacle according to the feature information of the S voxel column and the two-dimensional convolutional neural network model.
In some embodiments, the detection apparatus 300 may further include:
the second determining module is used for randomly selecting one point from a plurality of points as a sample point when the plurality of points in the point cloud correspond to the voxel unit aiming at the voxel unit in the three-dimensional voxel grid;
and the third determining module is used for, for a voxel unit in the three-dimensional voxel grid, taking the point as the sample point when only one point in the point cloud corresponds to the voxel unit.
In some embodiments, the detection apparatus 300 may further include:
and the fourth determining module is used for selecting a sample point from the point cloud by adopting a farthest point sampling algorithm.
In some embodiments, the detection apparatus 300 may further include:
and the second establishing module is used for establishing the three-dimensional voxel grid according to the preset size and the sensing range of the sensor.
According to the technical scheme, after the acquisition module 301 acquires the point cloud of the obstacle, the first establishment module 302 may establish a corresponding relationship between a point in the point cloud and a voxel unit in a three-dimensional voxel grid, the segmentation module 303 segments the voxel column into n sub-columns for each voxel column in the three-dimensional voxel grid, the extraction module 304 performs feature extraction on sample points corresponding to the n sub-columns to obtain feature information of the voxel column, and finally the first determination module 305 determines a detection result of the obstacle by using the feature information of each voxel column and a two-dimensional convolutional neural network model. According to the scheme, the point cloud characteristics are extracted by taking the sub-columns as units, point cloud information can be retained to a large extent, and detection precision is improved.
Secondly, the point cloud can be sampled in various ways, so that the flexibility of the scheme is improved.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 4 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Referring to fig. 4, an electronic device 400 includes a memory 410 and a processor 420.
The Processor 420 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 410 may include various types of storage units, such as system memory, read-only memory (ROM), and a persistent storage device. The ROM may store static data or instructions needed by the processor 420 or other modules of the computer. The persistent storage device may be a readable and writable storage device, and may be a non-volatile storage device that does not lose its stored instructions and data even after the computer is powered down. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is used as the persistent storage device. In other embodiments, the persistent storage device may be a removable storage device (e.g., a floppy disk or an optical drive). The system memory may be a readable and writable memory device or a volatile readable and writable memory device, such as dynamic random access memory. The system memory may store instructions and data that some or all of the processors need at runtime. In addition, the memory 410 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (e.g., DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) as well as magnetic and/or optical disks. In some embodiments, the memory 410 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., an SD card, a mini SD card, a Micro-SD card), a magnetic floppy disk, or the like. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted wirelessly or by wire.
The memory 410 has executable code stored thereon which, when executed by the processor 420, causes the processor 420 to perform some or all of the methods described above.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present application.
Alternatively, the present application may also be embodied as a computer-readable storage medium (or non-transitory machine-readable storage medium or machine-readable storage medium) having executable code (or a computer program or computer instruction code) stored thereon, which, when executed by a processor of an electronic device (or server, etc.), causes the processor to perform part or all of the various steps of the above-described method according to the present application.
The embodiments of the present application have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. An obstacle detection method, characterized by comprising:
acquiring a point cloud of an obstacle;
establishing a corresponding relation between points in the point cloud and voxel units in a three-dimensional voxel grid, wherein the three-dimensional voxel grid comprises S voxel columns, and each voxel column comprises Z voxel units;
for each voxel column in the three-dimensional voxel grid, dividing the voxel column into n sub-columns, and performing feature extraction on sample points corresponding to the n sub-columns to obtain feature information of the voxel column, wherein n is a positive number greater than 1, and the sample points are points in the point cloud;
and determining a detection result of the obstacle according to the feature information of the S voxel columns and a two-dimensional convolutional neural network model.
2. The method of claim 1, wherein after establishing a correspondence between points in the point cloud and voxel cells in a three-dimensional voxel grid, the method comprises:
for a voxel unit in the three-dimensional voxel grid, if a plurality of points exist in the point cloud and correspond to the voxel unit, randomly selecting one point from the plurality of points as a sample point;
and if only one point in the point cloud corresponds to the voxel unit, taking the point as a sample point.
3. The method of claim 1, wherein after establishing a correspondence between points in the point cloud and voxel cells in a three-dimensional voxel grid, the method comprises:
and selecting a sample point from the point cloud by adopting a farthest point sampling algorithm.
4. The method of any one of claims 1 to 3, wherein prior to establishing a correspondence between points in the point cloud and voxel cells in a three-dimensional voxel grid, the method further comprises:
and establishing the three-dimensional voxel grid according to a preset size and the perception range of the sensor.
5. A detection device, comprising:
the acquisition module is used for acquiring a point cloud of an obstacle;
the first establishing module is used for establishing a corresponding relation between points in the point cloud and voxel units in a three-dimensional voxel grid, wherein the three-dimensional voxel grid comprises S voxel columns, and each voxel column comprises Z voxel units;
a segmentation module for segmenting each voxel column in the three-dimensional voxel grid into n sub-columns;
an extraction module, configured to perform feature extraction on sample points corresponding to the n sub-columns to obtain feature information of the voxel column, where n is a positive number greater than 1, and the sample points are points in the point cloud;
and the first determining module is used for determining the detection result of the obstacle according to the feature information of the S voxel columns and the two-dimensional convolutional neural network model.
6. The apparatus of claim 5, further comprising:
a second determining module, configured to, for a voxel unit in the three-dimensional voxel grid, randomly select a point from a plurality of points as a sample point when the plurality of points in the point cloud correspond to the voxel unit;
and the third determining module is used for, for a voxel unit in the three-dimensional voxel grid, taking the point as a sample point when only one point in the point cloud corresponds to the voxel unit.
7. The apparatus of claim 5, further comprising:
and the fourth determining module is used for selecting a sample point from the point cloud by adopting a farthest point sampling algorithm.
8. The apparatus of any of claims 5 to 7, further comprising:
and the second establishing module is used for establishing the three-dimensional voxel grid according to the preset size and the sensing range of the sensor.
9. An electronic device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any one of claims 1-4.
10. A computer-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of any one of claims 1-4.
CN202210820105.6A 2022-07-13 2022-07-13 Obstacle detection method and device and electronic equipment Pending CN114994706A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210820105.6A CN114994706A (en) 2022-07-13 2022-07-13 Obstacle detection method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN114994706A (en) 2022-09-02

Family

ID=83019371



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination