CN114675274A - Obstacle detection method, obstacle detection device, storage medium, and electronic apparatus - Google Patents

Obstacle detection method, obstacle detection device, storage medium, and electronic apparatus

Info

Publication number: CN114675274A
Authority: CN (China)
Prior art keywords: frame, historical, data, current, spatial
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202210234432.3A
Other languages: Chinese (zh)
Inventors: 尹轩宇, 刘博聪, 史皓天, 冯阳
Current Assignee: Beijing Sankuai Online Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Beijing Sankuai Online Technology Co Ltd
Priority date: 2022-03-10 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2022-03-10
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority to CN202210234432.3A
Publication of CN114675274A

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S 13/93 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S 13/931 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 - Lidar systems specially adapted for specific applications
    • G01S 17/93 - Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931 - Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 - Determining position
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S 13/93 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S 13/931 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S 2013/932 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles using own vehicle data, e.g. ground speed, steering wheel direction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure relates to an obstacle detection method, apparatus, storage medium, and electronic device. The method includes: acquiring data to be detected, where the data to be detected includes current frame data and historical frame data, the current frame data includes three-dimensional point cloud data of the current frame collected by a radar of a vehicle and positioning information of the vehicle at the current frame, the historical frame data includes three-dimensional point cloud data of historical frames collected by the radar of the vehicle and positioning information of the vehicle at the historical frames, and the historical frames include a plurality of frames adjacent to and before the current frame; determining target historical frame data corresponding to the historical frame data in the local coordinate system of the current frame; and determining the obstacle information around the vehicle according to the current frame data and the target historical frame data. In this way, the error caused by the movement of the vehicle is compensated while the perception capability of the vehicle is improved, thereby improving the accuracy of obstacle identification.

Description

Obstacle detection method, obstacle detection device, storage medium, and electronic apparatus
Technical Field
The present disclosure relates to the field of unmanned vehicles, and in particular, to a method and an apparatus for detecting an obstacle, a storage medium, and an electronic device.
Background
An unmanned vehicle automatically detects obstacles while driving and obtains information such as the specific position, shape, size, and moving direction of the obstacles around it, which helps the vehicle determine an obstacle-avoidance strategy. In the related art, unmanned vehicles perform 3D object detection using monocular cameras, binocular cameras, multi-line lidars, and the like, and the existing detection techniques include single-frame detection and multi-frame detection.
The driving process of an unmanned vehicle is a temporal sequence. Single-frame detection loses the time-sequence features, so its obstacle-identification accuracy is low; multi-frame detection takes the time-sequence features of the unmanned vehicle into account, but the movement of the unmanned vehicle introduces large errors, so its obstacle-identification accuracy is also low.
Disclosure of Invention
In order to solve the above problems, the present disclosure provides an obstacle detection method, an apparatus, a storage medium, and an electronic device.
In a first aspect, the present disclosure provides an obstacle detection method, the method comprising:
acquiring data to be detected, wherein the data to be detected comprises current frame data and historical frame data, the current frame data comprises three-dimensional point cloud data of a current frame acquired by a radar of a vehicle and positioning information of the vehicle in the current frame, the historical frame data comprises three-dimensional point cloud data of a historical frame acquired by the radar of the vehicle and positioning information of the vehicle in the historical frame, and the historical frame comprises a plurality of frames adjacent to the current frame and before the current frame;
determining target historical frame data corresponding to the historical frame data in the local coordinate system of the current frame;
and determining the obstacle information around the vehicle according to the current frame data and the target historical frame data.
Optionally, the determining the obstacle information around the vehicle according to the current frame data and the target historical frame data includes:
determining a current spatial feature corresponding to the current frame data and a historical spatial feature corresponding to the historical frame data;
and determining obstacle information around the vehicle according to the current spatial feature and the historical spatial feature.
Optionally, the current spatial feature comprises a current two-dimensional spatial feature and a current three-dimensional spatial feature, and the historical spatial feature comprises a historical two-dimensional spatial feature and a historical three-dimensional spatial feature; the determining the current spatial feature corresponding to the current frame data and the historical spatial feature corresponding to the historical frame data includes:
dividing the current frame data into a plurality of current voxels, and dividing the historical frame data into a plurality of historical voxels;
and determining the current two-dimensional spatial feature and the current three-dimensional spatial feature corresponding to each current voxel, and the historical two-dimensional spatial feature and the historical three-dimensional spatial feature corresponding to each historical voxel.
Optionally, the determining obstacle information around the vehicle according to the current spatial feature and the historical spatial feature includes:
inputting the current spatial features and the historical spatial features into a pre-trained obstacle detection model to obtain an obstacle detection frame output by the obstacle detection model;
and determining obstacle information around the vehicle according to the obstacle detection frame.
Optionally, the obstacle detection model includes a first spatial geometric feature acquisition submodel, a second spatial geometric feature acquisition submodel, and an obstacle detection frame acquisition submodel; the inputting of the current spatial feature and the historical spatial feature into a pre-trained obstacle detection model to obtain an obstacle detection frame output by the obstacle detection model comprises:
inputting the current spatial feature into the first spatial geometric feature acquisition submodel to acquire a first spatial geometric feature output by the first spatial geometric feature acquisition submodel;
inputting the historical spatial features into the second spatial geometric feature acquisition submodel to acquire second spatial geometric features output by the second spatial geometric feature acquisition submodel;
and inputting the first spatial geometric feature and the second spatial geometric feature into the obstacle detection frame acquisition submodel to acquire the obstacle detection frame output by the obstacle detection frame acquisition submodel.
Optionally, the obstacle detection model is trained by:
obtaining a plurality of sample detection data, the sample detection data comprising target frame sample data and historical frame sample data, the target frame sample data comprising three-dimensional point cloud data of a target frame acquired by a radar of the vehicle and positioning information of the vehicle at the target frame, the historical frame sample data comprising three-dimensional point cloud data of a target historical frame acquired by a radar of the vehicle and positioning information of the vehicle at the target historical frame, the target historical frame comprising a plurality of frames adjacent to and preceding the target frame;
determining target historical frame sample data corresponding to the historical frame sample data in a local coordinate system of the target frame;
training a target neural network model through a plurality of target frame sample data and a plurality of target historical frame sample data to obtain the obstacle detection model.
In a second aspect, the present disclosure provides an obstacle detection apparatus, the apparatus comprising:
the data acquisition module is used for acquiring data to be detected, wherein the data to be detected comprises current frame data and historical frame data, the current frame data comprises three-dimensional point cloud data of a current frame acquired by a radar of a vehicle and positioning information of the vehicle in the current frame, the historical frame data comprises three-dimensional point cloud data of a historical frame acquired by the radar of the vehicle and positioning information of the vehicle in the historical frame, and the historical frame comprises a plurality of frames which are adjacent to the current frame and are before the current frame;
the data determining module is used for determining target historical frame data corresponding to the historical frame data in the local coordinate system of the current frame;
and the obstacle information determining module is used for determining the obstacle information around the vehicle according to the current frame data and the target historical frame data.
Optionally, the obstacle information determining module is further configured to:
determining a current spatial feature corresponding to the current frame data and a historical spatial feature corresponding to the historical frame data;
and determining obstacle information around the vehicle according to the current spatial feature and the historical spatial feature.
Optionally, the current spatial features include a current two-dimensional spatial feature and a current three-dimensional spatial feature, and the historical spatial features include a historical two-dimensional spatial feature and a historical three-dimensional spatial feature; the obstacle information determination module is further configured to:
dividing the current frame data into a plurality of current voxels, and dividing the historical frame data into a plurality of historical voxels;
and determining the current two-dimensional spatial feature and the current three-dimensional spatial feature corresponding to each current voxel, and the historical two-dimensional spatial feature and the historical three-dimensional spatial feature corresponding to each historical voxel.
Optionally, the obstacle information determining module is further configured to:
inputting the current spatial features and the historical spatial features into a pre-trained obstacle detection model to obtain an obstacle detection frame output by the obstacle detection model;
and determining obstacle information around the vehicle according to the obstacle detection frame.
Optionally, the obstacle detection model includes a first spatial geometric feature acquisition submodel, a second spatial geometric feature acquisition submodel, and an obstacle detection frame acquisition submodel; the obstacle information determination module is further configured to:
inputting the current spatial feature into the first spatial geometric feature acquisition submodel to acquire a first spatial geometric feature output by the first spatial geometric feature acquisition submodel;
inputting the historical spatial features into the second spatial geometric feature acquisition submodel to acquire second spatial geometric features output by the second spatial geometric feature acquisition submodel;
and inputting the first spatial geometric feature and the second spatial geometric feature into the obstacle detection frame acquisition submodel to acquire the obstacle detection frame output by the obstacle detection frame acquisition submodel.
Optionally, the obstacle information determining module is further configured to:
obtaining a plurality of sample detection data, the sample detection data comprising target frame sample data and historical frame sample data, the target frame sample data comprising three-dimensional point cloud data of a target frame acquired by a radar of the vehicle and positioning information of the vehicle at the target frame, the historical frame sample data comprising three-dimensional point cloud data of a target historical frame acquired by a radar of the vehicle and positioning information of the vehicle at the target historical frame, the target historical frame comprising a plurality of frames adjacent to and preceding the target frame;
determining target historical frame sample data corresponding to the historical frame sample data in the local coordinate system of the target frame;
training a target neural network model through a plurality of target frame sample data and a plurality of target historical frame sample data to obtain the obstacle detection model.
In a third aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect of the present disclosure.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of the first aspect of the disclosure.
According to the above technical scheme, the data to be detected is obtained, where the data to be detected includes current frame data and historical frame data, the current frame data includes three-dimensional point cloud data of the current frame acquired by a radar of a vehicle and positioning information of the vehicle at the current frame, the historical frame data includes three-dimensional point cloud data of historical frames acquired by the radar of the vehicle and positioning information of the vehicle at the historical frames, and the historical frames include a plurality of frames adjacent to and before the current frame; target historical frame data corresponding to the historical frame data is determined in the local coordinate system of the current frame; and the obstacle information around the vehicle is determined according to the current frame data and the target historical frame data. That is to say, the present disclosure determines obstacle information around the vehicle by combining current frame data and historical frame data, so the spatial time sequence is taken into account; and since both the current frame data and the historical frame data include three-dimensional point cloud data and positioning information of the vehicle, the error caused by the movement of the vehicle is compensated while the perception capability of the vehicle is improved, thereby improving the accuracy of obstacle identification.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure, but do not constitute a limitation of the disclosure. In the drawings:
fig. 1 is a flow chart illustrating a method of obstacle detection in accordance with an exemplary embodiment of the present disclosure;
fig. 2 is a flow chart illustrating another method of obstacle detection in accordance with an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a current two-dimensional spatial feature in accordance with an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a current three-dimensional spatial feature, according to an exemplary embodiment of the present disclosure;
FIG. 5 is a flow chart illustrating a method of training the obstacle detection model according to an exemplary embodiment of the present disclosure;
fig. 6 is a block diagram illustrating an obstacle detecting device according to an exemplary embodiment of the present disclosure;
fig. 7 is a block diagram illustrating an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
In the description that follows, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or order.
First, an application scenario of the present disclosure will be explained. To ensure that an unmanned vehicle understands and grasps its environment, the environment-sensing part of the unmanned vehicle generally needs to fuse data from multiple sensors, such as lidar, cameras, and millimeter-wave radar, to identify the environment information around the vehicle. Obstacle identification mainly comprises single-frame detection and multi-frame detection. Single-frame detection uses a permutation matrix to order and weight the input data, or uses a symmetric function to extract global and local information from feature points, to construct a 3D-recognition convolutional neural network. However, the single-frame point clouds of a low-beam lidar are too sparse, a high-beam lidar is too expensive to obtain dense point cloud information at low cost, and single-frame detection ignores the temporal sequence in space and discards much useful historical information, so the accuracy and recall of single-frame identification are low. Multi-frame detection alleviates the point cloud sparseness of single-frame detection and takes the spatial time sequence into account, but the movement of the vehicle introduces larger errors, so its identification accuracy is also low.
In order to solve the above problems, the present disclosure provides an obstacle detection method, an apparatus, a storage medium, and an electronic device, which determine obstacle information around a vehicle by combining current frame data and historical frame data. The spatial time sequence is considered when determining the obstacle information, and both kinds of frame data include three-dimensional point cloud data and positioning information of the vehicle, so the error caused by the movement of the vehicle is compensated while the perception capability of the vehicle is improved, thereby improving the accuracy of obstacle identification.
The present disclosure is described below with reference to specific examples.
Fig. 1 is a flowchart illustrating an obstacle detection method according to an exemplary embodiment of the present disclosure, which may be applied to a vehicle, which may be an unmanned vehicle, as shown in fig. 1, which may include:
and S101, acquiring data to be detected.
The data to be detected may include current frame data and historical frame data. The current frame data may include three-dimensional point cloud data of the current frame collected by a radar of the vehicle and positioning information of the vehicle at the current frame, and the historical frame data may include three-dimensional point cloud data of the historical frames collected by the radar of the vehicle and positioning information of the vehicle at the historical frames. The historical frames may include a plurality of frames adjacent to and before the current frame; the number of historical frames may be determined by testing in advance and may be, for example, 2 frames.
In this step, the three-dimensional point cloud data may be obtained periodically through a laser radar (lidar) of the vehicle, and the positioning information of the vehicle may be obtained through a Global Positioning System (GPS) of the vehicle. After the current frame data is acquired, it may be stored, and data from before the stored historical frames may be deleted. For example, if the historical frames are the 2 frames adjacent to and before the current frame, then after storing the current frame data, the data of all frames other than the retained frames may be deleted, where the retained frames are the current frame and the two frames immediately before it.
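As a minimal illustrative sketch (not part of the patent text), this store-and-prune logic can be expressed with a fixed-length buffer; the names FrameData, HISTORY_LEN, and on_new_frame are assumptions introduced here:

```python
from collections import deque
from dataclasses import dataclass

import numpy as np

HISTORY_LEN = 2  # number of historical frames; the disclosure determines this by testing in advance

@dataclass
class FrameData:
    points: np.ndarray  # (N, 4) lidar points: x, y, z, laser reflection intensity
    pose: np.ndarray    # (4, 4) vehicle pose in the world frame, derived from GPS positioning

# keep the current frame plus HISTORY_LEN previous frames; older frames are dropped automatically
frame_buffer: deque = deque(maxlen=HISTORY_LEN + 1)

def on_new_frame(frame: FrameData) -> None:
    """Store the newly acquired frame; data before the retained history window is discarded."""
    frame_buffer.append(frame)
```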
S102, determining target historical frame data corresponding to the historical frame data in the local coordinate system of the current frame.
In this step, after the current frame data is obtained, the stored historical frame data may be retrieved and unified into the local coordinate system of the current frame through an existing matrix transformation method, so as to obtain the target historical frame data corresponding to the historical frame data.
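Continuing the sketch above, the matrix transformation can be written with homogeneous coordinates; the helper name to_current_frame is an assumption, and the 4 x 4 poses are assumed to map local coordinates into a common world frame:

```python
def to_current_frame(hist: FrameData, current: FrameData) -> np.ndarray:
    """Re-express a historical frame's point cloud in the current frame's local coordinate system."""
    # T maps: historical local coords -> world coords -> current local coords
    T = np.linalg.inv(current.pose) @ hist.pose
    n = hist.points.shape[0]
    xyz1 = np.hstack([hist.points[:, :3], np.ones((n, 1))])  # homogeneous coordinates
    transformed = (T @ xyz1.T).T[:, :3]
    return np.hstack([transformed, hist.points[:, 3:]])      # intensity is unchanged
```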
S103, determining the obstacle information around the vehicle according to the current frame data and the target historical frame data.
The obstacle information may include the type and size of the obstacle, which is not limited in this disclosure.
In this step, after the target historical frame data is determined, a current spatial feature corresponding to the current frame data and a historical spatial feature corresponding to the historical frame data may be determined, and obstacle information around the vehicle may be determined according to the current spatial feature and the historical spatial feature.
By adopting this method, the obstacle information around the vehicle can be determined by combining the current frame data and the historical frame data, so the spatial time sequence is considered when determining the obstacle information; and since both the current frame data and the historical frame data include three-dimensional point cloud data and positioning information of the vehicle, the error caused by the movement of the vehicle is compensated while the perception capability of the vehicle is improved, thereby improving the accuracy of obstacle identification.
Fig. 2 is a flowchart illustrating another obstacle detection method according to an exemplary embodiment of the present disclosure, which may include, as shown in fig. 2:
S201, acquiring data to be detected.
The data to be detected may include current frame data and historical frame data. The current frame data may include three-dimensional point cloud data of the current frame acquired by a radar of the vehicle and positioning information of the vehicle at the current frame, and the historical frame data may include three-dimensional point cloud data of the historical frames acquired by the radar of the vehicle and positioning information of the vehicle at the historical frames. The historical frames may include a plurality of frames adjacent to and before the current frame; the number of historical frames may be determined by testing in advance and may be, for example, 2 frames.
S202, determining target historical frame data corresponding to the historical frame data in the local coordinate system of the current frame.
S203, determining the current spatial feature corresponding to the current frame data and the historical spatial feature corresponding to the historical frame data.
The current spatial feature may include a current two-dimensional spatial feature and a current three-dimensional spatial feature, and the historical spatial feature may include a historical two-dimensional spatial feature and a historical three-dimensional spatial feature.
In this step, after the current frame data is obtained and the target historical frame data is determined, the current frame data may be divided into a plurality of current voxels, the target historical frame data may be divided into a plurality of historical voxels, and a current two-dimensional spatial feature and a current three-dimensional spatial feature corresponding to each current voxel, and a historical two-dimensional spatial feature and a historical three-dimensional spatial feature corresponding to each historical voxel, are determined.
For example, the current frame data may be divided into a plurality of current voxels and the target historical frame data into a plurality of historical voxels according to a preset point cloud parameter, where the preset point cloud parameter may be expressed as (x, y, z) = (0.2 m, 0.2 m, 0.5 m).
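Continuing the illustrative sketch above, voxel division under this preset parameter might look as follows (the function name voxelize is introduced here for illustration):

```python
VOXEL_SIZE = np.array([0.2, 0.2, 0.5])  # preset point cloud parameter (x, y, z) in metres

def voxelize(points: np.ndarray) -> dict:
    """Group point cloud rows into voxels keyed by their integer grid index."""
    indices = np.floor(points[:, :3] / VOXEL_SIZE).astype(int)
    voxels = {}
    for idx, point in zip(map(tuple, indices), points):
        voxels.setdefault(idx, []).append(point)
    return {k: np.asarray(v) for k, v in voxels.items()}
```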
For each current voxel, a current two-dimensional spatial feature corresponding to the current voxel is determined. Fig. 3 is a schematic diagram of a current two-dimensional spatial feature according to an exemplary embodiment of the disclosure; as shown in fig. 3, each circle represents a current voxel. The current two-dimensional spatial feature may be an 8-dimensional feature represented by (max z, sum z, max intensity, occupied or not, number of points, sqrt(x² + y²)/max range, atan2(y, x)/π), where max z is the largest z value among the points in the voxel, sum z is the sum of the z values of the points in the voxel, max intensity is the maximum laser reflection intensity among the points in the voxel (intensity takes values from 0 to 255), "occupied or not" takes the value 1 if the voxel contains at least one point and 0 otherwise, number of points is the number of points in the voxel, and max range is the length of the diagonal of the overall space corresponding to the current frame data.
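A hedged sketch of this per-voxel two-dimensional feature follows (the per-voxel reading of the definitions is an assumption, the text lists seven components while calling the feature 8-dimensional, and bev_features is a name introduced here):

```python
def bev_features(voxel_points: np.ndarray, max_range: float) -> np.ndarray:
    """Compute the two-dimensional spatial feature of one occupied voxel."""
    x, y, z, intensity = (voxel_points[:, i] for i in range(4))
    cx, cy = x.mean(), y.mean()  # voxel centre, used for the distance and direction terms
    return np.array([
        z.max(),                             # max z
        z.sum(),                             # sum z
        intensity.max(),                     # max intensity (values 0 to 255)
        1.0,                                 # occupied flag: 1 because this voxel contains points
        float(len(voxel_points)),            # number of points
        np.sqrt(cx**2 + cy**2) / max_range,  # sqrt(x^2 + y^2) / max range
        np.arctan2(cy, cx) / np.pi,          # atan2(y, x) / pi
        0.0,                                 # placeholder: the eighth component is not specified in the text
    ])
```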
For each current voxel, the current three-dimensional spatial feature corresponding to the current voxel may be a 3-dimensional feature. Fig. 4 is a schematic diagram of a current three-dimensional spatial feature according to an exemplary embodiment of the disclosure; as shown in fig. 4, each circle represents a current voxel. For example, the feature may be determined from the other current voxels in the 3 × 3 neighborhood around the current voxel and represented as (dx/fx, dy/fy, dz/fz), where dx, dy, and dz are the mean x, y, and z values of the points in the current voxel, and fx, fy, and fz are the mean x, y, and z values of the other current voxels in the 3 × 3 neighborhood around the current voxel.
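A matching sketch of the 3-dimensional feature over the 3 × 3 neighborhood (neighborhood_features is a name introduced here; returning a zero vector when a voxel has no neighbours is an assumption the patent does not specify):

```python
def neighborhood_features(voxels: dict, idx: tuple) -> np.ndarray:
    """Compute (dx/fx, dy/fy, dz/fz) for the voxel at grid index idx."""
    d = voxels[idx][:, :3].mean(axis=0)  # (dx, dy, dz): mean point of this voxel
    neighbours = [
        voxels[(idx[0] + i, idx[1] + j, idx[2])][:, :3].mean(axis=0)
        for i in (-1, 0, 1) for j in (-1, 0, 1)
        if (i, j) != (0, 0) and (idx[0] + i, idx[1] + j, idx[2]) in voxels
    ]
    if not neighbours:
        return np.zeros(3)
    f = np.mean(neighbours, axis=0)      # (fx, fy, fz): mean over the 3 x 3 neighbourhood
    return d / f                         # a sketch; a real implementation would guard against f == 0
```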
For each historical voxel, the historical two-dimensional spatial feature and the historical three-dimensional spatial feature corresponding to the historical voxel may be determined with reference to the above method for determining the current two-dimensional spatial feature and the current three-dimensional spatial feature of a current voxel, which is not repeated here.
S204, determining obstacle information around the vehicle according to the current spatial feature and the historical spatial feature.
In this step, after determining the current spatial feature corresponding to the current frame data and the historical spatial feature corresponding to the historical frame data, the current spatial feature and the historical spatial feature may be input into a pre-trained obstacle detection model to obtain the obstacle detection frame output by the obstacle detection model, and the obstacle information around the vehicle may be determined according to the obstacle detection frame. The obstacle information may include the center, size, rotation angle, and category of the obstacle, as well as the probability that the obstacle belongs to each category; the categories may include car, cart, pedestrian, animal, and the like.
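For concreteness, and reusing the imports from the sketches above, the decoded obstacle information might be held in a structure like this (the field names are assumptions, not the patent's):

```python
@dataclass
class ObstacleInfo:
    center: np.ndarray       # (3,) obstacle centre in the current frame's local coordinates
    size: np.ndarray         # (3,) length, width, height of the detection box
    yaw: float               # rotation angle of the box around the vertical axis
    class_probs: np.ndarray  # probability per category, e.g. car, cart, pedestrian, animal

    @property
    def category(self) -> int:
        return int(self.class_probs.argmax())  # index of the most probable category
```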
In a possible implementation manner, the obstacle detection model may include a first spatial geometric feature acquisition submodel, a second spatial geometric feature acquisition submodel, and an obstacle detection frame acquisition submodel. After the current spatial feature corresponding to the current frame data and the historical spatial feature corresponding to the historical frame data are determined, the current spatial feature may be input into the first spatial geometric feature acquisition submodel to obtain the first spatial geometric feature it outputs, and the historical spatial features may be input into the second spatial geometric feature acquisition submodel to obtain the second spatial geometric features it outputs. The first spatial geometric feature and the second spatial geometric feature are then spliced, for example through a concat operation, to obtain a spliced spatial geometric feature, and the spliced spatial geometric feature is input into the obstacle detection frame acquisition submodel to obtain the obstacle detection frame it outputs.
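The two-branch structure described here can be sketched in PyTorch as follows; the layer widths, the use of 2D convolutions over a bird's-eye-view feature grid, and the output channel layout are assumptions rather than the patent's actual architecture:

```python
import torch
import torch.nn as nn

class TwoBranchDetector(nn.Module):
    """Two feature branches whose outputs are spliced (concatenated) before a detection head."""

    def __init__(self, in_channels: int = 11, mid_channels: int = 64, out_channels: int = 11):
        super().__init__()
        # first / second spatial geometric feature acquisition sub-models
        self.current_branch = nn.Sequential(nn.Conv2d(in_channels, mid_channels, 3, padding=1), nn.ReLU())
        self.history_branch = nn.Sequential(nn.Conv2d(in_channels, mid_channels, 3, padding=1), nn.ReLU())
        # obstacle detection frame acquisition sub-model (an RPN-style head);
        # out_channels assumed, e.g. box parameters plus class scores per location
        self.detection_head = nn.Conv2d(2 * mid_channels, out_channels, 1)

    def forward(self, current_feat: torch.Tensor, history_feat: torch.Tensor) -> torch.Tensor:
        first = self.current_branch(current_feat)   # first spatial geometric feature
        second = self.history_branch(history_feat)  # second spatial geometric feature
        fused = torch.cat([first, second], dim=1)   # the concat splicing described above
        return self.detection_head(fused)           # per-location obstacle detection frames
```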
Fig. 5 is a flowchart illustrating a training method of the obstacle detection model according to an exemplary embodiment of the present disclosure, and as shown in fig. 5, the method may include:
and S1, acquiring a plurality of sample detection data.
Wherein the sample detection data may include target frame sample data that may include three-dimensional point cloud data of a target frame acquired by a radar of the vehicle and location information of the vehicle at the target frame, and historical frame sample data that may include three-dimensional point cloud data of a target historical frame acquired by a radar of the vehicle and location information of the vehicle at the target historical frame, which may include a plurality of frames adjacent to and preceding the target frame.
The manner of acquiring the multiple sample detection data may refer to the manner of acquiring the data to be detected in step S101, and is not described herein again.
S2, determining the target historical frame sample data corresponding to the historical frame sample data in the local coordinate system of the target frame.
After a plurality of target frame sample data are obtained, for each target frame sample data, the stored historical frame sample data corresponding to it may be obtained and unified into the local coordinate system of the target frame through an existing matrix transformation method (as in step S102), so as to obtain the target historical frame sample data corresponding to the historical frame sample data.
S3, training the target neural network model through a plurality of target frame sample data and a plurality of target historical frame sample data to obtain the obstacle detection model.
The target neural network model may include a first convolutional neural network model, a second convolutional neural network model, and an RPN (Region Proposal Network). For each sample detection data, a target sample spatial feature corresponding to the target frame sample data in the sample detection data and a historical sample spatial feature corresponding to the target historical frame sample data in the sample detection data may be determined; the target sample spatial feature may be determined with reference to the method for determining the current spatial feature in step S203, and the historical sample spatial feature with reference to the method for determining the historical spatial feature in step S203, which is not repeated here.
After the target sample spatial feature corresponding to the target frame sample data and the historical sample spatial feature corresponding to the target historical frame sample data are determined for each sample detection data, the model training step may be performed iteratively until it is determined, according to the obstacle labeling frames and the obstacle prediction frames, that the trained target neural network model satisfies a preset iteration-stopping condition, at which point the trained target neural network model is used as the obstacle detection model. An obstacle prediction frame is the prediction box output after the target sample spatial features and the historical sample spatial features are input into the trained target neural network model. The obstacle labeling frames may be annotated manually, and the quality of the annotation may be checked afterwards to improve their accuracy.
The model training step comprises:
and S31, inputting the target sample space characteristics and the historical sample space characteristics corresponding to the plurality of sample detection data into the target neural network model, and outputting the obstacle prediction frame corresponding to each sample detection data.
For each sample detection data: the target sample spatial feature corresponding to the sample detection data is input into the first convolutional neural network model to obtain the first sample spatial geometric feature it outputs; the historical sample spatial feature corresponding to the sample detection data is input into the second convolutional neural network model to obtain the second sample spatial geometric feature it outputs; the first sample spatial geometric feature and the second sample spatial geometric feature are spliced to obtain a spliced sample spatial geometric feature; and the spliced sample spatial geometric feature is input into the RPN to obtain the obstacle prediction frame, corresponding to the sample detection data, output by the RPN.
S32, in a case where it is determined, according to the obstacle labeling frames and the obstacle prediction frames, that the target neural network model does not satisfy the preset iteration-stopping condition: determining a loss value according to the obstacle labeling frames and the obstacle prediction frames, updating the parameters of the target neural network model according to the loss value to obtain a trained target neural network model, and taking the trained target neural network model as the new target neural network model.
The preset iteration-stopping condition may be any iteration-stopping condition used in existing model training, which is not limited by the present disclosure. After the loss value is determined, the parameters of the first convolutional neural network model, the second convolutional neural network model, and the RPN may be updated according to the loss value.
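Continuing the PyTorch sketch above, one iteration of the model training step might look as follows; the choice of smooth L1 loss and the Adam optimiser are assumptions, since the patent does not specify them:

```python
model = TwoBranchDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # covers both CNN branches and the RPN head

def training_step(target_feat: torch.Tensor, history_feat: torch.Tensor,
                  labeled_boxes: torch.Tensor) -> float:
    """One iteration: forward pass, loss against the obstacle labeling frames, parameter update."""
    predicted_boxes = model(target_feat, history_feat)  # obstacle prediction frames
    loss = nn.functional.smooth_l1_loss(predicted_boxes, labeled_boxes)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```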
It should be noted that the present disclosure trains the first convolutional neural network model with the target frame sample data and trains the second convolutional neural network model with the historical frame sample data; based on this, the spatial geometric features of the current frame and of the historical frames can be extracted by separate branches and then fused for detection.
by adopting the method, the obstacle information around the vehicle can be determined by combining the current frame data and the historical frame data, the current frame data and the historical frame data are data based on frame level, the spatial time sequence can be embodied when the obstacle information is determined, and the current frame data and the historical frame data both comprise three-dimensional point cloud data and positioning information of the vehicle, so that the sensing capability of the vehicle can be improved while the problem of the movement of the vehicle is solved, and the accuracy of obstacle identification is improved. Furthermore, the current frame data and the historical frame data are divided into a plurality of voxels, feature extraction is carried out on each voxel, and obstacle information around the vehicle is determined according to the extracted features, so that the space geometric features of the data to be processed can be reflected better, and the accuracy of obstacle identification is further improved.
Fig. 6 is a block diagram illustrating an obstacle detection apparatus according to an exemplary embodiment of the present disclosure, which may include, as shown in fig. 6:
the data acquisition module 601 is configured to acquire data to be detected, where the data to be detected includes current frame data and historical frame data, the current frame data includes three-dimensional point cloud data of a current frame acquired by a radar of a vehicle and location information of the vehicle in the current frame, the historical frame data includes three-dimensional point cloud data of a historical frame acquired by the radar of the vehicle and location information of the vehicle in the historical frame, and the historical frame includes multiple frames adjacent to and before the current frame;
a data determining module 602, configured to determine target historical frame data corresponding to the historical frame data in the local coordinate system of the current frame;
an obstacle information determining module 603, configured to determine obstacle information around the vehicle according to the current frame data and the target historical frame data.
Optionally, the obstacle information determining module 603 is further configured to:
determining a current spatial feature corresponding to the current frame data and a historical spatial feature corresponding to the historical frame data;
and determining obstacle information around the vehicle according to the current spatial feature and the historical spatial feature.
Optionally, the current spatial feature comprises a current two-dimensional spatial feature and a current three-dimensional spatial feature, and the historical spatial feature comprises a historical two-dimensional spatial feature and a historical three-dimensional spatial feature; the obstacle information determination module is further configured to:
dividing the current frame data into a plurality of current voxels, and dividing the historical frame data into a plurality of historical voxels;
determining the current two-dimensional spatial feature and the current three-dimensional spatial feature corresponding to each of the current voxels, and the historical two-dimensional spatial feature and the historical three-dimensional spatial feature corresponding to each of the historical voxels.
Optionally, the obstacle information determining module 603 is further configured to:
inputting the current spatial feature and the historical spatial feature into a pre-trained obstacle detection model to obtain an obstacle detection frame output by the obstacle detection model;
and determining obstacle information around the vehicle according to the obstacle detection frame.
Optionally, the obstacle detection model includes a first spatial geometric feature acquisition submodel, a second spatial geometric feature acquisition submodel, and an obstacle detection frame acquisition submodel; the obstacle information determination module 603 is further configured to:
inputting the current spatial feature into the first spatial geometric feature acquisition submodel to acquire a first spatial geometric feature output by the first spatial geometric feature acquisition submodel;
inputting the historical spatial features into the second spatial geometric feature acquisition submodel to acquire second spatial geometric features output by the second spatial geometric feature acquisition submodel;
and inputting the first spatial geometric feature and the second spatial geometric feature into the obstacle detection frame acquisition submodel to acquire the obstacle detection frame output by the obstacle detection frame acquisition submodel.
Optionally, the obstacle information determining module 603 is further configured to:
acquiring a plurality of sample detection data, wherein the sample detection data comprises target frame sample data and historical frame sample data, the target frame sample data comprises three-dimensional point cloud data of a target frame acquired by a radar of the vehicle and positioning information of the vehicle on the target frame, the historical frame sample data comprises three-dimensional point cloud data of a target historical frame acquired by the radar of the vehicle and positioning information of the vehicle on the target historical frame, and the target historical frame comprises a plurality of frames adjacent to and before the target frame;
determining target historical frame sample data corresponding to the historical frame sample data in a local coordinate system of the target frame;
training a target neural network model through a plurality of target frame sample data and a plurality of target historical frame sample data to obtain the obstacle detection model.
By means of this apparatus, the obstacle information around the vehicle can be determined by combining the current frame data and the historical frame data; the spatial time sequence is considered when determining the obstacle information, and since both the current frame data and the historical frame data include three-dimensional point cloud data and positioning information of the vehicle, the error caused by the movement of the vehicle is compensated while the perception capability of the vehicle is improved, thereby improving the accuracy of obstacle identification.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 7 is a block diagram illustrating an electronic device 700 according to an exemplary embodiment of the present disclosure. As shown in fig. 7, the electronic device 700 may include: a processor 701 and a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700 so as to complete all or part of the steps of the above obstacle detection method. The memory 702 is used to store various types of data to support operation on the electronic device 700, such as instructions for any application or method operating on the electronic device 700 and application-related data, for example contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 702 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia components 703 may include screen and audio components. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 702 or transmitted through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, mouse, or buttons. These buttons may be virtual buttons or physical buttons. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or the like, or a combination of one or more of them, which is not limited herein. The corresponding communication component 705 may thus include a Wi-Fi module, a Bluetooth module, an NFC module, and the like.
In an exemplary embodiment, the electronic Device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described obstacle detection method.
In another exemplary embodiment, there is also provided a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the above-described obstacle detection method. For example, the computer readable storage medium may be the memory 702 described above comprising program instructions executable by the processor 701 of the electronic device 700 to perform the obstacle detection method described above.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned obstacle detection method when executed by the programmable apparatus.
In another exemplary embodiment, a vehicle is also provided, including the electronic device 700 described above.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure. It should be noted that, in the foregoing embodiments, various features described in the above embodiments may be combined in any suitable manner, and in order to avoid unnecessary repetition, various combinations that are possible in the present disclosure are not described again.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (10)

1. An obstacle detection method, characterized in that the method comprises:
acquiring data to be detected, wherein the data to be detected comprises current frame data and historical frame data, the current frame data comprises three-dimensional point cloud data of a current frame acquired by a radar of a vehicle and positioning information of the vehicle in the current frame, the historical frame data comprises three-dimensional point cloud data of a historical frame acquired by the radar of the vehicle and positioning information of the vehicle in the historical frame, and the historical frame comprises a plurality of frames which are adjacent to the current frame and are before the current frame;
determining target historical frame data corresponding to the historical frame data in the local coordinate system of the current frame;
and determining the obstacle information around the vehicle according to the current frame data and the target historical frame data.
2. The method of claim 1, wherein the determining obstacle information around the vehicle from the current frame data and the target historical frame data comprises:
determining a current spatial feature corresponding to the current frame data and a historical spatial feature corresponding to the historical frame data;
and determining obstacle information around the vehicle according to the current spatial feature and the historical spatial feature.
3. The method of claim 2, wherein the current spatial features comprise current two-dimensional spatial features and current three-dimensional spatial features, and the historical spatial features comprise historical two-dimensional spatial features and historical three-dimensional spatial features; the determining the current spatial feature corresponding to the current frame data and the historical spatial feature corresponding to the historical frame data includes:
dividing the current frame data into a plurality of current voxels, and dividing the historical frame data into a plurality of historical voxels;
and determining the current two-dimensional spatial feature and the current three-dimensional spatial feature corresponding to each current voxel, and the historical two-dimensional spatial feature and the historical three-dimensional spatial feature corresponding to each historical voxel.
4. The method of claim 2, wherein the determining obstacle information around the vehicle based on the current spatial signature and the historical spatial signature comprises:
inputting the current spatial features and the historical spatial features into a pre-trained obstacle detection model to obtain an obstacle detection frame output by the obstacle detection model;
and determining obstacle information around the vehicle according to the obstacle detection frame.
5. The method of claim 4, wherein the obstacle detection model comprises a first spatial geometry acquisition submodel, a second spatial geometry acquisition submodel, and an obstacle detection box acquisition submodel; the step of inputting the current spatial features and the historical spatial features into a pre-trained obstacle detection model to obtain an obstacle detection frame output by the obstacle detection model comprises:
inputting the current spatial feature into the first spatial geometric feature acquisition submodel to acquire a first spatial geometric feature output by the first spatial geometric feature acquisition submodel;
inputting the historical spatial features into the second spatial geometric feature acquisition submodel to acquire second spatial geometric features output by the second spatial geometric feature acquisition submodel;
and inputting the first spatial geometric feature and the second spatial geometric feature into the obstacle detection frame acquisition submodel to acquire the obstacle detection frame output by the obstacle detection frame acquisition submodel.
6. The method of claim 4, wherein the obstacle detection model is trained by:
obtaining a plurality of sample detection data, the sample detection data comprising target frame sample data and historical frame sample data, the target frame sample data comprising three-dimensional point cloud data of a target frame acquired by a radar of the vehicle and positioning information of the vehicle at the target frame, the historical frame sample data comprising three-dimensional point cloud data of a target historical frame acquired by a radar of the vehicle and positioning information of the vehicle at the target historical frame, the target historical frame comprising a plurality of frames adjacent to and preceding the target frame;
determining target historical frame sample data corresponding to the historical frame sample data in a local coordinate system of the target frame;
training a target neural network model using the plurality of target frame sample data and the plurality of target historical frame sample data to obtain the obstacle detection model.
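A generic supervised loop matching the training procedure of claim 6; the loss function, optimizer, batch layout, and the name `train_detector` are assumptions, as the claim specifies none of them:

```python
import torch
import torch.nn.functional as F

def train_detector(model, loader, epochs=10, lr=1e-3):
    """Fit the detection model on (current features, historical features,
    ground-truth boxes) batches; all hyperparameters are illustrative."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for current_feat, history_feat, gt_boxes in loader:
            pred = model(current_feat, history_feat)
            loss = F.smooth_l1_loss(pred, gt_boxes)  # box regression loss (assumed)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

Note that, as in claim 1, the historical frame sample data is first re-expressed in the target frame's local coordinate system before the model is trained.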
7. An obstacle detection device, characterized in that the device comprises:
the data acquisition module is used for acquiring data to be detected, wherein the data to be detected comprises current frame data and historical frame data, the current frame data comprises three-dimensional point cloud data of a current frame acquired by a radar of a vehicle and positioning information of the vehicle in the current frame, the historical frame data comprises three-dimensional point cloud data of a historical frame acquired by the radar of the vehicle and positioning information of the vehicle in the historical frame, and the historical frame comprises a plurality of frames adjacent to and preceding the current frame;
the data determining module is used for determining target historical frame data corresponding to the historical frame data in the local coordinate system of the current frame;
and the obstacle information determining module is used for determining the obstacle information around the vehicle according to the current frame data and the target historical frame data.
8. The apparatus of claim 7, wherein the obstacle information determining module is further configured to:
determine a current spatial feature corresponding to the current frame data and a historical spatial feature corresponding to the historical frame data;
and determine obstacle information around the vehicle according to the current spatial feature and the historical spatial feature.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 6.
CN202210234432.3A 2022-03-10 2022-03-10 Obstacle detection method, obstacle detection device, storage medium, and electronic apparatus Pending CN114675274A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210234432.3A CN114675274A (en) 2022-03-10 2022-03-10 Obstacle detection method, obstacle detection device, storage medium, and electronic apparatus

Publications (1)

Publication Number Publication Date
CN114675274A (en) 2022-06-28

Family

ID=82072419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210234432.3A Pending CN114675274A (en) 2022-03-10 2022-03-10 Obstacle detection method, obstacle detection device, storage medium, and electronic apparatus

Country Status (1)

Country Link
CN (1) CN114675274A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020206708A1 (en) * 2019-04-09 2020-10-15 广州文远知行科技有限公司 Obstacle recognition method and apparatus, computer device, and storage medium
CN112154356A (en) * 2019-09-27 2020-12-29 深圳市大疆创新科技有限公司 Point cloud data processing method and device, laser radar and movable platform
CN112329754A (en) * 2021-01-07 2021-02-05 深圳市速腾聚创科技有限公司 Obstacle recognition model training method, obstacle recognition method, device and system
CN113642620A (en) * 2021-07-30 2021-11-12 北京三快在线科技有限公司 Model training and obstacle detection method and device
CN114155268A (en) * 2021-11-24 2022-03-08 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112417967B (en) Obstacle detection method, obstacle detection device, computer device, and storage medium
US11772654B2 (en) Occupancy prediction neural networks
US11783568B2 (en) Object classification using extra-regional context
CN113264066B (en) Obstacle track prediction method and device, automatic driving vehicle and road side equipment
US20220343758A1 (en) Data Transmission Method and Apparatus
US11755917B2 (en) Generating depth from camera images and known depth data using neural networks
US20210132619A1 (en) Predicting cut-in probabilities of surrounding agents
CN112200129A (en) Three-dimensional target detection method and device based on deep learning and terminal equipment
US20210312177A1 (en) Behavior prediction of surrounding agents
CN113298250A (en) Neural network for localization and object detection
US20210364637A1 (en) Object localization using machine learning
CN114528941A (en) Sensor data fusion method and device, electronic equipment and storage medium
CN114675274A (en) Obstacle detection method, obstacle detection device, storage medium, and electronic apparatus
CN112433193B (en) Multi-sensor-based mold position positioning method and system
CN110892449A (en) Image processing method and device and mobile device
EP3640679B1 (en) A method for assigning ego vehicle to a lane
CN116047537B (en) Road information generation method and system based on laser radar
CN114581615B (en) Data processing method, device, equipment and storage medium
US20220180549A1 (en) Three-dimensional location prediction from images
CN115019278B (en) Lane line fitting method and device, electronic equipment and medium
CN113361379B (en) Method and device for generating target detection system and detecting target
CN116152776A (en) Method, device, equipment and storage medium for identifying drivable area
CN117985053A (en) Sensing capability detection method and device
KR20230099518A (en) Apparatus for acquiring autonomous driving learning data and method thereof
CN114612754A (en) Target detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination