CN113424079A - Obstacle detection method, obstacle detection device, computer device, and storage medium

Info

Publication number
CN113424079A
Authority
CN
China
Prior art keywords
point cloud
point
image
previous frame
clouds
Legal status
Pending
Application number
CN201980037711.7A
Other languages
Chinese (zh)
Inventor
Inventor not announced
Current Assignee
DeepRoute AI Ltd
Original Assignee
DeepRoute AI Ltd
Application filed by DeepRoute AI Ltd

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/93: Lidar systems specially adapted for specific applications for anti-collision purposes

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

An obstacle detection method, apparatus (500), computer device (104), and storage medium. The method comprises: acquiring a current frame point cloud image and a previous frame point cloud image (202); determining, for each first point cloud in the current frame point cloud image, a matching second point cloud in the previous frame point cloud image by performing point cloud matching on the two images (204); calculating a displacement vector of each second point cloud relative to the matched first point cloud (206); clustering the second point clouds whose displacement vectors are larger than a threshold value to obtain at least one group of target point clouds (208); and determining each group of target point clouds as an obstacle (210). The method can improve the accuracy of obstacle detection.

Description

Obstacle detection method, obstacle detection device, computer device, and storage medium

Technical Field
The present application relates to an obstacle detection method, apparatus, computer device, and storage medium.
Background
An unmanned vehicle, also known as an autonomous vehicle, a computer-driven vehicle, or a wheeled mobile robot, is an intelligent vehicle that relies on the cooperation of artificial intelligence, visual computing, radar, monitoring devices, and global positioning equipment to allow a computer to operate the vehicle automatically without human intervention. During unmanned driving, identifying moving objects in the driving area is of primary importance. Conventionally, obstacle detection is performed mainly by a machine learning model. However, such a model can only detect the obstacle categories that participated in training, such as people and vehicles; obstacle categories that did not participate in training (for example, animals or traffic cones) cannot be correctly identified by the machine learning model, which increases the probability that the unmanned vehicle causes a safety accident because a moving object is not correctly identified.
Disclosure of Invention
According to various embodiments disclosed herein, there are provided an obstacle detection method, an apparatus, a computer device, and a storage medium.
An obstacle detection method comprising:
acquiring a current frame point cloud image and a previous frame point cloud image;
determining, for each first point cloud in the current frame point cloud image, a matching second point cloud in the previous frame point cloud image by performing point cloud matching on the current frame point cloud image and the previous frame point cloud image;
calculating a displacement vector of each second point cloud relative to the matched first point cloud;
clustering the second point clouds of which the displacement vectors are larger than a threshold value to obtain at least one group of target point clouds;
and determining each group of target point clouds as an obstacle.
An obstacle detection apparatus comprising:
the point cloud matching module is used for acquiring a current frame point cloud image and a previous frame point cloud image, and for determining, for each first point cloud in the current frame point cloud image, a matching second point cloud in the previous frame point cloud image by performing point cloud matching on the current frame point cloud image and the previous frame point cloud image;
the point cloud clustering module is used for calculating a displacement vector of each second point cloud relative to the matched first point cloud, and for clustering the second point clouds whose displacement vectors are larger than a threshold value to obtain at least one group of target point clouds;
and the determination module is used for determining each group of target point clouds as an obstacle.
A computer device comprising a memory and one or more processors, the memory having stored therein computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of:
acquiring a current frame point cloud image and a previous frame point cloud image;
determining, for each first point cloud in the current frame point cloud image, a matching second point cloud in the previous frame point cloud image by performing point cloud matching on the current frame point cloud image and the previous frame point cloud image;
calculating a displacement vector of each second point cloud relative to the matched first point cloud;
clustering the second point clouds of which the displacement vectors are larger than a threshold value to obtain at least one group of target point clouds;
and determining each group of target point clouds as an obstacle.
One or more non-transitory computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of:
acquiring a current frame point cloud image and a previous frame point cloud image;
determining, for each first point cloud in the current frame point cloud image, a matching second point cloud in the previous frame point cloud image by performing point cloud matching on the current frame point cloud image and the previous frame point cloud image;
calculating a displacement vector of each second point cloud relative to the matched first point cloud;
clustering the second point clouds of which the displacement vectors are larger than a threshold value to obtain at least one group of target point clouds;
and determining each group of target point clouds as an obstacle.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below. Other features and advantages of the application will be apparent from the description and drawings, and from the claims.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a diagram illustrating an application scenario of the obstacle detection method according to an embodiment;
FIG. 2 is a schematic flow chart of a method for obstacle detection in one embodiment;
FIG. 3A is a schematic aerial view of a spatial coordinate system according to an embodiment;
FIG. 3B is a three-dimensional schematic diagram of a spatial coordinate system in one embodiment;
FIG. 4 is a schematic flow chart illustrating the point cloud matching process based on point features according to one embodiment;
FIG. 5 is a block diagram of an obstacle detection device in one embodiment;
FIG. 6 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to make the technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The obstacle detection method provided by the application can be applied to various application environments. For example, it may be applied in an autonomous driving application environment as shown in fig. 1, which may include a laser sensor 102 and a computer device 104. The computer device 104 may communicate with the laser sensor 102 over a network. The laser sensor 102 may acquire a plurality of frames of point cloud images of the surrounding environment, and the computer device 104 may acquire a previous frame of point cloud image and a current frame of point cloud image acquired by the laser sensor 102, and process the previous frame of point cloud image and the current frame of point cloud image by using the above-mentioned obstacle detection method, thereby realizing the detection of the obstacle. The laser sensor 102 may be a sensor mounted on the automatic driving device, and specifically may include a laser radar, a laser scanner, and the like.
In one embodiment, as shown in fig. 2, an obstacle detection method is provided, which is illustrated by applying the method to the computer device 104 in fig. 1, and includes the following steps:
step 202, acquiring a current frame point cloud image and a previous frame point cloud image.
The laser sensor may be mounted on a device capable of automatic driving. For example, it may be mounted on an unmanned vehicle, or on a vehicle equipped with an automated driving module. The laser sensor may be used to collect environmental data within its visual range.
Specifically, a laser sensor may be mounted on the unmanned vehicle in advance. The laser sensor transmits a detection signal to the driving area at a preset time frequency, compares the signal reflected by objects in the driving area with the detection signal to obtain surrounding environment data, and generates a corresponding point cloud image based on the environment data. The point cloud image is a collection of point clouds that records, in the form of points, the objects in the scanned environment; the point clouds correspond to a plurality of points on object surfaces. A point cloud may specifically include information such as the three-dimensional spatial position coordinates of a single point on an object surface in a spatial coordinate system, the laser reflection intensity, and the color. The spatial coordinate system may be a Cartesian coordinate system. As shown in figs. 3A and 3B, the spatial coordinate system takes the central point of the laser sensor as the origin and the horizontal plane through the laser sensor as the reference plane (the xOy plane); the axis in the reference plane aligned with the direction of motion of the unmanned vehicle is the Y axis; the axis in the reference plane passing through the origin and perpendicular to the Y axis is the X axis; and the axis passing through the origin and perpendicular to the reference plane is the Z axis. FIG. 3A is a bird's eye view of the spatial coordinate system according to an embodiment, and FIG. 3B is a three-dimensional diagram of the spatial coordinate system according to an embodiment.
The laser sensor embeds a timestamp in each acquired point cloud image and sends the timestamped point cloud image to the computer device. In one embodiment, the laser sensor can transmit the locally stored point cloud images collected within a preset time period to the computer device at one time. The computer device sorts the point cloud images according to their timestamps; of two temporally adjacent point cloud images, the earlier one is determined as the previous frame point cloud image and the later one as the current frame point cloud image.
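The frame-pairing logic above can be expressed as a short sketch. The following Python fragment is illustrative only and not part of the original disclosure; the field name `timestamp` is an assumed data layout:

```python
# Illustrative sketch of the frame-pairing step (assumed data layout:
# each point cloud image is a dict carrying at least a "timestamp" field).
def pair_adjacent_frames(images):
    """Sort point cloud images by embedded timestamp and yield
    (previous_frame, current_frame) pairs of temporally adjacent images."""
    ordered = sorted(images, key=lambda img: img["timestamp"])
    for earlier, later in zip(ordered, ordered[1:]):
        # The earlier image is the previous frame, the later one the current frame.
        yield earlier, later
```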
And 204, performing point cloud matching on the current frame point cloud image and the previous frame point cloud image, and determining, for each first point cloud in the current frame point cloud image, a matching second point cloud in the previous frame point cloud image.
Specifically, the current frame point cloud image and the previous frame point cloud image are input into the trained matching model. The matching model can extract the spatial position information of each second point cloud from the previous frame point cloud image, and screen out, from the current frame point cloud image, a plurality of first point clouds whose distance from the spatial position of the second point cloud is less than a preset distance threshold. For example, when the distance threshold is q, the spatial position coordinates of the second point cloud are (x2, y2, z2), and the coordinates of a screened first point cloud are (x1, y1, z1), then x1 ∈ [x2 − q, x2 + q], y1 ∈ [y2 − q, y2 + q], and z1 ∈ [z2 − q, z2 + q]. The matching model may be a neural network model, a dual path network (DPN) model, a support vector machine, or a logistic regression model.
Further, the matching model extracts first point features from the first point clouds and second point features from the second point cloud, performs similarity matching between the first point features and the second point features, and determines the first point cloud with the maximum similarity as the point cloud matched with the second point cloud. The matched first point cloud and second point cloud may then be point cloud data acquired at different times for the same point on the surface of the same object. The matching model traverses each second point cloud in the previous frame point cloud image until every second point cloud is matched with a corresponding first point cloud. Because the laser sensor collects multiple point cloud images per second, the position coordinates of a moving obstacle in the driving area differ little between two adjacent frames; the matching model therefore only needs to perform similarity matching between the second point cloud and the first point clouds within the distance threshold in order to find the matching first point cloud.
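As a minimal sketch of this two-stage matching (distance gating followed by feature similarity), the following Python fragment is illustrative only; cosine similarity is an assumption, since the disclosure does not fix a particular similarity measure:

```python
import numpy as np

def candidate_first_points(second_xyz, first_xyz_all, q):
    """Distance gate: keep first point clouds within +-q of the second point
    cloud on every axis, i.e. x1 in [x2-q, x2+q], and likewise for y and z."""
    diff = np.abs(first_xyz_all - second_xyz)   # (N, 3) per-axis distances
    return np.flatnonzero(np.all(diff <= q, axis=1))

def match_by_feature(second_feat, first_feats, candidates):
    """Among the gated candidates, pick the first point cloud whose feature
    is most similar to the second point cloud's feature (cosine similarity)."""
    if len(candidates) == 0:
        return None, 0.0                        # caller may enlarge q and retry
    sims = [float(np.dot(second_feat, first_feats[i]) /
                  (np.linalg.norm(second_feat) * np.linalg.norm(first_feats[i]) + 1e-12))
            for i in candidates]
    best = int(np.argmax(sims))
    return int(candidates[best]), sims[best]
```

The `None` branch corresponds to the fallback described in the next paragraph: when no candidate matches, the distance threshold is enlarged and the gating repeated.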
In another embodiment, if the feature matching results are all smaller than the preset threshold, the matching model correspondingly increases the distance threshold, screens out the corresponding first point cloud from the current frame point cloud image according to the increased distance threshold, and then performs point cloud matching based on the re-screened first point cloud.
In another embodiment, the training of the matching model comprises: collecting a large number of point cloud images, and dividing the collected point cloud images into a plurality of image pairs according to their collection times. Each image pair comprises a current frame point cloud image and a previous frame point cloud image. Matching points may be marked on the current frame point cloud image and the previous frame point cloud image; the marked images are then input into the matching model, and the parameters in the model are adjusted according to the matching point marks during training.
In another embodiment, a plurality of current frame point cloud images and previous frame point cloud images with matching point markers may be generated using simulation software.
At step 206, a displacement vector of each second point cloud relative to the matched first point cloud is calculated.
The displacement vector is a directed line segment whose start point is the coordinates of a moving particle at the current moment in the spatial coordinate system and whose end point is the coordinates of the moving particle at the next moment in the spatial coordinate system.
Specifically, the computer device extracts the spatial position coordinates of the point from the second point cloud and from the first point cloud matched with it, takes the spatial position coordinates of the second point cloud as the start point and the spatial position coordinates of the matched first point cloud as the end point, and thereby obtains the displacement vector of the second point cloud relative to the matched first point cloud. Because the first point cloud and the second point cloud are both acquired by the same laser sensor, their spatial position coordinates are expressed in the same spatial coordinate system; the computer device therefore only needs to connect the spatial position coordinates of the second point cloud to those of the matched first point cloud in the spatial coordinate system to obtain the displacement vector.
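The displacement vector and its magnitude can be computed directly from the two coordinate triples; the sketch below is illustrative (assuming NumPy arrays) and follows the start point/end point convention just described:

```python
import numpy as np

def displacement_vector(second_xyz, first_xyz):
    """Displacement of a matched pair: start at the second point cloud
    (previous frame), end at the matched first point cloud (current frame)."""
    d = np.asarray(first_xyz, dtype=float) - np.asarray(second_xyz, dtype=float)
    magnitude = float(np.linalg.norm(d))   # |d| = sqrt(dx^2 + dy^2 + dz^2)
    return d, magnitude
```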
And 208, clustering the second point clouds of which the displacement vectors are larger than the threshold value to obtain at least one group of target point clouds.
Specifically, the computer device calculates the magnitude (absolute value) of the displacement vector corresponding to each second point cloud based on a preset formula, and takes this magnitude as the distance moved by a single point on an object surface in the driving area within the acquisition interval of the laser sensor. For example, when the spatial position coordinates of the first point cloud are (x1, y1, z1) and the spatial position coordinates of the second point cloud are (x2, y2, z2), the magnitude of the displacement vector is:

|d| = √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²)
Further, the computer device screens out, from the previous frame point cloud image, the second point clouds whose displacement magnitudes are larger than a preset threshold (for convenience of description, these are recorded as point clouds to be classified), and treats them as point cloud data collected from the surfaces of moving objects in the driving area. The computer device then clusters the point clouds to be classified to obtain at least one group of target point clouds. The clustering can be done in several ways. For example, the computer device may determine the spatial position coordinates of each point cloud to be classified and divide the point clouds whose adjacent spatial positions are closer than a threshold into one group of target point clouds. As another example, the computer device may group the point clouds to be classified using a clustering algorithm such as k-means clustering or Mean-Shift clustering.
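As a minimal sketch of this screening-then-clustering step (illustrative only; Mean-Shift is used here because it is one of the algorithms named above, and the bandwidth value is an assumption):

```python
import numpy as np
from sklearn.cluster import MeanShift

def cluster_moving_points(second_xyz, magnitudes, motion_threshold):
    """Keep only second point clouds whose displacement magnitude exceeds the
    threshold (the "point clouds to be classified"), then group them spatially;
    each resulting cluster is one group of target point clouds."""
    moving = second_xyz[magnitudes > motion_threshold]
    if len(moving) == 0:
        return moving, np.array([], dtype=int)
    labels = MeanShift(bandwidth=1.0).fit_predict(moving)  # bandwidth is illustrative
    return moving, labels
```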
Step 210, determining each group of target point clouds as an obstacle.
Specifically, the computer device treats different groups of target point clouds as point cloud data acquired from different obstacles. Meanwhile, the computer device acquires the spatial position coordinates and corresponding displacement vectors of the target point clouds in the same group, determines the direction pointed to by the displacement vectors as the motion direction of the target point clouds, and then determines the motion direction and position coordinates of the corresponding obstacle from the spatial position coordinates and motion directions of the target point clouds.
In another embodiment, the computer device determines the movement speed of the corresponding obstacle from the acquisition interval of the laser sensor and the magnitude of the displacement vector, predicts the spatial position of the obstacle after a preset time period from the movement speed and movement direction, and then generates an obstacle avoidance instruction according to the predicted position. For example, when the computer device predicts that an obstacle moving linearly at its current direction and speed may collide with the unmanned vehicle after the preset time, it generates a brake instruction accordingly so as to stop the unmanned vehicle.
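The constant-velocity prediction and the resulting brake decision can be sketched as follows (illustrative only; the safety radius and the stationary-vehicle simplification are assumptions not taken from the disclosure):

```python
import numpy as np

def predict_position(position, unit_direction, speed, dt):
    """Linear constant-velocity prediction of an obstacle's position after dt seconds."""
    return np.asarray(position, dtype=float) + speed * dt * np.asarray(unit_direction, dtype=float)

def should_brake(obstacle_pos, obstacle_dir, obstacle_speed, vehicle_pos, dt, safety_radius=2.0):
    """Emit a brake decision if the predicted obstacle position comes within an
    (assumed) safety radius of the vehicle position."""
    future = predict_position(obstacle_pos, obstacle_dir, obstacle_speed, dt)
    return bool(np.linalg.norm(future - np.asarray(vehicle_pos, dtype=float)) < safety_radius)
```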
In another embodiment, suppose the previous frame point cloud image is the image data acquired by the laser sensor at time t−1, and the current frame point cloud image is the image data acquired at time t. The computer device determines the movement speed of point cloud A according to the acquisition interval of the laser sensor and the magnitude of the displacement vector corresponding to point cloud A at time t−1, and predicts the spatial position of the point cloud at time t+2 from the movement speed and movement direction. Then, when performing point cloud matching between the point cloud images acquired at times t+1 and t+2, the predicted spatial position of point cloud A at time t+2 can be used to assist the matching.
More specifically, when the spatial position information of point cloud A is extracted by the matching model from the point cloud image acquired at time t+1, and a plurality of first point clouds whose distance is smaller than the preset distance threshold are screened from the point cloud image acquired at time t+2 according to that spatial position information, the computer device obtains the spatial position coordinates of each screened first point cloud, subtracts the predicted spatial position coordinates of point cloud A at time t+2 from them to obtain coordinate differences, and takes the absolute value of the coordinate differences to obtain absolute differences. The computer device then further screens, from the plurality of first point clouds, the point cloud data whose absolute difference is smaller than a preset difference threshold. The difference threshold is smaller than the distance threshold.
The acquisition interval of the laser sensor is generally about 10 ms, so the movement speed and movement direction of an obstacle can be assumed to remain unchanged from time t−1 to time t+2. Using the movement direction and speed of the obstacle at time t−1, the spatial position coordinates estimated for time t+2 have a high degree of confidence, and the first point cloud matching the second point cloud can be expected to lie near the estimated coordinates. The plurality of point cloud data can therefore be further screened based on the estimated spatial coordinates, which reduces the amount of point cloud computation in the subsequent matching model and improves the obstacle detection efficiency.
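The prediction-assisted narrowing can be sketched as follows (illustrative; computing the absolute difference as a Euclidean norm of the coordinate difference is one reading of the step above):

```python
import numpy as np

def refine_candidates(first_xyz_all, candidates, predicted_xyz, diff_threshold):
    """Among the distance-gated candidates, keep only those whose absolute
    difference from the position predicted for time t+2 is below the difference
    threshold (which is smaller than the distance threshold)."""
    kept = []
    for i in candidates:
        abs_diff = np.linalg.norm(first_xyz_all[i] - np.asarray(predicted_xyz, dtype=float))
        if abs_diff < diff_threshold:
            kept.append(i)
    return kept
```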
In this embodiment, by performing point cloud matching on the current frame point cloud image and the previous frame point cloud image, the first point cloud and second point cloud corresponding to the same point on the surface of the same object in the two frames can be determined. By calculating displacement vectors between matched first and second point clouds, the movement of objects in the driving area within the acquisition interval of the laser sensor can be determined, so that points whose movement displacement is larger than a threshold are determined as moving points and points whose displacement is smaller than the threshold as static points; all moving obstacles in the driving area can then be identified by clustering the moving points. Compared with the traditional method of detecting obstacles of known categories through a machine learning model, this method does not need to identify the category of the obstacle; the purpose of avoiding moving obstacles can be achieved merely by judging whether an object in the driving area is a moving obstacle.
In one embodiment, determining, for each first point cloud in the current frame point cloud image, a matching second point cloud in the previous frame point cloud image by performing point cloud matching on the current frame point cloud image and the previous frame point cloud image comprises:
step 302, acquiring point features corresponding to a second point cloud extracted from the previous frame point cloud image, and point features corresponding to a first point cloud extracted from the current frame point cloud image;
step 304, carrying out similarity matching on the point characteristics corresponding to the second point cloud and the point characteristics corresponding to the first point cloud;
and step 306, determining the first point cloud and the second point cloud with similarity matching results meeting the conditions as matched point cloud pairs.
Specifically, the computer device may input the acquired point cloud image into the matching model. The matching model rasterizes the point cloud image, divides the three-dimensional space corresponding to the image into a plurality of columnar grids, and determines the grid to which each point cloud belongs according to the spatial position coordinates in the point cloud. The matching model then convolves the point clouds within each grid with preset convolution kernels, thereby extracting high-dimensional point features from the point clouds. The matching model may be one of a variety of neural network models; for example, it may be a convolutional neural network model. The matching model performs feature matching between the point features extracted from the second point cloud and the point features extracted from the plurality of first point clouds, and determines the first point cloud with the maximum matching degree as the point cloud matched with the second point cloud.
In this embodiment, since the matching model is a machine learning model trained in advance, point cloud matching between the current frame point cloud image and the previous frame point cloud image can be performed accurately based on the matching model, so that the computer device can subsequently judge the motion state of an obstacle based on the successfully matched point clouds.
In one embodiment, acquiring the point feature corresponding to the second point cloud extracted from the previous frame of point cloud image comprises: structuring the previous frame of point cloud image to obtain a processing result; and coding the second point cloud in the previous frame of point cloud image based on the processing result to obtain the point characteristics corresponding to the second point cloud.
Specifically, the matching model may perform structuring processing on the previous frame point cloud image to obtain a structured processing result. For example, the matching model may rasterize the previous frame point cloud image, or voxelize it. Taking rasterization as an example, the computer device may rasterize the plane with the laser sensor at the origin, dividing the plane into a plurality of grid cells. Each cell extends vertically to form a columnar (pillar-shaped) space: a point belongs to the pillar whose cell contains its horizontal coordinates (abscissa and ordinate), and each pillar may contain at least one point. The matching model can then encode each second point cloud according to the structured processing result to obtain the point feature corresponding to the second point cloud.
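The pillar assignment described above can be sketched as follows (illustrative only; the cell size is an assumed parameter):

```python
import numpy as np

def assign_to_pillars(xyz, cell_size):
    """Rasterize the horizontal plane into square cells; each cell extends
    vertically as a pillar. A point belongs to the pillar whose cell contains
    its (x, y) coordinates, regardless of its height z."""
    cells = np.floor(xyz[:, :2] / cell_size).astype(int)   # (N, 2) cell indices
    pillars = {}
    for idx, cell in enumerate(map(tuple, cells)):
        pillars.setdefault(cell, []).append(idx)           # point indices per pillar
    return pillars
```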
It is easy to understand that the point feature extraction method can also be used to extract the point feature of each first point cloud in the current frame point cloud image.
In this embodiment, structuring the point cloud image yields a corresponding processing result, and encoding that result extracts point features from the point cloud data. Because point cloud data are not affected by illumination, target movement speed, and the like, the matching model suffers little interference during point feature extraction, which guarantees the accuracy of feature extraction.
In one embodiment, calculating a displacement vector for each second point cloud relative to the matching first point cloud comprises: acquiring the spatial position information of the first point cloud and the spatial position information of the matched second point cloud; and determining a displacement vector of the second point cloud relative to the corresponding first point cloud based on the spatial position information of the first point cloud and the matched spatial position information of the second point cloud.
The point cloud is composed of point cloud data, and the point cloud data comprises three-dimensional coordinates of points in a space coordinate system, laser reflection intensity and other information.
Specifically, the computer device extracts from a first point cloud and from the second point cloud matched with it the three-dimensional coordinates of the corresponding point in the spatial coordinate system, takes the three-dimensional coordinates extracted from the second point cloud (for example, (x2, y2, z2)) as the start point of the displacement vector and the three-dimensional coordinates extracted from the first point cloud (for example, (x1, y1, z1)) as the end point, and inserts the displacement vector, comprising the start point and the end point, into the point cloud data corresponding to the second point cloud.
In another embodiment, if the previous frame point cloud image and the current frame point cloud image are acquired by two laser sensors mounted on the left and right sides of the unmanned vehicle respectively, the computer device further needs to perform coordinate registration on the two images: the point cloud images acquired in different coordinate systems are converted into two frame point cloud images in the same coordinate system, and the displacement vectors are then calculated from these. Specifically, the computer device acquires the point cloud images collected at the same moment by the two laser sensors, recording the image collected by the sensor on the left side as the left point cloud image and the image collected by the sensor on the right side as the right point cloud image. The computer device extracts from the left point cloud image the spatial coordinates of the left point cloud collected for an object point A, extracts from the right point cloud image the spatial coordinates of the right point cloud collected for the same object point A, and performs coordinate conversion on the current frame point cloud image or the previous frame point cloud image based on these two sets of spatial coordinates, so that the current frame point cloud image and the previous frame point cloud image lie in the same spatial coordinate system.
In this embodiment, the spatial position information of the point clouds is extracted from the matched point cloud pairs, and accurate displacement vectors can be determined from the extracted spatial position coordinates, so that moving point clouds can subsequently be screened out from the point clouds based on the displacement vectors.
In one embodiment, clustering the second point clouds whose displacement vectors are greater than a threshold value to obtain at least one group of target point clouds comprises: screening out second point clouds of which the displacement vectors are larger than a threshold value from the previous frame of point cloud image, and recording the second point clouds as point clouds to be classified; acquiring spatial position information of point clouds to be classified; determining the motion direction of the point cloud to be classified based on the displacement vector; and clustering the point clouds to be classified, which have similar motion directions and have adjacent space position intervals smaller than a threshold value, to obtain at least one group of target point clouds.
Specifically, when the displacement vector of each second point cloud relative to the matched first point cloud has been calculated, the computer calculates the magnitude of each displacement vector to obtain the distance between each second point cloud and the matched first point cloud, i.e. the motion amplitude of the corresponding point on the object surface within the acquisition interval. The computer screens out from the previous frame point cloud image the point clouds to be classified whose motion amplitude is larger than a preset motion threshold, and extracts their spatial position coordinates and corresponding displacement vectors. The motion threshold may be set according to actual needs; for example, when the acquisition interval of the laser sensor is 10 ms, the motion threshold may be set to 0.05 m.
Further, the computer determines the current movement direction of the unmanned vehicle through a direction-positioning compass installed in the vehicle, and from this determines the direction represented by each axis of the spatial coordinate system whose origin is the central point of the laser sensor. For example, after determining from the compass that the unmanned vehicle is currently moving north, the computer takes the positive Y axis of the three-dimensional coordinate system as north, the positive X axis as east, the negative Y axis as south, and the negative X axis as west. The computer projects each displacement vector onto the xOy plane, calculates the angles between the projected vector and the X and Y axes from the start and end coordinates of the vector, and then determines the motion direction of the corresponding point cloud to be classified from the calculated angles and the directions corresponding to the X and Y axes.
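A minimal sketch of this direction computation (illustrative; it reports a compass-style heading with +Y as north and +X as east, as in the example above):

```python
import math

def heading_of(displacement):
    """Project the displacement vector onto the xOy plane and return its heading
    in degrees: 0 = north (+Y), 90 = east (+X), 180 = south, 270 = west."""
    dx, dy = float(displacement[0]), float(displacement[1])  # drop the z component
    return math.degrees(math.atan2(dx, dy)) % 360.0
```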
Further, the computer clusters the point clouds to be classified according to the motion direction and the spatial position coordinates of the point clouds to be classified, and divides the point clouds to be classified, which have similar motion directions and the distance between adjacent spatial positions is smaller than a threshold value, into the same group of point clouds.
In this embodiment, the calculated motion direction and spatial position information are not affected by ambient illumination or by the appearance of the obstacle. Compared with the traditional method of clustering obstacles by relying on simple geometric appearance features and environment information, the method can therefore effectively reduce environmental influence and greatly improve clustering accuracy.
In one embodiment, the obstacle detection method further includes: calculating the motion parameters of the corresponding obstacles according to the displacement vector of each target point cloud in the same group of target point clouds; generating a corresponding obstacle avoidance instruction according to the motion parameters of the obstacle; and controlling the unmanned vehicle to run based on the obstacle avoidance instruction.
The motion parameters refer to quantities such as the movement speed and the movement direction of the object.
Specifically, the computer device acquires the acquisition frequency of the laser sensor and calculates the movement speed of each point cloud based on the acquisition frequency and the displacement vector of that point cloud. The computer device performs a weighted average over the movement speeds of all point clouds in the same group to obtain a mean speed, and takes this mean as the movement speed of the corresponding obstacle. Meanwhile, the computer device acquires the displacement vector of each point cloud in the group, determines the motion direction of each point cloud from its displacement vector, and aggregates the motion directions of all point clouds in the group to obtain the motion direction of the corresponding obstacle.
Further, the computer device determines the area in which the corresponding obstacle is located according to the spatial coordinates of each point cloud in the group, and compares the area in which the obstacle is located with the area in which the unmanned vehicle is located, thereby determining whether the obstacle and the unmanned vehicle are in the same lane. If they are in the same lane, the computer device acquires the spacing distance between the obstacle and the unmanned vehicle, and calculates the collision probability of the unmanned vehicle with the obstacle from the current speed of the unmanned vehicle, the maximum braking deceleration, and the spacing distance. When the collision probability is larger than a threshold value, the computer device generates a lane-change instruction and controls the unmanned vehicle to change lanes based on it; if the collision probability is smaller than the threshold value, the computer device generates a deceleration instruction and controls the unmanned vehicle to decelerate based on it.
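The disclosure does not give the collision-probability formula, so the sketch below substitutes a simple risk proxy based on the minimum stopping distance v²/(2·a_max); it is illustrative only:

```python
def collision_risk(speed, max_decel, gap):
    """Risk proxy: ratio of the minimum stopping distance v^2 / (2 * a_max)
    to the current gap; values near or above 1 mean braking alone may not suffice."""
    stopping_distance = speed * speed / (2.0 * max_decel)
    return stopping_distance / max(gap, 1e-6)

def choose_maneuver(speed, max_decel, gap, risk_threshold=1.0):
    """Lane change when the risk exceeds the threshold, otherwise decelerate."""
    return "CHANGE_LANE" if collision_risk(speed, max_decel, gap) > risk_threshold else "DECELERATE"
```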
In this embodiment, since the computer device generates the obstacle avoidance instruction by integrating the spatial position coordinate, the movement speed, and the current vehicle speed of the obstacle, the confidence of the generated obstacle avoidance instruction is high, so that the unmanned vehicle can correctly run based on the obstacle avoidance instruction, and the safety of unmanned driving is greatly improved.
In one embodiment, calculating the motion parameters of the respective obstacle from the displacement vector of each of the target point clouds includes: determining the motion direction of each target point cloud in the same group based on the displacement vector; counting the point cloud number of the target point cloud corresponding to each motion direction; and determining the motion direction of the target point cloud with the largest point cloud number as the motion direction of the corresponding obstacle.
Specifically, the computer equipment acquires a displacement vector of each target point cloud in the same group, and determines the motion direction of each target point cloud based on the displacement vector and the axis direction of the three-dimensional coordinate axis. And the computer equipment counts the number of the target point clouds corresponding to each different motion direction and determines the motion direction with the maximum number of the target point clouds as the motion direction of the corresponding obstacle.
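The majority vote over per-point directions can be sketched as follows (illustrative; the 10-degree angular bin is an assumption used to make continuous headings countable):

```python
from collections import Counter

def obstacle_direction(point_headings, bin_deg=10.0):
    """Quantize per-point headings into angular bins and return the bin with
    the most target point clouds as the obstacle's direction of motion."""
    votes = Counter((round(h / bin_deg) * bin_deg) % 360.0 for h in point_headings)
    heading, _count = votes.most_common(1)[0]
    return heading
```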
In this embodiment, since the movement directions of different parts of the same obstacle are substantially the same, the movement direction of the largest number of target point clouds can be directly determined as the movement direction of the corresponding obstacle. Moreover, because counting is a simple accumulation operation, the movement direction of the obstacle can be obtained with little computation, which both saves computing resources and improves the obstacle detection rate.
In one embodiment, calculating the motion parameters of the corresponding obstacle from the displacement vector of each target point cloud comprises: acquiring the acquisition frequency of the laser sensor; determining a displacement value of each target point cloud in the same group based on the displacement vector; determining the movement speed of each target point cloud in the same group according to the acquisition frequency and the displacement value; and aggregating the movement speeds of the target point clouds in the same group to obtain the movement speed of the corresponding obstacle.
Specifically, after obtaining the displacement vector of each target point cloud in the group, the computer device takes the magnitude of the displacement vector as the displacement value of that target point cloud. The computer device divides the displacement value by the acquisition interval of the laser sensor to obtain the movement speed of the target point cloud, and then aggregates the movement speeds of all target point clouds in the group to obtain the movement speed of the corresponding obstacle. The aggregation may use any of several algorithms. For example, the computer device may average the movement speeds of all target point clouds in the group and take the mean as the movement speed of the obstacle; or it may first remove a certain percentage of the fastest and slowest target point clouds and then average the remaining ones.
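The trimmed-mean variant of the aggregation can be sketched as follows (illustrative; the trim fraction is an assumed parameter):

```python
import numpy as np

def obstacle_speed(displacement_magnitudes, interval_s, trim_frac=0.1):
    """Per-point speed = displacement value / acquisition interval; aggregate by
    discarding the fastest and slowest trim_frac of points, then averaging."""
    speeds = np.sort(np.asarray(displacement_magnitudes, dtype=float) / interval_s)
    k = int(len(speeds) * trim_frac)
    kept = speeds[k:len(speeds) - k] if len(speeds) > 2 * k else speeds
    return float(kept.mean())
```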
In this embodiment, the computer device may determine the movement speed of the target point clouds according to the acquisition frequency and the displacement vector of the laser sensor, and determine the movement speed of the corresponding obstacle according to the movement speed of each target point cloud, which is helpful for the computer device to prompt or control the unmanned device according to the movement speed of the obstacle.
It should be understood that although the steps in the flowcharts of figs. 2 and 4 are shown in the sequence indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in figs. 2 and 4 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided an obstacle detecting apparatus 500 including: a point cloud matching module 502, a point cloud clustering module 504, and a decision module 506, wherein:
a point cloud matching module 502, configured to obtain a current frame point cloud image and a previous frame point cloud image; and determining a second point cloud which is matched between each first point cloud in the current frame point cloud image and the previous frame point cloud image by performing point cloud matching on the current frame point cloud image and the previous frame point cloud image.
A point cloud clustering module 504 for calculating a displacement vector of each second point cloud relative to the matched first point cloud; and clustering the second point clouds of which the displacement vectors are larger than the threshold value to obtain at least one group of target point clouds.
A determining module 506, configured to determine each group of target point clouds as an obstacle.
In one embodiment, the point cloud matching module 502 is further configured to acquire point features corresponding to a second point cloud extracted from the previous frame point cloud image and point features corresponding to a first point cloud extracted from the current frame point cloud image; perform similarity matching between the point features corresponding to the second point cloud and those corresponding to the first point cloud; and determine a first point cloud and a second point cloud whose similarity matching result meets the conditions as a matched point cloud pair.
In one embodiment, the point cloud matching module 502 is further configured to perform structuring processing on a previous frame of point cloud image to obtain a processing result; and coding the second point cloud in the previous frame of point cloud image based on the processing result to obtain the point characteristics corresponding to the second point cloud.
In one embodiment, the point cloud clustering module 504 is further configured to obtain spatial location information of a first point cloud and spatial location information of a second point cloud that matches the first point cloud; and determining a displacement vector of the second point cloud relative to the corresponding first point cloud based on the spatial position information of the first point cloud and the matched spatial position information of the second point cloud.
In one embodiment, the point cloud clustering module 504 is further configured to screen out a second point cloud with a displacement vector larger than a threshold from a previous frame of point cloud image, and record the second point cloud as a point cloud to be classified; acquiring spatial position information of point clouds to be classified; determining the motion direction of the point cloud to be classified based on the displacement vector; and clustering the point clouds to be classified, which have similar motion directions and have adjacent space position intervals smaller than a threshold value, to obtain at least one group of target point clouds.
In one embodiment, the obstacle detection apparatus 500 includes an obstacle avoidance instruction generating module 508, configured to calculate a motion parameter of a corresponding obstacle according to a displacement vector of each target point cloud in the same group of target point clouds; generating a corresponding obstacle avoidance instruction according to the motion parameters of the obstacle; and controlling the unmanned vehicle to run based on the obstacle avoidance instruction.
In one embodiment, the obstacle avoidance instruction generating module 508 is further configured to determine a motion direction of each target point cloud in the same group based on the displacement vector; counting the point cloud number of the target point cloud corresponding to each motion direction; and determining the motion direction of the target point cloud with the largest point cloud number as the motion direction of the corresponding obstacle.
In one embodiment, the obstacle avoidance instruction generating module 508 is further configured to acquire the acquisition frequency of the laser sensor; determine a displacement value of each target point cloud in the same group based on the displacement vector; determine the movement speed of each target point cloud in the same group according to the acquisition frequency and the displacement value; and aggregate the movement speeds of the target point clouds in the same group to obtain the movement speed of the corresponding obstacle.
For specific limitations of the obstacle detection device, reference may be made to the above limitations of the obstacle detection method, which are not described herein again. The respective modules in the above obstacle detection apparatus may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer readable instructions, and a database. The internal memory provides an environment for the operating system and execution of computer-readable instructions in the non-volatile storage medium. The database of the computer device is used for storing the detection data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer readable instructions, when executed by a processor, implement a method of obstacle detection.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
A computer device comprising a memory and one or more processors, the memory having stored therein computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of the above method embodiments.
One or more non-transitory computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the above-described method embodiments.
It will be understood by those of ordinary skill in the art that all or part of the processes of the methods of the above embodiments can be implemented by instructing the relevant hardware through computer-readable instructions, which can be stored in a non-volatile computer-readable storage medium; when executed, the instructions can include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

  1. An obstacle detection method comprising:
    acquiring a current frame point cloud image and a previous frame point cloud image;
determining, for each first point cloud in the current frame point cloud image, a matching second point cloud in the previous frame point cloud image by performing point cloud matching on the current frame point cloud image and the previous frame point cloud image;
    calculating a displacement vector of each second point cloud relative to the matched first point cloud;
    clustering the second point clouds of which the displacement vectors are larger than a threshold value to obtain at least one group of target point clouds;
and determining each group of target point clouds as an obstacle.
  2. The method of claim 1, wherein determining, for each first point cloud in the current frame point cloud image, a matching second point cloud in the previous frame point cloud image by performing point cloud matching on the current frame point cloud image and the previous frame point cloud image comprises:
    acquiring point features corresponding to a second point cloud extracted from the previous frame point cloud image and point features corresponding to a first point cloud extracted from the current frame point cloud image;
    carrying out similarity matching on the point characteristics corresponding to the second point cloud and the point characteristics corresponding to the first point cloud;
    and determining the first point cloud and the second point cloud of which the similarity matching results meet the conditions as a matched point cloud pair.
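One plausible realization of this similarity matching, assuming each point already carries a nonzero D-dimensional feature vector: score all pairs with cosine similarity, then let a mutual-nearest-neighbour test plus a minimum score stand in for the claim's unspecified matching condition. The names and the 0.8 threshold are hypothetical.

    import numpy as np

    def match_by_similarity(feats_first, feats_second, min_similarity=0.8):
        # Normalize rows so that a dot product equals cosine similarity.
        a = feats_first / np.linalg.norm(feats_first, axis=1, keepdims=True)
        b = feats_second / np.linalg.norm(feats_second, axis=1, keepdims=True)
        sim = a @ b.T  # (M, N) similarity matrix

        best_second = sim.argmax(axis=1)  # best candidate per first-cloud point
        best_first = sim.argmax(axis=0)   # best candidate per second-cloud point

        # Accept a pair only if the choice is mutual and similar enough.
        return [(i, j) for i, j in enumerate(best_second)
                if best_first[j] == i and sim[i, j] >= min_similarity]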
  3. The method of claim 2, wherein acquiring the point features corresponding to the second point clouds extracted from the previous frame point cloud image comprises:
    structuring the previous frame point cloud image to obtain a processing result; and
    encoding the second point clouds in the previous frame point cloud image based on the processing result to obtain the point features corresponding to the second point clouds.
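The claim fixes neither the structuring nor the encoding method. One plausible reading, sketched below with invented names, uses a voxel grid as the structuring step and a centroid-offset descriptor as the encoding step; a learned encoder (for example, a PointNet-style network) would be a natural drop-in replacement.

    import numpy as np

    def voxelize(points, voxel_size=0.2):
        # Structuring: map each point to an integer voxel coordinate.
        return np.floor(points / voxel_size).astype(np.int64)

    def encode_points(points, voxel_size=0.2):
        # Encoding: describe each point by its offset from its voxel centroid
        # (local shape) concatenated with the centroid itself (global context).
        voxels = voxelize(points, voxel_size)
        keys, inverse = np.unique(voxels, axis=0, return_inverse=True)
        inverse = inverse.ravel()
        feats = np.zeros((len(points), 6))
        for v in range(len(keys)):
            idx = np.where(inverse == v)[0]
            centroid = points[idx].mean(axis=0)
            feats[idx, :3] = points[idx] - centroid
            feats[idx, 3:] = centroid
        return feats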
  4. The method of claim 1, wherein calculating the displacement vector of each second point cloud relative to the matched first point cloud comprises:
    acquiring spatial position information of the first point cloud and spatial position information of the matched second point cloud; and
    determining the displacement vector of the second point cloud relative to the corresponding first point cloud based on the spatial position information of the first point cloud and the spatial position information of the matched second point cloud.
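Once the matched spatial positions are in hand, the displacement vector is a per-pair coordinate difference, as in this one-step illustration (coordinates invented):

    import numpy as np

    p_first = np.array([2.0, 1.0, 0.0])   # matched point in the current frame
    p_second = np.array([1.5, 1.0, 0.0])  # matched point in the previous frame
    displacement = p_first - p_second     # [0.5, 0.0, 0.0]: 0.5 m along x between frames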
  5. The method of claim 1, wherein clustering the second point clouds whose displacement vectors are greater than the threshold to obtain the at least one group of target point clouds comprises:
    screening out, from the previous frame point cloud image, the second point clouds whose displacement vectors are greater than the threshold, and recording them as point clouds to be classified;
    acquiring spatial position information of the point clouds to be classified;
    determining motion directions of the point clouds to be classified based on the displacement vectors; and
    clustering the point clouds to be classified whose motion directions are similar and whose adjacent spatial distances are smaller than a distance threshold, to obtain the at least one group of target point clouds.
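A hedged sketch of this two-condition grouping: a point joins a cluster only when its motion direction lies within a small angle of the seed point's and its distance to the seed is below a threshold. The 20-degree and 1.0 m values, like the function name, are assumptions for illustration; the screened points have nonzero displacements, so the direction normalization is safe.

    import numpy as np

    def cluster_moving_points(positions, displacements, radius=1.0, max_angle_deg=20.0):
        dirs = displacements / np.linalg.norm(displacements, axis=1, keepdims=True)
        cos_min = np.cos(np.radians(max_angle_deg))
        labels = -np.ones(len(positions), dtype=int)  # -1 marks "not yet visited"
        next_label = 0
        for i in range(len(positions)):
            if labels[i] != -1:
                continue
            labels[i] = next_label
            stack = [i]
            while stack:
                j = stack.pop()
                close = np.linalg.norm(positions - positions[j], axis=1) < radius
                aligned = dirs @ dirs[j] > cos_min  # similar motion direction
                for k in np.where(close & aligned)[0]:
                    if labels[k] == -1:
                        labels[k] = next_label
                        stack.append(k)
            next_label += 1
        return labels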
  6. The method of claim 1, further comprising:
    calculating motion parameters of the corresponding obstacle according to the displacement vectors of the target point clouds in a same group of target point clouds;
    generating a corresponding obstacle avoidance instruction according to the motion parameters of the obstacle; and
    controlling an unmanned vehicle to travel based on the obstacle avoidance instruction.
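Claim 6 couples perception to control. The patent defines no vehicle-control interface, so the glue code below is purely hypothetical: AvoidanceCommand, plan_avoidance, and the 5 m/s braking rule are all invented for illustration only.

    from dataclasses import dataclass

    @dataclass
    class AvoidanceCommand:
        brake: bool
        steer_offset_deg: float

    def plan_avoidance(obstacle_speed_mps, obstacle_direction_deg):
        # Brake for fast obstacles; otherwise nudge the heading away from slow ones.
        if obstacle_speed_mps > 5.0:
            return AvoidanceCommand(brake=True, steer_offset_deg=0.0)
        return AvoidanceCommand(brake=False, steer_offset_deg=-0.1 * obstacle_direction_deg)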
  7. The method of claim 6, wherein the motion parameters comprise a motion direction, and calculating the motion parameters of the corresponding obstacle according to the displacement vectors of the target point clouds comprises:
    determining a motion direction of each target point cloud in the same group based on the displacement vectors;
    counting the number of target point clouds corresponding to each motion direction; and
    determining the motion direction with the largest number of target point clouds as the motion direction of the corresponding obstacle.
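This is a majority vote over quantized directions. A minimal sketch, assuming planar (x, y) motion and a 10-degree bin width of our own choosing:

    import numpy as np

    def dominant_direction(displacements, bin_deg=10.0):
        angles = np.degrees(np.arctan2(displacements[:, 1], displacements[:, 0])) % 360.0
        bins = (angles // bin_deg).astype(int)            # one vote per target point cloud
        counts = np.bincount(bins, minlength=int(360 / bin_deg))
        return counts.argmax() * bin_deg + bin_deg / 2.0  # centre of the winning bin, degrees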
  8. The method of claim 6, wherein the motion parameters comprise a motion speed, the point cloud images are acquired by a laser sensor, and calculating the motion parameters of the corresponding obstacle according to the displacement vectors of the target point clouds comprises:
    acquiring an acquisition frequency of the laser sensor;
    determining a displacement value of each target point cloud within the same group based on the displacement vectors;
    determining a motion speed of each target point cloud in the same group according to the acquisition frequency and the displacement value; and
    combining the motion speeds of the target point clouds in the same group to obtain the motion speed of the corresponding obstacle.
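The speed step follows from the frame period: consecutive frames arrive 1/f seconds apart, so each point's speed is its displacement magnitude multiplied by the acquisition frequency f. Averaging the per-point speeds below is an assumed stand-in for the claim's unspecified combination step.

    import numpy as np

    def obstacle_speed(displacements, acquisition_hz=10.0):
        per_point = np.linalg.norm(displacements, axis=1) * acquisition_hz  # m/s per point
        return float(per_point.mean())  # one plausible way to combine the group's speeds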
  9. An obstacle detection device, comprising:
    a point cloud matching module configured to acquire a current frame point cloud image and a previous frame point cloud image, and to determine, by performing point cloud matching on the current frame point cloud image and the previous frame point cloud image, a second point cloud in the previous frame point cloud image that matches each first point cloud in the current frame point cloud image;
    a point cloud clustering module configured to calculate a displacement vector of each second point cloud relative to the matched first point cloud, and to cluster the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds; and
    a determination module configured to determine each group of target point clouds to be an obstacle.
  10. The device of claim 9, wherein the point cloud matching module is further configured to:
    acquire point features corresponding to the second point clouds extracted from the previous frame point cloud image and point features corresponding to the first point clouds extracted from the current frame point cloud image;
    perform similarity matching between the point features corresponding to the second point clouds and the point features corresponding to the first point clouds; and
    determine a first point cloud and a second point cloud whose similarity matching result satisfies a condition to be a matched point cloud pair.
  11. The device of claim 9, wherein the point cloud matching module is further configured to:
    structure the previous frame point cloud image to obtain a processing result; and
    encode the second point clouds in the previous frame point cloud image based on the processing result to obtain the point features corresponding to the second point clouds.
  12. A computer device comprising a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the steps of:
    acquiring a current frame point cloud image and a previous frame point cloud image;
    determining, by performing point cloud matching on the current frame point cloud image and the previous frame point cloud image, a second point cloud in the previous frame point cloud image that matches each first point cloud in the current frame point cloud image;
    calculating a displacement vector of each second point cloud relative to the matched first point cloud;
    clustering the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds; and
    determining each group of target point clouds to be an obstacle.
  13. The computer device of claim 12, wherein the one or more processors, when executing the computer-readable instructions, further perform the steps of:
    acquiring point features corresponding to the second point clouds extracted from the previous frame point cloud image and point features corresponding to the first point clouds extracted from the current frame point cloud image;
    performing similarity matching between the point features corresponding to the second point clouds and the point features corresponding to the first point clouds; and
    determining a first point cloud and a second point cloud whose similarity matching result satisfies a condition to be a matched point cloud pair.
  14. The computer device of claim 13, wherein the one or more processors, when executing the computer-readable instructions, further perform the steps of:
    structuring the previous frame point cloud image to obtain a processing result; and
    encoding the second point clouds in the previous frame point cloud image based on the processing result to obtain the point features corresponding to the second point clouds.
  15. The computer device of claim 12, wherein the one or more processors, when executing the computer-readable instructions, further perform the steps of:
    acquiring spatial position information of the first point cloud and spatial position information of the matched second point cloud; and
    determining the displacement vector of the second point cloud relative to the corresponding first point cloud based on the spatial position information of the first point cloud and the spatial position information of the matched second point cloud.
  16. One or more non-transitory computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of:
    acquiring a current frame point cloud image and a previous frame point cloud image;
    determining, by performing point cloud matching on the current frame point cloud image and the previous frame point cloud image, a second point cloud in the previous frame point cloud image that matches each first point cloud in the current frame point cloud image;
    calculating a displacement vector of each second point cloud relative to the matched first point cloud;
    clustering the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds; and
    determining each group of target point clouds to be an obstacle.
  17. The storage medium of claim 16, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of:
    acquiring point features corresponding to the second point clouds extracted from the previous frame point cloud image and point features corresponding to the first point clouds extracted from the current frame point cloud image;
    performing similarity matching between the point features corresponding to the second point clouds and the point features corresponding to the first point clouds; and
    determining a first point cloud and a second point cloud whose similarity matching result satisfies a condition to be a matched point cloud pair.
  18. The storage medium of claim 17, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of:
    structuring the previous frame point cloud image to obtain a processing result; and
    encoding the second point clouds in the previous frame point cloud image based on the processing result to obtain the point features corresponding to the second point clouds.
  19. The storage medium of claim 16, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of:
    acquiring spatial position information of the first point cloud and spatial position information of the matched second point cloud; and
    determining the displacement vector of the second point cloud relative to the corresponding first point cloud based on the spatial position information of the first point cloud and the spatial position information of the matched second point cloud.
  20. The storage medium of claim 16, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of:
    screening out, from the previous frame point cloud image, the second point clouds whose displacement vectors are greater than the threshold, and recording them as point clouds to be classified;
    acquiring spatial position information of the point clouds to be classified;
    determining motion directions of the point clouds to be classified based on the displacement vectors; and
    clustering the point clouds to be classified whose motion directions are similar and whose adjacent spatial distances are smaller than a distance threshold, to obtain the at least one group of target point clouds.

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/130114 WO2021134296A1 (en) 2019-12-30 2019-12-30 Obstacle detection method and apparatus, and computer device and storage medium

Publications (1)

Publication Number Publication Date
CN113424079A (en) 2021-09-21

Family

ID=76686199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980037711.7A Pending CN113424079A (en) 2019-12-30 2019-12-30 Obstacle detection method, obstacle detection device, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN113424079A (en)
WO (1) WO2021134296A1 (en)


Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591869A (en) * 2021-08-03 2021-11-02 北京地平线信息技术有限公司 Point cloud instance segmentation method and device, electronic equipment and storage medium
CN113673388A (en) * 2021-08-09 2021-11-19 北京三快在线科技有限公司 Method and device for determining position of target object, storage medium and equipment
CN113627372B (en) * 2021-08-17 2024-01-05 北京伟景智能科技有限公司 Running test method, running test system and computer readable storage medium
CN113569812A (en) * 2021-08-31 2021-10-29 东软睿驰汽车技术(沈阳)有限公司 Unknown obstacle identification method and device and electronic equipment
CN113838112A (en) * 2021-09-24 2021-12-24 东莞市诺丽电子科技有限公司 Trigger signal determining method and trigger signal determining system of image acquisition system
CN114119729A (en) * 2021-11-17 2022-03-01 北京埃福瑞科技有限公司 Obstacle identification method and device
CN114509785A (en) * 2022-02-16 2022-05-17 中国第一汽车股份有限公司 Three-dimensional object detection method, device, storage medium, processor and system
CN114647011B (en) * 2022-02-28 2024-02-02 三一海洋重工有限公司 Anti-hanging monitoring method, device and system for integrated cards
CN114596555B (en) * 2022-05-09 2022-08-30 新石器慧通(北京)科技有限公司 Obstacle point cloud data screening method and device, electronic equipment and storage medium
CN115050192B (en) * 2022-06-09 2023-11-21 南京矽典微系统有限公司 Parking space detection method based on millimeter wave radar and application
CN115082731B (en) * 2022-06-15 2024-03-29 苏州轻棹科技有限公司 Target classification method and device based on voting mechanism
CN114842455B (en) * 2022-06-27 2022-09-09 小米汽车科技有限公司 Obstacle detection method, device, equipment, medium, chip and vehicle
CN115620239B (en) * 2022-11-08 2024-01-30 国网湖北省电力有限公司荆州供电公司 Point cloud and video combined power transmission line online monitoring method and system
CN117455936B (en) * 2023-12-25 2024-04-12 法奥意威(苏州)机器人系统有限公司 Point cloud data processing method and device and electronic equipment


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105652873A (en) * 2016-03-04 2016-06-08 中山大学 Mobile robot obstacle avoidance method based on Kinect
CN108152831A (en) * 2017-12-06 2018-06-12 中国农业大学 A kind of laser radar obstacle recognition method and system
EP3517997A1 (en) * 2018-01-30 2019-07-31 Wipro Limited Method and system for detecting obstacles by autonomous vehicles in real-time
CN108398672A (en) * 2018-03-06 2018-08-14 厦门大学 Road surface based on the 2D laser radar motion scans that lean forward and disorder detection method
CN109633688A (en) * 2018-12-14 2019-04-16 北京百度网讯科技有限公司 A kind of laser radar obstacle recognition method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723432A (en) * 2021-10-27 2021-11-30 深圳火眼智能有限公司 Intelligent identification and positioning tracking method and system based on deep learning
CN115965943A (en) * 2023-03-09 2023-04-14 安徽蔚来智驾科技有限公司 Target detection method, device, driving device, and medium
CN116524029A (en) * 2023-06-30 2023-08-01 长沙智能驾驶研究院有限公司 Obstacle detection method, device, equipment and storage medium for rail vehicle
CN116524029B (en) * 2023-06-30 2023-12-01 长沙智能驾驶研究院有限公司 Obstacle detection method, device, equipment and storage medium for rail vehicle

Also Published As

Publication number Publication date
WO2021134296A1 (en) 2021-07-08

Similar Documents

Publication Publication Date Title
CN113424079A (en) Obstacle detection method, obstacle detection device, computer device, and storage medium
US11893785B2 (en) Object annotation method and apparatus, movement control method and apparatus, device, and storage medium
Behrendt et al. A deep learning approach to traffic lights: Detection, tracking, and classification
CN110765894B (en) Target detection method, device, equipment and computer readable storage medium
CN113424121A (en) Vehicle speed control method and device based on automatic driving and computer equipment
CN111160302A (en) Obstacle information identification method and device based on automatic driving environment
CN112149550B (en) Automatic driving vehicle 3D target detection method based on multi-sensor fusion
US20190310651A1 (en) Object Detection and Determination of Motion Information Using Curve-Fitting in Autonomous Vehicle Applications
WO2022222095A1 (en) Trajectory prediction method and apparatus, and computer device and storage medium
Wirges et al. Capturing object detection uncertainty in multi-layer grid maps
CN110264495B (en) Target tracking method and device
US9513108B2 (en) Sensor system for determining distance information based on stereoscopic images
CN111611853A (en) Sensing information fusion method and device and storage medium
Li et al. An adaptive 3D grid-based clustering algorithm for automotive high resolution radar sensor
CN113490965A (en) Image tracking processing method and device, computer equipment and storage medium
CN115066708A (en) Point cloud data motion segmentation method and device, computer equipment and storage medium
CN113568435B (en) Unmanned aerial vehicle autonomous flight situation perception trend based analysis method and system
Berriel et al. A particle filter-based lane marker tracking approach using a cubic spline model
JP2024019629A (en) Prediction device, prediction method, program and vehicle control system
CN110781730B (en) Intelligent driving sensing method and sensing device
CN112433193B (en) Multi-sensor-based mold position positioning method and system
CN113744304A (en) Target detection tracking method and device
CN112729289A (en) Positioning method, device, equipment and storage medium applied to automatic guided vehicle
KR20210125371A (en) Method and apparatus for analyzing traffic situation
KR102618680B1 (en) Real-time 3D object detection and tracking system using visual and LiDAR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination