WO2021134296A1 - Obstacle detection method and apparatus, computer device, and storage medium - Google Patents
- Publication number
- WO2021134296A1 (application PCT/CN2019/130114)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point cloud
- point
- image
- previous frame
- displacement vector
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
Description
- This application relates to an obstacle detection method, device, computer equipment and storage medium.
- Self-driving cars, also known as autonomous cars, computer-driven cars, or wheeled mobile robots, are smart motor vehicles that rely on artificial intelligence, visual computing, radar, monitoring devices, and global positioning equipment working together so that a computer can operate the vehicle automatically, without any human intervention.
- Obstacle detection is mainly performed through machine learning models. However, a machine learning model can only detect the target obstacle categories that participated in its training, such as people and cars; obstacle categories in the driving area that were not involved in training (such as animals and traffic cones) cannot be correctly recognized by the model, which increases the probability of safety accidents in which an unmanned vehicle fails to correctly recognize a moving object.
- An obstacle detection method, device, computer equipment, and storage medium are provided.
- An obstacle detection method includes: obtaining a point cloud image of a current frame and a point cloud image of a previous frame; determining, by performing point cloud matching on the point cloud image of the current frame and the point cloud image of the previous frame, each first point cloud in the point cloud image of the current frame that matches a second point cloud in the point cloud image of the previous frame; calculating a displacement vector of each second point cloud relative to the matched first point cloud; clustering the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds; and judging each group of target point clouds as an obstacle.
- A point cloud-based target tracking device includes:
- a point cloud matching module, used to obtain the point cloud image of the current frame and the point cloud image of the previous frame, and to determine, by performing point cloud matching on the point cloud image of the current frame and the point cloud image of the previous frame, each first point cloud in the point cloud image of the current frame that matches a second point cloud in the point cloud image of the previous frame;
- a point cloud clustering module, used to calculate the displacement vector of each second point cloud relative to the matched first point cloud, and to cluster the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds; and
- a judging module, used to judge each group of target point clouds as an obstacle.
- A computer device includes a memory and one or more processors; the memory stores computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of the obstacle detection method described above.
- One or more non-volatile computer-readable storage media store computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the obstacle detection method described above.
- Fig. 1 is an application scenario diagram of an obstacle detection method in an embodiment;
- Fig. 2 is a schematic flowchart of an obstacle detection method in an embodiment;
- Fig. 3A is a schematic bird's-eye view of the spatial coordinate system in an embodiment;
- Fig. 3B is a three-dimensional schematic diagram of the spatial coordinate system in an embodiment;
- Fig. 4 is a schematic flowchart of the steps of performing point cloud matching according to point features in an embodiment;
- Fig. 5 is a block diagram of an obstacle detection device in an embodiment;
- Fig. 6 is a block diagram of a computer device in an embodiment.
- The obstacle detection method provided in this application can be applied in a variety of application environments. For example, it can be applied to the automatic driving application environment shown in FIG. 1, which includes a laser sensor 102 and a computer device 104.
- The computer device 104 can communicate with the laser sensor 102 via a network. The laser sensor 102 collects multi-frame point cloud images of the surrounding environment, and the computer device 104 acquires the point cloud image of the previous frame and the point cloud image of the current frame collected by the laser sensor 102 and processes them using the above obstacle detection method to detect obstacles.
- The laser sensor 102 may be a sensor carried by an automatic driving device, and may specifically include a laser radar (lidar), a laser scanner, and the like.
- An obstacle detection method is provided. Taking the method applied to the computer device 104 in FIG. 1 as an example, the method includes the following steps:
- Step 202: Obtain a point cloud image of the current frame and a point cloud image of the previous frame.
- The laser sensor may be mounted on a device capable of automatic driving; for example, it can be carried by an unmanned vehicle or by a vehicle that includes an autonomous driving model.
- Laser sensors can be used to collect environmental data within the visual range.
- Specifically, a laser sensor can be set up on the unmanned vehicle in advance. The laser sensor emits a detection signal into the driving area at a preset time frequency and compares the signal reflected by objects in the driving area with the detection signal to obtain surrounding environment data, and then generates corresponding point cloud images based on the environment data.
- A point cloud image records the objects in the scanned environment in the form of points; it is the collection of point clouds corresponding to multiple points on the surfaces of objects. A point cloud may specifically include information such as the three-dimensional spatial position coordinates in the space coordinate system, the laser reflection intensity, and the color of a single point on the surface of an object in the environment.
- The spatial coordinate system may be a Cartesian coordinate system. The spatial coordinate system takes the center point of the laser sensor as the origin and the horizontal plane level with the laser sensor as the reference plane (that is, the xOy plane); the axis in the reference plane parallel to the moving direction of the unmanned vehicle is the Y axis; the axis in the reference plane that passes through the origin and is perpendicular to the Y axis is the X axis; and the axis that passes through the origin and is perpendicular to the reference plane is the Z axis.
- FIG. 3A is a schematic diagram of a bird's-eye view of the spatial coordinate system in an embodiment.
- Fig. 3B is a three-dimensional schematic diagram of the spatial coordinate system in an embodiment.
- The laser sensor embeds a timestamp in each collected point cloud image and sends the timestamped point cloud images to the computer device. The laser sensor can send the locally stored point cloud images collected within a preset time period to the computer device at one time.
- The computer equipment sorts the point cloud images according to their timestamps and, for each pair of temporally adjacent point cloud images, determines the later one as the point cloud image of the current frame and the earlier one as the point cloud image of the previous frame.
- Step 204: By performing point cloud matching on the point cloud image of the current frame and the point cloud image of the previous frame, determine each first point cloud in the point cloud image of the current frame that matches a second point cloud in the point cloud image of the previous frame.
- Specifically, the point cloud image of the current frame and the point cloud image of the previous frame are input into a trained matching model. The matching model extracts the spatial position information of each second point cloud from the point cloud image of the previous frame and filters out, from the point cloud image of the current frame, the first point clouds whose distance from the spatial position of the second point cloud is less than a preset distance threshold. For example, when the distance threshold is q and the spatial position coordinates of the second point cloud are (x2, y2, z2), a first point cloud with coordinates (x1, y1, z1) is retained when |x1 - x2| ≤ q, |y1 - y2| ≤ q, and |z1 - z2| ≤ q.
- The matching model may specifically be a neural network model, a Dual Path Network model (DPN), a support vector machine, or a logistic regression model.
- Further, the matching model extracts a first point feature from the first point cloud and a second point feature from the second point cloud, performs similarity matching between the first point feature and the second point feature, and determines the first point cloud with the largest similarity as the point cloud matching the second point cloud. The first point cloud and the corresponding second point cloud may then be point cloud data collected at different times for the same point on the surface of the same object.
- The matching model traverses each second point cloud in the point cloud image of the previous frame until each second point cloud is matched with a corresponding first point cloud.
- Since the laser sensor can collect multiple point cloud images within one second, the position coordinates of a moving obstacle in the driving area differ little between two adjacent frames within one collection interval. The matching model therefore only needs to perform similarity matching between a second point cloud and the first point clouds separated from it by less than the distance threshold in order to find the first point cloud corresponding to the second point cloud. If no match is found, the matching model increases the distance threshold correspondingly, filters out the corresponding first point clouds from the current-frame point cloud image according to the increased distance threshold, and then performs point cloud matching based on the re-screened first point clouds.
- The training step of the matching model includes: collecting a large number of point cloud images and dividing them into a plurality of image pairs according to their collection times. Each image pair includes a point cloud image of the current frame and a point cloud image of the previous frame. Matching points can be annotated in the point cloud image of the current frame and the point cloud image of the previous frame; the annotated images are then input into the matching model, and the parameters in the model are adjusted according to the matching-point annotations. Simulation software may also be used to generate multiple current-frame point cloud images and previous-frame point cloud images with matching-point annotations.
- Step 206: Calculate the displacement vector of each second point cloud relative to the matched first point cloud.
- The displacement vector is a directed line segment whose starting point is the coordinates of a moving mass point in the space coordinate system at the current moment and whose end point is its coordinates at the next moment.
- Specifically, the computer device extracts the spatial position coordinates of the point from the second point cloud and from the first point cloud that matches it, takes the spatial position coordinates of the second point cloud as the starting point and the spatial position coordinates of the matched first point cloud as the end point, and thus obtains the displacement vector of the second point cloud relative to the matched first point cloud. Because the spatial position coordinates contained in the first point cloud and those contained in the second point cloud are computed in the same spatial coordinate system, the computer device only needs to connect the two sets of coordinates directly in that coordinate system to draw the displacement vector.
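As a minimal sketch of this step, assuming matched pairs are stored row-aligned in NumPy arrays (an assumption for illustration):

```python
import numpy as np

def displacement_vectors(second_points, matched_first_points):
    """Displacement vector of each second point cloud relative to its matched
    first point cloud: end point (current frame) minus start point (previous
    frame); row i of each array is assumed to be one matched pair."""
    return np.asarray(matched_first_points) - np.asarray(second_points)
```

The magnitude of each row of the result is the movement amplitude used in step 208 below.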
- Step 208: Cluster the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds.
- Specifically, the computer device calculates the absolute value (magnitude) of the displacement vector corresponding to each second point cloud based on a preset absolute value calculation formula, and treats this magnitude as the movement amplitude of a single point on the surface of an object in the driving area within the sensor's collection interval. For example, when the spatial position coordinates of the first point cloud are (x1, y1, z1) and those of the second point cloud are (x2, y2, z2), the corresponding absolute value calculation formula is |d| = √((x1 - x2)² + (y1 - y2)² + (z1 - z2)²).
- Further, the computer device filters out from the previous frame of point cloud image the second point clouds whose displacement magnitude is greater than the preset threshold (for convenience of description, these are recorded below as point clouds to be classified), treats them as point cloud data collected from the surfaces of moving objects in the driving area, and then clusters the point clouds to be classified to obtain at least one group of target point clouds.
- The computer equipment can cluster the point clouds to be classified in a variety of ways. For example, it may determine the spatial position coordinates of each point cloud to be classified and divide point clouds whose adjacent spatial positions are less than a threshold apart into one group of target point clouds.
- Alternatively, the computer device may group the point clouds to be classified based on clustering algorithms such as k-means clustering and Mean-Shift clustering.
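The grouping step can be sketched as follows; the breadth-first neighbour search is one simple stand-in for the clustering (k-means or Mean-Shift are drop-in alternatives, as noted above), and the function name and thresholds are illustrative assumptions:

```python
import numpy as np
from collections import deque

def cluster_points(points, dist_threshold):
    """Group points to be classified so that points whose adjacent spatial
    positions are less than dist_threshold apart land in the same group."""
    points = np.asarray(points)
    unvisited = set(range(len(points)))
    groups = []
    while unvisited:
        seed = unvisited.pop()
        group, queue = [seed], deque([seed])
        while queue:
            i = queue.popleft()
            near = [j for j in unvisited
                    if np.linalg.norm(points[i] - points[j]) < dist_threshold]
            for j in near:
                unvisited.remove(j)
                group.append(j)
                queue.append(j)
        groups.append(points[group])
    return groups  # each element is one group of target point clouds
```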
- Step 210: Determine each group of target point clouds as an obstacle.
- The computer device judges different groups of target point clouds to be point cloud data collected for different obstacles. Specifically, the computer device obtains the spatial position coordinates and the corresponding displacement vectors of the target point clouds in the same group, determines the direction pointed to by the displacement vectors as the movement direction of the target point clouds, and determines the movement direction and position coordinates of the corresponding obstacle from these spatial position coordinates and movement directions. Further, the computer device determines the movement speed of the corresponding obstacle according to the collection interval of the laser sensor and the magnitude of the displacement vector, predicts the spatial position information of the obstacle after a preset time period according to the movement speed and movement direction, and generates an obstacle avoidance instruction corresponding to the predicted position information. For example, when the computer device predicts that an obstacle moving in a straight line at its current direction and speed may collide with the unmanned vehicle after the preset time, it generates a braking instruction corresponding to the prediction result so that the unmanned vehicle stops.
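A hedged sketch of this prediction-and-brake logic under a constant-velocity assumption; the safety radius and instruction strings are illustrative placeholders, not values from the patent:

```python
import numpy as np

def predict_and_instruct(position, displacement, interval, horizon,
                         vehicle_position, safety_radius=2.0):
    """Extrapolate an obstacle point in a straight line: speed comes from the
    displacement vector and the sensor's collection interval, and a braking
    instruction is generated if the predicted position falls within an
    assumed safety radius of the unmanned vehicle."""
    velocity = np.asarray(displacement) / interval           # m/s per axis
    predicted = np.asarray(position) + velocity * horizon    # straight-line motion
    if np.linalg.norm(predicted - np.asarray(vehicle_position)) < safety_radius:
        return "BRAKE"      # placeholder for the patent's braking instruction
    return "CONTINUE"
```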
- In one embodiment, the point cloud image of the previous frame is the image data collected by the laser sensor at time t-1, and the point cloud image of the current frame is the image data collected at time t. The computer device determines the movement speed of a point cloud A according to the collection interval of the laser sensor and the magnitude of the displacement vector corresponding to point cloud A at time t-1, and predicts the position information of point cloud A at time t+2 according to the movement speed and movement direction. Therefore, when point cloud matching is performed between the point cloud image collected at time t+1 and the point cloud image collected at time t+2, the predicted spatial position information of point cloud A at time t+2 can be used to assist the matching.
- Specifically, the spatial position information of point cloud A is extracted from the point cloud image collected at time t+1 based on the matching model, and the first point clouds are filtered out of the point cloud image collected at time t+2 according to that spatial position information. The computer device then obtains the spatial position coordinates of each filtered first point cloud, subtracts the predicted spatial position coordinates of point cloud A at time t+2 from the spatial position coordinates of the first point cloud to obtain a coordinate difference, and takes the absolute value of the coordinate difference to obtain an absolute difference. The computer device further filters out, from the multiple first point clouds, the point cloud data whose absolute difference is less than a preset difference threshold; the difference threshold is smaller than the distance threshold.
- Since the collection interval of the laser sensor is generally 10 ms, it can be assumed that from time t-1 to time t+2 the movement speed and direction of the obstacle remain unchanged, so the spatial position coordinates estimated at time t+2 from the obstacle's movement direction and speed at time t-1 have a high degree of confidence. The first point cloud matching the second point cloud can therefore be expected to lie near the estimated spatial position coordinates, and the multiple point cloud data can be further filtered based on the estimated coordinates. This reduces the amount of computation required by the subsequent matching model and thereby improves the efficiency of obstacle detection.
- In one embodiment, as shown in FIG. 4, determining each first point cloud in the point cloud image of the current frame that matches a second point cloud in the point cloud image of the previous frame includes:
- Step 302: Acquire the point features corresponding to the second point clouds extracted from the point cloud image of the previous frame and the point features corresponding to the first point clouds extracted from the point cloud image of the current frame;
- Step 304: Perform similarity matching on the point features corresponding to the second point clouds and the point features corresponding to the first point clouds;
- Step 306: Determine the first point cloud and second point cloud whose similarity matching result meets the conditions as a matching point cloud pair.
- Specifically, the computer device inputs the collected point cloud image into the matching model. The matching model rasterizes the collected point cloud image, divides the three-dimensional space corresponding to the point cloud image into a plurality of columnar grids, and determines the grid to which each point cloud belongs according to the spatial position coordinates in the point cloud. The matching model then performs convolution calculations on the point clouds in each grid based on a preset convolution kernel, thereby extracting high-dimensional point features from the point clouds.
- The matching model can be one of a variety of neural network models; for example, it may be a convolutional neural network model. Further, the matching model performs feature matching between the point features extracted from the second point cloud and the point features extracted from multiple first point clouds, and determines the first point cloud with the largest matching degree as the point cloud matching the second point cloud.
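The two operations described above, assigning points to columnar grids and scoring candidate matches, can be sketched as follows; the cell size and the use of cosine similarity in place of the model's learned matching score are assumptions for illustration:

```python
import numpy as np

def pillar_indices(points, cell_size=0.2):
    """Assign each point to a columnar grid cell by discretising its x and y
    coordinates; cell_size is an assumed width in metres."""
    return np.floor(np.asarray(points)[:, :2] / cell_size).astype(int)

def best_match(second_feature, first_features):
    """Index of the first point cloud whose feature vector is most similar to
    the second point cloud's feature; cosine similarity stands in for the
    matching model's learned score."""
    first_features = np.asarray(first_features)
    norms = np.linalg.norm(first_features, axis=1) * np.linalg.norm(second_feature)
    sims = first_features @ np.asarray(second_feature) / np.maximum(norms, 1e-12)
    return int(np.argmax(sims))
```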
- Since the matching model is a pre-trained machine learning model, the point cloud image of the current frame and the point cloud image of the previous frame can be accurately matched based on the matching model, so that the computer device can subsequently judge the movement state of obstacles based on the successfully matched point clouds.
- In one embodiment, acquiring the point features corresponding to the second point clouds extracted from the point cloud image of the previous frame includes: performing structuring processing on the point cloud image of the previous frame to obtain a processing result, and encoding the second point clouds in the point cloud image of the previous frame based on the processing result to obtain the point features corresponding to the second point clouds.
- Specifically, the matching model may perform structuring processing on the point cloud image of the previous frame and obtain the corresponding processing result. For example, the matching model can rasterize the point cloud image of the previous frame, or voxelize it. Taking rasterization as an example, the computer device can rasterize the plane with the laser sensor as the origin and divide the plane into multiple grid cells. The structured space obtained after structuring can be a columnar space: points are distributed in the columnar space above each grid cell, that is, the abscissa and ordinate of the points in a columnar space fall within the coordinates of the corresponding grid cell, and each columnar space may include at least one point.
- Further, the matching model can encode each second point cloud according to the structuring result to obtain the point feature corresponding to the second point cloud. The same point feature extraction method can also be used to extract the point feature of each first point cloud in the point cloud image of the current frame. By performing structuring processing on the point cloud image, the corresponding processing result can be obtained, and by encoding the processing result, point features can be extracted from the point cloud data. Since point cloud data is not affected by factors such as illumination and target movement speed, the matching model suffers less interference when extracting point features, which ensures the accuracy of feature extraction.
- In one embodiment, calculating the displacement vector of each second point cloud relative to the matched first point cloud includes: acquiring the spatial position information of the first point cloud and the spatial position information of the matched second point cloud; and determining, based on the spatial position information of the first point cloud and the spatial position information of the matched second point cloud, the displacement vector of the second point cloud relative to the corresponding first point cloud.
- A point cloud is composed of point cloud data, and the point cloud data includes information such as the three-dimensional coordinates of the point in the space coordinate system and the laser reflection intensity. Specifically, the computer device extracts from the first point cloud and from the matched second point cloud the three-dimensional coordinates of the corresponding point in the space coordinate system, uses the three-dimensional coordinates extracted from the second point cloud as the starting point of the displacement vector and the three-dimensional coordinates extracted from the first point cloud as its end point, and inserts the displacement vector containing the start and end points into the point cloud data corresponding to the second point cloud (for example, a start point (x2, y2, z2) and an end point (x1, y1, z1)).
- In one embodiment, when the point cloud image of the current frame and the point cloud image of the previous frame are collected in different coordinate systems, the computer device also needs to perform coordinate conversion on the two images, converting point cloud images collected in different coordinate systems into two frames of point cloud images in the same coordinate system, so that the displacement vector calculation is carried out on two frames in the same coordinate system.
- For example, the computer equipment obtains the point cloud images collected at the same time by two laser sensors installed on the left and right sides of the unmanned vehicle, records the point cloud image collected by the laser sensor installed on the left side as the left point cloud image and the one collected by the laser sensor installed on the right side as the right point cloud image, extracts from the left point cloud image the spatial coordinates of the left point cloud collected for an object point A and from the right point cloud image the spatial coordinates of the right point cloud collected for the same object point A, and performs coordinate conversion on the point cloud image of the current frame or of the previous frame based on the spatial coordinates of the left point cloud and of the right point cloud, so that the point cloud image of the current frame and the point cloud image of the previous frame lie in the same spatial coordinate system.
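A minimal sketch of such a coordinate conversion, assuming a rigid transform (rotation plus translation) has already been calibrated, e.g. from the two observations of object point A; the function name and calibration inputs are illustrative:

```python
import numpy as np

def to_common_frame(points, rotation, translation):
    """Convert points from one sensor's coordinate system into the common
    system with a rigid transform; the 3x3 rotation matrix and 3-vector
    translation are assumed to come from a prior calibration step."""
    return np.asarray(points) @ np.asarray(rotation).T + np.asarray(translation)
```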
- In one embodiment, clustering the second point clouds whose displacement vectors are greater than the threshold to obtain at least one group of target point clouds includes: filtering out from the point cloud image of the previous frame the second point clouds whose displacement vectors are greater than the threshold, denoted as point clouds to be classified; obtaining the spatial position information of the point clouds to be classified; determining the movement direction of the point clouds to be classified based on the displacement vectors; and clustering the point clouds to be classified whose movement directions are similar and whose adjacent spatial positions are less than a threshold apart to obtain at least one group of target point clouds.
- Specifically, the computer device calculates the magnitude of the displacement vector to obtain the separation distance between each second point cloud and the corresponding first point cloud, that is, the movement amplitude of the point on the object surface within the collection interval. The computer device filters out from the previous frame of point cloud image the point clouds to be classified whose movement amplitude is greater than a preset movement threshold, and extracts the spatial position coordinates and the corresponding displacement vector from each point cloud to be classified.
- The movement threshold can be set according to actual needs; for example, when the collection interval of the laser sensor is 10 ms, the movement threshold can be set to 0.05 meters.
- Further, the computer device determines the current movement direction of the unmanned vehicle from the compass installed in the vehicle and, according to that direction, determines the direction represented by each axis of the spatial coordinate system established with the center point of the laser sensor as the origin. For example, after determining from the compass that the unmanned vehicle is currently moving north, the computer device takes the positive Y axis of the three-dimensional coordinate system as north, the positive X axis as east, the negative Y axis as south, and the negative X axis as west.
- The computer device projects the displacement vector onto the xOy plane, calculates the angles between the projected displacement vector and the X and Y axes from the start and end coordinates of the displacement vector, and then determines the movement direction of the point cloud to be classified corresponding to the displacement vector from the calculated angles and the directions corresponding to the X and Y axes, as sketched below.
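A small sketch of this direction computation under the compass convention above (+Y = north, +X = east); the function name is an assumption:

```python
import math

def heading_degrees(displacement):
    """Compass heading of a displacement vector projected onto the xOy plane,
    measured in degrees clockwise from north."""
    dx, dy = displacement[0], displacement[1]   # drop the z component
    return math.degrees(math.atan2(dx, dy)) % 360.0
```

For example, `heading_degrees((1.0, 1.0, 0.0))` returns 45.0, i.e. a north-east movement.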
- Further, the computer device clusters the point clouds to be classified according to their movement directions and spatial position coordinates, dividing the point clouds to be classified whose movement directions are similar and whose adjacent spatial positions are less than a threshold apart into the same group of point clouds.
- In one embodiment, the above obstacle detection method further includes: calculating the motion parameters of the corresponding obstacle according to the displacement vector of each target point cloud in the same group of target point clouds; generating corresponding obstacle avoidance instructions according to the motion parameters of the obstacle; and controlling the unmanned vehicle to drive based on the obstacle avoidance instructions.
- The motion parameters refer to values describing the object's movement, such as its movement speed and movement direction.
- Specifically, the computer device obtains the acquisition frequency of the laser sensor and calculates the movement speed of each point cloud in the same group based on the acquisition frequency and its displacement vector. The computer device performs a weighted average calculation on the movement speeds of all point clouds in the same group to obtain an average speed, and judges the average speed to be the movement speed of the corresponding obstacle. Similarly, the computer device obtains the displacement vector of each point cloud in the same group, determines the movement direction of each point cloud based on its displacement vector, and combines the movement directions of all point clouds in the group to obtain the movement direction of the corresponding obstacle.
- Further, the computer device determines the area where the corresponding obstacle is located according to the spatial coordinates of each point cloud in the same group and compares it with the area where the unmanned vehicle is located, so as to determine whether the obstacle and the unmanned vehicle are in the same lane. If they are, the computer device obtains the separation distance between the obstacle and the unmanned vehicle and calculates the probability of a collision between them according to the unmanned vehicle's current speed, its maximum braking deceleration, and the separation distance. When the collision probability is greater than a threshold, the computer device generates a lane change instruction and controls the unmanned vehicle to change lanes based on it; if the collision probability is less than the threshold, the computer device generates a deceleration instruction and controls the unmanned vehicle to slow down based on it.
- Because the computer device synthesizes the spatial position coordinates of the obstacle, its movement speed, and the current vehicle speed to generate the obstacle avoidance instruction, the generated instruction has a high degree of confidence, so that the unmanned vehicle can drive correctly based on it, which greatly improves the safety of unmanned driving.
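A sketch of this decision logic follows. The patent does not give a formula for the collision probability, so the braking-distance heuristic below (v²/(2a) relative to the current gap), the threshold, and the instruction strings are assumptions for illustration only:

```python
def avoidance_instruction(gap, vehicle_speed, max_deceleration, same_lane,
                          probability_threshold=0.5):
    """Lane change when the collision probability exceeds the threshold,
    deceleration otherwise; all numeric choices here are assumed."""
    if not same_lane:
        return "CONTINUE"
    braking_distance = vehicle_speed ** 2 / (2.0 * max_deceleration)
    collision_probability = min(1.0, braking_distance / max(gap, 1e-6))
    if collision_probability > probability_threshold:
        return "CHANGE_LANE"
    return "DECELERATE"
```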
- In one embodiment, calculating the motion parameters of the corresponding obstacle according to the displacement vector of each target point cloud in the group includes: determining the movement direction of each target point cloud in the same group based on the displacement vectors; counting the number of target point clouds corresponding to each movement direction; and determining the movement direction of the target point clouds with the largest count as the movement direction of the corresponding obstacle.
- Specifically, the computer device obtains the displacement vector of each target point cloud in the same group and determines the movement direction of each target point cloud based on the displacement vector and the axis directions of the three-dimensional coordinate system. The computer device then counts the number of target point clouds corresponding to each movement direction and determines the movement direction with the largest number of target point clouds as the movement direction of the corresponding obstacle.
- In this way, the movement direction of the most numerous target point clouds can be directly determined as the movement direction of the corresponding obstacle. This solution obtains the movement direction of the corresponding obstacle with only simple calculations, which not only saves the computing resources of the computer but also improves the obstacle detection rate.
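This majority vote can be sketched in a few lines; representing directions as discretised compass labels is an assumption:

```python
from collections import Counter

def obstacle_direction(directions):
    """Majority vote over per-point movement directions within one group;
    directions could be labels such as 'N', 'NE', 'E', ..."""
    return Counter(directions).most_common(1)[0][0]
```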
- In one embodiment, calculating the motion parameters of the corresponding obstacle according to the displacement vector of each target point cloud in the group includes: obtaining the acquisition frequency of the laser sensor; determining the displacement value of each target point cloud in the same group based on the displacement vector; determining the movement speed of each target point cloud in the same group according to the acquisition frequency and the displacement value; and performing a mixed calculation on the movement speeds of the target point clouds in the same group to obtain the movement speed of the corresponding obstacle.
- Specifically, after obtaining the displacement vector of each target point cloud in the same group, the computer device calculates the absolute value of the displacement vector and determines the result as the displacement value of the target point cloud. The computer device divides the displacement value by the collection interval of the laser sensor to obtain the movement speed of the target point cloud, and performs a mixed calculation on the movement speeds of the target point clouds in the same group to obtain the movement speed of the corresponding obstacle. The computer device can combine the movement speeds of the target point clouds using a variety of mixed calculation algorithms: for example, it can average the movement speeds of all target point clouds in the same group and determine the resulting average speed as the movement speed of the corresponding obstacle; as another example, it may first remove a certain proportion of the target point clouds with the highest and the lowest movement speeds and then average the remaining point clouds, as in the sketch below.
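A sketch of the second variant, a trimmed mean; the trim ratio and function name are illustrative assumptions:

```python
def obstacle_speed(displacement_magnitudes, interval, trim_ratio=0.1):
    """Divide each displacement magnitude by the collection interval to get a
    speed, drop the fastest and slowest trim_ratio of points, then average."""
    speeds = sorted(m / interval for m in displacement_magnitudes)
    k = int(len(speeds) * trim_ratio)
    kept = speeds[k:len(speeds) - k] or speeds  # fall back if over-trimmed
    return sum(kept) / len(kept)
```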
- In this embodiment, the computer device can determine the movement speed of each target point cloud according to the acquisition frequency of the laser sensor and the displacement vector, and determine the movement speed of the corresponding obstacle from the movement speeds of the target point clouds, which helps the computer device prompt or control the unmanned equipment according to the movement speed of the obstacle.
- Although the steps in the flowcharts of FIG. 2 and FIG. 4 are displayed in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order for the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in FIG. 2 and FIG. 4 may include multiple sub-steps or stages, which are not necessarily executed at the same time but can be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with at least part of the other steps or of the sub-steps or stages of other steps.
- In one embodiment, an obstacle detection device 500 is provided, including a point cloud matching module 502, a point cloud clustering module 504, and a determination module 506, wherein:
- the point cloud matching module 502 is used to obtain the point cloud image of the current frame and the point cloud image of the previous frame, and to determine, by performing point cloud matching on the point cloud image of the current frame and the point cloud image of the previous frame, each first point cloud in the point cloud image of the current frame that matches a second point cloud in the point cloud image of the previous frame;
- the point cloud clustering module 504 is used to calculate the displacement vector of each second point cloud relative to the matched first point cloud, and to cluster the second point clouds whose displacement vectors are greater than the threshold to obtain at least one group of target point clouds.
- the determination module 506 is used to determine each group of target point clouds as an obstacle.
- In one embodiment, the above point cloud matching module 502 is also used to obtain the point features corresponding to the second point clouds extracted from the point cloud image of the previous frame and the point features corresponding to the first point clouds extracted from the point cloud image of the current frame; perform similarity matching on the point features corresponding to the second point clouds and the point features corresponding to the first point clouds; and determine the first point cloud and second point cloud whose similarity matching result meets the conditions as a matching point cloud pair.
- In one embodiment, the above point cloud matching module 502 is further configured to perform structuring processing on the point cloud image of the previous frame to obtain a processing result, and to encode the second point clouds in the point cloud image of the previous frame based on the processing result to obtain the point features corresponding to the second point clouds.
- In one embodiment, the above point cloud clustering module 504 is also used to obtain the spatial position information of the first point cloud and the spatial position information of the matched second point cloud, and to determine, based on the spatial position information of the first point cloud and the spatial position information of the matched second point cloud, the displacement vector of the second point cloud relative to the corresponding first point cloud.
- In one embodiment, the above point cloud clustering module 504 is further configured to filter out from the previous frame of point cloud image the second point clouds whose displacement vectors are greater than the threshold and record them as point clouds to be classified; obtain the spatial position information of the point clouds to be classified; determine the movement direction of the point clouds to be classified based on the displacement vectors; and cluster the point clouds to be classified whose movement directions are similar and whose adjacent spatial positions are less than a threshold apart to obtain at least one group of target point clouds.
- In one embodiment, the obstacle detection device 500 further includes an obstacle avoidance instruction generating module 508, which is used to calculate the motion parameters of the corresponding obstacle according to the displacement vector of each target point cloud in the same group of target point clouds, generate corresponding obstacle avoidance instructions according to the motion parameters of the obstacle, and control the unmanned vehicle to drive based on the obstacle avoidance instructions.
- In one embodiment, the obstacle avoidance instruction generation module 508 is further configured to determine the movement direction of each target point cloud in the same group based on the displacement vectors, count the number of target point clouds corresponding to each movement direction, and determine the movement direction of the target point clouds with the largest count as the movement direction of the corresponding obstacle.
- In one embodiment, the obstacle avoidance instruction generation module 508 is also used to obtain the acquisition frequency of the laser sensor; determine the displacement value of each target point cloud in the same group based on the displacement vector; determine the movement speed of each target point cloud in the same group according to the acquisition frequency and the displacement value; and perform a mixed calculation on the movement speeds of the target point clouds in the same group to obtain the movement speed of the corresponding obstacle.
- Each module in the above obstacle detection device can be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or be independent of, the processor of the computer equipment, or may be stored in software form in the memory of the computer equipment, so that the processor can call and execute the operations corresponding to the above modules.
- In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in FIG. 6. The computer device includes a processor, a memory, a network interface, and a database connected through a system bus, in which the processor of the computer device is used to provide computing and control capabilities.
- the memory of the computer device includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium stores an operating system, computer readable instructions, and a database.
- the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium.
- the database of the computer equipment is used to store detection data.
- the network interface of the computer device is used to communicate with an external terminal through a network connection.
- The computer-readable instructions are executed by the processor to implement an obstacle detection method.
- FIG. 6 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer parts than shown in the figure, combine some parts, or have a different arrangement of parts.
- In one embodiment, a computer device is provided, including a memory and one or more processors; the memory stores computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of the above method embodiments.
- One or more non-volatile computer-readable storage media storing computer-readable instructions are provided; when the computer-readable instructions are executed by one or more processors, the one or more processors perform the steps of the above method embodiments.
- Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory may include random access memory (RAM) or external cache memory.
- RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
Described herein are an obstacle detection method and apparatus (500), a computer device, and a storage medium. The method includes: obtaining a point cloud image of a current frame and a point cloud image of a previous frame (202); determining, by performing point cloud matching on the point cloud image of the current frame and the point cloud image of the previous frame, each first point cloud in the point cloud image of the current frame and the matching second cloud in the point cloud image of the previous frame (204); calculating a displacement vector of each second point cloud relative to the matched first point cloud (206); clustering the second point clouds whose displacement vector is greater than a threshold to obtain at least one group of target point clouds (208); and determining each group of target point clouds as an obstacle (210). The accuracy of obstacle detection can thereby be improved.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980037711.7A CN113424079A (zh) | 2019-12-30 | 2019-12-30 | 障碍物检测方法、装置、计算机设备和存储介质 |
PCT/CN2019/130114 WO2021134296A1 (fr) | 2019-12-30 | 2019-12-30 | Procédé et appareil de détection d'obstacles, et dispositif informatique et support de stockage |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/130114 WO2021134296A1 (fr) | 2019-12-30 | 2019-12-30 | Procédé et appareil de détection d'obstacles, et dispositif informatique et support de stockage |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021134296A1 true WO2021134296A1 (fr) | 2021-07-08 |
Family
ID=76686199
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/130114 WO2021134296A1 (fr) | 2019-12-30 | 2019-12-30 | Procédé et appareil de détection d'obstacles, et dispositif informatique et support de stockage |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113424079A (fr) |
WO (1) | WO2021134296A1 (fr) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113723432B (zh) * | 2021-10-27 | 2022-02-22 | 深圳火眼智能有限公司 | 一种基于深度学习的智能识别、定位追踪的方法及系统 |
CN115965943A (zh) * | 2023-03-09 | 2023-04-14 | 安徽蔚来智驾科技有限公司 | 目标检测方法、设备、驾驶设备和介质 |
CN116524029B (zh) * | 2023-06-30 | 2023-12-01 | 长沙智能驾驶研究院有限公司 | 轨道交通工具的障碍物检测方法、装置、设备及存储介质 |
- 2019-12-30 WO PCT/CN2019/130114 patent/WO2021134296A1/fr active Application Filing
- 2019-12-30 CN CN201980037711.7A patent/CN113424079A/zh active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105652873A (zh) * | 2016-03-04 | 2016-06-08 | 中山大学 | 一种基于Kinect的移动机器人避障方法 |
CN108152831A (zh) * | 2017-12-06 | 2018-06-12 | 中国农业大学 | 一种激光雷达障碍物识别方法及系统 |
EP3517997A1 (fr) * | 2018-01-30 | 2019-07-31 | Wipro Limited | Procédé et système de détection d'obstacles par des véhicules autonomes en temps réel |
CN108398672A (zh) * | 2018-03-06 | 2018-08-14 | 厦门大学 | 基于前倾2d激光雷达移动扫描的路面与障碍检测方法 |
CN109633688A (zh) * | 2018-12-14 | 2019-04-16 | 北京百度网讯科技有限公司 | 一种激光雷达障碍物识别方法和装置 |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113591869A (zh) * | 2021-08-03 | 2021-11-02 | 北京地平线信息技术有限公司 | 点云实例分割方法和装置、电子设备和存储介质 |
CN113569979A (zh) * | 2021-08-06 | 2021-10-29 | 中国科学院宁波材料技术与工程研究所 | 一种基于注意力机制的三维物体点云分类方法 |
CN113673388A (zh) * | 2021-08-09 | 2021-11-19 | 北京三快在线科技有限公司 | 一种目标物位置的确定方法、装置、存储介质及设备 |
CN113627372B (zh) * | 2021-08-17 | 2024-01-05 | 北京伟景智能科技有限公司 | 跑步测试方法、系统及计算机可读存储介质 |
CN113627372A (zh) * | 2021-08-17 | 2021-11-09 | 北京伟景智能科技有限公司 | 跑步测试方法、系统及计算机可读存储介质 |
CN113569812A (zh) * | 2021-08-31 | 2021-10-29 | 东软睿驰汽车技术(沈阳)有限公司 | 未知障碍物的识别方法、装置和电子设备 |
CN113838112A (zh) * | 2021-09-24 | 2021-12-24 | 东莞市诺丽电子科技有限公司 | 图像采集系统的触发信号确定方法及触发信号确定系统 |
CN114119729A (zh) * | 2021-11-17 | 2022-03-01 | 北京埃福瑞科技有限公司 | 障碍物识别方法及装置 |
CN114724105A (zh) * | 2021-11-29 | 2022-07-08 | 山东交通学院 | 一种基于云边端架构的复杂背景下锥桶识别方法 |
CN114509785A (zh) * | 2022-02-16 | 2022-05-17 | 中国第一汽车股份有限公司 | 三维物体检测方法、装置、存储介质、处理器及系统 |
CN114545947A (zh) * | 2022-02-25 | 2022-05-27 | 北京捷象灵越科技有限公司 | 移动机器人互相避让方法、装置、电子设备及存储介质 |
CN114647011A (zh) * | 2022-02-28 | 2022-06-21 | 三一海洋重工有限公司 | 集卡防吊监控方法、装置及系统 |
CN114647011B (zh) * | 2022-02-28 | 2024-02-02 | 三一海洋重工有限公司 | 集卡防吊监控方法、装置及系统 |
CN114596555A (zh) * | 2022-05-09 | 2022-06-07 | 新石器慧通(北京)科技有限公司 | 障碍物点云数据筛选方法、装置、电子设备及存储介质 |
CN114596555B (zh) * | 2022-05-09 | 2022-08-30 | 新石器慧通(北京)科技有限公司 | 障碍物点云数据筛选方法、装置、电子设备及存储介质 |
CN115050192B (zh) * | 2022-06-09 | 2023-11-21 | 南京矽典微系统有限公司 | 基于毫米波雷达的停车位检测的方法及应用 |
CN115050192A (zh) * | 2022-06-09 | 2022-09-13 | 南京矽典微系统有限公司 | 基于毫米波雷达的停车位检测的方法及应用 |
CN115082731A (zh) * | 2022-06-15 | 2022-09-20 | 苏州轻棹科技有限公司 | 一种基于投票机制的目标分类方法和装置 |
CN115082731B (zh) * | 2022-06-15 | 2024-03-29 | 苏州轻棹科技有限公司 | 一种基于投票机制的目标分类方法和装置 |
CN114842455A (zh) * | 2022-06-27 | 2022-08-02 | 小米汽车科技有限公司 | 障碍物检测方法、装置、设备、介质、芯片及车辆 |
CN115390085A (zh) * | 2022-07-28 | 2022-11-25 | 广州小马智行科技有限公司 | 基于激光雷达的定位方法、装置、计算机设备和存储介质 |
CN115620239A (zh) * | 2022-11-08 | 2023-01-17 | 国网湖北省电力有限公司荆州供电公司 | 一种点云和视频结合的输电线路在线监测方法和系统 |
CN115620239B (zh) * | 2022-11-08 | 2024-01-30 | 国网湖北省电力有限公司荆州供电公司 | 一种点云和视频结合的输电线路在线监测方法和系统 |
CN117687408A (zh) * | 2023-11-03 | 2024-03-12 | 广州发展燃料港口有限公司 | 一种装船机智能控制方法、系统、装置与存储介质 |
CN117455936A (zh) * | 2023-12-25 | 2024-01-26 | 法奥意威(苏州)机器人系统有限公司 | 点云数据处理方法、装置及电子设备 |
CN117455936B (zh) * | 2023-12-25 | 2024-04-12 | 法奥意威(苏州)机器人系统有限公司 | 点云数据处理方法、装置及电子设备 |
Also Published As
Publication number | Publication date |
---|---|
CN113424079A (zh) | 2021-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021134296A1 (fr) | Procédé et appareil de détection d'obstacles, et dispositif informatique et support de stockage | |
CN110163904B (zh) | 对象标注方法、移动控制方法、装置、设备及存储介质 | |
JP7345504B2 (ja) | Lidarデータと画像データの関連付け | |
US11816585B2 (en) | Machine learning models operating at different frequencies for autonomous vehicles | |
CN110363058B (zh) | 使用单触发卷积神经网络的用于避障的三维对象定位 | |
US9990736B2 (en) | Robust anytime tracking combining 3D shape, color, and motion with annealed dynamic histograms | |
WO2021134441A1 (fr) | Procédé et appareil de contrôle de vitesse de véhicule basé sur la conduite automatisée, et dispositif informatique | |
WO2022222095A1 (fr) | Procédé et appareil de prédiction de trajectoire, dispositif informatique et support de stockage | |
Wirges et al. | Capturing object detection uncertainty in multi-layer grid maps | |
US8818702B2 (en) | System and method for tracking objects | |
WO2022099530A1 (fr) | Procédé et appareil de segmentation de mouvement pour données de nuage de points, dispositif informatique et support de stockage | |
WO2021134285A1 (fr) | Procédé et appareil de traitement de suivi d'image, et dispositif informatique et support de stockage | |
EP2960858A1 (fr) | Système de capteur pour déterminer des informations de distance basées sur des images stéréoscopiques | |
CN112171675B (zh) | 一种移动机器人的避障方法、装置、机器人及存储介质 | |
CN113239719A (zh) | 基于异常信息识别的轨迹预测方法、装置和计算机设备 | |
Berriel et al. | A particle filter-based lane marker tracking approach using a cubic spline model | |
CN107909024B (zh) | 基于图像识别和红外避障的车辆跟踪系统、方法及车辆 | |
JP2022035033A (ja) | 情報処理システム、情報処理方法、プログラムおよび車両制御システム | |
Dao et al. | Aligning bird-eye view representation of point cloud sequences using scene flow | |
CN113744304A (zh) | 一种目标检测跟踪的方法和装置 | |
CN115943400B (zh) | 基于时间与空间学习的轨迹预测方法、装置和计算机设备 | |
Li et al. | TTC4MCP: Monocular Collision Prediction Based on Self-Supervised TTC Estimation | |
CN117746524B (zh) | 一种基于slam和人群异常行为识别的安防巡检系统和方法 | |
CN118089794B (zh) | 一种基于多源信息的自适应多信息组合导航的仿真方法 | |
TK et al. | DETECTING AND TRACKING MOVING OBJECTS USING STATISTICAL ADAPTIVE THRESHOLDING APPROACH WITH SOM & GMA TRACKING |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19958654; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 19958654; Country of ref document: EP; Kind code of ref document: A1 |