WO2021134296A1 - Obstacle detection method and apparatus, and computer device and storage medium - Google Patents


Info

Publication number
WO2021134296A1
WO2021134296A1 · PCT/CN2019/130114 · CN2019130114W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
point
image
previous frame
displacement vector
Prior art date
Application number
PCT/CN2019/130114
Other languages
French (fr)
Chinese (zh)
Inventor
吴伟
何明
叶茂盛
邹晓艺
许双杰
许家妙
曹通易
Original Assignee
深圳元戎启行科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳元戎启行科技有限公司
Priority to CN201980037711.7A (publication CN113424079A)
Priority to PCT/CN2019/130114 (publication WO2021134296A1)
Publication of WO2021134296A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes

Definitions

  • This application relates to an obstacle detection method, device, computer equipment and storage medium.
  • Self-driving cars, also known as autonomous cars, computer-driven cars, or wheeled mobile robots, are smart motor vehicles that rely on artificial intelligence, visual computing, radar, monitoring devices, and global positioning equipment working together, so that a computer can operate the vehicle automatically without any active human intervention.
  • In the related art, obstacle detection is mainly performed by machine learning models.
  • However, a machine learning model can only detect the obstacle categories that appeared in its training data, such as people and cars; obstacle categories in the driving area that were not involved in training (such as animals or traffic cones) cannot be correctly recognized by the model, which increases the probability of safety accidents in unmanned vehicles caused by failing to correctly recognize moving objects.
  • an obstacle detection method, device, computer equipment, and storage medium are provided.
  • An obstacle detection method includes: obtaining a point cloud image of the current frame and a point cloud image of the previous frame; determining, by performing point cloud matching on the current-frame and previous-frame point cloud images, the first point cloud in the current-frame image that matches each second point cloud in the previous-frame image; calculating the displacement vector of each second point cloud relative to the matched first point cloud; clustering the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds; and determining each group of target point clouds as an obstacle.
  • A point cloud-based target tracking device includes:
  • a point cloud matching module, configured to obtain a point cloud image of the current frame and a point cloud image of the previous frame, and to determine, by performing point cloud matching on the two images, the first point cloud in the current-frame point cloud image that matches each second point cloud in the previous-frame point cloud image;
  • a point cloud clustering module, configured to calculate the displacement vector of each second point cloud relative to its matched first point cloud, and to cluster the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds; and
  • a determining module, configured to determine each group of target point clouds as an obstacle.
  • A computer device includes a memory and one or more processors; the memory stores computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of the obstacle detection method above, including determining each group of target point clouds as an obstacle.
  • One or more non-volatile computer-readable storage media store computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the obstacle detection method above, including determining each group of target point clouds as an obstacle.
  • Fig. 1 is an application scenario diagram of an obstacle detection method in an embodiment;
  • Fig. 2 is a schematic flowchart of an obstacle detection method in an embodiment;
  • Fig. 3A is a schematic bird's-eye view of the spatial coordinate system in an embodiment;
  • Fig. 3B is a three-dimensional schematic diagram of the spatial coordinate system in an embodiment;
  • Fig. 4 is a schematic flowchart of the step of performing point cloud matching according to point features in an embodiment;
  • Fig. 5 is a block diagram of an obstacle detection device in an embodiment;
  • Fig. 6 is a block diagram of a computer device in an embodiment.
  • The obstacle detection method provided in this application can be applied in a variety of application environments.
  • For example, it can be applied to the automatic driving environment shown in FIG. 1, which includes a laser sensor 102 and a computer device 104.
  • The computer device 104 can communicate with the laser sensor 102 via a network.
  • The laser sensor 102 can collect multi-frame point cloud images of the surrounding environment; the computer device 104 acquires the previous-frame and current-frame point cloud images collected by the laser sensor 102 and processes them with the obstacle detection method described above to detect obstacles.
  • The laser sensor 102 may be a sensor carried by an automatic driving device, and may specifically include a lidar, a laser scanner, and the like.
  • An obstacle detection method is provided. Taking the method as applied to the computer device 104 in FIG. 1 as an example, the method includes the following steps:
  • Step 202: Obtain a point cloud image of the current frame and a point cloud image of the previous frame.
  • The laser sensor may be mounted on a device capable of automatic driving, for example an unmanned vehicle or a vehicle equipped with an autonomous driving module.
  • Laser sensors can be used to collect environmental data within the visual range.
  • Specifically, a laser sensor can be set up on the unmanned vehicle in advance; the laser sensor emits a detection signal toward the driving area at a preset time frequency, compares the signal reflected by objects in the driving area with the detection signal to obtain data about the surrounding environment, and generates corresponding point cloud images from the environmental data.
  • A point cloud image records the objects in the scanned environment in the form of points: the collection of point clouds corresponding to multiple points on the surfaces of the objects.
  • A point cloud may specifically include information such as the three-dimensional position coordinates in the space coordinate system, the laser reflection intensity, and the color of a single point on an object's surface in the environment.
  • The spatial coordinate system may be a Cartesian coordinate system.
  • For example, the spatial coordinate system may take the center point of the laser sensor as the origin and the horizontal plane level with the laser sensor as the reference plane (that is, the xoy plane); the axis parallel to the moving direction of the unmanned vehicle is the Y axis, the axis in the reference plane that passes through the origin and is perpendicular to the Y axis is the X axis, and the axis that passes through the origin and is perpendicular to the reference plane is the Z axis.
  • FIG. 3A is a schematic diagram of a bird's-eye view of the spatial coordinate system in an embodiment.
  • Fig. 3B is a three-dimensional schematic diagram of the spatial coordinate system in an embodiment.
  • The laser sensor embeds a timestamp in each collected point cloud image and sends the timestamped point cloud images to the computer device.
  • The laser sensor can also send the locally stored point cloud images collected within a preset time period to the computer device at one time.
  • The computer device sorts the point cloud images by their timestamps; of every two point cloud images adjacent in time, the later image is determined as the current-frame point cloud image and the earlier image is determined as the previous-frame point cloud image.
  • Step 204: Determine, by performing point cloud matching on the current-frame and previous-frame point cloud images, the first point cloud in the current-frame image that matches each second point cloud in the previous-frame image.
  • Specifically, the current-frame point cloud image and the previous-frame point cloud image are input into a trained matching model.
  • The matching model extracts the spatial position information of each second point cloud from the previous-frame point cloud image and filters out, from the current-frame point cloud image, the first point clouds whose distance from the spatial position of the second point cloud is less than a preset distance threshold.
  • For example, when the distance threshold is q and the spatial position coordinates of the second point cloud are (x2, y2, z2), a filtered first point cloud has coordinates (x1, y1, z1) with x2−q ≤ x1 ≤ x2+q, y2−q ≤ y1 ≤ y2+q, and z2−q ≤ z1 ≤ z2+q.
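The per-axis neighborhood filter in this example can be sketched as follows; this is a minimal illustration, the function name and numpy usage are assumptions, and the patent does not prescribe an implementation:

```python
import numpy as np

def filter_candidates(first_clouds, second_point, q):
    """Keep current-frame points whose every coordinate lies within the
    distance threshold q of the previous-frame point's coordinates,
    i.e. x2-q <= x1 <= x2+q, and likewise for y and z."""
    first_clouds = np.asarray(first_clouds, dtype=float)
    diff = np.abs(first_clouds - np.asarray(second_point, dtype=float))
    return first_clouds[np.all(diff <= q, axis=1)]

# A second point cloud at the origin, with threshold q = 1.0
candidates = filter_candidates(
    [[0.5, 0.2, -0.3], [2.0, 0.0, 0.0], [0.9, -0.9, 0.9]],
    [0.0, 0.0, 0.0],
    1.0,
)
```

Only the two points whose every coordinate difference is at most q survive the filter; the point at (2.0, 0.0, 0.0) is discarded.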
  • The matching model may specifically be a neural network model, a dual path network (DPN) model, a support vector machine, or a logistic regression model.
  • The matching model extracts a first point feature from each first point cloud and a second point feature from the second point cloud, performs similarity matching between the first and second point features, and determines the first point cloud with the greatest similarity as the point cloud matching the second point cloud.
  • The first point cloud and its corresponding second point cloud may then be point cloud data collected at different times for the same point on the surface of the same object.
  • The matching model traverses each second point cloud in the previous-frame point cloud image until every second point cloud is matched with a corresponding first point cloud.
  • Because the laser sensor can collect multiple point cloud images within one second, the position coordinates of a moving obstacle in two adjacent frames differ little within an adjacent collection interval; the matching model therefore only needs to perform similarity matching between a second point cloud and the first point clouds separated from it by less than the threshold in order to find the first point cloud corresponding to the second point cloud.
  • When no matching first point cloud is found, the matching model increases the distance threshold accordingly, filters out first point clouds from the current-frame point cloud image again according to the increased threshold, and then performs point cloud matching on the re-screened first point clouds.
  • The training of the matching model includes: collecting a large number of point cloud images and dividing them, according to collection time, into multiple image pairs.
  • Each image pair includes a current-frame point cloud image and a previous-frame point cloud image.
  • Matching points can be annotated between the current-frame and previous-frame point cloud images; the annotated images are then input into the matching model, and the parameters of the model are adjusted according to the matching-point annotations.
  • Alternatively, simulation software may be used to generate multiple current-frame point cloud images with matching-point annotations and the corresponding previous-frame point cloud images.
  • Step 206: Calculate the displacement vector of each second point cloud relative to the matched first point cloud.
  • A displacement vector is a directed line segment whose starting point is the coordinates of a moving mass point in the space coordinate system at the current moment and whose end point is its coordinates in that coordinate system at the next moment.
  • Specifically, the computer device extracts the spatial position coordinates of the point from the second point cloud and from the matched first point cloud, takes the coordinates of the second point cloud as the starting point and the coordinates of the matched first point cloud as the end point, and thus obtains the displacement vector of the second point cloud relative to the matched first point cloud.
  • Because the spatial position coordinates contained in the first point cloud and in the second point cloud are calculated in the same spatial coordinate system, the computer device only needs to connect the two sets of coordinates directly in that coordinate system to draw the displacement vector.
  • Step 208: Cluster the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds.
  • Specifically, the computer device calculates the absolute value (magnitude) of the displacement vector of each second point cloud and takes it as the movement amplitude of the corresponding point on the object's surface within one collection interval of the sensor.
  • Following the example above, when the spatial position coordinates of the first point cloud are (x1, y1, z1) and those of the second point cloud are (x2, y2, z2), the magnitude is |d| = √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²).
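As a sketch, the displacement vector and its magnitude can be computed from a matched coordinate pair as below; the function names are hypothetical and numpy is used only for brevity:

```python
import numpy as np

def displacement_vector(second_xyz, first_xyz):
    """Directed segment from the previous-frame (second) point to the
    matched current-frame (first) point."""
    return np.asarray(first_xyz, dtype=float) - np.asarray(second_xyz, dtype=float)

def displacement_magnitude(second_xyz, first_xyz):
    """|d| = sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)."""
    return float(np.linalg.norm(displacement_vector(second_xyz, first_xyz)))

# A point that moved from the origin to (3, 4, 0) between frames
mag = displacement_magnitude((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))
```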
  • The computer device filters out, from the previous-frame point cloud image, the second point clouds whose displacement magnitude is greater than the preset threshold (for convenience of description, these are recorded below as point clouds to be classified) and treats them as point cloud data collected from the surfaces of moving objects in the driving area; the computer device then clusters the point clouds to be classified to obtain at least one group of target point clouds.
  • The computer device can cluster the point clouds to be classified in a variety of ways.
  • For example, the computer device may determine the spatial position coordinates of each point cloud to be classified and divide the point clouds whose adjacent spatial positions are separated by less than a threshold into one group of target point clouds.
  • Alternatively, the computer device may group the point clouds to be classified with clustering algorithms such as k-means clustering or Mean-Shift clustering.
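A minimal single-linkage grouping by adjacent spatial distance is one possible stand-in for the k-means or Mean-Shift clustering mentioned here; all names are hypothetical and this is a sketch, not the patent's implementation:

```python
import numpy as np
from collections import deque

def cluster_points(points, dist_threshold):
    """Group points so that any two points within dist_threshold of each
    other, directly or through a chain of neighbors, share one group."""
    pts = np.asarray(points, dtype=float)
    labels = [-1] * len(pts)
    current = 0
    for seed in range(len(pts)):
        if labels[seed] != -1:
            continue  # already assigned to a group
        queue = deque([seed])
        labels[seed] = current
        while queue:  # breadth-first expansion over nearby points
            i = queue.popleft()
            dists = np.linalg.norm(pts - pts[i], axis=1)
            for j in np.nonzero(dists < dist_threshold)[0]:
                if labels[j] == -1:
                    labels[j] = current
                    queue.append(j)
        current += 1
    return labels

# Two well-separated pairs of points form two target groups
labels = cluster_points([[0, 0, 0], [0.3, 0, 0], [5, 5, 0], [5.2, 5, 0]], 1.0)
```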
  • Step 210: Determine each group of target point clouds as an obstacle.
  • Specifically, the computer device judges different groups of target point clouds to be point cloud data collected from different obstacles.
  • The computer device obtains the spatial position coordinates and displacement vectors of the target point clouds in a group, takes the direction pointed to by the displacement vectors as the movement direction of the target point clouds, and determines the movement direction and position coordinates of the corresponding obstacle from the group's spatial position coordinates and movement direction.
  • The computer device determines the movement speed of the corresponding obstacle from the collection interval of the laser sensor and the magnitude of the displacement vector, predicts the obstacle's spatial position after a preset time period from the movement speed and direction, and generates an obstacle avoidance instruction corresponding to the predicted position. For example, when the computer device predicts that an obstacle moving in a straight line at its current direction and speed may collide with the unmanned vehicle after the preset time, it generates a braking instruction corresponding to the prediction result so that the unmanned vehicle stops.
  • For example, suppose the previous-frame point cloud image is the image data collected by the laser sensor at time t−1 and the current-frame point cloud image is the image data collected at time t.
  • The computer device determines the movement speed of point cloud A from the collection interval of the laser sensor and the magnitude of the displacement vector of point cloud A at time t−1, and predicts the position of the point cloud at time t+2 from the movement speed and direction. When point cloud matching is then performed between the images collected at times t+1 and t+2, the predicted spatial position of point cloud A at time t+2 can be used to assist the matching.
  • Specifically, the matching model extracts the spatial position information of point cloud A from the image collected at time t+1 and filters first point clouds from the image collected at time t+2 according to that spatial position information.
  • The computer device obtains the spatial position coordinates of each filtered first point cloud, subtracts the predicted coordinates of point cloud A at time t+2 from them to obtain a coordinate difference, and takes the absolute value of the difference to obtain an absolute difference.
  • The computer device then further filters out, from the multiple first point clouds, the point cloud data whose absolute difference is less than a preset difference threshold.
  • The difference threshold is less than the distance threshold.
  • Because the collection interval of the laser sensor is generally 10 ms, the movement speed and direction of an obstacle can be considered unchanged from t−1 to t+2, so the spatial position coordinates estimated at t+2 from the obstacle's direction and speed at t−1 have high confidence. The first point cloud matching the second point cloud can therefore be assumed to lie near the estimated coordinates, and the candidate point clouds can be further filtered according to those coordinates, which reduces the amount of computation in subsequent matching and improves the efficiency of obstacle detection.
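The constant-velocity prediction and the refined filtering by the smaller difference threshold can be sketched as below; the names, the chosen velocity, and the thresholds are illustrative assumptions:

```python
import numpy as np

def predict_position(pos_t, velocity, dt):
    """Constant-velocity extrapolation: the text assumes speed and
    direction stay unchanged over a few ~10 ms collection intervals."""
    return np.asarray(pos_t, dtype=float) + np.asarray(velocity, dtype=float) * dt

def refine_candidates(first_clouds, predicted, diff_threshold):
    """Keep only candidates whose per-axis absolute difference from the
    predicted position is below the (smaller) difference threshold."""
    pts = np.asarray(first_clouds, dtype=float)
    mask = np.all(np.abs(pts - predicted) < diff_threshold, axis=1)
    return pts[mask]

# Point at (1, 0, 0) moving at 0.5 m per interval along x, predicted 2 intervals ahead
predicted = predict_position([1.0, 0.0, 0.0], [0.5, 0.0, 0.0], 2.0)
refined = refine_candidates([[2.05, 0.0, 0.0], [2.8, 0.0, 0.0]], predicted, 0.2)
```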
  • In one embodiment, as shown in FIG. 4, performing point cloud matching on the current-frame and previous-frame point cloud images to determine, for each second point cloud, the matching first point cloud includes:
  • Step 302: Acquire the point feature of each second point cloud extracted from the previous-frame point cloud image and the point feature of each first point cloud extracted from the current-frame point cloud image;
  • Step 304: Perform similarity matching between the point features corresponding to the second point cloud and the point features corresponding to the first point clouds;
  • Step 306: Determine a first point cloud and a second point cloud whose similarity matching result meets a condition as a matching point cloud pair.
  • Specifically, the computer device inputs the collected point cloud image into the matching model; the matching model rasterizes the image, divides the three-dimensional space corresponding to the image into multiple columnar grids, and determines the grid to which each point cloud belongs according to its spatial position coordinates.
  • The matching model then applies a preset convolution kernel to the point clouds in each grid, thereby extracting high-dimensional point features from the point clouds.
  • The matching model can be one of a variety of neural network models.
  • For example, the matching model may be a convolutional neural network model.
  • The matching model performs feature matching between the point features extracted from a second point cloud and the point features extracted from multiple first point clouds, and determines the first point cloud with the highest matching degree as the point cloud matching the second point cloud.
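The similarity matching step can be sketched with cosine similarity over extracted point features; the patent does not fix a similarity measure, so this is one plausible choice, with hypothetical names:

```python
import numpy as np

def match_by_similarity(second_feat, first_feats):
    """Return the index of the candidate first point cloud whose feature
    has the highest cosine similarity with the second point's feature."""
    f2 = np.asarray(second_feat, dtype=float)
    f1 = np.asarray(first_feats, dtype=float)
    sims = (f1 @ f2) / (np.linalg.norm(f1, axis=1) * np.linalg.norm(f2))
    return int(np.argmax(sims))

# The second feature points along the first axis; candidate 1 is nearest in angle
best = match_by_similarity([1.0, 0.0], [[0.0, 1.0], [0.9, 0.1], [0.5, 0.5]])
```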
  • Because the matching model is a pre-trained machine learning model, the current-frame and previous-frame point cloud images can be accurately matched, so that the computer device can subsequently judge the movement state of obstacles from the successfully matched point clouds.
  • In one embodiment, acquiring the point feature corresponding to the second point cloud extracted from the previous-frame point cloud image includes: performing structured processing on the previous-frame point cloud image to obtain a processing result; and encoding, based on the processing result, each second point cloud in the previous-frame point cloud image to obtain the point feature corresponding to the second point cloud.
  • Specifically, the matching model may perform structured processing on the previous-frame point cloud image and obtain the processing result.
  • For example, the matching model can rasterize the previous-frame point cloud image, or voxelize it.
  • Taking rasterization as an example, the computer device can rasterize the plane with the laser sensor as the origin and divide the plane into multiple grid cells.
  • The structured space obtained by the processing can be a columnar space, with points distributed in the columnar space above the corresponding grid cell; that is, the abscissa and ordinate of the points in a columnar space fall within the coordinates of the corresponding grid cell, and each columnar space may include at least one point.
  • The matching model can then encode each second point cloud according to the structured processing result to obtain the point feature corresponding to the second point cloud.
  • The point feature extraction method described above can also be used to extract the point feature of each first point cloud in the current-frame point cloud image.
  • By performing structured processing on a point cloud image, the corresponding processing result is obtained, and by encoding the processing result, point features can be extracted from the point cloud data. Because point cloud data is not affected by factors such as illumination or target movement speed, the matching model suffers little interference when extracting point features, which ensures the accuracy of feature extraction.
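A toy sketch of the rasterization into columnar (pillar) spaces and a per-pillar encoding; a real matching model would apply learned convolutions rather than the mean used here, and all names are hypothetical:

```python
import numpy as np
from collections import defaultdict

def rasterize_to_pillars(points, cell_size):
    """Assign each point to a columnar grid cell on the x-y plane;
    z is ignored for cell assignment, so each cell is a vertical column
    that may hold several points."""
    pillars = defaultdict(list)
    for p in np.asarray(points, dtype=float):
        cell = (int(np.floor(p[0] / cell_size)), int(np.floor(p[1] / cell_size)))
        pillars[cell].append(p)
    return pillars

def encode_pillar(pillar_points):
    """Toy per-pillar encoding (mean position) standing in for a learned
    high-dimensional point feature."""
    return np.mean(np.asarray(pillar_points, dtype=float), axis=0)

pillars = rasterize_to_pillars(
    [[0.1, 0.1, 1.0], [0.2, 0.3, 2.0], [1.5, 0.1, 0.5]], 1.0)
feat = encode_pillar(pillars[(0, 0)])
```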
  • In one embodiment, calculating the displacement vector of each second point cloud relative to the matched first point cloud includes: acquiring the spatial position information of the first point cloud and of the matched second point cloud; and determining, from these two sets of spatial position information, the displacement vector of the second point cloud relative to the corresponding first point cloud.
  • A point cloud is composed of point cloud data, which includes information such as the three-dimensional coordinates of the point in the space coordinate system and the laser reflection intensity.
  • Specifically, the computer device extracts the three-dimensional coordinates of the corresponding point from the first point cloud and from the matched second point cloud, takes the coordinates extracted from the second point cloud as the starting point of the displacement vector and the coordinates extracted from the first point cloud as the end point, and inserts the displacement vector containing the start and end points into the point cloud data corresponding to the second point cloud.
  • When the current-frame and previous-frame point cloud images were collected in different coordinate systems, the computer device also needs to convert them into two frames of point cloud images in the same coordinate system, so that the displacement vectors are calculated from two frames expressed in one coordinate system.
  • For example, the computer device obtains the point cloud images collected at the same moment by two laser sensors installed on the left and right sides of the unmanned vehicle, records the image collected by the left-side sensor as the left point cloud image, and records the image collected by the right-side sensor as the right point cloud image.
  • The computer device extracts, from the left point cloud image, the spatial coordinates of the left point cloud collected for an object point A and, from the right point cloud image, the spatial coordinates of the right point cloud collected for the same object point A, and performs coordinate conversion on the current-frame or previous-frame point cloud image based on these two sets of coordinates, so that the current-frame and previous-frame point cloud images are in the same spatial coordinate system.
  • In one embodiment, clustering the second point clouds whose displacement vectors are greater than the threshold to obtain at least one group of target point clouds includes: filtering out, from the previous-frame point cloud image, the second point clouds whose displacement vectors are greater than the threshold and recording them as point clouds to be classified; obtaining the spatial position information of the point clouds to be classified; determining the movement direction of each point cloud to be classified from its displacement vector; and clustering the point clouds to be classified that have similar movement directions and whose adjacent spatial positions are separated by less than a threshold, to obtain at least one group of target point clouds.
  • Specifically, the computer calculates the absolute value of each displacement vector to obtain the separation distance between each second point cloud and its corresponding first point cloud, that is, the movement amplitude of the surface point within one collection interval.
  • The computer filters out, from the previous-frame point cloud image, the point clouds to be classified whose movement amplitude is greater than a preset movement threshold, and extracts the spatial position coordinates and the corresponding displacement vector from each point cloud to be classified.
  • The movement threshold can be set according to actual needs; for example, when the collection interval of the laser sensor is 10 ms, the movement threshold can be set to 0.05 meters.
  • The computer determines the current movement direction of the unmanned vehicle through the compass installed in the vehicle, and, according to that direction, determines the direction represented by each axis of the spatial coordinate system established with the center point of the laser sensor as the origin. For example, after the computer determines from the compass that the vehicle is currently moving north, it determines the positive Y axis of the three-dimensional coordinate system as north, the positive X axis as east, the negative Y axis as south, and the negative X axis as west.
  • The computer projects the displacement vector onto the XOY plane, calculates the angles between the projected vector and the X and Y axes from the vector's start and end coordinates, and then determines the movement direction of the corresponding point cloud to be classified from the calculated angles and the directions represented by the X and Y axes.
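Deriving a compass movement direction from the displacement vector projected onto the XOY plane, assuming +Y is north and +X is east as in the example above; the names are hypothetical and the heading is quantized to the four cardinal directions for simplicity:

```python
import math

def heading_from_displacement(start_xy, end_xy):
    """Project the displacement onto the XOY plane and return the
    nearest cardinal direction, with 0 degrees = north (+Y) and
    90 degrees = east (+X)."""
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]
    angle = math.degrees(math.atan2(dx, dy)) % 360.0
    names = ["north", "east", "south", "west"]
    return names[int(((angle + 45.0) % 360.0) // 90.0)]

# Displacement almost straight along +Y: heading is north
direction = heading_from_displacement((0.0, 0.0), (0.1, 2.0))
```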
  • The computer clusters the point clouds to be classified according to their movement directions and spatial position coordinates, dividing into one group those point clouds that have similar movement directions and whose adjacent spatial positions are separated by less than a threshold.
  • In one embodiment, the obstacle detection method further includes: calculating the motion parameters of the corresponding obstacle from the displacement vectors of the target point clouds in a group; generating a corresponding obstacle avoidance instruction from the obstacle's motion parameters; and controlling the unmanned vehicle to drive based on the obstacle avoidance instruction.
  • A motion parameter is an information value such as the movement speed or movement direction of the object.
  • Specifically, the computer device obtains the acquisition frequency of the laser sensor and calculates the movement speed of each point cloud in a group from the acquisition frequency and the point cloud's displacement vector.
  • The computer device performs a weighted average over the movement speeds of all point clouds in the group to obtain an average speed, and takes the average speed as the movement speed of the corresponding obstacle.
  • The computer device also obtains the displacement vector of each point cloud in the group, determines the movement direction of each point cloud from its displacement vector, and combines the movement directions of all point clouds in the group to obtain the movement direction of the corresponding obstacle.
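Estimating the obstacle's movement speed from per-point displacement magnitudes and the collection interval can be sketched as below; an unweighted mean is used here, where the text allows a weighted average, and the names are hypothetical:

```python
import numpy as np

def obstacle_speed(displacements, interval_s):
    """Per-point speed = |displacement| / collection interval; the group
    speed is the average over the cluster (unweighted in this sketch)."""
    mags = np.linalg.norm(np.asarray(displacements, dtype=float), axis=1)
    return float(np.mean(mags / interval_s))

# Two points in one group moved 0.10 m and 0.12 m over a 10 ms interval
speed = obstacle_speed([[0.1, 0.0, 0.0], [0.12, 0.0, 0.0]], 0.01)
```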
  • the computer device determines the area where the corresponding obstacle is located according to the spatial coordinates of each point cloud in the same group, and compares the area where the obstacle is located with the area where the unmanned vehicle is located, so as to determine whether the obstacle and the unmanned vehicle are in the same lane. If they are in the same lane, the computer device obtains the separation distance between the obstacle and the unmanned vehicle, and calculates the probability of a collision between the unmanned vehicle and the obstacle according to the unmanned vehicle's current speed, maximum braking deceleration, and the separation distance.
  • when the collision probability is greater than a threshold, the computer device generates a lane change instruction and controls the unmanned vehicle to change lanes based on it; if the collision probability is less than the threshold, the computer device generates a deceleration instruction and controls the unmanned vehicle to slow down based on it.
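One way to realise the collision check above is to compare the minimum stopping distance (v²/2a) against the separation distance. The patent does not specify the probability formula or the threshold, so the risk score, the 0.8 default, and both function names below are illustrative assumptions:

```python
def collision_risk(current_speed, max_decel, separation):
    """Crude collision-risk score: ratio of the minimum stopping distance
    v^2 / (2a) to the current separation, clipped to [0, 1].
    This formula is an assumption; the source text only names the inputs."""
    if separation <= 0:
        return 1.0
    stopping = current_speed ** 2 / (2.0 * max_decel)
    return min(stopping / separation, 1.0)

def avoidance_command(risk, threshold=0.8):
    # change lanes when the risk exceeds the threshold, otherwise decelerate
    return "change_lane" if risk > threshold else "decelerate"

# 20 m/s, 5 m/s^2 max braking: stopping distance is 40 m
print(avoidance_command(collision_risk(20.0, 5.0, 30.0)))   # change_lane
print(avoidance_command(collision_risk(20.0, 5.0, 100.0)))  # decelerate
```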
  • the computer device combines the spatial position coordinates of the obstacle, its movement speed, and the current vehicle speed to generate the obstacle avoidance instruction.
  • the generated obstacle avoidance instruction has a high degree of confidence, so the unmanned vehicle can drive correctly based on it, which greatly improves the safety of unmanned driving.
  • calculating the motion parameters of the corresponding obstacle according to the displacement vector of each target point cloud includes: determining the movement direction of each target point cloud in the same group based on the displacement vector; counting the number of target point clouds corresponding to each movement direction; and determining the movement direction with the largest number of target point clouds as the movement direction of the corresponding obstacle.
  • the computer device obtains the displacement vector of each target point cloud in the same group, and determines the movement direction of each target point cloud based on the displacement vector and the axis directions of the three-dimensional coordinate system.
  • the computer device counts the number of target point clouds corresponding to each distinct movement direction, and determines the movement direction with the largest number of target point clouds as the movement direction of the corresponding obstacle.
  • the movement direction shared by the largest number of target point clouds can be directly determined as the movement direction of the corresponding obstacle.
  • this solution obtains the movement direction of the corresponding obstacle with only simple calculations, which not only saves computing resources but also speeds up obstacle detection.
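The majority vote over quantised directions can be sketched as below. Quantising each displacement to its dominant coordinate axis is an assumption for illustration; the patent only says directions are determined from the displacement vector and the coordinate axes.

```python
from collections import Counter
import numpy as np

def dominant_direction(displacements):
    """Quantise each point's displacement to its dominant signed axis
    (e.g. '+x', '-y') and return the most common direction — a simple
    majority vote over the cluster."""
    votes = []
    for d in displacements:
        axis = int(np.argmax(np.abs(d)))       # 0 = x, 1 = y, 2 = z
        sign = "+" if d[axis] >= 0 else "-"
        votes.append(sign + "xyz"[axis])
    return Counter(votes).most_common(1)[0][0]

disp = np.array([[0.9, 0.1, 0.0],
                 [0.8, -0.1, 0.0],
                 [-0.05, 0.7, 0.0]])
print(dominant_direction(disp))  # "+x" (2 votes out of 3)
```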
  • calculating the motion parameters of the corresponding obstacle according to the displacement vector of each target point cloud includes: obtaining the acquisition frequency of the laser sensor; determining the displacement value of each target point cloud in the same group based on the displacement vector; determining the movement speed of each target point cloud in the same group according to the acquisition frequency and the displacement value; and performing a hybrid calculation on the movement speeds of the target point clouds in the same group to obtain the movement speed of the corresponding obstacle.
  • after obtaining the displacement vector of each target point cloud in the same group, the computer device takes the magnitude of the displacement vector and determines the result as the displacement value of the target point cloud.
  • the computer device divides the displacement value by the collection interval of the laser sensor to obtain the movement speed of the target point cloud, and performs a hybrid calculation on the movement speeds of the target point clouds in the same group to obtain the movement speed of the corresponding obstacle.
  • the computer device can combine the movement speeds of the target point clouds using a variety of hybrid calculation algorithms. For example, it can average the movement speeds of all target point clouds in the same group and determine the resulting average speed as the movement speed of the corresponding obstacle. As another example, it may first remove a certain percentage of the target point clouds with the highest and the lowest movement speeds, and then average the speeds of the remaining point clouds.
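The second "hybrid" variant above (dropping a share of the fastest and slowest points before averaging) is an ordinary trimmed mean. A minimal sketch, with the trim ratio as an assumed parameter:

```python
import numpy as np

def trimmed_mean_speed(speeds, trim_ratio=0.1):
    """Drop a fraction of the fastest and slowest per-point speeds,
    then average the rest. Robust to outlier points, e.g. mismatched
    point pairs with spuriously large displacements."""
    s = np.sort(np.asarray(speeds, dtype=float))
    k = int(len(s) * trim_ratio)              # points to drop at each end
    trimmed = s[k:len(s) - k] if k > 0 else s
    return float(trimmed.mean())

# One outlier (100 m/s) is discarded before averaging
print(trimmed_mean_speed([1.0, 2.0, 3.0, 4.0, 100.0], trim_ratio=0.2))  # 3.0
```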
  • the computer device can determine the movement speed of each target point cloud according to the acquisition frequency of the laser sensor and the displacement vector, and determine the movement speed of the corresponding obstacle according to the movement speeds of the target point clouds, which helps the computer device prompt or control the unmanned equipment according to the movement speed of the obstacle.
  • although the steps in the flowcharts of FIGS. 2 and 4 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order for the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in FIGS. 2 and 4 may include multiple sub-steps or stages. These sub-steps or stages are not necessarily executed at the same time, but can be executed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with at least part of the other steps, or of the sub-steps or stages of other steps.
  • an obstacle detection device 500 including: a point cloud matching module 502, a point cloud clustering module 504, and a determination module 506, wherein:
  • the point cloud matching module 502 is used to obtain the point cloud image of the current frame and the point cloud image of the previous frame, and to determine, by performing point cloud matching on the two images, the second point cloud in the point cloud image of the previous frame that matches each first point cloud in the point cloud image of the current frame.
  • the point cloud clustering module 504 is used to calculate the displacement vector of each second point cloud relative to the matched first point cloud, and to cluster the second point clouds whose displacement vectors are greater than the threshold to obtain at least one group of target point clouds.
  • the determination module 506 is used to determine each group of target point clouds as an obstacle.
  • the above point cloud matching module 502 is also used to obtain the point features corresponding to the second point clouds extracted from the point cloud image of the previous frame and the point features corresponding to the first point clouds extracted from the point cloud image of the current frame; perform similarity matching between the point features corresponding to the second point cloud and those corresponding to the first point cloud; and determine the first point cloud and second point cloud whose similarity matching result meets the condition as a matching point cloud pair.
  • the above point cloud matching module 502 is further configured to perform structural processing on the point cloud image of the previous frame to obtain a processing result, and to encode the second point cloud in the point cloud image of the previous frame based on the processing result to obtain the point feature corresponding to the second point cloud.
  • the above point cloud clustering module 504 is also used to obtain the spatial position information of the first point cloud and the spatial position information of the matched second point cloud, and to determine, based on the spatial position information of the first point cloud and that of the matched second point cloud, the displacement vector of the second point cloud relative to the corresponding first point cloud.
  • the above point cloud clustering module 504 is further configured to filter, from the point cloud image of the previous frame, the second point clouds whose displacement vectors are greater than the threshold and record them as point clouds to be classified; obtain the spatial position information of the point clouds to be classified; determine the movement direction of each point cloud to be classified based on its displacement vector; and cluster the point clouds to be classified that have similar movement directions and whose adjacent spatial positions are less than a threshold distance apart, to obtain at least one group of target point clouds.
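The clustering criterion just described (similar movement direction plus nearby spatial position) could be realised as a greedy flood-fill over compatible neighbours. The thresholds, the cosine-similarity test for "similar direction", and the O(n²) neighbour scan are all illustrative assumptions, not the patented procedure:

```python
import numpy as np

def cluster_points(positions, displacements, dist_thresh=0.5, cos_thresh=0.9):
    """Greedy clustering: two points join the same group when they are
    spatially close AND their displacement directions are similar
    (cosine similarity above cos_thresh). Returns one cluster label per point."""
    n = len(positions)
    unit = displacements / (np.linalg.norm(displacements, axis=1,
                                           keepdims=True) + 1e-12)
    labels = [-1] * n
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = cluster
        stack = [i]
        while stack:                    # flood-fill over compatible neighbours
            a = stack.pop()
            for b in range(n):
                if (labels[b] == -1
                        and np.linalg.norm(positions[a] - positions[b]) < dist_thresh
                        and unit[a] @ unit[b] > cos_thresh):
                    labels[b] = cluster
                    stack.append(b)
        cluster += 1
    return labels

pos = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [5.0, 5.0, 0.0]])
disp = np.array([[1.0, 0.0, 0.0], [1.0, 0.01, 0.0], [0.0, 1.0, 0.0]])
print(cluster_points(pos, disp))  # [0, 0, 1] — two obstacles
```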
  • the obstacle detection device 500 includes an obstacle avoidance instruction generating module 508, which is used to calculate the motion parameters of the corresponding obstacle according to the displacement vector of each target point cloud in the same group of target point clouds; The motion parameters generate corresponding obstacle avoidance instructions; based on the obstacle avoidance instructions, the unmanned vehicle is controlled to drive.
  • the obstacle avoidance instruction generation module 508 is further configured to determine the movement direction of each target point cloud in the same group based on the displacement vector; count the number of target point clouds corresponding to each movement direction; and determine the movement direction of the target point clouds with the largest count as the movement direction of the corresponding obstacle.
  • the obstacle avoidance instruction generation module 508 is also used to obtain the acquisition frequency of the laser sensor; determine the displacement value of each target point cloud in the same group based on the displacement vector; determine the movement speed of each target point cloud in the same group according to the acquisition frequency and the displacement value; and perform a hybrid calculation on the movement speeds of the target point clouds in the same group to obtain the movement speed of the corresponding obstacle.
  • Each module in the above obstacle detection device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above modules may be embedded in, or independent of, the processor of the computer device in the form of hardware, or stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 6.
  • the computer equipment includes a processor, a memory, a network interface, and a database connected through a system bus. Among them, the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium.
  • the database of the computer equipment is used to store detection data.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer readable instruction is executed by the processor to realize an obstacle detection method.
  • FIG. 6 is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied.
  • the specific computer device may include more or fewer parts than shown in the figure, combine some parts, or have a different arrangement of parts.
  • a computer device includes a memory and one or more processors.
  • the memory stores computer readable instructions.
  • when the computer readable instructions are executed, the one or more processors perform the steps of the above method embodiments.
  • one or more non-volatile computer-readable storage media store computer-readable instructions.
  • when executed by one or more processors, the computer-readable instructions cause the processors to perform the steps of the above method embodiments.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), Rambus dynamic RAM (RDRAM), and so on.


Abstract

An obstacle detection method and apparatus (500), and a computer device (104) and a storage medium. The method comprises: obtaining a current point cloud image frame and a previous point cloud image frame (202); by performing point cloud matching on the current point cloud image frame and the previous point cloud image frame, determining, for each first point cloud in the current point cloud image frame, the matched second point cloud in the previous point cloud image frame (204); calculating a displacement vector of each second point cloud with respect to the matched first point cloud (206); clustering the second point clouds whose displacement vector is greater than a threshold to obtain at least one group of target point clouds (208); and determining each group of target point clouds as an obstacle (210). Thus, the accuracy of obstacle detection can be improved.

Description

Obstacle detection method, device, computer equipment and storage medium

Technical field

This application relates to an obstacle detection method, device, computer equipment, and storage medium.

Background

Self-driving cars, also known as autonomous cars, computer-driven cars, or wheeled mobile robots, are intelligent vehicles in which artificial intelligence, visual computing, radar, monitoring devices, and global positioning equipment cooperate so that a computer can operate the motor vehicle automatically, without any active human operation. In the process of unmanned driving, the most important task is to identify the moving objects in the driving area. Traditionally, obstacle detection is mainly performed through machine learning models. However, a machine learning model can only detect the target obstacle categories that participated in its training, such as people and cars; obstacle categories in the driving area that did not participate in training (such as animals or traffic cones) cannot be correctly recognized by the model, which increases the probability of safety accidents of the unmanned vehicle caused by failure to correctly recognize moving objects.
Summary of the invention

According to various embodiments disclosed in the present application, an obstacle detection method, device, computer equipment, and storage medium are provided.

An obstacle detection method includes:

obtaining a point cloud image of the current frame and a point cloud image of the previous frame;

by performing point cloud matching on the point cloud image of the current frame and the point cloud image of the previous frame, determining, for each first point cloud in the point cloud image of the current frame, the matching second point cloud in the point cloud image of the previous frame;

calculating the displacement vector of each second point cloud relative to the matched first point cloud;

clustering the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds; and

determining each group of target point clouds as an obstacle.
A point cloud-based target tracking device includes:

a point cloud matching module, used to obtain a point cloud image of the current frame and a point cloud image of the previous frame, and to determine, by performing point cloud matching on the two images, the second point cloud in the point cloud image of the previous frame that matches each first point cloud in the point cloud image of the current frame;

a point cloud clustering module, used to calculate the displacement vector of each second point cloud relative to the matched first point cloud, and to cluster the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds; and

a determination module, used to determine each group of target point clouds as an obstacle.
A computer device includes a memory and one or more processors. The memory stores computer-readable instructions which, when executed by the processors, cause the one or more processors to perform the following steps:

obtaining a point cloud image of the current frame and a point cloud image of the previous frame;

by performing point cloud matching on the point cloud image of the current frame and the point cloud image of the previous frame, determining, for each first point cloud in the point cloud image of the current frame, the matching second point cloud in the point cloud image of the previous frame;

calculating the displacement vector of each second point cloud relative to the matched first point cloud;

clustering the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds; and

determining each group of target point clouds as an obstacle.
One or more non-volatile computer-readable storage media store computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:

obtaining a point cloud image of the current frame and a point cloud image of the previous frame;

by performing point cloud matching on the point cloud image of the current frame and the point cloud image of the previous frame, determining, for each first point cloud in the point cloud image of the current frame, the matching second point cloud in the point cloud image of the previous frame;

calculating the displacement vector of each second point cloud relative to the matched first point cloud;

clustering the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds; and

determining each group of target point clouds as an obstacle.
The details of one or more embodiments of the present application are set forth in the following drawings and description. Other features and advantages of the present application will become apparent from the description, the drawings, and the claims.
Description of the drawings

In order to describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; a person of ordinary skill in the art can obtain other drawings based on these drawings without creative work.
FIG. 1 is an application scenario diagram of an obstacle detection method in an embodiment;

FIG. 2 is a schematic flowchart of an obstacle detection method in an embodiment;

FIG. 3A is a schematic bird's-eye view of the spatial coordinate system in an embodiment;

FIG. 3B is a three-dimensional schematic diagram of the spatial coordinate system in an embodiment;

FIG. 4 is a schematic flowchart of the step of performing point cloud matching according to point features in an embodiment;

FIG. 5 is a block diagram of an obstacle detection device in an embodiment;

FIG. 6 is a block diagram of a computer device in an embodiment.
Detailed description

In order to make the technical solutions and advantages of the present application clearer, the present application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
The obstacle detection method provided in this application can be applied in a variety of application environments. For example, it can be applied in the automatic driving application environment shown in FIG. 1, which may include a laser sensor 102 and a computer device 104. The computer device 104 can communicate with the laser sensor 102 via a network. The laser sensor 102 can collect multi-frame point cloud images of the surrounding environment; the computer device 104 can acquire the point cloud image of the previous frame and the point cloud image of the current frame collected by the laser sensor 102, and process them with the above obstacle detection method so as to detect obstacles. The laser sensor 102 may be a sensor carried by an automatic driving device, and may specifically include a lidar, a laser scanner, and the like.
In one of the embodiments, as shown in FIG. 2, an obstacle detection method is provided. Taking the method applied to the computer device 104 in FIG. 1 as an example, the method includes the following steps:

Step 202: obtain the point cloud image of the current frame and the point cloud image of the previous frame.
The laser sensor may be carried by a device capable of automatic driving, for example by an unmanned vehicle, or by a vehicle that includes an autonomous driving model. The laser sensor can be used to collect environmental data within the visual range.
Specifically, a laser sensor can be set up on the unmanned vehicle in advance. The laser sensor emits a detection signal toward the driving area at a preset frequency, compares the signal reflected back by objects in the driving area with the detection signal to obtain the surrounding environment data, and generates the corresponding point cloud image based on the environment data. A point cloud image records the objects in the scanned environment in the form of points; it is the collection of point clouds corresponding to multiple points on the objects' surfaces. A point cloud may specifically include various information about a single point on an object's surface, such as its three-dimensional position coordinates in the spatial coordinate system, laser reflection intensity, and color. The spatial coordinate system may be a Cartesian coordinate system. As shown in FIG. 3, the spatial coordinate system takes the center point of the laser sensor as the origin, the horizontal plane level with the laser sensor as the reference plane (i.e., the xoy plane), the axis level with the moving direction of the unmanned vehicle as the Y axis, the axis in the reference plane that passes through the origin and is transversely perpendicular to it as the X axis, and the axis that passes through the origin and is perpendicular to the reference plane as the Z axis. FIG. 3A is a schematic bird's-eye view of the spatial coordinate system in an embodiment. FIG. 3B is a three-dimensional schematic diagram of the spatial coordinate system in an embodiment.
The laser sensor embeds a timestamp in each collected point cloud image and sends the timestamped point cloud images to the computer device. In one of the embodiments, the laser sensor can send the locally stored point cloud images collected within a preset time period to the computer device at one time. The computer device sorts the point cloud images according to their timestamps and, of two temporally adjacent point cloud images, determines the one that comes first in the sorted order as the point cloud image of the current frame and the one that comes next as the point cloud image of the previous frame.
Step 204: by performing point cloud matching on the point cloud image of the current frame and the point cloud image of the previous frame, determine the second point cloud in the point cloud image of the previous frame that matches each first point cloud in the point cloud image of the current frame.
Specifically, the point cloud image of the current frame and the point cloud image of the previous frame are input into a trained matching model. The matching model can extract the spatial position information of each second point cloud from the point cloud image of the previous frame and filter out, from the point cloud image of the current frame, the multiple first point clouds whose distance from the spatial position of the second point cloud is less than a preset distance threshold. For example, when the distance threshold is q and the spatial position coordinates of the second point cloud are (x2, y2, z2), the filtered first point cloud coordinates (x1, y1, z1) satisfy x1∈x2±q, y1∈y2±q, z1∈z2±q. The matching model may specifically be a neural network model, a dual path network model (DPN, DualPathNetwork), a support vector machine, a logistic regression model, or the like.

Further, the matching model extracts a first point feature from the first point cloud and a second point feature from the second point cloud, performs similarity matching between the first point feature and the second point feature, and determines the first point cloud with the greatest similarity as the point cloud matching the second point cloud. The first point cloud and the corresponding second point cloud may be point cloud data collected at different times for the same point on the surface of the same object. The matching model traverses each second point cloud in the point cloud image of the previous frame until every second point cloud is matched to a corresponding first point cloud. Since the laser sensor can collect multiple point cloud images within one second, the position coordinates of a moving obstacle in the driving area differ little between two adjacent frames; therefore, the matching model only needs to perform similarity matching between the second point cloud and the first point clouds within the distance threshold to find the first point cloud corresponding to the second point cloud.
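A minimal sketch of the distance-gated feature matching just described. The learned matching model is replaced here by a plain nearest-neighbour search, and cosine similarity stands in for the unspecified feature comparison — both are illustrative assumptions, not the patent's model:

```python
import numpy as np

def match_points(prev_pts, prev_feats, curr_pts, curr_feats, radius=1.0):
    """For each previous-frame point (second point cloud), restrict the
    candidates to current-frame points within `radius`, then pick the
    candidate whose feature vector has the highest cosine similarity.
    Returns index pairs (prev_i, curr_j)."""
    matches = []
    for i, (p, f) in enumerate(zip(prev_pts, prev_feats)):
        cand = [j for j, q in enumerate(curr_pts)
                if np.linalg.norm(q - p) < radius]   # distance gate
        if not cand:
            continue                                  # unmatched point
        sims = [f @ curr_feats[j]
                / (np.linalg.norm(f) * np.linalg.norm(curr_feats[j]) + 1e-12)
                for j in cand]
        matches.append((i, cand[int(np.argmax(sims))]))
    return matches

prev_pts = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
curr_pts = np.array([[0.1, 0.0, 0.0], [3.1, 0.0, 0.0]])
prev_feats = np.array([[1.0, 0.0], [0.0, 1.0]])
curr_feats = np.array([[1.0, 0.0], [0.0, 1.0]])
print(match_points(prev_pts, prev_feats, curr_pts, curr_feats))
# [(0, 0), (1, 1)]
```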
In another embodiment, if all the feature matching results are less than the preset threshold, the matching model correspondingly increases the distance threshold, filters out the corresponding first point clouds from the point cloud image of the current frame according to the increased distance threshold, and then performs point cloud matching based on the re-filtered first point clouds.
In another embodiment, the training of the matching model includes: collecting a large number of point cloud images and dividing them into multiple image pairs according to their collection times. Each image pair includes a current-frame point cloud image and a previous-frame point cloud image. Matching points can be marked on the current-frame and previous-frame point cloud images, after which the marked images are input into the matching model, and the model adjusts its parameters according to the matching-point marks.
In another embodiment, simulation software may be used to generate multiple current frames and previous frames of point cloud images carrying matching-point labels.
Step 206: Calculate the displacement vector of each second point cloud relative to its matched first point cloud.
Here, a displacement vector is a directed line segment whose starting point is the coordinates of a moving mass point in the spatial coordinate system at the current moment, and whose end point is the coordinates of that moving mass point in the spatial coordinate system at the next moment.
Specifically, the computer device extracts the spatial position coordinates of the point from the second point cloud, extracts the spatial position coordinates of the point from the first point cloud matched with the second point cloud, and takes the spatial position coordinates of the second point cloud as the starting point and the spatial position coordinates of the matched first point cloud as the end point, thereby obtaining the displacement vector of the second point cloud relative to the matched first point cloud. Since the first point clouds and the second point clouds are both point cloud data collected by the same laser sensor, the spatial position coordinates they contain are measured in the same spatial coordinate system. Therefore, the computer device can draw the displacement vector simply by connecting the spatial position coordinates of the first point cloud with those of the corresponding second point cloud in the spatial coordinate system.
Step 208: Cluster the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds.
Specifically, the computer device calculates the absolute value (magnitude) of the displacement vector corresponding to each second point cloud based on a preset absolute value calculation formula, and takes this magnitude as the movement amplitude of a single point on an object surface in the driving area during the collection interval of the laser sensor. For example, in the above example, when the spatial position coordinates of the first point cloud are (x1, y1, z1) and those of the second point cloud are (x2, y2, z2), the corresponding absolute value calculation formula is:
|d| = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²)
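The displacement vector construction of step 206 and the absolute value (Euclidean distance) calculation above can be sketched together; the function names are illustrative only:

```python
import math

def displacement(second_pt, first_pt):
    """Displacement vector: starts at the second (previous-frame) point and
    ends at the matched first (current-frame) point."""
    return tuple(e - s for s, e in zip(second_pt, first_pt))

def magnitude(vec):
    """Length of the displacement vector, i.e. the formula above:
    sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2)."""
    return math.sqrt(sum(c * c for c in vec))
```

Because both frames share the sensor's coordinate system, the vector is just a component-wise subtraction of the two coordinate triples.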
Further, the computer device screens out from the previous frame of the point cloud image the second point clouds whose displacement-vector magnitudes are greater than a preset threshold (for convenience of description, these are referred to below as the point clouds to be classified), and treats the point clouds to be classified as point cloud data collected from the surfaces of moving objects in the driving area. The computer device then clusters the point clouds to be classified, thereby obtaining at least one group of target point clouds. The clustering may be performed in multiple ways. For example, the computer device may determine the spatial position coordinates of each point cloud to be classified, and group into one set of target point clouds those whose distances between adjacent spatial positions are less than a threshold. As another example, the computer device may group the point clouds to be classified based on clustering algorithms such as k-means clustering or Mean-Shift clustering.
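The first clustering variant above (screen by movement amplitude, then group by spatial proximity) can be sketched as a single-linkage flood fill. The threshold values and the flood-fill strategy are illustrative assumptions, not the patent's specified algorithm:

```python
import math

def cluster_moving_points(points, displacements, move_thresh=0.05, link_dist=0.3):
    """Screen out points whose displacement magnitude exceeds move_thresh
    ("points to be classified") and group them by spatial proximity:
    any two points closer than link_dist end up in the same group."""
    moving = [i for i, d in enumerate(displacements)
              if math.sqrt(sum(c * c for c in d)) > move_thresh]
    unvisited = set(moving)
    groups = []
    while unvisited:
        frontier = [unvisited.pop()]       # seed a new group
        group = list(frontier)
        while frontier:
            cur = frontier.pop()
            near = [j for j in unvisited
                    if math.dist(points[j], points[cur]) < link_dist]
            for j in near:                 # absorb all neighbors transitively
                unvisited.remove(j)
                group.append(j)
                frontier.append(j)
        groups.append(sorted(group))
    return groups
```

Each returned group corresponds to one set of target point clouds; k-means or Mean-Shift could be substituted for the proximity grouping as the text notes.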
Step 210: Determine each group of target point clouds as one obstacle.
Specifically, the computer device determines that different groups of target point clouds are point cloud data collected from different obstacles. At the same time, the computer device obtains the spatial position coordinates of the target point clouds in a group and their corresponding displacement vectors, and takes the direction in which the displacement vector points as the movement direction of the target point cloud. The computer device then determines the movement direction and position coordinates of the corresponding obstacle according to the spatial position coordinates and movement directions of the target point clouds.
In another embodiment, the computer device determines the movement speed of the corresponding obstacle according to the collection interval of the laser sensor and the magnitude of the displacement vector, predicts the spatial position of the obstacle after a preset time period according to its movement speed and movement direction, and generates an obstacle avoidance instruction according to the predicted position information. For example, when the computer device predicts that an obstacle moving in a straight line at its current direction and speed may collide with the unmanned vehicle after the preset time, the computer device generates a braking instruction according to the prediction result, so that the unmanned vehicle stops.
In another embodiment, suppose the previous frame of the point cloud image is image data collected by the laser sensor at time t-1, and the current frame is image data collected at time t. The computer device determines the movement speed of point cloud A according to the collection interval of the laser sensor and the magnitude of the displacement vector of point cloud A at time t-1, and predicts the spatial position of this point cloud at time t+2 according to its movement speed and direction. Then, when performing point cloud matching between the point cloud image collected at time t+1 and the point cloud image collected at time t+2, the predicted spatial position of point cloud A at time t+2 can be used to assist the matching.
More specifically, when the matching model extracts the spatial position information of point cloud A from the point cloud image collected at time t+1, and screens out from the point cloud image collected at time t+2 multiple first point clouds whose distances from point cloud A are less than a preset distance threshold, the computer device obtains the spatial position coordinates of each screened first point cloud, subtracts the predicted spatial position coordinates of point cloud A at time t+2 from the spatial position coordinates of the first point cloud to obtain a coordinate difference, and takes the absolute value of the coordinate difference to obtain an absolute difference. The computer device then further screens out, from the multiple first point clouds, the point cloud data whose absolute differences are less than a preset difference threshold, where the difference threshold is less than the distance threshold.
Since the collection interval of the laser sensor is generally 10 ms, it can be assumed that the movement speed and direction of an obstacle remain unchanged from time t-1 to time t+2. Therefore, the spatial position coordinates at time t+2 estimated from the obstacle's movement direction and speed at time t-1 have a high degree of confidence, and the first point cloud that matches the second point cloud can be assumed to lie near the estimated spatial position coordinates. The multiple point clouds can thus be further screened based on the estimated spatial coordinates, which reduces the amount of computation the subsequent matching model performs on the point clouds and thereby improves the efficiency of obstacle detection.
In this embodiment, by performing point cloud matching between the current frame and the previous frame of the point cloud image, the first point cloud and the second point cloud corresponding to the same point on the surface of the same object in the two frames can be determined. By calculating the displacement vectors between the matched first and second point clouds, the displacement of objects in the driving area during the collection interval of the laser sensor can be determined, so that points whose displacement is greater than a threshold are determined to be moving points and points whose displacement is less than the threshold are determined to be stationary points. By clustering the moving points, all moving obstacles in the driving area can then be identified. Compared with traditional detection of obstacles of known categories through machine learning models, this solution does not need to identify the obstacle category; it only needs to determine whether an obstacle in the driving area is a moving obstacle in order to achieve the goal of avoiding moving obstacles.
In one embodiment, performing point cloud matching between the current frame and the previous frame of the point cloud image to determine, for each first point cloud in the current frame, the matched second point cloud in the previous frame includes:
Step 302: Obtain the point features corresponding to the second point clouds extracted from the previous frame of the point cloud image, and the point features corresponding to the first point clouds extracted from the current frame of the point cloud image;
Step 304: Perform similarity matching between the point features corresponding to the second point clouds and the point features corresponding to the first point clouds;
Step 306: Determine a first point cloud and a second point cloud whose similarity matching result meets a condition as a matched point cloud pair.
Specifically, the computer device may input the collected point cloud images into the matching model. The matching model rasterizes the collected point cloud image, divides the three-dimensional space corresponding to the point cloud image into multiple pillar-shaped grids, and determines the grid to which each point cloud belongs according to the spatial position coordinates of the point cloud. The matching model applies a preset convolution kernel to the point clouds within each grid, thereby extracting high-dimensional point features from the point clouds. The matching model may be one of a variety of neural network models; for example, it may be a convolutional neural network model. The matching model performs feature matching between the point features extracted from a second point cloud and the point features extracted from multiple first point clouds, and determines the first point cloud with the highest matching degree as the point cloud matched with the second point cloud.
In this embodiment, since the matching model is a pre-trained machine learning model, the current frame and the previous frame of the point cloud image can be accurately matched based on the matching model, so that the computer device can subsequently judge the movement state of obstacles based on the successfully matched point clouds.
In one embodiment, obtaining the point features corresponding to the second point clouds extracted from the previous frame of the point cloud image includes: performing structuring processing on the previous frame of the point cloud image to obtain a processing result; and encoding the second point clouds in the previous frame of the point cloud image based on the processing result to obtain the point features corresponding to the second point clouds.
Specifically, the matching model may perform structuring processing on the previous frame of the point cloud image to obtain a structured processing result. For example, the matching model may rasterize the previous frame of the point cloud image, or voxelize it. Taking rasterization as an example, the computer device may rasterize the plane whose origin is the laser sensor, dividing the plane into multiple grids. The structured space obtained after the structuring processing may be a pillar-shaped space, and points may be distributed in the pillar-shaped space along the vertical axis of each grid; that is, the abscissa and ordinate of the points in a pillar-shaped space fall within the coordinates of the corresponding grid, and each pillar-shaped space may include at least one point. The matching model may then encode each second point cloud according to the structured processing result to obtain the point features corresponding to the second point clouds.
It is easy to understand that the above point feature extraction method can also be used to extract the point features of each first point cloud in the current frame of the point cloud image.
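The grid assignment underlying the rasterization above can be sketched as follows. The cell size and grid origin are assumed parameters for illustration; the subsequent encoding of the points inside each pillar is a learned step not reproduced here:

```python
def pillar_index(x, y, cell=0.2, x_min=-40.0, y_min=-40.0):
    """Map a point's horizontal coordinates onto the pillar-shaped grid: all
    points that share an index pair fall into the same pillar, regardless of
    their height. Grid extent and cell size are illustrative values."""
    return int((x - x_min) / cell), int((y - y_min) / cell)
```

Grouping the points of a frame by `pillar_index` yields the pillar-shaped spaces described above, each holding at least one point, ready for per-pillar feature encoding.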
In this embodiment, the corresponding processing result can be obtained by structuring the point cloud image, and point features can be extracted from the point cloud data by encoding the processing result. Since point cloud data is not affected by illumination, target movement speed, and the like, the matching model suffers little interference when extracting point features, which ensures the accuracy of feature extraction.
In one embodiment, calculating the displacement vector of each second point cloud relative to the matched first point cloud includes: obtaining the spatial position information of the first point cloud and the spatial position information of the matched second point cloud; and determining the displacement vector of the second point cloud relative to the corresponding first point cloud based on the spatial position information of the first point cloud and that of the matched second point cloud.
Here, a point cloud is composed of point cloud data, and the point cloud data includes information such as the three-dimensional coordinates of the point in the spatial coordinate system and the laser reflection intensity.
Specifically, the computer device extracts the three-dimensional coordinates of the corresponding point in the spatial coordinate system from the first point cloud and from the matched second point cloud respectively, takes the three-dimensional coordinates extracted from the second point cloud as the starting point of the displacement vector and the three-dimensional coordinates extracted from the first point cloud as the end point of the displacement vector, and inserts the displacement vector containing the starting point and the end point into the point cloud data corresponding to the second point cloud, obtaining point cloud data of the form ((x1, y1, z1), d), where d denotes the displacement vector.
In another embodiment, if the previous frame and the current frame of the point cloud image are collected by two laser sensors mounted on the left and right sides of the unmanned vehicle respectively, the computer device also needs to perform coordinate registration on the current frame and the previous frame, converting the point cloud images collected in different coordinate systems into two frames of point cloud images in the same coordinate system, so that the displacement vectors are calculated from two frames in the same coordinate system. Specifically, the computer device obtains the point cloud images collected at the same moment by the two laser sensors mounted on the left and right sides of the unmanned vehicle, records the point cloud image collected by the laser sensor mounted on the left side of the unmanned vehicle as the left point cloud image, and records the point cloud image collected by the laser sensor mounted on the right side as the right point cloud image. The computer device extracts from the left point cloud image the spatial coordinates of the left point cloud collected for an object point A, extracts from the right point cloud image the spatial coordinates of the right point cloud collected for the same object point A, and, based on the spatial coordinates of the left point cloud and the right point cloud, performs coordinate conversion on the current frame or the previous frame of the point cloud image, so that the current frame and the previous frame are in the same spatial coordinate system.
In this embodiment, by extracting the spatial position information of the point clouds from the matched point cloud pairs, the displacement vector can be determined accurately based on the extracted spatial position coordinates, so that moving point clouds can subsequently be screened out from the multiple point clouds based on the displacement vectors.
In one embodiment, clustering the second point clouds whose displacement vectors are greater than a threshold to obtain at least one group of target point clouds includes: screening out from the previous frame of the point cloud image the second point clouds whose displacement vectors are greater than the threshold, recorded as point clouds to be classified; obtaining the spatial position information of the point clouds to be classified; determining the movement directions of the point clouds to be classified based on the displacement vectors; and clustering the point clouds to be classified whose movement directions are similar and whose distances between adjacent spatial positions are less than a threshold, to obtain at least one group of target point clouds.
Specifically, after calculating the displacement vector of each second point cloud relative to the corresponding first point cloud, the computer calculates the absolute value of the displacement vector to obtain the distance between each second point cloud and the corresponding first point cloud, that is, the movement amplitude of the point on the object surface within the collection interval. The computer screens out from the previous frame of the point cloud image the point clouds to be classified whose movement amplitudes are greater than a preset movement threshold, and extracts the spatial position coordinates and corresponding displacement vectors from the point clouds to be classified. The movement threshold can be set according to actual needs; for example, when the collection interval of the laser sensor is 10 ms, the movement threshold can be set to 0.05 meters.
Further, the computer determines the current movement direction of the unmanned vehicle through a direction-finding compass installed in the vehicle, and, according to the current movement direction, determines the direction represented by each axis of the spatial coordinate system established with the center point of the laser sensor as the origin. For example, after the computer determines from the compass that the unmanned vehicle is currently moving north, it judges the positive Y axis of the three-dimensional coordinate system to be north, the positive X axis to be east, the negative Y axis to be south, and the negative X axis to be west. The computer projects the displacement vector onto the XOY plane, calculates the angles between the projected displacement vector and the X and Y axes based on the starting and end coordinates of the displacement vector, and then determines the movement direction of the point cloud to be classified corresponding to the displacement vector according to the calculated angles and the directions corresponding to the X and Y axes.
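The direction determination above can be sketched with `atan2` instead of separate per-axis angles; returning a continuous compass bearing is an assumption for illustration, using the axis convention of the example (+Y = north, +X = east):

```python
import math

def heading_from_displacement(start, end):
    """Compass heading of a displacement vector projected onto the XOY plane,
    in degrees clockwise from north, in [0, 360). The z components of the
    start and end points are ignored by the projection."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    # atan2(east-component, north-component) gives the clockwise-from-north angle.
    angle = math.degrees(math.atan2(dx, dy))
    return angle % 360.0
```

Point clouds whose headings are close (and whose positions are close) would then be placed in the same group by the clustering step below.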
Further, the computer clusters the point clouds to be classified according to their movement directions and spatial position coordinates, dividing into the same group those point clouds to be classified whose movement directions are similar and whose distances between adjacent spatial positions are less than the threshold.
In this embodiment, since the calculated movement direction and spatial position information are not affected by ambient light or obstacle shape, this solution can effectively reduce environmental influence compared with the traditional approach of clustering obstacles based on simple geometric shape features and environmental information, thereby greatly improving the accuracy of clustering.
In one embodiment, the above obstacle detection method further includes: calculating the motion parameters of the corresponding obstacle according to the displacement vector of each target point cloud in the same group of target point clouds; generating a corresponding obstacle avoidance instruction according to the motion parameters of the obstacle; and controlling the unmanned vehicle to drive based on the obstacle avoidance instruction.
Here, motion parameters refer to information values such as the movement speed and movement direction of an object.
Specifically, the computer device obtains the collection frequency of the laser sensor, and calculates the movement speed of each point cloud based on the collection frequency and the displacement vector of that point cloud in the group. The computer device performs a weighted average over the movement speeds of all point clouds in the group to obtain a mean speed, and judges the mean speed to be the movement speed of the corresponding obstacle. At the same time, the computer device obtains the displacement vector of each point cloud in the group, determines the movement direction of each point cloud based on its displacement vector, and performs statistics on the movement directions of all point clouds in the group, thereby obtaining the movement direction of the corresponding obstacle.
Further, the computer device determines the area where the corresponding obstacle is located according to the spatial coordinates of each point cloud in the group, and compares the area where the obstacle is located with the area where the unmanned vehicle is located, so as to determine whether the obstacle and the unmanned vehicle are in the same lane. If the obstacle and the unmanned vehicle are in the same lane, the computer device obtains the distance between the obstacle and the unmanned vehicle, and calculates the collision probability between the unmanned vehicle and the obstacle according to the current speed of the unmanned vehicle, its maximum braking deceleration, and the distance. When the collision probability is greater than a threshold, the computer device generates a lane change instruction and controls the unmanned vehicle to change lanes based on it; if the collision probability is less than the threshold, the computer device generates a deceleration instruction and controls the unmanned vehicle to slow down based on it.
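One hedged way to realize the same-lane risk decision above is to compare the kinematic braking distance with the gap to the obstacle. The patent does not give its probability model; the stopping-distance formula v²/(2a), the safety margin, and the decision rule here are all assumptions for illustration:

```python
def braking_action(ego_speed, max_decel, gap, obstacle_speed=0.0, margin=2.0):
    """Decide between 'change_lane' and 'decelerate' for a same-lane obstacle.
    ego_speed/obstacle_speed in m/s, max_decel in m/s^2, gap/margin in meters."""
    closing_speed = max(ego_speed - obstacle_speed, 0.0)
    # Kinematic stopping distance at full braking on the closing speed.
    stop_dist = closing_speed ** 2 / (2.0 * max_decel)
    # If even full braking cannot preserve the safety margin, the collision
    # risk is treated as high and the vehicle changes lanes instead.
    return "change_lane" if stop_dist + margin > gap else "decelerate"
```

This mirrors the structure of the embodiment: high risk triggers a lane change instruction, lower risk a deceleration instruction.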
In this embodiment, since the computer device generates the obstacle avoidance instruction by combining the spatial position coordinates of the obstacle, its movement speed, and the current vehicle speed, the generated obstacle avoidance instruction has a high degree of confidence, so that the unmanned vehicle can drive correctly based on it, greatly improving the safety of unmanned driving.
In one embodiment, calculating the motion parameters of the corresponding obstacle according to the displacement vector of each target point cloud in the group includes: determining the movement direction of each target point cloud in the group based on its displacement vector; counting the number of target point clouds corresponding to each movement direction; and determining the movement direction of the target point clouds with the largest count as the movement direction of the corresponding obstacle.
Specifically, the computer device obtains the displacement vector of each target point cloud in the group, and determines the movement direction of each target point cloud based on the displacement vector and the axis directions of the three-dimensional coordinate system. The computer device counts the number of target point clouds corresponding to each distinct movement direction, and determines the movement direction with the largest number of target point clouds as the movement direction of the corresponding obstacle.
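The direction count described above is a simple majority vote over the per-point directions, for example (direction labels are assumed to be discrete values such as compass names):

```python
from collections import Counter

def obstacle_direction(directions):
    """Movement direction of an obstacle: the direction shared by the largest
    number of its target point clouds."""
    return Counter(directions).most_common(1)[0][0]
```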
In this embodiment, since the movement directions of different parts of the same obstacle are roughly the same, the movement direction of the largest number of target point clouds can be directly judged to be the movement direction of the corresponding obstacle. In addition, since counting involves only simple accumulation, this solution can obtain the movement direction of the corresponding obstacle through simple calculations, which not only saves the computing resources of the computer but also improves the obstacle detection rate.
In one embodiment, calculating the motion parameters of the corresponding obstacle according to the displacement vector of each target point cloud includes: obtaining the collection frequency of the laser sensor; determining the displacement value of each target point cloud in the group based on its displacement vector; determining the movement speed of each target point cloud in the group according to the collection frequency and the displacement value; and performing a mixed calculation on the movement speeds of the target point clouds in the group to obtain the movement speed of the corresponding obstacle.
具体地，当获取得到同组内每个目标点云的位移矢量后，计算机设备对位移矢量进行绝对值运算，并将经绝对值运算后的运算结果确定为目标点云的位移值。计算机设备将位移值除以激光采集传感器的采集间隔时间，得到目标点云的运动速度，将同组内每个目标点云的运动速度进行混合计算，得到对应障碍物的运动速度。计算机设备可以基于多种混合运算算法对每个目标点云的运动速度进行综合计算，例如计算机设备可以对同组内全部目标点云的运动速度进行求均值计算，将经均值计算得到的平均速度确定为对应障碍物的运动速度。又例如，计算机设备可以预先去除一定百分比的具有最大运动速度的目标点云和具有最小运动速度的目标点云，之后再对剩余点云进行求均值计算。Specifically, after obtaining the displacement vector of each target point cloud in the same group, the computer device takes the magnitude of the displacement vector and uses the result as the displacement value of the target point cloud. The computer device divides the displacement value by the collection interval of the laser sensor to obtain the movement speed of the target point cloud, and then combines the movement speeds of the target point clouds in the same group to obtain the movement speed of the corresponding obstacle. The computer device may combine the speeds using any of several aggregation algorithms: for example, it may average the movement speeds of all target point clouds in the group and determine that average as the obstacle's movement speed; alternatively, it may first discard a certain percentage of the target point clouds with the largest and smallest movement speeds, and then average the remainder.
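The speed computation and the trimmed-average variant described above can be sketched as follows; the `trim_frac` parameter and the exact trimming rule are assumptions for illustration, since the patent leaves the percentage unspecified:

```python
def obstacle_speed(displacements, sample_rate_hz, trim_frac=0.1):
    """Estimate an obstacle's speed from its member points' displacement
    magnitudes between two frames, using a trimmed mean to discard
    outlier points before averaging."""
    dt = 1.0 / sample_rate_hz            # interval between two lidar frames
    speeds = sorted(mag / dt for mag in displacements)
    k = int(len(speeds) * trim_frac)     # drop the k fastest and k slowest
    kept = speeds[k:len(speeds) - k] if k else speeds
    return sum(kept) / len(kept)
```

With `trim_frac=0`, this degenerates to the plain mean, which is the first aggregation variant mentioned in the paragraph.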
在本实施例中，计算机设备可以根据激光传感器的采集频率和位移矢量确定目标点云的运动速度，根据每个目标点云的运动速度确定对应障碍物的运动速度，有助于计算机设备根据障碍物的运动速度对无人驾驶设备进行提示或控制。In this embodiment, the computer device can determine the movement speed of each target point cloud from the laser sensor's acquisition frequency and the displacement vector, and determine the movement speed of the corresponding obstacle from the movement speeds of the target point clouds, which helps the computer device to prompt or control the unmanned device according to the obstacle's movement speed.
应该理解的是，虽然图2、4的流程图中的各个步骤按照箭头的指示依次显示，但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明，这些步骤的执行并没有严格的顺序限制，这些步骤可以以其它的顺序执行。而且，图2、4中的至少一部分步骤可以包括多个子步骤或者多个阶段，这些子步骤或者阶段并不必然是在同一时刻执行完成，而是可以在不同的时刻执行，这些子步骤或者阶段的执行顺序也不必然是依次进行，而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。It should be understood that although the steps in the flowcharts of FIGS. 2 and 4 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated. Unless explicitly stated herein, there is no strict order restricting the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 2 and 4 may include multiple sub-steps or multiple stages; these sub-steps or stages are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential: they may be executed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
在其中一个实施例中,如图5所示,提供了一种障碍物检测装置500,包括:点云匹配模块502、点云聚类模块504和判定模块506,其中:In one of the embodiments, as shown in FIG. 5, an obstacle detection device 500 is provided, including: a point cloud matching module 502, a point cloud clustering module 504, and a determination module 506, wherein:
点云匹配模块502，用于获取当前帧点云图像和前一帧点云图像；通过对当前帧点云图像和前一帧点云图像进行点云匹配，确定当前帧点云图像中每个第一点云与前一帧点云图像中相匹配的第二点云。The point cloud matching module 502 is configured to obtain the point cloud image of the current frame and the point cloud image of the previous frame, and to determine, by performing point cloud matching between the two images, the second point cloud in the previous frame that matches each first point cloud in the current frame.
点云聚类模块504，用于计算每个第二点云相对于相匹配的第一点云的位移矢量；对位移矢量大于阈值的第二点云进行聚类，得到至少一组目标点云。The point cloud clustering module 504 is configured to calculate the displacement vector of each second point cloud relative to the matched first point cloud, and to cluster the second point clouds whose displacement vectors are greater than a threshold, obtaining at least one group of target point clouds.
判定模块506,用于将每组目标点云判定为一个障碍物。The determination module 506 is used to determine each group of target point clouds as an obstacle.
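Taken together, the three modules form a match → displacement → cluster → judge pipeline. A hypothetical, greatly simplified 2-D sketch follows; the nearest-neighbour matching and greedy single-linkage clustering here are illustrative assumptions (the patent matches by point-feature similarity, not by position):

```python
import math

def detect_obstacles(prev_points, curr_points, disp_threshold, cluster_dist):
    """Sketch of the match -> displacement -> cluster pipeline.
    prev_points / curr_points: lists of (x, y) tuples."""
    moving = []
    for p in prev_points:
        # Matching step (naive nearest neighbour stands in for
        # the patent's feature-similarity matching).
        q = min(curr_points, key=lambda c: math.dist(p, c))
        if math.dist(p, q) > disp_threshold:   # displacement check
            moving.append(p)
    # Greedy single-linkage clustering of the moving points.
    clusters = []
    for p in moving:
        for cluster in clusters:
            if any(math.dist(p, m) < cluster_dist for m in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return clusters  # each cluster is judged to be one obstacle
```

Points whose displacement stays under the threshold (typically static background) never enter the clustering step, so each returned cluster corresponds to one moving obstacle.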
在其中一个实施例中，上述点云匹配模块502还用于获取从前一帧点云图像提取出的第二点云对应的点特征，以及从当前点云图像提取出的第一点云对应的点特征；对第二点云对应的点特征和第一点云对应的点特征进行相似度匹配；将相似度匹配结果符合条件的第一点云以及第二点云确定为相匹配的点云对。In one of the embodiments, the above point cloud matching module 502 is further configured to obtain the point features corresponding to the second point clouds extracted from the previous frame of the point cloud image, and the point features corresponding to the first point clouds extracted from the current point cloud image; perform similarity matching between the point features corresponding to the second point clouds and those corresponding to the first point clouds; and determine a first point cloud and a second point cloud whose similarity matching result meets the condition as a matched point cloud pair.
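A minimal sketch of the feature-similarity matching step; cosine similarity and the `min_sim` threshold are assumptions, since the module does not prescribe a particular similarity measure or matching condition:

```python
def match_by_similarity(prev_feats, curr_feats, min_sim=0.9):
    """For each current-frame point feature, find the previous-frame
    feature with the highest cosine similarity; keep pairs whose
    similarity clears the threshold."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    pairs = []
    for i, cf in enumerate(curr_feats):
        j, sim = max(((j, cosine(cf, pf)) for j, pf in enumerate(prev_feats)),
                     key=lambda t: t[1])
        if sim >= min_sim:
            pairs.append((i, j))  # (current index, previous index)
    return pairs
```

Pairs that fail the threshold are simply dropped, which mirrors the "meets the condition" filter in the module description.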
在其中一个实施例中，上述点云匹配模块502还用于对前一帧点云图像进行结构化处理，得到处理结果；基于处理结果对前一帧点云图像中的第二点云进行编码，得到第二点云对应的点特征。In one of the embodiments, the above point cloud matching module 502 is further configured to perform structural processing on the point cloud image of the previous frame to obtain a processing result, and to encode the second point clouds in the previous frame of the point cloud image based on the processing result, obtaining the point features corresponding to the second point clouds.
在其中一个实施例中，上述点云聚类模块504还用于获取第一点云的空间位置信息以及相匹配的第二点云的空间位置信息；基于第一点云的空间位置信息以及相匹配的第二点云的空间位置信息，确定第二点云相对于对应第一点云的位移矢量。In one of the embodiments, the above point cloud clustering module 504 is further configured to obtain the spatial position information of the first point cloud and the spatial position information of the matched second point cloud, and to determine, based on both pieces of spatial position information, the displacement vector of the second point cloud relative to the corresponding first point cloud.
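The displacement vector of a matched pair is simply the component-wise difference of the two spatial positions; a minimal sketch (the helper names are illustrative):

```python
def displacement_vector(p_prev, p_curr):
    """Displacement of a matched point pair: current-frame position
    minus previous-frame position, component-wise."""
    return tuple(c - p for p, c in zip(p_prev, p_curr))

def magnitude(v):
    """Euclidean length of a displacement vector, i.e. the displacement
    value used later for the speed computation."""
    return sum(x * x for x in v) ** 0.5
```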
在其中一个实施例中，上述点云聚类模块504还用于从前一帧点云图像中筛选出位移矢量大于阈值的第二点云，记作待分类点云；获取待分类点云的空间位置信息；基于位移矢量确定待分类点云的运动方向；对运动方向类似以及相邻空间位置间距小于阈值的待分类点云进行聚类，得到至少一组目标点云。In one of the embodiments, the above point cloud clustering module 504 is further configured to filter out, from the previous frame of the point cloud image, the second point clouds whose displacement vectors are greater than a threshold, recorded as point clouds to be classified; obtain the spatial position information of the point clouds to be classified; determine the movement directions of the point clouds to be classified based on the displacement vectors; and cluster the point clouds to be classified whose movement directions are similar and whose adjacent spatial positions are closer than a threshold, obtaining at least one group of target point clouds.
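A hypothetical sketch of clustering by both spatial proximity and direction similarity as described above; the greedy single-linkage strategy and the 30° angle threshold are assumptions, not values from the patent:

```python
import math

def cluster_moving_points(points, max_gap, max_angle_deg=30.0):
    """Greedy clustering: a point joins a cluster when it is close in
    space to a member AND their displacement vectors point roughly the
    same way. Each item is (position, displacement), both 2-D tuples."""
    def angle_between(u, v):
        dot = u[0] * v[0] + u[1] * v[1]
        nu, nv = math.hypot(*u), math.hypot(*v)
        # Clamp to avoid domain errors from floating-point round-off.
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

    clusters = []
    for pos, disp in points:
        for cluster in clusters:
            if any(math.dist(pos, p) < max_gap
                   and angle_between(disp, d) < max_angle_deg
                   for p, d in cluster):
                cluster.append((pos, disp))
                break
        else:
            clusters.append([(pos, disp)])
    return clusters
```

The direction check keeps two nearby obstacles moving in opposite directions from being merged into one cluster, which is the point of adding the direction constraint on top of plain distance-based clustering.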
在其中一个实施例中，上述障碍物检测装置500包括避障指令生成模块508，用于根据同组目标点云中每个目标点云的位移矢量计算相应障碍物的运动参数；根据障碍物的运动参数生成对应的避障指令；基于所述避障指令控制无人车进行行驶。In one of the embodiments, the obstacle detection device 500 includes an obstacle avoidance instruction generation module 508, which is configured to calculate the motion parameters of the corresponding obstacle according to the displacement vector of each target point cloud in the same group of target point clouds, generate a corresponding obstacle avoidance instruction according to the obstacle's motion parameters, and control the unmanned vehicle to drive based on the obstacle avoidance instruction.
在其中一个实施例中，避障指令生成模块508还用于基于所述位移矢量确定同组内每个目标点云的运动方向；统计各运动方向所对应的目标点云的点云数量；将点云数量最多的目标点云的运动方向确定为对应障碍物的运动方向。In one of the embodiments, the obstacle avoidance instruction generation module 508 is further configured to determine the movement direction of each target point cloud in the same group based on the displacement vector, count the number of target point clouds corresponding to each movement direction, and determine the movement direction of the target point clouds with the largest count as the movement direction of the corresponding obstacle.
在其中一个实施例中，避障指令生成模块508还用于获取激光传感器的采集频率；基于位移矢量确定同组内每个目标点云的位移值；根据采集频率以及位移值确定同组内每个目标点云的运动速度；对同组内每个目标点云的运动速度进行混合计算，得到对应障碍物的运动速度。In one of the embodiments, the obstacle avoidance instruction generation module 508 is further configured to obtain the acquisition frequency of the laser sensor; determine the displacement value of each target point cloud in the same group based on the displacement vector; determine the movement speed of each target point cloud in the group according to the acquisition frequency and the displacement value; and combine the movement speeds of the target point clouds in the group to obtain the movement speed of the corresponding obstacle.
关于障碍物检测装置的具体限定可以参见上文中对于障碍物检测方法的限定,在此不再赘述。上述障碍物检测装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。For the specific limitation of the obstacle detection device, please refer to the above limitation of the obstacle detection method, which will not be repeated here. Each module in the above obstacle detection device can be implemented in whole or in part by software, hardware, and a combination thereof. The above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
在一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图6所示。该计算机设备包括通过系统总线连接的处理器、存储器、网络接口和数据库。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统、计算机可读指令和数据库。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该 计算机设备的数据库用于存储检测数据。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机可读指令被处理器执行时以实现一种障碍物检测方法。In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure diagram may be as shown in FIG. 6. The computer equipment includes a processor, a memory, a network interface, and a database connected through a system bus. Among them, the processor of the computer device is used to provide calculation and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer readable instructions, and a database. The internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium. The database of the computer equipment is used to store detection data. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer readable instruction is executed by the processor to realize an obstacle detection method.
本领域技术人员可以理解，图6中示出的结构，仅仅是与本申请方案相关的部分结构的框图，并不构成对本申请方案所应用于其上的计算机设备的限定，具体的计算机设备可以包括比图中所示更多或更少的部件，或者组合某些部件，或者具有不同的部件布置。Those skilled in the art can understand that the structure shown in FIG. 6 is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution is applied. A specific computer device may include more or fewer components than shown in the figure, combine some components, or have a different arrangement of components.
一种计算机设备，包括存储器和一个或多个处理器，存储器中储存有计算机可读指令，计算机可读指令被处理器执行时，使得一个或多个处理器执行时实现上述方法实施例中的步骤。A computer device includes a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the processors, cause the one or more processors to implement the steps in the above method embodiments.
一个或多个存储有计算机可读指令的非易失性计算机可读存储介质，计算机可读指令被一个或多个处理器执行时，使得一个或多个处理器执行时实现上述方法实施例中的步骤。One or more non-volatile computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to implement the steps in the above method embodiments.
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程，是可以通过计算机可读指令来指令相关的硬件来完成，所述的计算机可读指令可存储于一非易失性计算机可读取存储介质中，该计算机可读指令在执行时，可包括如上述各方法的实施例的流程。其中，本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用，均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器（ROM）、可编程ROM（PROM）、电可编程ROM（EPROM）、电可擦除可编程ROM（EEPROM）或闪存。易失性存储器可包括随机存取存储器（RAM）或者外部高速缓冲存储器。作为说明而非局限，RAM以多种形式可得，诸如静态RAM（SRAM）、动态RAM（DRAM）、同步DRAM（SDRAM）、双数据率SDRAM（DDRSDRAM）、增强型SDRAM（ESDRAM）、同步链路（Synchlink）DRAM（SLDRAM）、存储器总线（Rambus）直接RAM（RDRAM）、直接存储器总线动态RAM（DRDRAM）、以及存储器总线动态RAM（RDRAM）等。A person of ordinary skill in the art can understand that all or part of the processes in the above method embodiments can be implemented by computer-readable instructions instructing the relevant hardware; the computer-readable instructions can be stored in a non-volatile computer-readable storage medium, and when executed may include the processes of the above method embodiments. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
以上实施例的各技术特征可以进行任意的组合，为使描述简洁，未对上述实施例中的各个技术特征所有可能的组合都进行描述，然而，只要这些技术特征的组合不存在矛盾，都应当认为是本说明书记载的范围。The technical features of the above embodiments can be combined arbitrarily. To keep the description concise, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope described in this specification.
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。The above-mentioned embodiments only express several implementation manners of the present application, and the description is relatively specific and detailed, but it should not be understood as a limitation on the scope of the invention patent. It should be pointed out that for those of ordinary skill in the art, without departing from the concept of this application, several modifications and improvements can be made, and these all fall within the protection scope of this application. Therefore, the scope of protection of the patent of this application shall be subject to the appended claims.

Claims (20)

  1. 一种障碍物检测方法,包括:An obstacle detection method includes:
    获取当前帧点云图像和前一帧点云图像;Obtain the point cloud image of the current frame and the point cloud image of the previous frame;
    通过对当前帧点云图像和前一帧点云图像进行点云匹配,确定所述当前帧点云图像中每个第一点云与前一帧点云图像中相匹配的第二点云;By performing point cloud matching on the point cloud image of the current frame and the point cloud image of the previous frame, determining that each first point cloud in the point cloud image of the current frame matches the second point cloud in the point cloud image of the previous frame;
    计算每个第二点云相对于相匹配的第一点云的位移矢量;Calculate the displacement vector of each second point cloud relative to the matched first point cloud;
    对所述位移矢量大于阈值的第二点云进行聚类,得到至少一组目标点云;Clustering the second point cloud whose displacement vector is greater than the threshold to obtain at least one set of target point cloud;
    将每组目标点云判定为一个障碍物。Each group of target point cloud is judged as an obstacle.
  2. 根据权利要求1所述的方法，其特征在于，所述通过对当前帧点云图像和前一帧点云图像进行点云匹配，确定所述当前帧点云图像中每个第一点云与前一帧点云图像中相匹配的第二点云，包括：The method according to claim 1, wherein determining, by performing point cloud matching between the current-frame point cloud image and the previous-frame point cloud image, the second point cloud in the previous frame that matches each first point cloud in the current frame comprises:
    获取从所述前一帧点云图像提取出的第二点云对应的点特征,以及从当前点云图像提取出的第一点云对应的点特征;Acquiring the point feature corresponding to the second point cloud extracted from the previous frame of point cloud image, and the point feature corresponding to the first point cloud extracted from the current point cloud image;
    对第二点云对应的点特征和第一点云对应的点特征进行相似度匹配;Performing similarity matching on the point feature corresponding to the second point cloud and the point feature corresponding to the first point cloud;
    将相似度匹配结果符合条件的第一点云以及第二点云确定为相匹配的点云对。The first point cloud and the second point cloud whose similarity matching result meets the condition are determined as the matched point cloud pair.
  3. 根据权利要求2所述的方法,其特征在于,所述获取从所述前一帧点云图像提取出的第二点云对应的点特征,包括:The method according to claim 2, wherein said acquiring the point feature corresponding to the second point cloud extracted from the point cloud image of the previous frame comprises:
    对所述前一帧点云图像进行结构化处理,得到处理结果;Performing structural processing on the point cloud image of the previous frame to obtain a processing result;
    基于所述处理结果对所述前一帧点云图像中的第二点云进行编码,得到所述第二点云对应的点特征。Encoding the second point cloud in the point cloud image of the previous frame based on the processing result to obtain the point feature corresponding to the second point cloud.
  4. 根据权利要求1所述的方法,其特征在于,所述计算每个第二点云相对于相匹配的第一点云的位移矢量,包括:The method according to claim 1, wherein the calculating the displacement vector of each second point cloud relative to the matched first point cloud comprises:
    获取第一点云的空间位置信息以及相匹配的第二点云的空间位置信息;Acquiring the spatial location information of the first point cloud and the matching spatial location information of the second point cloud;
    基于所述第一点云的空间位置信息以及相匹配的第二点云的空间位置信息,确定所述第二点云相对于对应第一点云的位移矢量。Based on the spatial position information of the first point cloud and the matched spatial position information of the second point cloud, a displacement vector of the second point cloud relative to the corresponding first point cloud is determined.
  5. 根据权利要求1所述的方法，其特征在于，所述对位移矢量大于阈值的第二点云进行聚类，得到至少一组目标点云，包括：The method according to claim 1, wherein clustering the second point clouds whose displacement vectors are greater than the threshold to obtain at least one group of target point clouds comprises:
    从前一帧点云图像中筛选出位移矢量大于阈值的第二点云,记作待分类点云;The second point cloud whose displacement vector is greater than the threshold is selected from the point cloud image of the previous frame and recorded as the point cloud to be classified;
    获取所述待分类点云的空间位置信息;Acquiring spatial position information of the point cloud to be classified;
    基于所述位移矢量确定所述待分类点云的运动方向;Determining the movement direction of the point cloud to be classified based on the displacement vector;
    对所述运动方向类似以及相邻空间位置间距小于阈值的待分类点云进行聚类,得到至少一组目标点云。Clustering the to-be-classified point clouds whose motion directions are similar and the distance between adjacent spatial positions is less than a threshold value to obtain at least one set of target point clouds.
  6. 根据权利要求1所述的方法,其特征在于,所述方法还包括:The method according to claim 1, wherein the method further comprises:
    根据同组目标点云中每个目标点云的位移矢量计算相应障碍物的运动参数;Calculate the motion parameters of the corresponding obstacle according to the displacement vector of each target point cloud in the same group of target point clouds;
    根据所述障碍物的运动参数生成对应的避障指令;Generate a corresponding obstacle avoidance instruction according to the motion parameters of the obstacle;
    基于所述避障指令控制无人车进行行驶。The unmanned vehicle is controlled to run based on the obstacle avoidance instruction.
  7. 根据权利要求6所述的方法,其特征在于,所述运动参数包括运动方向;所述根据目标点云中每个目标点云的位移矢量计算相应障碍物的运动参数包括:The method according to claim 6, wherein the motion parameter includes a motion direction; and the calculation of the motion parameter of the corresponding obstacle according to the displacement vector of each target point cloud in the target point cloud comprises:
    基于所述位移矢量确定同组内每个目标点云的运动方向;Determine the movement direction of each target point cloud in the same group based on the displacement vector;
    统计各运动方向所对应的目标点云的点云数量;Count the number of point clouds of the target point cloud corresponding to each movement direction;
    将点云数量最多的目标点云的运动方向确定为对应障碍物的运动方向。The movement direction of the target point cloud with the largest number of point clouds is determined as the movement direction of the corresponding obstacle.
  8. 根据权利要求6所述的方法,其特征在于,所述运动参数包括运动速度;所述点云图像是由激光传感器采集得到的;所述根据目标点云中每个目标点云的位移矢量计算相应障碍物的运动参数包括:The method according to claim 6, wherein the motion parameters include motion speed; the point cloud image is collected by a laser sensor; and the calculation is based on the displacement vector of each target point cloud in the target point cloud The motion parameters of the corresponding obstacles include:
    获取所述激光传感器的采集频率;Acquiring the acquisition frequency of the laser sensor;
    基于所述位移矢量确定同组内每个目标点云的位移值;Determine the displacement value of each target point cloud in the same group based on the displacement vector;
    根据所述采集频率以及位移值确定同组内每个目标点云的运动速度;Determine the movement speed of each target point cloud in the same group according to the acquisition frequency and the displacement value;
    对所述同组内每个目标点云的运动速度进行混合计算,得到对应障碍物的运动速度。The mixed calculation is performed on the movement speed of each target point cloud in the same group to obtain the movement speed of the corresponding obstacle.
  9. 一种障碍物检测装置,包括:An obstacle detection device includes:
    点云匹配模块，用于获取当前帧点云图像和前一帧点云图像；通过对当前帧点云图像和前一帧点云图像进行点云匹配，确定所述当前帧点云图像中每个第一点云与前一帧点云图像中相匹配的第二点云；a point cloud matching module, configured to obtain the point cloud image of the current frame and the point cloud image of the previous frame, and to determine, by performing point cloud matching between the two images, the second point cloud in the previous frame that matches each first point cloud in the current frame;
    点云聚类模块，用于计算每个第二点云相对于相匹配的第一点云的位移矢量；对所述位移矢量大于阈值的第二点云进行聚类，得到至少一组目标点云；a point cloud clustering module, configured to calculate the displacement vector of each second point cloud relative to the matched first point cloud, and to cluster the second point clouds whose displacement vectors are greater than a threshold, obtaining at least one group of target point clouds;
    判定模块,用于将每组目标点云判定为一个障碍物。The judging module is used to judge each group of target point clouds as an obstacle.
  10. 根据权利要求9所述的装置,其特征在于,所述点云匹配模块还用于:The device according to claim 9, wherein the point cloud matching module is further configured to:
    获取从所述前一帧点云图像提取出的第二点云对应的点特征,以及从当前点云图像提取出的第一点云对应的点特征;Acquiring the point feature corresponding to the second point cloud extracted from the previous frame of point cloud image, and the point feature corresponding to the first point cloud extracted from the current point cloud image;
    对第二点云对应的点特征和第一点云对应的点特征进行相似度匹配;Performing similarity matching on the point feature corresponding to the second point cloud and the point feature corresponding to the first point cloud;
    将相似度匹配结果符合条件的第一点云以及第二点云确定为相匹配的点云对。The first point cloud and the second point cloud whose similarity matching result meets the condition are determined as the matched point cloud pair.
  11. 根据权利要求9所述的装置,其特征在于,所述点云匹配模块还用于:The device according to claim 9, wherein the point cloud matching module is further configured to:
    对所述前一帧点云图像进行结构化处理,得到处理结果;Performing structural processing on the point cloud image of the previous frame to obtain a processing result;
    基于所述处理结果对所述前一帧点云图像中的第二点云进行编码,得到所述第二点云对应的点特征。Encoding the second point cloud in the point cloud image of the previous frame based on the processing result to obtain the point feature corresponding to the second point cloud.
  12. 一种计算机设备，包括存储器及一个或多个处理器，所述存储器中储存有计算机可读指令，所述计算机可读指令被所述一个或多个处理器执行时，使得所述一个或多个处理器执行以下步骤：A computer device, comprising a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the following steps:
    获取当前帧点云图像和前一帧点云图像;Obtain the point cloud image of the current frame and the point cloud image of the previous frame;
    通过对当前帧点云图像和前一帧点云图像进行点云匹配,确定所述当前帧点云图像中每个第一点云与前一帧点云图像中相匹配的第二点云;By performing point cloud matching on the point cloud image of the current frame and the point cloud image of the previous frame, it is determined that each first point cloud in the point cloud image of the current frame matches the second point cloud in the point cloud image of the previous frame;
    计算每个第二点云相对于相匹配的第一点云的位移矢量;Calculate the displacement vector of each second point cloud relative to the matched first point cloud;
    对所述位移矢量大于阈值的第二点云进行聚类,得到至少一组目标点云;Clustering the second point cloud whose displacement vector is greater than the threshold to obtain at least one set of target point cloud;
    将每组目标点云判定为一个障碍物。Each group of target point cloud is judged as an obstacle.
  13. 根据权利要求12所述的计算机设备,其特征在于,所述处理器执行所述计算机可读指令时还执行以下步骤:The computer device according to claim 12, wherein the processor further executes the following steps when executing the computer-readable instruction:
    获取从所述前一帧点云图像提取出的第二点云对应的点特征,以及从当前点云图像提取出的第一点云对应的点特征;Acquiring the point feature corresponding to the second point cloud extracted from the previous frame of point cloud image, and the point feature corresponding to the first point cloud extracted from the current point cloud image;
    对第二点云对应的点特征和第一点云对应的点特征进行相似度匹配;Performing similarity matching on the point feature corresponding to the second point cloud and the point feature corresponding to the first point cloud;
    将相似度匹配结果符合条件的第一点云以及第二点云确定为相匹配的点云对。The first point cloud and the second point cloud whose similarity matching result meets the condition are determined as the matched point cloud pair.
  14. 根据权利要求13所述的计算机设备,其特征在于,所述处理器执行所述计算机可读指令时还执行以下步骤:The computer device according to claim 13, wherein the processor further executes the following steps when executing the computer-readable instruction:
    对所述前一帧点云图像进行结构化处理,得到处理结果;Performing structural processing on the point cloud image of the previous frame to obtain a processing result;
    基于所述处理结果对所述前一帧点云图像中的第二点云进行编码,得到所述第二点云对应的点特征。Encoding the second point cloud in the point cloud image of the previous frame based on the processing result to obtain the point feature corresponding to the second point cloud.
  15. 根据权利要求12所述的计算机设备,其特征在于,所述处理器执行所述计算机可读指令时还执行以下步骤:The computer device according to claim 12, wherein the processor further executes the following steps when executing the computer-readable instruction:
    获取第一点云的空间位置信息以及相匹配的第二点云的空间位置信息;Acquiring the spatial location information of the first point cloud and the matching spatial location information of the second point cloud;
    基于所述第一点云的空间位置信息以及相匹配的第二点云的空间位置信息,确定所述第二点云相对于对应第一点云的位移矢量。Based on the spatial position information of the first point cloud and the matched spatial position information of the second point cloud, a displacement vector of the second point cloud relative to the corresponding first point cloud is determined.
  16. 一个或多个存储有计算机可读指令的非易失性计算机可读存储介质,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行以下步骤:One or more non-volatile computer-readable storage media storing computer-readable instructions. When the computer-readable instructions are executed by one or more processors, the one or more processors execute the following steps:
    获取当前帧点云图像和前一帧点云图像;Obtain the point cloud image of the current frame and the point cloud image of the previous frame;
    通过对当前帧点云图像和前一帧点云图像进行点云匹配,确定所述当前帧点云图像中每个第一点云与前一帧点云图像中相匹配的第二点云;By performing point cloud matching on the point cloud image of the current frame and the point cloud image of the previous frame, it is determined that each first point cloud in the point cloud image of the current frame matches the second point cloud in the point cloud image of the previous frame;
    计算每个第二点云相对于相匹配的第一点云的位移矢量;Calculate the displacement vector of each second point cloud relative to the matched first point cloud;
    对所述位移矢量大于阈值的第二点云进行聚类,得到至少一组目标点云;Clustering the second point cloud whose displacement vector is greater than the threshold to obtain at least one set of target point cloud;
    将每组目标点云判定为一个障碍物。Each group of target point cloud is judged as an obstacle.
  17. 根据权利要求16所述的存储介质,其特征在于,所述计算机可读指令被所述处理器执行时还执行以下步骤:The storage medium according to claim 16, wherein the following steps are further executed when the computer-readable instructions are executed by the processor:
    获取从所述前一帧点云图像提取出的第二点云对应的点特征,以及从当前点云图像提取出的第一点云对应的点特征;Acquiring the point feature corresponding to the second point cloud extracted from the previous frame of point cloud image, and the point feature corresponding to the first point cloud extracted from the current point cloud image;
    对第二点云对应的点特征和第一点云对应的点特征进行相似度匹配;Performing similarity matching on the point feature corresponding to the second point cloud and the point feature corresponding to the first point cloud;
    将相似度匹配结果符合条件的第一点云以及第二点云确定为相匹配的点云对。The first point cloud and the second point cloud whose similarity matching result meets the condition are determined as the matched point cloud pair.
  18. 根据权利要求17所述的存储介质,其特征在于,所述计算机可读指令被所述处理器执行时还执行以下步骤:18. The storage medium according to claim 17, wherein the following steps are further executed when the computer-readable instructions are executed by the processor:
    对所述前一帧点云图像进行结构化处理,得到处理结果;Performing structural processing on the point cloud image of the previous frame to obtain a processing result;
    基于所述处理结果对所述前一帧点云图像中的第二点云进行编码,得到所述第二点云对应的点特征。Encoding the second point cloud in the point cloud image of the previous frame based on the processing result to obtain the point feature corresponding to the second point cloud.
  19. 根据权利要求16所述的存储介质,其特征在于,所述计算机可读指令被所述处理器执行时还执行以下步骤:The storage medium according to claim 16, wherein the following steps are further executed when the computer-readable instructions are executed by the processor:
    获取第一点云的空间位置信息以及相匹配的第二点云的空间位置信息;Acquiring the spatial location information of the first point cloud and the matching spatial location information of the second point cloud;
    基于所述第一点云的空间位置信息以及相匹配的第二点云的空间位置信息,确定所述第二点云相对于对应第一点云的位移矢量。Based on the spatial position information of the first point cloud and the matched spatial position information of the second point cloud, a displacement vector of the second point cloud relative to the corresponding first point cloud is determined.
  20. 根据权利要求16所述的存储介质,其特征在于,所述计算机可读指令被所述处理器执行时还执行以下步骤:The storage medium according to claim 16, wherein the following steps are further executed when the computer-readable instructions are executed by the processor:
    从前一帧点云图像中筛选出位移矢量大于阈值的第二点云,记作待分类点云;The second point cloud whose displacement vector is greater than the threshold is selected from the point cloud image of the previous frame and recorded as the point cloud to be classified;
    获取所述待分类点云的空间位置信息;Acquiring spatial position information of the point cloud to be classified;
    基于所述位移矢量确定所述待分类点云的运动方向;Determining the movement direction of the point cloud to be classified based on the displacement vector;
    对所述运动方向类似以及相邻空间位置间距小于阈值的待分类点云进行聚类,得到至少一组目标点云。Clustering the to-be-classified point clouds whose motion directions are similar and the distance between adjacent spatial positions is less than a threshold value to obtain at least one set of target point clouds.
PCT/CN2019/130114 2019-12-30 2019-12-30 Obstacle detection method and apparatus, and computer device and storage medium WO2021134296A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980037711.7A CN113424079A (en) 2019-12-30 2019-12-30 Obstacle detection method, obstacle detection device, computer device, and storage medium
PCT/CN2019/130114 WO2021134296A1 (en) 2019-12-30 2019-12-30 Obstacle detection method and apparatus, and computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/130114 WO2021134296A1 (en) 2019-12-30 2019-12-30 Obstacle detection method and apparatus, and computer device and storage medium

Publications (1)

Publication Number Publication Date
WO2021134296A1 true WO2021134296A1 (en) 2021-07-08

Family

ID=76686199

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/130114 WO2021134296A1 (en) 2019-12-30 2019-12-30 Obstacle detection method and apparatus, and computer device and storage medium

Country Status (2)

Country Link
CN (1) CN113424079A (en)
WO (1) WO2021134296A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723432B (en) * 2021-10-27 2022-02-22 深圳火眼智能有限公司 Intelligent identification and positioning tracking method and system based on deep learning
CN115965943A (en) * 2023-03-09 2023-04-14 安徽蔚来智驾科技有限公司 Target detection method, device, driving device, and medium
CN116524029B (en) * 2023-06-30 2023-12-01 长沙智能驾驶研究院有限公司 Obstacle detection method, device, equipment and storage medium for rail vehicle

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105652873A (en) * 2016-03-04 2016-06-08 中山大学 Mobile robot obstacle avoidance method based on Kinect
CN108152831A (en) * 2017-12-06 2018-06-12 中国农业大学 A kind of laser radar obstacle recognition method and system
CN108398672A (en) * 2018-03-06 2018-08-14 厦门大学 Road surface and obstacle detection method based on forward-tilted 2D laser radar motion scanning
CN109633688A (en) * 2018-12-14 2019-04-16 北京百度网讯科技有限公司 A kind of laser radar obstacle recognition method and device
EP3517997A1 (en) * 2018-01-30 2019-07-31 Wipro Limited Method and system for detecting obstacles by autonomous vehicles in real-time


Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591869A (en) * 2021-08-03 2021-11-02 北京地平线信息技术有限公司 Point cloud instance segmentation method and device, electronic equipment and storage medium
CN113569979A (en) * 2021-08-06 2021-10-29 中国科学院宁波材料技术与工程研究所 Three-dimensional object point cloud classification method based on attention mechanism
CN113673388A (en) * 2021-08-09 2021-11-19 北京三快在线科技有限公司 Method and device for determining position of target object, storage medium and equipment
CN113627372B (en) * 2021-08-17 2024-01-05 北京伟景智能科技有限公司 Running test method, running test system and computer readable storage medium
CN113627372A (en) * 2021-08-17 2021-11-09 北京伟景智能科技有限公司 Running test method, system and computer readable storage medium
CN113569812A (en) * 2021-08-31 2021-10-29 东软睿驰汽车技术(沈阳)有限公司 Unknown obstacle identification method and device and electronic equipment
CN113838112A (en) * 2021-09-24 2021-12-24 东莞市诺丽电子科技有限公司 Trigger signal determining method and trigger signal determining system of image acquisition system
CN114119729A (en) * 2021-11-17 2022-03-01 北京埃福瑞科技有限公司 Obstacle identification method and device
CN114724105A (en) * 2021-11-29 2022-07-08 山东交通学院 Cone bucket identification method under complex background based on cloud edge end architecture
CN114509785A (en) * 2022-02-16 2022-05-17 中国第一汽车股份有限公司 Three-dimensional object detection method, device, storage medium, processor and system
CN114545947A (en) * 2022-02-25 2022-05-27 北京捷象灵越科技有限公司 Method and device for mutually avoiding mobile robots, electronic equipment and storage medium
CN114647011A (en) * 2022-02-28 2022-06-21 三一海洋重工有限公司 Method, device and system for monitoring anti-hanging of container truck
CN114647011B (en) * 2022-02-28 2024-02-02 三一海洋重工有限公司 Anti-hanging monitoring method, device and system for integrated cards
CN114596555A (en) * 2022-05-09 2022-06-07 新石器慧通(北京)科技有限公司 Obstacle point cloud data screening method and device, electronic equipment and storage medium
CN114596555B (en) * 2022-05-09 2022-08-30 新石器慧通(北京)科技有限公司 Obstacle point cloud data screening method and device, electronic equipment and storage medium
CN115050192B (en) * 2022-06-09 2023-11-21 南京矽典微系统有限公司 Parking space detection method based on millimeter wave radar and application
CN115050192A (en) * 2022-06-09 2022-09-13 南京矽典微系统有限公司 Parking space detection method based on millimeter wave radar and application
CN115082731A (en) * 2022-06-15 2022-09-20 苏州轻棹科技有限公司 Target classification method and device based on voting mechanism
CN115082731B (en) * 2022-06-15 2024-03-29 苏州轻棹科技有限公司 Target classification method and device based on voting mechanism
CN114842455A (en) * 2022-06-27 2022-08-02 小米汽车科技有限公司 Obstacle detection method, device, equipment, medium, chip and vehicle
CN115390085A (en) * 2022-07-28 2022-11-25 广州小马智行科技有限公司 Positioning method and device based on laser radar, computer equipment and storage medium
CN115620239A (en) * 2022-11-08 2023-01-17 国网湖北省电力有限公司荆州供电公司 Point cloud and video combined power transmission line online monitoring method and system
CN115620239B (en) * 2022-11-08 2024-01-30 国网湖北省电力有限公司荆州供电公司 Point cloud and video combined power transmission line online monitoring method and system
CN117455936A (en) * 2023-12-25 2024-01-26 法奥意威(苏州)机器人系统有限公司 Point cloud data processing method and device and electronic equipment
CN117455936B (en) * 2023-12-25 2024-04-12 法奥意威(苏州)机器人系统有限公司 Point cloud data processing method and device and electronic equipment

Also Published As

Publication number Publication date
CN113424079A (en) 2021-09-21

Similar Documents

Publication Publication Date Title
WO2021134296A1 (en) Obstacle detection method and apparatus, and computer device and storage medium
CN110163904B (en) Object labeling method, movement control method, device, equipment and storage medium
JP7345504B2 (en) Association of LIDAR data and image data
CN110363058B (en) Three-dimensional object localization for obstacle avoidance using one-shot convolutional neural networks
US11816585B2 (en) Machine learning models operating at different frequencies for autonomous vehicles
US9990736B2 (en) Robust anytime tracking combining 3D shape, color, and motion with annealed dynamic histograms
WO2021134441A1 (en) Automated driving-based vehicle speed control method and apparatus, and computer device
Behrendt et al. A deep learning approach to traffic lights: Detection, tracking, and classification
WO2022222095A1 (en) Trajectory prediction method and apparatus, and computer device and storage medium
Wirges et al. Capturing object detection uncertainty in multi-layer grid maps
Yan et al. Multisensor online transfer learning for 3d lidar-based human detection with a mobile robot
US8818702B2 (en) System and method for tracking objects
US10964033B2 (en) Decoupled motion models for object tracking
WO2022099530A1 (en) Motion segmentation method and apparatus for point cloud data, computer device and storage medium
WO2021134285A1 (en) Image tracking processing method and apparatus, and computer device and storage medium
EP2960858A1 (en) Sensor system for determining distance information based on stereoscopic images
CN112171675B (en) Obstacle avoidance method and device for mobile robot, robot and storage medium
CN113239719A (en) Track prediction method and device based on abnormal information identification and computer equipment
Berriel et al. A particle filter-based lane marker tracking approach using a cubic spline model
CN107909024B (en) Vehicle tracking system and method based on image recognition and infrared obstacle avoidance and vehicle
JP2022035033A (en) Information processing system, information processing method, program and vehicle control system
Dao et al. Aligning bird-eye view representation of point cloud sequences using scene flow
CN113744304A (en) Target detection tracking method and device
CN115943400B (en) Track prediction method and device based on time and space learning and computer equipment
Li et al. TTC4MCP: Monocular Collision Prediction Based on Self-Supervised TTC Estimation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19958654

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19958654

Country of ref document: EP

Kind code of ref document: A1