CN114998864A - Obstacle detection method, device, equipment and storage medium

Obstacle detection method, device, equipment and storage medium

Info

Publication number
CN114998864A
Authority
CN
China
Prior art keywords: point cloud, merged, time, original, clouds
Prior art date
Legal status
Pending
Application number
CN202210593765.5A
Other languages
Chinese (zh)
Inventor
邓皓匀
任凡
王宽
钱少华
Current Assignee
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd
Priority to CN202210593765.5A
Publication of CN114998864A

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/30 Noise filtering
    • G06V10/762 Recognition or understanding using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks
    • G06V20/64 Three-dimensional objects
    • G06V2201/12 Acquisition of 3D measurements of objects (indexing scheme)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application belongs to the technical field of intelligent driving, and provides an obstacle detection method, device, equipment and storage medium, comprising the following steps: acquiring original point clouds and the acquisition time of the original point clouds, wherein the original point clouds are obtained through at least two different viewpoints; carrying out time synchronization processing on the original point clouds according to the acquisition time to obtain point clouds at the same time; carrying out coordinate system conversion on the point clouds at the same time according to the conversion relation between the viewpoints and the vehicle coordinate system to obtain intermediate point clouds under the vehicle coordinate system; merging the intermediate point clouds to obtain a merged point cloud; and clustering the merged point cloud to obtain a processed point cloud, and obtaining obstacle information according to the coordinate information of the processed point cloud. The method and the device can detect and identify obstacles with high precision and high efficiency, and meet the requirement for high-precision obstacle identification in various environments.

Description

Obstacle detection method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of intelligent driving, and particularly relates to a method, a device, equipment and a storage medium for detecting obstacles.
Background
Obstacle detection is a concept that exists widely in many fields such as intelligent driving and intelligent transportation. It mainly refers to detecting and identifying, through a combination of software and hardware, the obstacles that vehicles and other means of transport encounter while driving, so as to acquire information such as the type and size of the obstacles. Obstacle detection is of great significance for avoiding the influence of obstacles on the driving and working processes, reducing accident risk and improving driving quality, and is an indispensable technical support in technologies such as intelligent driving and automatic driving.
Currently, the most common obstacle detection approach is to collect information on the driving path through a camera, a sensor or a radar, and then have a controller recognize the collected information to determine whether an obstacle exists. For example, patent publication No. CN108627844A discloses an obstacle detection method in which information is collected by an ultrasonic sensor or a radar sensor and recognized by an obstacle processing portion to determine an obstacle. The drawbacks of the prior art are that the precision of obstacle detection and identification is poor, the processing of the acquired information is inefficient, and it is difficult to meet obstacle identification requirements where high precision is demanded.
In summary, finding an obstacle detection method that is both high-precision and high-efficiency, and that meets the requirement for high-precision obstacle identification in various environments, is of great significance in the technical field of intelligent driving.
Disclosure of Invention
The application provides an obstacle detection method, device, equipment and storage medium, to solve the problems in the prior art that obstacle detection methods detect and identify obstacles with poor accuracy, process the acquired information inefficiently, and can hardly meet obstacle identification requirements where high accuracy is demanded.
In a first aspect, there is provided an obstacle detection method, including:
acquiring an original point cloud and the acquisition time of the original point cloud, wherein the original point cloud is obtained through at least two different viewpoints;
according to the acquisition time, carrying out time synchronization processing on the original point cloud to obtain point clouds at the same time;
according to the conversion relation between the viewpoints and the vehicle coordinate system, carrying out coordinate system conversion on the point clouds at the same time to obtain an intermediate point cloud under the vehicle coordinate system;
merging the intermediate point clouds to obtain merged point clouds;
and clustering the merged point cloud to obtain a processed point cloud, and obtaining obstacle information according to the coordinate information of the processed point cloud.
Further, the time synchronization processing is performed on the original point cloud according to the acquisition time to obtain a point cloud at the same time, and the method includes:
obtaining a time difference between the acquisition time and the system time according to the acquisition time and the system time of the original point cloud;
and compensating the original point clouds based on the time difference so as to synchronize the time of the original point clouds and obtain point clouds at the same time.
Further, converting the original point cloud into the vehicle coordinate system according to the conversion relation of each viewpoint relative to the vehicle coordinate system to obtain an intermediate point cloud includes:
obtaining a rotation translation matrix representing the conversion relation according to the calibration parameters of the viewpoint relative to the vehicle coordinate system;
and calculating the original point cloud according to the rotation and translation matrix to obtain an intermediate point cloud.
Further, after the merging processing is performed on the intermediate point cloud to obtain a merged point cloud, the method includes:
selecting an interesting region for the merged point cloud according to the driving path information;
and according to the selected region of interest, screening out the parts of the merged point cloud falling outside the region of interest, and reserving the parts of the merged point cloud falling within the region of interest.
Further, after the merging processing is performed on the intermediate point cloud to obtain a merged point cloud, the method includes:
identifying points representing the ground in the merged point cloud according to the coordinate information of the points in the merged point cloud;
and filtering points representing the ground in the merged point cloud according to the identification result.
Further, clustering the merged point cloud to obtain a processed point cloud includes:
carrying out primary clustering on the merged point cloud at the bird's-eye view angle to obtain a primarily clustered point cloud;
and performing formal clustering on the primarily clustered point cloud at the front-view angle to obtain a processed point cloud.
Further, after the primary clustering is performed on the merged point cloud in the bird's-eye view, the method further includes:
identifying noise points in the primary clustering point cloud according to the coordinate information of the points in the primary clustering point cloud;
and screening out the noise points in the primary clustering point cloud according to the identification result.
Further, obtaining the obstacle information according to the coordinate information of the processed point cloud includes the following steps:
Fitting the processed point cloud in a fitting direction according to the pre-selected fitting direction;
and generating a convex hull box according to the fitting result, wherein the convex hull box comprises the obstacle information.
In a second aspect, there is provided an obstacle detection device including:
the data acquisition unit is used for acquiring an original point cloud and the acquisition time of the original point cloud, wherein the original point cloud is obtained through at least two different viewpoints;
the time synchronization module is used for carrying out time synchronization processing on the original point clouds according to the acquisition time to obtain point clouds at the same time;
the coordinate conversion module is used for carrying out coordinate system conversion on the point clouds at the same time according to the conversion relation between the viewpoints and the vehicle coordinate system, to obtain an intermediate point cloud under the vehicle coordinate system;
the splicing module is used for merging the intermediate point clouds to obtain merged point clouds;
the clustering module is used for clustering the merged point cloud to obtain a processed point cloud;
and the post-processing module is used for obtaining the obstacle information according to the coordinate information of the processed point cloud.
In a third aspect, a computer device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above-mentioned obstacle detection method when executing the computer program.
In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program, wherein the computer program, when executed by a processor, implements the steps of the obstacle detection method described above.
In the scheme realized by the obstacle detection method, device, equipment and storage medium, to meet the requirement of precise obstacle detection, original point clouds and their acquisition time are obtained through at least two different viewpoints. Time synchronization processing is performed on the original point clouds according to the acquisition time to obtain point clouds at the same time; coordinate system conversion is performed on the point clouds at the same time according to the conversion relation between the viewpoints and the vehicle coordinate system to obtain intermediate point clouds under the vehicle coordinate system; the intermediate point clouds are merged to obtain a merged point cloud; the merged point cloud is clustered to obtain a processed point cloud; and the obstacle information is obtained according to the coordinate information of the processed point cloud. Obstacles can thus be detected and identified with high precision and high efficiency, meeting the requirement for high-precision obstacle identification in various environments.
Drawings
Fig. 1 is a schematic diagram of an application environment of a method for detecting an obstacle according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart of a method of obstacle detection in an exemplary embodiment of the present application;
FIG. 3 is a flowchart illustrating an exemplary embodiment of step S220 in FIG. 2;
FIG. 4 is a flowchart illustrating an exemplary embodiment of step S230 of FIG. 2;
FIG. 5 is a schematic flow chart of steps that may follow step S240 of FIG. 2 in an exemplary embodiment;
FIG. 6 is a schematic flow chart of steps that may be included in another exemplary embodiment of step S240 of FIG. 2;
FIG. 7 is a flowchart illustrating an exemplary embodiment of step S250 of FIG. 2;
FIG. 8 is a schematic flow chart of steps that may follow step S710 of FIG. 7 in an exemplary embodiment;
FIG. 9 is a schematic flow chart of another exemplary embodiment of step S250 of FIG. 2;
fig. 10 is a schematic view of a structure of an obstacle detecting device according to an exemplary embodiment of the present application;
FIG. 11 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
101 - automobile; 102 - acquisition end; 103 - control end; 104 - object to be detected;
1001-data acquisition unit; 1002-a time synchronization module; 1003-coordinate transformation module; 1004-splicing module; 1005-clustering module; 1006-post processing module; 1007-a region of interest selection module; 1008-ground filtering module.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The obstacle detection method provided by the present application can be applied to the detection of obstacles while a vehicle runs in various application scenarios. In an embodiment, it can be applied to the application environment of fig. 1; fig. 1 is a schematic diagram of the application environment of an obstacle detection method in an embodiment of the present application, wherein both the acquisition end 102 and the control end 103 are disposed on the vehicle 101. While the vehicle 101 is running, the acquisition end 102 disposed on the vehicle 101 acquires information of the object to be detected 104, and the control end 103 processes that information to identify the object to be detected 104; the acquisition end 102 and the control end 103 can communicate through, for example, an Ethernet connection. From at least two viewpoints of the acquisition end 102, the control end 103 obtains at least two original point clouds of the object to be detected 104 together with their acquisition time. Time synchronization processing is performed on the original point clouds according to the acquisition time to obtain point clouds at the same time; coordinate system conversion is performed on the point clouds at the same time according to the conversion relation between the viewpoints and the vehicle coordinate system to obtain intermediate point clouds under the vehicle coordinate system; the intermediate point clouds are merged to obtain a merged point cloud; and the merged point cloud is clustered to obtain a processed point cloud, so that the obstacle information, such as the size of the object to be detected 104, is obtained according to the coordinate information of the processed point cloud. In this way, obstacles can be detected and identified with high precision and high efficiency, meeting the requirement for high-precision obstacle identification in various environments.
In an embodiment, please refer to fig. 2, wherein fig. 2 is a schematic flowchart illustrating an obstacle detection method according to an embodiment of the present application, including the following steps:
step S210: the method comprises the steps of obtaining an original point cloud and the collection time of the original point cloud, wherein the original point cloud is obtained through at least two different viewpoints.
The viewpoint here generally refers to a position where the acquisition end 102 for acquiring information of the obstacle is located, and the acquisition end 102 includes, but is not limited to, devices such as a laser radar. The number of viewpoints is at least two, and the information of the obstacle is collected from each viewpoint according to its viewing angle, so that as many original point clouds as viewpoints can be obtained at the same moment, wherein an original point cloud refers to the set of points representing the obstacle obtained by collecting the obstacle's information from one viewpoint.
In some embodiments, each viewpoint may be provided with one laser radar. For example, with two viewpoints, the devices acquiring obstacle information at the viewpoints are a first laser radar and a second laser radar, which acquire information of the same obstacle from different viewpoints and different viewing angles to obtain a first original point cloud and a second original point cloud; the acquisition times of the point clouds are, respectively, a first acquisition time at which the first original point cloud is acquired and a second acquisition time at which the second original point cloud is acquired. It should be understood that the first original point cloud and the second original point cloud may both represent features of the surface contour of the same obstacle at the same moment, but due to errors of the first and second laser radars in the calibration and collection processes, the two clouds do not correspond in time; that is, for original point clouds obtained at the same moment, the first acquisition time differs from the second acquisition time. It is understood that the acquisition time here generally refers to the time when the obstacle starts to be detected from the viewpoint, which may be acquired by, for example, reading a timestamp of that moment.
Step S220: and carrying out time synchronization processing on the original point cloud according to the acquisition time to obtain the point cloud at the same time.
After acquiring the multiple original point clouds, in order to eliminate temporal non-correspondence between the multiple original point clouds, time synchronization needs to be performed on the multiple original point clouds according to the acquisition time, so as to obtain point clouds at the same time.
In an embodiment, as shown in fig. 3, fig. 3 is a schematic flowchart of an exemplary specific implementation manner of step S220 in fig. 2, where in step S220, that is, according to the acquisition time, the time synchronization processing is performed on the original point cloud to obtain a point cloud at the same time, and the method further includes the following steps:
step S310: and obtaining the time difference between the acquisition time and the system time according to the acquisition time and the system time of the original point cloud.
Step S320: and compensating the original point clouds based on the time difference so as to synchronize the time of the original point clouds and obtain point clouds at the same time.
For steps S310 to S320: when performing time synchronization on the original point clouds, first, according to the acquired acquisition time and the system time at which the original point clouds were received, the time difference between the acquisition time and the system time is obtained. It is understood that each time difference obtained here represents the difference between the acquisition time of one original point cloud and the system time. After the time differences are obtained, compensation processing is performed on the original point clouds based on them, to complete the time synchronization of the multiple original point clouds and obtain point clouds at the same time. The compensation processing here generally refers to calculating, from the obtained time difference and with the original point cloud at its acquisition time as the calculation reference, how the original point cloud changes from the acquisition time to the system time, thereby obtaining a point cloud at the same time corresponding to the system time; the calculation may be implemented by means including, but not limited to, Kalman filtering algorithms. It is understood that the system time generally refers to the time when the control system, for example the control end 103, acquires the original point cloud, which may be obtained by, for example, reading a timestamp of when the original point cloud was received.
Specifically, for example, in some embodiments, the acquisition time of the first original point cloud acquired by the first laser radar is T1, and the acquisition time of the second original point cloud acquired by the second laser radar is T2. Combining these with the system time Ts at which the control end 103 receives the first and second original point clouds gives a first time difference Ts-T1 corresponding to the first original point cloud and a second time difference Ts-T2 corresponding to the second original point cloud. The change of the first original point cloud caused by vehicle motion during the first time difference Ts-T1 is calculated with a Kalman filtering algorithm and compensated onto the first original point cloud, synchronizing it to the system time to obtain a first point cloud at the same time. Likewise, the change of the second original point cloud caused by vehicle motion during the second time difference Ts-T2 is calculated with a Kalman filtering algorithm and compensated onto the second original point cloud, synchronizing it to the system time to obtain a second point cloud at the same time. The first and second point clouds at the same time obtained in this embodiment take the system time Ts as their time reference.
In the above steps, the time difference between the acquisition time and the system time is obtained according to the acquisition time and the system time of each original point cloud, and the original point clouds are then compensated based on the time differences, so that their times are synchronized and point clouds representing the same obstacle at the same time are obtained. This helps eliminate errors from the calibration and acquisition processes and improves the obstacle detection precision.
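For illustration, the following is a minimal Python sketch of the compensation step described above. It assumes a constant ego velocity and yaw rate over the time difference and operates directly on an (N, 3) point array; the patent itself computes the point cloud's change with a Kalman filtering algorithm, which is not reproduced here, and all function and parameter names are illustrative rather than taken from the patent.

```python
import numpy as np

def compensate_to_system_time(points, t_acquire, t_system, ego_velocity, ego_yaw_rate):
    """Shift a point cloud from its acquisition time to the system time.

    Minimal sketch assuming constant ego velocity (m/s) and yaw rate
    (rad/s) over the interval; the patent obtains the change via a
    Kalman filter instead.
    """
    dt = t_system - t_acquire             # time difference, e.g. Ts - T1
    dyaw = ego_yaw_rate * dt              # heading change during dt
    c, s = np.cos(-dyaw), np.sin(-dyaw)   # rotation undoing the heading change
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    shift = np.array([ego_velocity * dt, 0.0, 0.0])  # forward motion, x = driving direction
    return (points - shift) @ rot.T

# E.g., the first point cloud at the same time of the example above would be
# compensate_to_system_time(first_original_cloud, T1, Ts, v, yaw_rate).
```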
Step S230: And according to the conversion relation between the viewpoints and the vehicle coordinate system, carrying out coordinate system conversion on the point clouds at the same time to obtain an intermediate point cloud under the vehicle coordinate system.
After the original point clouds are time-synchronized to obtain simultaneous point clouds, the simultaneous point clouds are required to be converted into a vehicle coordinate system according to the conversion relation between the viewpoints and the vehicle coordinate system, so that the simultaneous point clouds have the same spatial reference standard, and the obstacles represented by the multiple point clouds at the same time are related based on the same spatial reference standard.
In an embodiment, as shown in fig. 4, fig. 4 is a schematic flowchart of an exemplary specific implementation of step S230 in fig. 2, and in step S230, that is, converting the original point cloud into the vehicle coordinate system according to a conversion relationship between each viewpoint and the vehicle coordinate system, so as to obtain an intermediate point cloud, including the following steps:
step S410: obtaining a rotation translation matrix representing the conversion relation according to the calibration parameters of the viewpoint relative to the vehicle coordinate system;
step S420: and calculating the original point cloud according to the rotation and translation matrix to obtain an intermediate point cloud.
For steps S410 to S420: first, a rotation-translation matrix representing the conversion from a viewpoint to the vehicle coordinate system is obtained according to the calibration parameters of the viewpoint relative to the vehicle coordinate system, where the calibration parameters generally refer to the coordinate parameters of the viewpoint position in the vehicle coordinate system. It should be understood that the conversion relation may be expressed by parameters including, but not limited to, three-dimensional rectangular coordinates, three-dimensional angles and six-axis poses, and the rotation-translation matrix generally refers to a matrix, obtained from that relation, that can represent the conversion. The point cloud is then calculated through the rotation-translation matrix so that it is converted into the vehicle coordinate system, yielding an intermediate point cloud with the vehicle coordinate system as its reference.
Specifically, for example, in some embodiments, the first laser radar acquires a first original point cloud and the second laser radar acquires a second original point cloud. The calibration parameters of the first laser radar are the first calibration parameters (x1, y1, z1, roll1, pitch1, yaw1), which characterize the position and angles of the first laser radar's viewpoint relative to the vehicle coordinate system; the calibration parameters of the second laser radar are the second calibration parameters (x2, y2, z2, roll2, pitch2, yaw2), which characterize the position and angles of the second laser radar's viewpoint relative to the vehicle coordinate system. A first rotation-translation matrix and a second rotation-translation matrix are obtained from the first and second calibration parameters. The first point cloud is calculated through the first rotation-translation matrix and converted into a first intermediate point cloud under the vehicle coordinate system, and the second point cloud is calculated through the second rotation-translation matrix and converted into a second intermediate point cloud under the vehicle coordinate system.
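As a concrete illustration of steps S410 to S420, the following Python sketch builds a rotation-translation matrix from calibration parameters of the form (x, y, z, roll, pitch, yaw) and applies it to a point cloud. The Z-Y-X (yaw-pitch-roll) rotation order is an assumption for illustration; the patent does not state which angle convention its calibration uses.

```python
import numpy as np

def rototranslation_matrix(x, y, z, roll, pitch, yaw):
    """4x4 homogeneous transform from a viewpoint to the vehicle frame,
    assuming the common Z-Y-X (yaw-pitch-roll) rotation convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    m = np.eye(4)
    m[:3, :3] = rz @ ry @ rx     # combined rotation
    m[:3, 3] = (x, y, z)         # translation to the vehicle origin
    return m

def to_vehicle_frame(points, matrix):
    """Apply the rotation-translation matrix to an (N, 3) point cloud."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coordinates
    return (homo @ matrix.T)[:, :3]
```

For the two-radar example, the first intermediate point cloud would then be to_vehicle_frame(first_cloud, rototranslation_matrix(x1, y1, z1, roll1, pitch1, yaw1)).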
Step S240: and merging the intermediate point clouds to obtain merged point clouds.
After multiple intermediate point clouds that represent obstacles and share the same spatial reference standard are obtained, the intermediate point clouds need to be combined into a merged point cloud under the vehicle coordinate system. It is understood that the merged point cloud includes the information of every intermediate point cloud. Specifically, for example, in some embodiments a first intermediate point cloud and a second intermediate point cloud are obtained; if the first intermediate point cloud has 2000 points and the second intermediate point cloud has 2000 points, merging may consist of generating the corresponding 4000 points in one point cloud according to their coordinates in the two intermediate point clouds, so as to obtain the merged point cloud. It is to be understood that the 2000 points of the first intermediate point cloud and the 2000 points of the second intermediate point cloud may include points with repeated coordinates; for example, in some embodiments, 200 of those points have coordinates that completely repeat coordinates already present, so the generated merged point cloud contains 3800 points with non-repeated coordinates coexisting with 200 points whose coordinates completely repeat.
Through the step S240, the acquired data are merged into one merged point cloud, so that errors in detection are reduced, and the obstacle detection precision and the obstacle detection efficiency are improved.
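Under the description above, the merge itself reduces to a concatenation that keeps every point of every intermediate point cloud, coordinate duplicates included. A minimal sketch with placeholder data:

```python
import numpy as np

first_cloud = np.random.rand(2000, 3)    # placeholder first intermediate point cloud
second_cloud = np.random.rand(2000, 3)   # placeholder second intermediate point cloud
merged_cloud = np.vstack([first_cloud, second_cloud])  # (4000, 3), duplicates kept
```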
In an embodiment, as shown in fig. 5, fig. 5 is a schematic flowchart of steps that may follow step S240 in fig. 2; that is, after the intermediate point clouds are merged to obtain a merged point cloud, the method may include the following steps:
step S510: and selecting an interesting area for the merged point cloud according to the driving path information.
Step S520: and according to the selected region of interest, screening out the part of the merged point cloud which falls outside the region of interest, and reserving the part of the merged point cloud which falls within the region of interest.
For steps S510 to S520: first, a region of interest needs to be selected for the merged point cloud according to the driving path information. Specifically, in some embodiments, the manner of acquiring the driving path information includes, but is not limited to, acquiring a map provided by a Global Positioning System (GPS), acquiring and mapping the driving path in advance, and identifying the driving path in real time. The region of interest is selected according to the acquired driving path information; then, according to the selected region of interest, the points of the merged point cloud whose coordinates fall within the boundary of the region of interest are retained, and the points whose coordinates fall outside the boundary are screened out.
Specifically, for example, in some embodiments, the width of the driving path may be obtained by identifying the driving path in real time through a radar. It is understood that the width of the driving path here refers to the width corresponding to the system time at which the original point clouds were obtained. The area within the width of the driving path is set as the region of interest; all points in the merged point cloud are screened against it, points falling outside the region of interest are removed, and points falling within it are retained, so that after screening the merged point cloud only contains points within the width of the driving path.
Through the steps S510-S520, redundant data in the merged point cloud is filtered, so that the data volume needing to be processed is reduced, and the efficiency of obstacle detection is improved.
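A minimal sketch of the screening in steps S510 to S520, assuming for simplicity a straight driving path centred on the vehicle, with x as the driving direction and y as the lateral direction; a path obtained from GPS or prior mapping would need a point-to-path distance test instead:

```python
import numpy as np

def filter_region_of_interest(merged_cloud, path_half_width):
    """Keep only points whose lateral coordinate falls within the
    driving-path width (straight, vehicle-centred path assumed)."""
    inside = np.abs(merged_cloud[:, 1]) <= path_half_width
    return merged_cloud[inside]
```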
In an embodiment, as shown in fig. 6, fig. 6 is a schematic flowchart of another set of steps that may follow step S240 in fig. 2; that is, after the intermediate point clouds are merged to obtain a merged point cloud, the method may include the following steps:
step S610: and identifying the points representing the ground in the merged point cloud according to the coordinate information of the points in the merged point cloud.
Step S620: and filtering points representing the ground in the merged point cloud according to the identification result.
For steps S610 to S620: the points representing the ground in the merged point cloud need to be identified according to the coordinate information of the points in the merged point cloud, where the methods for identifying ground points from coordinate information include, but are not limited to, plane fitting algorithms and the random sample consensus (RANSAC) algorithm; the points representing the ground are then filtered out of the merged point cloud according to the identification result.
Specifically, for example, in some embodiments, according to the obtained coordinate information of the points in the merged point cloud, the points are processed with a random sample consensus algorithm to identify those representing the ground, and according to the identification result those ground points are filtered out, so that only the points representing obstacles are retained.
Through the steps S610 to S620, the data of the points representing the ground in the merged point cloud are filtered, so that the data amount needing to be processed is reduced, and the efficiency of obstacle detection is improved.
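The following is a minimal sketch of ground filtering with a random sample consensus plane fit, one of the methods named above; the iteration count and distance threshold are illustrative, and production code would more likely call a library implementation:

```python
import numpy as np

def remove_ground(cloud, iterations=100, threshold=0.1):
    """RANSAC plane fit: repeatedly fit a plane through 3 random points,
    keep the plane with the most inliers, and drop those inliers as ground."""
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(cloud), dtype=bool)
    for _ in range(iterations):
        sample = cloud[rng.choice(len(cloud), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                    # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(cloud @ normal + d)  # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return cloud[~best_inliers]            # keep only non-ground points
```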
Step S250: and clustering the merged point cloud to obtain a processed point cloud, and obtaining obstacle information according to the coordinate information of the processed point cloud.
In step S250, the merged point cloud is clustered. It is understood that clustering the merged point cloud generally means grouping the points according to how they aggregate, to obtain the processed point cloud, in which each point cloud cluster represents an obstacle or similar object. A specific clustering method may be, for example, to compute over the merged point cloud with a clustering algorithm whose principle includes, but is not limited to, judging whether points belong to the same cluster according to the number of other points in each point's neighborhood, or according to the number of points connected to each point. Specifically, for example, in some embodiments, the merged point cloud is clustered by counting, for a neighborhood range set in advance, the number of other points in the neighborhood of each point, and judging from that count whether the point and the points in its neighborhood belong to the same cluster. In other embodiments, the merged point cloud is clustered by counting the connected points of each point, for a definition of connectedness set in advance, and judging from that count whether the point belongs to a cluster. After the processed point cloud is obtained, the obstacle information is obtained according to its coordinate information. It is understood that the obstacle information here includes, but is not limited to, the shape of the obstacle shown directly by the processed point cloud, coordinates calculated from the coordinates of its points, and size parameters such as the maximum length, maximum width and maximum height of the obstacle. Specifically, for example, in some embodiments, one cluster of the processed point cloud contains 300 points; the maximum height coordinate zmax and the minimum height coordinate zmin of those 300 points are read and zmax - zmin is calculated to obtain the maximum height of the obstacle.
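The neighborhood-count principle just described can be sketched as follows. This is a simplified brute-force O(N^2) variant (a KD-tree would replace the full distance matrix in practice), and the radius and minimum neighbor count are illustrative parameters rather than values from the patent:

```python
import numpy as np
from collections import deque

def cluster_by_neighborhood(points, radius, min_neighbors):
    """Label points by growing clusters from points that have at least
    min_neighbors other points in their radius neighborhood.
    Returns one label per point; -1 means unclustered."""
    n = len(points)
    diff = points[:, None, :] - points[None, :, :]
    within = (diff ** 2).sum(-1) <= radius ** 2   # neighborhood matrix
    np.fill_diagonal(within, False)
    labels = np.full(n, -1)
    current = 0
    for seed in range(n):
        if labels[seed] != -1 or within[seed].sum() < min_neighbors:
            continue                      # already labeled, or too sparse to seed
        labels[seed] = current
        queue = deque([seed])
        while queue:                      # breadth-first cluster growth
            idx = queue.popleft()
            for nb in np.flatnonzero(within[idx]):
                if labels[nb] == -1:
                    labels[nb] = current
                    queue.append(nb)
        current += 1
    return labels
```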
In an embodiment, as shown in fig. 7, fig. 7 is a flowchart illustrating an exemplary specific implementation manner of step S250 in fig. 2, where in step S250, that is, performing clustering processing on the merged point cloud to obtain a processed point cloud, the method specifically includes the following steps:
step S710: and carrying out primary clustering on the merged point cloud at the view angle of the aerial view to obtain primary clustered point cloud.
Step S720: and performing formal clustering on the primarily clustered point cloud at a front view viewing angle to obtain a processed point cloud.
For steps S710 to S720: the merged point cloud is first clustered at the bird's-eye view angle to obtain a primarily clustered point cloud, and the primarily clustered point cloud is then formally clustered at the front-view angle to obtain the processed point cloud. It is understood that each point in the merged point cloud carries its coordinate information, but the density of the point distribution seen at the bird's-eye view angle is lower than that seen at the front-view angle; since clustering along different directions is affected by the point density, the accuracy of the clustering result depends on it as well. Clustering primarily at the bird's-eye view angle and then formally at the front-view angle therefore helps improve the accuracy of the clustering process, and thereby the accuracy of obstacle detection.
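Reusing cluster_by_neighborhood from the sketch above, the two-stage scheme of steps S710 to S720 can be illustrated as follows, clustering first on the (x, y) bird's-eye projection and then on a front-view projection such as (y, z); the choice of projections and the thresholds are assumptions for illustration only:

```python
def two_stage_clustering(merged_cloud):
    """Primary clustering on the bird's-eye (x, y) projection, then formal
    clustering of each group on the front-view (y, z) projection."""
    bev_labels = cluster_by_neighborhood(merged_cloud[:, :2], radius=0.5, min_neighbors=3)
    processed = []
    for label in set(bev_labels) - {-1}:
        group = merged_cloud[bev_labels == label]
        front = cluster_by_neighborhood(group[:, 1:3], radius=0.3, min_neighbors=3)
        for sub in set(front) - {-1}:
            processed.append(group[front == sub])
    return processed    # list of per-obstacle point clouds
```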
In an embodiment, as shown in fig. 8, fig. 8 is a flowchart illustrating an exemplary embodiment of step S710 in fig. 7; after step S710, that is, after the merged point cloud is primarily clustered at the bird's-eye view angle to obtain a primarily clustered point cloud, the method further includes the following steps:
step S810: and identifying noise points in the primary clustering point cloud according to the coordinate information of the points in the primary clustering point cloud.
Step S820: and screening out the noise points in the primary clustering point cloud according to the identification result.
For steps S810 to S820: after the merged point cloud has been primarily clustered at the bird's-eye view angle, the noise points in the primarily clustered point cloud are identified according to the coordinate information of its points. It should be understood that noise points here generally refer to points unrelated to the detection target that appear in the point cloud under the influence of factors including, but not limited to, floating impurities, light and impulse interference. The methods for identifying noise points from the coordinate information of the points include, but are not limited to, judging whether each point is a noise point by counting the other points in its neighborhood, or by counting its connected points. The noise points are then screened out according to the identification result.
Specifically, for example, in some embodiments, noise points are identified from the coordinate information of the points in the primarily clustered point cloud as follows: according to a preset criterion for connected points, for example that two points are connected if their separation is not greater than 1 mm, the number of connected points of each point is counted; if that number is, for example, 0, the point is judged to be a noise point. Every point in the primarily clustered point cloud is evaluated in this way, the noise points are identified, and they are screened out according to the identification result.
Through the steps S810-S820, noise points in the primary clustering point cloud are screened out, interference of the noise points on obstacle detection is eliminated, and the obstacle detection efficiency is improved.
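A minimal sketch of this noise screening, using the 1 mm connection criterion and the zero-connected-points rule of the example above (brute force, coordinates assumed to be in metres):

```python
import numpy as np

def drop_isolated_points(cloud, radius=0.001):
    """Screen out noise: a point with no other point within the
    connection radius (here 1 mm) is treated as a noise point."""
    diff = cloud[:, None, :] - cloud[None, :, :]
    dist2 = (diff ** 2).sum(-1)
    np.fill_diagonal(dist2, np.inf)           # a point is not its own neighbor
    connected = (dist2 <= radius ** 2).any(axis=1)
    return cloud[connected]
```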
In an embodiment, as shown in fig. 9, fig. 9 is a schematic flowchart of another exemplary specific implementation manner of step S250 in fig. 2, and in step S250, that is, in obtaining the obstacle information according to the coordinate information of the processed point cloud, the method specifically includes the following steps:
step S910: and fitting the processed point cloud in the fitting direction according to the pre-selected fitting direction.
Step S920: And generating a convex hull box according to the fitting result, wherein the convex hull box comprises the obstacle information.
For steps S910 to S920: first, according to a pre-selected fitting direction, fitting processing needs to be performed on the processed point cloud along that direction. It can be understood that the fitting direction may differ according to the actual requirements of obstacle identification: for example, the same one or more directions may be selected for all clusters in the processed point cloud, or different one or more directions may be selected for each cluster. The fitting processing here generally refers to enclosing the points of one cluster, along the fitting direction, with geometric representations including but not limited to closed frames and closed curves, so as to characterize the shape of the obstacle. After the processed point cloud is fitted, a convex hull box is generated according to the fitting result; the convex hull box contains the obstacle information, and the obstacle information it represents can be read off intuitively.
Specifically, for example, in some embodiments, the roll-angle direction is selected as the fitting direction of the obstacle. It is understood that the roll-angle direction generally refers to rotation about the driving direction taken as an axis. All points of one cluster in the processed point cloud are fitted with a convex hull algorithm in the roll-angle direction to obtain the maximum outline boundary characterizing the obstacle in that direction; this outline boundary is, for example, an irregular hexadecagon (a 16-sided polygon), and a convex hull box enclosing all points of the cluster, for example an irregular hexadecagonal box, is generated from the fitted shape.
Through the steps S910-S920, the specific information of the obstacle is obtained according to the processed point cloud, the point cloud is subjected to fitting processing in the fitting direction according to the pre-selected fitting direction, and the convex hull frame containing the obstacle information is generated according to the fitting processing result, so that the obstacle information is represented in the form of the convex hull frame, the information of the obstacle can be accurately detected and identified, and the obstacle detection accuracy is further improved.
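For illustration, the following sketch fits one cluster in a roll-angle plane, here taken as the (y, z) plane perpendicular to the driving direction x, and also reads off size parameters such as zmax - zmin as described earlier. It uses SciPy's ConvexHull as a stand-in for the unspecified convex hull algorithm of the patent:

```python
import numpy as np
from scipy.spatial import ConvexHull

def fit_convex_hull_box(cluster):
    """Fit one (N, 3) cluster of the processed point cloud in the (y, z)
    plane and read off basic obstacle size parameters."""
    plane = cluster[:, 1:3]                   # project onto the roll-angle plane
    hull = ConvexHull(plane)
    boundary = plane[hull.vertices]           # polygon enclosing all projected points
    size = {
        "max_length": np.ptp(cluster[:, 0]),  # xmax - xmin
        "max_width":  np.ptp(cluster[:, 1]),  # ymax - ymin
        "max_height": np.ptp(cluster[:, 2]),  # zmax - zmin, as in the example above
    }
    return boundary, size
```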
It can be seen that, in the above scheme, to meet the requirement of accurate obstacle detection, original point clouds and their acquisition time are obtained through at least two different viewpoints; time synchronization processing is performed on the original point clouds according to the acquisition time to obtain point clouds at the same time; coordinate system conversion is performed on the point clouds at the same time according to the conversion relation between the viewpoints and the vehicle coordinate system to obtain intermediate point clouds under the vehicle coordinate system; the intermediate point clouds are merged to obtain a merged point cloud; the merged point cloud is clustered to obtain a processed point cloud; and the obstacle information is obtained according to the coordinate information of the processed point cloud. Obstacles can thus be detected and identified with high precision and high efficiency, meeting the requirement for high-precision obstacle identification in various environments.
In an embodiment, an obstacle detection device is provided, which corresponds to the obstacle detection methods in the foregoing embodiments one to one, as shown in fig. 10, fig. 10 is a schematic structural diagram of an obstacle detection device shown in an exemplary embodiment of the present application, and includes a data acquisition unit 1001, a time synchronization module 1002, a coordinate conversion module 1003, a concatenation module 1004, a clustering module 1005, and a post-processing module 1006, where each functional module is described in detail as follows:
the data acquisition unit 1001 is configured to acquire an original point cloud and an acquisition time of the original point cloud, where the original point cloud is obtained from at least two different viewpoints.
And the time synchronization module 1002 is configured to perform time synchronization processing on the original point clouds according to the acquisition time to obtain point clouds at the same time.
And the coordinate conversion module 1003 is used for carrying out coordinate system conversion on the point clouds at the same time according to the conversion relation between the viewpoints and the vehicle coordinate system, to obtain an intermediate point cloud under the vehicle coordinate system.
And the splicing module 1004 is configured to merge the intermediate point clouds to obtain merged point clouds.
And a clustering module 1005 for clustering the merged point cloud to obtain a processed point cloud.
And the post-processing module 1006 is configured to obtain the obstacle information according to the coordinate information of the processed point cloud.
In some embodiments, the time synchronization module 1002 is specifically configured to:
obtaining the time difference between the acquisition time and the system time according to the acquisition time and the system time of the original point cloud;
and compensating the time difference to the original point cloud so as to synchronize the time of the original point cloud and obtain the point cloud at the same time.
In some embodiments, the coordinate transformation module 1003 is specifically configured to:
obtaining a rotation translation matrix representing the conversion relation according to the calibration parameters of the viewpoint relative to the vehicle coordinate system;
and calculating the original point cloud according to the rotation and translation matrix to obtain an intermediate point cloud.
In some embodiments, the clustering module 1005 is specifically configured to:
carrying out primary clustering on the merged point cloud at the bird's-eye view angle to obtain a primarily clustered point cloud;
and performing formal clustering on the primarily clustered point cloud at the front-view angle to obtain a processed point cloud.
In some embodiments, the clustering module 1005 is further configured to:
identifying noise points in the primary clustering point cloud according to the coordinate information of the points in the primary clustering point cloud;
and screening out the noise points in the primary clustering point cloud according to the identification result.
In some embodiments, the post-processing module 1006 is specifically configured to:
and selecting at least one fitting direction, and fitting the processed point cloud in the fitting direction by using a convex hull algorithm.
And generating a convex frame for representing the obstacle information according to the fitting processing result.
In some embodiments, the obstacle detection apparatus further includes a region of interest selecting module 1007, specifically configured to:
after the intermediate point clouds are merged to obtain a merged point cloud, acquiring the width of the driving path, and selecting the area within the width of the driving path as the region of interest;
and according to the selected region of interest, screening out the part of the merged point cloud which falls outside the region of interest, and reserving the part of the merged point cloud which falls within the region of interest.
In some embodiments, the obstacle detection device further includes a ground filtering module 1008, specifically configured to:
after the intermediate point clouds are combined to obtain combined point clouds, identifying points representing the ground in the combined point clouds according to coordinate information of the points in the combined point clouds;
and filtering points representing the ground in the merged point cloud according to the identification result.
According to the obstacle detection device provided by this embodiment, to meet the requirement of accurate obstacle detection, original point clouds and their acquisition time are acquired through at least two different viewpoints; time synchronization processing is performed on the original point clouds according to the acquisition time to obtain point clouds at the same time; coordinate system conversion is performed on the point clouds at the same time according to the conversion relation between the viewpoints and the vehicle coordinate system to obtain intermediate point clouds under the vehicle coordinate system; the intermediate point clouds are merged to obtain a merged point cloud; and the merged point cloud is clustered to obtain a processed point cloud, so that the obstacle information is obtained according to the coordinate information of the processed point cloud. Obstacles are thus detected and identified with high precision and high efficiency, meeting the requirement for high-precision obstacle identification in various environments.
For the specific definition of the obstacle detection device, reference may be made to the above definition of an obstacle detection method, which is not described herein again. Each module in the above-described obstacle detection apparatus may be implemented in whole or in part by software, hardware, and a combination thereof. The modules may be embedded in a hardware form or may be independent of a processor in a computer, or may be stored in a memory in the computer in a software form, so that the processor invokes and executes operations corresponding to the modules.
In one embodiment, an obstacle detection device is provided, which may be, for example, a mobile smart device including an acquisition terminal 102 and a control terminal 103, or, for example, an automobile 101 integrating the acquisition terminal 102 and the control terminal 103; specifically, it may be a computer device, whose internal structure may be as shown in fig. 11. The computer equipment comprises a processor, a memory, a network interface, a display screen, an input device and a collection device connected through a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program on the non-volatile storage medium. The network interface of the computer equipment is used to connect and communicate with the acquisition device and receive the obstacle information acquired by the acquisition device from the viewpoints. The computer program, when executed by a processor, may implement the following steps:
acquiring an original point cloud and the acquisition time of the original point cloud, wherein the original point cloud is obtained through at least two different viewpoints;
according to the acquisition time, carrying out time synchronization processing on the original point cloud to obtain point clouds at the same time;
according to the conversion relation between the viewpoints and the vehicle coordinate system, carrying out coordinate system conversion on the point clouds at the same time to obtain an intermediate point cloud under the vehicle coordinate system;
merging the intermediate point clouds to obtain merged point clouds;
and clustering the merged point cloud to obtain a processed point cloud, and obtaining obstacle information according to the coordinate information of the processed point cloud.
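For illustration only, the steps above can be read as the following schematic pipeline. It is a minimal sketch, not the disclosed implementation: the function and parameter names (raw_clouds, stamps, extrinsics, ego_velocity) are hypothetical, the constant-velocity compensation is an assumption, and plain DBSCAN stands in for the two-stage clustering described later in the claims.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_obstacles(raw_clouds, stamps, sys_time, extrinsics, ego_velocity):
    """Illustrative end-to-end pipeline; every name here is a placeholder."""
    parts = []
    for cloud, t, T in zip(raw_clouds, stamps, extrinsics):
        # Time synchronization: move each (N, 3) cloud to the common system
        # time, assuming constant ego velocity over the short gap sys_time - t.
        synced = cloud - np.asarray(ego_velocity) * (sys_time - t)
        # Coordinate conversion: apply the 4x4 viewpoint-to-vehicle extrinsic
        # matrix T (a rotation-translation) to homogeneous points.
        homog = np.hstack([synced, np.ones((len(synced), 1))])
        parts.append((homog @ T.T)[:, :3])
    merged = np.vstack(parts)                        # merged point cloud, vehicle frame
    # Clustering: Euclidean grouping of the merged cloud.
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(merged)
    obstacles = []
    for k in set(labels) - {-1}:                     # label -1 marks DBSCAN noise
        pts = merged[labels == k]
        # Obstacle information from coordinate information: center and extent.
        obstacles.append((pts.mean(axis=0), pts.max(axis=0) - pts.min(axis=0)))
    return obstacles
```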
It is to be understood that the network interface may be, but is not limited to, a local area network interface, a wide area network interface, or a wireless network interface, and may take a wireless or wired form; it is typically used to establish a communication link between the computer device and other electronic devices. The network may be an intranet, the Internet, a Global System for Mobile Communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G network, a 5G network, Bluetooth, Wi-Fi (Wireless Fidelity), or another wireless or wired network.
According to the obstacle detection equipment provided by the embodiment of the application, to meet the requirement of accurate obstacle detection, at least two original point clouds are obtained simultaneously from at least two viewpoints; the original point clouds are time-synchronized according to the acquisition time of the original point cloud at each viewpoint; the original point clouds are converted into the vehicle coordinate system according to the conversion relation of each viewpoint relative to the vehicle coordinate system to obtain intermediate point clouds; the intermediate point clouds are merged to obtain a merged point cloud; a region of interest is set to screen the merged point cloud; the merged point cloud is filtered with a ground filtering algorithm; the merged point cloud is clustered to obtain a processed point cloud; and the size information of the obstacle is obtained from the processed point cloud. The equipment can thus detect and identify obstacles with high precision and high efficiency, meeting the requirement of high-precision obstacle identification in various environments.
In one embodiment, a device-readable storage medium is provided, on which a software program is stored which, when executed by a processor, performs the steps of:
acquiring an original point cloud and the acquisition time of the original point cloud, wherein the original point cloud is obtained through at least two different viewpoints;
according to the acquisition time, carrying out time synchronization processing on the original point cloud to obtain point clouds at the same time;
according to the conversion relation between the viewpoint and the vehicle coordinate system, carrying out coordinate system conversion on the point clouds at the same time to obtain an intermediate point cloud in the vehicle coordinate system;
merging the intermediate point clouds to obtain a merged point cloud;
and clustering the merged point cloud to obtain a processed point cloud, and obtaining obstacle information according to the coordinate information of the processed point cloud.
It should be noted that the device here includes, but is not limited to, electronic devices such as computers, in-vehicle head units, and mobile smart devices, and the software program here includes, but is not limited to, a computer program obtained by encoding.
It should be noted that, for the functions or steps that can be implemented by the device-readable storage medium or the device, reference may be made to the foregoing method embodiments; to avoid repetition, they are not described here one by one.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a software program, which may be stored in a non-volatile device-readable storage medium and which, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
It will be clear to those skilled in the art that, for convenience and brevity of description, the division into the functional units and modules described above is merely illustrative; in practical applications, the above functions may be allocated to different functional units or modules as required, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The foregoing embodiments are merely illustrative of the principles of the present invention and its efficacy, and are not to be construed as limiting the invention. Those skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical idea disclosed by the present invention shall be covered by the claims of the present invention.

Claims (11)

1. An obstacle detection method, comprising:
acquiring an original point cloud and the acquisition time of the original point cloud, wherein the original point cloud is obtained through at least two different viewpoints;
according to the acquisition time, carrying out time synchronization processing on the original point cloud to obtain point clouds at the same time;
according to the conversion relation between the viewpoint and the vehicle coordinate system, carrying out coordinate system conversion on the point clouds at the same time to obtain an intermediate point cloud in the vehicle coordinate system;
merging the intermediate point clouds to obtain a merged point cloud;
and clustering the merged point cloud to obtain a processed point cloud, and obtaining obstacle information according to the coordinate information of the processed point cloud.
2. The method according to claim 1, wherein the carrying out time synchronization processing on the original point clouds according to the acquisition time to obtain point clouds at the same time comprises:
obtaining a time difference between the acquisition time and the system time according to the acquisition time and the system time of the original point cloud;
and compensating the original point clouds based on the time difference so as to synchronize the time of the original point clouds and obtain point clouds at the same time.
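As an illustration of this compensation step, the following sketch shifts one original point cloud by the ego displacement over the time difference. It assumes a constant ego velocity supplied by the caller (ego_velocity is a hypothetical input, not something the claim specifies) and ignores rotational motion.

```python
import numpy as np

def compensate_cloud(points, acquisition_time, system_time, ego_velocity):
    """Motion-compensate one original point cloud to the system time.

    Minimal sketch of the compensation in claim 2 under a constant-velocity
    assumption; a real system would also compensate rotation (yaw rate).
    """
    dt = system_time - acquisition_time            # time difference per claim 2
    # A static point observed dt seconds ago sits ego_velocity * dt ahead of
    # where it would be seen at system time, so shift the cloud back by that.
    return points - np.asarray(ego_velocity) * dt
```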
3. The method according to claim 1, wherein converting the original point cloud into the vehicle coordinate system according to the conversion relation of each viewpoint relative to the vehicle coordinate system to obtain an intermediate point cloud comprises:
obtaining a rotation-translation matrix representing the conversion relation according to the calibration parameters of the viewpoint relative to the vehicle coordinate system;
and calculating the original point cloud according to the rotation-translation matrix to obtain an intermediate point cloud.
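A minimal sketch of this calculation, assuming the calibration parameters have already been condensed into a 3x3 rotation matrix and a 3-vector translation (both hypothetical inputs here):

```python
import numpy as np

def to_vehicle_frame(points, rotation, translation):
    """Apply the rotation-translation of claim 3 to an (N, 3) point cloud.

    `rotation` (3x3) and `translation` (3,) come from offline calibration of
    the viewpoint against the vehicle coordinate system; the values are
    assumed inputs, not parameters disclosed by the patent.
    """
    R, t = np.asarray(rotation), np.asarray(translation)
    return points @ R.T + t        # row-wise p_vehicle = R @ p_viewpoint + t
```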
4. The method according to claim 1, wherein the merging the intermediate point clouds to obtain a merged point cloud comprises:
selecting a region of interest for the merged point cloud according to driving path information;
and according to the selected region of interest, screening out the part of the merged point cloud that falls outside the region of interest, and retaining the part of the merged point cloud that falls within the region of interest.
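One way this screening might look in code, with the driving-path-derived region simplified to an axis-aligned box (roi_min and roi_max are assumed inputs):

```python
import numpy as np

def screen_by_roi(merged_points, roi_min, roi_max):
    """Screen a merged point cloud against a region of interest (claim 4).

    The patent derives the region from driving-path information; this sketch
    abstracts it into (x, y, z) bounds roi_min / roi_max in the vehicle frame.
    """
    inside = np.all((merged_points >= roi_min) & (merged_points <= roi_max), axis=1)
    return merged_points[inside]   # keep points inside, screen out the rest
```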
5. The method according to claim 1, wherein the merging the intermediate point clouds to obtain a merged point cloud comprises:
identifying points representing the ground in the merged point cloud according to the coordinate information of the points in the merged point cloud;
and filtering points representing the ground in the merged point cloud according to the identification result.
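A hedged sketch of this ground filtering, substituting the simplest possible identification rule, a fixed height threshold, for whatever coordinate-based criterion an implementation would actually use:

```python
import numpy as np

def remove_ground(merged_points, z_threshold=0.2):
    """Identify and filter ground points from coordinate information (claim 5).

    Simplest stand-in: points whose height z in the vehicle frame is below a
    threshold are treated as ground; a production system would more likely
    fit a ground plane (e.g. with RANSAC) than use a constant height.
    """
    is_ground = merged_points[:, 2] <= z_threshold
    return merged_points[~is_ground]
```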
6. The method of claim 1, wherein the clustering the merged point cloud to obtain a processed point cloud comprises:
carrying out primary clustering on the merged point cloud from a bird's-eye view to obtain a primarily clustered point cloud;
and carrying out formal clustering on the primarily clustered point cloud from a front view to obtain the processed point cloud.
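A sketch of this two-stage scheme, using DBSCAN for both stages; the choice of DBSCAN, the projection planes, and the eps/min_samples values are illustrative assumptions, not taken from the patent:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def two_stage_cluster(points):
    """Primary clustering in the bird's-eye view, then formal clustering in
    the front view, in the spirit of claim 6."""
    # Primary clustering: group points by their x-y footprint (bird's-eye view).
    bev = DBSCAN(eps=0.6, min_samples=5).fit_predict(points[:, :2])
    clusters = []
    for k in set(bev) - {-1}:
        group = points[bev == k]
        # Formal clustering: re-cluster each group in the y-z plane (front
        # view) to separate objects that overlap when seen from above.
        fv = DBSCAN(eps=0.4, min_samples=5).fit_predict(group[:, 1:3])
        clusters.extend(group[fv == j] for j in set(fv) - {-1})
    return clusters
```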
7. The method of claim 6, wherein the carrying out primary clustering on the merged point cloud from the bird's-eye view further comprises:
identifying noise points in the primarily clustered point cloud according to the coordinate information of the points in the primarily clustered point cloud;
and screening out the noise points from the primarily clustered point cloud according to the identification result.
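The claim does not fix a noise criterion beyond per-point coordinate information; statistical outlier removal is one common reading, sketched below with illustrative parameters:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_noise_points(points, k=8, std_ratio=2.0):
    """Screen noise out of the primarily clustered cloud (claim 7).

    A point whose mean distance to its k nearest neighbours is far above the
    cloud-wide average is treated as noise; k and std_ratio are assumptions.
    """
    dists, _ = cKDTree(points).query(points, k=k + 1)  # column 0 is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]
```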
8. The method according to claim 1, wherein obtaining the obstacle information from the coordinate information of the processed point cloud comprises:
fitting the processed point cloud along a pre-selected fitting direction;
and generating a convex frame according to the result of the fitting, wherein the convex frame contains the obstacle information.
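One plausible reading of this fitting step: rotate the cluster so the pre-selected direction aligns with the x-axis, take the 2-D convex hull as the convex frame, and read the obstacle size off the aligned extents. The fitting_direction input (in radians, e.g. the lane heading) is an assumption:

```python
import numpy as np
from scipy.spatial import ConvexHull

def fit_convex_frame(cluster_points, fitting_direction):
    """Fit a processed-point-cloud cluster along a pre-selected direction and
    generate a convex frame (claim 8); an illustrative sketch only."""
    c, s = np.cos(fitting_direction), np.sin(fitting_direction)
    R = np.array([[c, s], [-s, c]])                # rotates the direction onto +x
    xy = cluster_points[:, :2] @ R.T
    hull = ConvexHull(xy)                          # 2-D convex frame of the cluster
    length = np.ptp(xy[:, 0])                      # extent along the fitting direction
    width = np.ptp(xy[:, 1])
    height = np.ptp(cluster_points[:, 2])
    corners = xy[hull.vertices] @ R                # hull corners back in vehicle frame
    return corners, (length, width, height)       # frame plus obstacle size info
```

For a vehicle-like cluster fitted along the lane heading, (length, width, height) then approximates the obstacle's extent along and across the lane.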
9. An obstacle detection device, comprising:
the acquisition module is used for acquiring an original point cloud and the acquisition time of the original point cloud, wherein the original point cloud is obtained through at least two different viewpoints;
the time synchronization module is used for carrying out time synchronization processing on the original point clouds according to the acquisition time to obtain point clouds at the same time;
the coordinate conversion module is used for performing coordinate system conversion on the point clouds at the same time according to the conversion relation between the viewpoint and the vehicle coordinate system to obtain an intermediate point cloud in the vehicle coordinate system;
the splicing module is used for merging the intermediate point clouds to obtain merged point clouds;
the clustering module is used for clustering the merged point cloud to obtain a processed point cloud;
and the post-processing module is used for obtaining the obstacle information according to the coordinate information of the processed point cloud.
10. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the obstacle detection method according to any one of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method for detecting an obstacle according to any one of claims 1 to 8.
CN202210593765.5A 2022-05-27 2022-05-27 Obstacle detection method, device, equipment and storage medium Pending CN114998864A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210593765.5A CN114998864A (en) 2022-05-27 2022-05-27 Obstacle detection method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114998864A true CN114998864A (en) 2022-09-02

Family

ID=83030083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210593765.5A Pending CN114998864A (en) 2022-05-27 2022-05-27 Obstacle detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114998864A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200180652A1 (en) * 2018-12-10 2020-06-11 Beijing Baidu Netcom Science Technology Co., Ltd. Point cloud data processing method, apparatus, device, vehicle and storage medium
CN111160302A (en) * 2019-12-31 2020-05-15 深圳一清创新科技有限公司 Obstacle information identification method and device based on automatic driving environment
WO2021134441A1 (en) * 2019-12-31 2021-07-08 深圳元戎启行科技有限公司 Automated driving-based vehicle speed control method and apparatus, and computer device
CN112327329A (en) * 2020-11-25 2021-02-05 浙江欣奕华智能科技有限公司 Obstacle avoidance method, target device, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116197910A (en) * 2023-03-16 2023-06-02 江苏集萃清联智控科技有限公司 Environment sensing method and device for wind power blade wheel type mobile polishing robot
CN116197910B (en) * 2023-03-16 2024-01-23 江苏集萃清联智控科技有限公司 Environment sensing method and device for wind power blade wheel type mobile polishing robot

Similar Documents

Publication Publication Date Title
CN111077506B (en) Method, device and system for calibrating millimeter wave radar
CN111160302A (en) Obstacle information identification method and device based on automatic driving environment
CN111383279A (en) External parameter calibration method and device and electronic equipment
CN111179274B (en) Map ground segmentation method, device, computer equipment and storage medium
CN107796373B (en) Distance measurement method based on monocular vision of front vehicle driven by lane plane geometric model
CN114930401A (en) Point cloud-based three-dimensional reconstruction method and device and computer equipment
EP4083917A1 (en) Depth image processing method, small obstacle detection method and system, robot, and medium
CN111080784B (en) Ground three-dimensional reconstruction method and device based on ground image texture
CN113205604A (en) Feasible region detection method based on camera and laser radar
CN115066708A (en) Point cloud data motion segmentation method and device, computer equipment and storage medium
CN111360810A (en) External parameter calibration method and device for robot sensor, robot and storage medium
WO2022133770A1 (en) Method for generating point cloud normal vector, apparatus, computer device, and storage medium
CN112802092A (en) Obstacle sensing method and device and electronic equipment
CN113850786A (en) Method and device for detecting vehicle door gap parameters and measuring equipment
CN114998864A (en) Obstacle detection method, device, equipment and storage medium
CN116295279A (en) Unmanned aerial vehicle remote sensing-based building mapping method and unmanned aerial vehicle
CN114910891A (en) Multi-laser radar external parameter calibration method based on non-overlapping fields of view
CN115097419A (en) External parameter calibration method and device for laser radar IMU
CN114863064A (en) Method and system for constructing automobile contour curved surface model
CN114494466A (en) External parameter calibration method, device and equipment and storage medium
CN114359856A (en) Feature fusion method and device, server and computer readable storage medium
CN116051980B (en) Building identification method, system, electronic equipment and medium based on oblique photography
CN115272408A (en) Vehicle stationary detection method, device, computer equipment and storage medium
CN113433568B (en) Laser radar observation simulation method and device
CN115222815A (en) Obstacle distance detection method, obstacle distance detection device, computer device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination