CN115953761A - Obstacle detection method and device for automatic driving and system for automatic driving - Google Patents

Obstacle detection method and device for automatic driving and system for automatic driving

Info

Publication number
CN115953761A
CN115953761A (Application No. CN202310028466.1A)
Authority
CN
China
Prior art keywords
detection data
frame
obstacle detection
polygon
box
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310028466.1A
Other languages
Chinese (zh)
Inventor
陈至元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiuzhi Suzhou Intelligent Technology Co ltd
Jiuzhizhixing Beijing Technology Co ltd
Original Assignee
Jiuzhi Suzhou Intelligent Technology Co ltd
Jiuzhizhixing Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiuzhi Suzhou Intelligent Technology Co ltd, Jiuzhizhixing Beijing Technology Co ltd filed Critical Jiuzhi Suzhou Intelligent Technology Co ltd
Priority to CN202310028466.1A priority Critical patent/CN115953761A/en
Publication of CN115953761A publication Critical patent/CN115953761A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application discloses an obstacle detection method and device for automatic driving and a system for automatic driving. The method comprises the following steps: acquiring first obstacle detection data output by a deep learning model for a current scene and second obstacle detection data output by a clustering algorithm for the current scene; performing an association calculation on the first obstacle detection data and the second obstacle detection data, and determining, based on the association calculation result, the detection data in the second obstacle detection data that is associated with the first obstacle detection data; and updating the first obstacle detection data based on the associated detection data, and outputting the updated first obstacle detection data as an obstacle detection result. The method, device, and system improve the speed and accuracy of automatic obstacle detection in automatic driving, thereby improving the safety of autonomous vehicles.

Description

Obstacle detection method and device for automatic driving and system for automatic driving
Technical Field
The application relates to the technical field of automatic driving, and in particular to an obstacle detection method and device for automatic driving and a system for automatic driving.
Background
With the continuous development of automatic driving technology, in order to ensure the safe operation of an autonomous vehicle, obstacle detection is performed during automatic driving by continuously scanning the surrounding environment with an on-board laser radar (Lidar).
In the related art, a single obstacle detection module is mostly used to detect obstacles. Such a module easily misses obstacles with indistinct features, which affects the traveling safety of autonomous vehicles, and it generally cannot obtain accurate obstacle polygon information, so that much passable space is lost and practical requirements are difficult to meet.
Disclosure of Invention
In view of the problems in the prior art, the application provides an obstacle detection method and device for automatic driving and a system for automatic driving.
In a first aspect, the application discloses an obstacle detection method for automatic driving, which includes: acquiring first obstacle detection data output by a deep learning model for a current scene and second obstacle detection data output by a clustering algorithm for the current scene; performing an association calculation on the first obstacle detection data and the second obstacle detection data, and determining, based on the association calculation result, the detection data in the second obstacle detection data that is associated with the first obstacle detection data; and updating the first obstacle detection data based on the associated detection data, and outputting the updated first obstacle detection data as an obstacle detection result.
Illustratively, when it is determined based on the association calculation result that there is detection data in the second obstacle detection data that is not associated with the first obstacle detection data, the method further includes: outputting the non-associated detection data as an obstacle detection result.
Illustratively, when it is determined based on the association calculation result that there is detection data in the second obstacle detection data that is not associated with the first obstacle detection data, after the first obstacle detection data is updated based on the associated detection data, the method further includes: performing the association calculation on the updated first obstacle detection data and the non-associated detection data; and when it is determined based on the association calculation result that there is detection data among the non-associated detection data that is associated with the updated first obstacle detection data, updating the first obstacle detection data again based on the determined associated detection data, and outputting the updated first obstacle detection data.
Illustratively, when it is determined based on the association calculation result that there is detection data among the non-associated detection data that is not associated with the updated first obstacle detection data, the method further includes: outputting the determined non-associated detection data as the obstacle detection result.
Illustratively, the first obstacle detection data includes a bounding box and a category of an obstacle at the bird's eye view angle, and the second obstacle detection data includes a polygon box of the obstacle at the bird's eye view angle and point cloud information contained inside the polygon box; the association calculation includes: for each of the polygon boxes, determining whether there are one or more of the bounding boxes that overlap the polygon box; the determining, based on the association calculation result, the detection data in the second obstacle detection data that is associated with the first obstacle detection data includes: when it is determined that no bounding box overlaps the polygon box, taking the polygon box as the non-associated detection data; when it is determined that a bounding box overlaps the polygon box, determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the number of the bounding boxes that overlap the polygon box.
Illustratively, with each bounding box that overlaps the polygon box taken as a target bounding box, the determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the number of the bounding boxes that overlap the polygon box comprises: when it is determined that at least two target bounding boxes exist, taking the point cloud points in the polygon box that are located inside a target bounding box as the associated detection data, and taking the point cloud points not inside any target bounding box as the non-associated detection data; when it is determined that one target bounding box exists, determining the point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the overlap rate of the target bounding box and the polygon box.
Exemplarily, the determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the overlap rate of the target bounding box and the polygon box comprises: when the overlap rate of the target bounding box and the polygon box is greater than or equal to a first threshold, taking all point cloud points in the polygon box as the associated detection data; and when the overlap rate of the target bounding box and the polygon box is smaller than the first threshold, taking the point cloud points in the polygon box that are located inside the target bounding box as the associated detection data, and taking the point cloud points not inside the target bounding box as the non-associated detection data.
Illustratively, for each of the polygon boxes, determining whether there are one or more of the bounding boxes that overlap the polygon box comprises: calculating the overlap rate in height between the polygon box and each bounding box; when the overlap rate is smaller than a second threshold, determining that the polygon box and the bounding box do not overlap; when the overlap rate is greater than or equal to the second threshold, determining that the polygon box and the bounding box overlap.
Illustratively, the updating the first obstacle detection data based on the associated detection data comprises: for each bounding box, when detection data associated with the bounding box exists, generating a convex hull from all point cloud points in the associated detection data, taking the boundary of the convex hull as the polygon box of the bounding box, and updating the bounding box based on the boundary points of its polygon box and the four corner points of the bounding box.
Illustratively, for each of the bounding boxes, when no detection data associated with the bounding box exists, a convex hull is generated from the four corner points of the bounding box, and the boundary of the convex hull is taken as the polygon box of the bounding box.
In a second aspect, the present application discloses an obstacle detection apparatus for autonomous driving, the apparatus comprising a memory and a processor, the memory having stored thereon a computer program to be run by the processor, the computer program, when run by the processor, causing the processor to perform the steps of: acquiring first obstacle detection data output by a deep learning model for a current scene and second obstacle detection data output by a clustering algorithm for the current scene; performing an association calculation on the first obstacle detection data and the second obstacle detection data, and determining, based on the association calculation result, the detection data in the second obstacle detection data that is associated with the first obstacle detection data; and updating the first obstacle detection data based on the associated detection data, and outputting the updated first obstacle detection data as an obstacle detection result.
Illustratively, when it is determined based on the association calculation result that there is detection data in the second obstacle detection data that is not associated with the first obstacle detection data, the processor is further configured to: output the non-associated detection data as an obstacle detection result.
Illustratively, when it is determined based on the association calculation result that there is detection data in the second obstacle detection data that is not associated with the first obstacle detection data, after the first obstacle detection data is updated based on the associated detection data, the processor is further configured to: perform the association calculation on the updated first obstacle detection data and the non-associated detection data; and when it is determined based on the association calculation result that there is detection data among the non-associated detection data that is associated with the updated first obstacle detection data, update the first obstacle detection data again based on the determined associated detection data, and output the updated first obstacle detection data.
Illustratively, when it is determined based on the association calculation result that there is detection data among the non-associated detection data that is not associated with the updated first obstacle detection data, the processor is further configured to: output the determined non-associated detection data as the obstacle detection result.
Illustratively, the first obstacle detection data includes a bounding box and a category of an obstacle at the bird's eye view angle, and the second obstacle detection data includes a polygon box of the obstacle at the bird's eye view angle and point cloud information contained inside the polygon box; the association calculation includes: for each of the polygon boxes, determining whether there are one or more of the bounding boxes that overlap the polygon box; the determining, based on the association calculation result, the detection data in the second obstacle detection data that is associated with the first obstacle detection data includes: when it is determined that no bounding box overlaps the polygon box, taking the polygon box as the non-associated detection data; when it is determined that a bounding box overlaps the polygon box, determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the number of the bounding boxes that overlap the polygon box.
Illustratively, with each bounding box that overlaps the polygon box taken as a target bounding box, the processor determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the number of the bounding boxes that overlap the polygon box includes: when it is determined that at least two target bounding boxes exist, taking the point cloud points in the polygon box that are located inside a target bounding box as the associated detection data, and taking the point cloud points not inside any target bounding box as the non-associated detection data; when it is determined that one target bounding box exists, determining the point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the overlap rate of the target bounding box and the polygon box.
Illustratively, the processor determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the overlap rate of the target bounding box and the polygon box includes: when the overlap rate of the target bounding box and the polygon box is greater than or equal to a first threshold, taking all point cloud points in the polygon box as the associated detection data; and when the overlap rate of the target bounding box and the polygon box is smaller than the first threshold, taking the point cloud points in the polygon box that are located inside the target bounding box as the associated detection data, and taking the point cloud points not inside the target bounding box as the non-associated detection data.
Illustratively, for each of the polygon boxes, the processor determining whether there are one or more of the bounding boxes that overlap the polygon box includes: calculating the overlap rate in height between the polygon box and each bounding box; when the overlap rate is smaller than a second threshold, determining that the polygon box and the bounding box do not overlap; when the overlap rate is greater than or equal to the second threshold, determining that the polygon box and the bounding box overlap.
Illustratively, the processor updating the first obstacle detection data based on the associated detection data includes: for each bounding box, when detection data associated with the bounding box exists, generating a convex hull from all point cloud points in the associated detection data, taking the boundary of the convex hull as the polygon box of the bounding box, and updating the bounding box based on the boundary points of its polygon box and the four corner points of the bounding box.
Illustratively, for each of the bounding boxes, when no detection data associated with the bounding box exists, the processor generates a convex hull from the four corner points of the bounding box, and takes the boundary of the convex hull as the polygon box of the bounding box.
In a third aspect, the present application discloses a system for autonomous driving, the system comprising a positioning subsystem, a perception subsystem, a decision subsystem and a control subsystem, wherein: the positioning subsystem is used for acquiring the pose information of the automatic driving vehicle in real time and transmitting the pose information to the decision-making subsystem; the perception subsystem is used for detecting lanes and obstacles and transmitting detection results to the decision-making subsystem; the decision subsystem is used for making a decision on the automatic driving vehicle by combining the data information transmitted by the positioning subsystem and the sensing subsystem and transmitting decision information to the control subsystem; the control subsystem is used for controlling the automatic driving vehicle based on decision information transmitted by the decision subsystem; wherein the perception subsystem comprises the obstacle detection device for autonomous driving as described above to obtain an obstacle detection result.
In a fourth aspect, the present application discloses a vehicle comprising the system for autonomous driving described above.
In a fifth aspect, the present application discloses a storage medium having stored thereon a computer program for execution by a processor, which computer program, when executed by the processor, causes the processor to execute the obstacle detection method for autonomous driving as described above.
According to the obstacle detection method and device for automatic driving and the system for automatic driving provided by the application, after the first obstacle detection data output by the deep learning model for the current scene and the second obstacle detection data output by the clustering algorithm for the current scene are acquired, the association calculation can be performed automatically on the first obstacle detection data and the second obstacle detection data, the detection data in the second obstacle detection data that is associated with the first obstacle detection data is determined based on the association calculation result, the first obstacle detection data is updated with the associated detection data, and the updated first obstacle detection data is output as the obstacle detection result. Obstacles are thus detected automatically for automatic driving, and the accuracy of the position, category, and shape of each obstacle is guaranteed, thereby improving the safety of autonomous vehicles.
Drawings
The following drawings of the present application are included to provide an understanding of the present application. The drawings illustrate embodiments of the application and, together with their description, serve to explain the principles of the application. In the drawings:
fig. 1 is a flowchart of an obstacle detection method for automatic driving according to an embodiment of the present application.
Fig. 2 is a schematic diagram of updating a bounding box using associated detection data in an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an obstacle detection device for automatic driving in an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a system for automatic driving in an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present application. It will be apparent, however, to one skilled in the art, that the present application may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present application.
It is to be understood that the present application is capable of implementation in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art. In the drawings, the size and relative sizes of layers and regions may be exaggerated for clarity. Like reference numerals refer to like elements throughout.
It will be understood that, although the terms first, second, third, etc. may be used to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the present application.
Spatially relative terms such as "beneath," "below," "under," "above," "over," and the like may be used herein for convenience in describing the relationship of one element or feature to another element or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures.
Referring to fig. 1, an embodiment of the present application provides an autonomous driving obstacle detection method, including the following steps:
In step S101, first obstacle detection data output by the deep learning model for the current scene and second obstacle detection data output by the clustering algorithm for the current scene are obtained.
In step S102, a correlation calculation is performed on the first obstacle detection data and the second obstacle detection data, and detection data related to the first obstacle detection data in the second obstacle detection data is determined based on a correlation calculation result.
In step S103, the first obstacle detection data is updated based on the associated detection data, and the updated first obstacle detection data is output as an obstacle detection result.
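Putting the three steps together, the overall flow can be sketched as follows. This is a minimal illustration only: the deep learning detector, the clustering algorithm, the association calculation, and the box update are assumed to be supplied as callables, and the names run_model, run_clustering, associate, and update_boxes are hypothetical stand-ins, not part of the application.

```python
def detect_obstacles(point_cloud, run_model, run_clustering,
                     associate, update_boxes):
    # S101: two independent detectors process the same scene.
    dl_boxes = run_model(point_cloud)       # bounding boxes + categories (BEV)
    clusters = run_clustering(point_cloud)  # polygon boxes + interior points (BEV)

    # S102: association calculation between the two outputs.
    associated, non_associated = associate(dl_boxes, clusters)

    # S103: refine the deep-learning boxes with the associated cluster points;
    # clusters the model missed are kept in the output as well.
    results = update_boxes(dl_boxes, associated)
    results.extend(non_associated)
    return results
```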
By means of the obstacle detection method for automatic driving described above, obstacles can be detected automatically during automatic driving, and the accuracy of the position, category, and shape of each obstacle can be guaranteed, thereby improving the safety of the autonomous vehicle.
The method is illustrated below with reference to the figures.
First, the first obstacle detection data output by the deep learning model for the current scene and the second obstacle detection data output by the clustering algorithm for the current scene are obtained.
Illustratively, when it is determined based on the association calculation result that there is detection data in the second obstacle detection data that is not associated with the first obstacle detection data, the method further includes: outputting the non-associated detection data as an obstacle detection result.
For example, in step S102, when it is determined based on the association calculation result that part of the second obstacle detection data is not associated with the first obstacle detection data, that part of the second obstacle detection data is regarded as obstacles that would otherwise go undetected by the obstacle detection method for automatic driving, and it is output as part of the obstacle detection result.
Illustratively, when it is determined based on the association calculation result that there is detection data in the second obstacle detection data that is not associated with the first obstacle detection data, after the first obstacle detection data is updated based on the associated detection data, the method further includes: performing the association calculation on the updated first obstacle detection data and the non-associated detection data; and when it is determined based on the association calculation result that there is detection data among the non-associated detection data that is associated with the updated first obstacle detection data, updating the first obstacle detection data again based on the determined associated detection data, and outputting the updated first obstacle detection data.
Illustratively, when it is determined based on the association calculation result that there is detection data among the non-associated detection data that is not associated with the updated first obstacle detection data, the method further includes: outputting the determined non-associated detection data as an obstacle detection result.
For example, in step S103, the updated first obstacle detection data is traversed, and the association calculation is performed between the updated first obstacle detection data and the detection data determined in step S102 to be not associated with the pre-update first obstacle detection data; whether detection data associated with the updated first obstacle detection data exists among the non-associated detection data is then determined based on the association calculation result. Detection data for which an association is found is removed from the obstacle detection result to be output, added as associated detection data of the first obstacle detection data, and used to update the first obstacle detection data; detection data still without an association is added to the output obstacle detection result as obstacles not detected by the deep learning model.
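A sketch of this second association pass follows, under the same assumptions as the earlier sketch (associate and update_boxes are hypothetical callables; results is the output queue that already holds the leftover clusters):

```python
def reassociate(updated_boxes, leftover_clusters, results,
                associate, update_boxes):
    # Re-run the association between the refined boxes and the clusters
    # that found no match in the first pass.
    associated, still_unmatched = associate(updated_boxes, leftover_clusters)

    # Newly matched clusters leave the output queue and refine the boxes again.
    for cluster in associated:
        if cluster in results:
            results.remove(cluster)
    updated_boxes = update_boxes(updated_boxes, associated)

    # Clusters still without an association stay in the output queue as
    # obstacles the deep learning model did not detect.
    return updated_boxes, results
```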
Illustratively, the first obstacle detection data includes a bounding box and a category of an obstacle at the bird's eye view angle, and the second obstacle detection data includes a polygon box of the obstacle at the bird's eye view angle and point cloud information contained inside the polygon box; the association calculation includes: for each of the polygon boxes, determining whether there are one or more of the bounding boxes that overlap the polygon box; the determining, based on the association calculation result, the detection data in the second obstacle detection data that is associated with the first obstacle detection data includes: when it is determined that no bounding box overlaps the polygon box, taking the polygon box as the non-associated detection data; when it is determined that a bounding box overlaps the polygon box, determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the number of the bounding boxes that overlap the polygon box.
For example, in step S101, the acquired first obstacle detection data output by the deep learning model for the current scene includes the bounding box and category of each obstacle at the bird's eye view angle, and the acquired second obstacle detection data output by the clustering algorithm for the current scene includes the polygon box of each obstacle at the bird's eye view angle and the point cloud information contained inside the polygon box. In step S102, for each polygon box in the second obstacle detection data, an overlap calculation is performed with each bounding box in the first obstacle detection data, and whether the polygon box and the bounding box overlap is determined based on the degree of overlap. When it is determined that the polygon box overlaps no bounding box, the polygon box is taken as non-associated detection data. In an example, the non-associated detection data forms a queue of independent point cloud clustering results. In an example, when the polygon box and a bounding box are determined to overlap, based on the number of bounding boxes that overlap the polygon box, the point cloud points in the polygon box are determined as detection data associated with a bounding box and added to the associated point cloud queue corresponding to that bounding box, and/or treated as obstacles the deep learning model could not detect and added to the independent point cloud clustering result queue.
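As a concrete illustration, the BEV overlap test between one polygon box and the bounding boxes might look like the following sketch, assuming shapely is available and each box is represented by its four BEV corner points (the data layout is an assumption, not specified by the application):

```python
from shapely.geometry import Polygon

def find_target_boxes(polygon_vertices, bounding_boxes):
    """Return the bounding boxes whose BEV footprint overlaps the polygon box.

    polygon_vertices: [(x, y), ...] vertices of a cluster's polygon box;
    bounding_boxes:   list of four-corner [(x, y), ...] lists from the model.
    """
    poly = Polygon(polygon_vertices)
    # Any bounding box whose rectangle intersects the polygon is a candidate.
    return [corners for corners in bounding_boxes
            if poly.intersects(Polygon(corners))]
```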
Illustratively, with each bounding box that overlaps the polygon box taken as a target bounding box, the determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the number of the bounding boxes that overlap the polygon box comprises: when it is determined that at least two target bounding boxes exist, taking the point cloud points in the polygon box that are located inside a target bounding box as the associated detection data, and taking the point cloud points not inside any target bounding box as the non-associated detection data; when it is determined that one target bounding box exists, determining the point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the overlap rate of the target bounding box and the polygon box.
In an example, in step S102, when it is determined that there are at least two target bounding boxes, for each point cloud point in the polygon box it is determined which target bounding box encloses the point, and the point is added, as detection data associated with that target bounding box, to the associated point cloud queue of that bounding box; the other point cloud points in the polygon box that fall inside no target bounding box are added, as non-associated detection data, to the independent point cloud clustering result queue as undetected obstacles. When it is determined that one target bounding box exists, based on the overlap rate of the target bounding box and the polygon box, either all point cloud points in the polygon box are added as associated detection data to the associated point cloud queue corresponding to the target bounding box, or the point cloud points enclosed by the target bounding box are added to that queue as associated detection data while a convex hull is regenerated from the remaining point cloud points, which are added as non-associated detection data to the independent point cloud clustering result queue.
Exemplarily, the determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the overlap rate of the target bounding box and the polygon box comprises: when the overlap rate of the target bounding box and the polygon box is greater than or equal to a first threshold, taking all point cloud points in the polygon box as the associated detection data; and when the overlap rate of the target bounding box and the polygon box is smaller than the first threshold, taking the point cloud points in the polygon box that are located inside the target bounding box as the associated detection data, and taking the point cloud points not inside the target bounding box as the non-associated detection data.
Illustratively, for each of the polygon boxes, determining whether there are one or more of the bounding boxes that overlap the polygon box comprises: calculating the overlap rate in height between the polygon box and each bounding box; when the overlap rate is smaller than a second threshold, determining that the polygon box and the bounding box do not overlap; when the overlap rate is greater than or equal to the second threshold, determining that the polygon box and the bounding box overlap.
In an example, the first threshold is 70% and the second threshold is 40%. In an example, when the overlap rate of the target bounding box and the polygon box is greater than or equal to the first threshold (a high overlap rate), all point cloud points in the polygon box are placed in the associated point cloud queue of the target bounding box as associated detection data. When the overlap rate of the target bounding box and the polygon box is smaller than the first threshold but greater than or equal to the second threshold (a low overlap rate), the point cloud points of the polygon box enclosed by the target bounding box are added to the associated point cloud queue of the target bounding box as associated detection data, while the point cloud points not enclosed by the target bounding box are taken as non-associated detection data, a convex hull is regenerated from them, and they are added to the independent point cloud clustering result queue. When the overlap rate of the target bounding box and the polygon box is smaller than the second threshold, the polygon box and the bounding box are determined not to overlap, and the polygon box is added to the independent point cloud clustering result queue as non-associated detection data.
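The assignment rules above can be sketched as follows. The 0.70 and 0.40 values mirror the example thresholds; the cluster layout (a dict with "points" and a precomputed BEV "overlap_rate") and the height-overlap formula are assumptions for illustration only.

```python
from shapely.geometry import Point, Polygon

FIRST_THRESHOLD = 0.70   # high overlap: the whole cluster joins the box
SECOND_THRESHOLD = 0.40  # below this height overlap: not overlapping at all

def height_overlap_rate(z_min_a, z_max_a, z_min_b, z_max_b):
    # Overlap of the two height intervals relative to their union (assumption).
    inter = min(z_max_a, z_max_b) - max(z_min_a, z_min_b)
    union = max(z_max_a, z_max_b) - min(z_min_a, z_min_b)
    return max(inter, 0.0) / union if union > 0 else 0.0

def assign_points(cluster, target_boxes):
    """Split cluster['points'] into per-box associated queues and leftovers."""
    associated = {}    # box index -> associated point cloud queue
    unassociated = []  # goes to the independent clustering result queue
    if len(target_boxes) >= 2:
        # Several candidates: each point joins the box that encloses it.
        rects = [Polygon(b["corners"]) for b in target_boxes]
        for p in cluster["points"]:
            idx = next((i for i, r in enumerate(rects)
                        if r.contains(Point(p[0], p[1]))), None)
            if idx is None:
                unassociated.append(p)
            else:
                associated.setdefault(idx, []).append(p)
    elif len(target_boxes) == 1:
        if cluster["overlap_rate"] >= FIRST_THRESHOLD:
            associated[0] = list(cluster["points"])  # take every point
        else:
            rect = Polygon(target_boxes[0]["corners"])
            for p in cluster["points"]:
                if rect.contains(Point(p[0], p[1])):
                    associated.setdefault(0, []).append(p)
                else:
                    unassociated.append(p)
    return associated, unassociated
```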
Referring to fig. 2, in an example, for each bounding box 201, when detection data associated with the bounding box 201 exists, that is, when there are point cloud points in the point cloud queue associated with the bounding box 201, a convex hull 202 is generated with all point cloud points in that queue as interior points, and the boundary 203 of the convex hull 202 is taken as the polygon box of the bounding box; a new bounding box 200 is then generated from the boundary points of the polygon box 203 formed by the convex hull 202 together with the four corner points of the bounding box 201, and the new bounding box 200 replaces the original bounding box 201 to complete the update of the bounding box. When no detection data associated with the bounding box exists, a convex hull is generated from the four corner points of the bounding box, and the boundary of the convex hull is taken as the polygon box of the bounding box. At this point, each item of first obstacle detection data output by the deep learning model for the current scene has both a bounding box and a polygon box.
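A sketch of this update using scipy's convex hull is shown below. Refitting the new box 200 as axis-aligned in the BEV plane is an assumption (the text only requires that it be built from the polygon's boundary points together with the four corners of box 201), and at least three non-collinear points are assumed for the hull.

```python
import numpy as np
from scipy.spatial import ConvexHull

def update_bounding_box(corners, associated_points):
    """corners: (4, 2) BEV corners of the original box 201;
    associated_points: (N, 2) BEV points in the box's associated queue."""
    if len(associated_points) > 0:
        pts = np.asarray(associated_points, dtype=float)
        hull = ConvexHull(pts)
        polygon = pts[hull.vertices]          # polygon box 203 from hull 202
        merged = np.vstack([polygon, corners])
    else:
        # No associated data: the box's own corners form the polygon box.
        pts = np.asarray(corners, dtype=float)
        polygon = pts[ConvexHull(pts).vertices]
        merged = polygon
    # New box 200: refit around the polygon points plus the old corners.
    (x_min, y_min), (x_max, y_max) = merged.min(axis=0), merged.max(axis=0)
    new_corners = np.array([[x_min, y_min], [x_max, y_min],
                            [x_max, y_max], [x_min, y_max]])
    return polygon, new_corners
```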
According to the method, the acquired first obstacle detection data and second obstacle detection data are subjected to the association calculation to obtain the detection data in the second obstacle detection data that is associated with the first obstacle detection data; the first obstacle detection data is updated with the associated detection data, and the updated first obstacle detection data is output as the obstacle detection result. This improves the speed and accuracy of automatic obstacle detection in automatic driving, thereby improving the safety of autonomous vehicles.
As shown in fig. 3, the present embodiment also provides an autonomous driving obstacle detection apparatus, which includes a memory 301 and a processor 302, where the memory 301 stores a computer program executed by the processor 302, and when the computer program is executed by the processor 302, the processor 302 is caused to perform the above-described method for autonomous driving obstacle detection and/or other desired functions. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the memory 301. The processor 302 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other form of processing unit having data processing capabilities and/or instruction execution capabilities.
The embodiment of the present application further provides a system for automatic driving, as shown in fig. 4, the system includes a positioning subsystem 401, a perception subsystem 402, a decision subsystem 403, and a control subsystem 404, where:
the positioning subsystem 401 is configured to acquire pose information of the autonomous vehicle in real time and transmit the pose information to the decision subsystem 403; the perception subsystem 402 is configured to detect lanes and obstacles and transmit the detection results to the decision subsystem 403; the decision subsystem 403 is configured to make decisions for the autonomous vehicle by combining the data transmitted by the positioning subsystem 401 and the perception subsystem 402, and to transmit the decision information to the control subsystem 404; the control subsystem 404 is configured to control the autonomous vehicle based on the decision information transmitted by the decision subsystem 403.
Wherein the perception subsystem 402 comprises the above-mentioned autonomous driving obstacle detection device to obtain the obstacle detection result.
The embodiment of the application also provides a vehicle, which comprises the system for automatic driving. The autonomous vehicle may further include other components, such as a driving system for driving the vehicle to move forward, a signal transmission system for transmitting a signal, and the like, which is not limited in this embodiment of the present invention.
The embodiment of the present application further provides a storage medium, on which a computer program run by a processor is stored; when the computer program is executed by the processor, the processor is caused to execute the obstacle detection method for automatic driving as described above. Illustratively, the computer storage medium may include a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above-described illustrative embodiments are only exemplary, and are not intended to limit the scope of the present application thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present application. All such changes and modifications are intended to be included within the scope of the present application as claimed in the appended claims.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the present application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.

Claims (23)

1. An obstacle detection method for automatic driving, characterized by comprising:
acquiring first obstacle detection data output by a deep learning model for a current scene and second obstacle detection data output by a clustering algorithm for the current scene;
performing correlation calculation on the first obstacle detection data and the second obstacle detection data, and determining detection data correlated with the first obstacle detection data in the second obstacle detection data based on a correlation calculation result;
updating the first obstacle detection data based on the correlated detection data, and outputting the updated first obstacle detection data as an obstacle detection result.
2. The method according to claim 1, characterized in that when it is determined that there is detection data that is not associated with the first obstacle detection data in the second obstacle detection data based on the association calculation result, the method further comprises: and outputting the non-associated detection data as an obstacle detection result.
3. The method according to claim 1, wherein when it is determined that there is detection data that is not associated with the first obstacle detection data in the second obstacle detection data based on the association calculation result, after the first obstacle detection data is updated based on the associated detection data, the method further comprises:
performing the correlation calculation on the updated first obstacle detection data and the non-correlated detection data;
when it is determined that there is detection data associated with the updated first obstacle detection data among the non-associated detection data based on the association calculation result, the first obstacle detection data is updated again based on the determined associated detection data, and the updated first obstacle detection data is output.
4. The method according to claim 3, wherein when it is determined that there is detection data that is not associated with the updated first obstacle detection data among the non-associated detection data based on an association calculation result, the method further comprises: the determined non-correlated detection data is output as the obstacle detection result.
5. The method according to claim 2 or 4, wherein the first obstacle detection data includes a bounding box and a category of an obstacle at a bird's eye view angle, and the second obstacle detection data includes a polygon box of the obstacle at the bird's eye view angle and point cloud information contained inside the polygon box;
the correlation calculation includes: for each of the polygon boxes, determining whether there are one or more of the bounding boxes that overlap the polygon box;
the determining, based on the correlation calculation result, detection data correlated with the first obstacle detection data in the second obstacle detection data includes:
when it is determined that no bounding box overlaps the polygon box, taking the polygon box as the non-associated detection data;
when it is determined that a bounding box overlaps the polygon box, determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the number of the bounding boxes that overlap the polygon box.
6. The method according to claim 5, wherein, with each bounding box that overlaps the polygon box taken as a target bounding box, the determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the number of the bounding boxes that overlap the polygon box comprises:
when it is determined that at least two target bounding boxes exist, taking the point cloud points in the polygon box that are located inside a target bounding box as the associated detection data, and taking the point cloud points not inside any target bounding box as the non-associated detection data;
when it is determined that one target bounding box exists, determining the point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the overlap rate of the target bounding box and the polygon box.
7. The method of claim 6, wherein the determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the overlap rate of the target bounding box and the polygon box comprises:
when the overlap rate of the target bounding box and the polygon box is greater than or equal to a first threshold, taking all point cloud points in the polygon box as the associated detection data;
and when the overlap rate of the target bounding box and the polygon box is smaller than the first threshold, taking the point cloud points in the polygon box that are located inside the target bounding box as the associated detection data, and taking the point cloud points not inside the target bounding box as the non-associated detection data.
8. The method of claim 5, wherein determining, for each of the polygon boxes, whether one or more of the bounding boxes overlap the polygon box comprises:
calculating the overlap rate in height between the polygon box and each bounding box;
when the overlap rate is smaller than a second threshold, determining that the polygon box and the bounding box do not overlap;
when the overlap rate is greater than or equal to the second threshold, determining that the polygon box and the bounding box overlap.
9. The method of claim 5, wherein said updating the first obstacle detection data based on the associated detection data comprises:
for each bounding box, when detection data associated with the bounding box exists, generating a convex hull from all point cloud points in the associated detection data, taking the boundary of the convex hull as the polygon box of the bounding box, and updating the bounding box based on the boundary points of its polygon box and the four corner points of the bounding box.
10. The method according to claim 9, wherein, for each of the bounding boxes, when no detection data associated with the bounding box exists, a convex hull is generated from the four corner points of the bounding box, and the boundary of the convex hull is taken as the polygon box of the bounding box.
11. An obstacle detection apparatus for autonomous driving, characterized in that the apparatus comprises a memory and a processor, the memory having stored thereon a computer program to be run by the processor, the computer program, when being run by the processor, causing the processor to carry out the steps of:
acquiring first obstacle detection data output by a deep learning model for a current scene and second obstacle detection data output by a clustering algorithm for the current scene;
performing correlation calculation on the first obstacle detection data and the second obstacle detection data, and determining detection data correlated with the first obstacle detection data in the second obstacle detection data based on a correlation calculation result;
updating the first obstacle detection data based on the associated detection data, and outputting the updated first obstacle detection data as an obstacle detection result.
12. The apparatus according to claim 11, wherein, when it is determined that there is detection data that is not associated with the first obstacle detection data in the second obstacle detection data based on the association calculation result, the processor is further configured to: output the non-associated detection data as an obstacle detection result.
13. The apparatus according to claim 11, wherein when it is determined that there is detection data that is not associated with the first obstacle detection data in the second obstacle detection data based on the association calculation result, after the first obstacle detection data is updated based on the associated detection data, the processor is further configured to:
performing the correlation calculation on the updated first obstacle detection data and the non-correlated detection data;
when it is determined that there is detection data associated with the updated first obstacle detection data among the non-associated detection data based on the association calculation result, the first obstacle detection data is updated again based on the determined associated detection data, and the updated first obstacle detection data is output.
14. The apparatus of claim 13, wherein when it is determined that there is detection data that is not associated with the updated first obstacle detection data in the non-associated detection data based on an association calculation result, the processor is further configured to: the determined non-correlated detection data is output as an obstacle detection result.
15. The apparatus according to claim 12 or 14, wherein the first obstacle detection data includes a bounding box and a category of an obstacle at a bird's eye view angle, and the second obstacle detection data includes a polygon box of the obstacle at the bird's eye view angle and point cloud information contained inside the polygon box;
the correlation calculation includes: for each of the polygon boxes, determining whether there are one or more of the bounding boxes that overlap the polygon box;
the determining, based on the correlation calculation result, detection data in the second obstacle detection data that is correlated with the first obstacle detection data includes:
when it is determined that no bounding box overlaps the polygon box, taking the polygon box as the non-associated detection data;
when it is determined that a bounding box overlaps the polygon box, determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the number of the bounding boxes that overlap the polygon box.
16. The apparatus according to claim 15, wherein, with each bounding box that overlaps the polygon box taken as a target bounding box, the processor determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the number of the bounding boxes that overlap the polygon box comprises:
when it is determined that at least two target bounding boxes exist, taking the point cloud points in the polygon box that are located inside a target bounding box as the associated detection data, and taking the point cloud points not inside any target bounding box as the non-associated detection data;
when it is determined that one target bounding box exists, determining the point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the overlap rate of the target bounding box and the polygon box.
17. The apparatus of claim 16, wherein the processor determining point cloud points in the polygon box as the associated detection data and/or the non-associated detection data based on the overlap rate of the target bounding box and the polygon box comprises:
when the overlap rate of the target bounding box and the polygon box is greater than or equal to a first threshold, taking all point cloud points in the polygon box as the associated detection data;
and when the overlap rate of the target bounding box and the polygon box is smaller than the first threshold, taking the point cloud points in the polygon box that are located inside the target bounding box as the associated detection data, and taking the point cloud points not inside the target bounding box as the non-associated detection data.
18. The apparatus of claim 15, wherein, for each of the polygon boxes, the processor determining whether one or more of the bounding boxes overlap the polygon box comprises:
calculating the overlap rate in height between the polygon box and each bounding box;
when the overlap rate is smaller than a second threshold, determining that the polygon box and the bounding box do not overlap;
when the overlap rate is greater than or equal to the second threshold, determining that the polygon box and the bounding box overlap.
19. The apparatus of claim 15, wherein the processor updates the first obstacle detection data based on the associated detection data, comprising:
for each bounding box, when detection data associated with the bounding box exists, generating a convex hull from all point cloud points in the associated detection data, taking the boundary of the convex hull as the polygon box of the bounding box, and updating the bounding box based on the boundary points of its polygon box and the four corner points of the bounding box.
20. The apparatus of claim 19, wherein, for each of the bounding boxes, when no detection data associated with the bounding box exists, the processor generates a convex hull from the four corner points of the bounding box and takes the boundary of the convex hull as the polygon box of the bounding box.
21. A system for autonomous driving, the system comprising a positioning subsystem, a perception subsystem, a decision subsystem, and a control subsystem, wherein:
the positioning subsystem is used for acquiring the pose information of the automatic driving vehicle in real time and transmitting the pose information to the decision-making subsystem;
the perception subsystem is used for detecting lanes and obstacles and transmitting detection results to the decision-making subsystem;
the decision subsystem is used for making a decision on the automatic driving vehicle by combining the data information transmitted by the positioning subsystem and the sensing subsystem and transmitting decision information to the control subsystem;
the control subsystem is used for controlling the automatic driving vehicle based on decision information transmitted by the decision subsystem;
wherein the perception subsystem comprises the obstacle detection apparatus for autonomous driving of any one of claims 11-20 to obtain an obstacle detection result.
22. A vehicle characterized in that it comprises a system for autonomous driving according to claim 21.
23. A storage medium, characterized in that the storage medium has stored thereon a computer program run by a processor, which computer program, when executed by the processor, causes the processor to execute the obstacle detection method for automatic driving according to any one of claims 1-10.
CN202310028466.1A 2023-01-09 2023-01-09 Obstacle detection method and device for automatic driving and system for automatic driving Pending CN115953761A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310028466.1A CN115953761A (en) 2023-01-09 2023-01-09 Obstacle detection method and device for automatic driving and system for automatic driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310028466.1A CN115953761A (en) 2023-01-09 2023-01-09 Obstacle detection method and device for automatic driving and system for automatic driving

Publications (1)

Publication Number Publication Date
CN115953761A true CN115953761A (en) 2023-04-11

Family

ID=87290596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310028466.1A Pending CN115953761A (en) 2023-01-09 2023-01-09 Obstacle detection method and device for automatic driving and system for automatic driving

Country Status (1)

Country Link
CN (1) CN115953761A (en)

Similar Documents

Publication Publication Date Title
CN111507157B (en) Method and device for optimizing resource allocation during automatic driving based on reinforcement learning
JP6830139B2 (en) 3D data generation method, 3D data generation device, computer equipment and computer readable storage medium
CN112417967B (en) Obstacle detection method, obstacle detection device, computer device, and storage medium
US10228693B2 (en) Generating simulated sensor data for training and validation of detection models
CN111666921B (en) Vehicle control method, apparatus, computer device, and computer-readable storage medium
JP7239703B2 (en) Object classification using extraterritorial context
US10948907B2 (en) Self-driving mobile robots using human-robot interactions
CN108509820B (en) Obstacle segmentation method and device, computer equipment and readable medium
CN111026131B (en) Expansion region determining method and device, robot and storage medium
US20180211119A1 (en) Sign Recognition for Autonomous Vehicles
CN111507369B (en) Space learning method and device for automatic driving vehicle, and testing method and device
CN110286389A (en) A kind of grid management method for obstacle recognition
Dey et al. VESPA: A framework for optimizing heterogeneous sensor placement and orientation for autonomous vehicles
US20210018590A1 (en) Perception system error detection and re-verification
US20210134002A1 (en) Variational 3d object detection
JP2023500994A (en) Obstacle recognition method, device, autonomous mobile device and storage medium
CN111308500B (en) Obstacle sensing method and device based on single-line laser radar and computer terminal
US11719799B2 (en) Method for determining a collision free space
CN108169729A (en) The method of adjustment of the visual field of laser radar, medium, laser radar system
CN111507161B (en) Method and device for heterogeneous sensor fusion by utilizing merging network
US20220171975A1 (en) Method for Determining a Semantic Free Space
US11144747B2 (en) 3D data generating device, 3D data generating method, 3D data generating program, and computer-readable recording medium storing 3D data generating program
CN112654998B (en) Lane line detection method and device
CN115953761A (en) Obstacle detection method and device for automatic driving and system for automatic driving
CN116189150A (en) Monocular 3D target detection method, device, equipment and medium based on fusion output

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination