CN117876992A - Obstacle detection method, device, equipment and automatic driving vehicle - Google Patents

Obstacle detection method, device, equipment and automatic driving vehicle

Info

Publication number
CN117876992A
Authority
CN
China
Prior art keywords
obstacle
background
target
foreground
barrier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311754160.0A
Other languages
Chinese (zh)
Inventor
李昂 (Li Ang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202311754160.0A priority Critical patent/CN117876992A/en
Publication of CN117876992A publication Critical patent/CN117876992A/en
Pending legal-status Critical Current

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The present disclosure provides an obstacle detection method, apparatus, and device, and an autonomous driving vehicle, and relates to the field of artificial intelligence, in particular to the technical fields of autonomous driving, object detection, and the like. The obstacle detection method includes: performing foreground obstacle detection on the current frame of point cloud data to obtain a foreground detection result; if the foreground detection result does not contain the target foreground obstacle, performing background obstacle detection on the current frame of point cloud data to obtain a target background obstacle, where the target foreground obstacle was obtained by performing foreground obstacle detection on the previous frame of point cloud data; and fusing the target foreground obstacle and the target background obstacle to obtain the target obstacle. The present disclosure can reduce missed obstacle detections.

Description

Obstacle detection method, device, equipment and automatic driving vehicle
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to the technical fields of autonomous driving, object detection, and the like, and specifically to an obstacle detection method, apparatus, and device, and an autonomous driving vehicle.
Background
In an autonomous driving system, a missed obstacle detection can cause serious safety problems; it is therefore particularly important to accurately identify and track any obstacle on the road that may interact with the ego vehicle.
Current perception systems detect conventional obstacles such as vehicles, pedestrians, and non-motor vehicles well, but their ability to detect irregular (special-shaped) obstacles still needs improvement.
Disclosure of Invention
The present disclosure provides a method, apparatus, and device for obstacle detection and an autonomous vehicle.
According to an aspect of the present disclosure, there is provided an obstacle detection method, including: performing foreground obstacle detection on the current frame of point cloud data to obtain a foreground detection result; if the foreground detection result does not contain the target foreground obstacle, performing background obstacle detection on the current frame of point cloud data to obtain a target background obstacle, where the target foreground obstacle is obtained by performing foreground obstacle detection on the previous frame of point cloud data; and fusing the target foreground obstacle and the target background obstacle to obtain the target obstacle.
According to another aspect of the present disclosure, there is provided an obstacle detection apparatus, including: a foreground detection module configured to perform foreground obstacle detection on the current frame of point cloud data to obtain a foreground detection result; a background detection module configured to, if the foreground detection result does not contain the target foreground obstacle, perform background obstacle detection on the current frame of point cloud data to obtain a target background obstacle, where the target foreground obstacle is obtained by performing foreground obstacle detection on the previous frame of point cloud data; and a fusion module configured to fuse the target foreground obstacle and the target background obstacle to obtain the target obstacle.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the above aspects.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method according to any one of the above aspects.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to any of the above aspects.
According to another aspect of the present disclosure, there is provided an autonomous vehicle comprising the electronic device of any one of the above aspects.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are provided for a better understanding of the present solution and are not to be construed as limiting the present disclosure. In the drawings:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an application scenario for implementing an embodiment of the present disclosure;
FIG. 3 is a schematic illustration of special-shaped obstacles provided in accordance with an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a detection flow combining foreground and background detection, provided in accordance with an embodiment of the present disclosure;
FIG. 5 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 7 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 8 is a schematic diagram of an electronic device for implementing the obstacle detection method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Special-shaped obstacles are non-conventional obstacles: they generally have no fixed size or volume and no definite class, and appear on roads relatively infrequently. They mainly include: a cart carrying trees, a truck carrying steel pipes, a non-motor vehicle trailing balloons, a pedestrian pulling a trailer, and the like. Because the size, volume, and class of these obstacles are uncertain, current perception systems detect them poorly, and missed detections occur frequently.
In order to improve the detection capability of the abnormal obstacle and reduce the problem of missed detection, the present disclosure provides the following embodiments.
Fig. 1 is a schematic diagram of a first embodiment of the present disclosure, and the present embodiment provides an obstacle detection method, which includes:
101. Perform foreground obstacle detection on the current frame of point cloud data to obtain a foreground detection result.
102. If the foreground detection result does not contain the target foreground obstacle, perform background obstacle detection on the current frame of point cloud data to obtain a target background obstacle, where the target foreground obstacle is obtained by performing foreground obstacle detection on the previous frame of point cloud data.
103. Fuse the target foreground obstacle and the target background obstacle to obtain the target obstacle.
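The control flow of steps 101-103 can be sketched in a few lines; the detector and fusion callables below are illustrative stand-ins, not the patent's actual implementation:

```python
def detect_obstacle(curr_frame, target_fg, detect_foreground, detect_background, fuse):
    """One cycle of steps 101-103 for the current frame.

    target_fg is the target foreground obstacle from the previous frame;
    the three callables stand in for the real detectors and fusion rule.
    """
    fg_result = detect_foreground(curr_frame)    # step 101
    if target_fg in fg_result:
        return target_fg                         # still tracked; no fallback needed
    target_bg = detect_background(curr_frame)    # step 102: background fallback
    return fuse(target_fg, target_bg)            # step 103: fusion
```

For example, with stub detectors that report vehicle B in the foreground and a tree in the background, a previously tracked vehicle A would trigger the background fallback and be fused with the tree.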
Here, a foreground obstacle generally refers to an obstacle that can be modeled and represented by a bounding box, such as a vehicle, a pedestrian, or a non-motor vehicle.
A background obstacle generally refers to an obstacle without clear contour information that cannot be modeled and represented by a bounding box, such as a guardrail, a bush, or a water-filled barrier.
Specifically, foreground obstacles can be detected with a pre-trained object detection model: a frame of point cloud data is input into the model, and the model outputs the detection result for the foreground obstacles contained in that frame, which can be represented by bounding boxes. The object detection model is, for example, a PointPillars model, which outputs three-dimensional (3D) bounding boxes of objects. Such a model works well for obstacles with clear contours, i.e., obstacles that can be represented by bounding boxes.
Background obstacle detection can follow the idea of semantic segmentation: each point in a frame of point cloud data is checked, point by point, for whether it belongs to an obstacle, which works better for irregular obstacles (obstacles without a clear contour).
Assume the current frame is frame t and the previous frame is frame (t-1). The foreground obstacle obtained by performing foreground obstacle detection on the previous frame of point cloud data, i.e., the (t-1)-th frame, is called the target foreground obstacle. For example, if the previous frame is processed by the object detection model and the 3D bounding box of vehicle A is output, the target foreground obstacle is vehicle A.
The current frame of point cloud data is then input into the object detection model to obtain the foreground detection result for the current frame.
If the foreground detection result does not include the target foreground obstacle — for example, the foreground detection result contains vehicle B while the target foreground obstacle is vehicle A — then the target foreground obstacle (vehicle A) may have been missed.
In order to improve the detection rate of the obstacle and avoid or reduce the problem of missed detection, when the foreground detection result does not contain the target foreground obstacle, background obstacle detection is also carried out on the current frame point cloud data.
The background obstacle obtained based on the current frame point cloud data may be referred to as a target background obstacle.
Unlike foreground detection based on the object detection model, background detection can judge, point by point, the probability that each point in the cloud belongs to an obstacle, keeping points whose probability exceeds a threshold as points of the background obstacle; in this way, both conventional and non-conventional obstacles can be identified. For example, in a scene where the special-shaped obstacle is a cart carrying a tree, point-by-point analysis can identify both the points of the cart and the points of the tree it carries, so the target background obstacle includes the cart and the tree. Because points are identified individually, this approach recognizes irregular obstacles — such as the tree carried by the cart — more accurately.
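A minimal sketch of this point-wise idea, assuming each point already carries an obstacle probability (e.g. from a segmentation head) and using an arbitrary 0.5 threshold:

```python
def background_obstacle_points(points, probs, threshold=0.5):
    """Keep the points whose obstacle probability exceeds the threshold."""
    return [p for p, prob in zip(points, probs) if prob > threshold]

# Illustrative data: a cart point, a ground point, and a carried-tree point.
points = [(0.0, 0.0, 0.0), (1.0, 0.2, 0.1), (5.0, 5.0, 2.0)]
probs = [0.9, 0.3, 0.8]
obstacle_pts = background_obstacle_points(points, probs)
```

Because the decision is per point, the selected set can have any shape; a downstream step would cluster these points and extract a boundary.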
After the target foreground obstacle and the target background obstacle are obtained, they can be fused to obtain the target obstacle. For example, if the target foreground obstacle is vehicle A and the target background obstacle includes the tree, the fusion processing can combine vehicle A and the tree into the special-shaped obstacle "cart carrying a tree".
In this embodiment, the target obstacle is obtained by fusing the target foreground obstacle and the target background obstacle, so that the advantages of foreground obstacle detection and background obstacle detection can be combined, the obstacle detection capability is improved, and the problem of missed detection is avoided or reduced.
For better understanding of the embodiments of the present disclosure, application scenarios of the embodiments of the present disclosure are described. The present embodiment takes an autopilot scenario as an example.
Fig. 2 is a schematic diagram of an application scenario for implementing an embodiment of the present disclosure. As shown in fig. 2, an autonomous vehicle 201 may communicate with a server 202 during travel. The server may be a vehicle enterprise local server or cloud server, and may be a single server or a server cluster. The autonomous vehicle may communicate with the server via a mobile communication network and/or a satellite communication network.
While the autonomous vehicle 201 is driving, it can collect multiple frames of point cloud data with sensors such as a lidar or a depth camera and send the collected point cloud data to the server, which performs obstacle detection on the point cloud data to obtain the target obstacle. Although the above takes server-side obstacle detection as an example, it is understood that the autonomous vehicle may also perform obstacle detection on-board if it has the relevant capability.
In a specific implementation, foreground obstacle detection and background obstacle detection can be combined to obtain the target obstacle and improve detection capability, especially for special-shaped obstacles.
Special-shaped obstacles are non-conventional obstacles: they generally have no fixed size or volume and no definite class, and appear on roads relatively infrequently. They mainly include: a cart carrying trees, a truck carrying steel pipes, a non-motor vehicle trailing balloons, a pedestrian pulling a trailer, and the like.
As shown in fig. 3, schematic representations of three special-shaped obstacles are given: a vehicle trailing balloons, a cart carrying a tree, and a pedestrian pulling a trailer.
Because the size, volume, and category of special-shaped obstacles are uncertain, current perception systems detect them poorly, and missed detections occur frequently.
In order to reduce missed obstacle detections, in this embodiment, as shown in fig. 4, obstacle detection is performed mainly on two adjacent frames of point cloud data, referred to respectively as the current frame and the previous frame of point cloud data and represented by the t-th and (t-1)-th frames.
As shown in fig. 4, the (t-1)-th frame of point cloud data is input into a pre-trained object detection model. Taking a 3D object detection model as an example, the model can accurately identify the 3D bounding boxes of the foreground obstacles contained in the frame; for a special-shaped obstacle such as a cart carrying a tree, the 3D bounding box of the cart can be identified. The foreground obstacle identified by the object detection model in the (t-1)-th frame is called the target foreground obstacle.
The t-th frame of point cloud data is then input into the object detection model to obtain the foreground detection result for the t-th frame. If this result includes the target foreground obstacle, the foreground obstacle obtained from the t-th frame may be taken as the new target foreground obstacle, and the corresponding operations are performed on the foreground detection result of the next frame, so that the foreground obstacle is tracked continuously.
If the foreground detection result does not include the target foreground obstacle, as shown in fig. 4, background obstacle detection is performed on the t-th frame of point cloud data; the background obstacle finally obtained may be called the target background obstacle. For a special-shaped obstacle such as a cart carrying a tree, background detection can identify a relatively precise boundary of the tree.
The target foreground obstacle and the target background obstacle are then fused to obtain the target obstacle. Specifically, a fusion rule may be configured in advance and the fusion performed according to it. For example, the union of the region of the 3D bounding box corresponding to the target foreground obstacle and the region enclosed by the target background obstacle's boundary may be taken as the region of the target obstacle.
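The union-region rule mentioned above can be illustrated with axis-aligned 2D boxes (a simplifying assumption; the patent works with 3D bounding boxes and background boundaries):

```python
def union_box(box_a, box_b):
    """Smallest axis-aligned box covering both boxes.

    Each box is ((min_x, min_y), (max_x, max_y)).
    """
    (ax0, ay0), (ax1, ay1) = box_a
    (bx0, by0), (bx1, by1) = box_b
    return (min(ax0, bx0), min(ay0, by0)), (max(ax1, bx1), max(ay1, by1))

# Foreground box (the cart) united with background region (the tree):
target_region = union_box(((0, 0), (2, 2)), ((1, 1), (4, 3)))
```

The resulting region covers both the cart and the tree, so the combined special-shaped obstacle is not truncated to either part alone.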
In combination with the application scenario, the disclosure further provides an obstacle detection method.
Fig. 5 is a schematic diagram of a second embodiment of the present disclosure, where the present embodiment provides an obstacle detection method, the method including:
501. Perform foreground obstacle detection on the previous frame of point cloud data to obtain the target foreground obstacle.
In connection with fig. 4, the previous frame of point cloud data (the (t-1)-th frame) may be input into an object detection model (such as a PointPillars model); the foreground obstacle detected by the model in the (t-1)-th frame is referred to as the target foreground obstacle.
502. Perform foreground obstacle detection on the current frame of point cloud data to obtain a foreground detection result.
In conjunction with fig. 4, the current frame of point cloud data (the t-th frame) may be input into the object detection model (e.g., a PointPillars model); the model's output is the foreground detection result corresponding to the t-th frame.
503. If the foreground detection result does not contain the target foreground obstacle, perform background obstacle detection on the current frame of point cloud data to obtain the target background obstacle.
Specifically, multiple background obstacle detection algorithms may be applied to the current frame of point cloud data to obtain multiple background obstacle detection results; if at least one of these results includes a background obstacle, the background obstacles contained in those results are taken as the target background obstacle.
For example, suppose the multiple algorithms are a first and a second background obstacle detection algorithm. If the first algorithm detects a first background obstacle and the second detects nothing, the first background obstacle is taken as the target background obstacle; if the first algorithm detects nothing and the second detects a second background obstacle, the second background obstacle is taken as the target background obstacle; and if the first algorithm detects a first background obstacle and the second detects a second background obstacle, both are taken as target background obstacles.
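The three cases above reduce to one rule: keep whatever either detector found. A minimal sketch, modeling each detector's output as a possibly empty list:

```python
def combine_background_results(first_result, second_result):
    """Merge moving-state and static-state detections into one result list.

    An empty list means that detector found no background obstacle.
    """
    return list(first_result) + list(second_result)
```

When both lists are empty the combined result is empty too, which corresponds to the fallback described next, where the target background obstacle is treated as empty.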
If none of the background obstacle detection algorithms detects a background obstacle, the target background obstacle may be treated as empty, and the target foreground obstacle obtained from the previous frame of point cloud data may be taken as the target obstacle for the current frame.
In this embodiment, a plurality of background obstacle detection algorithms are adopted to detect the background obstacle, so that the detection of the background obstacle can be performed in a plurality of modes, and the detection rate of the background obstacle is improved.
Further, the multiple background obstacle detection algorithms include: a first background obstacle detection algorithm and a second background obstacle detection algorithm. The first background obstacle detection algorithm is used to detect background obstacles in a moving state; the second background obstacle detection algorithm is used to detect background obstacles in a stationary state.
Specifically, the first background obstacle detection algorithm and the second background obstacle detection algorithm may be algorithms for detecting a moving object and a stationary object, respectively, in the related art.
For example, the first background obstacle detection algorithm may compare the current and previous frames of point cloud data to identify points that are moving, cluster those moving points as points of a background obstacle, and take the boundary of the cluster as the boundary of the first background obstacle, i.e., the background obstacle detected by the first algorithm.
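A toy version of this moving-point test, assuming ego-motion-compensated clouds: a current-frame point counts as moving if no previous-frame point lies within a small radius (the radius value and the brute-force neighbour search are illustrative choices; real systems use spatial indexing):

```python
import math

def moving_points(curr, prev, radius=0.2):
    """Return current-frame points with no previous-frame neighbour within radius."""
    def has_neighbour(p):
        return any(math.dist(p, q) <= radius for q in prev)
    return [p for p in curr if not has_neighbour(p)]

prev = [(0.0, 0.0), (1.0, 1.0)]          # previous-frame points
curr = [(0.05, 0.0), (3.0, 3.0)]         # first point static, second moved in
movers = moving_points(curr, prev)
```

The moving points returned here would then be clustered and their boundary extracted, as described above.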
For another example, the second background obstacle detection algorithm may perform semantic segmentation on the current frame of point cloud data, cluster points with the same semantics as points of a background obstacle, and take the boundary of the cluster as the boundary of the second background obstacle, i.e., the background obstacle detected by the second algorithm.
In this embodiment, since the first background obstacle detection algorithm is used to detect a background obstacle in a moving state, and the second background obstacle detection algorithm is used to detect a background obstacle in a stationary state, detection of a moving obstacle and a stationary obstacle can be covered, and the comprehensiveness of the detection range is improved, so as to detect various possible background obstacles as much as possible.
504. Fuse the target foreground obstacle and the target background obstacle to obtain the target obstacle.
Specifically, spatio-temporal alignment processing can be performed on the target foreground obstacle and the target background obstacle to obtain the aligned target foreground obstacle and target background obstacle; association processing is then performed on the aligned obstacles to obtain a target foreground obstacle and target background obstacle that are associated with each other; and fusion processing of at least one kind of state information is performed on the associated target foreground obstacle and target background obstacle to obtain the target obstacle.
Because the target foreground obstacle is obtained from the previous frame of point cloud data while the target background obstacle is obtained from the current frame, spatio-temporal alignment is needed to improve accuracy: the target foreground obstacle and the target background obstacle are mapped into the same spatio-temporal coordinate system, for example by means of motion compensation.
After the spatio-temporally aligned target foreground obstacle and target background obstacle are obtained, they need to be associated so that detections of the same obstacle are matched. Specifically, association may be based on the intersection-over-union (IoU) of a first bounding box corresponding to the target foreground obstacle and a second bounding box corresponding to the target background obstacle: if the IoU of the two boxes exceeds a preset threshold, the corresponding target foreground obstacle and target background obstacle are the same obstacle and are associated with each other. Fusion processing is then performed on the associated target foreground obstacle and target background obstacle.
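The IoU-based association can be sketched with 2D axis-aligned boxes (a simplification of the patent's bounding boxes; the 0.5 threshold is an illustrative choice):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes ((x0, y0), (x1, y1))."""
    (ax0, ay0), (ax1, ay1) = box_a
    (bx0, by0), (bx1, by1) = box_b
    ix = max(0.0, min(ax1, bx1) - max(ax0, bx0))   # overlap width
    iy = max(0.0, min(ay1, by1) - max(ay0, by0))   # overlap height
    inter = ix * iy
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

def associated(box_a, box_b, threshold=0.5):
    """Two detections refer to the same obstacle when IoU exceeds the threshold."""
    return iou(box_a, box_b) > threshold
```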
In this embodiment, the detection accuracy of the target obstacle may be improved by performing the fusion process after the space-time alignment process and the association process.
Further, the fusion processing of the at least one kind of state information includes at least one of the following: fusion of position information, fusion of size information, and fusion of speed information.
Here, obtaining the target obstacle may specifically mean obtaining at least one kind of state information of the target obstacle, for example one or more of position, size, and speed information; accordingly, the fusion processing may cover the corresponding state information.
Specific fusion rules may be preset, and the rules for different state information may be the same or different. For example, for speed: first speed information and its first confidence may be obtained for the target foreground obstacle from the object detection model, and second speed information and its second confidence for the target background obstacle from the first background obstacle detection algorithm (the moving-obstacle algorithm); if the first confidence is greater than the second, the final speed of the target obstacle is the first speed information. For position: the first position information of the target foreground obstacle and the second position information of the target background obstacle may be weighted and summed to obtain the final position of the target obstacle. For size: the union of the region corresponding to the first size information of the target foreground obstacle and the region corresponding to the second size information of the target background obstacle may be taken, and the size corresponding to that union region used as the size of the target obstacle.
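The three example rules above can be sketched as follows; the equal 0.5/0.5 position weights and the 2D axis-aligned boxes are assumptions for illustration:

```python
def fuse_speed(v_fg, conf_fg, v_bg, conf_bg):
    """Keep the speed estimate with the higher confidence."""
    return v_fg if conf_fg > conf_bg else v_bg

def fuse_position(p_fg, p_bg, w_fg=0.5):
    """Weighted sum of the two position estimates."""
    return tuple(w_fg * a + (1 - w_fg) * b for a, b in zip(p_fg, p_bg))

def fuse_size(box_fg, box_bg):
    """Union region of the two size regions, as an axis-aligned box."""
    (ax0, ay0), (ax1, ay1) = box_fg
    (bx0, by0), (bx1, by1) = box_bg
    return (min(ax0, bx0), min(ay0, by0)), (max(ax1, bx1), max(ay1, by1))
```

Keeping the rules independent per state matches the flexibility claim below: any subset of position, size, and speed fusion can be applied.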
In this embodiment, by performing one or more of the fusion processing of the position information, the fusion processing of the size information, and the fusion processing of the speed information, flexibility may be improved, and more precise various state information of the target obstacle may be obtained.
Fig. 6 is a schematic diagram of a third embodiment of the present disclosure, which provides an obstacle detecting apparatus 600, including: a foreground detection module 601, a background detection module 602, and a fusion module 603.
The foreground detection module 601 is configured to detect a foreground obstacle of the current frame point cloud data to obtain a foreground detection result; the background detection module 602 is configured to detect a background obstacle for the current frame point cloud data if the foreground detection result does not include the target foreground obstacle, so as to obtain a target background obstacle; the target foreground obstacle is obtained after the foreground obstacle detection is carried out on the point cloud data of the previous frame; the fusion module 603 is configured to perform fusion processing on the target foreground obstacle and the target background obstacle to obtain a target obstacle.
In this embodiment, the target obstacle is obtained by fusing the target foreground obstacle and the target background obstacle, so that the advantages of foreground obstacle detection and background obstacle detection can be combined, the obstacle detection capability is improved, and the problem of missed detection is avoided or reduced.
In some embodiments, the background detection module 602 is further configured to:
performing background obstacle detection on the current frame point cloud data by adopting a plurality of background obstacle detection algorithms to obtain a plurality of background obstacle detection results;
and if the at least one background obstacle detection result comprises a background obstacle, taking the background obstacle contained in the at least one background obstacle detection result as the target background obstacle.
In this embodiment, a plurality of background obstacle detection algorithms are adopted to detect the background obstacle, so that the detection of the background obstacle can be performed in a plurality of modes, and the detection rate of the background obstacle is improved.
In some embodiments, the plurality of background obstacle detection algorithms comprises:
a first background obstacle detection algorithm and a second background obstacle detection algorithm;
the first background obstacle detection algorithm is used for detecting a background obstacle in a moving state;
the second background obstacle detection algorithm is used for detecting a background obstacle in a static state.
In this embodiment, since the first background obstacle detection algorithm detects background obstacles in a moving state and the second detects background obstacles in a stationary state, both moving and stationary obstacles are covered. This improves the comprehensiveness of the detection range, so that the various possible background obstacles are detected as far as possible.
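One way the two algorithms could be dispatched is by an estimated cluster speed; the threshold value, the cluster representation, and the detector interface below are illustrative assumptions:

```python
def detect_by_motion_state(clusters, detect_moving, detect_static, speed_eps=0.3):
    """Split point cloud clusters by an estimated speed threshold
    (speed_eps, in m/s, is an assumed value) and hand each group to the
    background obstacle detection algorithm matching its motion state."""
    moving = [c for c in clusters if c["speed"] > speed_eps]
    static = [c for c in clusters if c["speed"] <= speed_eps]
    return detect_moving(moving) + detect_static(static)
```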
In some embodiments, the fusion module 603 is further configured to:
performing space-time alignment processing on the target foreground obstacle and the target background obstacle to obtain a space-time aligned target foreground obstacle and target background obstacle;
performing association processing on the space-time aligned target foreground obstacle and target background obstacle to obtain a target foreground obstacle and a target background obstacle that are associated with each other;
and performing fusion processing of at least one kind of state information on the associated target foreground obstacle and target background obstacle to obtain the target obstacle.
In this embodiment, the detection accuracy of the target obstacle may be improved by performing the fusion process after the space-time alignment process and the association process.
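The three steps can be sketched as follows, under assumptions not stated in the disclosure: constant-velocity motion for the alignment, a 2 m center-distance gate for the association, and equal-weight averaging for the fusion:

```python
import math

def align(obstacle, dt):
    """Space-time alignment: propagate the obstacle center to the other
    detection's timestamp under a constant-velocity assumption."""
    (x, y), (vx, vy) = obstacle["pos"], obstacle["vel"]
    aligned = dict(obstacle)
    aligned["pos"] = (x + vx * dt, y + vy * dt)
    return aligned

def associated(fg, bg, gate=2.0):
    """Association by center distance; the 2 m gate is an assumed value."""
    return math.dist(fg["pos"], bg["pos"]) <= gate

def fuse_positions(fg, bg):
    """Equal-weight fusion of the aligned centers."""
    fused = dict(fg)
    fused["pos"] = tuple((a + b) / 2 for a, b in zip(fg["pos"], bg["pos"]))
    return fused
```

Aligning before gating matters: without the constant-velocity propagation, a fast foreground target would fall outside the gate of its own background detection and the association would fail.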
In some embodiments, the fusion processing of at least one kind of state information includes at least one of:
fusion processing of position information, fusion processing of size information and fusion processing of speed information.
In this embodiment, by performing one or more of the fusion processing of position information, size information, and speed information, flexibility can be improved, and more accurate state information of various kinds can be obtained for the target obstacle.
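A sketch of selectable per-field fusion; the field names, the weighted-average rule, and the weight value are illustrative assumptions:

```python
def fuse_state(fg, bg, fields=("position", "size", "speed"), w_fg=0.5):
    """Fuse only the selected kinds of state information by a weighted
    average (w_fg weights the foreground value; the weight and field
    names are assumptions). Unselected fields keep the foreground
    obstacle's value."""
    fused = dict(fg)
    for name in fields:
        fused[name] = tuple(w_fg * a + (1.0 - w_fg) * b
                            for a, b in zip(fg[name], bg[name]))
    return fused
```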
It is to be understood that in the embodiments of the disclosure, the same or similar content in different embodiments may be referred to each other.
It can be understood that "first", "second", etc. in the embodiments of the present disclosure are only used for distinguishing, and do not indicate the importance level, the time sequence, etc.
In the technical solutions of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the user's personal information comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
In accordance with an embodiment of the present disclosure, as shown in fig. 7, the present disclosure also provides an autonomous vehicle 700 comprising an electronic device 701. The description of the electronic device 701 may be found in the subsequent embodiments. Specifically, the electronic device 701 may perform the obstacle detection operations described above.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 8 illustrates a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic device 800 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, servers, blade servers, mainframes, and other appropriate computers. Electronic device 800 may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the electronic device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in electronic device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the respective methods and processes described above, such as an obstacle detection method. For example, in some embodiments, the obstacle detection method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 800 via the ROM 802 and/or the communication unit 809. When a computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the obstacle detection method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the obstacle detection method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that overcomes the defects of high management difficulty and weak service scalability in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system or a server combined with a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (14)

1. An obstacle detection method comprising:
performing foreground obstacle detection on current frame point cloud data to obtain a foreground detection result;
if the foreground detection result does not contain a target foreground obstacle, performing background obstacle detection on the current frame point cloud data to obtain a target background obstacle; wherein the target foreground obstacle is obtained by performing foreground obstacle detection on a previous frame of point cloud data;
and performing fusion processing on the target foreground obstacle and the target background obstacle to obtain a target obstacle.
2. The method of claim 1, wherein the performing background obstacle detection on the current frame point cloud data to obtain a target background obstacle comprises:
performing background obstacle detection on the current frame point cloud data by adopting a plurality of background obstacle detection algorithms to obtain a plurality of background obstacle detection results;
and if at least one of the background obstacle detection results comprises a background obstacle, taking the background obstacle contained in the at least one background obstacle detection result as the target background obstacle.
3. The method of claim 2, wherein the plurality of background obstacle detection algorithms comprises:
a first background obstacle detection algorithm and a second background obstacle detection algorithm;
the first background obstacle detection algorithm is used for detecting a background obstacle in a moving state;
the second background obstacle detection algorithm is used for detecting a background obstacle in a static state.
4. A method according to any one of claims 1-3, wherein said fusing of said target foreground obstacle and said target background obstacle to obtain a target obstacle comprises:
performing space-time alignment processing on the target foreground obstacle and the target background obstacle to obtain a space-time aligned target foreground obstacle and target background obstacle;
performing association processing on the space-time aligned target foreground obstacle and target background obstacle to obtain a target foreground obstacle and a target background obstacle that are associated with each other;
and performing fusion processing of at least one kind of state information on the associated target foreground obstacle and target background obstacle to obtain the target obstacle.
5. The method of claim 4, wherein the fusion processing of at least one kind of state information comprises at least one of:
fusion processing of position information, fusion processing of size information and fusion processing of speed information.
6. An obstacle detection device comprising:
the foreground detection module is configured to perform foreground obstacle detection on current frame point cloud data to obtain a foreground detection result;
the background detection module is configured to perform background obstacle detection on the current frame point cloud data to obtain a target background obstacle if the foreground detection result does not contain a target foreground obstacle, wherein the target foreground obstacle is obtained by performing foreground obstacle detection on a previous frame of point cloud data;
and the fusion module is configured to perform fusion processing on the target foreground obstacle and the target background obstacle to obtain a target obstacle.
7. The apparatus of claim 6, wherein the background detection module is further configured to:
performing background obstacle detection on the current frame point cloud data by adopting a plurality of background obstacle detection algorithms to obtain a plurality of background obstacle detection results;
and if at least one of the background obstacle detection results comprises a background obstacle, taking the background obstacle contained in the at least one background obstacle detection result as the target background obstacle.
8. The apparatus of claim 7, wherein the plurality of background obstacle detection algorithms comprises:
a first background obstacle detection algorithm and a second background obstacle detection algorithm;
the first background obstacle detection algorithm is used for detecting a background obstacle in a moving state;
the second background obstacle detection algorithm is used for detecting a background obstacle in a static state.
9. The apparatus of any of claims 6-8, wherein the fusion module is further configured to:
performing space-time alignment processing on the target foreground obstacle and the target background obstacle to obtain a space-time aligned target foreground obstacle and target background obstacle;
performing association processing on the space-time aligned target foreground obstacle and target background obstacle to obtain a target foreground obstacle and a target background obstacle that are associated with each other;
and performing fusion processing of at least one kind of state information on the associated target foreground obstacle and target background obstacle to obtain the target obstacle.
10. The apparatus of claim 9, wherein the fusion processing of at least one kind of state information comprises at least one of:
fusion processing of position information, fusion processing of size information and fusion processing of speed information.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-5.
13. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-5.
14. An autonomous vehicle comprising: the electronic device of claim 11.
CN202311754160.0A 2023-12-19 2023-12-19 Obstacle detection method, device, equipment and automatic driving vehicle Pending CN117876992A (en)

Publications (1)

Publication Number Publication Date
CN117876992A 2024-04-12

Family

ID=90592669




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination