CN117671643A - Obstacle detection method and device - Google Patents

Obstacle detection method and device

Info

Publication number
CN117671643A
CN117671643A (application CN202311753726.8A)
Authority
CN
China
Prior art keywords
obstacle, foreground, under-segmented, candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311753726.8A
Other languages
Chinese (zh)
Inventor
蒋文兰
孔祥振
张晔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202311753726.8A
Publication of CN117671643A
Legal status: Pending

Abstract

The disclosure provides an obstacle detection method and device, relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, image processing, deep learning and the like, and can be applied to automatic driving scenes. One embodiment of the method comprises the following steps: detecting foreground obstacles in a previous frame of image captured by a camera of a vehicle; filtering out foreground obstacle pairs adjacent to each other from the foreground obstacles as candidate under-segmented obstacle pairs; predicting the position of the candidate under-segmented obstacle in the current frame image based on the speed of the candidate under-segmented obstacle in the candidate under-segmented obstacle pair; and detecting an under-segmented obstacle from the current frame image based on the position of the candidate under-segmented obstacle. This embodiment can efficiently and accurately detect various types of under-segmented obstacles through the continuity judgment of obstacles across front and rear frames.

Description

Obstacle detection method and device
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, image processing, deep learning and the like, and can be applied to automatic driving scenes.
Background
An autonomous vehicle, also called a driverless car, self-driving car, or wheeled mobile robot, is an intelligent vehicle that achieves unmanned driving through a computer system. An autonomous vehicle relies on the cooperation of artificial intelligence, visual computing, radar, monitoring devices, and a global positioning system, so that a computer can operate the motor vehicle automatically and safely without any active human operation.
In an autopilot scenario, an autonomous vehicle needs to detect obstacles in the surrounding environment. However, the perception model sometimes detects multiple obstacles as one and the same obstacle (under-segmentation), which causes unsafe driving of the vehicle and forces the safety operator to take over in interaction scenarios.
Disclosure of Invention
The embodiments of the disclosure provide an obstacle detection method, an obstacle detection device, an electronic device, a storage medium, and a program product.
In a first aspect, an embodiment of the present disclosure provides an obstacle detection method, including: detecting foreground obstacles in a previous frame of image captured by a camera of a vehicle; filtering out foreground obstacle pairs adjacent to each other from the foreground obstacles as candidate under-segmented obstacle pairs; predicting the position of the candidate under-segmented obstacle in the current frame image based on the speed of the candidate under-segmented obstacle in the candidate under-segmented obstacle pair; and detecting an under-segmented obstacle from the current frame image based on the position of the candidate under-segmented obstacle.
In a second aspect, an embodiment of the present disclosure provides an obstacle detection device, including: a first detection module configured to detect foreground obstacles in a previous frame of image captured by a camera of a vehicle; a filtering module configured to filter out foreground obstacle pairs adjacent to each other from the foreground obstacles as candidate under-segmented obstacle pairs; a prediction module configured to predict the position of the candidate under-segmented obstacle in the current frame image based on the speed of the candidate under-segmented obstacle in the candidate under-segmented obstacle pair; and a second detection module configured to detect an under-segmented obstacle from the current frame image based on the position of the candidate under-segmented obstacle.
In a third aspect, an embodiment of the present disclosure proposes an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a method as described in the first aspect.
In a fifth aspect, embodiments of the present disclosure propose a computer program product comprising a computer program which, when executed by a processor, implements a method as described in the first aspect.
The embodiment of the disclosure provides an obstacle detection method, which can efficiently and accurately detect various types of under-segmented obstacles through the continuity judgment of obstacles across front and rear frames.
It should be understood that the content described in this section is not intended to identify key or critical features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings. The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow chart of one embodiment of an obstacle detection method according to the present disclosure;
FIG. 3 is a flow chart of yet another embodiment of an obstacle detection method according to the present disclosure;
FIG. 4 is a flow chart of another embodiment of an obstacle detection method according to the present disclosure;
FIG. 5 is a flow diagram of an obstacle detection method in which embodiments of the present disclosure may be implemented;
FIG. 6 is a schematic structural view of one embodiment of an obstacle detection device according to the present disclosure;
fig. 7 is a block diagram of an electronic device for implementing an obstacle detection method of an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 in which the present application may be applied.
As shown in fig. 1, a system architecture 100 may include a camera 101, a network 102, and a server 103. The network 102 is a medium used to provide a communication link between the camera 101 and the server 103. Network 102 may include various connection types such as wired, wireless communication links, or fiber optic cables, among others.
The camera 101 may be mounted on an autonomous vehicle, interact with a server 103 via a network 102 to receive or send messages, etc. The camera 101 may acquire a sequence of image frames around the autonomous vehicle and send to the server 103 for processing.
The server 103 may be an on-board server or a cloud server of the autonomous vehicle. The server 103 may perform processing such as analysis on the received image frame sequence and generate processing results (e.g., under-segmented obstacles) for subsequent control of the autonomous vehicle.
The server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 103 is software, it may be implemented as a plurality of software programs or software modules (for example, to provide distributed services), or may be implemented as a single software program or software module. This is not specifically limited herein.
It should be noted that, the obstacle detection method or the perception model training method provided in the embodiments of the present application is generally executed by the server 103, and accordingly, the obstacle detection device or the perception model training device is generally disposed in the server 103.
It should be understood that the number of cameras, networks and servers in fig. 1 is merely illustrative. There may be any number of cameras, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of an obstacle detection method according to the present disclosure is shown. The obstacle detection method comprises the following steps:
step 201, detecting a foreground obstacle in a previous frame of image shot by a camera of a vehicle.
In the present embodiment, the execution subject of the obstacle detection method (e.g., the server 103 shown in fig. 1) may acquire the image frame sequence captured by the camera of the vehicle. For the previous frame image of the current frame image, the execution subject may detect a foreground obstacle in the previous frame image.
The foreground obstacle may be an obstacle in the foreground of the previous frame image. A foreground obstacle is typically close to the vehicle; if the vehicle continues to travel along its planned trajectory, it may collide with the foreground obstacle.
Typically, all obstacles in the previous frame image are detected and converted into the vehicle body coordinate system. An obstacle falling within the observation range is a foreground obstacle; an obstacle falling outside the observation range is a background obstacle and needs to be filtered out. The observation range may be determined by the distance to the vehicle and the distance to the planned trajectory of the vehicle. For example, an obstacle whose distance from the vehicle is less than a distance threshold (e.g., 50 meters) and whose perpendicular distance from the planned trajectory of the vehicle is less than an observation threshold may fall within the observation range.
Step 202, filtering out foreground obstacle pairs adjacent to each other from the foreground obstacles as candidate under-segmented obstacle pairs.
In this embodiment, the execution body may filter out, from the foreground obstacles, foreground obstacle pairs adjacent to each other as candidate under-segmented obstacle pairs.
The candidate under-segmented obstacle pair, also called a suspected under-segmented obstacle pair, may comprise two foreground obstacles adjacent to each other. Two foreground obstacles adjacent to each other may undergo the under-segmentation phenomenon and be detected as the same obstacle. In general, the under-segmentation phenomenon occurs between two obstacles that are close to each other. To improve the accuracy of identifying under-segmentation, it is further required that the bounding boxes of the two obstacles intersect. The bounding box may be the smallest cuboid that encloses an obstacle and may be oriented in any direction relative to the coordinate axes.
In some embodiments, a bounding box is determined for each foreground obstacle. If the bounding boxes of two foreground obstacles intersect, the two foreground obstacles are taken as a candidate under-segmented obstacle pair. Selecting pairs whose bounding boxes intersect as candidate under-segmented obstacle pairs improves the accuracy of under-segmentation detection.
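As an illustration only, this candidate-pair filter can be sketched in Python as follows, assuming each foreground obstacle carries the four ground-plane corners of its bounding box in a "corners" field; the shapely library is an implementation choice of this sketch, not something mandated by the method.

from itertools import combinations
from shapely.geometry import Polygon

def candidate_under_segmented_pairs(foreground):
    # Return pairs of foreground obstacles whose oriented bounding boxes,
    # projected onto the ground plane, intersect.
    pairs = []
    for a, b in combinations(foreground, 2):
        if Polygon(a["corners"]).intersects(Polygon(b["corners"])):
            pairs.append((a, b))
    return pairs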
Step 203, predicting the position of the candidate under-segmented obstacle in the current frame image based on the speed of the candidate under-segmented obstacle in the candidate under-segmented obstacle pair.
In this embodiment, the execution body may predict the position of the candidate under-segmented obstacle in the current frame image based on the speed of the candidate under-segmented obstacle in the candidate under-segmented obstacle pair.
Typically, the product of the velocity of the candidate under-segmented obstacle and the time interval between the two frames of images is calculated to obtain the displacement of the candidate under-segmented obstacle. Adding this displacement to the position of the candidate under-segmented obstacle in the previous frame image yields its position in the current frame image.
Step 204, detecting the under-segmented obstacle from the current frame image based on the position of the candidate under-segmented obstacle.
In the present embodiment, the execution subject may detect the under-segmented obstacle from the current frame image based on the position of the candidate under-segmented obstacle.
In general, the under-segmented obstacle is in the vicinity of the position of the candidate under-segmented obstacle, has the same category as the candidate under-segmented obstacle, and has a length and width greater than those of the candidate under-segmented obstacle.
In some embodiments, for the two candidate under-segmented obstacles in the candidate under-segmented obstacle pair, an obstacle of the same category as one of the candidate under-segmented obstacles and with a length and width greater than those of that candidate under-segmented obstacle is searched for within the two preset ranges centered on the positions of the candidate under-segmented obstacles. If there is only one obstacle satisfying the condition, that obstacle is regarded as the under-segmented obstacle. The preset range is centered on the position of the candidate under-segmented obstacle in the current frame image, with the sum of the width of the candidate under-segmented obstacle and a preset value as the radius.
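A minimal sketch of this neighborhood search, under assumed field names ("pos", "predicted_pos", "category", "length", "width") and the 1-meter preset value used as an example elsewhere in this description:

import math

def find_under_segmented(candidate, current_obstacles, preset=1.0):
    # Preset range: centered on the predicted position, with radius equal to
    # the candidate's width plus the preset value.
    radius = candidate["width"] + preset
    matches = [obs for obs in current_obstacles
               if math.dist(obs["pos"], candidate["predicted_pos"]) < radius
               and obs["category"] == candidate["category"]
               and obs["length"] > candidate["length"]
               and obs["width"] > candidate["width"]]
    # Accept only a unique match as the under-segmented obstacle.
    return matches[0] if len(matches) == 1 else None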
The embodiment of the disclosure provides an obstacle detection method, which can efficiently and accurately detect various types of under-segmented obstacles through the continuity judgment of obstacles across front and rear frames.
With further reference to fig. 3, a flow 300 of yet another embodiment of an obstacle detection method according to the present disclosure is shown. The obstacle detection method comprises the following steps:
step 301, detecting an obstacle in a previous frame of image.
In the present embodiment, the execution subject of the obstacle detection method (e.g., the server 103 shown in fig. 1) may acquire the image frame sequence captured by the camera of the vehicle. For the previous frame image of the current frame image, the execution subject may detect all the obstacles in the previous frame image.
In general, the previous frame image is input into the perception model, and all obstacles in the previous frame image can be detected.
Step 302, selecting, from the obstacles, an obstacle whose distance from the vehicle is smaller than a distance threshold and whose perpendicular distance from the planned trajectory of the vehicle is smaller than an observation threshold, as a foreground obstacle.
In this embodiment, the execution body may select, from among the obstacles, an obstacle whose distance from the vehicle is smaller than the distance threshold and whose perpendicular distance from the planned trajectory of the vehicle is smaller than the observation threshold, as the foreground obstacle.
The foreground obstacle may be an obstacle in the foreground of the previous frame image. A foreground obstacle is typically close to the vehicle; if the vehicle continues to travel along its planned trajectory, it may collide with the foreground obstacle.
Typically, all obstacles in the previous frame image are detected and converted into the vehicle body coordinate system. An obstacle falling within the observation range is a foreground obstacle; an obstacle falling outside the observation range is a background obstacle and needs to be filtered out. The observation range may be determined by the distance to the vehicle and the distance to the planned trajectory of the vehicle. For example, an obstacle whose distance from the vehicle is less than a distance threshold (e.g., 50 meters) and whose perpendicular distance from the planned trajectory of the vehicle is less than the observation threshold may fall within the observation range. Where the position of the obstacle is pos_obs and the planned trajectory of the vehicle is S = {pos_1, pos_2, ..., pos_n}, the perpendicular distance between the obstacle and the planned trajectory is min{dist(pos_obs, pos_i) | i = 1, ..., n}. The observation threshold may be equal to the sum of the vehicle width, the obstacle width, and a preset value:
L_observation = l_vehicle + l_obstacle + l_threshold
where L_observation is the observation threshold, l_vehicle is the width of the vehicle, l_obstacle is the width of the obstacle, and l_threshold is the preset value, which may be an empirical value, such as 1 meter.
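A sketch of this foreground filter, assuming each obstacle is a dict with "pos" (x, y) and "width" fields in the vehicle body frame, with the ego vehicle at the origin; the threshold values are the example values given above:

import math

def perpendicular_distance(pos, trajectory):
    # min{dist(pos_obs, pos_i)} over the sampled waypoints of the trajectory.
    return min(math.dist(pos, p) for p in trajectory)

def select_foreground(obstacles, trajectory, vehicle_width,
                      dist_threshold=50.0, preset=1.0):
    foreground = []
    for obs in obstacles:
        # Observation threshold: vehicle width + obstacle width + preset value.
        observation_threshold = vehicle_width + obs["width"] + preset
        if (math.dist(obs["pos"], (0.0, 0.0)) < dist_threshold
                and perpendicular_distance(obs["pos"], trajectory) < observation_threshold):
            foreground.append(obs)
    return foreground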
Step 303, calculating the distance between every two foreground obstacles, and filtering out foreground obstacle pairs whose distance is smaller than the sum of the widths of the two foreground obstacles and a preset value, as foreground obstacle pairs that are close to each other.
In this embodiment, the execution body may calculate the distance between every two foreground obstacles and filter out foreground obstacle pairs whose distance is smaller than the sum of the widths of the two foreground obstacles and a preset value, as foreground obstacle pairs that are close to each other.
In general, the under-segmentation phenomenon occurs between two obstacles that are close to each other.
The distance between every two foreground obstacles is calculated as follows:
d_ij = sqrt((x_i - x_j)^2 + (y_i - y_j)^2)
where (x_i, y_i) are the coordinates of the i-th foreground obstacle and (x_j, y_j) are the coordinates of the j-th foreground obstacle. If d_ij < l_i + l_j + l_threshold, the two foreground obstacles are close to each other, where l_i is the width of the i-th foreground obstacle, l_j is the width of the j-th foreground obstacle, and l_threshold is the preset value, which may be an empirical value, such as 1 meter.
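Under the same assumed data layout as above, the proximity filter of this step can be sketched as:

import math
from itertools import combinations

def close_pairs(foreground, preset=1.0):
    # Keep pairs with center distance below the sum of both widths plus the
    # preset value, i.e. d_ij < l_i + l_j + l_threshold.
    pairs = []
    for a, b in combinations(foreground, 2):
        if math.dist(a["pos"], b["pos"]) < a["width"] + b["width"] + preset:
            pairs.append((a, b))
    return pairs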
Step 304, filtering out the foreground obstacle pairs intersected by the bounding box from the foreground obstacle pairs that are close to each other.
In this embodiment, the execution body may filter out the foreground obstacle pairs intersected by the bounding box from the foreground obstacle pairs that are close to each other.
In general, the under-segmentation phenomenon occurs between two obstacles that are close to each other. To improve the accuracy of identifying under-segmentation, it is further required that the bounding boxes of the two obstacles intersect. The bounding box may be the smallest cuboid that encloses an obstacle and may be oriented in any direction relative to the coordinate axes.
Step 305, predicting the position of the candidate under-segmented obstacle in the current frame image based on the speed of the candidate under-segmented obstacle in the candidate under-segmented obstacle pair.
In this embodiment, the execution body may predict the position of the candidate under-segmented obstacle in the current frame image based on the speed of the candidate under-segmented obstacle in the candidate under-segmented obstacle pair.
Typically, the product of the velocity of the candidate under-segmented obstacle and the time interval between the two frames of images is calculated to obtain the displacement of the candidate under-segmented obstacle. Adding this displacement to the position of the candidate under-segmented obstacle in the previous frame image yields its position in the current frame image.
The position of the candidate under-segmented obstacle in the current frame image can be calculated as follows:
x' = x + v_x * Δt
y' = y + v_y * Δt
where (x', y') is the position of the candidate under-segmented obstacle in the current frame image, (x, y) is its position in the previous frame image, v_x and v_y are the speeds of the candidate under-segmented obstacle in the X-axis and Y-axis directions, and Δt is the time interval between the two frames of images.
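This constant-velocity prediction amounts to a few lines; dt here is the inter-frame interval that the text folds into the velocity-time product:

def predict_position(obstacle, dt):
    # (x', y') = (x + v_x * dt, y + v_y * dt)
    x, y = obstacle["pos"]
    vx, vy = obstacle["velocity"]
    return (x + vx * dt, y + vy * dt)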
Step 306, searching, within a preset range centered on the position of the candidate under-segmented obstacle, for an obstacle that has the same category as the candidate under-segmented obstacle pair and a length and width greater than those of the candidate under-segmented obstacle pair, as the under-segmented obstacle.
In this embodiment, the execution body may search, within a preset range centered on the position of the candidate under-segmented obstacle, for an obstacle that has the same category as the candidate under-segmented obstacle pair and a length and width greater than those of the candidate under-segmented obstacle pair, as the under-segmented obstacle.
In general, the under-segmented obstacle is in the vicinity of the position of the candidate under-segmented obstacle, has the same category as the candidate under-segmented obstacle, and has a length and width greater than those of the candidate under-segmented obstacle.
For the two candidate under-segmented obstacles in the candidate under-segmented obstacle pair, an obstacle of the same category as one of the candidate under-segmented obstacles and with a length and width greater than those of that candidate under-segmented obstacle is searched for within the two preset ranges centered on the positions of the candidate under-segmented obstacles. If there is only one obstacle satisfying the condition, that obstacle is regarded as the under-segmented obstacle. The preset range is centered on the position (x', y') of the candidate under-segmented obstacle in the current frame image, with the sum of the width of the candidate under-segmented obstacle and a preset value as the radius r.
The radius r can be calculated as follows:
r = l_obstacle + l_threshold
where l_obstacle is the width of the candidate under-segmented obstacle and l_threshold is the preset value, which may be an empirical value, such as 1 meter.
In general, if the maximum of the ratio of the width of the under-segmented obstacle to the maximum width of the candidate under-segmented obstacle pair and the ratio of the length of the under-segmented obstacle to the maximum length of the candidate under-segmented obstacle pair is greater than a variation amplitude threshold, the length and width of the obstacle are significantly greater than those of the corresponding candidate under-segmented obstacles.
Specifically, if the length of the obstacle in the current frame image is l_cur and its width is w_cur, and the maximum length and maximum width of the candidate under-segmented obstacle pair in the previous frame image are l_max and w_max, then the width change ratio is c_w = w_cur / w_max and the length change ratio is c_l = l_cur / l_max. If max(c_w, c_l) > c_threshold, the candidate under-segmented obstacle pair is under-segmented in the current frame.
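A sketch of this variation amplitude test; the default threshold below is an assumed example, since the description only treats c_threshold as an empirical value:

def exceeds_change_threshold(l_cur, w_cur, l_max, w_max, c_threshold=1.5):
    # Width and length change ratios against the larger of the two candidate
    # obstacles from the previous frame.
    c_w = w_cur / w_max
    c_l = l_cur / l_max
    return max(c_w, c_l) > c_threshold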
The embodiment of the disclosure provides an obstacle detection method that combines bounding-box intersection with the continuity judgment of obstacles across front and rear frames, further improving the efficiency and accuracy of under-segmented obstacle detection.
With further reference to fig. 4, a flow 400 of another embodiment of an obstacle detection method according to the present disclosure is shown. The obstacle detection method comprises the following steps:
step 401, detecting a foreground obstacle in a previous frame of image shot by a camera of a vehicle.
Step 402, filtering out foreground obstacle pairs adjacent to each other from the foreground obstacles as candidate under-segmented obstacle pairs.
Step 403, predicting the position of the candidate under-segmented obstacle in the current frame image based on the speed of the candidate under-segmented obstacle in the candidate under-segmented obstacle pair.
Step 404, detecting the under-segmented obstacle from the current frame image based on the position of the candidate under-segmented obstacle.
In this embodiment, the specific operations of steps 401 to 404 are described in detail in steps 201 to 204 in the embodiment shown in fig. 2, and are not described herein.
Step 405, annotating information of the under-segmented obstacle in the current frame image.
In the present embodiment, the execution subject of the obstacle detection method (e.g., the server 103 shown in fig. 1) may annotate information of the under-segmented obstacle in the current frame image. The annotated information may include, but is not limited to: the category, size, and position of the under-segmented obstacle, and the like.
Step 406, performing model training with the current frame image as input and the information as output to obtain a perception model.
In this embodiment, the execution body may input the current frame image into the model and obtain a prediction result. A loss is calculated based on the prediction result and the annotated information, and the model parameters are adjusted based on the loss until the loss is sufficiently small and the model converges, yielding the perception model. The perception model may be used to detect obstacles in images, including under-segmented obstacles.
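A minimal fine-tuning sketch for this step, assuming a PyTorch detection model and a loader of (image, annotation) batches mined as described above; detection_loss is a hypothetical placeholder for whatever loss the perception model actually uses:

import torch

def fine_tune(model, loader, detection_loss, epochs=1, lr=1e-4):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            loss = detection_loss(model(images), targets)
            optimizer.zero_grad()
            loss.backward()   # compute gradients of the loss
            optimizer.step()  # adjust parameters; repeat until the loss converges
    return model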
The embodiment of the disclosure provides an obstacle detection method, which can efficiently and accurately detect various types of under-segmented obstacles through the continuity judgment of obstacles across front and rear frames. Under-segmented obstacles are mined from massive data, which promotes the iterative optimization of the perception model and improves the driving experience.
For ease of understanding, fig. 5 shows a block flow diagram of an obstacle detection method in which embodiments of the disclosure may be implemented.
As shown in fig. 5, when the vehicle is in the automatic driving mode, vehicle-end information can be acquired by using the deployed perception model, including but not limited to real-time position information, real-time perception reports, real-time trajectory prediction, real-time vehicle conditions, and the like. Subsequently, the following steps are performed:
Step 501, recording perception report messages based on the vehicle-end information.
Step 502, determining whether there are two mutually close obstacles with intersecting bounding boxes in the previous frame. If yes, go to step 504; if not, go to step 503.
Step 503, ignore.
Step 504, determining whether there is exactly one obstacle around the predicted position in the current frame. If yes, go to step 505; if not, go to step 503.
Step 505, calculating the length and width changes of the obstacle relative to the two obstacles of the previous frame.
Step 506, determining whether the changes in length and width are greater than a threshold. If yes, go to step 507; if not, go to step 503.
Step 507, analyzing the cause of the under-segmentation.
Step 508, annotating the data.
Step 509, model training.
Step 510, model deployment.
The deployed perception model may then be used to acquire vehicle-end information, returning to step 501, thereby driving the continuous iteration of the perception model.
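Putting the steps of fig. 5 together, one hypothetical orchestration of the mining loop might look like the following; detect_obstacles and boxes_intersect are assumed glue helpers standing in for the deployed perception model and the bounding-box intersection test, and the remaining functions are the sketches given earlier:

def mine_under_segmentation(prev_frame, cur_frame, trajectory,
                            vehicle_width, dt):
    prev_obstacles = detect_obstacles(prev_frame)  # hypothetical perception call
    cur_obstacles = detect_obstacles(cur_frame)
    foreground = select_foreground(prev_obstacles, trajectory, vehicle_width)
    mined = []
    for a, b in close_pairs(foreground):           # step 502: close pairs ...
        if not boxes_intersect(a, b):              # ... with intersecting boxes
            continue
        l_max = max(a["length"], b["length"])
        w_max = max(a["width"], b["width"])
        for cand in (a, b):                        # steps 504-506
            cand["predicted_pos"] = predict_position(cand, dt)
            hit = find_under_segmented(cand, cur_obstacles)
            if hit is not None and exceeds_change_threshold(
                    hit["length"], hit["width"], l_max, w_max):
                mined.append(hit)                  # to be annotated (step 508)
    return mined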
With further reference to fig. 6, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of an obstacle detection device, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic apparatuses.
As shown in fig. 6, the obstacle detecting apparatus 600 of the present embodiment may include: a first detection module 601, a filtering module 602, a prediction module 603, and a second detection module 604. The first detection module 601 is configured to detect foreground obstacles in a previous frame of image captured by a camera of the vehicle; the filtering module 602 is configured to filter out foreground obstacle pairs adjacent to each other from the foreground obstacles as candidate under-segmented obstacle pairs; the prediction module 603 is configured to predict the position of the candidate under-segmented obstacle in the current frame image based on the speed of the candidate under-segmented obstacle in the candidate under-segmented obstacle pair; and the second detection module 604 is configured to detect an under-segmented obstacle from the current frame image based on the position of the candidate under-segmented obstacle.
In the present embodiment, in the obstacle detecting apparatus 600: the specific processing of the first detection module 601, the filtering module 602, the prediction module 603, and the second detection module 604 and the technical effects thereof may refer to the description of steps 201 to 204 in the corresponding embodiment of fig. 2, and are not repeated herein.
In some optional implementations of this embodiment, the first detection module 601 is further configured to: detect obstacles in the previous frame image; and select, from the obstacles, an obstacle whose distance from the vehicle is smaller than a distance threshold and whose perpendicular distance from the planned trajectory of the vehicle is smaller than an observation threshold, as the foreground obstacle.
In some alternative implementations of the present embodiment, the observed threshold value is equal to a sum of the vehicle width, the obstacle width, and the preset value.
In some alternative implementations of the present embodiment, the filtering module 602 includes: and a filtering sub-module configured to filter out foreground obstacle pairs intersected by the bounding box from the foreground obstacles.
In some optional implementations of the present embodiment, the filtering submodule is further configured to: calculate the distance between every two foreground obstacles, and filter out foreground obstacle pairs whose distance is smaller than the sum of the widths of the two foreground obstacles and a preset value, as foreground obstacle pairs that are close to each other; and filter out foreground obstacle pairs intersected by the bounding box from the foreground obstacle pairs that are close to each other.
In some alternative implementations of the present embodiment, the second detection module 604 is further configured to: search, within a preset range centered on the position of the candidate under-segmented obstacle, for an obstacle that has the same category as the candidate under-segmented obstacle pair and a length and width greater than those of the candidate under-segmented obstacle pair, as the under-segmented obstacle.
In some alternative implementations of this embodiment, the radius of the predetermined range is equal to the sum of the width of the candidate under-segmented obstacle and the predetermined value.
In some alternative implementations of the present embodiment, the maximum of the ratio of the width of the under-segmented obstacle to the maximum width of the candidate under-segmented obstacle pair and the ratio of the length of the under-segmented obstacle to the maximum length of the candidate under-segmented obstacle pair is greater than the variation amplitude threshold.
In some optional implementations of the present embodiment, the obstacle detecting apparatus 600 further includes: an annotating module configured to annotate information of the under-segmented obstacle in the current frame image, wherein the information comprises at least one of the category, the size, and the position of the under-segmented obstacle; and a training module configured to perform model training with the current frame image as input and the information as output to obtain a perception model.
In the technical scheme of the disclosure, the acquisition, storage, application and the like of the related user personal information all conform to the regulations of related laws and regulations, and the public sequence is not violated.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The calculation unit 701 performs the respective methods and processes described above, such as an obstacle detection method. For example, in some embodiments, the obstacle detection method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the obstacle detection method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the obstacle detection method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions provided by the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (21)

1. An obstacle detection method comprising:
detecting a foreground obstacle in a previous frame of image shot by a camera of the vehicle;
filtering out foreground obstacle pairs adjacent to each other from the foreground obstacles as candidate under-segmented obstacle pairs;
predicting the position of the candidate under-segmented obstacle in the current frame image based on the speed of the candidate under-segmented obstacle in the candidate under-segmented obstacle pair;
an under-segmented obstacle is detected from the current frame image based on the position of the candidate under-segmented obstacle.
2. The method of claim 1, wherein the detecting the foreground obstacle in the last frame of image taken by the camera of the vehicle comprises:
detecting an obstacle in the previous frame of image;
and selecting, from the obstacles, an obstacle whose distance from the vehicle is smaller than a distance threshold and whose perpendicular distance from the planned trajectory of the vehicle is smaller than an observation threshold, as the foreground obstacle.
3. The method of claim 2, wherein the observed threshold value is equal to a sum of a vehicle width, an obstacle width, and a preset value.
4. The method of claim 1, wherein the filtering out foreground obstacle pairs adjacent to each other from the foreground obstacles as candidate under-segmented obstacle pairs comprises:
filtering out foreground obstacle pairs intersected by the bounding box from the foreground obstacles.
5. The method of claim 4, wherein the filtering out foreground obstacle pairs intersected by the bounding box from the foreground obstacles comprises:
calculating the distance between every two foreground obstacles, and filtering out foreground obstacle pairs whose distance is smaller than the sum of the widths of the two foreground obstacles and a preset value, as foreground obstacle pairs that are close to each other; and
filtering out foreground obstacle pairs intersected by the bounding box from the foreground obstacle pairs that are close to each other.
6. The method of claim 1, wherein the detecting an under-segmented obstacle from the current frame image based on the location of the candidate under-segmented obstacle comprises:
searching, within a preset range centered on the position of the candidate under-segmented obstacle, for an obstacle that has the same category as the candidate under-segmented obstacle pair and a length and width greater than those of the candidate under-segmented obstacle pair, as the under-segmented obstacle.
7. The method of claim 6, wherein a radius of the predetermined range is equal to a sum of a width of the candidate under-segmented obstacle and a predetermined value.
8. The method of claim 6, wherein a maximum of a ratio of a width of the under-segmented obstacle to a maximum width of the candidate under-segmented obstacle pair and a ratio of a length of the under-segmented obstacle to a maximum length of the candidate under-segmented obstacle pair is greater than a variation amplitude threshold.
9. The method of any of claims 1-8, wherein the method further comprises:
annotating information of the under-segmented obstacle in the current frame image, wherein the information comprises at least one of a category, a size, and a position of the under-segmented obstacle; and
and taking the current frame image as input, taking the information as output, and performing model training to obtain a perception model.
10. An obstacle detection device comprising:
a first detection module configured to detect a foreground obstacle in a previous frame of image captured by a camera of the vehicle;
a filtering module configured to filter out foreground obstacle pairs adjacent to each other from the foreground obstacles as candidate under-segmented obstacle pairs;
a prediction module configured to predict a position of the candidate under-segmented obstacle in the current frame image based on a speed of the candidate under-segmented obstacle in the candidate under-segmented obstacle pair;
a second detection module configured to detect an under-segmented obstacle from the current frame image based on a position of the candidate under-segmented obstacle.
11. The apparatus of claim 10, wherein the first detection module is further configured to:
detecting an obstacle in the previous frame of image;
and select, from the obstacles, an obstacle whose distance from the vehicle is smaller than a distance threshold and whose perpendicular distance from the planned trajectory of the vehicle is smaller than an observation threshold, as the foreground obstacle.
12. The apparatus of claim 11, wherein the observation threshold is equal to a sum of a vehicle width, an obstacle width, and a preset value.
13. The apparatus of claim 10, wherein the filtering module comprises:
a filtering sub-module configured to filter out foreground obstacle pairs intersected by the bounding box from the foreground obstacles.
14. The apparatus of claim 13, wherein the filtering sub-module is further configured to:
calculating the distance between every two foreground obstacles, and filtering out foreground obstacle pairs whose distance is smaller than the sum of the widths of the two foreground obstacles and a preset value, as foreground obstacle pairs that are close to each other; and
filtering out foreground obstacle pairs intersected by the bounding box from the foreground obstacle pairs that are close to each other.
15. The apparatus of claim 10, wherein the second detection module is further configured to:
search, within a preset range centered on the position of the candidate under-segmented obstacle, for an obstacle that has the same category as the candidate under-segmented obstacle pair and a length and width greater than those of the candidate under-segmented obstacle pair, as the under-segmented obstacle.
16. The apparatus of claim 15, wherein a radius of the predetermined range is equal to a sum of a width of the candidate under-segmented obstacle and a predetermined value.
17. The apparatus of claim 15, wherein a maximum of a ratio of a width of the under-segmented obstacle to a maximum width of the candidate under-segmented obstacle pair and a ratio of a length of the under-segmented obstacle to a maximum length of the candidate under-segmented obstacle pair is greater than a variation amplitude threshold.
18. The apparatus of any one of claims 10-17, wherein the apparatus further comprises:
an annotating module configured to annotate information of the under-segmented obstacle in the current frame image, wherein the information comprises at least one of a category, a size, and a position of the under-segmented obstacle; and
and the training module is configured to take the current frame image as input, take the information as output, and perform model training to obtain a perception model.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-9.
21. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-9.
Publications (1)

Publication Number CN117671643A, published 2024-03-08, status pending
Application Number CN202311753726.8A, filed 2023-12-19 by Beijing Baidu Netcom Science and Technology Co Ltd


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination