CN114332818A - Obstacle detection method and device and electronic equipment - Google Patents


Info

Publication number: CN114332818A
Authority: CN (China)
Prior art keywords: obstacle, data, vehicle, matching, determining
Legal status: Granted
Application number: CN202111633794.1A
Other languages: Chinese (zh)
Other versions: CN114332818B
Inventors: 杨健, 张甲甲
Current and original assignee: Apollo Zhilian Beijing Technology Co Ltd
Application filed by Apollo Zhilian Beijing Technology Co Ltd
Priority application CN202111633794.1A, published as CN114332818A and granted as CN114332818B
Legal status: Active

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The disclosure provides an obstacle detection method and apparatus and an electronic device, relating to artificial-intelligence fields such as intelligent traffic, environment perception, and automatic driving. The specific implementation scheme is as follows: when determining whether an obstacle sensed by a roadside device is a false detection, first obstacle data in a detected road sensed by the roadside device, second obstacle data in the detected road sensed by a vehicle, and vehicle data are acquired, all collected at the same moment; a first obstacle corresponding to the first obstacle data is matched with a second obstacle corresponding to the second obstacle data; and the detection result of the first obstacle is then determined according to the matching result and the vehicle data. Whether an obstacle sensed by the roadside device is a false detection is thus detected automatically, which effectively improves obstacle detection efficiency.

Description

Obstacle detection method and device and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for detecting an obstacle, and an electronic device.
Background
Obstacle data sensed by roadside devices has important applications in many fields. Taking assisted driving as an example, a roadside device generally sends the obstacle data it senses to an autonomous vehicle, so that the vehicle can combine that data to realize assisted driving.
The accuracy of the obstacle data perceived by the roadside device is therefore crucial. If obstacles identified from that data include false detections, the accuracy of the roadside-perceived obstacle data is affected. How to detect whether an obstacle identified from roadside-sensed obstacle data is a false detection is thus an urgent problem to be solved by those skilled in the art.
Disclosure of Invention
The disclosure provides an obstacle detection method and apparatus and an electronic device, which can automatically detect whether an obstacle sensed at the roadside is a false detection, thereby effectively improving obstacle detection efficiency.
According to a first aspect of the present disclosure, there is provided a method of detecting an obstacle, which may include:
acquiring first obstacle data in a detected road sensed by a roadside device, second obstacle data in the detected road sensed by a vehicle, and vehicle data, wherein the first obstacle data, the second obstacle data, and the vehicle data are collected at the same moment;
matching a first obstacle corresponding to the first obstacle data with a second obstacle corresponding to the second obstacle data to obtain a matching result; and
determining a detection result of the first obstacle according to the matching result and the vehicle data, wherein the detection result is either false detection or non-false detection.
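The three steps of the first aspect can be sketched as a small pipeline. This is an illustrative simplification only: the predicate `match_fn` and the handling of unmatched obstacles stand in for the matching and decision logic detailed in the embodiments, which also consult the vehicle data before declaring a false detection.

```python
def detect_false_positives(first_obstacles, second_obstacles, match_fn):
    """Label each roadside-perceived (first) obstacle 'non-false' if some
    vehicle-perceived (second) obstacle matches it, else 'false'.
    Simplified: the patent additionally consults the vehicle data before
    declaring an unmatched first obstacle a false detection."""
    results = []
    for first in first_obstacles:
        matched = any(match_fn(first, second) for second in second_obstacles)
        results.append("non-false" if matched else "false")
    return results

# Toy usage: positions as (x, y) tuples, matched when within 2 m (L1 distance).
first = [(0.0, 0.0), (50.0, 50.0)]
second = [(0.5, 0.2)]
close = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1]) < 2.0
labels = detect_false_positives(first, second, close)
```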
According to a second aspect of the present disclosure, there is provided an obstacle detection apparatus, which may include:
an acquisition unit, configured to acquire first obstacle data in a detected road sensed by a roadside device, second obstacle data in the detected road sensed by a vehicle, and vehicle data, wherein the first obstacle data, the second obstacle data, and the vehicle data are collected at the same moment;
a matching unit, configured to match a first obstacle corresponding to the first obstacle data with a second obstacle corresponding to the second obstacle data to obtain a matching result; and
a processing unit, configured to determine a detection result of the first obstacle according to the matching result and the vehicle data, wherein the detection result is either false detection or non-false detection.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of obstacle detection of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the obstacle detection method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the storage medium, and executing it causes the electronic device to perform the obstacle detection method of the first aspect.
According to the technical scheme of the present disclosure, whether an obstacle sensed by the roadside device is a false detection can be detected automatically, effectively improving obstacle detection efficiency.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of a roadside perceived obstacle and a vehicle perceived obstacle provided by embodiments of the present disclosure;
fig. 2 is a schematic flow chart of a method for detecting an obstacle according to a first embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an interaction between a roadside apparatus and a vehicle provided by an embodiment of the present disclosure;
FIG. 4 is a schematic path diagram of obstacle data in a vehicle-perceived full-path coverage scenario provided by an embodiment of the present disclosure;
FIG. 5 is a schematic path diagram of obstacle data in another vehicle-perceived full-path coverage scenario provided by an embodiment of the present disclosure;
FIG. 6 is a schematic path diagram of obstacle data in yet another vehicle-perceived full-path coverage scenario provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a roadside perceived obstacle and a vehicle perceived obstacle provided by embodiments of the present disclosure;
FIG. 8 is a schematic illustration of another roadside perceived obstacle and a vehicle perceived obstacle provided by embodiments of the present disclosure;
fig. 9 is a flowchart illustrating a method for matching a first obstacle corresponding to first obstacle data with a second obstacle corresponding to second obstacle data according to a second embodiment of the present disclosure;
FIG. 10 is a schematic illustration of a vehicle being matched to a first obstacle according to an embodiment of the present disclosure;
fig. 11 is a schematic diagram of intersection division provided by the embodiment of the present disclosure;
fig. 12 is a schematic diagram of matching a third obstacle with a second obstacle according to an embodiment of the present disclosure;
FIG. 13 is a schematic flow chart of a method for determining the detection result of a first obstacle based on the matching result and vehicle data according to a third embodiment of the present disclosure;
FIG. 14 is a schematic view of a positional relationship of an obstacle to a vehicle provided by an embodiment of the present disclosure;
fig. 15 is a schematic structural view of an obstacle detection device provided in accordance with a third embodiment of the present disclosure;
fig. 16 is a schematic block diagram of an electronic device provided by an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In embodiments of the present disclosure, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural. In the text of the present disclosure, the character "/" generally indicates that the objects before and after it are in an "or" relationship. In addition, in the embodiments of the present disclosure, "first", "second", "third", "fourth", "fifth", and "sixth" are used only to distinguish different objects and carry no other special meaning.
The technical scheme provided by the embodiments of the present disclosure can be applied to technical fields such as intelligent transportation, environment perception, and automatic driving. Taking assisted driving as an example, it is vital to assisted driving that the roadside device senses obstacle data accurately.
Whether the roadside device senses obstacle data accurately is closely related to its perception capability. In the following description, "perception by roadside equipment" may be abbreviated as "roadside perception". When evaluating roadside perception capability, one important influencing factor is whether the obstacles identified from roadside-sensed obstacle data include false detections. If false detections exist, the accuracy of the roadside-sensed obstacle data is affected, and so is the roadside perception capability. In the following description, an "obstacle identified from roadside-perceived obstacle data" may be abbreviated as a "roadside-perceived obstacle". How to detect whether a roadside-perceived obstacle is a false detection is therefore an urgent problem to be solved by those skilled in the art.
At present, to detect whether a roadside-perceived obstacle is a false detection, the roadside device first acquires a scene image of the road and performs obstacle identification on it to extract the obstacle data in the image; the obstacles corresponding to that data are the roadside-perceived obstacles. In addition, staff manually label the acquired scene image to mark the obstacles in the road, and the manually labeled obstacles serve as the ground truth. The roadside-perceived obstacles are then compared with the manually labeled obstacles, and whether a roadside-perceived obstacle is a false detection is decided from the comparison. However, manual labeling is slow, so this approach keeps obstacle detection efficiency low.
A false-detection obstacle can be understood as an obstacle that is perceived by the roadside but does not exist among the manually labeled obstacles.
To automatically detect whether an obstacle sensed by a roadside device is a false detection, the present disclosure exploits the relatively high perception capability a vehicle has within its own perception area: the obstacles perceived by the vehicle in that area are taken as the ground truth, the roadside-perceived obstacles are matched against the vehicle-perceived obstacles, and whether a roadside-perceived obstacle is a false detection is determined according to the matching result and the vehicle data. Whether an obstacle sensed by the roadside device is a false detection is thus detected automatically, which effectively improves obstacle detection efficiency.
For example, fig. 1 is a schematic diagram of roadside-perceived and vehicle-perceived obstacles provided by an embodiment of the present disclosure. The left diagram in fig. 1 shows the obstacle data perceived by the vehicle from the vehicle-end viewing angle; the corresponding obstacles are obstacle 1 and obstacle 2 (note that data perceived from the vehicle-end viewing angle does not include the vehicle itself). The right diagram in fig. 1 shows the obstacle data perceived by the roadside device from the roadside viewing angle; the corresponding obstacles are obstacle 1, obstacle 2, obstacle 3, obstacle 4, and obstacle 5. Among these 5 roadside-perceived obstacles, obstacle 1 and obstacle 2 are accurately perceived, obstacle 5 is a false detection, and obstacle 3 and obstacle 4 are non-false detections.
Based on this technical concept, embodiments of the present disclosure provide an obstacle detection method, described in detail below through specific embodiments. It is to be understood that the following embodiments may be combined with one another, and the same or similar concepts or processes may not be repeated in some of them.
Example one
Fig. 2 is a flowchart of an obstacle detection method according to a first embodiment of the present disclosure. The method may be performed by a software and/or hardware device, for example a vehicle-mounted terminal of a vehicle or a server. Taking the vehicle-mounted terminal as an example, referring to fig. 2, the method may include:
s201, acquiring first obstacle data sensed by road side equipment in a detected road, second obstacle data sensed by a vehicle in the detected road and vehicle data; the first obstacle data, the second obstacle data and the vehicle data are acquired at the same time.
For example, the obstacle data describes an obstacle and may include data such as the obstacle's type, position, and size; it may be set according to actual needs and may also include other information such as the obstacle's speed.
For example, the vehicle data may include data such as the vehicle's type, position, and size; it may likewise be set according to actual needs and may also include other information such as the vehicle's speed.
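The attribute lists above can be modeled as plain records; the field names below are illustrative choices for this sketch, not names taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class ObstacleData:
    obstacle_type: str   # e.g. "car", "pedestrian"
    x: float             # position in a shared world frame (metres)
    y: float
    length: float        # bounding-box size (metres)
    width: float
    speed: float = 0.0   # optional extra attribute

@dataclass
class VehicleData:
    vehicle_type: str
    x: float
    y: float
    length: float
    width: float
    speed: float = 0.0

obs = ObstacleData("car", 12.0, -3.5, 4.6, 1.8)
```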
For example, the first obstacle data may be obtained by receiving it from another electronic device (for example, first obstacle data collected by the roadside device), by looking it up in local storage, or by other means; the manner may be set according to actual needs, and the embodiments of the present disclosure do not specifically limit it.
Likewise, the second obstacle data may be obtained by receiving it from another electronic device, by looking it up in local storage, or by other means; the manner may be set according to actual needs, and the embodiments of the present disclosure do not specifically limit it.
Taking the reception of first obstacle data sent by a roadside device as an example, fig. 3 is an interaction diagram between the roadside device and the vehicle provided by an embodiment of the present disclosure. The roadside device senses obstacle information in the road through its roadside sensing system; the obstacle information includes the first obstacle data and the perception time. The sensed obstacle information is encoded, and the Road Side Unit (RSU) then sends the encoded obstacle information to the vehicle by wireless transmission. It can be understood that the roadside device senses obstacle information in real time and sends it to the vehicle in real time. To distinguish it from the vehicle-perceived obstacle information below, the obstacle information perceived by the roadside device is recorded as first obstacle information, and the obstacle information perceived by the vehicle as second obstacle information.
Correspondingly, the vehicle receives the encoded first obstacle information through an On Board Unit (OBU) and sends it to an on-board computing unit, which decodes it to obtain the first obstacle information sensed by the roadside device. In addition, the vehicle's on-board sensing system senses the obstacle information in the road, recorded as second obstacle information and comprising the second obstacle data and the perception time, and sends it to the on-board computing unit; the on-board sensing system, too, senses and sends the second obstacle information in real time. A positioning system in the vehicle also locates the vehicle's position in real time and sends it to the on-board computing unit. The on-board computing unit then determines whether a roadside-perceived obstacle is a false detection based jointly on the first obstacle information sensed by the roadside device, the second obstacle information sensed by the on-board sensing system, and the vehicle information, where the vehicle information comprises multiple sets of vehicle data collected in real time.
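The patent does not specify the wire format used between the RSU and the OBU. As a hedged illustration of the encode/decode round trip only, JSON stands in for the real encoding here; the function names and message layout are assumptions.

```python
import json

def encode_obstacle_info(obstacle_data, perception_time):
    # Roadside: package the perceived obstacles with their perception time.
    message = {"timestamp": perception_time, "obstacles": obstacle_data}
    return json.dumps(message).encode("utf-8")

def decode_obstacle_info(payload):
    # On-board computing unit: recover the first obstacle information.
    message = json.loads(payload.decode("utf-8"))
    return message["obstacles"], message["timestamp"]

obstacles = [{"type": "car", "x": 10.0, "y": 2.0}]
payload = encode_obstacle_info(obstacles, 1640000000.0)
decoded, ts = decode_obstacle_info(payload)
```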
The roadside device has the capability of computationally converting objects in a captured scene image of the detected road into obstacle data with a type, a coordinate position, and a size; the vehicle has the same capability.
In combination with the above description, the obstacle information collected by the vehicle-mounted terminal includes the first obstacle information sensed by the roadside device and the second obstacle information sensed by the vehicle's on-board sensing system. For receiving the first obstacle information, the distance between the vehicle and the center point of the detected road intersection is computed in real time based on the vehicle's high-precision positioning information; once the vehicle is detected to have driven within 150 m of the center point, it receives and stores the first obstacle information sensed by the roadside device. The vehicle information comprising the multiple sets of vehicle data acquired above is also stored together with it.
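The 150 m gating described above is a simple distance check. A minimal sketch, assuming positions are given in a planar frame in metres:

```python
import math

def within_receiving_range(vehicle_pos, intersection_center, radius_m=150.0):
    """Return True once the vehicle has driven within `radius_m` of the
    detected intersection's centre point, i.e. when it should start
    receiving and storing roadside obstacle information."""
    dx = vehicle_pos[0] - intersection_center[0]
    dy = vehicle_pos[1] - intersection_center[1]
    return math.hypot(dx, dy) <= radius_m
```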
After acquiring the first obstacle information sent in real time by the roadside device, the second obstacle information perceived in real time by the vehicle, and the vehicle information, the vehicle-mounted terminal needs to perform time alignment in order to accurately detect whether a roadside-perceived obstacle is a false detection. That is, according to the acquisition time, first initial obstacle data, second initial obstacle data, and initial vehicle data perceived at the same acquisition time are screened out of the accumulated first obstacle information, second obstacle information, and vehicle information, and whether a roadside-perceived obstacle is a false detection is then determined jointly on the basis of these three. The first initial obstacle data is screened from the first obstacle information, the second initial obstacle data from the second obstacle information, and the initial vehicle data from the vehicle information. For example, the initial vehicle data includes the vehicle position and the vehicle size.
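Time alignment as described above amounts to intersecting the three streams on acquisition time. A sketch, assuming each stream has been indexed by timestamp (a layout assumption; the patent does not prescribe one):

```python
def time_align(first_info, second_info, vehicle_info):
    """Screen out the records captured at the same acquisition time.
    Each argument maps acquisition time -> perceived data; the
    intersection of the key sets gives the jointly available moments."""
    common = sorted(set(first_info) & set(second_info) & set(vehicle_info))
    return [(t, first_info[t], second_info[t], vehicle_info[t]) for t in common]

aligned = time_align(
    {1.0: "first@1", 2.0: "first@2"},
    {2.0: "second@2", 3.0: "second@3"},
    {2.0: "vehicle@2"},
)
```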
It should be noted that, in the embodiments of the present disclosure, when the first initial obstacle data sensed by the roadside device is obstacle data in a full-path coverage scene, the second initial obstacle data sensed by the vehicle should also be obstacle data in a full-path coverage scene, so that the coverage areas of the two sets of initial obstacle data are the same and obstacle matching can be performed better. To this end, see figs. 4, 5, and 6, which are schematic path diagrams of obstacle data in three vehicle-perceived full-path coverage scenarios provided by embodiments of the present disclosure; by combining these three path modes, the vehicle can perceive the obstacle data in a full-path coverage scene.
For example, fig. 7 is a schematic diagram of roadside-perceived and vehicle-perceived obstacles provided by an embodiment of the present disclosure, where the first initial obstacle data perceived by the roadside device from the roadside viewing angle and the second initial obstacle data perceived by the vehicle from the vehicle-end viewing angle are collected at the same moment. The left diagram in fig. 7 shows the obstacles corresponding to the second initial obstacle data perceived by the vehicle: obstacle 1, obstacle 2, and 4 obstacles not labeled with numbers. The right diagram in fig. 7 shows the obstacles corresponding to the first initial obstacle data perceived by the roadside device: obstacle 1, obstacle 2, obstacle 3, obstacle 4, obstacle 5, the vehicle, and 2 obstacles not labeled with numbers.
When detecting whether the 7 obstacles corresponding to the roadside-sensed first initial obstacle data are false detections, the 6 obstacles corresponding to the vehicle-sensed second initial obstacle data serve as the ground-truth obstacles. Since the obstacles corresponding to the second initial obstacle data are the ground truth for deciding whether a roadside-perceived obstacle is a false detection, the accuracy of the vehicle-sensed obstacle data must be ensured.
If the second initial obstacle data sensed by the vehicle all lies within the detection range corresponding to the vehicle's perception capability, the first initial obstacle data may be directly determined as the first obstacle data and the second initial obstacle data as the second obstacle data. Assuming the obstacle data shown in fig. 7 is perceived by the vehicle at the position shown there, and the two unnumbered obstacles in the left diagram lie outside the detection range corresponding to the vehicle's perception capability, the second initial obstacle data needs further screening. In general, the vehicle's perception capability corresponds to a detection range centered on the vehicle with a radius of about 60 meters. Obstacle data within this detection range is therefore screened from the vehicle-sensed second initial obstacle data and determined as the second obstacle data, whose corresponding obstacles serve as the ground truth. Correspondingly, obstacle data within the same detection range is screened from the roadside-sensed first initial obstacle data and determined as the first obstacle data. In addition, the vehicle data within the detection range is screened from the initial vehicle data. In this way the first obstacle data in the detected road sensed by the roadside device, the second obstacle data in the detected road sensed by the vehicle, and the vehicle data are acquired.
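The screening by detection range reduces to a radial filter around the vehicle. A minimal sketch, assuming obstacle records carry planar coordinates in metres:

```python
import math

def filter_by_detection_range(obstacles, vehicle_pos, radius_m=60.0):
    # Keep only the obstacle records inside the vehicle-centred detection
    # range (the description above cites a radius of roughly 60 metres).
    return [o for o in obstacles
            if math.hypot(o["x"] - vehicle_pos[0],
                          o["y"] - vehicle_pos[1]) <= radius_m]

nearby = filter_by_detection_range(
    [{"x": 10.0, "y": 0.0}, {"x": 100.0, "y": 0.0}], (0.0, 0.0))
```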
In combination with the schematic diagram of roadside-perceived and vehicle-perceived obstacles shown in fig. 7, with the detection range defined by the vehicle's perception capability, only the obstacles corresponding to the second obstacle data within that range are used as the ground truth for deciding whether the obstacles corresponding to the roadside-perceived first obstacle data within the range are false detections. For example, fig. 8 is a schematic diagram of another set of roadside-perceived and vehicle-perceived obstacles provided by an embodiment of the present disclosure. The left diagram in fig. 8 shows the obstacles corresponding to the second initial obstacle data perceived by the vehicle from the vehicle-end viewing angle: obstacle 1 and obstacle 2. The right diagram in fig. 8 shows the obstacles corresponding to the first initial obstacle data perceived by the roadside device from the roadside viewing angle: obstacle 1, obstacle 2, obstacle 3, obstacle 4, obstacle 5, and the vehicle.
For example, the embodiments of the present disclosure may further account for the limits of the vehicle's line of sight: a vehicle may have perception blind areas, i.e. sight-line blind areas, in the intersection of the detected road, and cannot perceive obstacle data inside them, so additional area restrictions may be applied when screening obstacle data within the detection range corresponding to the vehicle's perception capability. Further, the vehicle can perceive not only obstacle data within the lane lines in the intersection of the detected road but also obstacle data on the outer periphery of the road, whereas roadside devices can currently only sense obstacle data within the road and within a limited range. The obstacle false detection addressed in the present disclosure therefore only concerns obstacles within the road and does not consider false detection of obstacles on the road's outer periphery.
After the first obstacle data in the detected road sensed by the roadside device and the second obstacle data in the detected road sensed by the vehicle are acquired respectively, the first obstacle corresponding to the first obstacle data and the second obstacle corresponding to the second obstacle data may be matched, that is, the following S202 is executed:
s202, matching the first obstacle corresponding to the first obstacle data with the second obstacle corresponding to the second obstacle data to obtain a matching result.
Wherein the matching result comprises a match or a mismatch.
For example, the number of the first obstacles may be one or more, and may be specifically set according to actual needs; the number of the second obstacles may be one or more, and may be specifically set according to actual needs.
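As an illustration of step S202, one plausible implementation is greedy nearest-neighbour matching on position with a distance threshold. The patent's own matching procedure (second embodiment, figs. 9-12) is more elaborate, so treat this as a sketch under that stated assumption.

```python
import math

def match_obstacles(first_obstacles, second_obstacles, max_dist_m=2.0):
    """Greedily match each first (roadside) obstacle to the nearest unused
    second (vehicle) obstacle within `max_dist_m`.  Returns a dict mapping
    first-obstacle index -> matched second-obstacle index, or None when no
    second obstacle matches (a candidate false detection)."""
    unused = set(range(len(second_obstacles)))
    result = {}
    for i, f in enumerate(first_obstacles):
        best, best_d = None, max_dist_m
        for j in unused:
            s = second_obstacles[j]
            d = math.hypot(f["x"] - s["x"], f["y"] - s["y"])
            if d <= best_d:
                best, best_d = j, d
        result[i] = best
        if best is not None:
            unused.discard(best)
    return result

matches = match_obstacles([{"x": 0.0, "y": 0.0}, {"x": 30.0, "y": 0.0}],
                          [{"x": 0.5, "y": 0.0}])
```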
When the first obstacle corresponding to the first obstacle data is matched with the second obstacles corresponding to the second obstacle data, if a second obstacle matching the first obstacle exists, the first obstacle is an obstacle accurately perceived by the roadside; conversely, if no second obstacle matches the first obstacle, the first obstacle may be an obstacle erroneously detected by the roadside device. Therefore, in order to accurately determine whether the first obstacle was erroneously detected by the roadside device, the following S203 may be performed in conjunction with the vehicle data, jointly determining whether the first obstacle was erroneously detected according to the matching result and the vehicle data:
s203, determining a detection result of the first obstacle according to the matching result and the vehicle data; wherein the detection result comprises false detection or non-false detection.
It can be seen that, in the embodiment of the present disclosure, when determining whether an obstacle perceived by the roadside device is a false detection obstacle, the first obstacle data in the detected road perceived by the roadside device, the second obstacle data in the detected road perceived by the vehicle, and the vehicle data may be acquired respectively, all at the same acquisition moment; the first obstacle corresponding to the first obstacle data is matched with the second obstacle corresponding to the second obstacle data; and the detection result of the first obstacle is then determined according to the matching result and the vehicle data. In this way, whether an obstacle perceived by the roadside device is a false detection obstacle is detected automatically, which effectively improves the efficiency of obstacle detection.
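The three-step flow of S201–S203 can be sketched as follows. This is a minimal illustration, not an interface defined by the disclosure: the `Obstacle` type, field names, and the simple overlap-based matching are all assumptions, and S203's blind-area resolution is left out.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle:
    cx: float      # centre position in a shared world frame (assumed metres)
    cy: float
    length: float  # bounding-box size
    width: float

def overlap_area(a: Obstacle, b: Obstacle) -> float:
    # Axis-aligned rectangle intersection of the two bounding boxes.
    dx = min(a.cx + a.length / 2, b.cx + b.length / 2) - \
         max(a.cx - a.length / 2, b.cx - b.length / 2)
    dy = min(a.cy + a.width / 2, b.cy + b.width / 2) - \
         max(a.cy - a.width / 2, b.cy - b.width / 2)
    return max(dx, 0.0) * max(dy, 0.0)

def match_results(first: List[Obstacle], second: List[Obstacle]) -> List[bool]:
    """S202: a first (roadside) obstacle 'matches' if any second
    (vehicle-perceived) obstacle overlaps it at all."""
    return [any(overlap_area(f, s) > 0.0 for s in second) for f in first]
```

An unmatched entry in the returned list is only a candidate false detection; S203 still has to consult the vehicle data (blind areas) before a verdict.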
Based on the embodiment shown in fig. 2, when matching the first obstacle corresponding to the first obstacle data with the second obstacle corresponding to the second obstacle data in S202, it should be considered that the second obstacle data perceived by the vehicle does not include the vehicle itself. For accurate matching, the first obstacle data perceived by the roadside should therefore not include obstacle data in which the vehicle appears as an obstacle. Before matching, it may be determined whether the roadside-perceived first obstacle data includes such obstacle data. If it does not, the first obstacle corresponding to the first obstacle data may be matched directly with the second obstacle corresponding to the second obstacle data, and the detection result of the first obstacle determined directly from the matching result and the vehicle data; a specific implementation can be seen in the technical scheme of the third embodiment.
If the roadside-perceived first obstacle data does include obstacle data of the vehicle as an obstacle, then in order to accurately detect whether a roadside-perceived obstacle is a false detection obstacle and to accurately evaluate the perception capability of the roadside device, the obstacle data of the vehicle may be removed from the first obstacle data before matching. Next, how to match the first obstacle corresponding to the first obstacle data with the second obstacle corresponding to the second obstacle data in S202 is described in detail through the second embodiment shown in fig. 9 below.
Example Two
Fig. 9 is a flowchart illustrating a method for matching a first obstacle corresponding to first obstacle data with a second obstacle corresponding to second obstacle data according to a second embodiment of the present disclosure, where the method may also be performed by software and/or hardware devices. For example, referring to fig. 9, the method may include:
s901, removing obstacle data of the vehicle as an obstacle from the first obstacle data to obtain third obstacle data.
When the obstacle data of the vehicle as the obstacle is removed from the first obstacle data, the vehicle and the first obstacle corresponding to the first obstacle data may be matched first, the first obstacle matched with the vehicle is determined from the first obstacle corresponding to the first obstacle data, and the determined first obstacle matched with the vehicle is the vehicle perceived by the roadside; and then removing the obstacle data of the first obstacle matched with the vehicle from the first obstacle data, so that third obstacle data which does not include the vehicle can be obtained.
When matching the vehicle with each first obstacle, a first rectangular region corresponding to the vehicle may be determined according to the vehicle data, and a second rectangular region corresponding to the first obstacle may be determined; the overlapping area of the first rectangular region and the second rectangular region is then calculated and recorded as the first overlapping area. According to the first overlapping area, the first obstacle matching the vehicle, which is the vehicle itself, is determined from the first obstacles, and its obstacle data is removed from the first obstacle data, i.e. the obstacle data of the vehicle as an obstacle is removed, thereby obtaining the third obstacle data.
For example, when the first rectangular area corresponding to the vehicle is determined according to the vehicle data, the vehicle position included in the vehicle data may be determined as a center point of the rectangular area, and the size of the rectangular area may be determined according to the vehicle size included in the vehicle data, so as to determine the first rectangular area corresponding to the vehicle.
For example, when the second rectangular area corresponding to the first obstacle is determined, the position of the obstacle included in the first obstacle data may also be determined as the center point of the rectangular area, and the size of the rectangular area is determined according to the size of the obstacle included in the first obstacle data, so as to determine the second rectangular area corresponding to the first obstacle.
For example, when determining a first obstacle matching with the vehicle from the first obstacles according to the first overlap area and removing obstacle data of the first obstacle matching with the vehicle from the first obstacle data, the first obstacle corresponding to the maximum first overlap area may be determined according to the first overlap area; determining a first obstacle corresponding to the maximum first overlapping area as a first obstacle matched with the vehicle; and then removing the obstacle data of the first obstacle corresponding to the maximum first overlapping area from the first obstacle data, thereby obtaining third obstacle data.
For example, referring to the schematic diagram of the roadside-perceived and vehicle-perceived obstacles shown in fig. 8, assume that the obstacles corresponding to the roadside-perceived first obstacle data include obstacle 1, obstacle 2, obstacle 3, obstacle 4, obstacle 5, and the vehicle. When determining the first obstacle matching the vehicle from these 6 obstacles, as shown in fig. 10, a schematic diagram of matching the vehicle with the first obstacle according to an embodiment of the present disclosure, the vehicle may be matched with each of the 6 obstacles in turn: the first overlapping area between the first rectangular region corresponding to the vehicle and the second rectangular region corresponding to each obstacle is calculated, the first obstacle corresponding to the largest first overlapping area is determined as the first obstacle matching the vehicle, and its obstacle data is removed from the roadside-perceived first obstacle data, i.e. the obstacle data of the vehicle as an obstacle is removed. The third obstacle data is thereby obtained, and the obstacles corresponding to the third obstacle data include obstacle 1, obstacle 2, obstacle 3, obstacle 4, and obstacle 5.
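The removal step of S901 can be sketched as follows. This is a minimal illustration assuming axis-aligned rectangles built from a centre point plus a size, as described above; the `Box` type and function names are hypothetical, not from the disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Box:
    cx: float      # centre position
    cy: float
    length: float  # bounding-box size
    width: float

def rect(b: Box) -> Tuple[float, float, float, float]:
    # Rectangle corners derived from the centre point and the size.
    return (b.cx - b.length / 2, b.cy - b.width / 2,
            b.cx + b.length / 2, b.cy + b.width / 2)

def overlap_area(a: Box, b: Box) -> float:
    ax0, ay0, ax1, ay1 = rect(a)
    bx0, by0, bx1, by1 = rect(b)
    return max(min(ax1, bx1) - max(ax0, bx0), 0.0) * \
           max(min(ay1, by1) - max(ay0, by0), 0.0)

def remove_ego(first_obstacles: List[Box], ego: Box) -> List[Box]:
    """Drop the roadside obstacle with the largest first overlapping area
    with the ego vehicle: that obstacle is taken to be the vehicle itself."""
    if not first_obstacles:
        return []
    ego_idx = max(range(len(first_obstacles)),
                  key=lambda i: overlap_area(first_obstacles[i], ego))
    return [b for i, b in enumerate(first_obstacles) if i != ego_idx]
```

In practice one would likely also require the largest overlap to exceed a minimum value before removing anything, since `max` here always removes one obstacle even when nothing genuinely overlaps the ego vehicle.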
After the obstacle data of the vehicle as the obstacle is removed from the first obstacle data, a third obstacle corresponding to the third obstacle data may be matched with a second obstacle corresponding to the second obstacle data, that is, the following S902 is executed:
and S902, matching a third obstacle corresponding to the third obstacle data with a second obstacle corresponding to the second obstacle data to obtain a matching result.
For example, when matching a third obstacle corresponding to the third obstacle data with the second obstacles corresponding to the second obstacle data, the second overlapping area between the rectangular region corresponding to the third obstacle and the rectangular region corresponding to each second obstacle may first be determined; the fifth obstacle corresponding to the largest second overlapping area is determined from the second obstacles; the largest second overlapping area is then compared with a preset threshold. If the largest second overlapping area is smaller than or equal to the preset threshold, the third obstacle is determined not to match the fifth obstacle; if it is larger than the preset threshold, the third obstacle is determined to match the fifth obstacle. The value of the preset threshold may be set according to actual needs.
For example, when matching the third obstacle corresponding to the third obstacle data with the second obstacle corresponding to the second obstacle data, in combination with the description in S201, the limitation of the vehicle's perception line of sight may also be considered: the vehicle cannot perceive obstacle data in its perception blind areas, so additional area restrictions may be applied when screening obstacle data within the detection range corresponding to the vehicle's perception capability. For example, as shown in fig. 11, a schematic diagram of intersection division provided in an embodiment of the present disclosure, the intersection of the detected road may be divided into 9 regions, numbered in sequence: region 1, region 2, region 3, region 4, region 5, region 6, region 7, region 8, and region 9. Because the detected road may be affected by factors such as a central green belt, a median, and the intersection orientation, only the obstacle data within the 9 regions and in the direction of the lane line where the vehicle is located may be screened, and the third obstacles corresponding to the third obstacle data so screened are matched with the second obstacles corresponding to the second obstacle data, which improves the accuracy of obstacle matching.
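As a hypothetical sketch of such an area limitation, the intersection's bounding box could be divided into a 3×3 grid of 9 regions and obstacle positions outside the grid discarded. The grid layout, bounds, and function names are illustrative assumptions; fig. 11's actual region geometry and the lane-line-direction filtering may differ.

```python
from typing import List, Optional, Tuple

def region_of(x: float, y: float,
              x_min: float, y_min: float,
              x_max: float, y_max: float) -> Optional[int]:
    """Region number 1-9 for a point in a 3x3 grid over the
    intersection's bounding box, or None if the point lies outside."""
    if not (x_min <= x <= x_max and y_min <= y <= y_max):
        return None
    col = min(int((x - x_min) / ((x_max - x_min) / 3)), 2)
    row = min(int((y - y_min) / ((y_max - y_min) / 3)), 2)
    return row * 3 + col + 1

def filter_in_intersection(points: List[Tuple[float, float]],
                           bounds: Tuple[float, float, float, float]
                           ) -> List[Tuple[float, float]]:
    # Keep only obstacle positions that fall inside one of the 9 regions.
    x_min, y_min, x_max, y_max = bounds
    return [p for p in points
            if region_of(p[0], p[1], x_min, y_min, x_max, y_max) is not None]
```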
In combination with the schematic diagram of the roadside-perceived and vehicle-perceived obstacles shown in fig. 8, suppose the third obstacles corresponding to the roadside-perceived third obstacle data include obstacle 1, obstacle 2, obstacle 3, obstacle 4, and obstacle 5 in the right diagram, and the obstacles corresponding to the second obstacle data include obstacle 1 and obstacle 2 in the left diagram. As an example, see fig. 12, a schematic diagram of matching the third obstacle with the second obstacle provided in an embodiment of the present disclosure: the second overlapping area of obstacle 1 in the left diagram with each of obstacle 1, obstacle 2, obstacle 3, obstacle 4, and obstacle 5 in the right diagram may first be determined. The calculation shows that the second overlapping area between obstacle 1 in the left diagram and obstacle 1 in the right diagram is the largest, and this largest second overlapping area is greater than the preset threshold, so obstacle 1 in the left diagram is determined to match obstacle 1 in the right diagram.
Obstacle 2 in the left diagram is then matched with obstacle 1, obstacle 2, obstacle 3, obstacle 4, and obstacle 5 in the right diagram in the same way: the second overlapping area between obstacle 2 in the left diagram and obstacle 2 in the right diagram is calculated to be the largest, and since this largest second overlapping area is greater than the preset threshold, obstacle 2 in the left diagram is determined to match obstacle 2 in the right diagram.
After matching, the obstacle 1 and the obstacle 2 in the obstacle 1, the obstacle 2, the obstacle 3, the obstacle 4 and the obstacle 5 corresponding to the third obstacle data sensed by the roadside can be determined as the obstacles successfully matched with the second obstacle; the obstacle 3, the obstacle 4, and the obstacle 5 are obstacles that fail to match the second obstacle, and thus a matching result is obtained.
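The overlap-and-threshold matching of S902 illustrated above can be sketched as follows; the names and the threshold value are hypothetical, and a shared world frame for both data sources is assumed.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Box:
    cx: float      # centre position
    cy: float
    length: float  # bounding-box size
    width: float

def overlap_area(a: Box, b: Box) -> float:
    # Axis-aligned rectangle intersection (the "second overlapping area").
    dx = min(a.cx + a.length / 2, b.cx + b.length / 2) - \
         max(a.cx - a.length / 2, b.cx - b.length / 2)
    dy = min(a.cy + a.width / 2, b.cy + b.width / 2) - \
         max(a.cy - a.width / 2, b.cy - b.width / 2)
    return max(dx, 0.0) * max(dy, 0.0)

def match(third: List[Box], second: List[Box],
          threshold: float) -> Dict[int, Optional[int]]:
    """For each third obstacle, pick the second obstacle with the largest
    second overlapping area; it is a match only if that area exceeds the
    preset threshold, otherwise the third obstacle stays unmatched (None)."""
    result: Dict[int, Optional[int]] = {}
    for i, t in enumerate(third):
        best = max(range(len(second)),
                   key=lambda j: overlap_area(t, second[j]), default=None)
        if best is not None and overlap_area(t, second[best]) > threshold:
            result[i] = best
        else:
            result[i] = None
    return result
```

Entries mapped to `None` are the match-failure cases that S1301/S1302 resolve against the vehicle's blind areas.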
Based on the obtained matching result, obstacle 1 and obstacle 2 perceived by the roadside are successfully matched, indicating that they are obstacles accurately perceived by the roadside device; obstacle 3, obstacle 4, and obstacle 5 fail to match, indicating that they are obstacles not confirmed by vehicle perception. Whether an unmatched obstacle is a false detection obstacle needs to be further determined in combination with the vehicle data, that is, the detection result of the first obstacle is determined according to the matching result and the vehicle data.
It can be seen that in the embodiment of the present disclosure, when a first obstacle corresponding to the first obstacle data is matched with a second obstacle corresponding to the second obstacle data, it is considered that the vehicle itself is not included in the second obstacle data in the detected road sensed by the vehicle, and the obstacle data in which the vehicle is an obstacle is included in the first obstacle data sensed by the roadside, so that the obstacle data in which the vehicle is an obstacle may be removed from the first obstacle data; matching a third obstacle corresponding to the obtained third obstacle data with a second obstacle corresponding to the second obstacle data; therefore, whether the obstacle sensed by the road side is the false detection obstacle or not can be accurately detected based on the matching result, and the accuracy of the false detection obstacle detection result is improved.
Based on the above embodiment, after the third obstacle corresponding to the third obstacle data is matched with the second obstacle corresponding to the second obstacle data to obtain a matching result, the detection result of the first obstacle can be determined according to the matching result and the vehicle data. Next, a detailed description is given through the following Example Three shown in fig. 13.
Example Three
Fig. 13 is a flowchart of a method for determining a detection result of a first obstacle according to a matching result and vehicle data according to a third embodiment of the present disclosure; the method may also be performed by software and/or hardware means. For example, referring to fig. 13, the method may include:
and S1301, if the matching result indicates that a fourth obstacle which is failed to be matched with the second obstacle exists in the third obstacles, determining the position relation between the fourth obstacle and the vehicle according to the vehicle data.
In combination with the schematic diagram of the roadside perceived obstacle and the vehicle perceived obstacle shown in fig. 8, after the third obstacle corresponding to the third obstacle data is respectively matched with the second obstacle corresponding to the second obstacle data, it can be determined that, in the third obstacle corresponding to the third obstacle data, the obstacle 3, the obstacle 4, and the obstacle 5 are fourth obstacles that have failed to be matched with the second obstacle.
The obstacle 3, obstacle 4, and obstacle 5 that failed to match the second obstacles are not necessarily all falsely detected due to roadside perception errors: it is also possible that, during vehicle perception, the perception line of sight was blocked by other obstacles, so that the vehicle simply did not perceive them. Therefore, for each fourth obstacle among obstacle 3, obstacle 4, and obstacle 5, the positional relationship between that obstacle and the vehicle needs to be further determined.
For example, when determining the positional relationship between a fourth obstacle and the vehicle, assume that the obstacles corresponding to the roadside-perceived third obstacle data include obstacle 6, obstacle 7, obstacle 8, and the vehicle. As shown in fig. 14, a schematic diagram of the positional relationship between the vehicle and obstacle 6 provided in an embodiment of the present disclosure, the coordinates corresponding to obstacle 6 may be calculated from the position and size of obstacle 6 included in its obstacle data and recorded as (x1, y1), (x2, y2); the coordinates corresponding to the vehicle may be calculated from the vehicle position and vehicle size and recorded as (x3, y3), (x4, y4); the coordinates (x1, y1), (x2, y2), (x3, y3), and (x4, y4) then enclose a rectangular area. As can be seen from fig. 14, obstacle 6 is shielded by obstacle 7, so its positional relationship with the vehicle is: obstacle 6 is in a perception blind area within the perception range of the vehicle. The positional relationship between obstacle 7 and the vehicle is: obstacle 7 is not in a perception blind area within the perception range of the vehicle; and the positional relationship between obstacle 8 and the vehicle is: obstacle 8 is not in a perception blind area within the perception range of the vehicle. The positional relationships of the vehicle with obstacle 6, obstacle 7, and obstacle 8 are thereby determined.
Based on the position relation determination method of fig. 14, the position relation between the obstacle 3 and the vehicle shown in fig. 8 is obtained as follows: the obstacle 3 is in a perception blind area in the perception range of the vehicle; the position relation between the obstacle 4 and the vehicle is as follows: the obstacle 4 is positioned in a perception blind area in the perception range of the vehicle; the position relation between the obstacle 5 and the vehicle is as follows: the obstacle 5 is not in a blind sensing area within the sensing range of the vehicle.
After determining the position relationship between the fourth obstacle and the vehicle according to the vehicle data, the detection result of the fourth obstacle may be determined according to the position relationship, that is, the following S1302 is performed:
and S1302, determining a detection result of the fourth obstacle according to the position relation.
For example, when determining the detection result of the fourth obstacle according to the positional relationship, it may be determined from the positional relationship whether the fourth obstacle is in a perception blind area within the perception range of the vehicle. If the fourth obstacle is not in a perception blind area, the detection result is determined to be false detection, that is, the fourth obstacle is a false detection obstacle; if the fourth obstacle is in a perception blind area, the detection result is determined to be non-false detection, that is, the fourth obstacle is a non-false detection obstacle.
In conjunction with the description in S1301 above, obstacle 3 and obstacle 4 are in the blind sensing area within the sensing range of the vehicle, so the second obstacle data perceived by the vehicle does not include their obstacle data; obstacle 3 and obstacle 4 are therefore non-false-detection obstacles perceived by the roadside. Obstacle 5, however, is not in the blind sensing area: if obstacle 5 actually existed, the second obstacle data perceived by the vehicle would certainly include its obstacle data, yet it does not. Obstacle 5 is therefore a false detection obstacle perceived by the roadside.
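The blind-area test of S1301 and the decision rule of S1302 can be sketched together. The disclosure describes the occlusion geometry only loosely, so the coarse sampling-based line-of-sight check below is an assumption, as are all names.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Box:
    cx: float      # centre position
    cy: float
    length: float  # bounding-box size
    width: float

def contains(b: Box, x: float, y: float) -> bool:
    # Point strictly inside the obstacle's axis-aligned rectangle.
    return abs(x - b.cx) < b.length / 2 and abs(y - b.cy) < b.width / 2

def in_blind_area(target: Box, ego: Box, obstacles: List[Box]) -> bool:
    """S1301 (coarse): sample the ego->target sight line and check whether
    any other obstacle's rectangle blocks it."""
    for occluder in obstacles:
        if occluder is target:
            continue
        for i in range(1, 20):
            t = i / 20
            x = ego.cx + (target.cx - ego.cx) * t
            y = ego.cy + (target.cy - ego.cy) * t
            if contains(occluder, x, y):
                return True
    return False

def detection_result(unmatched: List[Box], ego: Box,
                     obstacles: List[Box]) -> List[str]:
    # S1302: not in the blind area -> false detection; in it -> non-false.
    return ['non-false detection' if in_blind_area(o, ego, obstacles)
            else 'false detection' for o in unmatched]
```

A production system would replace the point sampling with an exact segment-rectangle intersection and account for sensor field of view, but the decision structure is the same.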
It can be seen that, in the embodiment of the present disclosure, when the detection result of the first obstacle is determined according to the matching result and the vehicle data, if the matching result indicates that a fourth obstacle that fails to match the second obstacle exists among the third obstacles, the positional relationship between the fourth obstacle and the vehicle is determined according to the vehicle data; the detection result of the fourth obstacle is then determined according to this positional relationship. Whether a roadside-perceived obstacle is a false detection obstacle can thus be accurately detected based on the positional relationship, improving the accuracy of the false detection result.
Example Four
Fig. 15 is a schematic structural diagram of a device 150 for detecting an obstacle according to a fourth embodiment of the present disclosure. For example, referring to fig. 15, the device 150 for detecting an obstacle may include:
an acquisition unit 1501, configured to acquire first obstacle data in a detected road sensed by a roadside device, second obstacle data in a detected road sensed by a vehicle, and vehicle data; the first obstacle data, the second obstacle data and the vehicle data are acquired at the same time.
The matching unit 1502 is configured to match a first obstacle corresponding to the first obstacle data with a second obstacle corresponding to the second obstacle data, so as to obtain a matching result.
A processing unit 1503 for determining a detection result of the first obstacle according to the matching result and the vehicle data; wherein the detection result comprises false detection or non-false detection.
Optionally, the matching unit 1502 includes a first matching module and a second matching module.
And the first matching module is used for removing the obstacle data of the vehicle as the obstacle from the first obstacle data to obtain third obstacle data.
And the second matching module is used for matching a third obstacle corresponding to the third obstacle data with a second obstacle corresponding to the second obstacle data to obtain a matching result.
Optionally, the first matching module includes a first matching submodule and a second matching submodule.
The first matching submodule is used for determining a first rectangular area corresponding to the vehicle according to the vehicle data; and determining a second rectangular area corresponding to the first obstacle.
And the second matching submodule is used for removing the obstacle data of the vehicle as the obstacle from the first obstacle data according to the first overlapping area of the first rectangular area and the second rectangular area.
Optionally, the second matching sub-module is specifically configured to remove, from the first obstacle data, obstacle data of the first obstacle corresponding to the largest first overlapping area.
Optionally, the processing unit 1503 includes a first processing module and a second processing module.
And the first processing module is used for determining the position relation between the fourth obstacle and the vehicle according to the vehicle data if the matching result indicates that the fourth obstacle which is failed to be matched with the second obstacle exists in the third obstacles.
And the second processing module is used for determining the detection result of the fourth obstacle according to the position relation.
Optionally, the second processing module includes a first processing sub-module, a second processing sub-module, and a third processing sub-module.
And the first processing submodule is used for determining whether the fourth obstacle is in a perception blind area in the perception range of the vehicle or not according to the position relation.
And the second processing sub-module is used for determining that the detection result is false detection if the fourth obstacle is not in the perception blind area.
And the third processing sub-module is used for determining that the detection result is non-false detection if the fourth obstacle is in the perception blind area.
Optionally, the second matching module includes a third matching submodule, a fourth matching submodule, and a fifth matching submodule.
The third matching submodule is used for determining a rectangular area corresponding to the third obstacle and a second overlapping area of the rectangular area corresponding to the second obstacle; and determining a fifth obstacle corresponding to the maximum second overlapping area from the second obstacles.
And the fourth matching submodule is used for determining that the third obstacle is not matched with the fifth obstacle if the maximum second overlapping area is smaller than or equal to the preset threshold.
And the fifth matching submodule is used for determining that the third obstacle is matched with the fifth obstacle if the maximum second overlapping area is larger than a preset threshold value.
The obstacle detection device 150 provided in the embodiment of the present disclosure can implement the technical solution of the obstacle detection method in any of the above embodiments; its implementation principle and beneficial effects are similar to those of the method embodiments and are not repeated here.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of the electronic device can read the computer program, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any of the embodiments described above.
Fig. 16 is a schematic block diagram of an electronic device 160 provided by an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 16, the device 160 includes a computing unit 1601, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1602 or a computer program loaded from a storage unit 1608 into a Random Access Memory (RAM) 1603. The RAM 1603 can also store various programs and data required for the operation of the device 160. The computing unit 1601, the ROM 1602, and the RAM 1603 are connected to each other via a bus 1604. An input/output (I/O) interface 1605 is also connected to the bus 1604.
A number of components in device 160 connect to I/O interface 1605, including: an input unit 1606 such as a keyboard, a mouse, and the like; an output unit 1607 such as various types of displays, speakers, and the like; a storage unit 1608, such as a magnetic disk, optical disk, or the like; and a communication unit 1609 such as a network card, a modem, a wireless communication transceiver, etc. The communication unit 1609 allows the device 160 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 1601 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 1601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 1601 executes the respective methods and processes described above, such as the obstacle detection method. For example, in some embodiments, the obstacle detection method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 160 via the ROM 1602 and/or the communication unit 1609. When the computer program is loaded into the RAM 1603 and executed by the computing unit 1601, one or more steps of the obstacle detection method described above may be performed. Alternatively, in other embodiments, the computing unit 1601 may be configured by any other suitable means (e.g., by means of firmware) to perform the obstacle detection method.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The client-server relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship with each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that addresses the drawbacks of difficult management and weak service scalability in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; the present disclosure is not limited in this respect.
The above detailed description should not be construed as limiting the scope of the present disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent substitution, or improvement made within the spirit and principles of the present disclosure shall fall within its scope of protection.

Claims (17)

1. A method of detecting an obstacle, comprising:
acquiring first obstacle data in a detected road sensed by roadside equipment, second obstacle data in the detected road sensed by a vehicle, and vehicle data; wherein the first obstacle data, the second obstacle data, and the vehicle data are acquired at the same moment;
matching a first obstacle corresponding to the first obstacle data with a second obstacle corresponding to the second obstacle data to obtain a matching result;
determining a detection result of the first obstacle according to the matching result and the vehicle data; wherein the detection result comprises false detection or non-false detection.
2. The method of claim 1, wherein the matching a first obstacle corresponding to the first obstacle data with a second obstacle corresponding to the second obstacle data to obtain a matching result comprises:
removing obstacle data of the vehicle as an obstacle from the first obstacle data to obtain third obstacle data;
and matching a third obstacle corresponding to the third obstacle data with a second obstacle corresponding to the second obstacle data to obtain the matching result.
3. The method of claim 2, wherein the removing, from the first obstacle data, the obstacle data of the vehicle as an obstacle comprises:
determining a first rectangular area corresponding to the vehicle according to the vehicle data; determining a second rectangular area corresponding to the first obstacle;
and according to the first overlapping area of the first rectangular area and the second rectangular area, removing obstacle data of the vehicle as an obstacle from the first obstacle data.
4. The method of claim 3, wherein the removing, from the first obstacle data, the obstacle data of the vehicle as an obstacle according to the first overlapping area of the first rectangular area and the second rectangular area comprises:
and removing the obstacle data of the first obstacle corresponding to the maximum first overlapping area from the first obstacle data.
5. The method according to any one of claims 2-4, wherein said determining a detection result of the first obstacle based on the matching result and the vehicle data comprises:
if the matching result indicates that a fourth obstacle which is failed to be matched with the second obstacle exists in the third obstacles, determining the position relation between the fourth obstacle and the vehicle according to the vehicle data;
and determining the detection result of the fourth obstacle according to the position relation.
6. The method of claim 5, wherein the determining a detection result of the fourth obstacle according to the positional relationship comprises:
determining whether the fourth obstacle is in a perception blind area in the perception range of the vehicle according to the position relation;
if the fourth obstacle is not in the perception blind area, determining that the detection result is false detection;
and if the fourth obstacle is in the perception blind area, determining that the detection result is non-false detection.
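The blind-zone decision of claims 5-6 can be sketched as follows. This is an illustration only and not part of the claims; the sensing radius, the rear blind cone, and the function and field names are all assumptions introduced here, since the claims do not fix a particular blind-zone model:

```python
import math

# Hypothetical sketch of the blind-zone decision in claims 5-6: an obstacle
# seen by the roadside equipment but not matched by any vehicle-perceived
# obstacle is a false detection only if the vehicle could actually have
# observed it (i.e. it is not in a sensing blind zone).

SENSING_RADIUS_M = 60.0                    # assumed vehicle sensing range
BLIND_CONE_HALF_ANGLE = math.radians(30)   # assumed blind cone behind the vehicle

def in_blind_zone(obstacle_xy, vehicle_xy, vehicle_heading_rad):
    """Return True if the obstacle falls in the vehicle's sensing blind zone.

    Purely for illustration, the blind zone is modelled as everything beyond
    the sensing radius plus a cone directly behind the vehicle.
    """
    dx = obstacle_xy[0] - vehicle_xy[0]
    dy = obstacle_xy[1] - vehicle_xy[1]
    if math.hypot(dx, dy) > SENSING_RADIUS_M:
        return True  # beyond sensing range: the vehicle cannot confirm it
    # Absolute angle between the vehicle heading and the direction to the
    # obstacle, normalized to [0, pi].
    angle_to_obstacle = math.atan2(dy, dx)
    rel = abs((angle_to_obstacle - vehicle_heading_rad + math.pi) % (2 * math.pi) - math.pi)
    return rel > math.pi - BLIND_CONE_HALF_ANGLE  # inside the rear blind cone

def classify_unmatched(obstacle_xy, vehicle_xy, vehicle_heading_rad):
    """Claims 5-6: unmatched and visible -> false detection; in blind zone -> not."""
    if in_blind_zone(obstacle_xy, vehicle_xy, vehicle_heading_rad):
        return "non-false detection"
    return "false detection"
```

An obstacle ahead of and near the vehicle that the vehicle nonetheless failed to perceive would be classified as a false detection; the same obstacle behind the vehicle or out of range would not.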
7. The method according to any one of claims 2-6, wherein the matching a third obstacle corresponding to the third obstacle data with a second obstacle corresponding to the second obstacle data to obtain the matching result comprises:
determining a second overlapping area of the rectangular area corresponding to the third obstacle and the rectangular area corresponding to the second obstacle; determining a fifth obstacle corresponding to the maximum second overlapping area from the second obstacles;
determining that the third obstacle does not match the fifth obstacle if the maximum second overlap area is less than or equal to a preset threshold;
determining that the third obstacle matches the fifth obstacle if the maximum second overlap area is greater than the preset threshold.
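The rectangle-overlap logic of claims 2-4 and 7 can be sketched as follows. This is an illustration only and not part of the claims; the axis-aligned rectangle representation, the function names, and the threshold value are assumptions made for the sketch:

```python
def overlap_area(a, b):
    """Overlap area of two axis-aligned rectangles (x_min, y_min, x_max, y_max)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0.0

def cull_ego_vehicle(first_obstacles, vehicle_rect):
    """Claims 3-4: drop the roadside-perceived obstacle whose rectangle has the
    largest overlap with the vehicle's own rectangle -- that detection is the
    vehicle itself, leaving the third obstacle data."""
    overlaps = [overlap_area(rect, vehicle_rect) for rect in first_obstacles]
    if not overlaps or max(overlaps) == 0.0:
        return list(first_obstacles)  # vehicle not among the detections
    ego_idx = overlaps.index(max(overlaps))
    return [r for i, r in enumerate(first_obstacles) if i != ego_idx]

def match_obstacle(third_rect, second_rects, threshold=0.5):
    """Claim 7: pick the vehicle-perceived obstacle with the largest overlap;
    the pair matches only if that overlap exceeds the preset threshold."""
    if not second_rects:
        return None
    best = max(second_rects, key=lambda r: overlap_area(third_rect, r))
    return best if overlap_area(third_rect, best) > threshold else None
```

A third obstacle for which `match_obstacle` returns `None` is the unmatched fourth obstacle of claim 5, which would then go through the blind-zone decision of claims 5-6.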
8. An obstacle detection device comprising:
an acquisition unit, configured to acquire first obstacle data in a detected road sensed by roadside equipment, second obstacle data in the detected road sensed by a vehicle, and vehicle data; wherein the first obstacle data, the second obstacle data, and the vehicle data are acquired at the same moment;
the matching unit is used for matching a first obstacle corresponding to the first obstacle data with a second obstacle corresponding to the second obstacle data to obtain a matching result;
the processing unit is used for determining the detection result of the first obstacle according to the matching result and the vehicle data; wherein the detection result comprises false detection or non-false detection.
9. The apparatus of claim 8, wherein the matching unit comprises a first matching module and a second matching module;
the first matching module is used for removing the obstacle data of the vehicle as an obstacle from the first obstacle data to obtain third obstacle data;
the second matching module is configured to match a third obstacle corresponding to the third obstacle data with a second obstacle corresponding to the second obstacle data, so as to obtain the matching result.
10. The apparatus of claim 9, wherein the first matching module comprises a first matching submodule and a second matching submodule;
the first matching submodule is used for determining a first rectangular area corresponding to the vehicle according to the vehicle data; determining a second rectangular area corresponding to the first obstacle;
the second matching submodule is used for eliminating the obstacle data of the vehicle as the obstacle from the first obstacle data according to the first overlapping area of the first rectangular area and the second rectangular area.
11. The apparatus of claim 10, wherein
the second matching submodule is specifically configured to remove, from the first obstacle data, obstacle data of the first obstacle corresponding to the maximum first overlap area.
12. The apparatus of any one of claims 9-11, wherein the processing unit comprises a first processing module and a second processing module;
the first processing module is configured to determine, according to the vehicle data, a position relationship between a fourth obstacle and the vehicle if the matching result indicates that the fourth obstacle that fails to be matched with the second obstacle exists in the third obstacles;
and the second processing module is used for determining the detection result of the fourth obstacle according to the position relation.
13. The apparatus of claim 12, wherein the second processing module comprises a first processing sub-module, a second processing sub-module, and a third processing sub-module;
the first processing submodule is used for determining whether the fourth obstacle is in a perception blind area in a perception range of the vehicle according to the position relation;
the second processing sub-module is configured to determine that the detection result is false detection if the fourth obstacle is not in the perception blind area;
the third processing sub-module is configured to determine that the detection result is a non-false detection if the fourth obstacle is in the perception blind area.
14. The apparatus of any one of claims 9-13, wherein the second matching module comprises a third matching sub-module, a fourth matching sub-module, and a fifth matching sub-module;
the third matching submodule is used for determining a second overlapping area of the rectangular region corresponding to the third obstacle and the rectangular region corresponding to the second obstacle; determining a fifth obstacle corresponding to the maximum second overlapping area from the second obstacles;
the fourth matching sub-module is configured to determine that the third obstacle does not match the fifth obstacle if the maximum second overlapping area is smaller than or equal to a preset threshold;
the fifth matching sub-module is configured to determine that the third obstacle matches the fifth obstacle if the maximum second overlap area is greater than the preset threshold.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of detecting an obstacle of any one of claims 1-7.
16. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of detecting an obstacle according to any one of claims 1-7.
17. A computer program product comprising a computer program which, when being executed by a processor, carries out the steps of the method of detecting an obstacle according to any one of claims 1-7.
CN202111633794.1A 2021-12-28 2021-12-28 Obstacle detection method and device and electronic equipment Active CN114332818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111633794.1A CN114332818B (en) 2021-12-28 2021-12-28 Obstacle detection method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN114332818A 2022-04-12
CN114332818B 2024-04-09

Family

ID=81016759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111633794.1A Active CN114332818B (en) 2021-12-28 2021-12-28 Obstacle detection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114332818B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024045178A1 (en) * 2022-09-02 2024-03-07 华为技术有限公司 Sensing method, apparatus, and system

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002269694A (en) * 2001-03-08 2002-09-20 Natl Inst For Land & Infrastructure Management Mlit Road side processing device for correcting obstacle detection data
US20160274239A1 (en) * 2015-03-16 2016-09-22 Here Global B.V. Vehicle Obstruction Detection
CN106463059A (en) * 2014-06-06 2017-02-22 日立汽车系统株式会社 Obstacle-information-managing device
CN109801508A (en) * 2019-02-26 2019-05-24 百度在线网络技术(北京)有限公司 The motion profile prediction technique and device of barrier at crossing
US20190202476A1 (en) * 2017-12-28 2019-07-04 Beijing Baidu Netcom Science Technology Co., Ltd. Method, apparatus and device for obstacle in lane warning
CN110386065A (en) * 2018-04-20 2019-10-29 比亚迪股份有限公司 Monitoring method, device, computer equipment and the storage medium of vehicle blind zone
CN110687549A (en) * 2019-10-25 2020-01-14 北京百度网讯科技有限公司 Obstacle detection method and device
CN111208839A (en) * 2020-04-24 2020-05-29 清华大学 Fusion method and system of real-time perception information and automatic driving map
CN111551938A (en) * 2020-04-26 2020-08-18 北京踏歌智行科技有限公司 Unmanned technology perception fusion method based on mining area environment
CN111768642A (en) * 2019-04-02 2020-10-13 上海图森未来人工智能科技有限公司 Road environment perception and vehicle control method, system and device of vehicle and vehicle
CN112199991A (en) * 2020-08-27 2021-01-08 广州中国科学院软件应用技术研究所 Simulation point cloud filtering method and system applied to vehicle-road cooperative roadside sensing
CN112712719A (en) * 2020-12-25 2021-04-27 北京百度网讯科技有限公司 Vehicle control method, vehicle-road coordination system, road side equipment and automatic driving vehicle
CN112764013A (en) * 2020-12-25 2021-05-07 北京百度网讯科技有限公司 Method, device and equipment for testing automatic driving vehicle perception system and storage medium
CN113378947A (en) * 2021-06-21 2021-09-10 北京踏歌智行科技有限公司 Vehicle road cloud fusion sensing system and method for unmanned transportation in open-pit mining area
CN113468941A (en) * 2021-03-11 2021-10-01 长沙智能驾驶研究院有限公司 Obstacle detection method, device, equipment and computer storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Markus Hiesmair et al.: "Empowering Road Vehicles to Learn Parking Situations Based on Optical Sensor Measurements", IoT'17: Proceedings of the Seventh International Conference on the Internet of Things, pages 199-200 *
Jiang Hao: "An Environment Perception System for Autonomous Vehicles", 电子制作 (Practical Electronics), no. 15, pages 72-75 *


Also Published As

Publication number Publication date
CN114332818B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN113129625B (en) Vehicle control method and device, electronic equipment and vehicle
CN109633688A (en) A kind of laser radar obstacle recognition method and device
CN112560862B (en) Text recognition method and device and electronic equipment
CN112764013B (en) Method, device, equipment and storage medium for testing sensing system of automatic driving vehicle
CN109284801B (en) Traffic indicator lamp state identification method and device, electronic equipment and storage medium
CN112541437A (en) Vehicle positioning method and device, electronic equipment and storage medium
CN113392794B (en) Vehicle line crossing identification method and device, electronic equipment and storage medium
CN113887418A (en) Method and device for detecting illegal driving of vehicle, electronic equipment and storage medium
CN113762272A (en) Road information determination method and device and electronic equipment
CN113743344A (en) Road information determination method and device and electronic equipment
CN113688935A (en) High-precision map detection method, device, equipment and storage medium
EP4145408A1 (en) Obstacle detection method and apparatus, autonomous vehicle, device and storage medium
CN115641359B (en) Method, device, electronic equipment and medium for determining movement track of object
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN113592903A (en) Vehicle track recognition method and device, electronic equipment and storage medium
CN112863187A (en) Detection method of perception model, electronic equipment, road side equipment and cloud control platform
CN114332818A (en) Obstacle detection method and device and electronic equipment
CN111640301B (en) Fault vehicle detection method and fault vehicle detection system comprising road side unit
CN111157012B (en) Robot navigation method and device, readable storage medium and robot
CN115891868A (en) Fault detection method, device, electronic apparatus, and medium for autonomous vehicle
CN113469045B (en) Visual positioning method and system for unmanned integrated card, electronic equipment and storage medium
CN114429631A (en) Three-dimensional object detection method, device, equipment and storage medium
CN115062240A (en) Parking lot sorting method and device, electronic equipment and storage medium
CN114283398A (en) Method and device for processing lane line and electronic equipment
CN113535876A (en) Method, apparatus, electronic device, and medium for processing map data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant