CN111753765B - Sensing device detection method, sensing device detection apparatus, sensing device detection device and storage medium - Google Patents


Info

Publication number
CN111753765B
CN111753765B (application CN202010601938.4A)
Authority
CN
China
Prior art keywords
obstacle
information
determining
matching result
result data
Prior art date
Legal status
Active
Application number
CN202010601938.4A
Other languages
Chinese (zh)
Other versions
CN111753765A (en)
Inventor
李丹
李建平
陈潜
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010601938.4A
Publication of CN111753765A
Application granted
Publication of CN111753765B

Classifications

    • G06V 20/58 — Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G01S 13/86 — Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S 13/867 — Combination of radar systems with cameras
    • G01S 13/931 — Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G06F 18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The application discloses a detection method, a detection apparatus, a detection device and a storage medium for a sensing device, and relates to the fields of image processing and automatic driving. The specific implementation scheme is as follows: acquiring information of at least one first obstacle identified by a first sensing device; acquiring information of at least one second obstacle identified by at least one second sensing device, wherein the accuracy of the first sensing device is lower than that of the second sensing device; performing obstacle matching on the information of each first obstacle and the information of each second obstacle, and determining matching result data; and determining a detection result of the first sensing device according to the matching result data. The technical scheme provided by the application can effectively improve the detection efficiency of the first sensing device and reduce the detection cost.

Description

Sensing device detection method, sensing device detection apparatus, sensing device detection device and storage medium
Technical Field
The application relates to the field of data processing, in particular to the field of automatic driving and image processing.
Background
In the field of automatic driving, a sensing device mainly perceives and identifies obstacles by means of image data and radar data, and provides the identified obstacle information to downstream devices.
Detection of existing sensing devices relies largely on labeled data, a method that suffers from low detection efficiency and high detection cost.
Disclosure of Invention
The application provides a detection method, a detection apparatus, a detection device and a storage medium for a sensing device.
According to a first aspect of the present application, there is provided a detection method of a sensing device, comprising:
Acquiring information of at least one first obstacle identified by first sensing equipment;
acquiring information of at least one second obstacle identified by at least one second sensing device, wherein the accuracy of the first sensing device is lower than that of the second sensing device;
Performing obstacle matching on the information of each first obstacle and the information of each second obstacle, and determining matching result data;
and determining a detection result of the first sensing device according to the matching result data.
According to a second aspect of the present application, there is provided a detection apparatus for a sensing device, comprising:
the first acquisition module is used for acquiring information of at least one first obstacle identified by the first sensing equipment;
The second acquisition module is used for acquiring information of at least one second obstacle identified by at least one second sensing device, wherein the accuracy of the first sensing device is lower than that of the second sensing device;
The first determining module is used for performing obstacle matching on the information of each first obstacle and the information of each second obstacle, and determining matching result data;
and the second determining module is used for determining the detection result of the first sensing device according to the matching result data.
According to a third aspect of the present application, there is provided an electronic device comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the methods described above.
According to a fourth aspect of the present application there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform any of the methods described above.
According to a fifth aspect of the present application there is provided a computer program product comprising a computer program which, when executed by a processor, implements any of the methods described above.
According to the detection method, the information of the second obstacle identified by the higher-accuracy second sensing device is used as the detection reference for the first sensing device, so that obstacle matching can be performed between the information of the first obstacle identified by the first sensing device and the information of the second obstacle identified by the second sensing device, the matching result data can be determined, and the detection result of the first sensing device can be determined from the matching result data. Because the detection result of the first sensing device does not depend on labeled data, no manual-labeling processing flow is needed, which saves the time and labor cost of manual labeling. Moreover, the first sensing device and the second sensing device can perform identification in real time, so the detection result can be determined in real time, which effectively improves the detection efficiency of the first sensing device and reduces the detection cost.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are included to provide a better understanding of the present application and are not to be construed as limiting the application. Wherein:
FIG. 1 is a flow chart of a detection method of a sensing device according to the present application;
FIG. 2 is a schematic diagram of a configuration of a first sensing device and a second sensing device according to the present application;
FIG. 3A is another schematic structural view of a first sensing device and a second sensing device according to the present application;
FIG. 3B is a schematic diagram of yet another configuration of a first sensing device and a second sensing device according to the present application;
FIG. 4 is a schematic flow chart of step S103 in the detection method of the sensing device according to the present application;
FIG. 5 is a schematic flow chart of step S104 in the detection method of the sensing device according to the present application;
FIG. 6 is another schematic flow chart of step S104 in the detection method of the sensing device according to the present application;
FIG. 7 is a schematic diagram of a detection apparatus of a sensing device according to the present application;
FIG. 8 is a schematic diagram of a first determining module in the detection apparatus of the sensing device according to the present application;
FIG. 9 is a schematic diagram of a second determining module in the detection apparatus of the sensing device according to the present application;
FIG. 10 is a schematic diagram of another configuration of the first determining module in the detection apparatus of the sensing device according to the present application;
FIG. 11 is a schematic diagram of another configuration of the second determining module in the detection apparatus of the sensing device according to the present application;
FIG. 12 is a block diagram of an electronic device for implementing the sensing device detection method according to an embodiment of the present application.
Detailed Description
Existing methods for detecting a sensing device basically proceed as follows: road test data collected by an autonomous vehicle traveling on a road is used as initial data; the initial data is manually labeled to obtain labeled data; the labeled data and the initial data are then respectively input into a simulated sensing device to perform perception and identification of obstacles; and finally the identification results are compared to determine the perception performance of the sensing device. However, producing the labeled data requires multiple processing flows, such as road test data collection, determination of the labeling format, screening of the data to be labeled, and manual labeling of that data. The manual labeling alone takes at least three months, so producing labeled data is time-consuming and consumes a large amount of labor cost, which makes the detection of the sensing device inefficient and expensive.
Exemplary embodiments of the present application will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present application are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 shows a flow diagram of a method of detecting a sensing device according to the present application. As shown in fig. 1, the detection method of the sensing device may include:
Step S101, obtaining information of at least one first obstacle identified by first sensing equipment;
Step S102, obtaining information of at least one second obstacle identified by at least one second sensing device, wherein the accuracy of the first sensing device is lower than that of the second sensing device;
Step S103, performing obstacle matching on the information of each first obstacle and the information of each second obstacle, and determining matching result data.
The information of the at least one first obstacle output by the first sensing device includes information of one or more first obstacles, and likewise the information of the at least one second obstacle output by the second sensing device includes information of one or more second obstacles.
Step S104, determining a detection result of the first sensing device according to the matching result data.
For example, the matching may include matching the positions of the first obstacles and the second obstacles. By matching these positions, the overlapping relationships between first and second obstacles, the number of first obstacles successfully matched with second obstacles, and the number of first obstacles that fail to match can be obtained, and the detection result of the first sensing device can then be determined using the matching result data.
According to the detection method, the information of the second obstacle identified by the higher-accuracy second sensing device is used as the detection reference information for the first sensing device, so that obstacle matching can be performed between the information of the first obstacle identified by the first sensing device and the information of the second obstacle identified by the second sensing device, the matching result data can be determined, and the detection result of the first sensing device can be determined from the matching result data. Because the detection result of the first sensing device does not depend on labeled data, no manual-labeling processing flow is needed, which saves the time and labor cost of manual labeling. Moreover, the first sensing device and the second sensing device can perform identification in real time, so the detection result can be determined in real time, which effectively improves the detection efficiency of the first sensing device and reduces the detection cost.
When initial data is manually labeled, both the initial data and the labeled data must be stored, and using the labeled data to detect the sensing device occupies a large amount of storage space and consumes a large amount of storage resources. In the present detection method, the first sensing device and the second sensing device can perform identification in real time, so the detection result of the first sensing device is determined in real time from the information of each first obstacle and the information of each second obstacle. The detection result of the first sensing device can thus be determined synchronously during the road test of the autonomous vehicle, and the perception accuracy of the first sensing device determined in real time, without consuming storage space for labeled data, thereby saving storage resources.
In addition, a detection method that depends on labeled data requires writing an additional detection algorithm adapted to the format of the labeled data, which consumes time and labor cost. The present detection method does not depend on labeled data but instead matches against the identification result of the more accurate second sensing device, saving the time and labor cost of writing such an additional detection algorithm.
Furthermore, in a detection method that depends on labeled data, after a sensor in the sensing device is updated, the road test data must be collected and labeled again, so the cost of updating the labeled data is high. In the present detection method, the identification results of the first sensing device and the second sensing device adapt naturally to sensor updates: the information of the first obstacles and the second obstacles can be obtained simply by collecting road test data, and the detection result of the first sensing device determined from it, so the cost of updating the information and performing detection is low. For example, the road test data may be data collected while an autonomous vehicle travels on a road, including but not limited to collected driving-environment information and obstacle information, where the obstacles may include pedestrians, vehicles, rails, road shoulders, and the like present on the traveled road.
The first sensing device and the second sensing device in the above detection method are described below with reference to examples.
In one example, as shown in fig. 2, the first sensing device 201 may be a device that perceives and identifies the information of the first obstacle using initial data collected by a first sensor. The first sensor includes, but is not limited to, any one of a laser radar (lidar), a camera, and a millimeter wave radar (radar), and the initial data includes, but is not limited to, any one of point cloud data, image data, and millimeter wave radar data. The second sensing device 202 may be a device that perceives and identifies the information of the second obstacle using initial data collected by a second sensor; for example, the second sensing device 202 of fig. 2 has a 40-line lidar. The second sensor has higher accuracy than the first sensor: for example, the first sensor may be a 4-line lidar with an identification range of 5 meters, while the second sensor may be a 40-line lidar with an identification range of 120 meters. In this way, the perception accuracy of the second sensing device 202 is higher than that of the first sensing device 201, so the information of the second obstacle identified by the second sensing device 202 is more accurate, and any first sensing device can be detected directly using the information of the second obstacle identified by the second sensing device 202.
The first sensing device 201 and the second sensing device 202 may perform perception and identification of the same automatic driving environment at the same time, to ensure that they perceive the same obstacles. For example, the information of the first obstacle may include the position, projection profile, category, orientation, speed, and the like of the first obstacle; correspondingly, the information of the second obstacle may include the position, projection profile, category, orientation, speed, etc. of the second obstacle. The information of the second obstacle perceived by the second sensing device 202 substantially coincides with the information of the first obstacle perceived by the first sensing device 201, the difference being that the accuracy of the information of the second obstacle may be higher than that of the information of the first obstacle; therefore, the information of the second obstacle perceived by the second sensing device 202 may be employed as reference information for detecting the first sensing device 201.
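As an illustrative Python representation of the obstacle information listed above (the field set follows the text; the names and types are assumptions, not taken from the patent):

    from dataclasses import dataclass

    @dataclass
    class ObstacleInfo:
        position: tuple             # coordinate position, e.g. (x, y)
        projection_profile: tuple   # projection frame (x_min, y_min, x_max, y_max)
        category: str               # e.g. "pedestrian", "vehicle"
        orientation: float          # heading of the obstacle
        speed: float                # speed of the obstacle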
In one example, as shown in fig. 3A, the second sensing device 202 may also be a device that identifies the second obstacle by fusing, with a fusion algorithm, the information of the first obstacles output by at least two first sensing devices 201. For example, the number of first sensing devices may be three, the three first sensing devices respectively perceiving and identifying first obstacles using point cloud data acquired by a lidar, image data acquired by a camera, and millimeter wave radar data acquired by a millimeter wave radar; the second sensing device uses a fusion algorithm to fuse the first obstacles output by the three first sensing devices to identify the information of the second obstacles. Because a lidar identifies the position of an obstacle with high accuracy, a camera classifies the type of an obstacle with high accuracy, and a millimeter wave radar identifies the speed of an obstacle with high sensitivity, the second sensing device can integrate the first obstacles output by the three first sensing devices, and the information of the identified second obstacles can combine the advantages of the three sensors, giving the second sensing device a higher perception accuracy than a first sensing device with a single sensor.
The fusion algorithm may fuse the first obstacles output by the three first sensing devices using Dempster's combination rule, or may be another fusion algorithm; the application does not limit the type of fusion algorithm.
For example, the information of the first obstacle output by each first sensing device may carry a confidence, and the second sensing device may screen and fuse the high-confidence obstacles according to these confidences. For example, when the confidences of the positions identified by the three first sensing devices for the same first obstacle are 80%, 70% and 50% respectively, the second sensing device takes the position with 80% confidence as the position of the second obstacle corresponding to that first obstacle. In this way, the accuracy of the second obstacle output by the second sensing device is higher than the accuracy of the first obstacle output by any first sensing device. Therefore, the second sensing device can fuse the information of the first obstacles output by at least two first sensing devices to output the information of the second obstacle, and the first sensing devices can in turn be detected against the fused information of the second obstacle, so no manual labeling of initial data is needed, the detection speed is high, and the detection cost is low.
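A minimal Python sketch of the confidence-based screening just described; the names Detection and fuse_by_confidence are illustrative, not from the patent:

    from dataclasses import dataclass

    @dataclass
    class Detection:
        position: tuple    # (x, y) coordinate position of the obstacle
        confidence: float  # confidence reported by a first sensing device

    def fuse_by_confidence(candidates):
        # Keep the highest-confidence report of the same obstacle,
        # as in the 80% / 70% / 50% example above.
        return max(candidates, key=lambda d: d.confidence)

    # Three first sensing devices report positions for the same first obstacle:
    reports = [Detection((10.2, 3.1), 0.80),
               Detection((10.5, 3.0), 0.70),
               Detection((9.8, 3.3), 0.50)]
    second_obstacle_position = fuse_by_confidence(reports).position  # the 80% position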
In one example, when the second sensing device fuses the information of the first obstacles output by at least two first sensing devices with a fusion algorithm, the perception accuracy of the second sensing device can be further improved by also inputting into the fusion the obstacle information output by a V2X (vehicle-to-everything, i.e. information exchange between a vehicle and its surroundings, also called internet of vehicles) device for the same automatic driving environment, together with map information output by a high-precision map. The V2X device, combined with the high-precision map, can provide the second sensing device with information about obstacles in occluded areas, supplementing the input of the second sensing device and improving its perception effect. In addition, the historical information of the first obstacle can be consulted during fusion, and operations such as Kalman filtering can be applied to the information of the first obstacle to reduce perception noise and improve the detection accuracy for the first sensing device.
In one example, as shown in fig. 3B, since a sensing device with a high-precision sensor has higher perception accuracy than a sensing device employing a fusion algorithm, it is also possible to use the sensing device employing the fusion algorithm as the first sensing device 201, use the sensing device with the high-precision sensor as the second sensing device 202, and use the obstacle information identified by the second sensing device 202 as reference information for detecting the first sensing device 201, as described above. For example, the high-precision sensor may be a 40-line lidar.
In one embodiment, the information of the first obstacle may include a position and a projection profile of the first obstacle, and the information of the second obstacle may include a position and a projection profile of the second obstacle. As shown in fig. 4, step S103 may include:
Step S401, comparing the position of each first obstacle with the position of each second obstacle.
In one example, the position of the first obstacle and the position of the second obstacle may be coordinate positions and the position comparison may be:
determining a distance between the coordinate position of the first obstacle and the coordinate position of the second obstacle;
And determining an overlapping relationship between the first obstacle and the second obstacle under the condition that the distance is smaller than a preset distance threshold value.
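A minimal Python sketch of this position comparison; the Euclidean distance metric and the 2.0 m threshold are illustrative assumptions, since the text only requires the distance to be below a preset threshold:

    import math

    def has_overlap_relation(pos_first, pos_second, dist_thresh=2.0):
        # Step S401: the two obstacles are considered to have an overlapping
        # relationship when the distance between their coordinate positions
        # is smaller than the preset distance threshold.
        dx = pos_first[0] - pos_second[0]
        dy = pos_first[1] - pos_second[1]
        return math.hypot(dx, dy) < dist_thresh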
Step S402, when the first obstacle and the second obstacle have an overlapping relationship, determining an overlapping ratio of the projection profile of the first obstacle and the projection profile of the second obstacle.
The overlapping rate of the projection profiles refers to the ratio of the intersection of the projection profile of the first obstacle and the projection profile of the second obstacle to their union. For example, if the area of the projection profile of the first obstacle is 1 m², the area of the projection profile of the second obstacle is 0.75 m², and the area of their intersection is 0.5 m², then the area of their union is 1.25 m², and the overlapping rate of the two projection profiles is 0.4.
In one example, the projection profile may be a projection frame of the obstacle, which may be used to represent a projected area of the obstacle. For ease of calculation, the projection frame may be a rectangular frame covering the obstacle.
Step S403, determining that the first obstacle and the second obstacle are successfully matched when the overlapping rate is greater than a preset overlapping threshold.
For example, the overlap ratio threshold may be set to 0.7, and it may be determined that the first obstacle and the second obstacle are successfully matched when the overlap ratio of the projection profile of the first obstacle and the projection profile of the second obstacle is greater than 0.7. The preset overlapping threshold can be selected and adjusted according to actual needs, and the preset overlapping threshold is not limited.
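A minimal Python sketch of the overlap-rate computation for rectangular projection frames, reproducing the 1 m² / 0.75 m² example above; axis-aligned (x_min, y_min, x_max, y_max) boxes are an assumed representation:

    def overlap_rate(box_a, box_b):
        # Intersection of the two projection frames
        ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
        iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
        inter = ix * iy
        # Union = area_a + area_b - intersection
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    # Areas 1 m² and 0.75 m² with a 0.5 m² intersection give 0.5 / 1.25 = 0.4,
    # which is below the preset overlapping threshold of 0.7.
    assert abs(overlap_rate((0, 0, 1, 1), (0.5, 0, 1.25, 1)) - 0.4) < 1e-9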
And if the first obstacle is successfully matched with the second obstacle, the first obstacle and the second obstacle are the same obstacle. For example, when the first sensing device and the second sensing device sense in real time for the same automatic driving environment, the first obstacle and the second obstacle are successfully matched, which may indicate that the first obstacle and the second obstacle are the same obstacle in the same automatic driving environment.
And step S404, counting the number of the successfully matched barriers to obtain first matching result data.
In this embodiment, the overlapping relationship between each first obstacle and each second obstacle is determined by comparing their positions, so a successful match between a first obstacle and a second obstacle can be determined when the overlapping rate of their projection profiles exceeds the preset overlapping threshold. Counting the successfully matched obstacles yields the first matching result data, i.e. the number of matched pairs of first and second obstacles, providing statistics for determining the identification accuracy of the first sensing device. Moreover, confirming the match between the first obstacle and the second obstacle through both position and overlapping rate improves the accuracy of the matching.
In one embodiment, step S103 may include:
under the condition that the overlapping rate is smaller than a preset overlapping threshold value, determining that the first obstacle and the second obstacle fail to be matched;
And counting the number of obstacles failing to match, and obtaining second matching result data.
In this embodiment, the second matching result data is obtained by counting the number of obstacles that fail to match the first obstacle with the second obstacle, that is, determining the number of obstacles that fail to match between the first obstacle and the second obstacle, and providing statistical data for determining the detection result of the first sensing device.
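Tying the above steps together, a minimal Python sketch that produces the first and second matching result data, using the has_overlap_relation and overlap_rate sketches above; the greedy one-to-one pairing and the dict representation are assumptions, as the text does not prescribe a pairing scheme:

    def match_obstacles(first_obstacles, second_obstacles,
                        dist_thresh=2.0, iou_thresh=0.7):
        # Each obstacle is a dict with "position" and "box" keys (illustrative).
        tp = 0  # first matching result data: successful matches
        fn = 0  # second matching result data: failed matches
        used = set()
        for f in first_obstacles:
            for j, s in enumerate(second_obstacles):
                if j in used:
                    continue
                if not has_overlap_relation(f["position"], s["position"], dist_thresh):
                    continue
                if overlap_rate(f["box"], s["box"]) > iou_thresh:
                    tp += 1
                else:
                    fn += 1
                used.add(j)
                break
        return tp, fn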
In one embodiment, determining the detection result of the first sensing device according to the matching result data may include: determining the identification recall of the first sensing device according to the first matching result data and the second matching result data.
For example, assuming the first matching result data (e.g., the number of obstacle pairs for which matching between a first obstacle and a second obstacle succeeded) is tp and the second matching result data (e.g., the number of obstacle pairs for which matching failed) is fn, the identification recall, recall, of the first sensing device may be calculated by the formula recall = tp / (tp + fn). A first perception-identification detection result of the first sensing device is thus obtained, and the perception-identification accuracy of the first sensing device is reflected by its identification recall.
In one embodiment, the detection method may further include: in the case where there is a second obstacle that does not match the position of the first obstacle among the second obstacles, third matching result data is obtained according to the number of second obstacles whose positions do not match.
The third matching result data reflects the obstacles that the first sensing device failed to match; in other words, it is the number of obstacles that the first sensing device did not identify, providing statistics for the identification accuracy of the first sensing device.
In one embodiment, determining the detection result of the first sensing device according to the matching result data may include: determining the identification accuracy of the first sensing device according to the first matching result data and the third matching result data.
For example, if the third matching result data is denoted fp, the identification accuracy, precision, of the first sensing device may be calculated by the formula precision = tp / (tp + fp). A second perception-identification detection result of the first sensing device is thus obtained, and the perception-identification accuracy of the first sensing device is reflected by its identification precision.
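A minimal Python sketch of the two formulas above; the guards against empty denominators are an added assumption:

    def identification_recall(tp, fn):
        # recall = tp / (tp + fn)
        return tp / (tp + fn) if (tp + fn) > 0 else 0.0

    def identification_precision(tp, fp):
        # precision = tp / (tp + fp), fp being the third matching result data
        return tp / (tp + fp) if (tp + fp) > 0 else 0.0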
In one embodiment, the information of the first obstacle includes a category of the first obstacle, and the information of the second obstacle includes a category of the second obstacle. As shown in fig. 5, step S104 may include:
Step S501, under the condition that the first obstacle and the second obstacle are successfully matched, matching the category of the first obstacle and the category of the second obstacle;
step S502, determining that the category of the first obstacle is the same as the category of the second obstacle when the category of the first obstacle is matched with the category of the second obstacle;
step S503, counting the number of obstacles with the same category, and obtaining category matching result data;
Step S504, determining the classification accuracy of the first sensing device according to the classification matching result data and the first matching result data.
In one example, when the category of a successfully matched first obstacle is "pedestrian" and the category of the corresponding second obstacle is also "pedestrian", the categories are the same, and the number of obstacles with the same category can be obtained by counting; when the category of the successfully matched first obstacle is "pedestrian" but the category of the corresponding second obstacle is "vehicle", the categories differ, and the category matching of the first obstacle and the second obstacle fails.
For example, the category matching result data may be denoted acc_right_count, and the classification accuracy, acc, of the first sensing device may be calculated by the formula acc = acc_right_count / tp. The classification detection result of the first sensing device is thus obtained, and the classification accuracy can be used to reflect the perception accuracy of the first sensing device.
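A minimal Python sketch of the category check over the tp successfully matched pairs; the list-of-category-pairs representation is illustrative:

    def classification_accuracy(matched_category_pairs):
        # matched_category_pairs: [(first_category, second_category), ...]
        tp = len(matched_category_pairs)
        acc_right_count = sum(1 for a, b in matched_category_pairs if a == b)
        return acc_right_count / tp if tp > 0 else 0.0

    # e.g. [("pedestrian", "pedestrian"), ("pedestrian", "vehicle")] -> acc = 0.5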
In one embodiment, the information of the first obstacle includes an attribute of the first obstacle, and the information of the second obstacle includes an attribute of the second obstacle. As shown in fig. 6, step S104 may further include:
Step S601, determining an attribute error between the attribute of the matched first obstacle and the attribute of the second obstacle when the first obstacle and the second obstacle are successfully matched.
Wherein the properties of the first obstacle and the properties of the second obstacle include, but are not limited to, heading and speed, and the property errors include, but are not limited to, heading errors and speed errors.
In one example, when the number of successfully matched obstacle pairs between the first obstacles and the second obstacles is tp, the orientation error of the i-th pair of obstacles is heading_err_i, where i is an integer and 1 ≤ i ≤ tp.
In one example, when the number of successfully matched obstacle pairs between the first obstacles and the second obstacles is tp, the speed error of the i-th pair of obstacles is speed_err_i, where i is an integer and 1 ≤ i ≤ tp.
Step S602, determining attribute differences between the first sensing device and the second sensing device according to the attribute errors and the first matching result data.
The attribute difference may be a mean error of the attribute, including, but not limited to, a mean orientation error and a mean speed error.
In one example, the mean orientation error heading_err_avg between the first obstacles and the second obstacles may be calculated by the following formula:
heading_err_avg = (heading_err_1 + heading_err_2 + … + heading_err_tp) / tp.
In one example, the mean speed error speed_err_avg between the first obstacles and the second obstacles may be calculated by the following formula:
speed_err_avg = (speed_err_1 + speed_err_2 + … + speed_err_tp) / tp.
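A minimal Python sketch of these two averages over the tp matched pairs; units for the heading and speed errors are left to the caller, as the text does not fix them:

    def attribute_mean_errors(heading_errs, speed_errs):
        # heading_errs[i], speed_errs[i]: errors of the (i+1)-th matched pair
        tp = len(heading_errs)
        heading_err_avg = sum(heading_errs) / tp
        speed_err_avg = sum(speed_errs) / tp
        return heading_err_avg, speed_err_avg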
Based on the above, the attribute difference between the first obstacle and the second obstacle which are successfully matched can be determined, and then the attribute detection result of the first sensing device is obtained.
In summary, the detection method of the application can detect the perception-identification recall, perception-identification precision, classification accuracy, mean orientation error and mean speed error of the first sensing device. These multiple detection results help reflect the perception accuracy of the first sensing device and provide a multi-dimensional detection result for downstream devices, making it easier for downstream devices to use the information of the first obstacles identified by the first sensing device.
Fig. 7 shows a schematic structural diagram of a detection device according to an embodiment of the present application. As shown in fig. 7, the detecting apparatus 700 may include:
A first obtaining module 710, configured to obtain information of at least one first obstacle identified by the first sensing device;
A second obtaining module 720, configured to obtain information of at least one second obstacle identified by at least one second sensing device, where the accuracy of the first sensing device is lower than that of the second sensing device;
a first determining module 730, configured to perform obstacle matching on information of each first obstacle and information of each second obstacle, and determine matching result data;
the second determining module 740 is configured to determine a detection result of the first sensing device according to the matching result data.
In one embodiment, as shown in fig. 8, the information of the first obstacle includes a position and a projection profile of the first obstacle, the information of the second obstacle includes a position and a projection profile of the second obstacle, and the first determining module 730 includes:
a position comparison sub-module 801, configured to compare the position of each first obstacle with the position of each second obstacle;
An overlap rate determination submodule 802, configured to determine an overlap rate of a projection profile of the first obstacle and a projection profile of the second obstacle in a case where the first obstacle has an overlap relationship with the second obstacle;
a first determining submodule 803, configured to determine that the first obstacle and the second obstacle are successfully matched when the overlap ratio is greater than a preset overlap threshold;
the first statistics sub-module 804 is configured to count the number of successfully matched obstacles, and obtain first matching result data.
In one embodiment, as shown in fig. 8, the first determining module 730 further includes:
A second determining sub-module 805, configured to determine that the first obstacle fails to match the second obstacle when the overlapping rate is less than a preset overlapping threshold;
And a second statistics sub-module 806, configured to count the number of obstacles that fail to match, and obtain second matching result data.
In one embodiment, as shown in fig. 9, the second determining module 740 may include:
the first determining sub-module 901 is configured to determine an identification recall rate of the first sensing device according to the first matching result data and the second matching result data.
In one embodiment, as shown in fig. 8, the first determining module 730 may further include:
A data obtaining sub-module 807 for obtaining third matching result data according to the number of second obstacles whose positions are not matched in the case where there is a second obstacle which is not matched with the position of the first obstacle in the second obstacles.
In one embodiment, the second determining module 740 may include, as shown in fig. 9:
and the second determining sub-module 902 is configured to determine, according to the first matching result data and the third matching result data, the identification accuracy of the first sensing device.
In one embodiment, the information of the first obstacle includes a category of the first obstacle, the information of the second obstacle includes a category of the second obstacle, and as shown in fig. 10, the first determining module 730 may include:
A category matching submodule 1001, configured to match a category of the first obstacle with a category of the second obstacle when the first obstacle is successfully matched with the second obstacle;
A category determination submodule 1002, configured to determine that, in a case where the category of the first obstacle matches the category of the second obstacle, the category of the first obstacle is the same as the category of the second obstacle;
A third statistics submodule 1003, configured to count the number of obstacles with the same category, and obtain category matching result data;
The classification determination submodule 1004 is configured to determine a classification accuracy of the first sensing device according to the category matching result data and the first matching result data.
In one embodiment, the information of the first obstacle includes an attribute of the first obstacle, the information of the second obstacle includes an attribute of the second obstacle, and as shown in fig. 11, the second determining module 740 may include:
An attribute error determination submodule 1101, configured to determine an attribute error between an attribute of the matched first obstacle and an attribute of the second obstacle in a case where the first obstacle and the second obstacle are successfully matched;
the attribute difference determining submodule 1102, configured to determine attribute differences between the first sensing device and the second sensing device according to the attribute errors and the first matching result data.
In one embodiment, the information of the at least one second obstacle is obtained by the second sensing device fusing, through a fusion algorithm, the information of first obstacles identified by at least two first sensing devices.
According to embodiments of the present application, the present application also provides an electronic device, a readable storage medium and a computer program product.
Fig. 12 is a block diagram of an electronic device for a sensing device detection method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 12, the electronic device includes: one or more processors 1201, a memory 1202, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to an interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 1201 is illustrated in fig. 12.
The memory 1202 is a non-transitory computer readable storage medium provided by the present application. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the detection method of the sensing device provided by the application. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to execute the detection method of the sensing device provided by the present application.
The memory 1202 is used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (e.g., the first acquisition module 710, the second acquisition module 720, the first determination module 730, and the second determination module 740 shown in fig. 7) corresponding to the detection method of the sensing device in the embodiment of the application. The processor 1201 performs various functional applications of the server and data processing, i.e., implements the detection method of the sensing device in the above-described method embodiment, by running non-transitory software programs, instructions, and modules stored in the memory 1202.
The memory 1202 may include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created according to the use of the electronic device of the detection method of the sensing device, etc. In addition, the memory 1202 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 1202 optionally includes memory remotely located with respect to the processor 1201, which may be connected to the electronic device of the detection method of the sensing device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the detection method of the sensing device may further include: an input device 1203 and an output device 1204. The processor 1201, the memory 1202, the input device 1203, and the output device 1204 may be connected by a bus or otherwise; connection by a bus is illustrated in fig. 12.
The input device 1203 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device of the detection method of the sensing device, and may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input device. The output device 1204 may include a display apparatus, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system that overcomes the defects of difficult management and weak service scalability in traditional physical hosts and Virtual Private Server (VPS) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed embodiments are achieved, and are not limited herein.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (21)

1. A method of detecting a sensing device, comprising:
Acquiring information of at least one first obstacle identified by first sensing equipment;
Acquiring information of at least one second obstacle identified by at least one second sensing device, wherein the accuracy of the first sensing device is lower than that of the second sensing device;
performing obstacle matching on the information of each first obstacle and the information of each second obstacle, and determining matching result data; the information of the second obstacle is used as reference information for detecting the first sensing equipment;
and determining a detection result of the first sensing device according to the matching result data.
2. The method of claim 1, wherein the information of the first obstacles includes a position and a projection profile of the first obstacles, the information of the second obstacles includes a position and a projection profile of the second obstacles, and performing obstacle matching on the information of each of the first obstacles and the information of each of the second obstacles and determining matching result data comprises:
comparing the position of each first obstacle with the position of each second obstacle;
Determining a rate of overlap of a projected profile of the first obstacle with a projected profile of the second obstacle in the event that the first obstacle has an overlapping relationship with the second obstacle;
Under the condition that the overlapping rate is larger than a preset overlapping threshold value, determining that the first obstacle and the second obstacle are successfully matched;
And counting the number of the barriers successfully matched, and obtaining first matching result data.
3. The method of claim 2, wherein performing obstacle matching on the information of each first obstacle and the information of each second obstacle, and determining matching result data, further comprises:
determining that the first obstacle fails to match with the second obstacle under the condition that the overlapping rate is smaller than a preset overlapping threshold value;
And counting the number of obstacles failing to match, and obtaining second matching result data.
4. A method according to claim 3, wherein determining the detection result of the first sensing device from the matching result data comprises:
And determining the identification recall rate of the first sensing equipment according to the first matching result data and the second matching result data.
5. The method of claim 2, further comprising:
And in the case that a second obstacle which is not matched with the position of the first obstacle exists in the second obstacles, obtaining third matching result data according to the number of the second obstacles which are not matched with the position.
6. The method of claim 5, wherein determining the detection result of the first sensing device based on the matching result data comprises:
determining a recognition accuracy of the first sensing device according to the first matching result data and the third matching result data.
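Analogously, claims 5 and 6 combine the matched count with the number of reference obstacles whose position no detection matched; a sketch of one plausible reading of that ratio:

    def recognition_accuracy(first_data: int, third_data: int) -> float:
        # first_data: successfully matched obstacles (first matching result data)
        # third_data: second (reference) obstacles whose position matched
        #             no first obstacle (third matching result data)
        total = first_data + third_data
        return first_data / total if total else 0.0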
7. The method of claim 2, wherein the information of the first obstacle includes a category of the first obstacle, the information of the second obstacle includes a category of the second obstacle, and the determining the detection result of the first sensing device according to the matching result data includes:
matching the category of the first obstacle with the category of the second obstacle in the case that the first obstacle and the second obstacle are successfully matched;
determining that the category of the first obstacle is the same as the category of the second obstacle in the case that the two categories match;
counting the number of obstacles with the same category to obtain category matching result data;
and determining a classification accuracy of the first sensing device according to the category matching result data and the first matching result data.
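A sketch of claim 7's classification accuracy, assuming matched pairs carry a category label as in the Obstacle sketch above:

    def category_match_data(matches):
        # matches: iterable of (first_obstacle, second_obstacle) pairs
        return sum(1 for det, ref in matches if det.category == ref.category)

    def classification_accuracy(category_data: int, first_data: int) -> float:
        # Share of successfully matched pairs whose categories agree.
        return category_data / first_data if first_data else 0.0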
8. The method of claim 2, wherein the information of the first obstacle includes an attribute of the first obstacle, the information of the second obstacle includes an attribute of the second obstacle, and the determining the detection result of the first sensing device according to the matching result data includes:
determining an attribute error between the attribute of the matched first obstacle and the attribute of the second obstacle in the case that the first obstacle and the second obstacle are successfully matched;
and determining an attribute difference between the first sensing device and the second sensing device according to the attribute errors and the first matching result data.
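For claim 8, one plausible attribute error is a mean absolute error over matched pairs; the attribute name "speed" below is illustrative and not taken from the claims:

    def attribute_difference(matches, attr: str = "speed") -> float:
        # Mean absolute error of one attribute over matched pairs.
        errors = [abs(getattr(det, attr) - getattr(ref, attr))
                  for det, ref in matches]
        return sum(errors) / len(errors) if errors else 0.0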
9. The method of claim 1, wherein the information of the at least one second obstacle is obtained by the second sensing device fusing, using a fusion algorithm, information of first obstacles identified by at least two first sensing devices.
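Claim 9 leaves the fusion algorithm open; a deliberately naive sketch that simply averages the positions of obstacles already associated across at least two first sensing devices:

    def fuse_positions(associated):
        # associated: list of groups; each group holds the same obstacle's
        # (x, y) position as reported by each contributing device.
        fused = []
        for group in associated:
            n = len(group)
            fused.append((sum(p[0] for p in group) / n,
                          sum(p[1] for p in group) / n))
        return fused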
10. A sensing device detection apparatus comprising:
a first acquisition module configured to acquire information of at least one first obstacle identified by a first sensing device;
a second acquisition module configured to acquire information of at least one second obstacle identified by at least one second sensing device, wherein the accuracy of the first sensing device is lower than that of the second sensing device;
a first determination module configured to perform obstacle matching on the information of each first obstacle and the information of each second obstacle and to determine matching result data, wherein the information of the second obstacle serves as reference information for detecting the first sensing device;
and a second determination module configured to determine the detection result of the first sensing device according to the matching result data.
11. The apparatus of claim 10, wherein the information of the first obstacle comprises a position and a projected profile of the first obstacle, and the information of the second obstacle comprises a position and a projected profile of the second obstacle, and wherein the first determination module comprises:
a position comparison submodule configured to compare the position of each first obstacle with the position of each second obstacle;
an overlap rate determination submodule configured to determine an overlap rate of the projected profile of the first obstacle with the projected profile of the second obstacle in the case that the first obstacle and the second obstacle have an overlapping relationship;
a first determination submodule configured to determine that the first obstacle and the second obstacle are successfully matched when the overlap rate is greater than a preset overlap threshold;
and a first statistics submodule configured to count the number of successfully matched obstacles to obtain first matching result data.
12. The apparatus of claim 11, wherein the first determination module further comprises:
a second determination submodule configured to determine that the first obstacle fails to match the second obstacle when the overlap rate is less than the preset overlap threshold;
and a second statistics submodule configured to count the number of obstacles that fail to match to obtain second matching result data.
13. The apparatus of claim 12, wherein the second determination module comprises:
a first determination submodule configured to determine the recognition recall rate of the first sensing device according to the first matching result data and the second matching result data.
14. The apparatus of claim 11, wherein the first determination module further comprises:
a data obtaining submodule configured to obtain third matching result data according to the number of position-unmatched second obstacles in the case that, among the second obstacles, there is a second obstacle whose position matches no first obstacle.
15. The apparatus of claim 14, wherein the second determination module comprises:
a second determination submodule configured to determine the recognition accuracy of the first sensing device according to the first matching result data and the third matching result data.
16. The apparatus of claim 11, wherein the information of the first obstacle comprises a category of the first obstacle and the information of the second obstacle comprises a category of the second obstacle, the first determination module comprising:
a category matching submodule configured to match the category of the first obstacle with the category of the second obstacle when the first obstacle and the second obstacle are successfully matched;
a category determination submodule configured to determine that the category of the first obstacle is the same as the category of the second obstacle in the case that the two categories match;
a third statistics submodule configured to count the number of obstacles with the same category to obtain category matching result data;
and a classification determination submodule configured to determine a classification accuracy of the first sensing device according to the category matching result data and the first matching result data.
17. The apparatus of claim 11, wherein the information of the first obstacle comprises an attribute of the first obstacle and the information of the second obstacle comprises an attribute of the second obstacle, the second determination module comprising:
an attribute error determination submodule configured to determine an attribute error between the attribute of the matched first obstacle and the attribute of the second obstacle in the case that the first obstacle and the second obstacle are successfully matched;
and an attribute difference determination submodule configured to determine an attribute difference between the first sensing device and the second sensing device according to the attribute errors and the first matching result data.
18. The apparatus of claim 10, wherein the information of the at least one second obstacle is obtained by the second sensing device fusing, using a fusion algorithm, information of first obstacles identified by at least two first sensing devices.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-9.
21. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-9.
CN202010601938.4A 2020-06-29 2020-06-29 Sensing device detection method, sensing device detection apparatus, sensing device detection device and storage medium Active CN111753765B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010601938.4A CN111753765B (en) 2020-06-29 2020-06-29 Sensing device detection method, sensing device detection apparatus, sensing device detection device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010601938.4A CN111753765B (en) 2020-06-29 2020-06-29 Sensing device detection method, sensing device detection apparatus, sensing device detection device and storage medium

Publications (2)

Publication Number Publication Date
CN111753765A CN111753765A (en) 2020-10-09
CN111753765B true CN111753765B (en) 2024-05-31

Family

ID=72677755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010601938.4A Active CN111753765B (en) 2020-06-29 2020-06-29 Sensing device detection method, sensing device detection apparatus, sensing device detection device and storage medium

Country Status (1)

Country Link
CN (1) CN111753765B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112347986A (en) * 2020-11-30 2021-02-09 上海商汤临港智能科技有限公司 Sample generation method, neural network training method, intelligent driving control method and device
CN112581526A (en) * 2020-12-11 2021-03-30 北京百度网讯科技有限公司 Evaluation method, device, equipment and storage medium for obstacle detection
CN112541475B (en) 2020-12-24 2024-01-19 北京百度网讯科技有限公司 Sensing data detection method and device
CN112764013B (en) * 2020-12-25 2024-03-01 北京百度网讯科技有限公司 Method, device, equipment and storage medium for testing sensing system of automatic driving vehicle
CN113463720B (en) * 2021-06-30 2023-02-17 广西柳工机械股份有限公司 System and method for identifying contact material of loader bucket
CN113205087B (en) * 2021-07-06 2022-06-03 中汽创智科技有限公司 Perception information processing method, device, equipment and computer readable storage medium
CN115691099A (en) * 2021-07-30 2023-02-03 华为技术有限公司 Sensing capability information generation method, using method and device
CN113963327B (en) * 2021-09-06 2023-09-08 阿波罗智能技术(北京)有限公司 Obstacle detection method, obstacle detection device, autonomous vehicle, apparatus, and storage medium
CN113893142B (en) * 2021-10-08 2024-06-07 四川康佳智能终端科技有限公司 Blind person obstacle avoidance method, system, equipment and readable storage medium
CN114202830A (en) * 2021-11-24 2022-03-18 湖南湘商智能科技有限公司 Intelligent lifting system for garage door
CN114596706B (en) * 2022-03-15 2024-05-03 阿波罗智联(北京)科技有限公司 Detection method and device of road side perception system, electronic equipment and road side equipment
CN115633085A (en) * 2022-08-31 2023-01-20 东风汽车集团股份有限公司 Driving scene image display method, device, equipment and storage medium
CN117975404A (en) * 2022-10-26 2024-05-03 北京三快在线科技有限公司 Direction information determining method and automatic driving vehicle
CN116434041B (en) * 2022-12-05 2024-06-21 北京百度网讯科技有限公司 Mining method, device and equipment for error perception data and automatic driving vehicle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956527A (en) * 2016-04-22 2016-09-21 百度在线网络技术(北京)有限公司 Method and device for evaluating barrier detection result of driverless vehicle
CN108646739A (en) * 2018-05-14 2018-10-12 北京智行者科技有限公司 A kind of sensor information fusion method
CN109738198A (en) * 2018-12-14 2019-05-10 北京百度网讯科技有限公司 Detection method, device, equipment and the storage medium of vehicle environmental sensing capability
CN110069408A (en) * 2019-04-11 2019-07-30 杭州飞步科技有限公司 Automatic driving vehicle sensory perceptual system test method and device
CN110287832A (en) * 2019-06-13 2019-09-27 北京百度网讯科技有限公司 High-Speed Automatic Driving Scene barrier perception evaluating method and device
WO2020083024A1 (en) * 2018-10-24 2020-04-30 腾讯科技(深圳)有限公司 Obstacle identification method and device, storage medium, and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109738904A (en) * 2018-12-11 2019-05-10 北京百度网讯科技有限公司 A kind of method, apparatus of detection of obstacles, equipment and computer storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956527A (en) * 2016-04-22 2016-09-21 百度在线网络技术(北京)有限公司 Method and device for evaluating barrier detection result of driverless vehicle
CN108646739A (en) * 2018-05-14 2018-10-12 北京智行者科技有限公司 A kind of sensor information fusion method
WO2020083024A1 (en) * 2018-10-24 2020-04-30 腾讯科技(深圳)有限公司 Obstacle identification method and device, storage medium, and electronic device
CN109738198A (en) * 2018-12-14 2019-05-10 北京百度网讯科技有限公司 Detection method, device, equipment and the storage medium of vehicle environmental sensing capability
CN110069408A (en) * 2019-04-11 2019-07-30 杭州飞步科技有限公司 Automatic driving vehicle sensory perceptual system test method and device
CN110287832A (en) * 2019-06-13 2019-09-27 北京百度网讯科技有限公司 High-Speed Automatic Driving Scene barrier perception evaluating method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Lyapunov based driverless vehicle in obstacle free environment; Sheikh Azid; IEEE Xplore; 20170529; Full text *
Obstacle Detection and Tracking Based on Multi-Sensor Data Fusion; Lu Feng; Xu Youchun; Li Yongle; Wang Rendong; Wang Dongmin; Journal of Military Transportation University; 20180225 (02); Full text *
Multi-Source Environment Perception System for Intelligent-Rail Trams; Hu Yunqing; Feng Jianghua; Long Teng; Pan Wenbo; Yuan Xiwen; Lin Jun; Huang Ruipeng; Hou Zhichao; Control and Information Technology; 20200205 (01); Full text *

Also Published As

Publication number Publication date
CN111753765A (en) 2020-10-09

Similar Documents

Publication Publication Date Title
CN111753765B (en) Sensing device detection method, sensing device detection apparatus, sensing device detection device and storage medium
CN111401208B (en) Obstacle detection method and device, electronic equipment and storage medium
KR102543952B1 (en) Lane line determination method and apparatus, lane line positioning accuracy evaluation method and apparatus, device, and program
CN111311925B (en) Parking space detection method and device, electronic equipment, vehicle and storage medium
CN111324115B (en) Obstacle position detection fusion method, obstacle position detection fusion device, electronic equipment and storage medium
CN110979346B (en) Method, device and equipment for determining lane where vehicle is located
CN111310840B (en) Data fusion processing method, device, equipment and storage medium
CN111784836B (en) High-precision map generation method, device, equipment and readable storage medium
CN111784835B (en) Drawing method, drawing device, electronic equipment and readable storage medium
CN111563450B (en) Data processing method, device, equipment and storage medium
EP3910533A1 (en) Method, apparatus, electronic device, and storage medium for monitoring an image acquisition device
EP3816663A2 (en) Method, device, equipment, and storage medium for determining sensor solution
CN111523471B (en) Method, device, equipment and storage medium for determining lane where vehicle is located
US20210291878A1 (en) Method and apparatus for annotating virtual lane at crossing
CN113370911A (en) Pose adjusting method, device, equipment and medium of vehicle-mounted sensor
US11769260B2 (en) Cross-camera obstacle tracking method, system and medium
CN112147632A (en) Method, device, equipment and medium for testing vehicle-mounted laser radar perception algorithm
CN111079079A (en) Data correction method and device, electronic equipment and computer readable storage medium
US20230072632A1 (en) Obstacle detection method, electronic device and storage medium
CN111597987A (en) Method, apparatus, device and storage medium for generating information
KR20220093382A (en) Obstacle detection method and device
CN114528941A (en) Sensor data fusion method and device, electronic equipment and storage medium
CN111640301B (en) Fault vehicle detection method and fault vehicle detection system comprising road side unit
CN112528846A (en) Evaluation method, device, equipment and storage medium for obstacle detection
CN113011298A (en) Truncated object sample generation method, target detection method, road side equipment and cloud control platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant