CN111753765A - Detection method, device and equipment of sensing equipment and storage medium

Detection method, device and equipment of sensing equipment and storage medium

Info

Publication number
CN111753765A
Authority
CN
China
Prior art keywords
obstacle
determining
information
matching result
result data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010601938.4A
Other languages
Chinese (zh)
Inventor
Li Dan
Li Jianping
Chen Qian
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010601938.4A
Publication of CN111753765A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867 - Combination of radar systems with cameras
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S13/93 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a detection method, apparatus, device and storage medium for a sensing device, and relates to the fields of image processing and automatic driving. The specific implementation scheme is as follows: acquiring information of at least one first obstacle identified by a first sensing device; acquiring information of at least one second obstacle identified by at least one second sensing device, wherein the precision of the first sensing device is lower than that of the second sensing device; performing obstacle matching on the information of each first obstacle and the information of each second obstacle, and determining matching result data; and determining the detection result of the first sensing device according to the matching result data. The technical scheme provided by the application can effectively improve the detection efficiency of the first sensing device and reduce the detection cost.

Description

Detection method, device and equipment of sensing equipment and storage medium
Technical Field
The application relates to the field of data processing, in particular to the fields of automatic driving and image processing.
Background
In the field of automatic driving, a sensing device mainly relies on image data and radar data to perform perception and identification of obstacles, and provides the identified obstacle information to downstream devices.
Detection of existing sensing devices basically depends on labeled data, but this detection method suffers from low detection efficiency and high detection cost.
Disclosure of Invention
The application provides a detection method, a detection device, equipment and a storage medium of sensing equipment.
According to a first aspect of the present application, there is provided a detection method of a sensing device, including:
acquiring information of at least one first obstacle identified by first sensing equipment;
acquiring information of at least one second obstacle identified by at least one second sensing device, wherein the precision of the first sensing device is lower than that of the second sensing device;
carrying out obstacle matching on the information of each first obstacle and the information of each second obstacle, and determining matching result data;
and determining the detection result of the first sensing equipment according to the matching result data.
According to a second aspect of the present application, there is provided a detection apparatus for a sensing device, comprising:
the first acquisition module is used for acquiring information of at least one first obstacle identified by the first sensing equipment;
the second acquisition module is used for acquiring information of at least one second obstacle identified by at least one second sensing device, wherein the precision of the first sensing device is lower than that of the second sensing device;
the first determining module is used for carrying out obstacle matching on the information of each first obstacle and the information of each second obstacle to determine matching result data;
and the second determining module is used for determining the detection result of the first sensing equipment according to the matching result data.
According to a third aspect of the present application, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform any of the methods described above.
According to a fourth aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform any of the methods described above.
According to the detection method, the information of the second obstacle identified by the second sensing equipment with higher precision is used as the detection reference object of the first sensing equipment, and then obstacle matching can be carried out on the information of the first obstacle identified by the first sensing equipment and the information of the second obstacle identified by the second sensing equipment, matching result data are determined, and the detection result of the first sensing equipment is determined according to the matching result data. Because the detection result of the first sensing equipment does not depend on the labeled data, a processing flow of manual labeling is not needed, and the time and labor cost of manual labeling can be saved; and the first sensing equipment and the second sensing equipment can be identified in real time, so that the determination of the detection result can be carried out in real time, the detection efficiency of the first sensing equipment can be effectively improved, and the detection cost can be reduced.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a schematic flow chart of a detection method of a sensing device according to the present application;
FIG. 2 is a schematic diagram of one configuration of a first perception device and a second perception device according to the present application;
FIG. 3A is another schematic structural diagram of a first perception device and a second perception device according to the present application;
FIG. 3B is a schematic diagram of yet another configuration of the first and second sensing devices according to the application;
FIG. 4 is a schematic flow chart of step S103 in the detection method of the sensing device according to the present application;
FIG. 5 is a schematic flow chart of step S104 in the detection method of the sensing device according to the present application;
FIG. 6 is another schematic flow chart of step S104 in the detection method of the sensing device according to the present application;
FIG. 7 is a schematic diagram of one configuration of a detection device of a sensing apparatus according to the present application;
FIG. 8 is a schematic diagram of a first determining module of the detecting device of the sensing apparatus according to the present application;
FIG. 9 is a schematic diagram of a second determining module of the detecting device of the sensing apparatus according to the present application;
FIG. 10 is a schematic diagram of another structure of a first determining module in the detecting device of the sensing apparatus according to the present application;
FIG. 11 is a schematic diagram of another structure of a second determining module in the detecting device of the sensing apparatus according to the present application;
FIG. 12 is a block diagram of an electronic device for implementing the detection method of the sensing device according to an embodiment of the present application.
Detailed Description
A typical detection method for a sensing device basically proceeds as follows: road test data collected by an autonomous vehicle driving on a road is used as initial data; the initial data is manually labeled to obtain labeled data; the labeled data and the initial data are then respectively input into the simulated sensing device for perception and identification of obstacles; finally, the perception results are compared to determine the perception performance of the sensing device. However, producing the labeled data requires multiple processing flows, such as collecting the drive test data, determining the labeling format, screening the data to be labeled, dispatching labeling tasks, and manually labeling the data to be labeled; manual labeling alone takes at least three months. This makes the production of labeled data time consuming and labor intensive, resulting in low detection efficiency and high detection cost for the sensing device.
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 shows a schematic flow chart of a detection method of a perception device according to the present application. As shown in fig. 1, the detection method of the sensing device may include:
s101, acquiring information of at least one first obstacle identified by first sensing equipment;
step S102, obtaining information of at least one second obstacle identified by at least one second sensing device, wherein the precision of the first sensing device is smaller than that of the second sensing device;
Step S103, performing obstacle matching on the information of each first obstacle and the information of each second obstacle, and determining matching result data;
Step S104, determining the detection result of the first sensing device according to the matching result data.
Here, the information of at least one first obstacle output by the first sensing device comprises information of one or more first obstacles, and the information of at least one second obstacle output by the second sensing device comprises information of one or more second obstacles; that is, "at least one" covers one, two, or more obstacles.
For example, the matched content may include the positions of the first obstacle and the second obstacle. By matching the positions of the first obstacles and the second obstacles, the overlapping relationship between them, the number of first obstacles matched with second obstacles, and the number of obstacles that failed to match can be obtained, and the detection result of the first sensing device can then be determined using the matching result data.
According to the detection method, the information of the second obstacle identified by the second sensing equipment with higher precision is used as the detection reference information of the first sensing equipment, and then the information of the first obstacle identified by the first sensing equipment and the information of the second obstacle identified by the second sensing equipment are subjected to obstacle matching to determine matching result data, and the detection result of the first sensing equipment is determined according to the matching result data. Because the detection result of the first sensing equipment does not depend on the labeled data, a processing flow of manual labeling is not needed, and the time and labor cost of manual labeling can be saved; and the first sensing equipment and the second sensing equipment can be identified in real time, so that the determination of the detection result can be carried out in real time, the detection efficiency of the first sensing equipment can be effectively improved, and the detection cost can be reduced.
It should be noted that when initial data is manually labeled, both the initial data and the labeled data need to be stored, and a large amount of storage space is occupied when the labeled data is used for detecting the sensing device, consuming substantial storage resources. In the present detection method, the first sensing device and the second sensing device can perform identification in real time, so the detection result of the first sensing device can be determined in real time from the information of each first obstacle and each second obstacle. The detection result can thus be determined synchronously during the road testing of the autonomous vehicle, the perception accuracy of the first sensing device can be determined in real time, and no storage space needs to be spent on labeled data, saving storage resources.
In addition, for detection methods that depend on labeled data, an additional detection algorithm needs to be written to adapt the method to the format of the labeled data, which consumes time and labor; the present detection method does not depend on labeled data and instead matches against the identification result of the more accurate second sensing device, saving the time and labor cost of writing an additional detection algorithm.
Moreover, for detection methods that depend on labeled data, when a sensor in the sensing device is updated, the drive test data needs to be collected and labeled again, so the update cost of the labeled data is high. In the present detection method, the identification results of the first and second sensing devices adapt to sensor updates: the information of the first obstacle and the second obstacle can be obtained simply by collecting drive test data, and the detection result of the first sensing device can then be determined, so the cost of information updating and detection is low. For example, the drive test data may be data collected by the autonomous vehicle while traveling on a road, including but not limited to information about the traveling environment and obstacles, such as pedestrians, vehicles, railings, road shoulders, and other objects appearing on the road.
The first sensing device and the second sensing device in the above detection method are described below with reference to examples.
In one example, as shown in FIG. 2, the first sensing device 201 may be a device that perceives and identifies information of a first obstacle using initial data collected by a first sensor. The first sensor includes, but is not limited to, any of a laser radar (lidar), a camera, and a millimeter wave radar (radar), and the initial data includes, but is not limited to, any of point cloud data, image data, and millimeter wave radar data. The second sensing device 202 may be a device that perceives and identifies information of a second obstacle using initial data of a second sensor; for example, the second sensing device 202 of FIG. 2 has a 40-line lidar. The second sensor is more accurate than the first sensor: for example, the first sensor may be a 4-line lidar with a 5 meter recognition range, while the second sensor may be a 40-line lidar with a 120 meter recognition range. In this way, the sensing accuracy of the second sensing device 202 can be higher than that of the first sensing device 201, so that the information of the second obstacle identified by the second sensing device 202 is more accurate and can be used directly for detecting any first sensing device.
The first sensing device 201 and the second sensing device 202 may perform perception and identification on the same automatic driving environment at the same time, to ensure that they perceive and identify the same obstacles. For example, the information of the first obstacle may include a position, a projection contour, a category, an orientation, a speed, and the like of the first obstacle; correspondingly, the information of the second obstacle may include a position, a projection contour, a category, an orientation, a speed, and the like of the second obstacle. The information of the second obstacle perceived by the second sensing device 202 is substantially of the same kind as the information of the first obstacle perceived by the first sensing device 201, except that its accuracy may be higher, so that the information of the second obstacle perceived by the second sensing device 202 can serve as the reference information for detecting the first sensing device 201.
In one example, as shown in fig. 3A, the second sensing device 202 may also be a device that performs fusion processing on information of the first obstacle output by the at least two first sensing devices 201 by using a fusion algorithm to identify the second obstacle. For example, the number of the first sensing devices may be three, and the three first sensing devices are respectively used for sensing and identifying the first obstacle by using point cloud data acquired by a laser radar (lidar), image data acquired by a camera (camera), and millimeter wave radar data acquired by a millimeter wave radar (radar); the second sensing equipment adopts a fusion algorithm to perform fusion processing on the first obstacles output by the three first sensing equipment so as to identify the information of the second obstacles. Because the position identification accuracy of the laser radar (lidar) to the obstacle is high, the type classification accuracy of the camera (camera) to the obstacle is high, and the sensitivity of the millimeter wave radar (radar) to the speed identification of the obstacle is high, the second sensing equipment performs fusion processing on the first obstacles output by the three first sensing equipment, the information of the second obstacle identified by the second sensing equipment can have the advantages of the three sensors, and the second sensing equipment has higher sensing precision than the first sensing equipment with a single sensor.
The fusion algorithm may be fusion of the first obstacles output by the three first sensing devices by using a Dempster synthesis rule, or other fusion algorithms, and the type of the fusion algorithm is not limited in the present application.
For example, the information of the first obstacle output by each first sensing device may carry a confidence level, and the second sensing device may screen the obstacles with high confidence for fusion according to these confidence levels. For example, when the confidences of the positions recognized by the three first sensing devices for the same first obstacle are 80%, 70%, and 50%, respectively, the second sensing device takes the position with 80% confidence as the position of the second obstacle corresponding to that first obstacle. As such, the accuracy of the second obstacle output by the second sensing device is higher than the accuracy of the first obstacle output by any single first sensing device. Therefore, the second sensing device can fuse the information of the first obstacles output by at least two first sensing devices and output the information of the second obstacles, and the fused information of the second obstacles can in turn be used to detect each first sensing device; the initial data needs no manual labeling, so the detection is fast and low-cost.
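For illustration, a minimal sketch of such confidence-based screening is given below (the code sketches in this description use Python; the per-attribute (value, confidence) layout and the function names are assumptions of the sketch, not a data format defined by the application):

```python
# Minimal sketch of confidence-based fusion. Each first sensing device is
# assumed to report an observation of the same obstacle as a dict mapping
# attribute name -> (value, confidence); this layout is hypothetical.

def fuse_observations(observations):
    """Keep, for every attribute, the value reported with the highest confidence."""
    fused = {}
    attributes = set().union(*(obs.keys() for obs in observations))
    for attr in attributes:
        candidates = [obs[attr] for obs in observations if attr in obs]
        value, _confidence = max(candidates, key=lambda vc: vc[1])
        fused[attr] = value
    return fused

# Example from the text: positions reported with confidences 0.8, 0.7 and 0.5;
# the position with 80% confidence becomes the second obstacle's position.
reports = [
    {"position": ((10.0, 2.0), 0.8)},
    {"position": ((10.3, 2.1), 0.7)},
    {"position": ((9.5, 1.8), 0.5)},
]
print(fuse_observations(reports))  # {'position': (10.0, 2.0)}
```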
In an example, in order to improve the sensing accuracy of the second sensing device, when the second sensing device fuses information of first obstacles output by at least two first sensing devices, the V2X (vehicle-to-everything, also referred to as internet of vehicles) information of obstacles output by a V2X device for the same automatic driving environment, together with map information output by a high-precision map, may also be input into the second sensing device to participate in the fusion. Combined with a high-precision map, the V2X device can provide the second sensing device with information about obstacles in occluded areas, supplementing the second sensing device's input and improving its sensing effect. In addition, historical information of the first obstacle can be used during fusion: operations such as Kalman filtering can be performed on the information of the first obstacle to reduce perception noise and improve the detection accuracy of the first sensing device.
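The application does not specify the Kalman-filtering step further; purely as an illustrative sketch under that caveat, a one-dimensional filter could smooth a noisy speed track of a first obstacle as follows (the noise parameters are arbitrary assumptions):

```python
def kalman_smooth_speed(measurements, process_var=0.1, meas_var=1.0):
    """One-dimensional Kalman filter over a sequence of speed measurements.

    process_var and meas_var are illustrative noise parameters, not values
    taken from the application. Returns the filtered speed estimates.
    """
    estimate, variance = measurements[0], 1.0
    smoothed = [estimate]
    for z in measurements[1:]:
        variance += process_var                   # predict: uncertainty grows
        gain = variance / (variance + meas_var)   # update: Kalman gain
        estimate += gain * (z - estimate)
        variance *= 1.0 - gain
        smoothed.append(estimate)
    return smoothed

print(kalman_smooth_speed([5.0, 5.4, 4.8, 5.1, 5.3]))
```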
In one example, as shown in fig. 3B, since the sensing device with the high-precision sensor has higher sensing precision than the sensing device adopting the fusion algorithm, it is also possible to use the sensing device adopting the fusion algorithm as the first sensing device 201, use the sensing device with the high-precision sensor as the second sensing device, and use the obstacle information recognized by the second sensing device 202 with the high-precision sensor as the reference information for detecting the first sensing device 201 adopting the fusion algorithm. For example, the high-precision sensor may be a 40-line lidar.
In one embodiment, the information of the first obstacle may include a position and a projection profile of the first obstacle, and the information of the second obstacle includes a position and a projection profile of the second obstacle, as shown in fig. 4, and step S103 may include:
step S401, comparing the position of each first obstacle with the position of each second obstacle.
In one example, the position of the first obstacle and the position of the second obstacle may be coordinate positions, and the position comparison may be:
determining a distance between the coordinate position of the first obstacle and the coordinate position of the second obstacle;
and determining the overlapping relation between the first obstacle and the second obstacle when the distance is smaller than a preset distance threshold value.
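A minimal sketch of this position comparison, assuming 2D coordinate positions and an illustrative threshold value:

```python
import math

def has_overlap_relation(pos_a, pos_b, dist_threshold=2.0):
    """Return True when two coordinate positions are closer than the preset
    distance threshold, i.e. the obstacles have an overlapping relation.
    The 2.0 m default is an illustrative value, not one from the application."""
    return math.hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1]) < dist_threshold
```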
Step S402, under the condition that the first obstacle and the second obstacle have an overlapping relation, determining the overlapping rate of the projection outline of the first obstacle and the projection outline of the second obstacle.
The overlap rate of the projection contours refers to the ratio of the area of the intersection of the projection contour of the first obstacle and the projection contour of the second obstacle to the area of their union. For example, if the projection contour of the first obstacle has an area of 1 m², the projection contour of the second obstacle has an area of 0.75 m², and the area of their intersection is 0.5 m², then the area of the union of the two projection contours is 1 + 0.75 - 0.5 = 1.25 m², and the overlap rate of the projection contour of the first obstacle and the projection contour of the second obstacle is 0.5 / 1.25 = 0.4.
In one example, the projected outline may be a projected box of the obstacle, which may be used to represent a projected area of the obstacle. For ease of calculation, the projection box may be a rectangular box covering the obstacle.
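With rectangular projection boxes, the overlap rate reduces to the intersection-over-union of two axis-aligned rectangles. A sketch, assuming boxes are given as (x_min, y_min, x_max, y_max) tuples in a common ground plane:

```python
def box_area(box):
    """Area of an axis-aligned box (x_min, y_min, x_max, y_max)."""
    return (box[2] - box[0]) * (box[3] - box[1])

def overlap_rate(box_a, box_b):
    """Intersection of the two boxes divided by their union."""
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    intersection = ix * iy
    union = box_area(box_a) + box_area(box_b) - intersection
    return intersection / union if union > 0 else 0.0

# Reproduces the worked example: areas 1.0 and 0.75 with a 0.5 intersection
# give an overlap rate of 0.5 / 1.25 = 0.4.
print(overlap_rate((0.0, 0.0, 1.0, 1.0), (0.5, 0.0, 1.25, 1.0)))  # 0.4
```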
And S403, determining that the first obstacle and the second obstacle are successfully matched under the condition that the overlapping rate is greater than a preset overlapping threshold value.
For example, if the preset overlap threshold is set to 0.7, the first obstacle and the second obstacle can be determined to match successfully only when the overlap rate of their projection contours is greater than 0.7. The preset overlap threshold can be selected and adjusted according to actual needs and is not limited in this application.
And if the first obstacle is successfully matched with the second obstacle, the first obstacle and the second obstacle are the same obstacle. For example, when the first sensing device and the second sensing device sense the same automatic driving environment in real time, the first obstacle and the second obstacle are successfully matched, which may indicate that the first obstacle and the second obstacle are the same obstacle in the same automatic driving environment.
And S404, counting the number of obstacles successfully matched to obtain first matching result data.
In this embodiment, the position of each first obstacle is compared with the position of each second obstacle to determine the overlapping relationships between them; a first obstacle and a second obstacle are then determined to match successfully when the overlap rate of their projection contours is greater than the preset overlap threshold. Counting the successfully matched obstacles yields the first matching result data, i.e. the number of matched pairs of first and second obstacles, which provides statistical data for determining the recognition accuracy of the first sensing device. Moreover, determining matches between second obstacles and first obstacles by both position and overlap rate improves the matching accuracy.
In one embodiment, step S103 may include:
determining that the first obstacle and the second obstacle fail to be matched under the condition that the overlapping rate is smaller than a preset overlapping threshold value;
and counting the number of obstacles failing to be matched to obtain second matching result data.
In this embodiment, counting the obstacles for which matching between a first obstacle and a second obstacle failed yields the second matching result data, i.e. the number of failed matches, which provides statistical data for determining the detection result of the first sensing device.
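Combining the position gating and the overlap test, a matching loop producing the first and second matching result data could look as follows. The greedy pairing and the dict layout of an obstacle are assumptions of this sketch, since the application does not fix a pairing strategy; it reuses has_overlap_relation and overlap_rate from the sketches above.

```python
def match_obstacles(first_obstacles, second_obstacles,
                    dist_threshold=2.0, overlap_threshold=0.7):
    """Return (tp, fn): counts of successful and failed matches.

    Each obstacle is assumed to be a dict with "position" (x, y) and
    "box" (x_min, y_min, x_max, y_max); both thresholds are illustrative.
    """
    tp = fn = 0
    for first in first_obstacles:
        for second in second_obstacles:
            if has_overlap_relation(first["position"], second["position"],
                                    dist_threshold):
                if overlap_rate(first["box"], second["box"]) > overlap_threshold:
                    tp += 1  # counted into the first matching result data
                else:
                    fn += 1  # counted into the second matching result data
                break  # each first obstacle is judged against one candidate
    return tp, fn
```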
In one embodiment, determining the detection result of the first sensing device according to the matching result data may include: and determining the recognition recall rate of the first sensing equipment according to the first matching result data and the second matching result data.
For example, if the first matching result data (the number of obstacles successfully matched between first and second obstacles) is tp and the second matching result data (the number of obstacles for which matching failed) is fn, the recognition recall of the first sensing device may be calculated by the formula recall = tp / (tp + fn). In this way, a first perception detection result of the first sensing device can be obtained, and the recognition recall reflects the perception accuracy of the first sensing device.
In one embodiment, the detection method may further include: in the case where there is a second obstacle that does not match the position of the first obstacle among the second obstacles, third matching result data is obtained according to the number of second obstacles whose positions do not match.
The third matching result data is the data that the first sensing device failed to match; in other words, it is the number of obstacles that the first sensing device did not recognize, which provides statistical data for the recognition accuracy of the first sensing device.
In one embodiment, determining the detection result of the first sensing device according to the matching result data may include: determining the identification accuracy of the first sensing device according to the first matching result data and the third matching result data.
For example, if the third matching result data is denoted fp, the recognition accuracy of the first sensing device may be calculated by the formula precision = tp / (tp + fp). In this way, a second perception detection result of the first sensing device can be obtained, and the recognition accuracy reflects the perception precision of the first sensing device.
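In code, the two formulas amount to the following (the zero-division guards are an addition of this sketch):

```python
def recall(tp, fn):
    """Recognition recall: recall = tp / (tp + fn)."""
    return tp / (tp + fn) if tp + fn else 0.0

def precision(tp, fp):
    """Recognition accuracy: precision = tp / (tp + fp)."""
    return tp / (tp + fp) if tp + fp else 0.0

print(recall(tp=80, fn=20))     # 0.8
print(precision(tp=80, fp=10))  # 0.888...
```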
In one embodiment, the information of the first obstacle includes a category of the first obstacle, and the information of the second obstacle includes a category of the second obstacle, as shown in fig. 5, step S104 may include:
step S501, under the condition that the first obstacle and the second obstacle are successfully matched, the type of the first obstacle and the type of the second obstacle are matched;
step S502, determining that the type of the first obstacle is the same as that of the second obstacle when the type of the first obstacle is matched with that of the second obstacle;
step S503, counting the number of obstacles with the same category to obtain category matching result data;
and step S504, determining the classification accuracy of the first sensing equipment according to the class matching result data and the first matching result data.
In one example, when the category of a successfully matched first obstacle is "pedestrian" and the category of the corresponding second obstacle is also "pedestrian", the category of the first obstacle is the same as that of the second obstacle, and the number of obstacles with the same category can be obtained by statistics; when the category of the first obstacle is "pedestrian" and the category of the second obstacle is "vehicle", the categories differ and the category matching of the first obstacle and the second obstacle fails.
For example, if the category matching result data is denoted acc_right_count, the classification accuracy of the first sensing device may be calculated by the formula acc = acc_right_count / tp. In this way, the classification detection result of the first sensing device can be obtained, and the classification accuracy can be used to reflect the perception precision of the first sensing device.
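A direct transcription of this formula, with an added guard for tp = 0:

```python
def classification_accuracy(acc_right_count, tp):
    """Classification accuracy: acc = acc_right_count / tp."""
    return acc_right_count / tp if tp else 0.0

print(classification_accuracy(acc_right_count=72, tp=80))  # 0.9
```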
In one embodiment, the information of the first obstacle includes an attribute of the first obstacle, and the information of the second obstacle includes an attribute of the second obstacle, as shown in fig. 6, step S104 may further include:
step S601, under the condition that the first obstacle and the second obstacle are successfully matched, determining an attribute error between the attribute of the matched first obstacle and the attribute of the second obstacle.
Wherein the attributes of the first obstacle and the attributes of the second obstacle include, but are not limited to, orientation and velocity, and the attribute errors include, but are not limited to, orientation errors and velocity errors.
In one example, when the number of obstacle pairs successfully matched between first and second obstacles is tp, the orientation error of the i-th pair of obstacles is heading_err_i, where i is an integer and 1 ≤ i ≤ tp.
In one example, when the number of obstacle pairs successfully matched between first and second obstacles is tp, the speed error of the i-th pair of obstacles is speed_err_i, where i is an integer and 1 ≤ i ≤ tp.
Step S602, determining the attribute difference between the first sensing device and the second sensing device according to the attribute error and the first matching result data.
Wherein the attribute difference may be a mean error of the attribute, including but not limited to a heading mean error and a velocity mean error.
In one example, the heading mean error heading _ err _ avg between the first obstacle and the second obstacle may be calculated by the following formula:
heading_err_avg = (heading_err_1 + heading_err_2 + … + heading_err_tp) / tp
In one example, the speed mean error speed_err_avg between the first obstacle and the second obstacle may be calculated by the following formula:
speed_err_avg = (speed_err_1 + speed_err_2 + … + speed_err_tp) / tp
based on the attribute difference, the attribute difference between the first obstacle and the second obstacle which are successfully matched can be determined, and then the attribute detection result of the first sensing device is obtained.
In summary, the detection method of the present application can detect the perception recognition recall rate, the perception recognition accuracy rate, the classification accuracy rate, the orientation mean error and the velocity mean error of the first perception device, is beneficial to reflecting the perception accuracy of the first perception device through a plurality of detection results, provides a multi-dimensional detection result for the downstream device, and facilitates the downstream device to use the information of the first obstacle recognized by the first perception device.
Fig. 7 shows a schematic structural diagram of a detection device according to an embodiment of the present application. As shown in fig. 7, the detecting apparatus 700 may include:
a first obtaining module 710, configured to obtain information of at least one first obstacle identified by a first sensing device;
a second obtaining module 720, configured to obtain information of at least one second obstacle identified by at least one second sensing device, where the precision of the first sensing device is lower than that of the second sensing device;
the first determining module 730 is configured to perform obstacle matching on the information of each first obstacle and the information of each second obstacle, and determine matching result data;
and a second determining module 740, configured to determine a detection result of the first sensing device according to the matching result data.
In one embodiment, as shown in fig. 8, the information of the first obstacle includes a position and a projection contour of the first obstacle, the information of the second obstacle includes a position and a projection contour of the second obstacle, and the first determining module 730 includes:
a position comparison submodule 801 for comparing the position of each first obstacle with the position of each second obstacle;
an overlap rate determining submodule 802, configured to determine an overlap rate of a projection contour of a first obstacle and a projection contour of a second obstacle if the first obstacle and the second obstacle have an overlapping relationship;
the first determining submodule 803 is configured to determine that the first obstacle and the second obstacle are successfully matched when the overlap rate is greater than a preset overlap threshold;
and the first statistic submodule 804 is configured to count the number of obstacles successfully matched to obtain first matching result data.
In one embodiment, as shown in fig. 8, the first determining module 730 further comprises:
a second determining submodule 805, configured to determine that the first obstacle and the second obstacle fail to be matched when the overlap rate is smaller than a preset overlap threshold;
and a second counting submodule 806, configured to count the number of obstacles failing to match, and obtain second matching result data.
In one embodiment, as shown in fig. 9, the second determining module 740 may include:
the first determining sub-module 901 is configured to determine an identification recall rate of the first sensing device according to the first matching result data and the second matching result data.
In one embodiment, as shown in fig. 8, the first determining module 730 may further include:
a data obtaining sub-module 807 for obtaining third matching result data according to the number of the second obstacles whose positions do not match, in the case where there is a second obstacle that does not match the position of the first obstacle among the second obstacles.
In one embodiment, the second determining module 740, as shown in fig. 9, may include:
and the second determining submodule 902 is configured to determine the identification accuracy of the first sensing device according to the first matching result data and the third matching result data.
In one embodiment, the information of the first obstacle includes a category of the first obstacle, and the information of the second obstacle includes a category of the second obstacle, as shown in fig. 10, the first determining module 730 may include:
a category matching submodule 1001 configured to match a category of the first obstacle with a category of the second obstacle when the first obstacle and the second obstacle are successfully matched;
a category determination submodule 1002, configured to determine that the category of the first obstacle is the same as the category of the second obstacle when the category of the first obstacle matches the category of the second obstacle;
the third counting submodule 1003 is configured to count the number of obstacles with the same category, and obtain category matching result data;
and the classification determining sub-module 1004 is configured to determine the classification accuracy of the first sensing device according to the class matching result data and the first matching result data.
In one embodiment, the information of the first obstacle includes an attribute of the first obstacle, the information of the second obstacle includes an attribute of the second obstacle, and as shown in fig. 11, the second determining module 740 may include:
the attribute error determination submodule 1101 is configured to, if the first obstacle and the second obstacle are successfully matched, determine an attribute error between the matched attribute of the first obstacle and the attribute of the second obstacle;
and the attribute difference determination submodule 1102 is configured to determine an attribute difference between the first sensing device and the second sensing device according to the attribute error and the first matching result data.
In one embodiment, the information of the at least one second obstacle is obtained by the second sensing device fusing the information of the first obstacles recognized by the at least two first sensing devices by using a fusion algorithm.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 12 is a block diagram of an electronic device according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in FIG. 12, the electronic device includes: one or more processors 1201, memory 1202, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). FIG. 12 illustrates an example with one processor 1201.
Memory 1202 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the detection method of the sensing device provided by the application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the detection method of the sensing device provided by the present application.
The memory 1202 is a non-transitory computer readable storage medium, and can be used for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the detection method of the sensing device in the embodiment of the present application (for example, the first obtaining module 710, the second obtaining module 720, the first determining module 730, and the second determining module 740 shown in fig. 7). The processor 1201 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 1202, that is, implements the detection method of the perception device in the above method embodiments.
The memory 1202 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created according to the use of the electronic device implementing the detection method of the sensing device, and the like. Further, the memory 1202 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 1202 may optionally include memory remotely located from the processor 1201, and such remote memory may be connected via a network to the electronic device implementing the detection method. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device implementing the detection method of the sensing device may further include: an input device 1203 and an output device 1204. The processor 1201, the memory 1202, the input device 1203, and the output device 1204 may be connected by a bus or other means; connection by a bus is exemplified in FIG. 12.
The input device 1203 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device implementing the detection method of the sensing device; examples include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, and a joystick. The output device 1204 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability in traditional physical hosts and Virtual Private Server (VPS) services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (20)

1. A detection method of a sensing device, comprising:
acquiring information of at least one first obstacle identified by first sensing equipment;
acquiring information of at least one second obstacle identified by at least one second sensing device, wherein the precision of the first sensing device is lower than that of the second sensing device;
carrying out obstacle matching on the information of each first obstacle and the information of each second obstacle, and determining matching result data;
and determining the detection result of the first sensing equipment according to the matching result data.
2. The method according to claim 1, wherein the information of the first obstacle includes a position and a projection contour of the first obstacle, the information of the second obstacle includes a position and a projection contour of the second obstacle, and performing obstacle matching on the information of each of the first obstacles and the information of each of the second obstacles and determining matching result data includes:
comparing the position of each first obstacle with the position of each second obstacle;
determining the overlapping rate of the projection contour of the first obstacle and the projection contour of the second obstacle under the condition that the first obstacle and the second obstacle have an overlapping relation;
determining that the first obstacle and the second obstacle are successfully matched under the condition that the overlapping rate is greater than a preset overlapping threshold value;
and counting the number of obstacles successfully matched to obtain first matching result data.
3. The method of claim 2, wherein the performing obstacle matching on the information of each of the first obstacles and the information of each of the second obstacles to determine matching result data further comprises:
determining that the first obstacle and the second obstacle fail to be matched under the condition that the overlapping rate is smaller than a preset overlapping threshold value;
and counting the number of obstacles failing to be matched to obtain second matching result data.
4. The method of claim 3, wherein determining the detection result of the first perception device according to the matching result data comprises:
and determining the identification recall rate of the first sensing equipment according to the first matching result data and the second matching result data.
5. The method of claim 2, further comprising:
and under the condition that a second obstacle which does not match with the position of the first obstacle exists in the second obstacles, obtaining third matching result data according to the number of the second obstacles whose positions do not match.
6. The method of claim 5, wherein the determining the detection result of the first sensing device according to the matching result data comprises:
determining an identification accuracy of the first sensing device according to the first matching result data and the third matching result data.
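For illustration only, and again only one plausible reading, since claim 6 does not fix the formula: identification accuracy as matched pairs over matched pairs plus position-unmatched second obstacles.

    def identification_accuracy(first_result: int, third_result: int) -> float:
        # first_result: obstacles matched successfully (claim 2)
        # third_result: second obstacles whose position matched no
        #               first obstacle (claim 5)
        total = first_result + third_result
        return first_result / total if total else 0.0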
7. The method of claim 2, wherein the information of the first obstacle includes a category of the first obstacle, the information of the second obstacle includes a category of the second obstacle, and the determining the detection result of the first sensing device according to the matching result data comprises:
matching the category of the first obstacle with the category of the second obstacle if the first obstacle and the second obstacle are successfully matched;
determining that the category of the first obstacle is the same as the category of the second obstacle if the category of the first obstacle matches the category of the second obstacle;
counting the number of obstacles with the same category to obtain category matching result data;
and determining the classification accuracy of the first sensing device according to the category matching result data and the first matching result data.
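For illustration only: a Python sketch of the classification-accuracy computation of claim 7 over successfully matched (first, second) category pairs; the pair representation is an assumption.

    from typing import List, Tuple

    def classification_accuracy(matched_categories: List[Tuple[str, str]]) -> float:
        # Claim 7: count matched pairs whose categories agree (category
        # matching result data) and divide by all matched pairs (first
        # matching result data).
        if not matched_categories:
            return 0.0
        same = sum(1 for c1, c2 in matched_categories if c1 == c2)
        return same / len(matched_categories)

    # e.g. classification_accuracy([("car", "car"), ("car", "truck")]) -> 0.5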
8. The method of claim 2, wherein the information of the first obstacle includes an attribute of the first obstacle, the information of the second obstacle includes an attribute of the second obstacle, and the determining the detection result of the first sensing device according to the matching result data comprises:
determining an attribute error between the attribute of the matched first obstacle and the attribute of the second obstacle if the first obstacle and the second obstacle are successfully matched;
and determining an attribute difference between the first sensing device and the second sensing device according to the attribute error and the first matching result data.
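For illustration only: a sketch of claim 8, assuming each attribute (e.g. speed or heading) is numeric, the attribute error is an absolute difference, and the attribute difference is the mean error over the first matching result data; none of these choices is fixed by the claim.

    from typing import Dict, List, Tuple

    Attrs = Dict[str, float]

    def attribute_difference(matched_pairs: List[Tuple[Attrs, Attrs]]) -> Attrs:
        # Accumulate per-attribute errors over matched pairs (absolute
        # difference chosen for illustration), then average over the
        # number of matched pairs.
        sums: Attrs = {}
        for first_attrs, second_attrs in matched_pairs:
            for key in first_attrs.keys() & second_attrs.keys():
                sums[key] = sums.get(key, 0.0) + abs(first_attrs[key] - second_attrs[key])
        n = len(matched_pairs)
        return {k: v / n for k, v in sums.items()} if n else {}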
9. The method according to claim 1, wherein the information of the at least one second obstacle is obtained by the second sensing device by fusing, with a fusion algorithm, the information of first obstacles recognized by at least two first sensing devices.
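For illustration only: a toy stand-in for the fusion algorithm of claim 9 (which the claim leaves unspecified), greedily grouping nearby detections from several first sensing devices and emitting each group's centroid as a fused second obstacle.

    import math
    from typing import List, Tuple

    Position = Tuple[float, float]

    def fuse_detections(device_outputs: List[List[Position]],
                        radius: float = 1.0) -> List[Position]:
        # Pool detections from all first sensing devices, group those
        # within `radius` of a seed detection, and average each group.
        pool = [p for output in device_outputs for p in output]
        fused: List[Position] = []
        while pool:
            seed = pool.pop(0)
            group = [seed] + [p for p in pool if math.dist(seed, p) <= radius]
            pool = [p for p in pool if p not in group]
            fused.append((sum(p[0] for p in group) / len(group),
                          sum(p[1] for p in group) / len(group)))
        return fused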
10. A detection apparatus for a sensing device, comprising:
a first acquisition module configured to acquire information of at least one first obstacle identified by a first sensing device;
a second acquisition module configured to acquire information of at least one second obstacle identified by at least one second sensing device, wherein the precision of the first sensing device is lower than that of the second sensing device;
a first determining module configured to perform obstacle matching on the information of each first obstacle and the information of each second obstacle and determine matching result data;
and a second determining module configured to determine a detection result of the first sensing device according to the matching result data.
11. The apparatus of claim 10, wherein the information of the first obstacle includes a position and a projection contour of the first obstacle, the information of the second obstacle includes a position and a projection contour of the second obstacle, and the first determining module comprises:
a position comparison submodule configured to compare the position of each first obstacle with the position of each second obstacle;
an overlapping rate determination submodule configured to determine the overlapping rate of the projection contour of the first obstacle and the projection contour of the second obstacle in a case where the first obstacle and the second obstacle have an overlapping relation;
a first determining submodule configured to determine that the first obstacle and the second obstacle are successfully matched in a case where the overlapping rate is greater than a preset overlapping threshold value;
and a first statistical submodule configured to count the number of successfully matched obstacles to obtain first matching result data.
12. The apparatus of claim 11, wherein the first determining module further comprises:
a second determining submodule configured to determine that the first obstacle and the second obstacle fail to be matched in a case where the overlapping rate is smaller than the preset overlapping threshold value;
and a second statistical submodule configured to count the number of obstacles that fail to be matched to obtain second matching result data.
13. The apparatus of claim 12, wherein the second determining module comprises:
a recall rate determining submodule configured to determine the identification recall rate of the first sensing device according to the first matching result data and the second matching result data.
14. The apparatus of claim 11, wherein the first determining module further comprises:
a data obtaining submodule configured to obtain, in a case where a second obstacle whose position matches no first obstacle exists among the second obstacles, third matching result data according to the number of the position-unmatched second obstacles.
15. The apparatus of claim 14, wherein the second determining module comprises:
an accuracy determining submodule configured to determine the identification accuracy of the first sensing device according to the first matching result data and the third matching result data.
16. The apparatus of claim 11, wherein the information of the first obstacle includes a category of the first obstacle, the information of the second obstacle includes a category of the second obstacle, and the first determining module comprises:
a category matching submodule configured to match the category of the first obstacle with the category of the second obstacle in a case where the first obstacle and the second obstacle are successfully matched;
a category determination submodule configured to determine that the category of the first obstacle is the same as the category of the second obstacle in a case where the category of the first obstacle matches the category of the second obstacle;
a third statistical submodule configured to count the number of obstacles with the same category to obtain category matching result data;
and a classification determining submodule configured to determine the classification accuracy of the first sensing device according to the category matching result data and the first matching result data.
17. The apparatus of claim 11, wherein the information of the first obstacle includes an attribute of the first obstacle, the information of the second obstacle includes an attribute of the second obstacle, and the second determining module comprises:
an attribute error determination submodule configured to determine an attribute error between the attribute of the matched first obstacle and the attribute of the second obstacle in a case where the first obstacle and the second obstacle are successfully matched;
and an attribute difference determining submodule configured to determine the attribute difference between the first sensing device and the second sensing device according to the attribute error and the first matching result data.
18. The apparatus according to claim 10, wherein the information of the at least one second obstacle is obtained by the second sensing device by fusing, with a fusion algorithm, the information of first obstacles recognized by at least two first sensing devices.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-9.
CN202010601938.4A 2020-06-29 2020-06-29 Detection method, device and equipment of sensing equipment and storage medium Pending CN111753765A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010601938.4A CN111753765A (en) 2020-06-29 2020-06-29 Detection method, device and equipment of sensing equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111753765A 2020-10-09

Family

ID=72677755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010601938.4A Pending CN111753765A (en) 2020-06-29 2020-06-29 Detection method, device and equipment of sensing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111753765A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956527A (en) * 2016-04-22 2016-09-21 百度在线网络技术(北京)有限公司 Method and device for evaluating barrier detection result of driverless vehicle
CN108646739A (en) * 2018-05-14 2018-10-12 北京智行者科技有限公司 A kind of sensor information fusion method
CN109738198A (en) * 2018-12-14 2019-05-10 北京百度网讯科技有限公司 Detection method, device, equipment and the storage medium of vehicle environmental sensing capability
CN110069408A (en) * 2019-04-11 2019-07-30 杭州飞步科技有限公司 Automatic driving vehicle sensory perceptual system test method and device
CN110287832A (en) * 2019-06-13 2019-09-27 北京百度网讯科技有限公司 High-Speed Automatic Driving Scene barrier perception evaluating method and device

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112347986A (en) * 2020-11-30 2021-02-09 上海商汤临港智能科技有限公司 Sample generation method, neural network training method, intelligent driving control method and device
CN112581526A (en) * 2020-12-11 2021-03-30 北京百度网讯科技有限公司 Evaluation method, device, equipment and storage medium for obstacle detection
CN112541475A (en) * 2020-12-24 2021-03-23 北京百度网讯科技有限公司 Sensing data detection method and device
CN112541475B (en) * 2020-12-24 2024-01-19 北京百度网讯科技有限公司 Sensing data detection method and device
US11869247B2 (en) 2020-12-24 2024-01-09 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Perception data detection method and apparatus
CN112764013A (en) * 2020-12-25 2021-05-07 北京百度网讯科技有限公司 Method, device and equipment for testing automatic driving vehicle perception system and storage medium
CN112764013B (en) * 2020-12-25 2024-03-01 北京百度网讯科技有限公司 Method, device, equipment and storage medium for testing sensing system of automatic driving vehicle
CN113463720B (en) * 2021-06-30 2023-02-17 广西柳工机械股份有限公司 System and method for identifying contact material of loader bucket
CN113463720A (en) * 2021-06-30 2021-10-01 广西柳工机械股份有限公司 System and method for identifying contact material of loader bucket
CN113205087A (en) * 2021-07-06 2021-08-03 中汽创智科技有限公司 Perception information processing method, device, equipment and computer readable storage medium
CN113963327B (en) * 2021-09-06 2023-09-08 阿波罗智能技术(北京)有限公司 Obstacle detection method, obstacle detection device, autonomous vehicle, apparatus, and storage medium
CN113963327A (en) * 2021-09-06 2022-01-21 阿波罗智能技术(北京)有限公司 Obstacle detection method, obstacle detection apparatus, autonomous vehicle, device, and storage medium
CN113893142A (en) * 2021-10-08 2022-01-07 四川康佳智能终端科技有限公司 Blind person obstacle avoidance method, system, equipment and readable storage medium
CN114596706A (en) * 2022-03-15 2022-06-07 阿波罗智联(北京)科技有限公司 Detection method and device of roadside sensing system, electronic equipment and roadside equipment
CN114596706B (en) * 2022-03-15 2024-05-03 阿波罗智联(北京)科技有限公司 Detection method and device of road side perception system, electronic equipment and road side equipment
CN116434041A (en) * 2022-12-05 2023-07-14 北京百度网讯科技有限公司 Mining method, device and equipment for error perception data and automatic driving vehicle

Similar Documents

Publication Publication Date Title
CN111753765A (en) Detection method, device and equipment of sensing equipment and storage medium
CN111401208B (en) Obstacle detection method and device, electronic equipment and storage medium
CN112415552B (en) Vehicle position determining method and device and electronic equipment
CN110979346B (en) Method, device and equipment for determining lane where vehicle is located
KR102543952B1 (en) Lane line determination method and apparatus, lane line positioning accuracy evaluation method and apparatus, device, and program
CN111310840B (en) Data fusion processing method, device, equipment and storage medium
CN111220164A (en) Positioning method, device, equipment and storage medium
US11447153B2 (en) Method and apparatus for annotating virtual lane at crossing
CN113723141B (en) Vehicle positioning method and device, electronic equipment, vehicle and storage medium
CN111324115A (en) Obstacle position detection fusion method and device, electronic equipment and storage medium
EP3910533A1 (en) Method, apparatus, electronic device, and storage medium for monitoring an image acquisition device
CN112147632A (en) Method, device, equipment and medium for testing vehicle-mounted laser radar perception algorithm
CN111079079A (en) Data correction method and device, electronic equipment and computer readable storage medium
US20220101540A1 (en) Cross-camera obstacle tracking method, system and medium
CN111523471A (en) Method, device and equipment for determining lane where vehicle is located and storage medium
US20230072632A1 (en) Obstacle detection method, electronic device and storage medium
CN114528941A (en) Sensor data fusion method and device, electronic equipment and storage medium
CN111640301B (en) Fault vehicle detection method and fault vehicle detection system comprising road side unit
CN113011298A (en) Truncated object sample generation method, target detection method, road side equipment and cloud control platform
CN112528846A (en) Evaluation method, device, equipment and storage medium for obstacle detection
CN110843771B (en) Obstacle recognition method, obstacle recognition device, electronic device and storage medium
CN111597993A (en) Data processing method and device
CN114674328A (en) Map generation method, map generation device, electronic device, storage medium, and vehicle
CN111336984A (en) Obstacle ranging method, device, equipment and medium
CN111932611A (en) Object position acquisition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination