CN109255341B - Method, device, equipment and medium for extracting obstacle perception error data - Google Patents

Method, device, equipment and medium for extracting obstacle perception error data Download PDF

Info

Publication number
CN109255341B
CN109255341B CN201811273698.9A
Authority
CN
China
Prior art keywords
data
obstacle
driving behavior
sensing result
misrecognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811273698.9A
Other languages
Chinese (zh)
Other versions
CN109255341A (en)
Inventor
费雯凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201811273698.9A priority Critical patent/CN109255341B/en
Publication of CN109255341A publication Critical patent/CN109255341A/en
Application granted granted Critical
Publication of CN109255341B publication Critical patent/CN109255341B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the invention discloses a method, a device, equipment and a medium for extracting obstacle perception error data. The method comprises the following steps: acquiring a drive test data set of a vehicle, the drive test data set comprising sensor data for sensing obstacles, obstacle sensing result data and manual driving behavior data; and obtaining example data of obstacle misrecognition and/or missed obstacle recognition through comparative analysis of the obstacle perception result data and the manual driving behavior data, wherein the example data includes the sensor data. According to the technical scheme, a large amount of example data corresponding to missed recognition and/or misrecognition of obstacles during road testing of the vehicle is extracted, which provides data support for retraining the obstacle perception model, so that the obstacle perception model can be optimized, the accuracy and reliability of obstacle recognition are improved, and the potential safety hazard of the unmanned vehicle is reduced.

Description

Method, device, equipment and medium for extracting obstacle perception error data
Technical Field
The embodiment of the invention relates to the technical field of unmanned driving, in particular to a method, a device, equipment and a medium for extracting obstacle perception error data.
Background
In an unmanned driving perception system, perception and recognition of obstacles mainly rely on image data output by a camera, point cloud data output by a lidar, and data output by a radar. In the prior art, at the algorithm iterative development stage, a large amount of data acquisition and annotation is generally performed in an open-loop manner under manual driving to train an obstacle perception model, so that obstacles can subsequently be identified and classified by the obstacle perception model.
However, due to the wide variety of obstacles, situations occur in which non-obstacles such as green branches and leaves or surface water are mistakenly recognized, and/or low obstacles such as traffic cones, triangular supports or child pedestrians are missed, which brings a potential safety hazard to the driving of the unmanned vehicle.
Disclosure of Invention
The embodiment of the invention provides a method, a device, equipment and a medium for extracting obstacle perception error data, which are used for providing data support for retraining an obstacle perception model through the extracted obstacle perception error data, so that the obstacle perception model is optimized, the obstacle recognition accuracy and reliability are improved, and the potential safety hazard of an unmanned vehicle is reduced.
In a first aspect, an embodiment of the present invention provides a method for extracting obstacle sensing error data, including:
acquiring a drive test data set of a vehicle; the drive test data set comprises sensor data for sensing obstacles, obstacle sensing result data and manual driving behavior data;
obtaining example data of obstacle misrecognition and/or missed obstacle recognition through comparative analysis of the obstacle perception result data and the manual driving behavior data; wherein the example data includes sensor data.
In a second aspect, an embodiment of the present invention further provides an apparatus for extracting obstacle sensing error data, including:
the drive test data set acquisition module is used for acquiring a drive test data set of the vehicle; the drive test data set comprises sensor data for sensing obstacles, obstacle sensing result data and manual driving behavior data;
the example data acquisition module is used for obtaining example data of obstacle misrecognition and/or missed obstacle recognition through comparative analysis of the obstacle perception result data and the manual driving behavior data; wherein the example data includes sensor data.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement a method for extracting obstacle sensing error data as provided in an embodiment of the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for extracting obstacle sensing error data as provided in the embodiment of the first aspect.
The method comprises the steps of obtaining sensor data for sensing obstacles, obstacle sensing result data and manual driving behavior data of a vehicle; and comparing and analyzing the obstacle sensing result data and the manual driving behavior data to obtain example data of obstacle misrecognition and/or missed recognition, wherein the example data includes the sensor data. According to the technical scheme, a large amount of example data corresponding to missed recognition and/or misrecognition of obstacles during road testing of the vehicle is extracted, which provides data support for retraining the obstacle perception model, so that the obstacle perception model can be optimized, the accuracy and reliability of obstacle recognition are improved, and the potential safety hazard of the unmanned vehicle is reduced.
Drawings
Fig. 1 is a flowchart of a method for extracting obstacle sensing error data according to a first embodiment of the present invention;
fig. 2 is a flowchart of a method for extracting obstacle sensing error data according to a second embodiment of the present invention;
fig. 3 is a structural diagram of an obstacle sensing error data extraction apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device in a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a method for extracting obstacle sensing error data according to a first embodiment of the present invention. The embodiment is applicable to the case where training samples for an obstacle perception model are collected in an open-loop manner under manual driving during the algorithm iteration development stage of an unmanned perception system. The method may be executed by an apparatus for extracting obstacle perception error data, which is implemented in software and/or hardware and is configured in the unmanned vehicle.
The method for extracting obstacle sensing error data shown in fig. 1 includes:
s110, acquiring a drive test data set of the vehicle; the drive test data set comprises sensor data for sensing obstacles, obstacle sensing result data and manual driving behavior data.
Wherein the sensor data includes image data output by a camera in the vehicle, point cloud data output by a lidar in the vehicle, and data output by a radar in the vehicle. The vehicle may be an unmanned vehicle, or another vehicle provided with an unmanned perception system. The obstacle sensing result data can be understood as a sensing result obtained by the sensing algorithm module from the sensor data output by the sensors while the vehicle is driving, or as a sensing result obtained by the sensing algorithm module off-line from the sensor data generated during driving. The off-line mode means that no on-vehicle test is needed: the collected sensor data are fed to the sensing algorithm module in the same computer hardware environment as that of the vehicle, and the sensing algorithm module outputs the corresponding sensing result.
The manual driving behavior data may be data representing manual driving behavior, for example behavior analysis data such as the rotation angle of the steering wheel, the operating state of the vehicle brake device, the starting state of the vehicle and/or vehicle motion parameters, or intuitive behavior data indicating whether the driver stopped and/or detoured. The behavior analysis data can be used to determine whether the driver exhibited a parking and/or detour driving behavior while driving manually. Wherein the vehicle motion parameters include: vehicle movement speed, angular velocity and/or acceleration.
Exemplarily, the sensor data and the manual driving behavior data are collected and stored in real time while the vehicle is driving, and the obstacle sensing result data obtained by sensing obstacles in real time are likewise stored in real time. Correspondingly, as the sensor data, the obstacle sensing result data and the manual driving behavior data are stored, they are synchronously transmitted to the obstacle perception error data extraction apparatus and stored in correspondence according to their data generation time, so as to obtain the drive test data set. Alternatively, after the driving process is finished, the sensor data, obstacle sensing result data and manual driving behavior data stored during driving can be retrieved through manual triggering by technicians, and stored in correspondence according to the data generation time to obtain the drive test data set.
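As a concrete illustration of this time-aligned storage, the following is a minimal sketch in Python; the TimedRecord fields, the DriveTestDataSet container and the nearest-timestamp helper are illustrative assumptions rather than part of the described scheme.

from dataclasses import dataclass, field
from bisect import bisect_left
from typing import List, Optional

@dataclass
class TimedRecord:
    timestamp: float     # data generation time, in seconds
    payload: object      # a sensor frame, a perception result, or a driving-behavior sample

@dataclass
class DriveTestDataSet:
    # All three streams are assumed stored in generation-time order.
    sensor_data: List[TimedRecord] = field(default_factory=list)
    perception_results: List[TimedRecord] = field(default_factory=list)
    driving_behavior: List[TimedRecord] = field(default_factory=list)

    def nearest(self, records: List[TimedRecord], t: float,
                tolerance: float = 0.1) -> Optional[TimedRecord]:
        # Return the record whose generation time is closest to t, provided it
        # lies within an assumed 100 ms alignment window; otherwise None.
        if not records:
            return None
        times = [r.timestamp for r in records]
        i = bisect_left(times, t)
        candidates = [records[j] for j in (i - 1, i) if 0 <= j < len(records)]
        best = min(candidates, key=lambda r: abs(r.timestamp - t))
        return best if abs(best.timestamp - t) <= tolerance else None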
S120, comparing and analyzing the obstacle sensing result data and the manual driving behavior data to obtain example data of obstacle misrecognition and/or missed obstacle recognition; wherein the example data includes sensor data.
Specifically, obtaining the example data of obstacle misrecognition through comparative analysis of the sensing result data and the manual driving behavior data includes: for each piece of obstacle sensing result data, if the obstacle sensing result data indicates that an obstacle is sensed, determining, according to the manual driving behavior data, the manual driving behavior corresponding to the moment when the obstacle is sensed, and judging whether the manual driving behavior is a preset manual driving behavior; if not, determining that the sensor data corresponding to the obstacle sensing result data is example data of obstacle misrecognition; wherein the preset manual driving behavior comprises: parking and detouring.
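A rough sketch of this comparison (Python, reusing the hypothetical DriveTestDataSet above) might look as follows; the obstacle_detected payload key is an assumption, and the is_stop_or_detour callable stands in for the per-signal behavior checks described in the next paragraph.

def extract_misrecognition_examples(ds, is_stop_or_detour):
    # ds: a DriveTestDataSet as sketched above; is_stop_or_detour: a callable that
    # judges whether a driving-behavior sample is a preset behavior (parking/detour).
    examples = []
    for result in ds.perception_results:
        if not result.payload.get("obstacle_detected"):
            continue                                   # only results that report an obstacle
        behavior = ds.nearest(ds.driving_behavior, result.timestamp)
        if behavior is None or is_stop_or_detour(behavior.payload):
            continue                                   # driver reacted, so nothing suspicious
        # Perception reported an obstacle but the driver neither stopped nor detoured:
        # keep the matching sensor frame as a misrecognition example.
        sensor = ds.nearest(ds.sensor_data, result.timestamp)
        if sensor is not None:
            examples.append({"sensor": sensor.payload, "timestamp": result.timestamp,
                             "perceived": True, "actual": False})
    return examples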
Illustratively, when the manual driving behavior data includes the rotation angle of the steering wheel, if the rotation angle exceeds the set steering-angle threshold, the driver is considered to have exhibited a detour driving behavior; when the manual driving behavior data includes the operating state of the vehicle brake device, if that state is not idle, the driver is considered to have exhibited a parking behavior; when the manual driving behavior data includes the vehicle starting state, if the vehicle changes from started to not started within a first preset time period, the driver is considered to have exhibited a parking behavior; when the manual driving behavior data includes the vehicle movement speed, if the speed decreases within a second preset time period by more than a preset percentage of the higher earlier speed, the driver is considered to have exhibited a parking behavior; when the manual driving behavior data includes the vehicle angular velocity, if the change in angular velocity within a third preset time period is larger than the set angular-velocity threshold, the driver is considered to have exhibited a detour driving behavior; and when the manual driving behavior data includes the vehicle acceleration, if the acceleration remains below zero throughout a fourth preset time period, the driver is considered to have exhibited a parking behavior. The steering-angle threshold, the first to fourth preset time periods, the preset percentage and the angular-velocity threshold can be set by technicians according to experimental or empirical values.
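The per-signal checks just described could be condensed along the following lines (Python); every field name and threshold value here is a placeholder for the experimentally or empirically chosen values mentioned above.

def is_stop_or_detour(sample,
                      steering_threshold_deg=30.0,   # placeholder steering-angle threshold
                      speed_drop_ratio=0.5,          # placeholder "preset percentage"
                      yaw_rate_threshold=0.3):       # placeholder angular-velocity threshold (rad/s)
    # sample: a dict of manual driving behavior signals at one moment (assumed keys).
    if abs(sample.get("steering_angle_deg", 0.0)) >= steering_threshold_deg:
        return True   # detour: large steering-wheel rotation
    if sample.get("brake_active", False):
        return True   # parking: brake device not idle
    if sample.get("was_started", True) and not sample.get("is_started", True):
        return True   # parking: vehicle went from started to not started
    prev_v, cur_v = sample.get("prev_speed", 0.0), sample.get("speed", 0.0)
    if prev_v > 0.0 and (prev_v - cur_v) / prev_v >= speed_drop_ratio:
        return True   # parking: sharp drop in movement speed
    if abs(sample.get("yaw_rate_change", 0.0)) >= yaw_rate_threshold:
        return True   # detour: large change in angular velocity
    if sample.get("acceleration", 0.0) < 0.0 and sample.get("decel_duration", 0.0) > 2.0:
        return True   # parking: acceleration stayed below zero for the whole window
    return False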
Specifically, obtaining the example data of missed obstacle recognition through comparative analysis of the sensing result data and the manual driving behavior data includes: for a preset manual driving behavior in the manual driving behavior data, determining, according to the obstacle sensing result data, the obstacle sensing result data corresponding to the moment at which the preset manual driving behavior occurs, and judging whether that obstacle sensing result data indicates that an obstacle is sensed; if not, determining that the sensor data corresponding to the obstacle sensing result data is example data of missed obstacle recognition; wherein the preset manual driving behavior comprises: parking and detouring.
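Mirroring the misrecognition case, a sketch of the missed-recognition side under the same assumed data layout:

def extract_missed_examples(ds, is_stop_or_detour):
    # ds: a DriveTestDataSet as sketched above.
    examples = []
    for behavior in ds.driving_behavior:
        if not is_stop_or_detour(behavior.payload):
            continue                                   # only preset behaviors (parking/detour)
        result = ds.nearest(ds.perception_results, behavior.timestamp)
        if result is None or result.payload.get("obstacle_detected"):
            continue                                   # perception did sense an obstacle
        # The driver stopped or detoured while perception reported no obstacle:
        # keep the matching sensor frame as a missed-recognition example.
        sensor = ds.nearest(ds.sensor_data, behavior.timestamp)
        if sensor is not None:
            examples.append({"sensor": sensor.payload, "timestamp": behavior.timestamp,
                             "perceived": False, "actual": True})
    return examples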
The method comprises the steps of obtaining sensor data for sensing obstacles, obstacle sensing result data and manual driving behavior data of a vehicle; and comparing and analyzing the obstacle sensing result data and the manual driving behavior data to obtain example data of obstacle misrecognition and/or missed recognition, wherein the example data includes the sensor data. According to the technical scheme, a large amount of example data corresponding to missed recognition and/or misrecognition of obstacles during road testing of the vehicle is extracted, which provides data support for retraining the obstacle perception model, so that the obstacle perception model can be optimized, the accuracy and reliability of obstacle recognition are improved, and the potential safety hazard of the unmanned vehicle is reduced.
Further, after the example data of obstacle misrecognition and/or missed obstacle recognition is obtained, the method further comprises: training an obstacle perception model by taking the example data of obstacle misrecognition and/or missed obstacle recognition as training sample data; and/or,
testing the recognition effect of the obstacle perception model with the example data of obstacle misrecognition and/or missed obstacle recognition.
The example data can also comprise the corresponding actual perception result data for the misrecognition and/or missed recognition of the obstacle. In both cases, the actual perception result data is opposite in content to the obstacle sensing result data.
Illustratively, when "0" is adopted as the result identifier for no obstacle and "1" as the result identifier for an obstacle being present: when an obstacle is misrecognized, the corresponding obstacle sensing result data is "1" and the actual perception result data is "0"; when an obstacle is missed, the corresponding obstacle sensing result data is "0" and the actual perception result data is "1".
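In code form, the actual label is simply the opposite of the erroneous perception output; a trivial sketch, assuming the string identifiers above:

def actual_label(perceived_label):
    # Misrecognition: perceived "1" (obstacle) -> actual "0"; missed recognition: "0" -> "1".
    return "0" if perceived_label == "1" else "1"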
For example, using the example data of obstacle misrecognition and/or missed obstacle recognition to test the recognition effect of the obstacle perception model may proceed as follows: inputting the sensor data in the example data into the obstacle perception model to obtain a corresponding predicted perception result, and evaluating the obstacle perception model according to the predicted perception result and the actual perception result data in the example data.
Specifically, the obstacle perception model is evaluated according to the formulas:
accuracy = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)
wherein TP means that both the predicted perception result and the actual perception result data indicate an obstacle; TN means that both indicate no obstacle; FP means that the predicted perception result indicates an obstacle while the actual perception result data indicates no obstacle; FN means that the predicted perception result indicates no obstacle while the actual perception result data indicates an obstacle; accuracy is the accuracy of the obstacle perception model; precision is the precision of the obstacle perception model; and recall is the recall rate of the obstacle perception model.
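For concreteness, a small sketch (Python) of computing these three metrics over the extracted example data; the model_predict interface and the example dictionary layout follow the hypothetical sketches above, not the patent itself.

def evaluate_perception_model(model_predict, examples):
    # model_predict: a callable mapping sensor data to True (obstacle) / False (no obstacle).
    # examples: dicts holding "sensor" and the actual label "actual", as in the sketches above.
    tp = tn = fp = fn = 0
    for ex in examples:
        predicted = model_predict(ex["sensor"])
        actual = ex["actual"]
        if predicted and actual:
            tp += 1
        elif not predicted and not actual:
            tn += 1
        elif predicted and not actual:
            fp += 1
        else:
            fn += 1
    # max(..., 1) only guards against empty denominators in this sketch.
    accuracy = (tp + tn) / max(tp + tn + fp + fn, 1)
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return {"accuracy": accuracy, "precision": precision, "recall": recall}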
The embodiment of the invention trains the obstacle perception model by taking the example data of obstacle misrecognition and/or missed recognition as training sample data, so as to optimize the obstacle perception model and improve its accuracy and reliability; and tests the obstacle perception model by taking the example data of obstacle misrecognition and/or missed recognition as test sample data, so as to evaluate its recognition effect.
Example two
Fig. 2 is a flowchart of a method for extracting obstacle sensing error data according to a second embodiment of the present invention. The embodiment of the invention performs additional optimization on the basis of the technical scheme of each embodiment.
Furthermore, after the operation of obtaining the example data of missed obstacle recognition, the method additionally combines road network data and positioning data to screen the example data of missed obstacle recognition, so as to eliminate non-error data contained therein and thereby improve the purity of the obstacle perception error data.
The method for extracting obstacle sensing error data shown in fig. 2 includes:
s210, acquiring a drive test data set of the vehicle; the drive test data set comprises sensor data for sensing obstacles, obstacle sensing result data and manual driving behavior data.
S220, for a preset manual driving behavior in the manual driving behavior data, determining, according to the obstacle sensing result data, the obstacle sensing result data corresponding to the moment at which the preset manual driving behavior occurs.
S230, judging whether the obstacle sensing result data indicates that an obstacle is sensed, and if not, determining that the sensor data corresponding to the obstacle sensing result data is example data of missed obstacle recognition.
Wherein the preset manual driving behavior comprises: parking and detouring.
Specifically, when the manual driving behavior data includes a parking driving behavior, the obstacle sensing result data corresponding to the moment at which the driver exhibits the parking driving behavior is obtained; if that obstacle sensing result data is a result identifier indicating that no obstacle is sensed, the sensor data corresponding to the obstacle sensing result data is determined to be example data of missed obstacle recognition. It can be understood that the sensor data corresponding to the obstacle sensing result data is the sensor data corresponding to the moment at which the parking driving behavior occurs.
Specifically, when the manual driving behavior data includes a detour driving behavior, the obstacle sensing result data corresponding to the moment at which the driver exhibits the detour driving behavior is obtained; if that obstacle sensing result data is a result identifier indicating that no obstacle is sensed, the sensor data corresponding to the obstacle sensing result data is determined to be example data of missed obstacle recognition. It can be understood that the sensor data corresponding to the obstacle sensing result data is the sensor data corresponding to the moment at which the detour driving behavior occurs.
The example data can also comprise the corresponding actual perception result data when the obstacle is missed. The actual perception result data in the case of missed obstacle recognition is opposite in content to the obstacle sensing result data.
S240, screening the example data of missed obstacle recognition by combining road network data and positioning data.
Specifically, for each piece of example data of missed obstacle recognition, the traffic light information at the position of the vehicle at the moment when the preset manual driving behavior occurred is determined according to the positioning data and the road network data, and if the traffic light information indicates that passing straight through is prohibited, that piece of example data is deleted.
Specifically, the positioning data corresponding to the generation time of the sensor data included in each piece of example data is acquired; the position of the vehicle is determined from the positioning data, and the traffic light color information and traffic light driving direction information at that position are determined in combination with the road network data; if the traffic light color information is red or the traffic light driving direction information indicates a turn, the example data of missed obstacle recognition is deleted.
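A sketch of this screening step (Python); the locate and query_traffic_light lookups and their return values are assumed stand-ins for the positioning data and road network data queries.

def screen_missed_examples(examples, locate, query_traffic_light):
    # examples: missed-recognition examples carrying a "timestamp", as sketched above.
    # locate: a callable mapping a timestamp to a vehicle position (from the positioning data).
    # query_traffic_light: a callable mapping a position to (color, direction) from road network data.
    kept = []
    for ex in examples:
        position = locate(ex["timestamp"])
        color, direction = query_traffic_light(position)
        if color == "red" or direction == "turn":
            # Straight-through passage was prohibited, so the stop or detour was caused
            # by the traffic light rather than by a missed obstacle: delete this example.
            continue
        kept.append(ex)
    return kept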
When the traffic light information indicates that passage is prohibited at the moment the driver parks or detours and the obstacle recognition result senses no obstacle, the perception is in fact correct and would otherwise be misjudged as a missed recognition; the embodiment of the invention eliminates the example data determined in this situation, which improves the purity of the obstacle perception error data
and provides a good data basis for optimizing the obstacle perception model.
Example three
Fig. 3 is a schematic structural diagram of an apparatus for extracting obstacle sensing error data in a third embodiment of the present invention. The embodiment is applicable to the case where training samples for an obstacle perception model are collected in an open-loop manner under manual driving during the algorithm iteration development stage of an unmanned perception system. The apparatus is implemented in software and/or hardware and is configured in the unmanned vehicle. The apparatus for extracting obstacle sensing error data shown in fig. 3 includes: a drive test data set acquisition module 310 and an example data acquisition module 320.
A drive test data set acquisition module 310, configured to acquire a drive test data set of a vehicle; the drive test data set comprises sensor data for sensing obstacles, obstacle sensing result data and manual driving behavior data;
the example data obtaining module 320 is configured to obtain example data of obstacle misrecognition and/or obstacle missing recognition through comparative analysis of the obstacle sensing result data and the manual driving behavior data; wherein the example data includes sensor data.
In this embodiment, the drive test data set acquisition module acquires sensor data for sensing obstacles, obstacle sensing result data and manual driving behavior data of the vehicle; the example data acquisition module obtains example data of obstacle misrecognition and/or missed recognition through comparative analysis of the obstacle sensing result data and the manual driving behavior data, wherein the example data includes the sensor data. According to the technical scheme, a large amount of example data corresponding to missed recognition and/or misrecognition of obstacles during road testing of the vehicle is extracted, which provides data support for retraining the obstacle perception model, so that the obstacle perception model can be optimized, the accuracy and reliability of obstacle recognition are improved, and the potential safety hazard of the unmanned vehicle is reduced.
Further, the example data acquisition module 320 includes:
an obstacle misrecognition judging unit, configured to, for each piece of obstacle sensing result data, if the obstacle sensing result data indicates that an obstacle is sensed, determine, according to the manual driving behavior data, the manual driving behavior corresponding to the moment when the obstacle is sensed, and judge whether the manual driving behavior is a preset manual driving behavior; and if not, determine that the sensor data corresponding to the obstacle sensing result data is example data of obstacle misrecognition.
Further, the example data acquisition module 320 includes:
a missed obstacle recognition judging unit, configured to, for a preset manual driving behavior in the manual driving behavior data, determine, according to the obstacle sensing result data, the obstacle sensing result data corresponding to the moment at which the preset manual driving behavior occurs, judge whether that obstacle sensing result data indicates that an obstacle is sensed, and if not, determine that the sensor data corresponding to the obstacle sensing result data is example data of missed obstacle recognition.
Further, the preset manual driving behavior comprises: parking and detouring.
Further, the example data acquisition module 320 also includes an example data screening unit, specifically configured to:
after the obstacle missing identification judging unit obtains the example data of the obstacle missing identification, the example data of the obstacle missing identification is screened by combining the road network data and the positioning data.
Further, the example data screening unit includes:
and the example data deleting subunit is used for judging the traffic light information of the position of the vehicle at the moment of generating the preset manual driving behavior according to the positioning data and the road network data for each piece of example data of the missed obstacle identification, and deleting the example data of the missed obstacle identification if the traffic light information indicates that the straight line passing is forbidden.
Further, the apparatus further comprises:
the training module is used for training the obstacle perception model by taking the example data of obstacle misrecognition and/or missed obstacle recognition as training sample data after the example data is obtained; and/or,
the test module is used for testing the recognition effect of the obstacle perception model with the example data of obstacle misrecognition and/or missed obstacle recognition after the example data is obtained.
The device for extracting the obstacle sensing error data provided by the embodiment of the invention can execute the method for extracting the obstacle sensing error data provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the method for extracting the obstacle sensing error data.
Example four
Fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. FIG. 4 illustrates a block diagram of an exemplary electronic device 412 suitable for use in implementing embodiments of the present invention. The electronic device 412 shown in fig. 4 is only an example and should not bring any limitations to the functionality and scope of use of the embodiments of the present invention.
As shown in fig. 4, the electronic device 412 is in the form of a general purpose computing device. The components of the electronic device 412 may include, but are not limited to: one or more processors or processing units 416, a system memory 428, and a bus 418 that couples the various system components including the system memory 428 and the processing unit 416.
Bus 418 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 412 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 412 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 428 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)430 and/or cache memory 432. The electronic device 412 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 434 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 418 by one or more data media interfaces. Memory 428 can include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 440 having a set (at least one) of program modules 442 may be stored, for instance, in memory 428, such program modules 442 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. The program modules 442 generally perform the functions and/or methodologies of the described embodiments of the invention.
The electronic device 412 may also communicate with one or more external devices 414 (e.g., keyboard, pointing device, display 424, etc.), with one or more devices that enable a user to interact with the electronic device 412, and/or with any devices (e.g., network card, modem, etc.) that enable the electronic device 412 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 422. Also, the electronic device 412 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through the network adapter 420. As shown, network adapter 420 communicates with the other modules of electronic device 412 over bus 418. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 412, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 416 executes various functional applications and data processing by executing at least one program of the programs stored in the system memory 428, for example, to implement a method for extracting obstacle sensing error data according to an embodiment of the present invention.
The embodiment of the invention also provides a vehicle which comprises a vehicle body and the electronic equipment.
Example five
The fifth embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for extracting obstacle sensing error data provided in any embodiment of the present invention, the method comprising: acquiring a drive test data set of a vehicle; the drive test data set comprises sensor data for sensing obstacles, obstacle sensing result data and manual driving behavior data; and obtaining example data of obstacle misrecognition and/or missed obstacle recognition through comparative analysis of the obstacle perception result data and the manual driving behavior data; wherein the example data includes sensor data.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (8)

1. A method for extracting obstacle sensing error data, comprising:
acquiring a drive test data set of a vehicle; the drive test data set comprises sensor data for sensing obstacles, obstacle sensing result data and manual driving behavior data;
obtaining example data of obstacle misrecognition and/or missed obstacle recognition through comparative analysis of the obstacle perception result data and the manual driving behavior data; wherein the example data comprises sensor data;
wherein obtaining the example data of obstacle misrecognition through comparative analysis of the perception result data and the manual driving behavior data comprises:
for each piece of obstacle sensing result data, if the obstacle sensing result data indicates that an obstacle is sensed, determining, according to the manual driving behavior data, the manual driving behavior corresponding to the moment when the obstacle is sensed, and judging whether the manual driving behavior is a preset manual driving behavior; if not, determining that the sensor data corresponding to the obstacle sensing result data is example data of obstacle misrecognition;
wherein obtaining the example data of missed obstacle recognition through comparative analysis of the perception result data and the manual driving behavior data comprises:
and for a preset artificial driving behavior in the artificial driving behavior data, determining obstacle sensing result data corresponding to the moment of generating the preset artificial driving behavior according to the obstacle sensing result data, judging whether the obstacle sensing result data is the sensed obstacle, and if not, determining that sensor data corresponding to the obstacle sensing result data is example data of obstacle missing identification.
2. The method of claim 1, wherein the preset manual driving behavior comprises: parking and detouring.
3. The method of claim 1, further comprising, after obtaining the example data of missed obstacle recognition:
screening the example data of missed obstacle recognition by combining road network data and positioning data.
4. The method of claim 3, wherein the screening of the example data of missed obstacle recognition by combining road network data and positioning data comprises:
for each piece of example data of missed obstacle recognition, determining, according to the positioning data and the road network data, the traffic light information at the position of the vehicle at the moment when the preset manual driving behavior occurred, and deleting that piece of example data if the traffic light information indicates that passing straight through is prohibited.
5. The method according to claim 1, further comprising, after obtaining the example data of obstacle misrecognition and/or missed obstacle recognition:
training an obstacle perception model by taking the example data of obstacle misrecognition and/or missed obstacle recognition as training sample data; and/or,
testing the recognition effect of the obstacle perception model with the example data of obstacle misrecognition and/or missed obstacle recognition.
6. An obstacle sensing error data extraction device, comprising:
the drive test data set acquisition module is used for acquiring a drive test data set of the vehicle; the drive test data set comprises sensor data for sensing obstacles, obstacle sensing result data and manual driving behavior data;
the example data acquisition module is used for obtaining example data of obstacle misrecognition and/or missed obstacle recognition through comparative analysis of the obstacle perception result data and the manual driving behavior data; wherein the example data comprises sensor data;
wherein the example data acquisition module comprises:
an obstacle misrecognition judging unit, configured to, for each piece of obstacle sensing result data, if the obstacle sensing result data indicates that an obstacle is sensed, determine, according to the manual driving behavior data, the manual driving behavior corresponding to the moment when the obstacle is sensed, and judge whether the manual driving behavior is a preset manual driving behavior; and if not, determine that the sensor data corresponding to the obstacle sensing result data is example data of obstacle misrecognition;
wherein the example data acquisition module further comprises:
a missed obstacle recognition judging unit, configured to, for a preset manual driving behavior in the manual driving behavior data, determine, according to the obstacle sensing result data, the obstacle sensing result data corresponding to the moment at which the preset manual driving behavior occurs, judge whether that obstacle sensing result data indicates that an obstacle is sensed, and if not, determine that the sensor data corresponding to the obstacle sensing result data is example data of missed obstacle recognition.
7. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for extracting obstacle sensing error data according to any one of claims 1 to 5.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a method of extracting obstacle sensing error data according to any one of claims 1 to 5.
CN201811273698.9A 2018-10-30 2018-10-30 Method, device, equipment and medium for extracting obstacle perception error data Active CN109255341B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811273698.9A CN109255341B (en) 2018-10-30 2018-10-30 Method, device, equipment and medium for extracting obstacle perception error data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811273698.9A CN109255341B (en) 2018-10-30 2018-10-30 Method, device, equipment and medium for extracting obstacle perception error data

Publications (2)

Publication Number Publication Date
CN109255341A CN109255341A (en) 2019-01-22
CN109255341B true CN109255341B (en) 2021-08-10

Family

ID=65042913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811273698.9A Active CN109255341B (en) 2018-10-30 2018-10-30 Method, device, equipment and medium for extracting obstacle perception error data

Country Status (1)

Country Link
CN (1) CN109255341B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046569B (en) * 2019-04-12 2022-04-12 北京百度网讯科技有限公司 Unmanned driving data processing method and device and electronic equipment
CN110287832A (en) * 2019-06-13 2019-09-27 北京百度网讯科技有限公司 High-Speed Automatic Driving Scene barrier perception evaluating method and device
CN110853389B (en) * 2019-11-21 2022-03-18 白犀牛智达(北京)科技有限公司 Drive test monitoring system suitable for unmanned commodity circulation car
US10981577B1 (en) * 2019-12-19 2021-04-20 GM Global Technology Operations LLC Diagnosing perception system based on scene continuity
CN111177869B (en) * 2020-01-02 2023-09-01 北京百度网讯科技有限公司 Method, device and equipment for determining sensor layout scheme
CN112380942A (en) * 2020-11-06 2021-02-19 北京石头世纪科技股份有限公司 Method, device, medium and electronic equipment for identifying obstacle
CN112698421A (en) * 2020-12-11 2021-04-23 北京百度网讯科技有限公司 Evaluation method, device, equipment and storage medium for obstacle detection
CN112633518B (en) * 2021-01-25 2024-03-01 国汽智控(北京)科技有限公司 Automatic driving model training method and system based on multi-subject mutual learning
CN114357814B (en) * 2022-03-21 2022-05-31 禾多科技(北京)有限公司 Automatic driving simulation test method, device, equipment and computer readable medium
CN114563007B (en) * 2022-04-28 2022-07-29 新石器慧通(北京)科技有限公司 Obstacle motion state prediction method, obstacle motion state prediction device, electronic device, and storage medium
CN116434041B (en) * 2022-12-05 2024-06-21 北京百度网讯科技有限公司 Mining method, device and equipment for error perception data and automatic driving vehicle

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506733A (en) * 2017-08-28 2017-12-22 济南浪潮高新科技投资发展有限公司 A kind of obstacle recognition method, mist node and system
CN107571864A (en) * 2017-09-05 2018-01-12 百度在线网络技术(北京)有限公司 The collecting method and device of automatic driving vehicle
CN108508881A (en) * 2017-02-27 2018-09-07 北京百度网讯科技有限公司 Automatic Pilot control strategy method of adjustment, device, equipment and storage medium
CN108549366A (en) * 2018-05-04 2018-09-18 同济大学 Intelligent automobile road driving mapping experiment method parallel with virtual test

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9933264B2 (en) * 2015-04-06 2018-04-03 Hrl Laboratories, Llc System and method for achieving fast and reliable time-to-contact estimation using vision and range sensor data for autonomous navigation
JP6543520B2 (en) * 2015-07-02 2019-07-10 株式会社トプコン Survey data processing apparatus, survey data processing method and program for survey data processing
CN108162973B (en) * 2016-12-07 2020-07-28 法法汽车(中国)有限公司 Device for improving automatic driving reliability and automatic driving system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108508881A (en) * 2017-02-27 2018-09-07 北京百度网讯科技有限公司 Automatic Pilot control strategy method of adjustment, device, equipment and storage medium
CN107506733A (en) * 2017-08-28 2017-12-22 济南浪潮高新科技投资发展有限公司 A kind of obstacle recognition method, mist node and system
CN107571864A (en) * 2017-09-05 2018-01-12 百度在线网络技术(北京)有限公司 The collecting method and device of automatic driving vehicle
CN108549366A (en) * 2018-05-04 2018-09-18 同济大学 Intelligent automobile road driving mapping experiment method parallel with virtual test

Also Published As

Publication number Publication date
CN109255341A (en) 2019-01-22

Similar Documents

Publication Publication Date Title
CN109255341B (en) Method, device, equipment and medium for extracting obstacle perception error data
CN109343061B (en) Sensor calibration method and device, computer equipment, medium and vehicle
JP6811282B2 (en) Automatic data labeling used in self-driving cars
CN109145680B (en) Method, device and equipment for acquiring obstacle information and computer storage medium
JP6738932B2 (en) System and method for training machine learning models located on a simulation platform
US11400928B2 (en) Driverless vehicle testing method and apparatus, device and storage medium
CN109116374B (en) Method, device and equipment for determining distance of obstacle and storage medium
CN107153363B (en) Simulation test method, device, equipment and readable medium for unmanned vehicle
CN109598066B (en) Effect evaluation method, apparatus, device and storage medium for prediction module
CN109188438B (en) Yaw angle determination method, device, equipment and medium
US20200082553A1 (en) Method and apparatus for generating three-dimensional data, device, and storage medium
CN111998860B (en) Automatic driving positioning data verification method and device, electronic equipment and storage medium
CN108508881B (en) Automatic driving control strategy adjusting method, device, equipment and storage medium
CN111959495B (en) Vehicle control method and device and vehicle
CN109345829B (en) Unmanned vehicle monitoring method, device, equipment and storage medium
CN109835260B (en) Vehicle information display method, device, terminal and storage medium
CN111104842A (en) Computer-aided or autonomous driving traffic sign recognition method and device
JP6808775B2 (en) Object tracking using multiple queues
CN110097121B (en) Method and device for classifying driving tracks, electronic equipment and storage medium
WO2020147500A1 (en) Ultrasonic array-based obstacle detection result processing method and system
CN109558854B (en) Obstacle sensing method and device, electronic equipment and storage medium
CN112015178B (en) Control method, device, equipment and storage medium
CN109635861B (en) Data fusion method and device, electronic equipment and storage medium
CN109637148B (en) Vehicle-mounted whistling monitoring system, method, storage medium and equipment
CN109297725B (en) Vehicle boundary capability testing method, device, equipment, medium and vehicle

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211019

Address after: 105 / F, building 1, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085

Patentee after: Apollo Intelligent Technology (Beijing) Co.,Ltd.

Address before: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing

Patentee before: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co.,Ltd.

TR01 Transfer of patent right