CN118306426A - Method and device for determining monitoring information, vehicle and storage medium - Google Patents


Info

Publication number
CN118306426A
Authority
CN
China
Prior art keywords
monitoring
model
information
decision
driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410597902.1A
Other languages
Chinese (zh)
Inventor
何文 (He Wen)
杨越 (Yang Yue)
周宏伟 (Zhou Hongwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202410597902.1A
Publication of CN118306426A
Legal status: Pending

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application relates to a method and a device for determining monitoring information, a vehicle, and a storage medium, in the technical field of automatic driving. The method comprises the following steps: acquiring driving environment information of a vehicle; inputting the driving environment information into an automatic driving perception model to obtain characteristic information of the automatic driving perception model, wherein the characteristic information is obtained by the automatic driving perception model through a deep learning model; and inputting the driving environment information and the characteristic information into a first evaluation model to determine at least one piece of perception monitoring information, wherein the first evaluation model comprises at least one of the following: a first operational design domain (ODD) monitoring model, a first saliency monitoring model, a first rationality monitoring model, a first uncertainty monitoring model, and a first adversarial monitoring model. One monitoring model corresponds to one piece of perception monitoring information, and the perception monitoring information is used for indicating whether the automatic driving perception model has a deficiency. In this way, accuracy monitoring of different aspects of the deep learning model can be realized through the different monitoring models, in preparation for subsequently improving the accuracy of the deep learning model.

Description

Method and device for determining monitoring information, vehicle and storage medium
Technical Field
The application relates to the technical field of automatic driving, and in particular to a method and a device for determining monitoring information, a vehicle, and a storage medium.
Background
Advanced automatic driving is human-machine co-driving: the automatic driving system operates normally under normal working conditions, but when an emergency occurs, the system exits and the driver needs to take over. Functional safety is an integral part of the overall safety of a system or device. When a functional fault or failure occurs (such as a hardware fault or a software fault), the system enters a safe, controllable mode, so that casualties are avoided. However, for an automatic driving system, even if no hardware component fails and the algorithm contains no software error, various edge scenarios within the operational design domain (ODD) can push a function of the system beyond its boundary conditions, causing misjudgment and a safety failure.
At present, safety of the intended functionality (SOTIF) refers to reducing the unacceptable risk of system failure caused either by a deficiency of the intended function (a design failure or a performance limitation) or by reasonably foreseeable misuse by personnel. Because deep learning models are widely applied in automatic driving systems, and an inaccurate result of a deep learning model is neither a hardware fault nor a software fault, the accuracy of the deep learning model also falls within the scope of SOTIF. Therefore, how to monitor the accuracy of a deep learning model is an urgent problem to be solved.
Disclosure of Invention
The application provides a method and a device for determining monitoring information, a vehicle, and a storage medium, which at least solve the technical problem in the related art of how to monitor the accuracy of a deep learning model. The technical scheme of the application is as follows:
According to a first aspect of the present application, there is provided a method of determining monitoring information, the method being applied to a controller of a vehicle, the controller being deployed with an automatic driving perception model. The method comprises the following steps: acquiring driving environment information of the vehicle; inputting the driving environment information into the automatic driving perception model to obtain characteristic information of the automatic driving perception model, wherein the characteristic information is obtained by the automatic driving perception model through a deep learning model; and inputting the driving environment information and the characteristic information into a first evaluation model to determine at least one piece of perception monitoring information, wherein the first evaluation model comprises at least one of the following: a first operational design domain (ODD) monitoring model, a first saliency monitoring model, a first rationality monitoring model, a first uncertainty monitoring model, and a first adversarial monitoring model. One monitoring model corresponds to one piece of perception monitoring information, and the perception monitoring information is used for indicating whether the automatic driving perception model has a deficiency.
The first ODD monitoring model is used for monitoring whether the driving environment information meets the automatic driving start condition, the first saliency monitoring model is used for monitoring whether the image detection result of the automatic driving perception model is accurate, the first rationality monitoring model is used for monitoring whether the target detection result of the automatic driving perception model is reasonable, the first uncertainty monitoring model is used for monitoring whether the target detection result of the automatic driving perception model is stable, and the first adversarial monitoring model is used for monitoring whether the driving environment information contains noise.
By means of the above technical means, the controller can monitor the automatic driving perception model through different monitoring models to obtain corresponding monitoring information, and can subsequently optimize the automatic driving perception model according to the monitoring information, adjusting the automatic driving perception model in the automatic driving system so that the perception result of the automatic driving system is more accurate.
In one possible implementation, the first evaluation model includes a first ODD monitoring model, and the first ODD monitoring model includes an automatic driving start condition. The "inputting the driving environment information and the characteristic information into the first evaluation model to determine the at least one piece of perception monitoring information" includes: determining, through the first ODD monitoring model, first monitoring information according to the driving environment information and the automatic driving start condition, and using the first monitoring information as perception monitoring information, wherein the first monitoring information is used for indicating whether the driving environment information meets the automatic driving start condition.
By means of the above technical means, the controller can determine, through the first ODD monitoring model, whether the driving environment information of the vehicle meets the automatic driving start condition, so as to determine whether to start the automatic driving system, thereby ensuring the safety of the vehicle and the driver.
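As an illustrative sketch of such a first ODD monitoring check: the field names, thresholds, and ODD bounds below are assumptions made for the example, not values from the application.

```python
# Hypothetical ODD start-condition check. The condition fields and the
# specific bounds are illustrative assumptions, not values from the patent.

ODD_START_CONDITIONS = {
    "max_speed_kph": 120.0,          # assumed design-domain speed ceiling
    "allowed_weather": {"clear", "cloudy", "light_rain"},
    "allowed_road_types": {"highway", "expressway"},
}

def odd_monitor(environment: dict, conditions: dict = ODD_START_CONDITIONS) -> dict:
    """Return first monitoring information: whether the driving environment
    satisfies the automatic driving start condition, and which checks failed."""
    failures = []
    if environment["speed_kph"] > conditions["max_speed_kph"]:
        failures.append("speed")
    if environment["weather"] not in conditions["allowed_weather"]:
        failures.append("weather")
    if environment["road_type"] not in conditions["allowed_road_types"]:
        failures.append("road_type")
    return {"within_odd": not failures, "failed_checks": failures}
```

With this shape, the controller could decline to start (or request a takeover from) the automatic driving system whenever `within_odd` is false.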
In one possible embodiment, the first evaluation model comprises a first saliency monitoring model, and the characteristic information comprises a driving environment image marked with a first salient region. The "inputting the driving environment information and the characteristic information into the first evaluation model to determine the at least one piece of perception monitoring information" includes: inputting the driving environment image into the first saliency monitoring model to obtain a driving environment image marked with a second salient region; and determining, through the first saliency monitoring model, second monitoring information according to the first salient region and the second salient region, and using the second monitoring information as perception monitoring information, wherein the second monitoring information is used for indicating whether the image detection result of the automatic driving perception model is accurate.
By means of the above technical means, the controller can determine, through the first saliency monitoring model, whether the image detection result of the automatic driving perception model is accurate, so as to determine whether the automatic driving perception model needs to be optimized, making the image detection result of the automatic driving perception model more accurate.
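One plausible way to compare the two marked salient regions is an intersection-over-union test; the axis-aligned box representation and the 0.5 threshold are assumptions of this sketch, not details from the application.

```python
def region_iou(region_a, region_b):
    """Intersection-over-union of two salient regions given as
    (x_min, y_min, x_max, y_max) axis-aligned boxes."""
    ax0, ay0, ax1, ay1 = region_a
    bx0, by0, bx1, by1 = region_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

def saliency_monitor(first_region, second_region, iou_threshold=0.5):
    """Second monitoring information: the image detection result is deemed
    accurate when the perception model's salient region overlaps the
    monitoring model's salient region sufficiently."""
    return region_iou(first_region, second_region) >= iou_threshold
```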
In one possible implementation, the first assessment model includes a first rationality monitoring model, the characteristic information includes a target object, and the first rationality monitoring model includes a preset standard rule for the target object. The "inputting the driving environment information and the feature information into the first evaluation model to determine the at least one perception monitoring information" includes: and obtaining third monitoring information through the first rationality monitoring model according to the target object and a preset standard rule of the target object, and taking the third monitoring information as perception monitoring information, wherein the third monitoring information is used for indicating whether a target detection result of the automatic driving perception model is reasonable or not.
By means of the above technical means, the controller can determine, through the first rationality monitoring model, whether the target detection result of the automatic driving perception model is reasonable, so as to determine whether the automatic driving perception model needs to be optimized, making the target detection result of the automatic driving perception model more reasonable.
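A minimal sketch of such a rationality check against preset standard rules; the object classes and attribute bounds below are illustrative assumptions, not values from the application.

```python
# Hypothetical preset standard rules per target-object class.
STANDARD_RULES = {
    "pedestrian": {"height_m": (0.5, 2.5), "speed_mps": (0.0, 12.0)},
    "car":        {"height_m": (1.0, 2.5), "speed_mps": (0.0, 70.0)},
}

def rationality_monitor(target: dict, rules: dict = STANDARD_RULES) -> bool:
    """Third monitoring information: a detection is reasonable only when every
    measured attribute lies within the preset bounds for its class."""
    bounds = rules.get(target["class"])
    if bounds is None:
        return False  # unknown class: treat as an unreasonable detection
    return all(lo <= target[attr] <= hi for attr, (lo, hi) in bounds.items())
```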
In one possible implementation, the first evaluation model includes a first uncertainty monitoring model, the characteristic information includes a target object of each of a plurality of driving scene images, and a similarity between any two driving scene images of the plurality of driving scene images is greater than a preset similarity threshold. The "inputting the driving environment information and the characteristic information into the first evaluation model to determine the at least one piece of perception monitoring information" includes: inputting the target object of each driving scene image into the first uncertainty monitoring model to obtain the confidence of each target object; and determining, through the first uncertainty monitoring model, fourth monitoring information according to the confidence of each target object, and using the fourth monitoring information as perception monitoring information, wherein the fourth monitoring information is used for indicating whether the target detection result of the automatic driving perception model is stable.
By means of the above technical means, the controller can determine, through the first uncertainty monitoring model, whether the target detection result of the automatic driving perception model is stable, so as to determine whether the automatic driving perception model needs to be optimized, making the target detection result of the automatic driving perception model more stable.
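The stability check can be sketched as a spread test over the confidences that near-identical driving scenes produce for the same target; the 0.05 threshold is an assumed value, not one from the application.

```python
from statistics import pstdev

def uncertainty_monitor(confidences, max_std=0.05):
    """Fourth monitoring information: across driving scene images whose
    mutual similarity exceeds the preset threshold, the same target should
    receive near-identical confidence. A large standard deviation of the
    confidences flags an unstable target detection result."""
    return pstdev(confidences) <= max_std
```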
In one possible embodiment, the first evaluation model comprises a first adversarial monitoring model, and the driving environment information comprises a plurality of driving environment images. The "inputting the driving environment information and the characteristic information into the first evaluation model to determine the at least one piece of perception monitoring information" includes: inputting the plurality of driving environment images into the first adversarial monitoring model to obtain fifth monitoring information, and using the fifth monitoring information as perception monitoring information, wherein the fifth monitoring information is used for indicating whether noise exists in the plurality of driving environment images.
By means of the above technical means, the controller can determine, through the first adversarial monitoring model, whether noise exists in the plurality of driving environment images, so as to determine whether the automatic driving perception model needs to be optimized, reducing the adversarial noise in the input data of the automatic driving perception model.
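As one hedged sketch of how such a noise check could work, a large frame-to-frame residual between consecutive driving environment images can be treated as possible adversarial or sensor noise. The nested-list grayscale image format and the residual threshold are assumptions of the example, not the application's method.

```python
def noise_monitor(frames, max_residual=10.0):
    """Fifth monitoring information sketch: consecutive driving environment
    frames should change smoothly; a large mean absolute frame-to-frame
    residual is flagged as possible noise. `frames` is a list of equally
    sized grayscale images given as nested lists of pixel values."""
    residuals = []
    for prev, cur in zip(frames, frames[1:]):
        diffs = [abs(a - b)
                 for row_p, row_c in zip(prev, cur)
                 for a, b in zip(row_p, row_c)]
        residuals.append(sum(diffs) / len(diffs))
    return {"noisy": any(r > max_residual for r in residuals),
            "residuals": residuals}
```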
According to a second aspect of the present application, there is provided a method of determining monitoring information, the method being applied to a controller of a vehicle, the controller being deployed with a decision planning model. The method comprises the following steps: acquiring driving state information of the vehicle and characteristic information of the automatic driving perception model; inputting the driving state information and the characteristic information into the decision planning model to obtain decision information of the decision planning model, wherein the decision information is obtained by the decision planning model through a deep learning model; and inputting the driving state information and the decision information into a second evaluation model to determine at least one piece of decision monitoring information, wherein the second evaluation model comprises at least one of the following: a second operational design domain (ODD) monitoring model, a second significance monitoring model, a second rationality monitoring model, a second uncertainty monitoring model, and a second adversarial monitoring model. One monitoring model corresponds to one piece of decision monitoring information, and the decision monitoring information is used for indicating whether the decision planning model has a deficiency.
The second ODD monitoring model is used for monitoring whether the driving state information meets the driving function condition, the second significance monitoring model is used for monitoring whether the target decision rule of the decision planning model is accurate, the second rationality monitoring model is used for monitoring whether the decision result of the decision planning model is reasonable, the second uncertainty monitoring model is used for monitoring whether the decision result of the decision planning model is stable, and the second adversarial monitoring model is used for monitoring whether noise exists in the driving state information.
By means of the above technical means, the controller can monitor the decision planning model through different monitoring models to obtain corresponding monitoring information, and can subsequently optimize the decision planning model according to the monitoring information, adjusting the decision planning model in the automatic driving system so that the decision result of the automatic driving system is more accurate.
In one possible implementation, the second assessment model includes a second ODD monitoring model that includes driving function conditions. The "inputting the driving state information and the decision information into the second evaluation model, and determining the at least one decision monitoring information" includes: and determining sixth monitoring information according to the driving state information and the driving function condition through the second ODD monitoring model, and taking the sixth monitoring information as decision monitoring information, wherein the sixth monitoring information is used for indicating whether the driving state information meets the driving function condition or not.
In one possible implementation, the second assessment model comprises a second significance monitoring model, the decision information comprising a plurality of target decisions and a weight value for each target decision, the second significance monitoring model comprising a preset decision rule. The "inputting the driving state information and the decision information into the second evaluation model, and determining the at least one decision monitoring information" includes: and inputting the multiple target decisions and the weight value of each target decision into a second significance monitoring model to obtain a target decision rule. And determining seventh monitoring information through the second significance monitoring model according to the target decision rule and a preset decision rule, and taking the seventh monitoring information as decision monitoring information, wherein the seventh monitoring information is used for indicating whether the target decision rule of the decision planning model is accurate or not.
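A minimal sketch of this seventh monitoring check, assuming the target decision rule is simply the ranking of the target decisions induced by their weight values; the decision names and the comparison against the preset decision rule are illustrative assumptions.

```python
def significance_monitor(decisions: dict, preset_rule: list) -> dict:
    """Seventh monitoring information sketch: derive the target decision rule
    as the decisions ordered by descending weight value, then judge the
    decision planning model accurate when that ordering matches the preset
    decision rule."""
    derived_rule = [name for name, _ in
                    sorted(decisions.items(), key=lambda kv: kv[1], reverse=True)]
    return {"target_rule": derived_rule, "accurate": derived_rule == preset_rule}
```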
In one possible implementation, the second evaluation model comprises a second rationality monitoring model, the decision information comprises a target decision, and the second rationality monitoring model comprises a preset standard rule for the target decision. The "inputting the driving state information and the decision information into the second evaluation model to determine the at least one piece of decision monitoring information" includes: obtaining, through the second rationality monitoring model, eighth monitoring information according to the target decision and the preset standard rule for the target decision, and using the eighth monitoring information as decision monitoring information, wherein the eighth monitoring information is used for indicating whether the decision result of the decision planning model is reasonable.
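The eighth monitoring check can be sketched as bounds-checking each commanded quantity of the target decision against the preset standard rule; the dynamic limits below are assumptions of the example, not values from the application.

```python
# Hypothetical preset standard rule for a target decision: assumed limits on
# commanded longitudinal acceleration and steering angle.
DECISION_LIMITS = {"accel_mps2": (-8.0, 3.0), "steer_deg": (-30.0, 30.0)}

def decision_rationality_monitor(decision: dict, limits: dict = DECISION_LIMITS) -> bool:
    """A decision result is reasonable when each commanded quantity stays
    inside its preset bound."""
    return all(lo <= decision[key] <= hi for key, (lo, hi) in limits.items())
```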
In one possible implementation, the second evaluation model includes a second uncertainty monitoring model, the decision information includes a plurality of target decisions, and a similarity between the driving scene information corresponding to any two target decisions of the plurality of target decisions is greater than a preset similarity threshold. The "inputting the driving state information and the decision information into the second evaluation model to determine the at least one piece of decision monitoring information" includes: inputting the plurality of target decisions into the second uncertainty monitoring model to obtain the confidence of each target decision; and determining, through the second uncertainty monitoring model, ninth monitoring information according to the confidence of each target decision, and using the ninth monitoring information as decision monitoring information, wherein the ninth monitoring information is used for indicating whether the decision result of the decision planning model is stable.
In one possible embodiment, the second evaluation model comprises a second adversarial monitoring model. The "inputting the driving state information and the decision information into the second evaluation model to determine the at least one piece of decision monitoring information" includes: inputting the driving state information into the second adversarial monitoring model to obtain tenth monitoring information, and using the tenth monitoring information as decision monitoring information, wherein the tenth monitoring information is used for indicating whether noise exists in the driving state information.
According to a third aspect of the present application, there is provided a device for determining monitoring information, the device being applied to a controller of a vehicle, the controller being deployed with an automatic driving perception model. The device comprises: an acquisition unit and a processing unit.
The acquisition unit is configured to acquire driving environment information of the vehicle. The processing unit is configured to input the driving environment information into the automatic driving perception model to obtain characteristic information of the automatic driving perception model, wherein the characteristic information is obtained by the automatic driving perception model through a deep learning model. The processing unit is further configured to input the driving environment information and the characteristic information into a first evaluation model and determine at least one piece of perception monitoring information, the first evaluation model comprising at least one of the following: a first operational design domain (ODD) monitoring model, a first saliency monitoring model, a first rationality monitoring model, a first uncertainty monitoring model, and a first adversarial monitoring model, wherein one monitoring model corresponds to one piece of perception monitoring information, and the perception monitoring information is used for indicating whether the automatic driving perception model has a deficiency. The first ODD monitoring model is used for monitoring whether the driving environment information meets the automatic driving start condition, the first saliency monitoring model is used for monitoring whether the image detection result of the automatic driving perception model is accurate, the first rationality monitoring model is used for monitoring whether the target detection result of the automatic driving perception model is reasonable, the first uncertainty monitoring model is used for monitoring whether the target detection result of the automatic driving perception model is stable, and the first adversarial monitoring model is used for monitoring whether the driving environment information contains noise.
In one possible implementation, the first evaluation model includes a first ODD monitoring model, and the first ODD monitoring model includes an automatic driving start condition. The processing unit is specifically configured to determine, through the first ODD monitoring model, first monitoring information according to the driving environment information and the automatic driving start condition, and use the first monitoring information as perception monitoring information, where the first monitoring information is used to indicate whether the driving environment information meets the automatic driving start condition.
In one possible embodiment, the first evaluation model comprises a first saliency monitoring model, and the characteristic information comprises a driving environment image marked with a first salient region. The processing unit is specifically configured to input the driving environment image into the first saliency monitoring model to obtain a driving environment image marked with a second salient region, and to determine, through the first saliency monitoring model, second monitoring information according to the first salient region and the second salient region, and use the second monitoring information as perception monitoring information, where the second monitoring information is used to indicate whether the image detection result of the automatic driving perception model is accurate.
In one possible implementation, the first evaluation model includes a first rationality monitoring model, the characteristic information includes a target object, and the first rationality monitoring model includes a preset standard rule for the target object. The processing unit is specifically configured to obtain, through the first rationality monitoring model, third monitoring information according to the target object and the preset standard rule for the target object, and use the third monitoring information as perception monitoring information, where the third monitoring information is used to indicate whether the target detection result of the automatic driving perception model is reasonable.
In one possible implementation, the first evaluation model includes a first uncertainty monitoring model, the characteristic information includes a target object of each of a plurality of driving scene images, and a similarity between any two driving scene images of the plurality of driving scene images is greater than a preset similarity threshold. The processing unit is specifically configured to input the target object of each driving scene image into the first uncertainty monitoring model to obtain the confidence of each target object. The processing unit is specifically configured to determine, through the first uncertainty monitoring model, fourth monitoring information according to the confidence of each target object, and use the fourth monitoring information as perception monitoring information, where the fourth monitoring information is used to indicate whether the target detection result of the automatic driving perception model is stable.
In one possible embodiment, the first evaluation model comprises a first adversarial monitoring model, and the driving environment information comprises a plurality of driving environment images. The processing unit is specifically configured to input the plurality of driving environment images into the first adversarial monitoring model to obtain fifth monitoring information, and use the fifth monitoring information as perception monitoring information, where the fifth monitoring information is used to indicate whether noise exists in the plurality of driving environment images.
According to a fourth aspect of the present application, there is provided a device for determining monitoring information, the device being applied to a controller of a vehicle, the controller being deployed with a decision-making planning model. The device comprises: an acquisition unit and a processing unit.
The acquisition unit is configured to acquire driving state information of the vehicle and characteristic information of the automatic driving perception model. The processing unit is configured to input the driving state information and the characteristic information into the decision planning model to obtain decision information of the decision planning model, wherein the decision information is obtained by the decision planning model through a deep learning model. The processing unit is further configured to input the driving state information and the decision information into a second evaluation model and determine at least one piece of decision monitoring information, the second evaluation model comprising at least one of the following: a second operational design domain (ODD) monitoring model, a second significance monitoring model, a second rationality monitoring model, a second uncertainty monitoring model, and a second adversarial monitoring model, wherein one monitoring model corresponds to one piece of decision monitoring information, and the decision monitoring information is used for indicating whether the decision planning model has a deficiency. The second ODD monitoring model is used for monitoring whether the driving state information meets the driving function condition, the second significance monitoring model is used for monitoring whether the target decision rule of the decision planning model is accurate, the second rationality monitoring model is used for monitoring whether the decision result of the decision planning model is reasonable, the second uncertainty monitoring model is used for monitoring whether the decision result of the decision planning model is stable, and the second adversarial monitoring model is used for monitoring whether noise exists in the driving state information.
In one possible implementation, the second evaluation model includes a second ODD monitoring model, and the second ODD monitoring model includes a driving function condition. The processing unit is specifically configured to determine, through the second ODD monitoring model, sixth monitoring information according to the driving state information and the driving function condition, and use the sixth monitoring information as decision monitoring information, where the sixth monitoring information is used to indicate whether the driving state information meets the driving function condition.
In one possible implementation, the second assessment model comprises a second significance monitoring model, the decision information comprising a plurality of target decisions and a weight value for each target decision, the second significance monitoring model comprising a preset decision rule. The processing unit is specifically configured to input a plurality of target decisions and a weight value of each target decision into the second significance monitoring model, so as to obtain a target decision rule. The processing unit is specifically configured to determine seventh monitoring information according to the target decision rule and a preset decision rule through the second significance monitoring model, and use the seventh monitoring information as decision monitoring information, where the seventh monitoring information is used to indicate whether the target decision rule of the decision planning model is accurate.
In one possible implementation, the second evaluation model comprises a second rationality monitoring model, the decision information comprises a target decision, and the second rationality monitoring model comprises a preset standard rule for the target decision. The processing unit is specifically configured to obtain, through the second rationality monitoring model, eighth monitoring information according to the target decision and the preset standard rule for the target decision, and use the eighth monitoring information as decision monitoring information, where the eighth monitoring information is used to indicate whether the decision result of the decision planning model is reasonable.
In one possible implementation, the second evaluation model includes a second uncertainty monitoring model, the decision information includes a plurality of target decisions, and a similarity between the driving scene information corresponding to any two target decisions of the plurality of target decisions is greater than a preset similarity threshold. The processing unit is specifically configured to input the plurality of target decisions into the second uncertainty monitoring model to obtain the confidence of each target decision. The processing unit is specifically configured to determine, through the second uncertainty monitoring model, ninth monitoring information according to the confidence of each target decision, and use the ninth monitoring information as decision monitoring information, where the ninth monitoring information is used to indicate whether the decision result of the decision planning model is stable.
In one possible embodiment, the second evaluation model comprises a second adversarial monitoring model. The processing unit is specifically configured to input the driving state information into the second adversarial monitoring model to obtain tenth monitoring information, and use the tenth monitoring information as decision monitoring information, where the tenth monitoring information is used to indicate whether noise exists in the driving state information.
According to a fifth aspect of the present application, there is provided a vehicle comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute instructions to implement the method of the first aspect and any of its possible embodiments described above.
According to a sixth aspect of the present application, there is provided a computer-readable storage medium having stored thereon instructions which, when executed by a processor of a vehicle, enable the vehicle to perform the method of the first aspect and any one of its possible embodiments.
According to a seventh aspect of the present application there is provided a computer program product comprising computer instructions which, when run on a vehicle, cause the vehicle to perform the method of the first aspect and any one of its possible embodiments.
Therefore, the technical characteristics of the application have the following beneficial effects:
(1) The automatic driving perception model can be monitored through different models to obtain corresponding monitoring information, the follow-up controller can optimize the automatic driving perception model according to the monitoring information, and the automatic driving perception model in the automatic driving system is adjusted, so that the perception result of the automatic driving system is more accurate.
(2) The decision planning model can be monitored through different models to obtain corresponding monitoring information, the follow-up controller can optimize the decision planning model according to the monitoring information, and the decision planning model in the automatic driving system is adjusted, so that the decision result of the automatic driving system is more accurate.
It should be noted that, for the technical effects of any implementation of the second to seventh aspects, reference may be made to the technical effects of the corresponding implementation in the first aspect, which are not repeated here.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the principles of the application and do not constitute an undue limitation on the application.
FIG. 1 is a schematic illustration of a causal relationship for vehicle hazard behavior, according to an exemplary embodiment;
FIG. 2 is a schematic diagram of an architecture of a monitoring information determination system, according to an exemplary embodiment;
FIG. 3 is a flow chart illustrating a method of determining monitoring information according to an exemplary embodiment;
FIG. 4 is a model schematic diagram illustrating monitoring for deficiencies of a deep learning model according to an exemplary embodiment;
FIG. 5 is a flowchart illustrating another method of determining monitoring information, according to an example embodiment;
FIG. 6 is a schematic diagram illustrating a deep learning model run phase monitoring link according to an exemplary embodiment;
FIG. 7 is a schematic diagram of an architecture of an autopilot system shown in accordance with one exemplary embodiment;
FIG. 8 is a block diagram illustrating a means for determining monitoring information according to an exemplary embodiment;
FIG. 9 is a block diagram of another monitoring information determination device, according to an example embodiment;
fig. 10 is a block diagram of a vehicle, according to an exemplary embodiment.
Detailed Description
In order to enable a person skilled in the art to better understand the technical solutions of the present application, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
Before describing the method for determining the monitoring information in detail, the implementation environment and application scenarios of the embodiments of the present application are described.
Advanced automatic driving is human-machine co-driving: the automatic driving system can operate normally under normal working conditions, but when an emergency occurs, the automatic driving system exits and the driver needs to take over. Functional safety is an integral part of the overall safety of a system or device. When the system has a functional fault or failure (such as a hardware fault or a software fault), the system enters a safe, controllable mode so that casualties are avoided. However, for an automatic driving system, even if no hardware component fails and the algorithm has no software error, running in various edge scenarios within the operational design domain (ODD) may trigger conditions that the system's functions cannot satisfy, causing misjudgment and safety failure.
Currently, intended functional safety refers to reducing the unacceptable risk of system failure due to either an intended functional deficiency (design failure or performance limitation) or predictable personnel mishandling. Because the deep learning model is widely applied in the automatic driving system, and an inaccurate result of the deep learning model belongs to neither a hardware fault nor a software fault, the accuracy of the deep learning model also needs to be a focus of intended functional safety. Therefore, how to monitor the accuracy of the deep learning model is an urgent problem to be solved.
In the embodiment of the application, the defects of the deep learning model can be divided into two categories, namely the defects of specification and performance. As shown in fig. 1, a causal relationship diagram of a vehicle's dangerous behavior is shown. Insufficient specifications or insufficient performance of the deep learning model may cause insufficient driving functions of the automatic driving system, and the triggering conditions of the environment may activate the insufficient driving functions of the automatic driving system, so that the vehicle may have dangerous behaviors.
The specification deficiencies of the deep learning model include: insufficient quality and completeness of the data used for training the deep learning model, and insufficient task interpretability of the deep learning model. Data quality and completeness are defined according to the task the model needs to solve, and data for training the deep learning model should be prepared to be as sufficient and diverse as possible.
It should be noted that the interpretability of the deep learning model is not directly related to its performance. The deep learning models with better performance in the industry at present are often large models with hundreds of millions of neuron weight parameters, so it is very difficult to describe the relationship between their inputs and outputs.
The performance deficiencies of the deep learning model include: insufficient accuracy and robustness of the deep learning model, and insufficient uncertainty representation of the deep learning model. Accuracy indicates the ability of the deep learning model to output correct results; robustness indicates the ability of the deep learning model to still give correct results when faced with inputs from different scenarios; uncertainty indicates the difference between the deep learning model's output and the actual result, and how that difference is represented.
In order to solve the above-mentioned problems, an embodiment of the present application provides a method for determining monitoring information, including: the controller may obtain feature data of the deep learning model, the feature data including: the method comprises the steps of inputting data of a deep learning model and outputting data of the deep learning model, wherein the input data are data acquired through a vehicle sensor, and the output data are data obtained through the deep learning model by an automatic driving system. The controller may input the feature data into an evaluation model, determining at least one perceptually monitored information, the evaluation model comprising at least one of: the ODD monitoring model, the significance monitoring model, the rationality monitoring model and the uncertainty monitoring model are characterized in that one model corresponds to one piece of perception monitoring information, and the perception monitoring information is used for indicating whether the deep learning model is insufficient or not. The ODD monitoring model is used for monitoring the accuracy of a scene prediction result of the deep learning model, the significance monitoring model is used for monitoring the accuracy of an image detection result of the deep learning model, the rationality monitoring model is used for monitoring the accuracy of a target detection result of the deep learning model, and the uncertainty monitoring model is used for monitoring the reliability of the prediction result of the deep learning model.
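The monitoring flow described above — feature data of the deep learning model passed through each configured monitoring model, each yielding one piece of perception monitoring information — can be sketched as follows. The class structure, field names, and example monitors are illustrative assumptions, not the claimed implementation.

```python
# Sketch of the evaluation-model dispatch: each configured monitor maps
# feature data to one piece of perception monitoring information.
# All names and the example monitors are illustrative assumptions.
from typing import Callable, Dict, List

Monitor = Callable[[dict], dict]

def evaluate(feature_data: dict, monitors: Dict[str, Monitor]) -> List[dict]:
    """One monitoring model -> one piece of perception monitoring information."""
    return [{"monitor": name, **monitor(feature_data)}
            for name, monitor in monitors.items()]

monitors = {
    "odd":         lambda f: {"ok": f["sensor_state"] == "normal"},
    "saliency":    lambda f: {"ok": f["region_overlap"] >= 0.9},
    "rationality": lambda f: {"ok": f["object_length_m"] < 6.0},
}

results = evaluate({"sensor_state": "normal",
                    "region_overlap": 0.95,
                    "object_length_m": 4.5}, monitors)
print(len(results))  # one piece of monitoring information per monitor: 3
```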
The following describes an implementation environment of an embodiment of the present application.
Fig. 2 is a schematic architecture diagram of a system for determining monitoring information according to an exemplary embodiment, where the system for determining monitoring information includes: a controller 201, and a vehicle sensor 202. The controller 201 is in wired/wireless communication with the vehicle sensors 202.
The controller 201 may be an autopilot controller, and is configured to receive data collected by vehicle sensors. The controller may be deployed with an autopilot system, and the controller may include a deep learning model. The controller 201 can fuse, identify and classify the data acquired by the vehicle sensor through the deep learning model, and plan and decide the path of the vehicle by combining with map positioning, so as to realize accurate control and automatic driving of the vehicle. The controller 201 may also monitor the deep learning model.
The vehicle sensors 202 may be devices deployed on the vehicle such as lidar, cameras, a global positioning system (GPS), and inertial navigation. The vehicle sensors 202 may be used to collect data on the driving environment, driving position, driving state, etc. during the driving of the vehicle. The vehicle sensors 202 may also be used to send the collected data to the controller 201.
For easy understanding, the following describes a method for determining monitoring information provided by the present application in detail with reference to the accompanying drawings. Fig. 3 is a flow chart illustrating a method of determining monitoring information, according to an exemplary embodiment, as shown in fig. 3, the method comprising the steps of:
s301, the controller acquires running environment information of the vehicle.
In one possible implementation, the controller is coupled to a vehicle sensor. The vehicle sensor may collect driving environment information of the vehicle. The vehicle sensor may send the running environment information to the controller. The controller may receive running environment information from the vehicle sensor to acquire the running environment information of the vehicle.
In the embodiment of the present application, the driving environment information may include at least one of: environmental information, road information, traffic identification information, pedestrian information during the running of the vehicle.
S302, the controller inputs the driving environment information into the automatic driving perception model to obtain the characteristic information of the automatic driving perception model.
The characteristic information is information obtained by the automatic driving perception model through the deep learning model.
In one possible implementation, the controller is deployed with an autopilot awareness model. The controller may input driving environment information of the vehicle into the automatic driving perception model to obtain feature information of the automatic driving perception model.
In an embodiment of the present application, the feature information may include at least one of the following: vehicle driving scene, vehicle driving state, vehicle driving function.
The vehicle driving scenario may be, for example, a high-speed road section scenario or an urban road section scenario. The vehicle driving state may be a high-speed running state, or a stationary state with the engine still running. The vehicle driving function may be a high-speed running function or an automatic lane-changing function.
S303, the controller inputs the driving environment information and the characteristic information into a first evaluation model to determine at least one piece of perception monitoring information.
In an embodiment of the application, the first evaluation model comprises at least one of: a first operational design domain (operational design domain, ODD) monitoring model, a first saliency monitoring model, a first rationality monitoring model, a first uncertainty monitoring model, a first contrast monitoring model.
The first ODD monitoring model is used to monitor whether the driving environment information satisfies the automatic driving starting condition, the first saliency monitoring model is used to monitor whether the image detection result of the automatic driving perception model is accurate, the first rationality monitoring model is used to monitor whether the target detection result of the automatic driving perception model is reasonable, the first uncertainty monitoring model is used to monitor whether the scene detection result of the automatic driving perception model is stable, and the first antagonism monitoring model is used to monitor whether noise exists in the driving environment information.
In one possible implementation, the controller may input the driving environment information and the feature information into the first evaluation model, determine at least one perception monitoring information, one perception monitoring information corresponding to each monitoring model.
That is, if the first evaluation model includes the first ODD monitoring model, the controller may determine one piece of perception monitoring information. If the first evaluation model includes the first ODD monitoring model and the first saliency monitoring model, the controller may determine two pieces of perception monitoring information. If the first rationality monitoring model is also included, the controller may determine three pieces of perception monitoring information; with the first uncertainty monitoring model added, four pieces; and with the first antagonism monitoring model added as well, five pieces.
It is understood that the controller may acquire the running environment information of the vehicle. The controller may input the driving environment information into the automatic driving perception model to obtain the characteristic information of the automatic driving perception model, where the characteristic information is information obtained by the automatic driving perception model through the deep learning model. The controller may input the driving environment information and the characteristic information into a first evaluation model to determine at least one piece of perception monitoring information, the first evaluation model including at least one of: the first ODD monitoring model, the first saliency monitoring model, the first rationality monitoring model, the first uncertainty monitoring model, and the first antagonism monitoring model, where one monitoring model corresponds to one piece of perception monitoring information, and the perception monitoring information is used to indicate whether the automatic driving perception model is deficient. The first ODD monitoring model is used to monitor whether the driving environment information satisfies the automatic driving starting condition, the first saliency monitoring model is used to monitor whether the image detection result of the automatic driving perception model is accurate, the first rationality monitoring model is used to monitor whether the target detection result of the automatic driving perception model is reasonable, the first uncertainty monitoring model is used to monitor whether the scene detection result of the automatic driving perception model is stable, and the first antagonism monitoring model is used to monitor whether noise exists in the driving environment information.
Therefore, the automatic driving perception model can be monitored through different models to obtain corresponding monitoring information, the follow-up controller can optimize the automatic driving perception model according to the monitoring information, and the automatic driving perception model in the automatic driving system is adjusted, so that the perception result of the automatic driving system is more accurate.
In some embodiments, as shown in FIG. 4, a model schematic is shown that monitors for deficiencies in the deep learning model. The specification deficiencies of the deep learning model include: insufficient data completeness, poor data quality, and insufficient model interpretability; the performance deficiencies of the deep learning model include: insufficient model accuracy, poor robustness, and lack of uncertainty representation. Therefore, the controller can monitor the deep learning model through the evaluation model to obtain multiple pieces of perception monitoring information. The evaluation model includes at least one of: an ODD monitoring model, a saliency monitoring model, a rationality monitoring model, an antagonism monitoring model, and an uncertainty monitoring model.
Therefore, the deficiency of the deep learning model is monitored through the evaluation model, corresponding monitoring information can be obtained, the deep learning model can be optimized according to the monitoring information, or the control decision of the automatic driving system can be adjusted, so that the driving safety of the vehicle can be improved.
In some embodiments, the first assessment model includes a first ODD monitoring model, which may include an automatic driving starting condition in order to monitor whether the driving environment information of the vehicle satisfies the automatic driving starting condition. The controller inputs the driving environment information and the characteristic information into a first evaluation model to determine at least one piece of perception monitoring information (S303), including: the controller may determine first monitoring information through the first ODD monitoring model according to the driving environment information and the automatic driving starting condition, and use the first monitoring information as perception monitoring information.
The first monitoring information is used for indicating whether the driving environment information meets the automatic driving starting condition.
In the embodiment of the present application, the driving environment information may include: the sensor state and the ambient illuminance. The automatic driving starting condition may include: the sensor state is a normal state, and the ambient illuminance is greater than or equal to a preset illuminance.
In one possible implementation, the controller may determine whether the driving environment information satisfies the autopilot start condition through the first ODD monitoring model to determine the first monitoring information.
Specifically, the first monitoring information may be first sub-information, or the first monitoring information may be second sub-information.
In one possible design, the controller may determine whether the sensor state is a normal state through the first ODD monitoring model. If the sensor state is an abnormal state, the controller can determine first sub-information through the first ODD monitoring model, wherein the first sub-information is used for indicating that the driving environment information does not meet the automatic driving starting condition.
In another possible design, if the sensor state is a normal state, the controller may compare the ambient illuminance and the preset illuminance through the first ODD monitoring model. If the ambient illuminance is less than the preset illuminance, the controller may determine, through the first ODD monitoring model, first sub-information, where the first sub-information is used to indicate that the driving environment information does not meet the automatic driving starting condition.
In the embodiment of the application, if the ambient illuminance is greater than or equal to the preset illuminance, the controller may determine, through the first ODD monitoring model, second sub-information, where the second sub-information is used to indicate that the driving environment information meets the automatic driving starting condition.
That is, if the sensor state is a normal state and the ambient illuminance is greater than or equal to the preset illuminance, the controller may determine the second sub-information through the first ODD monitoring model.
It should be noted that, in the embodiment of the present application, if the controller determines the first sub-information through the first ODD monitoring model, that is, determines that the driving environment information does not satisfy the automatic driving starting condition, the controller may send an alarm signal through the first ODD monitoring model and record the starting score value. If the controller determines the second sub-information through the first ODD monitoring model, the controller may start the automatic driving system.
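The start-condition check performed by the first ODD monitoring model can be sketched as follows. The illuminance threshold, names, and return values below are illustrative assumptions, not the claimed implementation.

```python
# Sketch of the first ODD monitoring model's start-condition check:
# automatic driving may start only if the sensor state is normal and the
# ambient illuminance reaches a preset value. Names/values are assumptions.
PRESET_ILLUMINANCE_LUX = 500.0  # illustrative threshold

def odd_monitor(sensor_state: str, ambient_lux: float) -> str:
    """Return first monitoring information as sub-information."""
    if sensor_state != "normal":
        return "first_sub"   # condition not satisfied: alarm, record score
    if ambient_lux < PRESET_ILLUMINANCE_LUX:
        return "first_sub"   # condition not satisfied: alarm, record score
    return "second_sub"      # condition satisfied: autopilot may start

print(odd_monitor("normal", 800.0))    # second_sub
print(odd_monitor("abnormal", 800.0))  # first_sub
print(odd_monitor("normal", 100.0))    # first_sub
```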
It can be appreciated that the controller may determine, through the first ODD monitoring model, first monitoring information according to the driving environment information and the automatic driving start condition, and use the first monitoring information as sensing monitoring information, where the first monitoring information is used to indicate whether the driving environment information meets the automatic driving start condition. In this way, the controller may determine whether the running environment information satisfies the automatic driving start condition to determine whether to start the automatic driving system.
In some embodiments, the first assessment model includes a first saliency monitoring model, and the feature information includes a driving environment image marking the first salient region in order to monitor whether an image detection result of the automatic driving perception model is accurate. The controller inputs the driving environment information and the characteristic information into a first evaluation model to determine at least one perception monitoring information (S303), including: the controller can input the driving environment image into the first saliency monitoring model to obtain the driving environment image of the marked second salient region.
In one possible design, the autopilot awareness model may include a method of data modeling that simulates the visual attention mechanism of a human. The automatic driving perception model can calculate the importance degree of each object to be detected in the driving environment image through a visual attention mechanism, and takes the region of the object to be detected with the largest importance degree in the driving environment image as the first significant region of the target object in the driving environment image.
In the embodiment of the application, the first saliency monitoring model can obtain the driving environment image of the marked second saliency area through a preset identification method.
In the embodiment of the present application, the preset recognition method is not limited. For example, the preset recognition method may be a histogram of oriented gradients (HOG) method. For another example, the preset recognition method may be an image pyramid method. For another example, the preset recognition method may be a sliding window method.
Then, the controller can determine second monitoring information according to the first salient region and the second salient region through the first salient monitoring model, and the second monitoring information is used as perception monitoring information.
The second monitoring information is used for indicating whether the image detection result of the automatic driving perception model is accurate or not.
It should be noted that, in the embodiment of the present application, the driving environment image may be obtained by photographing through a vehicle camera, and the driving environment image may include a plurality of objects to be detected. The plurality of objects to be measured may include: vehicles, traffic signs, buildings, pedestrians, etc. The first saliency monitoring model is used to monitor whether the autopilot awareness model prioritizes information that should be of particular interest when processing data.
Specifically, the controller may determine the degree of overlap of the first salient region and the second salient region through the first saliency monitoring model. The controller may determine the second monitoring information according to the degree of overlap and a preset overlap threshold through the first saliency monitoring model.
The second monitoring information may be the third sub information, or the second monitoring information may be the fourth sub information.
In one possible design, if the overlapping degree is greater than or equal to the preset overlapping degree threshold, the controller may determine third sub-information through the saliency monitoring model, where the third sub-information is used to indicate that the image detection result of the autopilot sensing model is accurate.
It should be noted that, in the embodiment of the present application, if the overlapping degree is greater than or equal to the preset overlapping degree threshold, it is indicated that the first significant region detected by the autopilot sensing model is similar to the second significant region detected by the first significant monitoring model, that is, the autopilot sensing model is the same as the target object detected by the first significant monitoring model.
For example, when the vehicle turns, the controller may acquire a driving environment image in front of the vehicle through the front camera, and the driving environment image may include: pedestrians, traffic lines, other vehicles, traffic lights, trees. If the first significant region detected by the autopilot awareness model includes a pedestrian, the second significant region detected by the first significance monitoring model includes a pedestrian, the degree of overlap of the first significant region and the second significant region is 95%, and the preset degree of overlap threshold is 90%. Because the overlapping degree is larger than the preset overlapping degree threshold value, the controller can determine the third sub-information through the first saliency monitoring model.
In another possible design, if the overlapping degree is smaller than the preset overlapping degree threshold, the controller may determine fourth sub-information through the saliency monitoring model, where the fourth sub-information is used to indicate that the image detection result of the autopilot perception model is inaccurate.
For example, when the vehicle turns, the controller may acquire a driving environment image in front of the vehicle through the front camera, and the driving environment image may include: pedestrians, traffic lines, other vehicles, traffic lights, trees. If the first salient region detected by the automatic driving perception model comprises a tree, the second salient region detected by the first salient monitoring model comprises a pedestrian, the overlapping degree of the first salient region and the second salient region is 20%, and the preset overlapping degree threshold is 90%. Because the overlapping degree is smaller than the preset overlapping degree threshold value, the controller can determine fourth sub-information through the first saliency monitoring model and record a saliency scoring value.
In the embodiment of the present application, the preset overlap threshold is not limited. For example, the preset overlap threshold may be 80%. For another example, the preset overlap threshold may be 90%. For another example, the preset overlap threshold may be 85%.
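The overlap comparison performed by the first saliency monitoring model can be sketched as below. Intersection-over-union is one common way to measure region overlap; the exact metric, box format, and threshold here are assumptions for illustration.

```python
# Sketch of the salient-region overlap check: compare the perception
# model's salient region with the monitor's own salient region.
# Boxes are (x1, y1, x2, y2); IoU and the threshold are assumptions.
def overlap(box_a, box_b) -> float:
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def saliency_monitor(region_a, region_b, threshold=0.9) -> bool:
    """Second monitoring information: is the image detection result accurate?"""
    return overlap(region_a, region_b) >= threshold

# Nearly identical regions exceed the threshold; disjoint ones do not.
print(saliency_monitor((0, 0, 10, 10), (0, 0, 10, 10)))    # True
print(saliency_monitor((0, 0, 10, 10), (20, 20, 30, 30)))  # False
```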
It is understood that the controller may input the driving environment image into the first saliency monitoring model to obtain a driving environment image marking the second salient region. The controller can determine second monitoring information through the first saliency monitoring model according to the first saliency area and the second saliency area, and the second monitoring information is used as perception monitoring information and used for indicating whether an image detection result of the automatic driving perception model is accurate or not. Therefore, the controller can monitor whether the image detection result of the automatic driving perception model is accurate or not.
In some embodiments, the first assessment model includes a first rationality monitoring model, the feature information includes a target object, and the first rationality monitoring model includes preset criteria rules for the target object in order to monitor whether a target detection result of the autopilot awareness model is reasonable. The controller inputs the driving environment information and the characteristic information into a first evaluation model to determine at least one perception monitoring information (S303), including: the controller can obtain third monitoring information through the first rationality monitoring model according to the target object and the preset standard rule of the target object, and the third monitoring information is used as perception monitoring information.
The third monitoring information is used for indicating whether the target detection result of the automatic driving perception model is reasonable.
It should be noted that, in the embodiment of the present application, the target object is a target detection result of the autopilot perception model. For example, the target object may be a pedestrian, a tree, a vehicle, a traffic sign, or the like.
In an embodiment of the present application, the third monitoring information includes: fifth sub information or sixth sub information.
In one possible design, if the target object meets the preset standard rule of the target object, the controller may determine fifth sub-information through the first rationality monitoring model, where the fifth sub-information is used to indicate that the target detection result of the automatic driving perception model is reasonable.
For example, if the target object is a car, the preset standard rules of the target object include: the car body length of the car is less than 6 meters, the car width is less than 2 meters, and the height is less than 2.5 meters. If the target object meets the preset standard rule, the controller can determine fifth sub-information through the first rationality monitoring model.
In another possible design, if the target object does not meet the preset standard rule of the target object, the controller may determine sixth sub-information through the first rationality monitoring model and record a rationality score value, where the sixth sub-information is used to indicate that the target detection result of the automatic driving perception model is unreasonable.
It can be understood that the controller can obtain third monitoring information through the first rationality monitoring model according to the target object and the preset standard rule of the target object, and the third monitoring information is used as perception monitoring information, and the third monitoring information is used for indicating whether the target detection result of the automatic driving perception model is reasonable or not. Therefore, the controller can monitor whether the target detection result of the automatic driving perception model is reasonable or not.
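As a concrete illustration of the rule check described above, the following Python sketch compares a detected car's attributes against the preset standard rules. The numeric bounds mirror the example (length less than 6 m, width less than 2 m, height less than 2.5 m), but the dict-based representation, attribute names, and the fraction-based rationality score are assumptions of this sketch, not part of the patent text.

```python
# Illustrative sketch of the first rationality monitoring model's rule check.
# Attribute names and the fraction-based score are hypothetical.

CAR_RULES = {"length_m": 6.0, "width_m": 2.0, "height_m": 2.5}  # upper bounds from the example

def check_rationality(target_object: dict, rules: dict) -> tuple[bool, float]:
    """Return (is_reasonable, rationality_score).

    is_reasonable corresponds to the fifth sub-information (all rules met);
    otherwise the sixth sub-information applies and the score can be recorded.
    """
    passed = sum(
        1 for key, bound in rules.items()
        if target_object.get(key, float("inf")) < bound
    )
    score = passed / len(rules)
    return passed == len(rules), score
```

For a detected car of 4.8 m by 1.9 m by 1.6 m all three rules hold, so the check corresponds to the fifth sub-information; a 7.5 m "car" fails the length rule and would yield the sixth sub-information together with a recorded rationality score.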
In some embodiments, in order to monitor whether the target detection result of the automatic driving perception model is stable, the first evaluation model includes a first uncertainty monitoring model, the feature information includes a target object of each of a plurality of driving scene images, and the similarity between any two of the plurality of driving scene images is greater than a preset similarity threshold. The step in which the controller inputs the driving environment information and the feature information into the first evaluation model to determine at least one piece of perception monitoring information (S303) includes: the controller may input the target object of each driving scene image into the first uncertainty monitoring model to obtain a confidence level of the target object in each driving scene image.
In the embodiment of the application, the controller can determine fourth monitoring information according to the confidence level of each target object through the first uncertainty monitoring model, and use the fourth monitoring information as perception monitoring information for indicating whether the target detection result of the automatic driving perception model is stable or not.
The fourth monitoring information may be seventh sub information, or the fourth monitoring information may be eighth sub information.
Specifically, the confidence levels of the plurality of target objects follow a normal distribution. The controller may determine the variance of the normal distribution through the first uncertainty monitoring model, and may determine through the first uncertainty monitoring model whether the variance of the normal distribution satisfies the Pauta criterion (3σ rule).
In one possible design, if the variance of the normal distribution meets the Pauta criterion, the controller may determine seventh sub-information through the first uncertainty monitoring model, where the seventh sub-information is used to indicate that the target detection result of the automatic driving perception model is stable.
It should be noted that, in the embodiment of the present application, if the variance of the normal distribution satisfies the Pauta criterion, this indicates that, for similar scene images, the target objects identified by the automatic driving perception model are the same or similar, and the target detection result of the automatic driving perception model is stable.
In another possible design, if the variance of the normal distribution does not meet the Pauta criterion, the controller may determine eighth sub-information through the first uncertainty monitoring model and record a stability score value, where the eighth sub-information is used to indicate that the target detection result of the automatic driving perception model is unstable.
It will be appreciated that the controller may input the target object for each driving scenario image into the first uncertainty monitoring model, resulting in a confidence level for each target object. The controller can determine fourth monitoring information through the first uncertainty monitoring model according to the confidence coefficient of each target object, and the fourth monitoring information is used as perception monitoring information and used for indicating whether a target detection result of the automatic driving perception model is stable or not. Thus, the controller can monitor whether the target detection result of the automatic driving perception model is stable.
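The stability check can be sketched as follows. This is a minimal interpretation of the Pauta criterion (3σ rule) applied to detection confidences gathered across similar scenes; the plain-float representation of confidences is an assumption of this sketch.

```python
import statistics

def check_stability(confidences: list[float], sigma_factor: float = 3.0) -> bool:
    """Pauta-criterion (3σ) sketch for the first uncertainty monitoring model.

    The detection result is treated as stable (seventh sub-information) when
    every confidence lies within mean ± 3σ of the sample; a confidence that
    falls outside the band marks the result unstable (eighth sub-information).
    """
    mu = statistics.fmean(confidences)
    sigma = statistics.pstdev(confidences)
    return all(abs(c - mu) <= sigma_factor * sigma for c in confidences)
```

Confidences that merely fluctuate around 0.9 pass the check, while a single detection that drops to 0.1 among otherwise consistent scenes falls outside the 3σ band and flags the model as unstable.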
In some embodiments, in order to monitor whether noise is present in the plurality of driving environment images, the first evaluation model includes a first adversarial monitoring model, and the driving environment information includes a plurality of driving environment images. The step in which the controller inputs the driving environment information and the feature information into the first evaluation model to determine at least one piece of perception monitoring information (S303) includes: the controller may input the plurality of driving environment images into the first adversarial monitoring model to obtain fifth monitoring information, and use the fifth monitoring information as perception monitoring information, where the fifth monitoring information is used to indicate whether noise exists in the plurality of driving environment images.
In one possible design, if there is a blurred region (i.e., a region whose sharpness is less than a preset sharpness threshold) in the driving environment image, the controller may determine through the first adversarial monitoring model that the driving environment image has noise. Alternatively, if misidentified content exists in the driving environment image, the controller may determine through the first adversarial monitoring model that the driving environment image has noise.
In the embodiment of the present application, if the fifth monitoring information specifically indicates that noise exists in the plurality of driving environment images, the controller may record a noise score value.
In the embodiment of the present application, the first adversarial monitoring model is not limited. For example, the first adversarial monitoring model may be a feature learning model, which performs dimension reduction on the input data to convert high-dimensional input data into low-dimensional data, so as to reduce the detection difficulty of the deep learning model and improve its accuracy. For another example, the first adversarial monitoring model may be an input dissociation model, which decomposes the input data to obtain input data that does not contain the adversarial component, thereby improving the accuracy of the deep learning model. For another example, the first adversarial monitoring model may perform noise reduction on the adversarial data to obtain data with the noise eliminated, thereby improving the accuracy of the deep learning model.
It may be appreciated that the controller may input a plurality of driving environment images into the first adversarial monitoring model to obtain fifth monitoring information, and use the fifth monitoring information as perception monitoring information, where the fifth monitoring information is used to indicate whether noise exists in the plurality of driving environment images. Thus, the controller can monitor whether noise exists in the plurality of driving environment images.
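One way the blurred-region test could be realized is a Laplacian-variance sharpness measure. The concrete operator, the pure-Python image representation, and the threshold value below are assumptions of this sketch, not details given by the patent.

```python
def laplacian_variance(img: list[list[float]]) -> float:
    """Variance of a 4-neighbour discrete Laplacian over the image interior;
    low values indicate little high-frequency content, i.e. a blurred image."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            responses.append(
                img[y - 1][x] + img[y + 1][x] + img[y][x - 1] + img[y][x + 1]
                - 4 * img[y][x]
            )
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def image_has_noise(img: list[list[float]], sharpness_threshold: float = 10.0) -> bool:
    """Flag the image as noisy when its sharpness falls below the threshold,
    matching the blurred-region condition in the design above."""
    return laplacian_variance(img) < sharpness_threshold
```

A perfectly flat image has zero Laplacian variance and is flagged as blurred, while a high-contrast pattern produces a large variance and passes the sharpness test.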
It should be noted that, in the embodiment of the present application, the controller may determine the score sum according to the start score value, the saliency score value, the rationality score value, the stability score value, and the noise score value. If the score sum is less than the preset score threshold, the controller may generate degradation alarm information used to suggest a degradation operation. If the score sum is greater than or equal to the preset score threshold, the controller may generate shutdown alarm information used to suggest shutting down the autopilot system and prompting the driver to perform an inspection.
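The score aggregation and alarm decision just described can be sketched as below. The score names follow the text, while the numeric threshold and the unweighted sum are illustrative assumptions.

```python
def choose_alarm(scores: dict[str, float], score_threshold: float = 3.0) -> str:
    """Sum the recorded deficiency scores and pick the alarm per the design:
    a sum below the threshold suggests a degradation operation, otherwise
    shutdown of the autopilot system plus an inspection prompt."""
    total = sum(scores.values())
    return "degradation" if total < score_threshold else "shutdown"
```

With only mild deficiencies recorded the sum stays below the threshold and a degradation is suggested; once enough monitors have recorded scores, the sum crosses the threshold and shutdown is suggested instead.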
In some embodiments, the decision planning model in the automatic driving system is used to plan a corresponding path to a preset destination according to the perception result of the automatic driving perception model, and to comprehensively decide the behavior of the vehicle. As shown in fig. 5, fig. 5 is a flowchart illustrating another method of determining monitoring information according to an exemplary embodiment. In order to make the decision result of the decision planning model more accurate, the controller is deployed with a decision planning model, and the method of determining the monitoring information includes the following steps: S501-S503.
S501, the controller acquires the running state information of the vehicle and the characteristic information of the automatic driving perception model.
In one possible implementation, the controller is coupled to a vehicle sensor. The vehicle sensor may collect driving state information of the vehicle. The vehicle sensor may send driving state information to the controller. The controller may receive the driving state information from the vehicle sensor to acquire the driving state information of the vehicle.
In the embodiment of the present application, the driving state information may include at least one of: the vehicle speed, steering angle of steering wheel, steering angle acceleration, road information, vehicle position and recognition result of target object in the running process of the vehicle.
S502, the controller inputs the driving state information and the characteristic information into a decision planning model to obtain decision information of the decision planning model.
The decision information is information obtained by the decision planning model through the deep learning model.
In one possible implementation, the controller is deployed with a decision planning model. The controller may input the driving state information of the vehicle and the feature information into the decision planning model to obtain decision information of the decision planning model.
Illustratively, the decision information may relate to an automatic lane change function of the vehicle, including the lane change speed, the steering angle of the steering wheel, and the like. Alternatively, the decision information may relate to a high-speed driving function of the vehicle, including the running speed, the running acceleration, and the like. Alternatively, the decision information may relate to an automatic turning function of the vehicle, including the turning angle, the turning speed, and the like.
S503, the controller inputs the driving state information and the decision information into a second evaluation model to determine at least one decision monitoring information.
In an embodiment of the application, the second evaluation model comprises at least one of: a second operational design domain (operational design domain, ODD) monitoring model, a second significance monitoring model, a second rationality monitoring model, a second uncertainty monitoring model, and a second adversarial monitoring model.
The second ODD monitoring model is used for monitoring whether the driving state information meets the driving function conditions, the second significance monitoring model is used for monitoring whether the target decision rule of the decision planning model is accurate, the second rationality monitoring model is used for monitoring whether the decision result of the decision planning model is reasonable, the second uncertainty monitoring model is used for monitoring whether the decision result of the decision planning model is stable, and the second adversarial monitoring model is used for monitoring whether noise exists in the driving state information.
In one possible implementation, the controller may input the driving state information and the decision information into the second evaluation model, determine at least one decision monitoring information, one decision monitoring information corresponding to each monitoring model.
It is understood that the controller may acquire the driving state information of the vehicle and the feature information of the automatic driving perception model. The controller inputs the driving state information and the feature information into a decision planning model to obtain decision information of the decision planning model, where the decision information is information obtained by the decision planning model through a deep learning model. The controller may input the driving state information and the decision information into a second evaluation model to determine at least one piece of decision monitoring information, the second evaluation model including at least one of: the second operational design domain ODD monitoring model, the second significance monitoring model, the second rationality monitoring model, the second uncertainty monitoring model, and the second adversarial monitoring model, where one monitoring model corresponds to one piece of decision monitoring information, and the decision monitoring information is used to indicate whether the decision planning model is deficient. The second ODD monitoring model is used for monitoring whether the driving state information meets the driving function conditions, the second significance monitoring model is used for monitoring whether the target decision rule of the decision planning model is accurate, the second rationality monitoring model is used for monitoring whether the decision result of the decision planning model is reasonable, the second uncertainty monitoring model is used for monitoring whether the decision result of the decision planning model is stable, and the second adversarial monitoring model is used for monitoring whether noise exists in the driving state information.
Therefore, the decision planning model can be monitored through different models to obtain corresponding monitoring information, the follow-up controller can optimize the decision planning model according to the monitoring information, and the decision planning model in the automatic driving system is adjusted, so that the decision result of the automatic driving system is more accurate.
In some embodiments, in order to monitor whether the driving state information of the vehicle satisfies the driving function conditions, the second evaluation model includes a second ODD monitoring model, which may include the driving function conditions. The step in which the controller inputs the driving state information and the decision information into the second evaluation model to determine at least one piece of decision monitoring information (S503) includes: the controller can determine sixth monitoring information through the second ODD monitoring model according to the driving state information and the driving function conditions, and use the sixth monitoring information as decision monitoring information.
The sixth monitoring information is used for indicating whether the driving state information meets the driving function condition.
In the embodiment of the present application, the driving function condition is a standard condition corresponding to an automatic driving function.
For example, if the highway NOA (Navigate on Autopilot) function is started, the decision result of the decision planning model is the overtaking function. If the driving state information of the vehicle indicates that another vehicle exists in the left front of the vehicle, while the driving function condition (namely, the condition of the overtaking function) requires that the left front be an open lane, the second ODD monitoring model can determine that the driving state information of the vehicle does not meet the driving function condition.
It can be appreciated that the controller may determine, through the second ODD monitoring model, sixth monitoring information according to the driving state information and the driving function condition, and use the sixth monitoring information as decision monitoring information, where the sixth monitoring information is used to indicate whether the driving state information meets the driving function condition. In this way, the controller may determine whether the driving state information satisfies the driving function condition to determine whether to start the automatic driving system.
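A minimal sketch of the second ODD monitoring model's check, using the overtaking example above; the zone names and the string encoding of the condition are hypothetical choices of this sketch.

```python
def satisfies_driving_function(state: dict, condition: dict) -> bool:
    """Sixth-monitoring-information sketch: every zone required by the driving
    function condition must match the observed driving state of the vehicle."""
    return all(state.get(zone) == required for zone, required in condition.items())

# Hypothetical encoding of "the left front must be an open lane" for overtaking.
OVERTAKE_CONDITION = {"left_front": "open_lane"}
```

With another vehicle occupying the left-front zone the condition is not met, matching the example where the overtaking function may not be engaged.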
In some embodiments, in order to monitor whether the target decision rule of the decision planning model is accurate, the second evaluation model comprises a second significance monitoring model, the decision information comprises a plurality of target decisions and a weight value for each target decision, and the second significance monitoring model comprises a preset decision rule. The step in which the controller inputs the driving state information and the decision information into the second evaluation model to determine at least one piece of decision monitoring information (S503) includes: the controller may input the plurality of target decisions and the weight value of each target decision into the second significance monitoring model to obtain a target decision rule.
The weight value is used for reflecting the importance degree of the target decision in the process of space-time combined decision planning of the decision planning model.
Specifically, the controller can sort the plurality of target decisions based on the weight value of each target decision through the second significance monitoring model to obtain a target decision rule.
In the embodiment of the application, the controller can determine the seventh monitoring information through the second significance monitoring model according to the target decision rule and the preset decision rule, and the seventh monitoring information is used as the decision monitoring information and used for indicating whether the target decision rule of the decision planning model is accurate or not.
It should be noted that, in the embodiment of the present application, the preset decision rule may be a built-in rule of the autopilot system. For example, the preset decision rule may include: personnel safety is preferentially ensured (i.e., pedestrians are considered first). For another example, the preset decision rule may include: when making an unprotected left turn, the vehicle should actively yield to oncoming vehicles going straight.
In one possible implementation, the controller may determine, through the second significance monitoring model, the compliance between the target decision rule and the preset decision rule. The controller can compare the compliance with a preset compliance threshold through the second significance monitoring model to determine the seventh monitoring information.
The seventh monitoring information may be the ninth sub information, or the seventh monitoring information may be the tenth sub information.
In one possible design, if the compliance is greater than or equal to a preset compliance threshold, the controller may determine ninth sub-information through the second significance monitoring model, where the ninth sub-information is used to indicate that the target decision rule of the decision planning model is accurate.
In another possible design, if the compliance is less than the preset compliance threshold, the controller may determine tenth sub-information through the second significance monitoring model, where the tenth sub-information is used to indicate that the target decision rule of the decision planning model is inaccurate.
In the embodiment of the present application, the preset compliance threshold is not limited. For example, the preset compliance threshold may be 90%; for another example, it may be 85% or 95%.
It will be appreciated that the controller may input a plurality of target decisions and a weight value for each target decision into the second significance monitoring model to derive a target decision rule. The controller can determine seventh monitoring information through the second significance monitoring model according to the target decision rule and the preset decision rule, and the seventh monitoring information is used as decision monitoring information and used for indicating whether the target decision rule of the decision planning model is accurate or not. Thus, the controller can monitor whether the target decision rule of the decision planning model is accurate or not.
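The weight-based ordering and the compliance comparison could look like the sketch below. The position-wise match metric is one plausible reading of "compliance"; the text does not specify how compliance is computed.

```python
def target_decision_rule(weighted_decisions: dict[str, float]) -> list[str]:
    """Sort target decisions by descending weight to obtain the learned rule,
    mirroring the ordering step of the second significance monitoring model."""
    return sorted(weighted_decisions, key=weighted_decisions.get, reverse=True)

def compliance(target_rule: list[str], preset_rule: list[str]) -> float:
    """Fraction of positions where the learned ordering agrees with the preset
    decision rule; compared against a preset compliance threshold, e.g. 90%."""
    matches = sum(t == p for t, p in zip(target_rule, preset_rule))
    return matches / len(preset_rule)
```

A learned ordering that exactly matches the preset rule has 100% compliance (ninth sub-information); swapping two priorities drops the compliance below a 90% threshold (tenth sub-information).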
In some embodiments, in order to monitor whether the decision result of the decision planning model is reasonable, the second evaluation model comprises a second rationality monitoring model, the decision information comprises a target decision, and the second rationality monitoring model comprises preset standard rules for the target decision. The step in which the controller inputs the driving state information and the decision information into the second evaluation model to determine at least one piece of decision monitoring information (S503) includes: the controller can obtain eighth monitoring information through the second rationality monitoring model according to the target decision and the preset standard rules of the target decision, and use the eighth monitoring information as decision monitoring information.
The eighth monitoring information is used for indicating whether the decision result of the decision planning model is reasonable.
For example, if the target decision obtained by the decision planning model is a path around an obstacle, the second rationality monitoring model will examine the speed, acceleration, steering angle, and steering angle acceleration corresponding to the target decision. For example, the preset standard rule for detouring may be that the steering angle is less than 35 degrees and the rate of change of the steering angle acceleration with speed is less than 1 degree per second squared.
Specifically, the eighth monitoring information may be eleventh sub information, or the eighth monitoring information may be twelfth sub information.
In one possible design, if the target decision meets the preset standard rule of the target decision, the controller may determine eleventh sub-information through the second rationality monitoring model, where the eleventh sub-information is used to indicate that the decision result of the decision planning model is reasonable.
In another possible design, if the target decision does not meet the preset standard rule of the target decision, the controller may determine twelfth sub-information through the second rationality monitoring model, where the twelfth sub-information is used to indicate that the decision result of the decision planning model is unreasonable.
It can be understood that the controller can obtain the eighth monitoring information through the second rationality monitoring model according to the target decision and the preset standard rule of the target decision, and take the eighth monitoring information as the decision monitoring information, where the eighth monitoring information is used to indicate whether the decision result of the decision planning model is rational. Thus, the controller can monitor whether the decision result of the decision planning model is reasonable.
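The detour example above can be sketched as a bound check that maps the target decision onto the eleventh or twelfth sub-information. The key names and the string labels are hypothetical; the numeric bounds follow the example.

```python
# Hypothetical encoding of the preset standard rules for a detour decision.
DETOUR_RULES = {
    "steering_angle_deg": 35.0,        # steering angle must stay below 35 degrees
    "angle_accel_rate_deg_s2": 1.0,    # rate-of-change bound from the example
}

def decision_sub_information(decision: dict, rules: dict) -> str:
    """Return which sub-information the second rationality monitoring model
    would emit for the given target decision."""
    reasonable = all(
        decision.get(key, float("inf")) < bound for key, bound in rules.items()
    )
    return "eleventh (reasonable)" if reasonable else "twelfth (unreasonable)"
```

A detour with a 20-degree steering angle satisfies both bounds; a 40-degree steering angle violates the first rule and yields the twelfth sub-information.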
In some embodiments, in order to monitor whether the decision result of the decision planning model is stable, the second evaluation model includes a second uncertainty monitoring model, the decision information includes a plurality of target decisions, and the similarity between the driving scenario information corresponding to any two of the plurality of target decisions is greater than a preset similarity threshold. The step in which the controller inputs the driving state information and the decision information into the second evaluation model to determine at least one piece of decision monitoring information (S503) includes: the controller may input the plurality of target decisions into the second uncertainty monitoring model to obtain a confidence level for each target decision.
In the embodiment of the application, the controller can determine the ninth monitoring information according to the confidence coefficient of each target decision through the second uncertainty monitoring model, and the ninth monitoring information is used as decision monitoring information, and the ninth monitoring information is used for indicating whether the decision result of the decision planning model is stable.
The ninth monitoring information may be thirteenth sub information, or the ninth monitoring information may be fourteenth sub information.
Specifically, the confidence levels of the multiple target decisions follow a normal distribution. The controller may determine the variance of the normal distribution through the second uncertainty monitoring model, and may determine through the second uncertainty monitoring model whether the variance of the normal distribution satisfies the Pauta criterion (3σ rule).
In one possible design, if the variance of the normal distribution meets the Pauta criterion, the controller may determine thirteenth sub-information through the second uncertainty monitoring model, where the thirteenth sub-information is used to indicate that the decision result of the decision planning model is stable.
It should be noted that, in the embodiment of the present application, if the variance of the normal distribution meets the Pauta criterion, this indicates that, for similar scenarios, the target decisions obtained by the decision planning model are the same or similar, and the decision result of the decision planning model is stable.
In another possible design, if the variance of the normal distribution does not meet the Pauta criterion, the controller may determine fourteenth sub-information through the second uncertainty monitoring model, where the fourteenth sub-information is used to indicate that the decision result of the decision planning model is unstable.
It will be appreciated that the controller may input a plurality of target decisions into the second uncertainty monitoring model, resulting in a confidence level for each target decision. The controller can determine ninth monitoring information through the second uncertainty monitoring model according to the confidence coefficient of each target decision, and the ninth monitoring information is used as decision monitoring information, and the ninth monitoring information is used for indicating whether a decision result of the decision planning model is stable or not. Thus, the controller can monitor whether the decision result of the decision planning model is stable.
In some embodiments, in order to monitor whether noise is present in the driving state information, the second evaluation model includes a second adversarial monitoring model. The step in which the controller inputs the driving state information and the decision information into the second evaluation model to determine at least one piece of decision monitoring information (S503) includes: the controller may input the driving state information into the second adversarial monitoring model to obtain tenth monitoring information, and use the tenth monitoring information as decision monitoring information, where the tenth monitoring information is used to indicate whether noise exists in the driving state information.
For example, if there is an erroneous lane line in the driving state information, the controller may determine through the second adversarial monitoring model that noise is present in the driving state information. For another example, if an erroneous target object is mixed into the driving state information, the controller may determine through the second adversarial monitoring model that noise exists in the driving state information.
It is understood that the controller may input the driving state information into the second adversarial monitoring model to obtain tenth monitoring information, and use the tenth monitoring information as decision monitoring information, where the tenth monitoring information is used to indicate whether noise exists in the driving state information. In this way, the controller can monitor whether noise exists in the driving state information.
In some embodiments, as shown in FIG. 6, a schematic diagram of the monitoring link in the running phase of the deep learning model is shown. The trigger conditions for identifying errors may be classified, according to the type of input data, into in-distribution data, out-of-distribution data, and adversarial data. For different monitoring models, the controller may input different data into the monitoring model.
For example, the controller may input the in-distribution data, the out-of-distribution data, and the adversarial data into the uncertainty monitoring model to monitor the uncertainty of the prediction result of the deep learning model. The controller may input the out-of-distribution data into the ODD monitoring model to monitor whether the input data of the deep learning model is within the expected range. The controller may input the adversarial data into the adversarial monitoring model to monitor whether the input data of the deep learning model is adversarial.
It should be noted that, in the embodiment of the present application, with respect to the training process of the deep learning model, the in-distribution data is data that has been trained on; the out-of-distribution data is data that has not been trained on; and the adversarial data is data obtained by adding noise to trained data.
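The three trigger-condition categories can be sketched as a provenance-based classifier. The set-based bookkeeping of sample identifiers is an assumption of this sketch; in practice the distinction would come from distribution tests rather than lookup.

```python
def classify_input(sample_id: str, trained: set, adversarial: set) -> str:
    """Route an input per the Fig. 6 categories: trained data is
    in-distribution, trained data with added noise is adversarial, and
    everything else is out-of-distribution."""
    if sample_id in adversarial:
        return "adversarial"
    return "in_distribution" if sample_id in trained else "out_of_distribution"
```

The resulting label determines which monitoring models receive the sample, e.g. out-of-distribution data goes to the ODD monitoring model and adversarial data to the adversarial monitoring model.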
In some embodiments, the input data may include noisy information that causes the deep learning model to produce erroneous results. The controller may also include an adversarial monitoring model. The controller may input the input data into the adversarial monitoring model to obtain processed data whose noise is less than that of the input data.
It can be appreciated that the controller can also include an adversarial monitoring model to process the input data and obtain processed data with less noise than the input data, so as to improve the accuracy of the deep learning model.
In some embodiments, as shown in fig. 7, a schematic architecture diagram of an autopilot system is shown. The automatic driving system includes: the system comprises a perception module, a planning module, a control module and a deep learning model error detector, wherein the perception module is connected with the planning module, the planning module is connected with the control module, the perception module is connected with the deep learning model error detector, and the control module is connected with the deep learning model error detector.
The sensing module can acquire vehicle running environment data through the vehicle sensor, and the environment data is processed through the deep learning model to obtain a vehicle running scene. The perception module may send the vehicle driving scenario to the planning module.
The planning module can input the vehicle driving scene into a deep learning model to obtain a vehicle control function. The planning module sends the vehicle control function to the control module.
The control module may control vehicle behavior based on vehicle control functions to cause the vehicle to operate.
The deep learning model error detector is provided with at least one detector (such as detector 1, detector 2, detector 3, and detector 4), and the deep learning model error detector comprises an evaluation model which comprises at least one of the following: an ODD monitoring model, a significance monitoring model, a rationality monitoring model, and an uncertainty monitoring model, with one monitoring model deployed on each detector.
Illustratively, detector 1 may include the ODD monitoring model, detector 2 may include the saliency monitoring model, detector 3 may include the rationality monitoring model, and detector 4 may include the uncertainty monitoring model.
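The detector layout of fig. 7 can be sketched as a simple dispatch loop, one monitoring model per detector. The concrete checks below (weather set, IoU threshold, width range, confidence spread) are placeholders for illustration only, not the patent's actual models:

```python
class Detector:
    # One detector deploys one monitoring model: a callable that returns
    # True when no deficiency is found for its aspect of the model.
    def __init__(self, name, monitor):
        self.name = name
        self.monitor = monitor

detectors = [
    Detector("detector_1_odd", lambda d: d["weather"] in {"clear", "cloudy"}),
    Detector("detector_2_saliency", lambda d: d["saliency_iou"] >= 0.5),
    Detector("detector_3_rationality", lambda d: 0.5 <= d["object_width_m"] <= 3.0),
    Detector("detector_4_uncertainty", lambda d: d["confidence_std"] <= 0.1),
]

def run_error_detector(data):
    # Collect one piece of monitoring information per deployed detector.
    return {det.name: det.monitor(data) for det in detectors}

report = run_error_detector({
    "weather": "clear",
    "saliency_iou": 0.8,
    "object_width_m": 1.9,
    "confidence_std": 0.3,  # large spread -> uncertainty detector flags it
})
```

Running all detectors over the same data yields one monitoring result per aspect, which is what lets the system localize which part of the deep learning pipeline is deficient.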
The foregoing description of the solution provided by the embodiments of the present application has been mainly presented in terms of a method. In order to achieve the above functions, the monitoring information determining device or the vehicle includes a hardware structure and/or a software module for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the present application may divide the monitoring information determining device or the vehicle into functional modules according to the above method; for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, being merely a logical function division; other division manners may be used in actual implementation.
Fig. 8 is a block diagram illustrating a device for determining monitoring information according to an exemplary embodiment. Referring to fig. 8, the monitoring information determining apparatus is applied to a controller of a vehicle, which is deployed with an automatic driving perception model. The determining apparatus is configured to perform the method shown in fig. 3. The device for determining the monitoring information comprises: an acquisition unit 801 and a processing unit 802.
An acquisition unit 801 for acquiring running environment information of the vehicle. The processing unit 802 is configured to input the driving environment information into the automatic driving perception model to obtain feature information of the automatic driving perception model, where the feature information is information obtained by the automatic driving perception model through a deep learning model. The processing unit 802 is further configured to input the driving environment information and the feature information into a first evaluation model and determine at least one piece of perception monitoring information, where the first evaluation model includes at least one of the following: a first operation design domain ODD monitoring model, a first saliency monitoring model, a first rationality monitoring model, a first uncertainty monitoring model, and a first adversarial monitoring model, where one monitoring model corresponds to one piece of perception monitoring information, and the perception monitoring information is used to indicate whether the automatic driving perception model is insufficient. The first ODD monitoring model is used to monitor whether the driving environment information meets the automatic driving start condition, the first saliency monitoring model is used to monitor whether the image detection result of the automatic driving perception model is accurate, the first rationality monitoring model is used to monitor whether the target detection result of the automatic driving perception model is rational, the first uncertainty monitoring model is used to monitor whether the target detection result of the automatic driving perception model is stable, and the first adversarial monitoring model is used to monitor whether noise exists in the driving environment information.
In one possible implementation, the first assessment model includes a first ODD monitoring model that includes an autopilot launch condition. The processing unit 802 is specifically configured to determine, according to the driving environment information and the automatic driving start condition through the first ODD monitoring model, first monitoring information, and use the first monitoring information as sensing monitoring information, where the first monitoring information is used to indicate whether the driving environment information meets the automatic driving start condition.
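The ODD check reduces to comparing the driving environment information against the start conditions item by item. A sketch follows, with hypothetical conditions and thresholds (the patent does not enumerate concrete ODD limits):

```python
# Hypothetical automatic driving start conditions; illustrative values only.
ODD_START_CONDITIONS = {
    "speed_kph": lambda v: 0 <= v <= 120,
    "weather": lambda w: w in {"clear", "cloudy", "light_rain"},
    "road_type": lambda r: r in {"highway", "urban"},
}

def odd_monitor(driving_env):
    # First monitoring information: a per-condition verdict plus an overall
    # flag on whether the driving environment meets the start conditions.
    per_condition = {k: check(driving_env[k])
                     for k, check in ODD_START_CONDITIONS.items()}
    return {"conditions": per_condition,
            "odd_met": all(per_condition.values())}
```

The per-condition breakdown is what makes the first monitoring information useful for locating deficiencies, rather than a single pass/fail flag.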
In one possible embodiment, the first evaluation model comprises a first saliency monitoring model, and the feature information comprises a driving environment image marking a first salient region. The processing unit 802 is specifically configured to input the driving environment image into the first saliency monitoring model to obtain a driving environment image marked with a second salient region. The processing unit 802 is specifically configured to determine second monitoring information through the first saliency monitoring model according to the first salient region and the second salient region, and use the second monitoring information as perception monitoring information, where the second monitoring information is used to indicate whether the image detection result of the automatic driving perception model is accurate.
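Comparing the two salient regions requires an overlap metric; intersection-over-union is a common choice, used here as an assumption since the patent only states that the regions are compared:

```python
def region_iou(a, b):
    # Intersection-over-union of two axis-aligned regions (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def saliency_monitor(first_region, second_region, threshold=0.5):
    # Second monitoring information: the image detection result is deemed
    # accurate when the two salient regions overlap sufficiently.
    # The 0.5 threshold is illustrative, not taken from the patent.
    return region_iou(first_region, second_region) >= threshold
```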
In one possible implementation, the first assessment model includes a first rationality monitoring model, the characteristic information includes a target object, and the first rationality monitoring model includes a preset standard rule for the target object. The processing unit 802 is specifically configured to obtain third monitoring information according to the target object and a preset standard rule of the target object through the first rationality monitoring model, and use the third monitoring information as perception monitoring information, where the third monitoring information is used to indicate whether a target detection result of the autopilot perception model is rational.
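A minimal sketch of the rationality check, assuming the preset standard rules take the form of plausible physical-dimension ranges per object class (the patent leaves the concrete rules open):

```python
# Hypothetical preset standard rules: plausible dimensions (metres)
# per detected object class. Illustrative values only.
STANDARD_RULES = {
    "car": {"width_m": (1.4, 2.2), "height_m": (1.2, 2.0)},
    "pedestrian": {"width_m": (0.3, 1.0), "height_m": (1.0, 2.2)},
}

def rationality_monitor(detected):
    # Third monitoring information: the target detection result is rational
    # only if every measured dimension satisfies the preset standard rule.
    rules = STANDARD_RULES.get(detected["class"])
    if rules is None:
        return False  # unknown class: flag as potentially irrational
    return all(lo <= detected[dim] <= hi for dim, (lo, hi) in rules.items())
```

A 2.5 m-wide "pedestrian", for example, violates its class rule and would be flagged as an implausible detection.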
In one possible implementation, the first evaluation model includes a first uncertainty monitoring model, the feature information includes a target object of each of a plurality of driving scene images, and the similarity between any two driving scene images of the plurality of driving scene images is greater than a preset similarity threshold. The processing unit 802 is specifically configured to input the target object of each driving scene image into the first uncertainty monitoring model to obtain the confidence of each target object. The processing unit 802 is specifically configured to determine fourth monitoring information through the first uncertainty monitoring model according to the confidence of each target object, and use the fourth monitoring information as perception monitoring information, where the fourth monitoring information is used to indicate whether the target detection result of the automatic driving perception model is stable.
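The stability criterion can be sketched as a spread statistic over the per-image confidences: across near-identical driving scene images, the same target should be detected with near-constant confidence. The standard-deviation measure and its threshold below are assumptions, not specified in the patent:

```python
import statistics

def uncertainty_monitor(confidences, max_std=0.05):
    # Fourth monitoring information: across driving scene images whose
    # mutual similarity exceeds the preset threshold, a large spread in
    # detection confidence suggests the target detection result is
    # unstable. max_std is an illustrative threshold.
    spread = statistics.pstdev(confidences)
    return {"confidence_std": spread, "stable": spread <= max_std}
```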
In one possible embodiment, the first evaluation model comprises a first adversarial monitoring model, and the driving environment information comprises a plurality of driving environment images. The processing unit 802 is specifically configured to input the plurality of driving environment images into the first adversarial monitoring model to obtain fifth monitoring information, and use the fifth monitoring information as perception monitoring information, where the fifth monitoring information is used to indicate whether noise exists in the plurality of driving environment images.
The specific manner in which the individual units perform the operations in relation to the apparatus of the above embodiments has been described in detail in relation to the embodiments of the method and will not be described in detail here.
Fig. 9 is a block diagram illustrating another monitoring information determining apparatus according to an exemplary embodiment. Referring to fig. 9, the determination device of the monitoring information is applied to a controller of a vehicle, which is deployed with a decision-making planning model. The apparatus is for performing the method shown in fig. 5. The device for determining the monitoring information comprises: an acquisition unit 901 and a processing unit 902.
An acquisition unit 901 for acquiring driving state information and feature information of the automatic driving perception model. The processing unit 902 is configured to input the driving state information and the feature information into the decision planning model to obtain decision information of the decision planning model, where the decision information is information obtained by the decision planning model through a deep learning model. The processing unit 902 is further configured to input the driving state information and the decision information into a second evaluation model and determine at least one piece of decision monitoring information, where the second evaluation model includes at least one of the following: a second operation design domain ODD monitoring model, a second significance monitoring model, a second rationality monitoring model, a second uncertainty monitoring model, and a second adversarial monitoring model, where one monitoring model corresponds to one piece of decision monitoring information, and the decision monitoring information is used to indicate whether the decision planning model is insufficient. The second ODD monitoring model is used to monitor whether the driving state information meets the driving function condition, the second significance monitoring model is used to monitor whether the target decision rule of the decision planning model is accurate, the second rationality monitoring model is used to monitor whether the decision result of the decision planning model is rational, the second uncertainty monitoring model is used to monitor whether the decision result of the decision planning model is stable, and the second adversarial monitoring model is used to monitor whether noise exists in the driving state information.
In one possible implementation, the second assessment model includes a second ODD monitoring model that includes driving function conditions. The processing unit 902 is specifically configured to determine, according to the driving status information and the driving function condition through the second ODD monitoring model, sixth monitoring information, and use the sixth monitoring information as decision monitoring information, where the sixth monitoring information is used to indicate whether the driving status information meets the driving function condition.
In one possible implementation, the second assessment model comprises a second significance monitoring model, the decision information comprising a plurality of target decisions and a weight value for each target decision, the second significance monitoring model comprising a preset decision rule. The processing unit 902 is specifically configured to input a plurality of target decisions and a weight value of each target decision into the second saliency monitoring model, so as to obtain a target decision rule. The processing unit 902 is specifically configured to determine, according to the target decision rule and the preset decision rule, seventh monitoring information through the second significance monitoring model, and use the seventh monitoring information as decision monitoring information, where the seventh monitoring information is used to indicate whether the target decision rule of the decision planning model is accurate.
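One way to sketch this is to derive the target decision rule as the highest-weight decision and compare it with the preset decision rule; the max-weight extraction is an illustrative choice, since the patent does not fix the rule-derivation method:

```python
def significance_monitor(target_decisions, preset_decision_rule):
    # Seventh monitoring information: derive a target decision rule from
    # the weighted decisions and compare it with the preset decision rule.
    target_rule = max(target_decisions, key=lambda d: d["weight"])["decision"]
    return {"target_rule": target_rule,
            "accurate": target_rule == preset_decision_rule}

decisions = [
    {"decision": "keep_lane", "weight": 0.7},
    {"decision": "change_lane_left", "weight": 0.2},
    {"decision": "brake", "weight": 0.1},
]
```

If the dominant decision diverges from the preset rule for the same scene, the seventh monitoring information flags the decision planning model's target decision rule as inaccurate.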
In one possible implementation, the second assessment model comprises a second rationality monitoring model, the decision information comprises a target decision, and the second rationality monitoring model comprises a preset standard rule for the target decision. The processing unit 902 is specifically configured to obtain, according to the target decision and a preset standard rule of the target decision, eighth monitoring information through the second rationality monitoring model, and use the eighth monitoring information as decision monitoring information, where the eighth monitoring information is used to indicate whether a decision result of the decision planning model is rational.
In one possible implementation manner, the second evaluation model includes a second uncertainty monitoring model, the decision information includes a plurality of target decisions, and a similarity between driving scene information corresponding to any two target decisions in the plurality of target decisions is greater than a preset similarity threshold. The processing unit 902 is specifically configured to input a plurality of target decisions into the second uncertainty monitoring model to obtain a confidence level of each target decision. The processing unit 902 is specifically configured to determine, according to the confidence coefficient of each target decision, ninth monitoring information through the second uncertainty monitoring model, and use the ninth monitoring information as decision monitoring information, where the ninth monitoring information is used to indicate whether a decision result of the decision planning model is stable.
In one possible embodiment, the second evaluation model comprises a second adversarial monitoring model. The processing unit 902 is specifically configured to input the driving state information into the second adversarial monitoring model to obtain tenth monitoring information, and use the tenth monitoring information as decision monitoring information, where the tenth monitoring information is used to indicate whether noise exists in the driving state information.
The specific manner in which the individual units perform the operations in relation to the apparatus of the above embodiments has been described in detail in relation to the embodiments of the method and will not be described in detail here.
Fig. 10 is a block diagram of a vehicle, according to an exemplary embodiment. As shown in fig. 10, the vehicle 1000 includes, but is not limited to: a processor 1001 and a memory 1002.
The memory 1002 is used for storing executable instructions of the processor 1001. It will be appreciated that the processor 1001 is configured to execute instructions to implement the method for determining monitoring information in the above embodiment.
It should be noted that the vehicle structure shown in fig. 10 is not limiting of the vehicle, and the vehicle may include more or fewer components than shown in fig. 10, or may combine certain components, or a different arrangement of components, as will be appreciated by those skilled in the art.
The processor 1001 is a control center of the vehicle, connects various parts of the entire vehicle using various interfaces and lines, and performs various functions of the vehicle and processes data by running or executing software programs and/or modules stored in the memory 1002 and calling data stored in the memory 1002, thereby performing overall monitoring of the vehicle. The processor 1001 may include one or more processing units. Alternatively, the processor 1001 may integrate an application processor that mainly processes an operating system, a user interface, an application program, and the like, and a modem processor that mainly processes wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 1001.
The memory 1002 may be used to store software programs as well as various data. The memory 1002 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs required for at least one functional module (such as a processing unit), and the like. In addition, the memory 1002 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
In an exemplary embodiment, a computer readable storage medium is also provided, such as a memory 1002, comprising instructions executable by the processor 1001 of the vehicle 1000 to implement the method of determining monitoring information in the above embodiments.
In actual implementation, the functions of the acquisition unit 801 and the processing unit 802 in fig. 8 may be implemented by the processor 1001 in fig. 10 calling a computer program stored in the memory 1002. The specific implementation process may refer to the description of the method portion for determining the monitoring information in the above embodiment, which is not repeated here.
Alternatively, the computer readable storage medium may be a non-transitory computer readable storage medium, for example, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, the present application also provides a computer program product comprising one or more instructions executable by a processor of a vehicle to perform the method of determining monitoring information in the above-described embodiment.
It should be noted that, when the instructions in the computer readable storage medium or one or more instructions in the computer program product are executed by the processor of the vehicle, the respective processes of the embodiment of the method for determining the monitoring information are implemented, and the technical effects same as those of the method for determining the monitoring information can be achieved, so that repetition is avoided, and no further description is given here.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The present application is not limited to the above embodiments, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (17)

1. A method for determining monitoring information, which is characterized by being applied to a controller of a vehicle, wherein the controller is provided with an automatic driving perception model; the method comprises the following steps:
Acquiring running environment information of the vehicle;
Inputting the driving environment information into an automatic driving perception model to obtain characteristic information of the automatic driving perception model, wherein the characteristic information is information obtained by the automatic driving perception model through a deep learning model;
Inputting the driving environment information and the characteristic information into a first evaluation model, and determining at least one piece of perception monitoring information, wherein the first evaluation model comprises at least one of the following: a first operation design domain ODD monitoring model, a first saliency monitoring model, a first rationality monitoring model, a first uncertainty monitoring model and a first adversarial monitoring model, wherein one monitoring model corresponds to one piece of perception monitoring information, and the perception monitoring information is used for indicating whether the automatic driving perception model is insufficient or not;
The first ODD monitoring model is used for monitoring whether the running environment information meets automatic driving starting conditions, the first saliency monitoring model is used for monitoring whether an image detection result of the automatic driving perception model is accurate, the first rationality monitoring model is used for monitoring whether a target detection result of the automatic driving perception model is reasonable, the first uncertainty monitoring model is used for monitoring whether the target detection result of the automatic driving perception model is stable, and the first adversarial monitoring model is used for monitoring whether noise exists in the running environment information.
2. The method of claim 1, wherein the first assessment model comprises the first ODD monitoring model comprising the autopilot start condition; the driving environment information and the characteristic information are input into a first evaluation model to determine at least one piece of perception monitoring information, and the method comprises the following steps:
And determining first monitoring information according to the driving environment information and the automatic driving starting condition through the first ODD monitoring model, and taking the first monitoring information as the perception monitoring information, wherein the first monitoring information is used for indicating whether the driving environment information meets the automatic driving starting condition or not.
3. The method according to claim 1 or 2, wherein the first assessment model comprises the first saliency monitoring model, and the characteristic information comprises a driving environment image marking a first salient region; the driving environment information and the characteristic information are input into a first evaluation model to determine at least one piece of perception monitoring information, and the method comprises the following steps:
inputting the driving environment image into the first saliency monitoring model to obtain the driving environment image marked with a second salient region;
and determining second monitoring information through the first saliency monitoring model according to the first salient region and the second salient region, and taking the second monitoring information as the perception monitoring information, wherein the second monitoring information is used for indicating whether an image detection result of the automatic driving perception model is accurate or not.
4. The method according to claim 1 or 2, wherein the first assessment model comprises the first rationality monitoring model, the characteristic information comprises a target object, and the first rationality monitoring model comprises preset standard rules for the target object; the driving environment information and the characteristic information are input into a first evaluation model to determine at least one piece of perception monitoring information, and the method comprises the following steps:
And obtaining third monitoring information through the first rationality monitoring model according to the target object and a preset standard rule of the target object, and taking the third monitoring information as the perception monitoring information, wherein the third monitoring information is used for indicating whether a target detection result of the automatic driving perception model is reasonable or not.
5. The method of claim 1 or 2, wherein the first evaluation model comprises the first uncertainty monitoring model, the characteristic information comprises a target object for each of a plurality of driving scene images, and a similarity between any two of the plurality of driving scene images is greater than a preset similarity threshold; the driving environment information and the characteristic information are input into a first evaluation model to determine at least one piece of perception monitoring information, and the method comprises the following steps:
inputting the target object of each driving scene image into the first uncertainty monitoring model to obtain the confidence coefficient of each target object;
And determining fourth monitoring information according to the confidence coefficient of each target object through the first uncertainty monitoring model, and taking the fourth monitoring information as the perception monitoring information, wherein the fourth monitoring information is used for indicating whether a target detection result of the automatic driving perception model is stable or not.
6. The method according to claim 1 or 2, wherein the first evaluation model comprises the first adversarial monitoring model, and the driving environment information comprises a plurality of driving environment images; the driving environment information and the characteristic information are input into a first evaluation model to determine at least one piece of perception monitoring information, and the method comprises the following steps:
And inputting the plurality of driving environment images into the first adversarial monitoring model to obtain fifth monitoring information, wherein the fifth monitoring information is used as the perception monitoring information and is used for indicating whether noise exists in the plurality of driving environment images.
7. A method for determining monitoring information, which is characterized by being applied to a controller of a vehicle, wherein the controller is deployed with a decision planning model; the method comprises the following steps:
Acquiring the running state information of the vehicle and the characteristic information of an automatic driving perception model;
inputting the driving state information and the characteristic information into the decision planning model to obtain decision information of the decision planning model, wherein the decision information is information obtained by the decision planning model through a deep learning model;
Inputting the driving state information and the decision information into a second evaluation model, and determining at least one decision monitoring information, wherein the second evaluation model comprises at least one of the following: a second operation design domain ODD monitoring model, a second significance monitoring model, a second rationality monitoring model, a second uncertainty monitoring model and a second adversarial monitoring model, wherein one monitoring model corresponds to one decision monitoring information, and the decision monitoring information is used for indicating whether the decision planning model is insufficient;
the second ODD monitoring model is used for monitoring whether the driving state information meets driving function conditions, the second significance monitoring model is used for monitoring whether a target decision rule of the decision planning model is accurate, the second rationality monitoring model is used for monitoring whether a decision result of the decision planning model is reasonable, the second uncertainty monitoring model is used for monitoring whether the decision result of the decision planning model is stable, and the second adversarial monitoring model is used for monitoring whether noise exists in the driving state information.
8. The method of claim 7, wherein the second assessment model comprises a second ODD monitoring model comprising the driving function condition; the step of inputting the driving state information and the decision information into a second evaluation model to determine at least one decision monitoring information comprises the following steps:
And determining sixth monitoring information according to the driving state information and the driving function condition through the second ODD monitoring model, and taking the sixth monitoring information as the decision monitoring information, wherein the sixth monitoring information is used for indicating whether the driving state information meets the driving function condition or not.
9. The method of claim 7 or 8, wherein the second assessment model comprises a second significance monitoring model, the decision information comprising a plurality of target decisions and a weight value for each of the target decisions, the second significance monitoring model comprising a preset decision rule; the step of inputting the driving state information and the decision information into a second evaluation model to determine at least one decision monitoring information comprises the following steps:
inputting a plurality of target decisions and weight values of each target decision into the second significance monitoring model to obtain the target decision rule;
Determining seventh monitoring information through the second significance monitoring model according to the target decision rule and the preset decision rule, and taking the seventh monitoring information as the decision monitoring information, wherein the seventh monitoring information is used for indicating whether the target decision rule of the decision planning model is accurate or not.
10. The method of claim 7 or 8, wherein the second evaluation model comprises a second rationality monitoring model, the decision information comprises a target decision, and the second rationality monitoring model comprises a preset standard rule for the target decision; the inputting the driving state information and the decision information into a second evaluation model to determine at least one piece of decision monitoring information comprises:
obtaining, through the second rationality monitoring model, eighth monitoring information according to the target decision and the preset standard rule for the target decision, and taking the eighth monitoring information as the decision monitoring information, wherein the eighth monitoring information is used for indicating whether a decision result of the decision planning model is reasonable.
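A rationality check of the kind claim 10 describes can be sketched as looking up the preset standard rule for the emitted decision and evaluating it against the current driving state. Everything below — the decision names, the predicate-per-decision rule encoding, and the state fields — is a hypothetical illustration, not the patent's implementation:

```python
def rationality_monitor(target_decision, standard_rules, driving_state):
    """Eighth monitoring information: True when the target decision is
    permitted by its preset standard rule for the current state.
    An unknown decision is treated as unreasonable."""
    rule = standard_rules.get(target_decision)
    if rule is None:
        return False
    return rule(driving_state)

# Hypothetical standard rules: overtaking requires a speed advantage,
# braking requires a short gap to the lead vehicle.
rules = {
    "overtake": lambda s: s["ego_speed"] > s["lead_speed"],
    "brake": lambda s: s["gap_m"] < 30.0,
}
state = {"ego_speed": 25.0, "lead_speed": 18.0, "gap_m": 60.0}
print(rationality_monitor("overtake", rules, state))  # True
print(rationality_monitor("brake", rules, state))     # False
```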
11. The method of claim 7 or 8, wherein the second evaluation model comprises a second uncertainty monitoring model, the decision information comprises a plurality of target decisions, and the similarity between the driving scene information corresponding to any two of the plurality of target decisions is greater than a preset similarity threshold; the inputting the driving state information and the decision information into a second evaluation model to determine at least one piece of decision monitoring information comprises:
inputting the plurality of target decisions into the second uncertainty monitoring model to obtain a confidence of each target decision;
determining, through the second uncertainty monitoring model, ninth monitoring information according to the confidence of each target decision, and taking the ninth monitoring information as the decision monitoring information, wherein the ninth monitoring information is used for indicating whether a decision result of the decision planning model is stable.
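The intuition behind claim 11 is that decisions made in near-identical scenes should carry similar confidences; a large spread suggests the planner is unstable in that scene family. One simple way to realize this — chosen here for illustration only, the patent does not prescribe a statistic or a threshold — is to bound the population standard deviation of the confidences:

```python
from statistics import pstdev

def uncertainty_monitor(confidences, max_spread=0.1):
    """Ninth monitoring information: True when confidences of decisions
    taken in highly similar scenes stay within max_spread of each other
    (population standard deviation), i.e. the decision result is stable."""
    return pstdev(confidences) <= max_spread

print(uncertainty_monitor([0.91, 0.93, 0.92]))  # True  (stable)
print(uncertainty_monitor([0.95, 0.40, 0.88]))  # False (unstable)
```

The `max_spread` value is an assumed tuning parameter; in practice it would be calibrated per scene family.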
12. The method of claim 7 or 8, wherein the second evaluation model comprises the second adversarial monitoring model; the inputting the driving state information and the decision information into a second evaluation model to determine at least one piece of decision monitoring information comprises:
inputting the driving state information into the second adversarial monitoring model to obtain tenth monitoring information, and taking the tenth monitoring information as the decision monitoring information, wherein the tenth monitoring information is used for indicating whether noise exists in the driving state information.
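Claim 12 leaves the noise test unspecified. As one hypothetical realization, a monitor could flag a driving-state signal whose high-frequency residual (signal minus a moving average) exceeds a threshold — a crude stand-in for detecting injected or adversarial perturbations; the window size and threshold below are illustrative assumptions:

```python
def noise_monitor(signal, max_residual=0.5):
    """Tenth monitoring information: True when the mean absolute
    high-frequency residual of the state sequence exceeds the threshold,
    i.e. noise is suspected in the driving state information."""
    smoothed = [
        (signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, len(signal) - 1)]) / 3
        for i in range(len(signal))
    ]
    residual = sum(abs(a - b) for a, b in zip(signal, smoothed)) / len(signal)
    return residual > max_residual

clean = [10.0, 10.1, 10.2, 10.3, 10.4]  # e.g. a smooth speed trace
noisy = [10.0, 14.5, 9.0, 15.2, 8.5]    # erratic, possibly perturbed trace
print(noise_monitor(clean))  # False
print(noise_monitor(noisy))  # True
```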
13. A device for determining monitoring information, characterized in that the device is applied to a controller on which an automatic driving perception model is deployed; the device comprises:
an acquisition unit, configured to acquire driving environment information of a vehicle;
a processing unit, configured to input the driving environment information into the automatic driving perception model to obtain feature information of the automatic driving perception model, wherein the feature information is information obtained by the automatic driving perception model through a deep learning model;
the processing unit is further configured to input the driving environment information and the feature information into a first evaluation model and determine at least one piece of perception monitoring information, wherein the first evaluation model comprises at least one of the following: a first operational design domain (ODD) monitoring model, a first significance monitoring model, a first rationality monitoring model, a first uncertainty monitoring model and a first adversarial monitoring model, and the perception monitoring information is used for indicating whether the automatic driving perception model has a deficiency;
the first ODD monitoring model is used for monitoring whether the driving environment information meets an automatic driving start condition, the first significance monitoring model is used for monitoring whether an image detection result of the automatic driving perception model is accurate, the first rationality monitoring model is used for monitoring whether a target detection result of the automatic driving perception model is reasonable, the first uncertainty monitoring model is used for monitoring whether a scene detection result of the automatic driving perception model is stable, and the first adversarial monitoring model is used for monitoring whether noise exists in the driving environment information.
14. A device for determining monitoring information, characterized in that the device is applied to a controller on which a decision planning model is deployed; the device comprises:
an acquisition unit, configured to acquire driving state information and feature information of an automatic driving perception model;
a processing unit, configured to input the driving state information and the feature information into the decision planning model to obtain decision information of the decision planning model, wherein the decision information is information obtained by the decision planning model through a deep learning model;
the processing unit is further configured to input the driving state information and the decision information into a second evaluation model and determine at least one piece of decision monitoring information, wherein the second evaluation model comprises at least one of the following: a second operational design domain (ODD) monitoring model, a second significance monitoring model, a second rationality monitoring model, a second uncertainty monitoring model and a second adversarial monitoring model, wherein each monitoring model corresponds to one piece of decision monitoring information, and the decision monitoring information is used for indicating whether the decision planning model has a deficiency;
the second ODD monitoring model is used for monitoring whether the driving state information meets a driving function condition, the second significance monitoring model is used for monitoring whether a target decision rule of the decision planning model is accurate, the second rationality monitoring model is used for monitoring whether a decision result of the decision planning model is reasonable, the second uncertainty monitoring model is used for monitoring whether the decision result of the decision planning model is stable, and the second adversarial monitoring model is used for monitoring whether noise exists in the driving state information.
15. A vehicle, characterized by comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the method of any one of claims 1-6 or claims 7-12.
16. A computer-readable storage medium, characterized in that, when computer-executable instructions stored in the computer-readable storage medium are executed by a processor of a vehicle, the vehicle is enabled to perform the method of any one of claims 1-6 or claims 7-12.
17. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the method for determining monitoring information according to any one of claims 1-6 or claims 7-12.
CN202410597902.1A 2024-05-14 2024-05-14 Method and device for determining monitoring information, vehicle and storage medium Pending CN118306426A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410597902.1A CN118306426A (en) 2024-05-14 2024-05-14 Method and device for determining monitoring information, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN118306426A true CN118306426A (en) 2024-07-09

Family

ID=91727520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410597902.1A Pending CN118306426A (en) 2024-05-14 2024-05-14 Method and device for determining monitoring information, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN118306426A (en)

Similar Documents

Publication Publication Date Title
US11840239B2 (en) Multiple exposure event determination
CN109358612B (en) Intelligent driving control method and device, vehicle, electronic equipment and storage medium
CN109345829B (en) Unmanned vehicle monitoring method, device, equipment and storage medium
KR102158497B1 (en) evaluation system for autonomous driving
CN104859662A (en) Fault handling in an autonomous vehicle
CN111311914A (en) Vehicle driving accident monitoring method and device and vehicle
CN110610137B (en) Method and device for detecting vehicle running state, electronic equipment and storage medium
EP3441839B1 (en) Information processing method and information processing system
CN116783462A (en) Performance test method of automatic driving system
CN113887372A (en) Target aggregation detection method and computer-readable storage medium
CN115366885A (en) Method for assisting a driving maneuver of a motor vehicle, assistance device and motor vehicle
CN111626905A (en) Passenger safety monitoring method and device and computer readable storage medium
CN113468678B (en) Method and device for calculating accuracy of automatic driving algorithm
CN113335311B (en) Vehicle collision detection method and device, vehicle and storage medium
US20220019818A1 (en) Method and system for vehicle parking detection, and storage medium
CN111800508B (en) Automatic driving fault monitoring method based on big data
CN111800314B (en) Automatic driving fault monitoring system
CN118306426A (en) Method and device for determining monitoring information, vehicle and storage medium
US20240103548A1 (en) Image-Based Method for Simplifying a Vehicle-External Takeover of Control of a Motor Vehicle, Assistance Device, and Motor Vehicle
WO2023108364A1 (en) Method and apparatus for detecting driver state, and storage medium
CN114693722A (en) Vehicle driving behavior detection method, detection device and detection equipment
CN114596706A (en) Detection method and device of roadside sensing system, electronic equipment and roadside equipment
CN115017967A (en) Detecting and collecting accident-related driving experience event data
US20240043022A1 (en) Method, system, and computer program product for objective assessment of the performance of an adas/ads system
CN114670818A (en) Method and system for realizing fault vehicle identification and avoidance based on ArUco code positioning

Legal Events

Date Code Title Description
PB01 Publication