CN117962901B - Driving state adjusting method and device, storage medium, electronic device and computer program product


Info

Publication number
CN117962901B
Authority
CN
China
Prior art keywords
driver
determining
state
data
preset
Prior art date
Legal status
Active
Application number
CN202410379844.5A
Other languages
Chinese (zh)
Other versions
CN117962901A
Inventor
陈筱琳
赵雅倩
史宏志
张亚强
许光远
高飞
李逍
Current Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Original Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Metabrain Intelligent Technology Co Ltd filed Critical Suzhou Metabrain Intelligent Technology Co Ltd
Priority to CN202410379844.5A
Publication of CN117962901A
Application granted
Publication of CN117962901B


Landscapes

  • Traffic Control Systems (AREA)

Abstract

The embodiment of the application provides a driving state adjusting method and device, a storage medium, an electronic device and a computer program product, and relates to the field of automatic driving. The method comprises the following steps: determining, from a plurality of preset driver states, the driver state in which the driver of the autonomous vehicle is currently located; determining, from a plurality of preset environment complexities, the environment complexity of the environment in which the autonomous vehicle is currently located; determining, from a plurality of preset vehicle states, the vehicle state in which the autonomous vehicle is currently located; determining a target take-over level from a plurality of preset take-over levels according to the environment complexity and the vehicle state, and determining a target prompt level according to the target take-over level and the driver state; and controlling the autonomous vehicle to execute a prompt operation according to the target prompt level, wherein the prompt operation is used for prompting the driver to adjust the driving state. By adopting this technical scheme, the problem that the traditional driver state intervention reminding strategy has a poor reminding effect in an automatic driving scenario is solved.

Description

Driving state adjusting method and device, storage medium, electronic device and computer program product
Technical Field
The embodiment of the application relates to the field of automatic driving, in particular to a driving state adjusting method and device, a storage medium, electronic equipment and a computer program product.
Background
Automatic driving is a leading-edge technology that relies on computers and artificial intelligence to complete full, safe and effective driving without manual operation. Based on the degree to which the driving automation system can execute the dynamic driving task, the related technical standard "Automobile Driving Automation Classification" divides driving automation into six levels, 0 to 5, according to the role allocation and the design operating condition restrictions in executing the dynamic driving task. For driving automation at level L3 and below, the driver is required to remain attentive to the driving of the vehicle while in the driving seat so as to be able to take over vehicle control at any time, even when the driver is not directly and actively controlling the vehicle. Therefore, monitoring the takeover state of the driver in real time and performing intervention reminding according to different states is an important technology for guaranteeing the automatic driving safety of vehicles at level L3 or below.
At present, most driver state intervention reminding strategies are aimed at abnormal driver states in the traditional driving environment, so most research determines the degree of intervention reminding based only on the driver state. In automatic driving, however, the driver state differs from the normal driving state in traditional driving, so determining the degree of intervention reminding for the driver based only on the driver state is not suitable for the automatic driving scenario.
For the problem in the related art that the traditional driver state intervention reminding strategy has a poor reminding effect in an automatic driving scenario, no effective solution has been proposed so far.
Disclosure of Invention
The embodiment of the application provides a driving state adjusting method and device, a storage medium, an electronic device and a computer program product, so as to at least solve the problem that the traditional driver state intervention reminding strategy has a poor reminding effect in an automatic driving scenario.
According to an embodiment of the present application, there is provided a driving state adjustment method, including: determining, from a plurality of preset driver states, the driver state in which a driver in the autonomous vehicle is currently located, wherein the preset driver states are used to reflect the driver's actual takeover capability for the autonomous vehicle; determining, from a plurality of preset environment complexities, the environment complexity of the environment in which the autonomous vehicle is currently located; determining, from a plurality of preset vehicle states, the vehicle state in which the autonomous vehicle is currently located, wherein the preset vehicle states are used to reflect the probability that the autonomous vehicle is allowed to be controlled; determining a target takeover level from a plurality of preset takeover levels according to the environment complexity and the vehicle state, and determining a target prompt level according to the target takeover level and the driver state, wherein the preset takeover levels are used to indicate the takeover capability required of the driver by the autonomous vehicle under the corresponding driving situation; and controlling the autonomous vehicle to execute a prompt operation according to the target prompt level, wherein the prompt operation is used for prompting the driver to adjust the driving state.
In one exemplary embodiment, the determining, from among a plurality of preset driver states, a driver state in which a driver in the autonomous vehicle is currently located includes: acquiring N pieces of state data of the driver, wherein the N pieces of state data include the image data and the physiological signal data of the driver, and N is a positive integer greater than or equal to 2; and determining, from the plurality of preset driver states, the driver state in which the driver in the autonomous vehicle is currently located according to the N pieces of state data.
In an exemplary embodiment, the determining, according to the N state data, a driver state in which a driver in the autonomous vehicle is currently located from a plurality of preset driver states includes: determining M classification models, wherein model inputs of the M classification models are different from each other, the inputs of each classification model in the M classification models are part or all of the N state data, and each classification model is used for determining a driver state in which the driver is currently located from the plurality of preset driver states according to the corresponding model inputs, wherein M is a positive integer greater than or equal to 2; and determining M classification results of the M classification models according to the N state data, and determining the current driver state of the driver according to the M classification results.
In an exemplary embodiment, the determining M classification results of the M classification models according to the N state data includes: determining the classification result of the ith classification model in the M classification models by the following method to determine M classification results of the M classification models: and inputting the state data corresponding to the ith classification model in the N state data into the ith classification model to obtain a classification result of the ith classification model.
In an exemplary embodiment, the determining, according to the M classification results, a driver state in which the driver is currently located includes: under the condition that the M classification results are the same and are all the designated driver states, determining the designated driver states as the driver states of the drivers at present; and under the condition that the M classification results are not identical and the M classification results comprise P classification results, determining the credibility corresponding to each classification result in the P classification results, and determining the driver state corresponding to the classification result with the highest credibility in the P classification results as the current driver state of the driver, wherein P is a positive integer greater than or equal to 2 and less than or equal to M.
In an exemplary embodiment, the determining the confidence level corresponding to each classification result in the P-class classification result includes: the credibility of the j-th class classification result in the P-class classification results is determined by the following steps: z classification models with the output results of the j-th class classification result are determined from the M classification models; determining the data quality of state data corresponding to each classification model in the Z classification models; and under the condition that the Z groups of state data corresponding to the Z classification models comprise Q pieces of state data, determining the credibility of the j-th classification result according to the data quality of the Q pieces of state data.
In an exemplary embodiment, the determining the data quality of the state data corresponding to each of the Z classification models includes: determining a variance of pixel values in the image data as a data quality of the state data in the case that the state data is image data; and under the condition that the state data are physiological signal data, determining the data quality of the state data according to the signal stability and the signal continuity of the physiological signal data.
In an exemplary embodiment, the determining the environmental complexity of the environment in which the autonomous vehicle is currently located from a plurality of preset environmental complexities includes: determining dynamic entity data, basic equipment element data and weather data of an environment in which the automatic driving vehicle is located, wherein the dynamic entity data is used for describing dynamic entities in the environment, and the basic equipment element data is used for describing basic equipment elements in the environment; and determining the environmental complexity of the current environment of the automatic driving vehicle from a plurality of preset environmental complexities according to the dynamic entity data, the basic equipment element data and the weather data.
In an exemplary embodiment, the determining, according to the dynamic entity data, the basic equipment element data and the weather data, the environmental complexity of the environment in which the autonomous vehicle is currently located from a plurality of preset environmental complexities includes: determining a first environment complex value according to the number of dynamic entities in the dynamic entity data; determining a second environment complex value according to the dynamic entity data and the entity data of a target entity in the basic equipment element data, wherein the risk coefficient of the target entity is larger than a preset threshold; determining a third environmental complexity value from the weather data; and determining the environment complexity of the environment in which the autonomous vehicle is currently located from a plurality of preset environment complexities according to the first environment complexity value, the second environment complexity value and the third environment complexity value.
In an exemplary embodiment, the determining the environmental complexity of the environment in which the autonomous vehicle is currently located from a plurality of preset environmental complexities according to the first environmental complexity value, the second environmental complexity value, and the third environmental complexity value includes: carrying out weighted summation on the first environment complex value, the second environment complex value and the third environment complex value to obtain a target environment complex value; and under the condition that the target environment complexity value is located in a target environment complexity value range in a plurality of environment complexity value ranges, determining the target environment complexity corresponding to the target environment complexity value range as the environment complexity of the environment where the automatic driving vehicle is currently located, wherein the plurality of preset environment complexity values have a one-to-one correspondence with the plurality of environment complexity value ranges, and the plurality of environment complexity value ranges have a correspondence with the driving scene of the automatic driving vehicle.
In an exemplary embodiment, the determining, from a plurality of preset vehicle states, a vehicle state in which the autonomous vehicle is currently located includes: acquiring running data of the vehicle, and determining the possibility that the driving behavior of the automatic driving vehicle leaves a preset operation area according to the running data; and determining the current vehicle state of the automatic driving vehicle from a plurality of preset vehicle states according to the possibility and the integrity of an automatic driving system of the automatic driving vehicle.
In one exemplary embodiment, the determining, from a plurality of preset vehicle states, the vehicle state in which the autonomous vehicle is currently located according to the likelihood and the integrity of an autonomous system of the autonomous vehicle includes: acquiring a first preset rule, wherein the first preset rule has a corresponding relation between each preset vehicle state in the plurality of preset vehicle states and the possibility that the driving behavior of the automatic driving vehicle leaves a preset operation area and the integrity of an automatic driving system of the automatic driving vehicle; based on the first preset rule, determining a vehicle state in which the autonomous vehicle is currently located from a plurality of preset vehicle states according to the possibility and the integrity of an autonomous system of the autonomous vehicle.
In an exemplary embodiment, said determining a target take-over level from a plurality of preset take-over levels based on said environmental complexity and said vehicle state comprises: acquiring a second preset rule, wherein the second preset rule has a corresponding relation between each preset takeover level in the plurality of preset takeover levels and the environment complexity and the vehicle state; and determining a target take-over level from a plurality of preset take-over levels according to the environmental complexity and the vehicle state based on the second preset rule.
In an exemplary embodiment, said determining a target hint level based on said target take over level and said driver status comprises: acquiring a third preset rule, wherein the third preset rule is used for indicating the corresponding relation between different take-over levels and the states of the driver and different prompt levels; and determining a target prompt level according to the target take-over level and the driver state based on the third preset rule.
In an exemplary embodiment, said determining a target hint level based on said target take over level and said driver status comprises: determining a first prompt level as the target prompt level if the target take-over level is higher than the take-over level indicated by the driver status; determining a second prompt level as the target prompt level if the target take-over level is equal to the take-over level indicated by the driver status; determining a third prompt level as the target prompt level if the target take-over level is lower than the take-over level indicated by the driver status; the prompt strength corresponding to the first prompt level is higher than the prompt strength corresponding to the second prompt level, and the prompt strength corresponding to the second prompt level is higher than the prompt strength corresponding to the third prompt level.
In an exemplary embodiment, the controlling the autonomous vehicle to perform the prompting operation according to the target prompting level includes: obtaining a preset prompting rule, wherein prompting operations corresponding to different prompting grades are arranged in the prompting rule, and the prompting grades comprise the target prompting grade; determining a target prompt operation corresponding to the target prompt level based on the prompt rule; and controlling the automatic driving vehicle to execute the target prompt operation, wherein the prompt operation comprises the target prompt operation.
In an exemplary embodiment, the acquiring N status data of the driver includes: and acquiring the N pieces of state data through different devices.
In one exemplary embodiment, the prompting operation prompts the driver to adjust driving conditions through a dimension of at least one of: visual, auditory, tactile.
According to another embodiment of the present application, there is also provided an adjustment device for driving state, including: a first determining module, configured to determine a driver state in which a driver in an autonomous vehicle is currently located from a plurality of preset driver states, where the preset driver states are used to represent an actual takeover capability of the driver for the autonomous vehicle; the second determining module is used for determining the environment complexity of the environment where the automatic driving vehicle is currently located from a plurality of preset environment complexities; a third determining module, configured to determine a vehicle state in which the autonomous vehicle is currently located from a plurality of preset vehicle states, where the preset vehicle states are used to represent a probability that the autonomous vehicle is allowed to be controlled; a fourth determining module, configured to determine a target takeover level from a plurality of preset takeover levels according to the environmental complexity and the vehicle state, and determine a target prompt level according to the target takeover level and the driver state, where the preset takeover level is used to indicate a takeover capability of the driver required by the autopilot vehicle in a corresponding driving scenario; and the control module is used for controlling the automatic driving vehicle to execute prompt operation according to the target prompt level, wherein the prompt operation is used for prompting the driver to adjust the driving state.
According to a further embodiment of the application, there is also provided a computer readable storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the application there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to a further embodiment of the application, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of any of the method embodiments described above.
According to the application, the target take-over level is determined according to the environmental complexity of the environment where the automatic driving vehicle is located and the vehicle state of the automatic driving vehicle, the target prompt level is determined according to the target take-over level and the driver state of the driver, and then the automatic driving vehicle is controlled to execute the prompt operation according to the target prompt level so as to prompt the driver to adjust the driving state, so that the driver can be effectively intervened and reminded, the driver can experience the comfort and convenience brought by the automatic driving as far as possible on the premise of ensuring the driving safety, and the problem that the traditional driver state intervening and reminding strategy has poor reminding effect in the automatic driving scene is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is a hardware configuration block diagram of a server apparatus of a driving state adjustment method according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of adjusting driving state according to an embodiment of the present application;
FIG. 3 is a frame diagram of a driver status monitoring and driver driving status prompting system according to an embodiment of the present application;
FIG. 4 is an overall flow chart of a method of adjusting driving state according to an embodiment of the present application;
Fig. 5 is a structural block diagram of a driving state adjusting device according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an alternative electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The driving state adjustment method provided in the embodiment of the present application may be executed on a server device or a similar computing device. Taking operation on a server device as an example, fig. 1 is a block diagram of the hardware structure of a server device for a driving state adjustment method according to an embodiment of the present application. As shown in fig. 1, the server device may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and may further include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those of ordinary skill in the art that the structure shown in fig. 1 is merely illustrative and is not intended to limit the structure of the server device described above. For example, the server device may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a driving state adjustment method in an embodiment of the present application, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, implement the above-mentioned method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located with respect to the processor 102, which may be connected to the server device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of a server device. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as a NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a driving state adjustment method is provided, including but not limited to being applied to an autonomous vehicle, and fig. 2 is a flowchart of a driving state adjustment method according to an embodiment of the present application, as shown in fig. 2, and the flowchart includes the following steps S202-S210:
step S202: determining a driver state of an automatic driving vehicle in which a driver is currently located from a plurality of preset driver states, wherein the preset driver states are used for reflecting the actual taking over capability of the driver on the automatic driving vehicle;
Optionally, the plurality of preset driver states includes, but is not limited to: the first take-over level (i.e., a high take-over level, the driver's actual take-over capacity for the autonomous vehicle is greater than or equal to the first preset capacity), the second take-over level (i.e., a medium take-over level, the driver's actual take-over capacity for the autonomous vehicle is greater than or equal to the second preset capacity, less than the first preset capacity), the third take-over level (i.e., a low take-over level, the driver's actual take-over capacity for the autonomous vehicle is greater than or equal to the third preset capacity, less than the second preset capacity), and the fourth take-over level (i.e., a disengaged driving level, the driver's actual take-over capacity for the autonomous vehicle is less than the third preset capacity).
Step S204: determining the environmental complexity of the environment in which the autonomous vehicle is currently located from a plurality of preset environmental complexities;
optionally, the plurality of preset environmental complexities includes, but is not limited to: the first environmental complexity (i.e., low environmental complexity, the environmental complexity is lower than the first preset complexity), the second environmental complexity (i.e., medium environmental complexity, the environmental complexity is higher than or equal to the first preset complexity and lower than the second preset complexity), and the third environmental complexity (i.e., high environmental complexity, the environmental complexity is higher than the second preset complexity).
Step S206: determining a vehicle state of the automatic driving vehicle from a plurality of preset vehicle states, wherein the preset vehicle states are used for reflecting the probability that the automatic driving vehicle is allowed to be controlled;
alternatively, vehicle conditions include, but are not limited to: the first controllable state (i.e., a high controllable state, the probability that the vehicle is allowed to be controlled is higher than or equal to the first probability), the second controllable state (i.e., a medium controllable state, the probability that the vehicle is allowed to be controlled is higher than or equal to the second probability, lower than the first probability), the third controllable state (i.e., a low controllable state, the probability that the vehicle is allowed to be controlled is lower than the second probability).
Alternatively, the driver status may be determined by a driver status monitoring module, the environmental complexity may be determined by a driving environment monitoring module, and the vehicle status may be determined by a vehicle status monitoring module.
It should be noted that steps S202 to S206 are performed independently of one another and have no fixed execution order.
Step S208: determining a target take-over level from a plurality of preset take-over levels according to the environment complexity and the vehicle state, and determining a target prompt level according to the target take-over level and the driver state, wherein the preset take-over level is used for indicating the take-over capacity of a driver required by the automatic driving vehicle under a corresponding driving situation.
It should be noted that the plurality of preset takeover levels include the above: the first take-over level, the second take-over level, the third take-over level, the fourth take-over level.
Optionally, the target hint level is one of: the first preset prompting level, the second preset prompting level, the third preset prompting level and the fourth preset prompting level; the reminding intensity of the first preset reminding level to the fourth preset reminding level is gradually increased from weak to strong.
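For concreteness, the preset categories referred to in steps S202 to S208 can be represented as simple enumerations. The following Python sketch is illustrative only; the member names merely mirror the categories listed above and are not part of the claimed method.

```python
from enum import Enum

class DriverState(Enum):
    # Actual takeover capability of the driver, from high to disengaged.
    HIGH_TAKEOVER = 1
    MEDIUM_TAKEOVER = 2
    LOW_TAKEOVER = 3
    DISENGAGED = 4

class EnvironmentComplexity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

class VehicleState(Enum):
    # Probability that the autonomous vehicle is allowed to be controlled.
    HIGH_CONTROLLABLE = 1
    MEDIUM_CONTROLLABLE = 2
    LOW_CONTROLLABLE = 3

class TakeoverLevel(Enum):
    HIGH = 1
    MEDIUM = 2
    LOW = 3
    DISENGAGED = 4

class PromptLevel(Enum):
    # Prompt intensity increases from weak (LEVEL_1) to strong (LEVEL_4).
    LEVEL_1 = 1
    LEVEL_2 = 2
    LEVEL_3 = 3
    LEVEL_4 = 4
```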
Step S210: and controlling the automatic driving vehicle to execute prompt operation according to the target prompt level, wherein the prompt operation is used for prompting the driver to adjust driving state.
In one exemplary embodiment, the prompting operation prompts the driver to adjust driving conditions through a dimension of at least one of: visual, auditory, tactile.
Alternatively, the visual dimension may include, but is not limited to: changes to visual elements of screen devices, such as flickering of important visual elements on the head-up display (HUD) of the vehicle; or changes in the interior lighting system of the vehicle, such as a change in the color temperature of the lights.
Alternatively, the auditory dimension may include, but is not limited to: music signals, tone changes, voice prompts, etc. For example, the high- and low-frequency components of the music being played may be changed, or voice prompts may be used directly to intervene with the driver. The position of the sound source in the vehicle can also be used to give directional warning information.
Alternatively, the haptic dimension may include, but is not limited to: ambient temperature in the vehicle, seat shape, temperature, vibration, etc. Such as turning on or off seat heating, moderately lowering the temperature in the vehicle, etc.
Note that the presentation effects and characteristics of the different dimensions differ. Visual prompts have a certain delay, that is, a certain amount of time is needed before the driver captures them. Auditory prompts are relatively direct and can be perceived by the driver almost instantaneously, such as changes in volume and tone; direct voice prompts or alarms, however, carry a certain cognitive load and response cost and may be disruptive to the driver's current state. Among tactile prompts, vibration-based prompts take effect immediately, whereas temperature-related prompts require a period of time before they are felt by the driver. Therefore, different combinations of modalities are used to design the prompt scheme according to the required prompt level. The prompt timing and persistence of the different modalities differ, and organically combining these multi-modal prompt mechanisms can improve the prompt effect and adjust the takeover state of the driver at a fine granularity while disturbing the driver's automatic driving experience as little as possible. A minimal sketch of such a scheme is given below.
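The following Python sketch shows how prompts in different modalities might be combined per prompt level. The specific level-to-modality mapping used here is an assumption made only for this illustration and does not reproduce the preset prompt rules of Table 1.

```python
# Hypothetical mapping from prompt level to a combination of modalities.
# The actual combinations are defined by the preset prompt rules (Table 1);
# the entries below are assumptions made only for this illustration.
PROMPT_MODALITIES = {
    1: ["visual"],                          # weakest: low-intrusion visual cue only
    2: ["visual", "auditory"],              # add a near-instant auditory cue
    3: ["visual", "auditory", "tactile"],   # add vibration for immediate effect
    4: ["auditory_alarm", "tactile"],       # strongest: direct alarm plus vibration
}

def execute_prompt(prompt_level: int) -> None:
    """Trigger the prompt operation for the given prompt level."""
    for modality in PROMPT_MODALITIES[prompt_level]:
        # In a real vehicle each modality would drive an HMI actuator
        # (HUD flicker, cabin lighting, speaker, seat vibration, ...).
        print(f"prompting driver via {modality}")
```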
According to the method, the target take-over level is determined according to the environmental complexity of the environment where the automatic driving vehicle is located and the vehicle state of the automatic driving vehicle, the target prompt level is determined according to the target take-over level and the driver state of the driver, and then the automatic driving vehicle is controlled to execute prompt operation according to the target prompt level so as to prompt the driver to adjust the driving state, so that the driver can be effectively intervened and reminded, the driver can experience comfort and convenience brought by the automatic driving as much as possible on the premise of ensuring the driving safety, and the problem that the traditional driver state intervening and reminding strategy is poor in reminding effect under the automatic driving scene is solved.
In an exemplary embodiment, the step S210 includes the following steps S11-S13:
Step S11: obtaining a preset prompting rule, wherein prompting operations corresponding to different prompting grades are arranged in the prompting rule, and the prompting grades comprise the target prompting grade; the prompt level comprises the first preset prompt level, the second preset prompt level, the third preset prompt level and the fourth preset prompt level.
Step S12: determining a target prompt operation corresponding to the target prompt level based on the prompt rule;
step S13: and controlling the automatic driving vehicle to execute the target prompt operation, wherein the prompt operation comprises the target prompt operation.
Alternatively, the prompt rules may be as described in Table 1 below:
TABLE 1
In an exemplary embodiment, the above step S202 may be implemented by the following steps S21 to S22:
step S21: acquiring N pieces of state data of the driver, wherein the N pieces of state data comprise: the image data and the physiological signal data of the driver are that N is a positive integer more than or equal to 2;
Alternatively, the N pieces of state data may be acquired by different devices. For example, the image data comes from a camera and the physiological signal data comes from a physiological signal detection sensor. In this way, a malfunction of a single device cannot render all of the state data inaccurate.
It should be noted that, the image data includes, but is not limited to, facial image data and body image data of the driver, and the physiological signal data may include, but is not limited to: respiratory rate, heart rate or pulse, skin resistance signal.
Step S22: and determining the current driver state of the driver in the automatic driving vehicle from a plurality of preset driver states according to the N state data.
It should be noted that, by determining the current driver state of the driver through the image data and the physiological signal data of the driver, the accuracy of the determined driver state can be improved.
In an exemplary embodiment, the above step S22 may be implemented by the following steps S31 to S32:
step S31: determining M classification models, wherein model inputs of the M classification models are different from each other, the inputs of each classification model in the M classification models are part or all of the N state data, and each classification model is used for determining a driver state in which the driver is currently located from a plurality of preset driver states according to the corresponding model inputs, wherein M is a positive integer greater than or equal to 2;
optionally, the classification algorithm used by the classification model is an artificial intelligence algorithm used to classify data, including but not limited to: random forests, support vector machines, long and short term memory neural networks, convolutional neural networks, and the like.
Optionally, in the case where M is equal to 3, the model input of the first classification model is the image data among the N pieces of state data, the model input of the second classification model is the physiological signal data among the N pieces of state data, and the model input of the third classification model is both the image data and the physiological signal data among the N pieces of state data.
Step S32: and determining M classification results of the M classification models according to the N state data, and determining the current driver state of the driver according to the M classification results.
In this embodiment, the driver state in which the driver is currently located is determined by a plurality of classification models, so that the accuracy of the determined driver state can be further improved.
In an exemplary embodiment, determining M classification results of the M classification models according to the N state data includes: determining the classification result of the ith classification model in the M classification models by the following method to determine M classification results of the M classification models: and inputting the state data corresponding to the ith classification model in the N state data into the ith classification model to obtain a classification result of the ith classification model.
In an exemplary embodiment, determining, according to the M classification results, a driver state in which the driver is currently located includes: under the condition that the M classification results are the same and are all the designated driver states, determining the designated driver states as the driver states of the drivers at present; and under the condition that the M classification results are not identical and the M classification results comprise P classification results, determining the credibility corresponding to each classification result in the P classification results, and determining the driver state corresponding to the classification result with the highest credibility in the P classification results as the current driver state of the driver, wherein P is a positive integer greater than or equal to 2 and less than or equal to M.
In an exemplary embodiment, the determining the confidence level corresponding to each classification result in the P-class classification results includes: the credibility of the j-th class classification result in the P-class classification results is determined through the following steps S41-S43:
Step S41: z classification models with the output results of the j-th class classification result are determined from the M classification models;
step S42: determining the data quality of state data corresponding to each classification model in the Z classification models;
step S43: and under the condition that the Z groups of state data corresponding to the Z classification models comprise Q pieces of state data, determining the credibility of the j-th classification result according to the data quality of the Q pieces of state data.
Alternatively, the mean value of the data quality of the Q state data may be determined as the credibility of the j-th class classification result.
Optionally, the data quality of the Q state data may be normalized, and then the average value of the data quality of the normalized Q state data may be determined as the credibility of the j-th classification result.
That is, in order to improve the accuracy of the determined driver state, two or more models are used to evaluate the takeover state of the driver, and therefore the classification results of the plurality of models need to be integrated to obtain a summarized driver state. If the results of the plurality of classification models are consistent, the consistent result can be directly taken as the driver state; if the results of the plurality of classification models are inconsistent, the quality evaluation results of all the data sources used by the models that produced the same classification result are averaged (optionally, the quality evaluation results of all the data sources are normalized before averaging), the averages are compared, and the classification result with the higher score is selected as the summarized driver state, as sketched below.
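A minimal Python sketch of this fusion step, under the assumption that each model reports a driver-state label and that the normalized quality scores of the state data consumed by each model are available:

```python
from collections import defaultdict

def fuse_driver_state(classifications, data_quality):
    """Fuse the results of M classification models into one driver state.

    classifications: list of (model_id, predicted_state) pairs, one per model.
    data_quality:    dict mapping model_id -> list of normalized quality scores,
                     one score per piece of state data that model consumed.
    """
    results = {state for _, state in classifications}
    if len(results) == 1:
        # All M classification results agree: take the common state directly.
        return results.pop()

    # Results disagree: score each candidate state by the mean data quality
    # of all state data used by the models that predicted it (its credibility).
    credibility = defaultdict(list)
    for model_id, state in classifications:
        credibility[state].extend(data_quality[model_id])

    return max(credibility, key=lambda s: sum(credibility[s]) / len(credibility[s]))
```

For example, if two models output "low takeover level" and one outputs "medium takeover level", the state whose supporting state data have the higher mean quality score is selected as the summarized driver state.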
In an exemplary embodiment, the step S42 includes: determining a variance of pixel values in the image data as a data quality of the state data in the case that the state data is image data; and under the condition that the state data are physiological signal data, determining the data quality of the state data according to the signal stability and the signal continuity of the physiological signal data.
That is, the variance of the image data can be used to evaluate the image quality. Variance is the simplest method of assessing image quality, and refers to the degree of dispersion of the gray values of the image pixels relative to their mean. The larger the variance, the more dispersed the gray levels in the image and the better the image quality. The change of the image gray level is evaluated by the following formula:

$V=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(p(i,j)-\mu\right)^{2}$, wherein $\mu=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}p(i,j)$,

where M and N denote the width and height of the image, respectively, $p(i,j)$ is the pixel value at coordinate $(i,j)$ of the image, and $\mu$ is the mean of all pixel values of the image. Taking the variance V as the quality evaluation index of the image data, the maximum value $V_{\max}$ and the minimum value $V_{\min}$ of the variances of the image data in the training set of the classification model can be used to normalize the index, with the specific formula as follows:

$V_{\mathrm{norm}}=\frac{V-V_{\min}}{V_{\max}-V_{\min}}$
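A minimal sketch of this image-quality computation, assuming a grayscale image stored as a NumPy array and that the training-set extremes V_max and V_min have been computed beforehand:

```python
import numpy as np

def image_quality(image, v_min, v_max):
    """Variance-based image quality, min-max normalized to [0, 1].

    image: 2-D array of gray values (height x width).
    v_min, v_max: minimum and maximum variance observed on the
                  classification model's training set.
    """
    mu = image.mean()                      # mean of all pixel values
    v = ((image - mu) ** 2).mean()         # variance of the gray values
    # Normalize with the training-set extremes and clamp to [0, 1].
    return float(np.clip((v - v_min) / (v_max - v_min), 0.0, 1.0))
```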
The evaluation of the physiological signal data quality may comprise a signal stability evaluation and a signal continuity evaluation, wherein the signal stability may be evaluated by comparing the similarity of the signal in the current window with the signal in the last stable window, and the signal continuity may be evaluated by the proportion of blank (missing) signal in the current window. Signal continuity is the simplest signal quality assessment method.

Taking the signal continuity of the physiological signal data as the quality evaluation index as an example, within a data window the ratio $a$ of available data can be regarded as a specific embodiment of signal continuity. When the ratio of unavailable data exceeds 50%, the segment of data is generally considered unreliable, that is, the data quality is 0; when all the data are available data, the data quality is 1. The normalization can therefore be performed by the following formula:

$\mathrm{Quality}=\max\left(0,\ \frac{a-0.5}{1-0.5}\right)$
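A corresponding sketch for the continuity-based quality of a physiological signal window, under the assumption that unavailable samples are marked as NaN:

```python
import numpy as np

def signal_quality(window):
    """Continuity-based data quality of a physiological signal window.

    window: 1-D array of samples; unavailable samples are NaN.
    Returns 0 when more than half of the window is unavailable and 1 when
    the whole window is available, with a linear mapping in between.
    """
    a = np.count_nonzero(~np.isnan(window)) / len(window)  # ratio of available data
    return max(0.0, (a - 0.5) / (1.0 - 0.5))
```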
In an exemplary embodiment, the above step S204 may be implemented by the following steps S51-S52:
step S51: determining dynamic entity data, basic equipment element data and weather data of an environment in which the automatic driving vehicle is located, wherein the dynamic entity data is used for describing dynamic entities in the environment, and the basic equipment element data is used for describing basic equipment elements in the environment;
It should be noted that the dynamic entities include, but are not limited to, vehicles and pedestrians. The basic equipment elements include, but are not limited to, signal lights, lane lines and traffic cones, and the weather data includes, but is not limited to, severe weather such as rain and snow.
Step S52: and determining the environmental complexity of the current environment of the automatic driving vehicle from a plurality of preset environmental complexities according to the dynamic entity data, the basic equipment element data and the weather data.
In an exemplary embodiment, the step S52 includes the following steps S61-S64:
Step S61: determining a first environment complex value according to the number of dynamic entities in the dynamic entity data;
step S62: determining a second environment complex value according to the dynamic entity data and the entity data of a target entity in the basic equipment element data, wherein the risk coefficient of the target entity is larger than a preset threshold;
Step S63: determining a third environmental complexity value from the weather data;
it should be noted that steps S61 to S63 are performed independently of one another and have no fixed execution order.
Step S64: and determining the environment complexity of the environment in which the automatic driving vehicle is currently located from a plurality of preset environment complexities according to the first environment complexity value, the second environment complexity value and the third environment complexity value.
In an exemplary embodiment, the step S64 includes: carrying out weighted summation on the first environment complex value, the second environment complex value and the third environment complex value to obtain a target environment complex value; and under the condition that the target environment complexity value is located in a target environment complexity value range in a plurality of environment complexity value ranges, determining the target environment complexity corresponding to the target environment complexity value range as the environment complexity of the environment where the automatic driving vehicle is currently located, wherein the plurality of preset environment complexity values have a one-to-one correspondence with the plurality of environment complexity value ranges, and the plurality of environment complexity value ranges have a correspondence with the driving scene of the automatic driving vehicle.
It should be noted that driving scenarios of the autonomous vehicle include, but are not limited to, city, country and highway scenarios. A sketch of the weighted-summation step is given below.
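The following Python sketch illustrates the weighted summation and range lookup; the weights and the value-range boundaries are assumptions chosen for illustration and would in practice be configured per driving scenario.

```python
def environment_complexity(c1, c2, c3, weights=(0.4, 0.4, 0.2)):
    """Weighted sum of the three environment complexity values, mapped to a
    preset environment complexity via value ranges.

    c1: value derived from the number of dynamic entities
    c2: value derived from high-risk target entities
    c3: value derived from the weather data
    The weights and the range boundaries below are illustrative assumptions;
    in practice they may differ per driving scenario (city, country, highway).
    """
    target = weights[0] * c1 + weights[1] * c2 + weights[2] * c3
    if target < 0.3:
        return "low complexity"
    if target < 0.7:
        return "medium complexity"
    return "high complexity"
```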
In an exemplary embodiment, the above step S206 is implemented by the following steps S71-S72:
Step S71: acquiring running data of the vehicle, and determining the possibility that the driving behavior of the automatic driving vehicle leaves a preset operation area according to the running data;
Alternatively, the operating data of the vehicle may be derived from the data of the inertial measurement unit and the chassis sensors, and includes, but is not limited to, data describing the running state of the ego vehicle such as travel speed, acceleration and steering angle.
Step S72: and determining the current vehicle state of the automatic driving vehicle from a plurality of preset vehicle states according to the possibility and the integrity of an automatic driving system of the automatic driving vehicle.
Optionally, the integrity of the automatic driving system includes: the technical safety, reliability and availability of the sensors, actuators, support systems and computing units.
In an exemplary embodiment, the step S72 includes: acquiring a first preset rule, wherein the first preset rule has a corresponding relation between each preset vehicle state in the plurality of preset vehicle states and the possibility that the driving behavior of the automatic driving vehicle leaves a preset operation area and the integrity of an automatic driving system of the automatic driving vehicle; based on the first preset rule, determining a vehicle state in which the autonomous vehicle is currently located from a plurality of preset vehicle states according to the possibility and the integrity of an autonomous system of the autonomous vehicle.
Optionally, the first preset rule may be: when the likelihood of the vehicle leaving the preset operating zone is low (e.g., less than 30%) and the autopilot system integrity is high, the vehicle state is a highly controllable state; when the vehicle has a certain possibility of leaving a preset operating area (for example, 30-50%) but the integrity of an automatic driving system is high, the vehicle state is a medium controllable state; the vehicle state is a low-controllability state when the likelihood of the vehicle leaving the preset operating zone is high (e.g., greater than 50%) or the autopilot system integrity is partially compromised.
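A Python sketch of the optional first preset rule described above; the 30% and 50% thresholds are taken from that rule, while the representation of system integrity as two flags is a simplification made for this illustration.

```python
def vehicle_state(leave_odd_probability, system_integrity_high, integrity_compromised=False):
    """Determine the preset vehicle state from the likelihood of leaving the
    preset operating area and the integrity of the automatic driving system.

    leave_odd_probability: probability (0..1) that the driving behavior leaves
                           the preset operation area.
    system_integrity_high: True if the automatic driving system integrity is high.
    integrity_compromised: True if the integrity is partially compromised.
    """
    if leave_odd_probability > 0.5 or integrity_compromised:
        return "low controllable state"
    if leave_odd_probability < 0.3 and system_integrity_high:
        return "high controllable state"
    if system_integrity_high:
        # 30%..50% likelihood of leaving the operating area, but integrity is high.
        return "medium controllable state"
    return "low controllable state"
```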
In an exemplary embodiment, the above-mentioned determining the target take-over level from a plurality of preset take-over levels according to the environmental complexity and the vehicle state may be achieved by the following steps S81-S82:
Step S81: acquiring a second preset rule, wherein the second preset rule has a corresponding relation between each preset takeover level in the plurality of preset takeover levels and the environment complexity and the vehicle state;
Step S82: and determining a target take-over level from a plurality of preset take-over levels according to the environmental complexity and the vehicle state based on the second preset rule.
Optionally, the second preset rule may be as shown in table 2:
TABLE 2
In an exemplary embodiment, determining the target prompt level according to the target take-over level and the driver state includes: acquiring a third preset rule, wherein the third preset rule is used for indicating the corresponding relation between different take-over levels and the states of the driver and different prompt levels; and determining a target prompt level according to the target take-over level and the driver state based on the third preset rule.
Optionally, the third preset rule may be as shown in table 3:
TABLE 3
In an exemplary embodiment, determining the target prompt level according to the target take-over level and the driver state includes: determining a first prompt level as the target prompt level if the target take-over level is higher than the take-over level indicated by the driver status; determining a second prompt level as the target prompt level if the target take-over level is equal to the take-over level indicated by the driver status; determining a third prompt level as the target prompt level if the target take-over level is lower than the take-over level indicated by the driver status; the prompt strength corresponding to the first prompt level is higher than the prompt strength corresponding to the second prompt level, and the prompt strength corresponding to the second prompt level is higher than the prompt strength corresponding to the third prompt level.
It should be noted that, the first prompt level, the second prompt level, and the third prompt level are three prompt levels among the first preset level, the second preset prompt level, the third preset prompt level, and the fourth preset prompt level.
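The comparison logic of this embodiment can be sketched directly; encoding the take-over levels as integers in which a larger value means a higher take-over capability is an assumption made for this illustration.

```python
def prompt_level(target_takeover_level, driver_takeover_level):
    """Select the prompt level by comparing the required take-over level with
    the take-over level indicated by the current driver state.

    Both arguments are integers where a larger value means a higher
    take-over capability (an assumed encoding for this sketch).
    """
    if target_takeover_level > driver_takeover_level:
        return "first prompt level"   # strongest prompt: driver must raise engagement
    if target_takeover_level == driver_takeover_level:
        return "second prompt level"  # medium prompt: maintain the current state
    return "third prompt level"       # weakest prompt: driver already exceeds requirement
```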
It will be apparent that the embodiments described above are merely some, but not all, of the embodiments of the invention. For a better understanding of the above method, the following describes the above process with reference to specific embodiments, which are not intended to limit the technical solutions of the embodiments of the present invention.
The application provides a method and a system for monitoring the state of the driver and prompting the driver to change the driving state in automatic driving, which can monitor the takeover state of the driver with high reliability in real time, give graded prompts in a timely manner, adjust the takeover state of the driver in time and guarantee the driving safety of automatic driving. As shown in fig. 3, the system includes: a driver state monitoring module, a driver state evaluation module, a driving environment monitoring module, a vehicle state monitoring module, a takeover capability evaluation module and a driver takeover state intervention module.
1) A driver status monitoring module: the method is responsible for acquiring the state data of the current driver, wherein the state data at least comprises two types of data: image data and physiological signal data.
The driver state detection module is composed of a series of sensors directed at the in-vehicle environment. In the application, these sensors comprise at least two types of devices: a camera (directed at the driver's face and posture) and a physiological data detection sensor (directed at any one or more kinds of physiological data such as heart rate, respiration, electromyography, electrocardiography and blood pressure).
2) Driver state assessment module: and the method is responsible for integrating the multi-source monitoring data and evaluating the participation degree and the takeover state of the driving task of the driver.
The driver state assessment module takes the real-time data acquired by the driver state detection module as input data and comprehensively assesses the current state of the driver. The module is divided into three units. The first is a data quality evaluation unit, which is responsible for evaluating the quality of the acquired image data and physiological signals. The second is a state evaluation model unit, which contains two or more different evaluation models with mutually different input data; this design diversity improves the fault tolerance of the evaluation unit. The third is a comprehensive evaluation unit, which derives the weights of the classification results of the different models from the data quality evaluation and the initial reliability of the models, and synthesizes the classification results of the plurality of models to obtain the final evaluation result. The driver state assessment result falls into one of four classes: high takeover level, medium takeover level, low takeover level, and disengaged driving level.
3) The driving environment monitoring module: is responsible for acquiring the number, state, etc. of objects related to driving in the current driving environment.
The driving environment detection module is mainly used for monitoring the external environment of the vehicle, belongs to common functional components of the automatic driving vehicle, and can exist alone as a part of the system or directly acquire the data of the environment sensing module of the automatic driving system to detect the environment. The environment detection module is mainly used for evaluating the complexity of the current driving environment, and mainly classifies the driving environment complexity into 3 types based on dynamic entities, infrastructure elements and environmental conditions around the vehicle: low complexity, medium complexity, high complexity.
4) Vehicle state monitoring module: its main functions comprise two parts, namely determining the current automatic driving level and the corresponding control domain, and acquiring the current vehicle driving data.
The vehicle state detection module is mainly divided into two parts, namely a vehicle motion monitoring unit and an automatic driving system state monitoring unit. The self-vehicle movement monitoring unit obtains basic information such as the running speed, acceleration, steering angle and the like of the current vehicle based on the data of the inertia measuring unit and the chassis sensor; and the automatic driving system state monitoring unit is used for acquiring the control capability of the current automatic driving system on the vehicle and the corresponding operation domain boundary. The closer the current vehicle state is to the operating domain boundary, the higher the vehicle's requirement for take over capability, classifying the vehicle state into three categories: a high controllable state, a medium controllable state, a low controllable state. Wherein, the high controllable state indicates that the vehicle state is at the center of the operating domain of the automatic driving system, and the low controllable state indicates that the vehicle state is close to the boundary of the operating domain of the automatic driving system.
5) The takeover capability assessment module: and the method is responsible for integrating the state of the driver, the monitoring state of the driving environment and the state of the vehicle, evaluating the capacity of taking over of the current driver and comparing the capacity with the expected capacity of taking over.
The takeover capability assessment module derives, from the complexity assessment result of the driving environment and the vehicle state assessment result, the take-over state the driver should have under the current situation, compares it with the actually monitored driver state, and produces the current driver take-over capability assessment result from the difference.
6) Driver take-over state intervention module: is responsible for determining the required intervention level (i.e. the prompt level) according to the assessment result of the take-over capability.
According to the assessment result of the driver's take-over capability, the driver's take-over state is prompted through a vehicle-mounted human-machine interface. The prompting scheme is a prompt of a certain duration formed by one or more channels, where the channels are the sensory channels through which humans perceive the external world, including: visual, auditory, tactile, etc.
Optionally, the above method and system for monitoring the driver state and prompting the driver to change the driving state during automatic driving are based on multiple kinds of data describing the driver state: the driver state is judged separately by a plurality of machine learning models using different data, weights are calculated from the data quality to fuse the classification results of the models, and the fault tolerance of the classification modules is guaranteed through multi-source data and heterogeneous redundant models. Secondly, the prompting scheme is determined by integrating the driving environment, the vehicle state and the driver state; a fine-grained graded prompting scheme can improve the driver's experience during automatic driving while ensuring driving safety, so that the convenience brought by automatic driving is fully exploited.
Optionally, fig. 4 illustrates an overall flowchart of a driving state adjustment method, specifically including the following steps:
step 1: status data of the driver is acquired, wherein the status data is from at least two different types of data sources.
It should be noted that the state data corresponds to human body features of the driver during the automatic driving of the vehicle; such features may include, but are not limited to, physiological state features, eye movement behavior features, body posture features, or any other feature related to the driver's state. The state data must come from two or more types of data sources. For example, if both eye movement behavior and body posture are extracted from camera image data, the two features come from the same data source; if the eye movement data comes from a portable eye-tracking device and the body posture data comes from radar, the two features are considered to come from different data sources. The status data acquisition devices therefore include at least two types of devices, including but not limited to cameras (for the driver's face and pose) and physiological signal detection sensors. The physiological signal detection sensors may include, but are not limited to, a respiratory rate sensor, a skin resistance sensor, a heart rate sensor, a pulse sensor, and the like. In practical applications, the selection and configuration of the data acquisition devices depends on the data required by the driver state assessment models used in the system.
Step 2: the quality of the status data is evaluated.
It should be noted that the driver status data includes, but is not limited to, image data, numerical signals, analog signals, or other similar signals. The quality of image data can be evaluated by a model trained in advance using artificial intelligence, or directly by a traditional image quality evaluation method. The quality evaluation indexes of the various kinds of state data also need to be normalized so that their values fall within the interval [0, 1].
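As an illustration of this normalization step, the following sketch scores image quality by pixel-value variance and physiological-signal quality by stability and continuity (criteria mentioned later in this description), mapping both into [0, 1]. The concrete formulas, the maximum-variance constant and the sampling-period tolerance are assumptions chosen for the example, not values fixed by this application.

```python
import numpy as np

def image_quality(gray_image: np.ndarray, max_var: float = 5000.0) -> float:
    """Score image quality by pixel-value variance, normalized to [0, 1] (assumed metric)."""
    var = float(np.var(gray_image))
    return min(var / max_var, 1.0)

def signal_quality(samples: np.ndarray, timestamps: np.ndarray,
                   expected_dt: float = 0.01) -> float:
    """Score a physiological signal by stability and continuity, in [0, 1] (assumed metric)."""
    # Stability: penalize large sample-to-sample jumps.
    diffs = np.abs(np.diff(samples))
    stability = 1.0 / (1.0 + float(np.mean(diffs)))
    # Continuity: fraction of sampling intervals close to the expected period.
    gaps = np.diff(timestamps)
    continuity = float(np.mean(gaps < 1.5 * expected_dt)) if len(gaps) else 0.0
    return 0.5 * stability + 0.5 * continuity
```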
Step 3: the driver status is evaluated.
It should be noted that the driver state monitoring model (i.e., the classification model in the above embodiment) is a model trained in advance that outputs the take-over level of the driver, and it must be implemented by two or more classification models based on different data sources. The driver state monitoring model is built on a classification algorithm: it identifies the driver's take-over level from the image data or the physiological signal data and outputs the category to which the driver belongs, which is the driver's current take-over level. The take-over level reflects the current take-over capability of the driver, specifically the time the driver needs to switch the driving task back to a manual driving state. The required take-over time is less than 2.6 s for a high take-over level, in the interval from 2.6 s to 6.1 s for a medium take-over level, and more than 6.1 s for a low take-over level. These intervals are exemplary defaults derived from existing research conclusions; in practical applications, suitable intervals can be re-divided according to the actual training data set to define the different take-over levels.
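A minimal sketch of this mapping, using the exemplary default intervals above (the thresholds would be re-divided to fit the actual training data set):

```python
def takeover_level(estimated_takeover_time_s: float) -> str:
    """Map the estimated time needed to resume manual driving to a take-over level."""
    if estimated_takeover_time_s < 2.6:
        return "high"      # driver can take over almost immediately
    if estimated_takeover_time_s <= 6.1:
        return "medium"
    return "low"           # driver needs more than 6.1 s to take over
```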
In an exemplary embodiment, the classification model based on image data may be trained by first extracting eye movement behavior data, such as gaze direction and eye closure, through suitable algorithms and using them as feature inputs. The classification model may also be trained with the image data directly as raw data. The choice of method depends on the classification algorithm chosen.
In an exemplary embodiment, the classification model based on physiological signal data may first extract certain features of the signal through suitable algorithms: based on heart rate data, the standard deviation of normal heart beats (SDNN), the root mean square of successive differences between normal heart beats (RMSSD), and the proportion of successive heart beats differing by more than 50 ms (pNN50) within a data window; based on the galvanic skin data, the number and amplitude of peaks within a data window. The classification model is then trained with these features as inputs. The classification model may also be trained with the signal data directly as raw data. The choice of method depends on the classification algorithm chosen.
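For illustration, the sketch below computes the heart-rate-variability features named above (SDNN, RMSSD, pNN50) from a window of RR intervals and counts skin-conductance peaks; the use of scipy.signal.find_peaks and the 0.05 prominence threshold are assumptions of this example rather than requirements of the application.

```python
import numpy as np
from scipy.signal import find_peaks  # assumed available for peak detection

def hrv_features(rr_ms: np.ndarray) -> dict:
    """SDNN, RMSSD and pNN50 over a window of RR intervals (milliseconds)."""
    diffs = np.diff(rr_ms)
    return {
        "sdnn": float(np.std(rr_ms, ddof=1)),
        "rmssd": float(np.sqrt(np.mean(diffs ** 2))),
        "pnn50": float(np.mean(np.abs(diffs) > 50.0)),   # proportion in [0, 1]
    }

def eda_features(skin_conductance: np.ndarray, min_prominence: float = 0.05) -> dict:
    """Number and mean amplitude of peaks in a window of skin-conductance data."""
    peaks, props = find_peaks(skin_conductance, prominence=min_prominence)
    amplitudes = props["prominences"]
    return {
        "n_peaks": int(len(peaks)),
        "mean_amplitude": float(np.mean(amplitudes)) if len(peaks) else 0.0,
    }
```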
Optionally, driver state monitoring models are established in advance based on classification algorithms; after the image data and the physiological signal data are acquired, they are input into the corresponding driver monitoring models, and each model outputs the take-over state corresponding to its data.
Step 4: and comprehensively evaluating the state of the driver.
Since more than two models are used in step 3 to evaluate the driver's take-over state in order to improve the fault tolerance of the system, the classification results of the multiple models need to be integrated to obtain the aggregated evaluation result of the driver's take-over state. If the results of the classification models are consistent, that result can be directly taken as the aggregated driver take-over state. If the results of the classification models are inconsistent, then for each distinct classification result, the normalized quality evaluation results of all data sources required by the models that produced it are averaged and compared, and the classification result with the higher score is taken as the aggregated driver take-over state. In other words, the normalized data quality evaluation results from step 2 determine which model's classification result is more credible.
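The aggregation rule just described can be expressed compactly as quality-weighted voting; the sketch below assumes each model's normalized data-quality scores are already available from step 2.

```python
def fuse_takeover_states(results: list[str],
                         quality_by_model: list[list[float]]) -> str:
    """Aggregate per-model take-over states using normalized data-quality scores.

    results[i] is the class output by model i; quality_by_model[i] holds the
    normalized quality scores (each in [0, 1]) of the data sources model i used.
    """
    if len(set(results)) == 1:            # all classification results agree
        return results[0]
    # For each candidate class, average the quality scores of all data sources
    # of the models that voted for it, then keep the class with the best score.
    scores = {}
    for cls in set(results):
        qualities = [q for r, qs in zip(results, quality_by_model)
                     if r == cls for q in qs]
        scores[cls] = sum(qualities) / len(qualities)
    return max(scores, key=scores.get)
```

For instance, if an image-based model reports a low take-over level with image quality 0.4 while a physiological-signal model reports a medium level with signal quality 0.8, the medium level is selected.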
Step 5: and acquiring driving environment monitoring data and evaluating the environment complexity.
Acquiring driving environment detection data means that the environment monitoring system senses and detects, through sensors (such as radar, laser range finders, video sensors, etc.), the various driving-related objects in the driving environment, including dynamic entities (such as other vehicles and pedestrians), infrastructure elements (such as traffic lights, lane lines, reflective traffic cones, etc.) and environmental conditions (such as bad weather like rain or snow). The embodiment of the application does not limit the specific environmental complexity evaluation method; for example, the method may accumulate the number of identified objects according to the actual driving scene of the automatic driving vehicle (city, rural, highway, etc.) and assign different weights according to object properties. The more objects are identified, the higher the environmental complexity; objects with hazardous implications (such as animals on the road, warning triangles, and the like) carry higher weights and raise the environmental complexity faster. The environmental complexity is then divided into three levels of high, medium and low, with the specific thresholds depending on the actual driving scene.
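One possible realization of such a weighted accumulation is sketched below; the object weights, weather weights and level thresholds are purely illustrative assumptions and would in practice be tuned per driving scene (city, rural, highway).

```python
# Illustrative only: weights and thresholds are assumptions, not fixed by this application.
OBJECT_WEIGHTS = {
    "vehicle": 1.0, "pedestrian": 2.0, "animal_on_road": 4.0,
    "warning_triangle": 4.0, "traffic_light": 0.5, "lane_line": 0.2,
}
WEATHER_WEIGHTS = {"clear": 0.0, "rain": 3.0, "snow": 5.0}

def environment_complexity(detected_objects: list[str], weather: str,
                           low_to_mid: float = 10.0, mid_to_high: float = 25.0) -> str:
    """Accumulate weighted object counts plus a weather term, then threshold into 3 levels."""
    score = sum(OBJECT_WEIGHTS.get(obj, 1.0) for obj in detected_objects)
    score += WEATHER_WEIGHTS.get(weather, 0.0)
    if score < low_to_mid:
        return "low"
    return "medium" if score < mid_to_high else "high"
```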
Step 6: and acquiring the motion state data of the vehicle.
It should be noted that a simple vehicle motion estimation component may only process IMU and chassis sensor data, while more complex systems may use all or a subset of other inputs to compute additional motion estimates, which are then fused together. The embodiment of the application is not specifically limited in this respect; what is mainly needed is data describing the running state of the own vehicle, such as its speed, acceleration, steering angle, and so on.
Step 7: automatic driving system state data is acquired and vehicle controllable states are evaluated.
It should be noted that the work of this step is generally performed by an operational domain supervision module in the automatic driving system, which ensures that the automatic driving vehicle operates within its operational design domain and other applicable dynamic and static constraints. The automatic driving system status includes, but is not limited to: the integrity of the automatic driving system (whether the technical status of sensors, actuators, support systems and computing units is safe, reliable and available), whether the current driving behavior is within the authorized operating domain, and the likelihood of leaving the design operating domain.
It should be noted that the specific rule used when evaluating the controllable state of the vehicle from the state of the automatic driving system is the first preset rule described above.
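As a hedged illustration of such a first preset rule, the sketch below maps the likelihood of leaving the preset operating domain and the integrity of the automatic driving system to a controllable state; the 0.3/0.7 thresholds and the treatment of a degraded system are assumptions of the example, not the patented rule.

```python
def vehicle_controllable_state(leave_odd_likelihood: float,
                               system_intact: bool) -> str:
    """Assumed rule: closer to the operating-domain boundary means less controllable."""
    if not system_intact or leave_odd_likelihood > 0.7:
        return "low controllable"     # near the operating-domain boundary
    if leave_odd_likelihood > 0.3:
        return "medium controllable"
    return "high controllable"        # near the centre of the operating domain
```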
Step 8: a driver take over capability deviation is determined.
It should be noted that the main task of this step is to compare the difference between the driver take-over capability required by the current driving situation and the driver's actual take-over capability. The actual take-over level is the evaluation result obtained in step 4, while the required take-over capability of the driver is determined by the complexity of the environment and the controllable state of the vehicle (i.e. the vehicle state); the specific rule is the second preset rule described above.
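The second preset rule is not spelled out numerically in this description; the table below is therefore only one plausible encoding of "more complex environment and less controllable vehicle demand a higher take-over level".

```python
# Illustrative second-preset-rule sketch; the actual mapping is configured per system.
REQUIRED_TAKEOVER = {
    # (environment complexity, vehicle controllable state) -> required take-over level
    ("low", "high"): "low",     ("low", "medium"): "low",       ("low", "low"): "medium",
    ("medium", "high"): "low",  ("medium", "medium"): "medium", ("medium", "low"): "high",
    ("high", "high"): "medium", ("high", "medium"): "high",     ("high", "low"): "high",
}

def required_takeover_level(env_complexity: str, vehicle_state: str) -> str:
    """Look up the take-over level the driver should currently have."""
    return REQUIRED_TAKEOVER[(env_complexity, vehicle_state)]
```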
It should be noted that if the actual take-over level is higher than the required take-over level, only the lowest level of intervention is needed to maintain the stability of the driver's take-over level; if the actual take-over level is not higher than the required take-over level, different levels of intervention are required according to the difference. The specific rule is the third preset rule described above.
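The third preset rule can then be read as a comparison of the two levels; the sketch below mirrors the graded scheme used later (first prompt level strongest, third weakest) and is illustrative only.

```python
LEVEL_RANK = {"low": 0, "medium": 1, "high": 2}

def prompt_level(required_level: str, actual_level: str) -> int:
    """Return 1 (strongest prompt), 2 or 3 (weakest prompt)."""
    if LEVEL_RANK[required_level] > LEVEL_RANK[actual_level]:
        return 1   # driver falls short of the required take-over capability
    if LEVEL_RANK[required_level] == LEVEL_RANK[actual_level]:
        return 2   # keep the driver at the current take-over level
    return 3       # driver already exceeds the requirement; minimal intervention
```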
Step 9: the driver state is intervened in a grading manner.
It should be noted that, according to the difference between the required take-over level and the actual take-over level determined in step 8, the driver is intervened with (prompted) through the vehicle-mounted human-computer interaction interface, and the driver's take-over level is adjusted so that the driver pays more attention to tasks related to driving, including driving environment perception, vehicle state monitoring and the like. In practical applications, the response to the state monitoring result can be preconfigured or determined dynamically with the driver according to factors such as the requirements of the specific application scenario, relevant safe-driving regulations, and the driver's own needs. Optionally, various adjustment mechanisms are preset, and intervention schemes of different levels can be set according to prior experimental results, combining user experience and intervention effect. In this embodiment, the driver's state is adjusted by controlling the human-machine interaction interfaces on the vehicle (audio devices, various screens, seats, steering wheel, etc.).
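A sketch of how such graded, multi-channel prompts might be dispatched through the on-board human-machine interfaces is given below; the channel/action table and the hmi.trigger interface are assumptions for illustration, not an interface defined by this application.

```python
# Assumed mapping from prompt level to multi-channel actions; real schemes are
# configured from prior experiments, user experience and safety regulations.
PROMPT_ACTIONS = {
    1: {"visual": "flashing take-over icon", "auditory": "repeated chime",
        "tactile": "steering-wheel vibration", "duration_s": 5.0},
    2: {"visual": "status banner", "auditory": "single chime", "duration_s": 3.0},
    3: {"visual": "subtle status light", "duration_s": 2.0},
}

def execute_prompt(level: int, hmi) -> None:
    """Send every channel action for the given level to a vehicle HMI facade (assumed API)."""
    actions = PROMPT_ACTIONS[level]
    duration = actions["duration_s"]
    for channel, action in actions.items():
        if channel != "duration_s":
            hmi.trigger(channel, action, duration_s=duration)  # hypothetical call
```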
It should be noted that existing driver state monitoring systems mainly target abnormal driving states (such as fatigued driving, sudden illness, drunk driving, and the like); the classification algorithms they use are simple and their data requirements are low. In an automatic driving environment, however, the driver is allowed to perform certain tasks unrelated to driving, so the focus of driver state monitoring shifts to whether the driver is able to take over the driving task in time, which requires richer data and more complex classification models. Moreover, in a real-time running environment data acquisition is somewhat unstable and data quality may fluctuate. To ensure the reliability of the monitoring system, the fault-tolerant driver state monitoring and evaluation system provided by the application avoids single-module failure through multi-source data and heterogeneous redundant classification models, and aggregates the classification results of multiple models based on data quality evaluation, ensuring the reliability and stability of the driver state monitoring and evaluation result. The system thus has strong fault tolerance and is robust against data quality interference and model deviation.
It should be noted that the present application is not limited to the above-described embodiments. Existing intervention schemes are mainly simple alarm-sound prompts; even where the idea of graded intervention exists, it remains a single-mode intervention of varying intensity. Furthermore, since intervention is mostly triggered only by the driver state (e.g. a fatigue state), its purpose is direct: to bring the driver back into the driving state as soon as possible. In automatic driving, the driver may perform tasks unrelated to driving and is not required to keep high attention on the driving task at all times, yet the driver remains the first party responsible for the driving task and still needs to participate in it to a certain extent. Therefore, the driving environment and the vehicle state need to be combined to give the expected level of driver take-over capability, the difference between this expected level and the current driver's take-over capability is judged, and intervention is performed at a fine granularity. The fine-grained intervention grading method provided by the application exploits the convenience of automatic driving and provides a high-quality riding experience while ensuring driving safety.
It should be noted that the present application introduces the idea of redundancy into the driver state monitoring and evaluation system and improves the fault tolerance of the data acquisition end by expanding the variety of state data sources. This redundancy is not simple sensor redundancy (e.g. using two identical cameras to acquire image data) but the use of different data sources (e.g. using a camera and a heart rate sensor to acquire different state data). The former cannot avoid common-mode faults: a sudden change in ambient light, for example, may prevent both cameras from obtaining valid data; the latter, through the diversity of the acquisition devices, avoids all acquired data failing at the same time. Secondly, the classification models are also redundant: the classification models suitable for different data types may differ, and so may the appropriate acceleration chips. Performing state evaluation with different classification models based on different data improves the fault tolerance of the whole system and avoids classification bias caused by a single model. For the problem of how to aggregate the results of multiple classification models, the application essentially provides a weighted voting method that uses the quality of the input data as the weight, based on a simple premise: the higher the quality of the data, the more reliable the model's result.
It should be noted that the present application uses a multi-modal hierarchical intervention scheme to provide continuous intervention that adjusts the driver's take-over state. Existing driver state interventions are mostly designed for traditional driving contexts or for very urgent states, in order to bring the driver back, as quickly as possible, to a state in which the vehicle can be operated normally; such intervention is therefore direct and intrusive. In the automatic driving context, however, the driver is allowed to perform tasks unrelated to driving, so the intervention can be more flexible and guiding while still ensuring driving safety. The multi-modal hierarchical intervention method provided by the application therefore combines the different perceptual characteristics of several sensory channels into a multi-modal combined intervention scheme, and forms a complete intervention scheme together with changes in intervention duration and frequency. The level of the intervention scheme is not determined by the driver state alone: different driving environments and vehicle states require different degrees of driver participation in the driving task, so the environmental complexity, the vehicle state and the driver's take-over state are combined to determine the intervention level, and the state is continuously updated as the intervention proceeds.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The present embodiment also provides a driving state adjusting device, which is used to implement the foregoing embodiments and the preferred embodiments, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the modules described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram of a driving state adjusting apparatus according to an embodiment of the present application, as shown in fig. 5, including:
A first determining module 50, configured to determine a driver state currently located by a driver in an automatic driving vehicle from a plurality of preset driver states, where the preset driver states are used to represent an actual takeover capability of the driver on the automatic driving vehicle;
a second determining module 52, configured to determine an environmental complexity of an environment in which the autonomous vehicle is currently located from a plurality of preset environmental complexities;
a third determining module 54, configured to determine a vehicle state in which the autonomous vehicle is currently located from a plurality of preset vehicle states, where the preset vehicle states are used to represent a probability that the autonomous vehicle is allowed to be controlled;
A fourth determining module 56, configured to determine a target takeover level from a plurality of preset takeover levels according to the environmental complexity and the vehicle state, and determine a target prompt level according to the target takeover level and the driver state, where the preset takeover level is used to indicate a takeover capability of the driver required by the autopilot vehicle in a corresponding driving scenario;
The control module 58 is configured to control the autonomous vehicle to perform a prompting operation according to the target prompting level, where the prompting operation is used to prompt the driver to adjust the driving state.
According to the device, the target take-over level is determined according to the environmental complexity of the environment where the automatic driving vehicle is located and the vehicle state of the automatic driving vehicle, the target prompt level is determined according to the target take-over level and the driver state of the driver, and then the automatic driving vehicle is controlled to execute prompt operation according to the target prompt level, so that the driver is prompted to adjust the driving state, and then the driver can be effectively intervened and reminded, and the driver can experience comfort and convenience brought by the automatic driving as much as possible on the premise of ensuring the driving safety, so that the problem that the reminding effect of the traditional driver state intervening and reminding strategy is poor in the automatic driving scene is solved.
In an exemplary embodiment, the first determining module 50 is further configured to obtain N pieces of state data of the driver, where the N pieces of state data include image data and physiological signal data of the driver, and N is a positive integer greater than or equal to 2; and to determine the driver state in which the driver in the autonomous vehicle is currently located from a plurality of preset driver states according to the N pieces of state data.
In an exemplary embodiment, the first determining module 50 is further configured to determine M classification models, where model inputs of the M classification models are different from each other, and an input of each of the M classification models is part or all of the N state data, and each classification model is configured to determine, according to a corresponding model input, a driver state in which the driver is currently located from the plurality of preset driver states, where M is a positive integer greater than or equal to 2; and determining M classification results of the M classification models according to the N state data, and determining the current driver state of the driver according to the M classification results.
In an exemplary embodiment, the first determining module 50 is further configured to determine the classification result of the ith classification model in the M classification models by: and inputting the state data corresponding to the ith classification model in the N state data into the ith classification model to obtain a classification result of the ith classification model.
In an exemplary embodiment, the first determining module 50 is further configured to determine, when the M classification results are the same and are all the specified driver states, the specified driver states as the driver states in which the driver is currently located; and under the condition that the M classification results are not identical and the M classification results comprise P classification results, determining the credibility corresponding to each classification result in the P classification results, and determining the driver state corresponding to the classification result with the highest credibility in the P classification results as the current driver state of the driver, wherein P is a positive integer greater than or equal to 2 and less than or equal to M.
In an exemplary embodiment, the first determining module 50 is further configured to determine the credibility of the j-th class of classification results in the P-class classification results by: z classification models with the output results of the j-th class classification result are determined from the M classification models; determining the data quality of state data corresponding to each classification model in the Z classification models; and under the condition that the Z groups of state data corresponding to the Z classification models comprise Q pieces of state data, determining the credibility of the j-th classification result according to the data quality of the Q pieces of state data.
In an exemplary embodiment, the first determining module 50 is further configured to determine, in a case where the state data is image data, a variance of pixel values in the image data as a data quality of the state data; and under the condition that the state data are physiological signal data, determining the data quality of the state data according to the signal stability and the signal continuity of the physiological signal data.
In an exemplary embodiment, the second determining module 52 is further configured to determine dynamic entity data, basic equipment element data, and weather data of the environment in which the autonomous vehicle is located, where the dynamic entity data is used to describe dynamic entities in the environment, and the basic equipment element data is used to describe basic equipment elements in the environment; and determine the environmental complexity of the environment in which the autonomous vehicle is currently located from a plurality of preset environmental complexities according to the dynamic entity data, the basic equipment element data and the weather data.
In an exemplary embodiment, the second determining module 52 is further configured to determine a first environmental complexity value according to the number of dynamic entities in the dynamic entity data; determine a second environmental complexity value according to the dynamic entity data and the entity data of a target entity in the basic equipment element data, where the risk coefficient of the target entity is larger than a preset threshold; determine a third environmental complexity value from the weather data; and determine the environmental complexity of the environment in which the autonomous vehicle is currently located from a plurality of preset environmental complexities according to the first environmental complexity value, the second environmental complexity value and the third environmental complexity value.
In an exemplary embodiment, the second determining module 52 is further configured to perform weighted summation on the first environmental complexity value, the second environmental complexity value, and the third environmental complexity value to obtain a target environmental complexity value; and under the condition that the target environment complexity value is located in a target environment complexity value range in a plurality of environment complexity value ranges, determining the target environment complexity corresponding to the target environment complexity value range as the environment complexity of the environment where the automatic driving vehicle is currently located, wherein the plurality of preset environment complexity values have a one-to-one correspondence with the plurality of environment complexity value ranges, and the plurality of environment complexity value ranges have a correspondence with the driving scene of the automatic driving vehicle.
In an exemplary embodiment, the third determining module 54 is further configured to obtain operation data of the vehicle, and determine a likelihood that the driving behavior of the autonomous vehicle leaves a preset operation domain according to the operation data; and determining the current vehicle state of the automatic driving vehicle from a plurality of preset vehicle states according to the possibility and the integrity of an automatic driving system of the automatic driving vehicle.
In an exemplary embodiment, the third determining module 54 is further configured to obtain a first preset rule, where the first preset rule has a correspondence between each of the plurality of preset vehicle states and a likelihood that a driving behavior of the autonomous vehicle leaves a preset operation domain and an integrity of an autonomous system of the autonomous vehicle; based on the first preset rule, determining a vehicle state in which the autonomous vehicle is currently located from a plurality of preset vehicle states according to the possibility and the integrity of an autonomous system of the autonomous vehicle.
In an exemplary embodiment, the fourth determining module 56 is further configured to obtain a second preset rule, where the second preset rule has a correspondence between each of the plurality of preset takeover levels and an environmental complexity and a vehicle state; and determining a target take-over level from a plurality of preset take-over levels according to the environmental complexity and the vehicle state based on the second preset rule.
In an exemplary embodiment, the fourth determining module 56 is further configured to obtain a third preset rule, where the third preset rule is used to indicate a correspondence between different take-over levels and driver states and different prompt levels; and determining a target prompt level according to the target take-over level and the driver state based on the third preset rule.
In an exemplary embodiment, the fourth determining module 56 is further configured to determine a first prompt level as the target prompt level if the target take-over level is higher than the take-over level indicated by the driver state; determine a second prompt level as the target prompt level if the target take-over level is equal to the take-over level indicated by the driver state; and determine a third prompt level as the target prompt level if the target take-over level is lower than the take-over level indicated by the driver state; where the prompt strength corresponding to the first prompt level is higher than the prompt strength corresponding to the second prompt level, and the prompt strength corresponding to the second prompt level is higher than the prompt strength corresponding to the third prompt level.
In an exemplary embodiment, the control module 58 is further configured to obtain a preset prompting rule, where the prompting rule has prompting operations corresponding to different prompting levels, and the prompting levels include the target prompting level; determining a target prompt operation corresponding to the target prompt level based on the prompt rule; and controlling the automatic driving vehicle to execute the target prompt operation, wherein the prompt operation comprises the target prompt operation.
In an exemplary embodiment, the first determining module 50 is further configured to obtain the N status data through different devices.
In one exemplary embodiment, the prompting operation prompts the driver to adjust driving conditions through a dimension of at least one of: visual, auditory, tactile.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; or the above modules may be located in different processors in any combination.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described computer program may be configured to execute the following steps by the computer program:
S1, determining a current driver state of a driver in an automatic driving vehicle from a plurality of preset driver states, wherein the preset driver states are used for reflecting the actual taking-over capability of the driver on the automatic driving vehicle; and
S2, determining the environment complexity of the current environment of the automatic driving vehicle from a plurality of preset environment complexities; and
S3, determining a current vehicle state of the automatic driving vehicle from a plurality of preset vehicle states, wherein the preset vehicle states are used for reflecting the probability that the automatic driving vehicle is allowed to be controlled;
s4, determining a target takeover level from a plurality of preset takeover levels according to the environment complexity and the vehicle state, and determining a target prompt level according to the target takeover level and the driver state, wherein the preset takeover level is used for indicating the takeover capacity of a driver required by the automatic driving vehicle under the corresponding driving situation;
and S5, controlling the automatic driving vehicle to execute a prompt operation according to the target prompt level, wherein the prompt operation is used for prompting the driver to adjust the driving state.
In one exemplary embodiment, the computer readable storage medium may include, but is not limited to: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing a computer program.
An embodiment of the application also provides an electronic device comprising a memory 602 and a processor 604, the memory 602 having stored therein a computer program, the processor 604 being arranged to perform the steps of any of the method embodiments described above by means of the computer program, as shown in fig. 6.
Alternatively, in the present embodiment, the processor 604 may be configured to execute the following steps by a computer program:
S1, determining a current driver state of a driver in an automatic driving vehicle from a plurality of preset driver states, wherein the preset driver states are used for reflecting the actual taking-over capability of the driver on the automatic driving vehicle; and
S2, determining the environment complexity of the current environment of the automatic driving vehicle from a plurality of preset environment complexities; and
S3, determining a current vehicle state of the automatic driving vehicle from a plurality of preset vehicle states, wherein the preset vehicle states are used for reflecting the probability that the automatic driving vehicle is allowed to be controlled;
s4, determining a target takeover level from a plurality of preset takeover levels according to the environment complexity and the vehicle state, and determining a target prompt level according to the target takeover level and the driver state, wherein the preset takeover level is used for indicating the takeover capacity of a driver required by the automatic driving vehicle under the corresponding driving situation;
and S5, controlling the automatic driving vehicle to execute a prompt operation according to the target prompt level, wherein the prompt operation is used for prompting the driver to adjust the driving state.
Specific examples in this embodiment may refer to the examples described in the foregoing embodiments and the exemplary implementation, and this embodiment is not described herein.
Alternatively, it will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 6 is merely illustrative, and that fig. 6 is not intended to limit the configuration of the electronic device described above. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in FIG. 6, or have a different configuration than shown in FIG. 6.
The memory 602 may be used to store software programs and modules, such as program instructions/modules corresponding to the driving state adjustment method and apparatus in the embodiments of the present application, and the processor 604 executes the software programs and modules stored in the memory 602 to perform various functional applications and data processing, that is, implement the driving state adjustment method described above. The memory 602 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 602 may further include memory located remotely from processor 604, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 602 may be used to store, but is not limited to, information such as system configuration files. As an example, as shown in fig. 6, the memory 602 may include, but is not limited to, the first determining module 50, the second determining module 52, the third determining module 54, the fourth determining module 56, and the control module 58 in the adjustment device for driving state. In addition, other module units in the driving state adjusting device may be included, but are not limited to, and are not described in detail in this example.
Optionally, the transmission device 606 is used to receive or transmit data via a network. Specific examples of the network described above may include wired networks and wireless networks. In one example, the transmission device 606 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices and routers via a network cable to communicate with the internet or a local area network. In one example, the transmission device 606 is a Radio Frequency (RF) module for communicating wirelessly with the internet.
In addition, the electronic device further includes: a display 608; and a connection bus 610 for connecting the respective module parts in the above-described electronic device.
In other embodiments, the electronic device may be a node in a distributed system, where the distributed system may be a blockchain system formed by connecting a plurality of nodes through network communication. The nodes may form a Peer-To-Peer (P2P) network, and any form of computing device, such as a server or a terminal, may become a node in the blockchain system by joining the peer-to-peer network.
Embodiments of the application also provide a computer program product comprising a computer program which, when executed by a processor, implements the steps of any of the method embodiments described above.
Embodiments of the present application also provide another computer program product comprising a non-volatile computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the method embodiments described above.
Embodiments of the present application also provide a computer program comprising computer instructions stored in a computer-readable storage medium; the processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the steps of any of the method embodiments described above.
It will be appreciated by those skilled in the art that the modules or steps of the application described above may be implemented in a general purpose computing device, they may be concentrated on a single computing device, or distributed across a network of computing devices, they may be implemented in program code executable by computing devices, so that they may be stored in a storage device for execution by computing devices, and in some cases, the steps shown or described may be performed in a different order than that shown or described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps of them may be fabricated into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principle of the present application should be included in the protection scope of the present application.

Claims (18)

1. A driving state adjusting method is characterized in that,
Comprising the following steps:
Determining a driver state of an automatic driving vehicle in which a driver is currently located from a plurality of preset driver states, wherein the preset driver states are used for reflecting the actual taking over capability of the driver on the automatic driving vehicle; and
Determining the environmental complexity of the environment in which the autonomous vehicle is currently located from a plurality of preset environmental complexities; and
Determining a vehicle state of the automatic driving vehicle from a plurality of preset vehicle states, wherein the preset vehicle states are used for reflecting the probability that the automatic driving vehicle is allowed to be controlled;
Determining a target takeover level from a plurality of preset takeover levels according to the environment complexity and the vehicle state, and determining a target prompt level according to the target takeover level and the driver state, wherein the preset takeover level is used for indicating the takeover capability of a driver required by the automatic driving vehicle under a corresponding driving situation;
Controlling the automatic driving vehicle to execute a prompt operation according to the target prompt level, wherein the prompt operation is used for prompting the driver to adjust driving states;
Wherein the determining, from a plurality of preset driver states, a driver state in which a driver in the autonomous vehicle is currently located includes: acquiring N pieces of state data of the driver, wherein the N pieces of state data comprise image data and physiological signal data of the driver, and N is a positive integer greater than or equal to 2; and determining, according to the N pieces of state data, the driver state in which the driver in the automatic driving vehicle is currently located from a plurality of preset driver states;
Wherein the determining, according to the N state data, a driver state in which a driver in the automatically driven vehicle is currently located from a plurality of preset driver states includes: determining M classification models, wherein model inputs of the M classification models are different from each other, the inputs of each classification model in the M classification models are part or all of the N state data, and each classification model is used for determining a driver state in which the driver is currently located from the plurality of preset driver states according to the corresponding model inputs, wherein M is a positive integer greater than or equal to 2; m classification results of the M classification models are determined according to the N state data, and the current driver state of the driver is determined according to the M classification results;
The determining the environmental complexity of the current environment of the automatic driving vehicle from a plurality of preset environmental complexities comprises the following steps: determining dynamic entity data, basic equipment element data and weather data of an environment in which the automatic driving vehicle is located, wherein the dynamic entity data is used for describing dynamic entities in the environment, and the basic equipment element data is used for describing basic equipment elements in the environment; and determining, according to the dynamic entity data, the basic equipment element data and the weather data, the environmental complexity of the current environment of the automatic driving vehicle from a plurality of preset environmental complexities;
The determining, according to the dynamic entity data, the basic equipment element data and the weather data, the environmental complexity of the environment in which the autonomous vehicle is currently located from a plurality of preset environmental complexities includes: determining a first environmental complexity value according to the number of dynamic entities in the dynamic entity data; determining a second environmental complexity value according to the dynamic entity data and the entity data of a target entity in the basic equipment element data, wherein the risk coefficient of the target entity is larger than a preset threshold; determining a third environmental complexity value from the weather data; and determining the environmental complexity of the environment in which the automatic driving vehicle is currently located from a plurality of preset environmental complexities according to the first environmental complexity value, the second environmental complexity value and the third environmental complexity value.
2. The method according to claim 1, characterized in that,
The determining M classification results of the M classification models according to the N state data includes:
Determining the classification result of the ith classification model in the M classification models by the following method to determine M classification results of the M classification models:
And inputting the state data corresponding to the ith classification model in the N state data into the ith classification model to obtain a classification result of the ith classification model.
3. The method according to claim 1, characterized in that,
The determining, according to the M classification results, a driver state in which the driver is currently located includes:
under the condition that the M classification results are the same and are all the designated driver states, determining the designated driver states as the driver states of the drivers at present;
And under the condition that the M classification results are not identical and the M classification results comprise P classification results, determining the credibility corresponding to each classification result in the P classification results, and determining the driver state corresponding to the classification result with the highest credibility in the P classification results as the current driver state of the driver, wherein P is a positive integer greater than or equal to 2 and less than or equal to M.
4. The method according to claim 3, characterized in that,
The determining the corresponding credibility of each classification result in the P-class classification results comprises the following steps:
the credibility of the j-th class classification result in the P-class classification results is determined by the following steps:
Z classification models with the output results of the j-th class classification result are determined from the M classification models;
Determining the data quality of state data corresponding to each classification model in the Z classification models;
And under the condition that the Z groups of state data corresponding to the Z classification models comprise Q pieces of state data, determining the credibility of the j-th classification result according to the data quality of the Q pieces of state data.
5. The method according to claim 4, characterized in that,
The determining the data quality of the state data corresponding to each of the Z classification models includes:
determining a variance of pixel values in the image data as a data quality of the state data in the case that the state data is image data;
and under the condition that the state data are physiological signal data, determining the data quality of the state data according to the signal stability and the signal continuity of the physiological signal data.
6. The method according to claim 1, characterized in that,
The determining the environmental complexity of the current environment of the automatic driving vehicle according to the first environmental complexity value, the second environmental complexity value and the third environmental complexity value from a plurality of preset environmental complexities includes:
Carrying out weighted summation on the first environmental complexity value, the second environmental complexity value and the third environmental complexity value to obtain a target environmental complexity value;
And under the condition that the target environment complexity value is located in a target environment complexity value range in a plurality of environment complexity value ranges, determining the target environment complexity corresponding to the target environment complexity value range as the environment complexity of the environment where the automatic driving vehicle is currently located, wherein the plurality of preset environment complexity values have a one-to-one correspondence with the plurality of environment complexity value ranges, and the plurality of environment complexity value ranges have a correspondence with the driving scene of the automatic driving vehicle.
7. The method according to claim 1, characterized in that,
The determining, from a plurality of preset vehicle states, a current vehicle state of the autonomous vehicle includes:
Acquiring running data of the vehicle, and determining the possibility that the driving behavior of the automatic driving vehicle leaves a preset operation area according to the running data;
And determining the current vehicle state of the automatic driving vehicle from a plurality of preset vehicle states according to the possibility and the integrity of an automatic driving system of the automatic driving vehicle.
8. The method according to claim 7, characterized in that,
The determining, from a plurality of preset vehicle states, a vehicle state in which the autonomous vehicle is currently located according to the likelihood and the integrity of an autonomous system of the autonomous vehicle, including:
Acquiring a first preset rule, wherein the first preset rule has a corresponding relation between each preset vehicle state in the plurality of preset vehicle states and the possibility that the driving behavior of the automatic driving vehicle leaves a preset operation area and the integrity of an automatic driving system of the automatic driving vehicle;
Based on the first preset rule, determining a vehicle state in which the autonomous vehicle is currently located from a plurality of preset vehicle states according to the possibility and the integrity of an autonomous system of the autonomous vehicle.
9. The method according to claim 1, characterized in that,
The determining a target take-over level from a plurality of preset take-over levels according to the environmental complexity and the vehicle state comprises:
Acquiring a second preset rule, wherein the second preset rule has a corresponding relation between each preset takeover level in the plurality of preset takeover levels and the environment complexity and the vehicle state;
and determining a target take-over level from a plurality of preset take-over levels according to the environmental complexity and the vehicle state based on the second preset rule.
10. The method according to claim 1, characterized in that,
The determining a target prompt level according to the target take-over level and the driver state comprises:
acquiring a third preset rule, wherein the third preset rule is used for indicating the corresponding relation between different take-over levels and the states of the driver and different prompt levels;
And determining a target prompt level according to the target take-over level and the driver state based on the third preset rule.
11. The method according to claim 1, characterized in that,
The determining a target prompt level according to the target take-over level and the driver state comprises:
Determining a first prompt level as the target prompt level if the target take-over level is higher than the take-over level indicated by the driver status;
determining a second prompt level as the target prompt level if the target take-over level is equal to the take-over level indicated by the driver status;
determining a third prompt level as the target prompt level if the target take-over level is lower than the take-over level indicated by the driver status;
the prompt strength corresponding to the first prompt level is higher than the prompt strength corresponding to the second prompt level, and the prompt strength corresponding to the second prompt level is higher than the prompt strength corresponding to the third prompt level.
12. The method according to claim 1, characterized in that,
The controlling the automatic driving vehicle to execute the prompt operation according to the target prompt level comprises the following steps:
Obtaining a preset prompting rule, wherein prompting operations corresponding to different prompting grades are arranged in the prompting rule, and the prompting grades comprise the target prompting grade;
Determining a target prompt operation corresponding to the target prompt level based on the prompt rule;
and controlling the automatic driving vehicle to execute the target prompt operation, wherein the prompt operation comprises the target prompt operation.
13. The method according to claim 1, characterized in that,
The obtaining the N state data of the driver includes:
And acquiring the N pieces of state data through different devices.
14. The method according to claim 1, characterized in that,
The prompting operation prompts the driver to adjust driving conditions through a dimension of at least one of: visual, auditory, tactile.
15. A driving state adjusting device is characterized in that,
Comprising the following steps:
A first determining module, configured to determine a driver state in which a driver in an autonomous vehicle is currently located from a plurality of preset driver states, where the preset driver states are used to represent an actual takeover capability of the driver for the autonomous vehicle;
The second determining module is used for determining the environment complexity of the environment where the automatic driving vehicle is currently located from a plurality of preset environment complexities;
a third determining module, configured to determine a vehicle state in which the autonomous vehicle is currently located from a plurality of preset vehicle states, where the preset vehicle states are used to represent a probability that the autonomous vehicle is allowed to be controlled;
A fourth determining module, configured to determine a target takeover level from a plurality of preset takeover levels according to the environmental complexity and the vehicle state, and determine a target prompt level according to the target takeover level and the driver state, where the preset takeover level is used to indicate a takeover capability of the driver required by the autopilot vehicle in a corresponding driving scenario;
The control module is used for controlling the automatic driving vehicle to execute a prompt operation according to the target prompt level, wherein the prompt operation is used for prompting the driver to adjust the driving state;
The first determining module is further configured to obtain N pieces of state data of the driver, where the N pieces of state data include image data and physiological signal data of the driver, and N is a positive integer greater than or equal to 2; and determine, according to the N pieces of state data, the driver state in which the driver of the autonomous vehicle is currently located from the plurality of preset driver states;
The first determining module is further configured to determine M classification models, where model inputs of the M classification models are different from each other, an input of each classification model of the M classification models is part or all of the N pieces of state data, and each classification model is configured to determine, according to a corresponding model input, a driver state in which the driver is currently located from the plurality of preset driver states, where M is a positive integer greater than or equal to 2; determine M classification results of the M classification models according to the N pieces of state data; and determine the driver state in which the driver is currently located according to the M classification results;
The second determining module is further configured to determine dynamic entity data, basic equipment element data and weather data of an environment in which the autonomous vehicle is located, where the dynamic entity data is used for describing a dynamic entity in the environment, and the basic equipment element data is used for describing a basic equipment element in the environment; and determine, according to the dynamic entity data, the basic equipment element data and the weather data, the environment complexity of the environment in which the autonomous vehicle is currently located from the plurality of preset environment complexities;
The second determining module is further configured to determine a first environmental complexity value according to the number of dynamic entities in the dynamic entity data; determine a second environmental complexity value according to the dynamic entity data and entity data of a target entity in the basic equipment element data, where a risk coefficient of the target entity is greater than a preset threshold; determine a third environmental complexity value according to the weather data; and determine the environment complexity of the environment in which the autonomous vehicle is currently located from the plurality of preset environment complexities according to the first environmental complexity value, the second environmental complexity value and the third environmental complexity value.
16. A computer-readable storage medium, characterized in that,
The computer readable storage medium has stored therein a computer program, wherein the computer program when executed by a processor implements the steps of the method of any of claims 1 to 14.
17. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that,
The processor, when executing the computer program, implements the steps of the method as claimed in any one of claims 1 to 14.
18. A computer program product comprising a computer program, characterized in that,
the computer program, when executed by a processor, implements the steps of the method as claimed in any one of claims 1 to 14.
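As an informal illustration of the decision logic in claims 10 to 12, the following Python sketch compares the target take-over level with the take-over level indicated by the driver state and looks up a prompt operation in a preset prompt rule. The enum values, the numeric encoding of the prompt levels and the contents of the rule table are assumptions made for readability and are not part of the claimed method.

from enum import IntEnum

class TakeOverLevel(IntEnum):
    # Illustrative take-over levels; a higher value means more take-over capability is required.
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Hypothetical preset prompt rule (claim 12): prompt operations keyed by prompt level.
PROMPT_RULE = {
    1: "visual + auditory + tactile alert",  # first prompt level, strongest prompt strength
    2: "visual + auditory alert",            # second prompt level
    3: "visual cue only",                    # third prompt level, weakest prompt strength
}

def determine_prompt_level(target_take_over: TakeOverLevel,
                           driver_indicated: TakeOverLevel) -> int:
    # Claim 11: compare the target take-over level with the level indicated by the driver state.
    if target_take_over > driver_indicated:
        return 1  # driver is less ready than the scenario requires -> strongest prompt
    if target_take_over == driver_indicated:
        return 2
    return 3      # driver is more ready than the scenario requires -> weakest prompt

def determine_prompt_operation(target_take_over: TakeOverLevel,
                               driver_indicated: TakeOverLevel) -> str:
    # Claim 12: look up the prompt operation for the target prompt level in the preset rule.
    return PROMPT_RULE[determine_prompt_level(target_take_over, driver_indicated)]

For example, determine_prompt_operation(TakeOverLevel.HIGH, TakeOverLevel.LOW) returns the strongest, multimodal prompt, corresponding to the visual, auditory and tactile dimensions mentioned in claim 14.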
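The first determining module of claim 15 fuses the outputs of M classification models, each of which consumes part or all of the N pieces of state data. The sketch below assumes a simple majority vote as the fusion rule; the claims only require that the M classification results be combined, so the voting strategy, the key names and the helper types are illustrative assumptions.

from collections import Counter
from typing import Callable, Dict, List, Sequence

# A classifier maps its own slice of the N pieces of state data to a preset driver state label.
Classifier = Callable[[Dict[str, object]], str]

def classify_driver_state(state_data: Dict[str, object],
                          classifiers: Sequence[Classifier],
                          inputs_per_model: Sequence[List[str]]) -> str:
    # state_data holds the N pieces of state data keyed by name (e.g. "image", "physiological");
    # inputs_per_model[i] names the keys fed to classifier i, so the M model inputs differ from each other.
    results = []
    for model, keys in zip(classifiers, inputs_per_model):
        model_input = {k: state_data[k] for k in keys}
        results.append(model(model_input))
    # Fuse the M classification results into a single preset driver state (majority vote assumed).
    return Counter(results).most_common(1)[0][0]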
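The second determining module of claim 15 derives three complexity values (dynamic entities, high-risk target entities, weather) and maps their combination onto one of the preset environment complexities. The weights, normalising constants and cut-off thresholds below are assumptions chosen only to make the data flow concrete; the claim does not prescribe how the three values are combined.

def determine_environment_complexity(num_dynamic_entities: int,
                                     num_high_risk_entities: int,
                                     weather_severity: float) -> str:
    # First environmental complexity value: number of dynamic entities (vehicles, pedestrians, ...).
    first = min(num_dynamic_entities / 20.0, 1.0)
    # Second environmental complexity value: entities whose risk coefficient exceeds a preset threshold.
    second = min(num_high_risk_entities / 5.0, 1.0)
    # Third environmental complexity value: severity derived from the weather data, clamped to [0, 1].
    third = max(0.0, min(weather_severity, 1.0))

    # Combine the three values and map the result to a preset environment complexity.
    score = 0.4 * first + 0.4 * second + 0.2 * third
    if score < 0.33:
        return "low complexity"
    if score < 0.66:
        return "medium complexity"
    return "high complexity"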
CN202410379844.5A 2024-03-29 2024-03-29 Driving state adjusting method and device, storage medium, electronic device and computer program product Active CN117962901B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410379844.5A CN117962901B (en) 2024-03-29 2024-03-29 Driving state adjusting method and device, storage medium, electronic device and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410379844.5A CN117962901B (en) 2024-03-29 2024-03-29 Driving state adjusting method and device, storage medium, electronic device and computer program product

Publications (2)

Publication Number Publication Date
CN117962901A CN117962901A (en) 2024-05-03
CN117962901B (en) 2024-05-28

Family

ID=90848247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410379844.5A Active CN117962901B (en) 2024-03-29 2024-03-29 Driving state adjusting method and device, storage medium, electronic device and computer program product

Country Status (1)

Country Link
CN (1) CN117962901B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6227862B1 (en) * 1999-02-12 2001-05-08 Advanced Drivers Education Products And Training, Inc. Driver training system
CN111915159A (en) * 2020-07-15 2020-11-10 北方工业大学 Personalized takeover early warning method and system based on dynamic time budget
CN114169755A (en) * 2021-12-07 2022-03-11 华东交通大学 Driver takeover capability evaluation and alarm method, system, equipment and medium
CN115447589A (en) * 2022-09-30 2022-12-09 重庆交通大学 Takeover success probability prediction and intervention effect evaluation method under man-machine common driving condition
CN117325872A (en) * 2023-11-16 2024-01-02 大连理工大学 Automatic driving vehicle driver takes over cognitive behavior prediction system
CN117666785A (en) * 2023-11-28 2024-03-08 山东大学 Automatic driving man-machine interaction takeover training method and system based on digital twinning
CN117755329A (en) * 2023-08-29 2024-03-26 杭州电子科技大学 Method and system for taking over opportunity and type based on driver situational awareness

Also Published As

Publication number Publication date
CN117962901A (en) 2024-05-03

Similar Documents

Publication Publication Date Title
US11709488B2 (en) Manual control re-engagement in an autonomous vehicle
US11787417B2 (en) Assessing driver ability to operate an autonomous vehicle
JP6911841B2 (en) Image processing device, image processing method, and moving object
US20200057487A1 (en) Methods and systems for using artificial intelligence to evaluate, correct, and monitor user attentiveness
US9747812B2 (en) Saliency based awareness modeling
US20180099679A1 (en) Apparatus and Method for Controlling a User Situation Awareness Modification of a User of a Vehicle, and a User Situation Awareness Modification Processing System
US9101313B2 (en) System and method for improving a performance estimation of an operator of a vehicle
US20210001864A1 (en) Systems And Methods For Detecting And Dynamically Mitigating Driver Fatigue
US20130325202A1 (en) Neuro-cognitive driver state processing
CN111915159B (en) Personalized take-over early warning method and system based on dynamic time budget
US11447140B2 (en) Cognitive tunneling mitigation device for driving
US10528047B1 (en) Method and system for monitoring user activity
CN112406882A (en) Device for monitoring state of driver in man-machine co-driving process and method for evaluating pipe connection capability
CN113491519A (en) Digital assistant based on emotion-cognitive load
Rezaei et al. Simultaneous analysis of driver behaviour and road condition for driver distraction detection
Rong et al. Artificial intelligence methods in in-cabin use cases: A survey
US11772674B2 (en) Systems and methods for increasing the safety of voice conversations between drivers and remote parties
KR20150066308A (en) Apparatus and method for determining driving condition of deiver
CN117962901B (en) Driving state adjusting method and device, storage medium, electronic device and computer program product
WO2021262166A1 (en) Operator evaluation and vehicle control based on eyewear data
DE112019007484T5 (en) INFORMATION PROCESSING DEVICE, PROGRAM AND INFORMATION PROCESSING METHOD
WO2022172724A1 (en) Information processing device, information processing method, and information processing program
Basu et al. Using Facial Analysis to Combat Distracted Driving in Autonomous Vehicles
Felix et al. Review of Driver Drowsiness Detection System
Rastegar AI-Based Systems for Autonomous Vehicle Driver Monitoring and Alertness

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant