CN114771559A - Vehicle human-computer interaction method, device and system - Google Patents

Vehicle human-computer interaction method, device and system

Info

Publication number
CN114771559A
Authority
CN
China
Prior art keywords
driver
state
vehicle
human
computer interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210677459.XA
Other languages
Chinese (zh)
Inventor
张峥
胡益波
张如高
虞正华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Moshi Intelligent Technology Co ltd
Original Assignee
Suzhou Moshi Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Moshi Intelligent Technology Co ltd filed Critical Suzhou Moshi Intelligent Technology Co ltd
Priority to CN202210677459.XA
Publication of CN114771559A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/08: Interaction between the driver and the control system
    • B60W 50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W 50/16: Tactile feedback to the driver, e.g. vibration or force feedback to the driver on the steering wheel or the accelerator pedal
    • B60W 60/00: Drive control systems specially adapted for autonomous road vehicles
    • B60W 60/005: Handover processes
    • B60W 60/0053: Handover processes from vehicle to occupant
    • B60W 60/0059: Estimation of the risk associated with autonomous or manual driving, e.g. situation too complex, sensor failure or driver incapacity
    • B60W 2540/00: Input parameters relating to occupants
    • B60W 2540/221: Physiology, e.g. weight, heartbeat, health or special needs
    • B60W 2540/229: Attention level, e.g. attentive to driving, reading or sleeping

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a human-computer interaction method, device and system for a vehicle. The method comprises: detecting a driver state and a takeover request state of the vehicle while the vehicle is in an automatic driving state, wherein the takeover request state characterizes the urgency with which the driver is requested to take over the vehicle; taking the detected driver state and takeover request state as a target combination and determining the target human-computer interaction strategy corresponding to that combination, wherein different combinations of driver state and takeover request state have respectively corresponding human-computer interaction strategies; and controlling the human-computer interaction devices on the vehicle to operate according to the target strategy so as to remind the driver to take over the vehicle. Driving safety of the vehicle can thereby be improved.

Description

Vehicle human-computer interaction method, device and system
Technical Field
The invention relates to the technical field of automatic driving, and in particular to a human-computer interaction method, device and system for a vehicle.
Background
Currently, automatic driving systems are classified into five levels, L1 to L5, according to their degree of automation. An L3-level system provides conditional automation: once engaged, if the driving environment satisfies the system's designed operating conditions, the system can detect and respond on its own to targets and events in that environment. When the driving environment no longer satisfies those conditions (for example, heavy rain prevents the vehicle's visual perception system from accurately identifying objects ahead), the automatic driving system reminds the driver to take over the vehicle.
Disclosure of Invention
In view of this, embodiments of the present invention provide a human-computer interaction method, a human-computer interaction device, a human-computer interaction system, and a computer-readable storage medium for a vehicle, which can improve the driving safety of the vehicle.
The invention provides a human-computer interaction method for a vehicle, the method comprising the following steps:
detecting a driver state and a takeover request state of the vehicle when the vehicle is in an automatic driving state, wherein the takeover request state represents the urgency with which the driver is requested to take over the vehicle;
taking the detected driver state and the takeover request state as a target combination, and determining a target human-computer interaction strategy corresponding to the target combination, wherein different combinations of the driver state and the takeover request state have respectively corresponding human-computer interaction strategies; and
controlling human-computer interaction equipment on the vehicle to operate according to the target human-computer interaction strategy, so as to remind the driver to take over the vehicle.
In some embodiments, the driver state comprises a fatigue state of the driver, wherein the fatigue state of the driver is detected based on the following method:
acquiring a time domain heart rate signal of the driver, and performing a Fourier transform on the time domain heart rate signal to obtain the corresponding frequency domain heart rate signal; and
calculating a frequency ratio of the low frequency to the high frequency in the frequency domain heart rate signal, and determining the fatigue state of the driver according to the frequency ratio.
In some embodiments, said determining the fatigue state of the driver from the frequency ratio comprises:
and determining the fatigue state of the driver according to the ratio range to which the frequency ratio belongs, wherein the ratio range is determined based on an analysis method of human factors engineering, and each ratio range has a corresponding fatigue state.
In some embodiments, the driver state comprises a state of attention focus of the driver, wherein the state of attention focus of the driver is detected based on:
acquiring image data captured of the driver; and
analyzing the image data to determine an eye gaze area of the driver, and determining the attention focusing state of the driver according to the eye gaze area.
In some embodiments, the determining the driver's state of concentration from the eye gaze region comprises:
and determining the attention focusing state of the driver according to the area range to which the eyeball watching area belongs, wherein the area range is determined based on an analysis method of human factors engineering, and each area range has a corresponding attention focusing state.
In some embodiments, the driver state comprises a driving state of the driver, the driving state characterizing the driver's intervention in driving the vehicle, wherein the driving state is determined based on the following method:
detecting the current position of a target component of the vehicle, the target component being a vehicle component operated by the driver while driving; and
determining the driving state of the driver according to the current position of the target component.
In some embodiments, the determining a human-computer interaction policy corresponding to the target combination includes:
and inquiring a human-computer interaction strategy corresponding to the target combination in a pre-established human-computer interaction strategy table by taking the target combination as an inquiry condition, wherein the human-computer interaction strategy table is used for storing human-computer interaction strategies respectively corresponding to different combinations of the driver state and the takeover request state.
The invention also provides a human-computer interaction device, the device comprising:
a detection module, configured to detect a driver state and a takeover request state of a vehicle when the vehicle is in an automatic driving state, wherein the takeover request state represents the urgency with which the driver is requested to take over the vehicle;
a decision module, configured to take the detected driver state and the takeover request state as a target combination and determine a target human-computer interaction strategy corresponding to the target combination, wherein different combinations of the driver state and the takeover request state have respectively corresponding human-computer interaction strategies; and
a control module, configured to control the corresponding human-computer interaction equipment on the vehicle to operate according to the target human-computer interaction strategy, so as to remind the driver to take over the vehicle.
In another aspect, the present invention also provides a computer-readable storage medium for storing a computer program, which when executed by a processor implements the method as described above.
In another aspect, the present invention also provides a human-computer interaction system, which includes a processor and a memory, where the memory is used to store a computer program, and the computer program implements the method as described above when executed by the processor.
In some embodiments of the application, the detected driver state and takeover request state of the vehicle are treated as a target combination, and the corresponding human-computer interaction devices on the vehicle are controlled to operate according to the human-computer interaction strategy for that combination. The driver can thus be effectively reminded to take over when the vehicle switches from automatic driving to manual driving, which prevents the situation in which a fatigued or otherwise inattentive driver misses the takeover request and fails to take over the vehicle in time, and thereby improves the driving safety of the vehicle during the switch.
Drawings
The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are schematic and are not to be understood as limiting the invention in any way, and in which:
FIG. 1 illustrates a block diagram of a human-computer interaction system provided by an embodiment of the present application;
FIG. 2 is a flow chart illustrating a method for human-computer interaction of a vehicle according to an embodiment of the present application;
FIG. 3 shows a schematic flow chart for detecting a driver fatigue state provided by an embodiment of the present application;
FIG. 4 illustrates a schematic view of the area division in front of the driver provided by an embodiment of the present application;
FIG. 5 illustrates a flow chart for detecting a driver attentiveness state provided by an embodiment of the present application;
FIG. 6 shows a block diagram of a human-computer interaction device provided by an embodiment of the present application;
FIG. 7 shows a schematic diagram of a human-computer interaction system provided by an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art on the basis of the described embodiments without inventive effort fall within the scope of protection of the present invention.
Referring to FIG. 1, a block diagram of a human-computer interaction system according to an embodiment of the present application is shown. The human-computer interaction system may be mounted on a vehicle having an automatic driving function. As shown in FIG. 1, the system may include sensing devices (e.g., a heart rate sensor, cameras, radar), a human-computer interaction controller, an automatic driving domain controller, and human-computer interaction devices (e.g., an indicator light, a vibration motor, a speaker, an in-vehicle central control screen).
In some embodiments, the heart rate sensor may be a wearable device or a device integrated into the steering wheel. The heart rate sensor may be used to detect the heart rate of the driver and output a corresponding time domain heart rate signal. The attention detection camera may be used to capture and output image data of the driver. The component position sensor may be configured to detect a current position of a target component on the vehicle and output a corresponding position signal. The target component may be a vehicle component that is operated by the driver while the vehicle is driving, including an accelerator pedal, a brake pedal, a steering wheel, and a seat belt of the vehicle.
In some embodiments, the forward-looking camera, millimeter wave radar and lidar may detect objects and events in the vehicle driving environment and output corresponding detection results, including image and video data from the forward-looking camera and point cloud data from the millimeter wave radar or the lidar. The automatic driving domain controller can analyze these detection results, judge whether the driving environment satisfies the designed operating environment of the automatic driving system, and determine from that judgment whether the driver needs to take over the vehicle.
In some embodiments, the indicator light, the vibration motor, the speaker and the vehicle-mounted central control screen can be used as a man-machine interaction device for prompting the driver to take over the vehicle. The human-computer interaction controller can control at least part of human-computer interaction equipment to work according to the state of the driver and the takeover request state of the vehicle so as to prompt the driver.
Based on the human-computer interaction system in fig. 1, the application provides a human-computer interaction method for a vehicle, which can improve the driving safety of the vehicle in the process of switching the vehicle from automatic driving to manual driving. The man-machine interaction method can be applied to the man-machine interaction system in fig. 1. Please refer to fig. 2, which is a flowchart illustrating a vehicle human-computer interaction method according to an embodiment of the present disclosure. The human-computer interaction method includes steps S21 to S23.
In step S21, when the vehicle is in the automatic driving state, the driver state and the takeover request state of the vehicle are detected.
The driver state will be described first.
In some embodiments, the driver state may include a fatigue state, an attentive state, and a driving state of the driver.
The fatigue state detection of the driver will be described below. Please refer to fig. 3, which is a flowchart illustrating a process for detecting a fatigue state of a driver according to an embodiment of the present disclosure.
In some embodiments, a time domain heart rate signal of the driver may be acquired first and subjected to a Fourier transform to obtain the corresponding frequency domain heart rate signal. The ratio (Ratio) of the low frequency to the high frequency in the frequency domain heart rate signal may then be calculated, and the driver's fatigue state determined from this ratio. Specifically, the time domain heart rate signal may be transformed according to the following expression:
$$X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi k n / N}, \qquad k = 0, 1, \ldots, N-1$$

where $X(k)$ denotes the frequency domain heart rate signal sequence resulting from the transform, $x(n)$ denotes the sequence obtained after discretization of the time domain heart rate signal, and $N$ denotes the sampling length.
Then, according to a preset low-frequency range and a preset high-frequency range, frequencies of the transformed frequency domain heart rate signal that fall in the low-frequency range are taken as the low frequency, and frequencies that fall in the high-frequency range are taken as the high frequency. In some embodiments, the low-frequency range is 0.04 Hz to 0.14 Hz and the high-frequency range is 0.15 Hz to 0.4 Hz.
Further, the fatigue state of the driver can be determined from the range into which the low-frequency to high-frequency Ratio falls. For example, in FIG. 3, the driver is determined to be awake when the Ratio is greater than 3; to be in a light fatigue state when the Ratio is at least 2 and at most 3; and to be in a severe fatigue state when the Ratio is at least 0 and below 2. In some embodiments, the ratio ranges may be determined based on human factors engineering analysis, each range having a corresponding fatigue state. Determining the ranges in this way makes the resulting estimate of the driver's fatigue more accurate and reliable than simple vision-based blink detection.
In this embodiment, the fatigue state of the driver may be represented by a variable DDS, and different values of the variable DDS represent different fatigue degrees of the driver. Specifically, when the DDS value is 0, the driver is in a waking state; when the DDS value is 1, the driver is in a light fatigue state; when the DDS value is 2, the driver is in a severe fatigue state.
In some embodiments, the driver's time domain heart rate signal may further be divided into segments by duration, for example into two-minute segments. A Fourier transform is then performed on each segment to obtain its frequency domain heart rate signal, and the fatigue state is determined from each of these. In this way the driver's fatigue state can be tracked: in effect, it is re-detected at regular intervals, for example every 2 minutes.
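As a concrete illustration of this detection pipeline, the following is a minimal Python sketch. It assumes a uniformly sampled, preprocessed time domain heart rate signal and interprets the Ratio as a band power ratio; the sampling-rate handling, the power interpretation, and all function names are assumptions for illustration rather than specifics from the patent. The frequency bands and Ratio thresholds are the ones given above.

```python
import numpy as np

def detect_fatigue_state(hr_signal: np.ndarray, fs: float) -> int:
    """Return the DDS fatigue code: 0 = awake, 1 = light fatigue, 2 = severe fatigue."""
    n = len(hr_signal)
    spectrum = np.fft.rfft(hr_signal)        # frequency domain heart rate signal
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)   # frequency of each bin, in Hz
    power = np.abs(spectrum) ** 2

    lf = power[(freqs >= 0.04) & (freqs <= 0.14)].sum()  # low-frequency band
    hf = power[(freqs >= 0.15) & (freqs <= 0.40)].sum()  # high-frequency band
    ratio = lf / hf if hf > 0 else float("inf")

    if ratio > 3:
        return 0  # awake
    if ratio >= 2:
        return 1  # light fatigue
    return 2      # severe fatigue

def track_fatigue(hr_signal: np.ndarray, fs: float, segment_s: float = 120.0) -> list:
    """Re-detect fatigue on consecutive two-minute segments, as described above."""
    step = int(segment_s * fs)
    return [detect_fatigue_state(hr_signal[i:i + step], fs)
            for i in range(0, len(hr_signal) - step + 1, step)]
```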
Please refer to fig. 1. In some embodiments, the human interaction controller may include a fatigue state detection module. The fatigue state detection module can be used for acquiring a time domain heart rate signal output by the heart rate sensor and detecting the fatigue state of the driver according to the method so as to obtain the fatigue state detection result of the driver. It is to be understood that the fatigue state detection module may be a separate module independent of the human interaction controller. When the fatigue state detection module is independent of the human-computer interaction controller, the fatigue state detection module can send the fatigue state detection result of the driver to the human-computer interaction controller.
The detection of the driver's attention focusing state is described below. Please refer to FIG. 4 and FIG. 5 in combination. FIG. 4 is a schematic diagram of the area division in front of the driver according to an embodiment of the present application. FIG. 5 is a schematic flowchart of detecting the driver's attention focusing state according to an embodiment of the present application.
In some embodiments, image data captured of the driver may first be acquired and then analyzed to determine the driver's eye gaze area, from which the attention focusing state of the driver is determined. The image data is any imagery from which the gaze area can be determined, such as eye images, head images or upper-body images of the driver. Understandably, if the driver's gaze area lies directly ahead of the seat while the vehicle is traveling, the driver's attention is relatively focused and the risk of distraction is low; if the gaze area lies anywhere else, there is a risk of distraction. On this basis, as shown in FIG. 4, the driver's surroundings may be divided in advance into a plurality of area ranges, and the attention focusing state determined from the area range to which the gaze area belongs. The area ranges can be determined based on an analysis method of human factors engineering, each range having a corresponding attention focusing state; dividing the ranges in this way makes the detection of the driver's attention focusing state more accurate.
See FIGS. 4 and 5 in combination. In this embodiment, the attention focusing state of the driver may be represented by the variable DDRA, different values of which represent different attention focusing states. Specifically, when the driver's eye gaze area lies within Zone1, DDRA is 00, indicating a low risk of distraction; within Zone2, DDRA is 01, indicating a moderate risk; within Zone3, DDRA is 02, indicating a high risk; and when the gaze area lies to the side of or behind the driver's seat, DDRA is 03, indicating an extremely high risk of distraction.
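A minimal sketch of this zone-to-code mapping follows; the zone labels and function name are illustrative, and the gaze classifier that produces the zone from image data is outside the scope of the sketch.

```python
# Map the area range containing the driver's eye gaze area to the DDRA code
# described above; "SIDE_OR_REAR" covers gaze to the side of or behind the seat.
DDRA_BY_ZONE = {
    "Zone1": 0,         # low distraction risk (written as 00 above)
    "Zone2": 1,         # moderate distraction risk (01)
    "Zone3": 2,         # high distraction risk (02)
    "SIDE_OR_REAR": 3,  # extremely high distraction risk (03)
}

def attention_state(gaze_zone: str) -> int:
    # An unrecognized zone is treated conservatively as extremely high risk.
    return DDRA_BY_ZONE.get(gaze_zone, 3)
```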
Please refer to fig. 1. In some embodiments, the human interaction controller may include an attention detection module. The attention detection module can be used for acquiring image data output by the attention detection camera and detecting the attention focusing state of the driver based on the method so as to obtain the detection result of the attention focusing state of the driver. It will be appreciated that the attention detection module may be a separate module from the human interaction controller. When the attention detection module is independent of the human-computer interaction controller, the attention detection module can send the detection result to the human-computer interaction controller.
The following describes detection of the driving state of the driver.
In some embodiments, the driving state of the driver indicates the driver's intervention in driving the vehicle, such as the operating states of the steering wheel, the accelerator pedal and the brake pedal. To detect the driving state, the current position of a target component of the vehicle is detected first, and the driving state is then determined from that position. As noted above, a target component is a vehicle component operated by the driver while driving; the target components may include the steering wheel, the accelerator pedal, the brake pedal and the seat belt. The driver controls the vehicle, or his or her own safety, by changing the positions of these components while driving: the accelerator and brake pedals each have a depressed and a released position, the steering wheel has a range of steering positions, and the seat belt is either fastened or unfastened. The driver controls the vehicle through the pedals and the steering wheel, and protects his or her own safety through the seat belt.
Please refer to FIG. 1. In some embodiments, the component position sensor may sense the position of a target component and output a different electrical signal for each position. For example, the sensor may output a low level when the accelerator pedal is released and a high level when the pedal is depressed. The position of the target component can thus be determined by analyzing the electrical signal output by the component position sensor.
In this embodiment, a variable APP may represent the current position of the accelerator pedal; a variable BPP, the position of the brake pedal; a variable SWA, the current steering position of the steering wheel; and a variable DSS, the current position of the seat belt. Different values of each variable represent different positions of the corresponding component.
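The four component positions can be gathered into a single record representing the driving state. The following is a minimal sketch assuming binary level signals for the pedals and the belt as described above; the field and type names are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DrivingState:
    app: int  # accelerator pedal: 0 = released, 1 = depressed
    bpp: int  # brake pedal: 0 = released, 1 = depressed
    swa: int  # steering wheel steering position index
    dss: int  # seat belt: 0 = unfastened, 1 = fastened

# Example: accelerator depressed, brake released, wheel at position 1, belt fastened.
state = DrivingState(app=1, bpp=0, swa=1, dss=1)
```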
Please continue to refer to fig. 1. In some embodiments, the human interaction controller may include a driving state detection module. The driving state detection module can be used for acquiring the position electric signal of the target component and analyzing the position electric signal to obtain a detection result of the driving state of the driver. It is understood that the driving state detection module may be a separate module independent of the human interaction controller. When the driving state detection module is independent of the human-computer interaction controller, the driving state detection module can send the detection result of the driving state to the human-computer interaction controller.
Based on the above description, the detection of the driver's state can be accomplished.
The following describes the takeover request state of the vehicle.
In some embodiments, the takeover request state characterizes the urgency with which the driver is requested to take over the vehicle and may be represented by a variable DDTFR, whose values indicate different degrees of urgency. Specifically, when the driving environment satisfies the designed operating environment of the automatic driving system, the vehicle can continue in the automatic driving state and DDTFR is 0, meaning the driver does not need to take over. When the driving environment does not satisfy those conditions, different DDTFR values represent different takeover deadlines: a value of 1 may indicate that the driver is requested to take over the vehicle in 10 seconds; a value of 2, in 3 seconds; and a value of 3, in 1 second. Different takeover request states of the vehicle are thus distinguished by the DDTFR value.
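The urgency codes can be kept alongside their takeover deadlines as a simple mapping; the sketch below follows the values listed above, with None marking the no-takeover case (the constant name is illustrative).

```python
# DDTFR urgency code -> requested takeover deadline in seconds
# (0 means the driving environment is satisfied and no takeover is needed).
TAKEOVER_DEADLINE_S = {0: None, 1: 10, 2: 3, 3: 1}
```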
With continued reference to fig. 1. In some embodiments, the autopilot domain controller may analyze detection results output by the forward-looking camera, the millimeter wave radar, and the laser radar, then determine a takeover request status of the vehicle, and send the takeover request status to the human-machine interaction controller.
Based on the above description, the detection of the driver state and the takeover request state of the vehicle is completed.
In step S22, the detected driver state and the takeover request state are taken as a target combination, and the target human-computer interaction strategy corresponding to that combination is determined; different combinations of the driver state and the takeover request state have respectively corresponding human-computer interaction strategies.
In some embodiments, a human-computer interaction strategy characterizes how the driver is prompted to take over when the vehicle needs to switch from automatic driving to manual driving. The strategy includes, but is not limited to, the volume of the prompt sound, the prompt timing, the number of prompts, and the selection of the human-computer interaction devices that perform the prompt. Different combinations of the driver state and the takeover request state can each have their own human-computer interaction strategy so that the driver is prompted effectively. The following example describes how, for different combinations, the prompt is performed by the human-computer interaction devices corresponding to each combination.
Following the description of step S21, suppose the fatigue state of the driver is represented by the variable DDS, the attention focusing state by DDRA, the current accelerator pedal position by APP, the current brake pedal position by BPP, the current steering wheel position by SWA, the current seat belt position by DSS, and the takeover request state by DDTFR. In some embodiments, different combinations of the driver state and the takeover request state are different combinations of values of these variables, and each combination may have its own corresponding human-computer interaction strategy. For example, consider the following combinations one and two:
the combination is as follows:
DDS =0 (driver awake), ddr =00 (low risk of driver distraction), APP =1 (accelerator pedal depressed), BPP =1 (accelerator pedal depressed), SWA =1 (steering wheel turned to position 1), DSS =1 (seat belt fastened), DDTFR =1 (driver requested to take over vehicle 10 seconds later)
Combining two:
DDS =3 (driver in severe fatigue), ddr =03 (driver at high risk of distraction), APP =1 (accelerator pedal depressed), BPP =1 (accelerator pedal depressed), SWA =1 (steering wheel turned to position 1), DSS =1 (seat belt fastened), DDTFR =3 (driver requested to take over vehicle 1 second later)
The driving state is the same in both combinations. In combination one, the driver is awake, the risk of distraction is low, and the urgency of the takeover request is low; in combination two, the driver is severely fatigued, the risk of distraction is extremely high, and the urgency is high. A gentler human-computer interaction strategy can therefore be used for combination one, for example activating only the indicator light and the speaker to remind the driver to take over the vehicle. For combination two, a more salient strategy can be used, for example activating the indicator light, the speaker and the vibration motor simultaneously. This avoids the driver failing to take over the vehicle in time because of fatigue or similar causes, which would compromise driving safety.
Based on the above description, in some embodiments, a human-machine interaction policy table may be pre-established, where the human-machine interaction policy table is used to store human-machine interaction policies corresponding to different combinations of the driver status and the takeover request status. Table 1 illustratively lists some of the contents of the human-machine interaction policy table.
Table 1 human-machine interaction policy Table
[Table 1 appears as an image in the original publication. Its rows pair value combinations of DDS, DDRA, APP, BPP, SWA, DSS and DDTFR with the actuator flags Light, Au, Vi and HMI described below.]
In Table 1, the meanings and values of DDS, DDRA, APP, BPP, SWA, DSS and DDTFR are as described above and are not repeated here. The remaining columns are actuator flags:
Light denotes the indicator light: a value of 1 means the indicator light operates, and 0 means it does not;
Au denotes the speaker: a value of 1 means the speaker operates, and 0 means it does not;
Vi denotes the vibration motor: a value of 1 means the vibration motor operates, and 0 means it does not;
HMI denotes the in-vehicle central control screen: a value of 1 means the screen operates, and 0 means it does not.
As can be seen in table 1, there are corresponding human-machine interaction strategies for different combinations of driver status and takeover request status, respectively. Therefore, the appropriate prompt mode can be adopted according to the state of the driver and the taking-over request state of the vehicle, and the driver is effectively prompted, so that the driving safety of the vehicle can be improved when the vehicle is switched from automatic driving to manual driving.
Understandably, beyond performing the prompt through the human-computer interaction devices corresponding to each combination, the prompts for different combinations can be further differentiated by sound intensity, light intensity, and the like. For example, for the first and third combinations in Table 1, the indicator light may shine more intensely for the third combination than for the first.
In some embodiments, once a target combination of the detected driver state and takeover request state is obtained, it can be used as a query condition to look up the corresponding human-computer interaction strategy in the pre-established policy table. Querying the table directly improves response efficiency and thus further improves driving safety.
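Putting the pieces together, the lookup in step S22 amounts to indexing a pre-built table with the tuple of detected state variables. The sketch below shows only the two combinations discussed above, with actuator flags matching the prompts described for them (indicator light and speaker for combination one; indicator light, speaker and vibration motor for combination two); the key layout and all names are illustrative assumptions, and a real table would enumerate all combinations.

```python
from typing import NamedTuple, Optional

class Prompt(NamedTuple):
    light: int  # indicator light on/off
    au: int     # speaker on/off
    vi: int     # vibration motor on/off
    hmi: int    # in-vehicle central control screen on/off

# Key: (DDS, DDRA, APP, BPP, SWA, DSS, DDTFR).
POLICY_TABLE = {
    (0, 0, 1, 1, 1, 1, 1): Prompt(light=1, au=1, vi=0, hmi=0),  # combination one: gentle
    (2, 3, 1, 1, 1, 1, 3): Prompt(light=1, au=1, vi=1, hmi=0),  # combination two: salient
}

def target_strategy(dds: int, ddra: int, app: int, bpp: int,
                    swa: int, dss: int, ddtfr: int) -> Optional[Prompt]:
    """Step S22: look up the strategy for the detected target combination."""
    return POLICY_TABLE.get((dds, ddra, app, bpp, swa, dss, ddtfr))
```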
In step S23, the human-computer interaction devices on the vehicle are controlled to operate according to the target human-computer interaction strategy, so as to remind the driver to take over the vehicle.
In some embodiments of the application, the detected driver state and takeover request state of the vehicle are treated as a target combination, and the corresponding human-computer interaction devices on the vehicle are controlled to operate according to the human-computer interaction strategy for that combination. When the vehicle switches from automatic driving to manual driving, the driver can thus be effectively reminded to take over, preventing the situation in which a fatigued or otherwise inattentive driver misses the takeover request and fails to take over in time, and thereby improving the driving safety of the vehicle during the switch.
Please refer to FIG. 6, which is a block diagram of a human-computer interaction device according to an embodiment of the present application. The human-computer interaction device comprises:
a detection module, configured to detect a driver state and a takeover request state of a vehicle when the vehicle is in an automatic driving state, wherein the takeover request state represents the urgency with which the driver is requested to take over the vehicle;
a decision module, configured to take the detected driver state and the takeover request state as a target combination and determine a target human-computer interaction strategy corresponding to the target combination, wherein different combinations of the driver state and the takeover request state have respectively corresponding human-computer interaction strategies; and
a control module, configured to control the corresponding human-computer interaction equipment on the vehicle to operate according to the target human-computer interaction strategy, so as to remind the driver to take over the vehicle.
Please refer to fig. 7, which is a schematic diagram of a human-computer interaction system according to an embodiment of the present application. The human-computer interaction system comprises a processor and a memory, wherein the memory is used for storing a computer program, and the computer program realizes the human-computer interaction method when being executed by the processor.
The processor may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory, as a non-transitory computer-readable storage medium, may store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the methods in the embodiments of the present invention. By executing the non-transitory software programs, instructions and modules stored in the memory, the processor performs its functional applications and data processing, thereby implementing the method of the above method embodiment.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor, and the like. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be coupled to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
An embodiment of the present application further provides a computer-readable storage medium, which is used for storing a computer program, and when the computer program is executed by a processor, the computer program implements the human-computer interaction method described above.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A human-computer interaction method for a vehicle, the method comprising:
detecting a driver state and a takeover request state of the vehicle when the vehicle is in an automatic driving state, wherein the takeover request state represents an urgency with which the driver is requested to take over the vehicle;
taking the detected driver state and the takeover request state as a target combination, and determining a target human-computer interaction strategy corresponding to the target combination, wherein different combinations of the driver state and the takeover request state have respectively corresponding human-computer interaction strategies; and
controlling human-computer interaction equipment on the vehicle to operate according to the target human-computer interaction strategy, so as to remind the driver to take over the vehicle.
2. The method of claim 1, wherein the driver state comprises a fatigue state of the driver, wherein the driver fatigue state is detected based on:
acquiring a time domain heart rate signal of the driver, and performing Fourier transform on the time domain heart rate signal to obtain a frequency domain heart rate signal of the time domain heart rate signal;
and calculating the frequency ratio of the low frequency to the high frequency in the frequency domain heart rate signal, and determining the fatigue state of the driver according to the frequency ratio.
3. The method of claim 2, wherein said determining the fatigue state of the driver from the frequency ratio comprises:
and determining the fatigue state of the driver according to the ratio range to which the frequency ratio belongs, wherein the ratio range is determined based on an analysis method of human factors engineering, and each ratio range has a corresponding fatigue state.
4. The method of claim 1, wherein the driver state comprises a state of attention focus of the driver, wherein the state of attention focus of the driver is detected based on:
acquiring image data captured of the driver;
analyzing the image data, determining an eyeball watching area of the driver, and determining the attention concentration state of the driver according to the eyeball watching area.
5. The method of claim 4, wherein said determining the driver's state of concentration from the eye gaze region comprises:
and determining the attention focusing state of the driver according to the area range to which the eyeball watching area belongs, wherein the area range is determined based on an analysis method of human factors engineering, and each area range has a corresponding attention focusing state.
6. The method of claim 1, wherein the driver state comprises a driving state of the driver characterizing an intervention state of the driver for vehicle driving, wherein the driving state is determined based on:
detecting the current position of a target component in a vehicle, wherein the target component is a vehicle component operated by a driver when the vehicle is driven;
and determining the driving state of the driver according to the current position of the target component.
7. The method of claim 1, wherein the determining the human-computer interaction policy corresponding to the target combination comprises:
and taking the target combination as a query condition, and querying a human-computer interaction strategy corresponding to the target combination in a pre-established human-computer interaction strategy table, wherein the human-computer interaction strategy table is used for storing human-computer interaction strategies respectively corresponding to different combinations of the driver state and the takeover request state.
8. A human-computer interaction device, characterized in that the device comprises:
a detection module, configured to detect a driver state and a takeover request state of a vehicle when the vehicle is in an automatic driving state, wherein the takeover request state represents an urgency with which the driver is requested to take over the vehicle;
a decision module, configured to take the detected driver state and the takeover request state as a target combination and determine a target human-computer interaction strategy corresponding to the target combination, wherein different combinations of the driver state and the takeover request state have respectively corresponding human-computer interaction strategies; and
a control module, configured to control corresponding human-computer interaction equipment on the vehicle to operate according to the target human-computer interaction strategy, so as to remind the driver to take over the vehicle.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium is used for storing a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
10. A human-computer interaction system, characterized in that the human-computer interaction system comprises a processor and a memory for storing a computer program which, when executed by the processor, carries out the method of any one of claims 1 to 7.
CN202210677459.XA, filed 2022-06-16: Vehicle human-computer interaction method, device and system. Status: Pending. Published as CN114771559A.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210677459.XA 2022-06-16 2022-06-16 Vehicle human-computer interaction method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210677459.XA 2022-06-16 2022-06-16 Vehicle human-computer interaction method, device and system

Publications (1)

Publication Number Publication Date
CN114771559A 2022-07-22

Family

ID=82420692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210677459.XA Vehicle human-computer interaction method, device and system 2022-06-16 2022-06-16 (Pending)

Country Status (1)

Country Link
CN (1): CN114771559A

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106314437A (en) * 2016-08-30 2017-01-11 宇龙计算机通信科技(深圳)有限公司 Fatigue driving preventing device and method and intelligent bracelet
CN107200022A (en) * 2016-03-15 2017-09-26 奥迪股份公司 Drive assist system and method
CN108819945A (en) * 2018-05-25 2018-11-16 吉利汽车研究院(宁波)有限公司 A kind of automatic driving vehicle adapter tube suggestion device and method
US20190049955A1 (en) * 2017-08-10 2019-02-14 Omron Corporation Driver state recognition apparatus, driver state recognition system, and driver state recognition method
CN111361566A (en) * 2018-12-06 2020-07-03 驭势(上海)汽车科技有限公司 Takeover reminding method for automatic driving vehicle, vehicle-mounted equipment and storage medium
CN112693469A (en) * 2021-01-05 2021-04-23 中国汽车技术研究中心有限公司 Method and device for testing vehicle taking over by driver, electronic equipment and medium
CN112758098A (en) * 2019-11-01 2021-05-07 广州汽车集团股份有限公司 Vehicle driving authority take-over control method and device based on driver state grade
CN113071446A (en) * 2021-02-26 2021-07-06 三三智能科技(苏州)有限公司 Health index monitoring system based on automobile intelligent safety belt and data processing method thereof
US20210394789A1 (en) * 2020-06-20 2021-12-23 Tsinghua University Evaluation method and system for steering comfort in human machine cooperative take-over control process of autonomous vehicle, and storage medium
CN114274954A (en) * 2021-12-16 2022-04-05 上汽大众汽车有限公司 Vehicle active pipe connecting system and method combining vehicle internal and external sensing


Similar Documents

Publication Publication Date Title
CN107640159B (en) Automatic driving human-computer interaction system and method
CN109177975B (en) Integrated cruise system exit method and device
US10246091B2 (en) Rear monitoring for automotive cruise control systems
US9056550B2 (en) Vehicle speed limiting and/or controlling system that is responsive to GPS signals
JP4510389B2 (en) Method and apparatus for driver warning
US20170240185A1 (en) Driver assistance apparatus and vehicle having the same
JP6565408B2 (en) Vehicle control apparatus and vehicle control method
CN107209995B (en) Driving support device and driving support method
US10926699B2 (en) Method and system for historical state based advanced driver assistance
CN112758098B (en) Vehicle driving authority take-over control method and device based on driver state grade
US11587461B2 (en) Context-sensitive adjustment of off-road glance time
WO2022143994A1 (en) Vehicle avoidance method and apparatus
US20150002286A1 (en) Display control apparatus
US20220258771A1 (en) Method to detect driver readiness for vehicle takeover requests
US20200189518A1 (en) Attention calling device and attention calling system
JP2019513621A (en) Regional Adjustment for Driver Assist Function
CN112114671A (en) Human-vehicle interaction method and device based on human eye sight and storage medium
CN114771559A Vehicle human-computer interaction method, device and system
JP6946789B2 (en) Awakening maintenance device
KR102331882B1 (en) Method and apparatus for controlling an vehicle based on voice recognition
CN114291110B (en) Vehicle control method, device, electronic equipment and storage medium
US11440553B2 (en) Vehicular device and computer-readable non-transitory storage medium storing computer program
WO2022258150A1 (en) Computer-implemented method of assisting a driver, computer program product, driving assistance system, and vehicle
CN106314256A (en) Automobile lighting control method, device and system
TWI831370B (en) Method and system for driving assisting

Legal Events

Code: Description
PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2022-07-22)