WO2021176633A1 - Driver state estimation device and driver state estimation method - Google Patents


Info

Publication number
WO2021176633A1
WO2021176633A1 (application PCT/JP2020/009315)
Authority
WO
WIPO (PCT)
Prior art keywords
driver
state
related information
unit
information
Application number
PCT/JP2020/009315
Other languages
English (en)
Japanese (ja)
Inventor
堅人 田中
季美果 池上
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Application filed by Mitsubishi Electric Corporation (三菱電機株式会社)
Priority to PCT/JP2020/009315
Publication of WO2021176633A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/08: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models, related to drivers or passengers

Definitions

  • The present disclosure relates to a driver state estimation device and a driver state estimation method for estimating a driver's state.
  • Patent Document 1 discloses a technique for determining, from the driver's biometric information, that the driver is driving in a normal state.
  • The present disclosure has been made to solve the above-mentioned problem, and an object of the present disclosure is to provide a driver state estimation device capable of determining that the driver is driving in a normal state with higher accuracy than when the determination is made from biometric information.
  • The driver state estimation device includes: an information collecting unit that collects driver-related information related to the driver of a vehicle; a state determination unit that determines, based on the driver's reaction, whether the driver-related information collected by the information collecting unit is driver-related information collected while the driver is in a normal state; and a state estimation unit that estimates whether or not the driver is in the normal state based on the driver-related information collected by the information collecting unit and on estimation information that is reset based on the determination result, by the state determination unit, that the driver-related information is driver-related information collected while the driver is in the normal state.
  • FIG. 1 is a diagram showing a configuration example of the driver state estimation device according to Embodiment 1.
  • FIG. 2 is a diagram showing an image of an example of the estimation information in Embodiment 1.
  • FIG. 3 is a flowchart for explaining the operation of the driver state estimation device according to Embodiment 1.
  • FIG. 4 is a flowchart for explaining the specific operation of step ST302 of FIG. 3.
  • FIG. 5 is a flowchart for explaining the specific operation of step ST403 of FIG. 4.
  • FIG. 6 is a diagram showing a configuration example of the driver state estimation device according to Embodiment 2.
  • FIG. 7 is a diagram for explaining a neural network.
  • FIG. 8 is a flowchart for explaining the operation of the driver state estimation device according to Embodiment 2.
  • FIG. 9 is a diagram showing a configuration example of the driver state estimation device and a learning device in a case where, in Embodiment 2, the learning unit is provided in a learning device outside the driver state estimation device.
  • FIGS. 10A and 10B are diagrams showing an example of the hardware configuration of the driver state estimation device according to Embodiments 1 and 2.
  • the driver state estimation device is mounted on the vehicle.
  • The driver state estimation device collects information related to the driver of the vehicle (hereinafter referred to as "driver-related information"), and estimates whether or not the driver is in a normal state based on the collected driver-related information and the stored estimation information.
  • The driver-related information includes information relating to the state of the environment surrounding the driver, such as information about the driver himself or herself, information about the state inside the vehicle driven by the driver, or information about the state outside the vehicle.
  • the "normal state” means a state in which the driver can drive the vehicle normally.
  • the "normal state” is, for example, a state in which the driver can concentrate on driving, a state in which the driver is awake, a state in which the driver is not exhausted, or a state in which the driver is not frustrated.
  • The estimation information is information in which driver-related information and an estimation rule for determining whether or not the driver is in a normal state are associated with each other.
  • the estimation information is generated based on, for example, driver-related information that is assumed to be collected during driving by a general driver in a normal state.
  • the estimation information is generated in advance at the time of shipment of the driver state estimation device and is stored in the storage unit of the driver state estimation device.
  • The driver state estimation device collects driver-related information when the driver starts driving the vehicle, and determines whether the collected driver-related information is driver-related information collected while the driver is in a normal state (hereinafter referred to as "normal state driver-related information").
  • When the driver state estimation device determines that the driver-related information is normal state driver-related information, the driver state estimation device updates the stored estimation information based on the normal state driver-related information.
  • In this way, the driver state estimation device according to Embodiment 1 brings the stored estimation information closer to estimation information that matches the driver who is actually driving.
  • While updating the estimation information, the driver state estimation device estimates whether or not the driver is in a normal state based on the collected driver-related information and the updated estimation information. The details of the determination of whether driver-related information is normal state driver-related information, and of the update of the estimation information, by the driver state estimation device according to Embodiment 1 will be described later.
  • FIG. 1 is a diagram showing a configuration example of a driver state estimation device according to the first embodiment.
  • the driver state estimation device 1 includes an information collection unit 11, a state determination unit 12, a state estimation unit 13, a storage unit 14, and an output unit 15.
  • the state determination unit 12 includes an inquiry unit 121, a response acquisition unit 122, and a determination unit 123.
  • the state estimation unit 13 includes an update unit 131 and an estimation unit 132.
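  • The unit composition above can be sketched as follows. This is an illustrative Python sketch only; all class names, method names, and the response convention (a "no" answer to "Are you sleepy?" indicating a normal state) are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field

class Inquirer:
    """Inquiry unit 121: issues an inquiry about the driver's condition."""
    def ask(self) -> str:
        return "Are you sleepy?"

class ResponseAcquirer:
    """Response acquisition unit 122: normalizes the acquired response."""
    def acquire(self, raw: str) -> str:
        return raw.strip().lower()

class Determiner:
    """Determination unit 123: performs the normal state determination."""
    def is_normal(self, response: str) -> bool:
        # Assumed convention: answering "no" to "Are you sleepy?" is normal.
        return response == "no"

@dataclass
class StateDeterminer:
    """State determination unit 12, composed of units 121 to 123."""
    inquirer: Inquirer = field(default_factory=Inquirer)
    acquirer: ResponseAcquirer = field(default_factory=ResponseAcquirer)
    determiner: Determiner = field(default_factory=Determiner)

    def determine(self, raw_response: str) -> bool:
        return self.determiner.is_normal(self.acquirer.acquire(raw_response))
```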
  • The information collecting unit 11 collects driver-related information. More specifically, the information collecting unit 11 extracts feature amounts from the information collected from an information collecting device (not shown), reflects the extracted feature amounts in the collected information, and uses the information after the feature amounts are reflected as the driver-related information.
  • the information collecting device is, for example, an image pickup device that images the inside of a vehicle (hereinafter, referred to as an “in-vehicle image pickup device”; not shown).
  • the in-vehicle image pickup device is a camera or the like installed for the purpose of monitoring the inside of the vehicle, and is installed so as to be able to image at least the driver's face.
  • the in-vehicle image pickup device may be shared with, for example, a so-called "driver monitoring system (DMS)".
  • the information collecting device is, for example, a microphone (not shown) installed for the purpose of collecting sound in the vehicle.
  • The information collecting devices also include devices installed in the vehicle to detect an operation performed on the vehicle by the driver or the state of the vehicle (hereinafter referred to as "vehicle state detection devices"; not shown), such as a vehicle speed sensor, an accelerator opening sensor, a brake sensor, a button, a turn signal, or a GPS (Global Positioning System) device (none shown).
  • The information collecting devices also include devices installed in the vehicle that acquire information on the surroundings of the vehicle (hereinafter referred to as "peripheral information acquisition devices"; not shown), such as an image pickup device that images the surroundings of the vehicle (hereinafter referred to as an "outside-vehicle image pickup device"; not shown), a sensor (not shown), or a LiDAR (not shown).
  • From the captured image collected from the in-vehicle image pickup device (hereinafter referred to as the "in-vehicle image"), the information collecting unit 11 extracts feature amounts according to the driver's body temperature, degree of sweating, heartbeat, facial expression, emotion, line of sight, degree of eye opening, pupil size, face orientation, or posture.
  • the information collecting unit 11 uses the in-vehicle image after reflecting the extracted feature amount as driver-related information.
  • From the voice information collected from the microphone, the information collecting unit 11 extracts feature amounts according to the voice uttered by the driver, the driver's voice quality, or the voice uttered by a passenger, and uses the voice information after reflecting the extracted feature amounts as the driver-related information.
  • From the information collected from the vehicle state detection device, the information collecting unit 11 extracts feature amounts according to the steering angle, the accelerator opening, the brake opening, button operations, turn-signal operations, the vehicle speed, the acceleration, or the position of the own vehicle, and uses the information after reflecting the extracted feature amounts as the driver-related information.
  • From the information collected from the peripheral information acquisition device, the information collecting unit 11 extracts feature amounts according to, for example, the distance to another vehicle or whether the vehicle has crossed a white line, and uses the information after reflecting the extracted feature amounts as the driver-related information.
  • the information collecting unit 11 outputs the driver-related information to the state determination unit 12.
  • The information collecting unit 11 may collect information from a plurality of information collecting devices. In that case, the information collecting unit 11 extracts feature amounts from the collected information for each information collecting device, reflects the extracted feature amounts in the collected information, and uses the result as the driver-related information. The information collecting devices mentioned above are only examples; the information collecting devices include various devices capable of collecting driver-related information.
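  • As an illustration of the per-device feature extraction described above, the following Python sketch merges feature amounts extracted per information collecting device into one driver-related-information record; the device names, feature names, and extractor functions are assumptions, not taken from the disclosure.

```python
def extract_vehicle_features(vehicle_log):
    # Hypothetical feature amount: count of sudden-acceleration events
    # in a per-10-km event log from the vehicle state detection device.
    return {"sudden_accel_per_10km": vehicle_log.count("sudden_accel")}

def extract_audio_features(utterances):
    # Hypothetical feature amount: whether the driver spoke at all.
    return {"driver_spoke": any(u["speaker"] == "driver" for u in utterances)}

def collect(raw_by_device, extractors):
    """Extract a feature amount per device and reflect all of them
    in a single driver-related-information record."""
    record = {}
    for device, raw in raw_by_device.items():
        record.update(extractors[device](raw))
    return record
```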
  • Based on the driver's reaction, the state determination unit 12 determines whether the driver-related information collected by the information collection unit 11 is driver-related information collected while the driver is in a normal state, that is, normal state driver-related information.
  • the driver's reaction is, for example, the driver's response to an inquiry about the driver's condition made by the driver state estimation device 1 to the driver.
  • the inquiry unit 121 of the state determination unit 12 makes an inquiry to the driver regarding the driver's condition. Specifically, the inquiry unit 121 makes the above inquiry by voice, for example.
  • The inquiry unit 121 outputs, for example, a voice message such as "Are you sleepy?" or "Are you tired?" from a speaker (not shown) installed in the vehicle. When the driver hears the output voice message, he or she responds, for example, with "yes" or "no".
  • the inquiry unit 121 may make the above inquiry by displaying, for example.
  • The inquiry unit 121 outputs, for example, a display message such as "Are you sleepy?" or "Are you tired?" on a touch panel display (not shown) installed in the vehicle.
  • When the driver visually recognizes the displayed message, he or she responds by touching, for example, the "Yes" button or the "No" button.
  • the "Yes" button or the "No” button is displayed together with the display message when, for example, the inquiry unit 121 displays the display message.
  • the driver may respond to the inquiry by the inquiry unit 121 by a method other than voice or button touch.
  • The driver may also use his or her face to respond to the "Are you sleepy?" inquiry output by the inquiry unit 121 by voice message or by display.
  • A response using the face to an inquiry output by the inquiry unit 121 is, for example, a response by nodding, a response by changing the facial expression, a response by changing the direction of the face, a response by changing the degree of eye opening, or a response by changing the line of sight.
  • the driver may respond to the inquiry of the inquiry unit 121 by combining a plurality of methods. For example, the driver may combine a voice response and a face response to the inquiry of the inquiry unit 121.
  • the inquiry unit 121 may simultaneously make an inquiry by voice and an inquiry by display, for example.
  • the content of the voice message or the content of the display message described above is only an example.
  • In short, it suffices that the inquiry unit 121 make an inquiry from which a response indicating whether or not the driver is in a normal state can be obtained from the driver.
  • The response acquisition unit 122 of the state determination unit 12 acquires the driver's response to the inquiry made by the inquiry unit 121. For example, in the above example, when the inquiry unit 121 outputs the voice message "Are you sleepy?", the response acquisition unit 122 acquires the driver's uttered "yes" or "no". The response acquisition unit 122 may also acquire the driver's response to the inquiry from the driver-related information, such as the in-vehicle image collected by the information collection unit 11. For example, when the driver responds by nodding to the voice message "Are you sleepy?" output by the inquiry unit 121, the response acquisition unit 122 acquires the response from the in-vehicle image.
  • the response acquisition unit 122 may acquire information to the effect that the driver nodded by using a known image recognition technique.
  • the response acquisition unit 122 outputs the information regarding the acquired response to the determination unit 123.
  • the response acquisition unit 122 outputs the information regarding the inquiry made by the inquiry unit 121 to the determination unit 123 together with the information regarding the acquired response.
  • The determination unit 123 of the state determination unit 12 determines, based on the driver's reaction, whether the driver-related information collected by the information collection unit 11 is normal state driver-related information (hereinafter referred to as the "normal state determination"). Specifically, based on the information about the response output from the response acquisition unit 122, the determination unit 123 determines that the driver is in a normal state when the acquired response is information indicating that the driver is in a normal state. As a specific example, when the response acquisition unit 122 outputs information indicating that the driver responded by voice to the voice-message inquiry "Are you sleepy?", the determination unit 123 determines that the driver-related information collected by the information collecting unit 11 is normal state driver-related information. Note that what kind of inquiry is made to the driver, and what kind of response from the driver leads to the determination that the driver is in a normal state, are predetermined.
  • The determination unit 123 may use a known technique such as voice recognition in making the normal state determination.
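  • Since the inquiry contents and the responses that count as normal are predetermined, the normal state determination at the determination unit 123 can be modelled as a simple lookup, as in the following sketch; the table contents are illustrative assumptions.

```python
# Predetermined (inquiry -> responses indicating a normal state) table.
# The entries here are illustrative assumptions, not from the disclosure.
NORMAL_RESPONSES = {
    "Are you sleepy?": {"no"},
    "Are you tired?": {"no"},
}

def determine_normal(inquiry, response):
    """Return True when the acquired response indicates a normal state."""
    return response.strip().lower() in NORMAL_RESPONSES.get(inquiry, set())
```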
  • the driver's reaction is the driver's response to the inquiry about the driver's state made by the driver state estimation device 1 to the driver, but this is only an example.
  • the driver's reaction may be the driver's response to the utterance of the passenger.
  • In this case, the determination unit 123 acquires the driver's response to the passenger's utterance based on the driver-related information, such as the in-vehicle voice information collected by the information collection unit 11. The determination unit 123 then performs the normal state determination based on the acquired response, and may determine that the driver is in a normal state when the acquired response is information indicating that the driver is in a normal state.
  • For example, when the determination unit 123 can confirm, based on the in-vehicle voice information collected by the information collection unit 11, that the driver made some response to some utterance by the passenger, the determination unit 123 may regard this as information indicating that the driver is in a normal state and determine that the driver is in a normal state. It is assumed that information for identifying the passenger's uttered voice and information for identifying the driver's uttered voice are registered in advance, and the determination unit 123 may identify the driver's or the passenger's voice based on the registered information.
  • The determination unit 123 may also analyze the utterance content using a known voice recognition technique, and when the driver does not utter content complaining of an abnormal state, regard the acquired response as information indicating that the driver is in a normal state.
  • For example, when, based on the voice information collected by the information collection unit 11, the driver responds "OK" to the passenger's utterance "Do you want to sleep?", the determination unit 123 determines that the acquired response indicates that the driver is in a normal state, and determines that the driver is in a normal state.
  • On the other hand, when the acquired response is not information indicating that the driver is in a normal state, the determination unit 123 does not determine that the driver is in a normal state. Note that what kind of response from the driver leads to the determination that the driver is in a normal state is predetermined.
  • the driver's reaction may be the driver's reaction to an external event.
  • In this case, based on the driver-related information as vehicle peripheral information and the driver-related information as the in-vehicle image collected by the information collection unit 11, the determination unit 123 determines whether a reaction to some external event occurring outside the vehicle could be obtained from the driver.
  • the "external event” refers to various events that suddenly occur outside the vehicle, such as jumping out or sudden braking by a vehicle in front.
  • The vehicle peripheral information is a captured image of the surroundings of the vehicle (hereinafter referred to as the "vehicle peripheral image").
  • When the driver's reaction to the external event can be obtained, the determination unit 123 regards this as acquiring information indicating that the driver is in a normal state, and determines that the driver is in a normal state.
  • As a specific example, when a pop-out occurs in front of the vehicle, the determination unit 123 determines, based on the vehicle peripheral image and the in-vehicle image collected by the information collecting unit 11, whether the driver turned his or her line of sight in the direction in which the pop-out occurred.
  • When the driver turned his or her line of sight in that direction, the determination unit 123 regards this as acquiring information indicating that the driver is in a normal state, and determines that the driver is in a normal state. Conversely, when the driver did not turn his or her line of sight in the direction in which the pop-out occurred, the determination unit 123 cannot acquire information indicating that the driver is in a normal state, and does not determine that the driver is in a normal state.
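  • The line-of-sight check above can be sketched as a comparison between the gaze direction and the direction of the external event; the angle convention and the tolerance value below are illustrative assumptions.

```python
def reacted_to_event(gaze_angle_deg, event_angle_deg, tolerance_deg=15.0):
    """True when the driver's line of sight falls within a tolerance of the
    direction in which the external event (e.g. a pop-out) occurred."""
    diff = abs(gaze_angle_deg - event_angle_deg) % 360.0
    diff = min(diff, 360.0 - diff)  # shortest angular distance
    return diff <= tolerance_deg
```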
  • When the normal state determination is made based on the driver's reaction to an external event, it is not essential for the state determination unit 12 to include the inquiry unit 121 and the response acquisition unit 122.
  • the determination unit 123 may determine the normal state based on information other than the driver's reaction in addition to the driver's reaction. For example, the determination unit 123 may determine the normal state based on the driver's biological information in addition to the driver's reaction.
  • In this case, the determination unit 123 determines that the driver is in a normal state when the information acquired as the driver's reaction indicates that the driver is in a normal state and the driver's biometric information also indicates that the driver is in a normal state.
  • The driver's biometric information is included in, for example, the driver-related information collected by the information collecting unit 11. It is assumed that conditions for determining which biometric information indicates that the driver is in a normal state are set in advance. For example, "the degree of eye opening is equal to or higher than a preset threshold value" is set as a condition for determining that the driver is in a normal state.
  • Further, for example, the determination unit 123 may perform the normal state determination based on the driver's voice quality in addition to the driver's reaction, or based on the driver's facial expression in addition to the driver's reaction. The determination unit 123 may also perform the normal state determination based on the driver's reaction and the time taken until the reaction is acquired. The time until the reaction is acquired is, for example, the time from when the inquiry unit 121 outputs the inquiry until the response acquisition unit 122 acquires the response, or the time from when an external event occurs until the determination unit 123 acquires the driver's reaction to the external event.
  • For example, when the determination unit 123 acquires information indicating that the driver is in a normal state and the time taken to acquire that information is equal to or less than a preset threshold value (hereinafter referred to as the "reaction time determination threshold value"), the determination unit 123 determines that the driver is in a normal state.
  • the determination unit 123 may change the reaction time determination threshold value according to the driver.
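  • The reaction-time check, including the per-driver change of the reaction time determination threshold value, can be sketched as follows; the threshold values, driver identifier, and function name are illustrative assumptions.

```python
# Reaction time determination thresholds (seconds); values are assumptions.
DEFAULT_THRESHOLD_S = 3.0
PER_DRIVER_THRESHOLD_S = {"driver_a": 2.0}  # hypothetical per-driver override

def normal_with_reaction_time(response_is_normal, reaction_time_s,
                              driver_id=None):
    """Normal only if the response indicates a normal state AND it arrived
    within the (possibly driver-specific) reaction time threshold."""
    threshold = PER_DRIVER_THRESHOLD_S.get(driver_id, DEFAULT_THRESHOLD_S)
    return response_is_normal and reaction_time_s <= threshold
```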
  • In this way, by performing the normal state determination based on the driver's reaction together with other information, the driver state estimation device 1 can determine the normal state more accurately than when the determination is made from the driver's reaction alone.
  • The state determination unit 12 associates the driver-related information collected by the information collection unit 11 with the result of the determination of whether or not the driver-related information is normal state driver-related information, and outputs them to the state estimation unit 13. Further, when the state determination unit 12 determines that the driver-related information collected by the information collection unit 11 is normal state driver-related information, it accumulates the normal state driver-related information in the storage unit 14 for each feature amount.
  • the state determination unit 12 performs the normal state determination as described above at an appropriate timing.
  • For example, the normal state determination is performed at a preset time interval (hereinafter referred to as the "set time interval"), such as every 30 minutes or every hour.
  • That is, the inquiry unit 121 may make an inquiry at the set time interval. The set time interval may be changed according to the elapsed time since the driver started driving the vehicle.
  • When the determination unit 123 performs the normal state determination by acquiring, based on the driver-related information, the driver's reaction to an external event rather than the driver's response to an inquiry by the inquiry unit 121, the determination unit 123 performs the normal state determination constantly.
  • When the determination unit 123 determines, as a result of the normal state determination, that the driver is in a normal state, it can determine that the driver-related information collected by the information collecting unit 11 while a preset condition is satisfied, starting from the time at which the driver was determined to be in a normal state, is also normal state driver-related information.
  • The preset condition is, for example, "for a preset time" or "until the traveling condition changes".
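  • The validity window implied by the preset condition can be sketched as follows; the window length and function name are illustrative assumptions.

```python
def within_validity(t_collected_s, t_determined_s, window_s=600.0,
                    travel_condition_changed=False):
    """True when driver-related information collected at t_collected_s may be
    treated as normal state driver-related information, given a normal-state
    determination at t_determined_s and a preset validity window."""
    if travel_condition_changed:
        return False  # "until the traveling condition changes"
    return 0.0 <= t_collected_s - t_determined_s <= window_s
```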
  • The state estimation unit 13 estimates whether or not the driver is in a normal state based on the driver-related information collected by the information collecting unit 11 and the estimation information stored in the storage unit 14. More specifically, the state estimation unit 13 resets the estimation information based on the driver-related information collected by the information collecting unit 11 and on the determination result, by the state determination unit 12, that the driver-related information is normal state driver-related information, and updates the estimation information stored in the storage unit 14. After the estimation information is updated, the state estimation unit 13 estimates whether or not the driver is in a normal state based on the driver-related information collected by the information collection unit 11 and the reset estimation information.
  • Specifically, the update unit 131 of the state estimation unit 13 resets the estimation information based on the normal state driver-related information, and updates the estimation information stored in the storage unit 14 to the estimation information after the reset.
  • the estimation unit 132 of the state estimation unit 13 estimates whether or not the driver is in a normal state based on the reset estimation information. The details of the estimation unit 132 will be described later.
  • FIG. 2 is a diagram showing an image of an example of estimation information in the first embodiment.
  • the estimation information is information in which the driver-related information and the estimation rule for estimating that the driver is in a normal state are associated with each other.
  • the feature amount information reflected in the driver-related information is also illustrated.
  • For example, when the driver-related information is an in-vehicle image reflecting a feature amount showing the driver's facial expression, the estimation information indicates that the driver can be presumed to be in a normal state if the driver is not sleeping, in other words, if the driver's eyes are open.
  • Here, it is assumed that the storage unit 14 stores estimation information as shown in FIG. 2, and that the state determination unit 12 has output normal state driver-related information as vehicle information.
  • Assume that the normal state driver-related information reflects a feature amount indicating that sudden acceleration occurs 1.5 times per 10 km; that is, this driver is in a normal state even when accelerating suddenly 1.5 times per 10 km.
  • In this case, the update unit 131 resets the estimation rule associated with the vehicle information from "the sudden acceleration per 10 km is within 1 time" to "the sudden acceleration per 10 km is within 1.5 times". As a result, the update unit 131 can set an estimation rule for the normal state that matches the driver.
  • The update unit 131 may also reset the estimation rule based on the normal state driver-related information accumulated in the storage unit 14. Specifically, when normal state driver-related information reflecting a feature amount indicating 1.5 sudden accelerations per 10 km has been accumulated in a number equal to or greater than a preset threshold value, the update unit 131 may reset the estimation rule associated with the vehicle information in the estimation information from "the sudden acceleration per 10 km is within 1 time" to "the sudden acceleration per 10 km is within 1.5 times".
  • Alternatively, the update unit 131 may recalculate the "n" in "the sudden acceleration per 10 km is within n times" based on the accumulated normal state driver-related information and reset the estimation rule.
  • For example, the update unit 131 may set "n" to the largest number of sudden accelerations per 10 km found in the accumulated normal state driver-related information.
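  • The recalculation of "n" from the accumulated normal state driver-related information can be sketched as taking the largest accumulated count, as follows; the function name is illustrative.

```python
def reset_rule_bound(accumulated_counts, current_bound):
    """Return the new "sudden accelerations per 10 km within n times" bound,
    set to the largest count observed while the driver was in a normal state."""
    if not accumulated_counts:
        return current_bound  # nothing accumulated yet; keep the rule as-is
    return max(accumulated_counts)
```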
  • the usage rule flag is a flag for designating an estimation rule used by the estimation unit 132 when estimating whether or not the driver is in a normal state.
  • the estimation unit 132 estimates whether or not the driver is in a normal state according to an estimation rule in which the usage rule flag "1" is set.
  • The update unit 131 may set the usage rule flag based on the normal state driver-related information.
  • For example, when the state determination unit 12 outputs normal state driver-related information reflecting a feature amount indicating the number of sudden accelerations per 10 km, the update unit 131 updates the estimation rule for the number of sudden accelerations per 10 km and sets its usage rule flag to "1".
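  • The update of an estimation rule together with its usage rule flag can be sketched as follows; the rule-table layout and names are assumptions based on the description of FIG. 2.

```python
# Illustrative estimation-information table: feature name -> rule.
rules = {"sudden_accel_per_10km": {"bound": 1.0, "use_flag": 0}}

def on_normal_state_info(rules, feature, observed):
    """Reset the matching estimation rule from normal state driver-related
    information and designate it for use by setting its usage rule flag."""
    rule = rules[feature]
    rule["bound"] = max(rule["bound"], observed)  # relax bound to fit driver
    rule["use_flag"] = 1  # the estimation unit will now use this rule
```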
  • The estimation unit 132 of the state estimation unit 13 estimates whether or not the driver is in a normal state based on the driver-related information collected by the information collection unit 11 and the estimation information stored in the storage unit 14.
  • Specifically, the estimation unit 132 estimates whether or not the driver is in a normal state based on the driver-related information collected by the information collection unit 11 and the estimation information updated by the update unit 131.
  • the estimation unit 132 may acquire the driver-related information collected by the information collection unit 11 from the state determination unit 12. For example, it is assumed that the estimation information is the content shown in FIG. 2, and the driver-related information collected by the information collecting unit 11 is voice information reflecting a feature amount indicating the voice quality of the driver.
  • the estimation unit 132 estimates that the driver is in a normal state if there is no change in the voice quality of the driver based on the estimation information. On the other hand, the estimation unit 132 estimates that the driver is not in a normal state when there is a change in the voice quality of the driver.
  • the estimation unit 132 may determine whether or not there is a change in the voice quality of the driver by using a known voice recognition technique based on the accumulated driver-related information.
• When the state determination unit 12 has determined that the driver is in the normal state, the estimation unit 132 may estimate that the driver is in a normal state without using the estimation information.
• When the driver-related information is not the normal state driver-related information, in other words, when the state determination unit 12 has not determined that the driver is in the normal state, the estimation unit 132 estimates whether or not the driver is in a normal state based on the estimation information.
• Note that the estimation unit 132 may also estimate whether or not the driver is in the normal state based on the estimation information even when the state determination unit 12 has determined that the driver is in the normal state.
  • the estimation unit 132 outputs the estimated estimation result of whether or not the driver is in a normal state to the output unit 15.
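• A minimal sketch of such rule-based estimation, assuming each estimation rule is represented as a mapping with hypothetical "feature", "max", and "use_flag" fields:

```python
def estimate_normal(observed, rules):
    """Estimate whether the driver is in a normal state: True if every
    estimation rule whose usage rule flag is "1" is satisfied by the
    observed feature amounts.

    observed: dict mapping a feature name to its observed value
    rules: list of dicts with "feature", "max", and "use_flag" keys
    """
    for rule in rules:
        if rule["use_flag"] != 1:
            continue  # rule not designated by the usage rule flag
        value = observed.get(rule["feature"])
        if value is not None and value > rule["max"]:
            return False  # e.g. more sudden accelerations than the bound
    return True
```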
  • the storage unit 14 stores estimation information.
  • the storage unit 14 stores information related to the normal state driver.
  • the storage unit 14 is provided in the driver state estimation device 1, but this is only an example.
  • the storage unit 14 may be provided in a place outside the driver state estimation device 1 where the driver state estimation device 1 can be referred to.
  • the output unit 15 outputs the estimation result of whether or not the driver is in a normal state, which is output from the estimation unit 132, to an external device (not shown).
  • the external device is, for example, an automatic driving control device mounted on a vehicle. Even when the vehicle has an automatic driving function, the driver can drive the vehicle by himself / herself without executing the automatic driving function.
• the automatic driving control device controls the vehicle based on the above estimation result output from the output unit 15. For example, when the output unit 15 outputs an estimation result indicating that the driver is in a normal state while the vehicle is driving automatically, the automatic driving control device shifts the driving control method from the state in which automatic driving is performed to the state in which the driver performs manual driving. For example, when the output unit 15 outputs an estimation result indicating that the driver is not in a normal state, the automatic driving control device stops the vehicle.
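• The control decision described here might be sketched as follows; the mode and action names are hypothetical, not taken from the embodiment.

```python
def decide_control(current_mode, driver_is_normal):
    """Sketch of how an automatic driving control device could react to
    the estimation result output from the output unit 15."""
    if not driver_is_normal:
        return "stop_vehicle"        # driver is not in a normal state
    if current_mode == "automatic":
        return "shift_to_manual"     # hand driving over to the driver
    return "keep_manual"             # driver already driving manually
```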
  • FIG. 3 is a flowchart for explaining the operation of the driver state estimation device 1 according to the first embodiment.
  • the information collecting unit 11 collects driver-related information (step ST301).
  • the information collecting unit 11 outputs the driver-related information to the state determination unit 12.
• the state determination unit 12 determines whether the driver-related information collected by the information collection unit 11 in step ST301 is the driver-related information collected when the driver is in a normal state (step ST302).
  • FIG. 4 is a flowchart for explaining the specific operation of step ST302 of FIG.
  • the inquiry unit 121 of the state determination unit 12 makes an inquiry to the driver regarding the driver's condition (step ST401).
  • the response acquisition unit 122 of the state determination unit 12 acquires the driver's response to the inquiry made by the inquiry unit 121 in step ST401 (step ST402).
  • the response acquisition unit 122 outputs the information regarding the acquired response to the determination unit 123.
• When no response can be acquired, the response acquisition unit 122 outputs information to the effect that a response has not been acquired to the determination unit 123.
• the determination unit 123 of the state determination unit 12 determines whether the driver-related information collected by the information collection unit 11 in step ST301 of FIG. 3 is normal state driver-related information (step ST403).
• When the determination unit 123 determines the normal state based on a driver's reaction other than a response to an inquiry made by the inquiry unit 121, the operations of steps ST401 and ST402 described above are not performed in the driver state estimation device 1.
  • FIG. 5 is a flowchart illustrating a specific operation of step ST403 of FIG.
• the determination unit 123 determines whether or not a driver's reaction indicating that the driver is in a normal state has been acquired (step ST501). When no such reaction is acquired in step ST501 ("NO" in step ST501), the determination unit 123 does not determine that the driver is in the normal state (step ST504).
• the determination unit 123 outputs the driver-related information to the state estimation unit 13 in association with information indicating that it has not determined that the driver is in the normal state.
• When a driver's reaction indicating that the driver is in the normal state is acquired in step ST501 ("YES" in step ST501), the determination unit 123 determines whether or not information other than the driver's reaction indicating that the driver is in the normal state has been acquired (step ST502).
• When no information other than the driver's reaction indicating that the driver is in the normal state is acquired in step ST502 ("NO" in step ST502), the determination unit 123 does not determine that the driver is in the normal state (step ST504). The determination unit 123 outputs the driver-related information to the state estimation unit 13 in association with information indicating that it has not determined that the driver is in the normal state.
• When information other than the driver's reaction indicating that the driver is in the normal state is acquired in step ST502 ("YES" in step ST502), the determination unit 123 determines that the driver is in the normal state (step ST503).
• the determination unit 123 outputs the driver-related information to the state estimation unit 13 in association with information indicating that it has determined that the driver is in the normal state.
  • the state determination unit 12 stores the driver-related information collected by the information collection unit 11 in step ST301, in other words, the normal state driver-related information, in the storage unit 14 for each feature amount.
  • the operation of step ST502 is not essential.
• When the state determination unit 12 determines in step ST302 that the driver-related information collected by the information collection unit 11 in step ST301 is normal state driver-related information ("YES" in step ST303), the update unit 131 of the state estimation unit 13 resets the estimation information based on the normal state driver-related information output from the state determination unit 12 (step ST304).
  • the update unit 131 updates the estimation information stored in the storage unit 14 to the estimation information after resetting.
• the driver state estimation device 1 then ends the operation shown in the flowchart of FIG. 3. This is because the state determination unit 12 has obtained a driver's reaction indicating the normal state and has determined that the driver is in the normal state.
• When the state determination unit 12 cannot determine that the driver-related information collected by the information collection unit 11 in step ST301 is normal state driver-related information ("NO" in step ST303), the update unit 131 does not reset the estimation information. The operation of the driver state estimation device 1 proceeds to step ST305.
• the estimation unit 132 of the state estimation unit 13 estimates whether or not the driver is in a normal state based on the driver-related information collected by the information collection unit 11 in step ST301 and the estimation information stored in the storage unit 14 (step ST305).
  • the estimation unit 132 outputs the estimation result of whether or not the driver is in a normal state to the output unit 15.
  • the output unit 15 outputs the estimation result of whether or not the driver is in the normal state, which was output from the estimation unit 132 in step ST305, to the external device (step ST306).
  • the driver state estimation device 1 determines whether the collected driver-related information is the driver-related information collected when the driver is in the normal state, based on the reaction of the driver.
• When the driver state estimation device 1 determines that the collected driver-related information is normal state driver-related information collected when the driver is in the normal state, the driver state estimation device 1 resets the estimation information based on that normal state driver-related information.
• the driver state estimation device 1 estimates whether or not the driver is in the normal state based on the collected driver-related information and the estimation information reset based on the normal state driver-related information.
  • the driver state estimation device 1 can determine that the driver is driving in a normal state with higher accuracy than the case of determining from the biological information.
  • the operations of step ST301 and steps ST305 to ST306 are basically always performed while the driver is driving the vehicle.
• the operations of steps ST302 to ST304 are performed at appropriate timings. Therefore, when it is not a timing at which the operations of steps ST302 to ST304 are performed, the driver state estimation device 1 skips the operations of steps ST302 to ST304. Further, the operation of the driver state estimation device 1 described in the flowchart of FIG. 3 is repeated, for example, from when the driver starts driving the vehicle until the driver finishes driving the vehicle.
  • the driver state estimation device 1 resets the estimation information stored in advance based on the result of determining the normal state, and makes the estimation information the estimation information suitable for the driver. By repeating the resetting of the estimation information, the estimation information can be made more suitable for the driver.
• As described above, the driver state estimation device 1 according to the first embodiment includes: the information collection unit 11 that collects driver-related information related to the driver of the vehicle; the state determination unit 12 that determines, based on the driver's reaction, whether the driver-related information collected by the information collection unit 11 is driver-related information collected when the driver is in a normal state; and the state estimation unit 13 that estimates whether or not the driver is in a normal state based on the driver-related information collected by the information collection unit 11 and the estimation information reset based on the determination result by the state determination unit 12 that the driver-related information is driver-related information collected when the driver is in the normal state. Therefore, the driver state estimation device 1 can determine that the driver is driving in a normal state with higher accuracy than when determining from biological information.
• In the first embodiment, the driver state estimation device estimates whether or not the driver is in a normal state based on estimation information that is reset according to the reaction of the driver.
• In the second embodiment, an embodiment will be described in which the driver state estimation device estimates whether or not the driver is in a normal state based on a trained model in machine learning (hereinafter referred to as a "machine learning model") that is relearned according to the reaction of the driver.
  • the driver state estimation device according to the second embodiment is mounted on the vehicle like the driver state estimation device according to the first embodiment.
  • FIG. 6 is a diagram showing a configuration example of the driver state estimation device according to the second embodiment.
• In FIG. 6, the same reference numerals are given to configurations similar to those of the driver state estimation device 1 described with reference to FIG. 1 in the first embodiment, and duplicate explanations are omitted.
• the driver state estimation device 1a according to the second embodiment differs from the driver state estimation device 1 according to the first embodiment in that it includes a learning unit 16 and a model storage unit 17 instead of the storage unit 14.
  • the driver state estimation device 1a according to the second embodiment is different from the driver state estimation device 1 according to the first embodiment in that the state estimation unit 13a does not include the update unit 131. Further, the specific operation of the estimation unit 132a in the driver state estimation device 1a according to the second embodiment is different from the specific operation of the estimation unit 132 in the driver state estimation device 1 according to the first embodiment.
• the driver state estimation device 1a has, in advance at the time of its shipment, a machine learning model that takes driver-related information as input and outputs information for estimating whether or not the driver is in a normal state.
  • the information for estimating whether or not the driver is in a normal state is, for example, information indicating the degree to which the driver is in a normal state (hereinafter referred to as "normal degree").
  • the machine learning model is a model that has been trained to output the degree of normality represented by a numerical value from "0" to "1" by inputting driver-related information. It is assumed that the greater the degree of normality, the higher the possibility that the driver is in a normal state.
  • the machine learning model is stored in the model storage unit 17.
• the learning unit 16 retrains the machine learning model based on the determination result by the state determination unit 12 that the driver-related information is normal state driver-related information and the driver-related information corresponding to that determination result. That is, the learning unit 16 relearns the machine learning model based on the determination result that the information is normal state driver-related information and the driver-related information corresponding to the determination result.
  • the learning unit 16 may train a machine learning model by using a known algorithm for supervised learning as a learning algorithm. Specifically, the learning unit 16 may train a machine learning model composed of a neural network by, for example, so-called supervised learning.
• Supervised learning is a method in which a set of input data and a teacher label is given to a machine learning model as learning data, so that the machine learning model learns the features of the input and estimates the result for an input.
  • a neural network is composed of an input layer composed of a plurality of neurons, an intermediate layer composed of a plurality of neurons, and an output layer composed of a plurality of neurons.
  • the middle layer is also called a hidden layer.
  • the intermediate layer may be one layer or two or more layers.
• FIG. 7 is a diagram for explaining a neural network. For example, in the case of a three-layer neural network as shown in FIG. 7, learning is performed by so-called supervised learning so that the driver's normality is output, using as learning data a set of the driver-related information collected by the information collection unit 11 and the normality based on the determination result by the state determination unit 12 that the driver-related information is normal state driver-related information.
• Here, the normality based on the determination result that the information is normal state driver-related information is set to the normality "1", which indicates that the driver is in the normal state.
• That is, the driver-related information is input to the input layer, and learning is performed by adjusting the weights W1 and W2 so that the driver's normality output from the output layer approaches "1", which indicates the normal state.
• When the determination result that the driver-related information is normal state driver-related information is output from the state determination unit 12, the learning unit 16 gives the set of the driver-related information collected by the information collection unit 11 and the normality "1" to the machine learning model as learning data, and makes the machine learning model perform the learning described above. More specifically, when the state determination unit 12 outputs the determination result that the driver-related information is normal state driver-related information, the learning unit 16 relearns the machine learning model stored in advance.
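• The relearning step described above can be illustrated with a tiny three-layer network with weights W1 and W2, trained toward the teacher label (normality "1") by gradient descent. Everything beyond "three layers, weights W1 and W2, teacher label 1" (the layer sizes, the sigmoid activation, the learning rate) is an assumption for illustration, not taken from the embodiment.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W1, W2):
    """Three-layer network: input -> hidden (weights W1) -> output
    (weights W2). Returns hidden activations and the normality."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = sigmoid(sum(w * hj for w, hj in zip(W2, h)))
    return h, y

def relearn_step(x, W1, W2, teacher=1.0, lr=0.5):
    """One supervised update: adjust W1 and W2 so that the output
    normality for input x moves toward the teacher label "1"."""
    h, y = forward(x, W1, W2)
    delta = (y - teacher) * y * (1.0 - y)  # output-layer error term
    hidden_deltas = [delta * W2[j] * h[j] * (1.0 - h[j])
                     for j in range(len(h))]
    for j, hj in enumerate(h):             # adjust weights W2
        W2[j] -= lr * delta * hj
    for j, row in enumerate(W1):           # adjust weights W1
        for i, xi in enumerate(x):
            row[i] -= lr * hidden_deltas[j] * xi
    return y
```

• Repeating `relearn_step` on driver-related information determined to be normal state driver-related information pushes the model's output normality for that driver toward "1".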
• In the second embodiment, when the state determination unit 12 determines that the driver-related information collected by the information collection unit 11 is normal state driver-related information, the state determination unit 12 outputs the normal state driver-related information to the learning unit 16 in association with the determination result that the information is normal state driver-related information, instead of storing it in the storage unit 14.
• the estimation unit 132a of the state estimation unit 13a estimates whether or not the driver is in a normal state based on the driver-related information collected by the information collection unit 11 and the machine learning model stored in the model storage unit 17. Specifically, the estimation unit 132a inputs the driver-related information collected by the information collection unit 11 into the machine learning model, and estimates whether or not the driver is in a normal state. For example, if the normality output from the machine learning model is equal to or higher than a preset threshold value (hereinafter referred to as the "degree determination threshold value"), the estimation unit 132a estimates that the driver is in a normal state.
• When the learning unit 16 relearns the machine learning model, the estimation unit 132a estimates whether or not the driver is in a normal state based on the driver-related information collected by the information collection unit 11 and the machine learning model relearned by the learning unit 16.
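• The inference with the degree determination threshold value might look like the sketch below; the model is stood in for by any callable returning a normality between "0" and "1", and the threshold value of 0.7 is a hypothetical choice.

```python
DEGREE_DETERMINATION_THRESHOLD = 0.7  # hypothetical preset value

def estimate_with_model(model, driver_related_info,
                        threshold=DEGREE_DETERMINATION_THRESHOLD):
    """Input the collected driver-related information into the machine
    learning model and estimate whether the driver is in a normal
    state: True when the output normality is at or above the
    degree determination threshold value."""
    normality = model(driver_related_info)
    return normality >= threshold
```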
  • the estimation unit 132a may acquire the driver-related information collected by the information collection unit 11 from the state determination unit 12.
  • the model storage unit 17 stores the machine learning model.
  • the model storage unit 17 is provided in the driver state estimation device 1a, but this is only an example.
  • the model storage unit 17 may be provided in a place outside the driver state estimation device 1a where the driver state estimation device 1a can be referred to.
  • FIG. 8 is a flowchart for explaining the operation of the driver state estimation device 1a according to the second embodiment.
• the specific operations of steps ST804 to ST805 of FIG. 8 are different from the specific operations of steps ST304 to ST305 of FIG. 3 described in the first embodiment. Since the specific operations of steps ST801 to ST803 and step ST806 of FIG. 8 are the same as the specific operations of steps ST301 to ST303 and step ST306 of FIG. 3 described in the first embodiment, respectively, duplicate descriptions are omitted.
• When the state determination unit 12 determines that the driver-related information collected by the information collection unit 11 in step ST801 is normal state driver-related information, the normal state driver-related information is associated with the determination result and output to the learning unit 16.
• When the state determination unit 12 determines in step ST802 that the driver-related information collected by the information collection unit 11 in step ST801 is normal state driver-related information ("YES" in step ST803), the learning unit 16 relearns the machine learning model based on the determination result that the information is normal state driver-related information and the driver-related information corresponding to the determination result (step ST804).
• the driver-related information corresponding to the determination result is the driver-related information determined by the state determination unit 12 to be normal state driver-related information, that is, the driver-related information collected by the information collection unit 11 in the immediately preceding step ST801.
• After step ST804, the driver state estimation device 1a ends the operation shown in the flowchart of FIG. 8, because the driver is determined to be in a normal state.
• When the state determination unit 12 cannot determine that the driver-related information collected by the information collection unit 11 in step ST801 is normal state driver-related information ("NO" in step ST803), the learning unit 16 does not retrain the machine learning model. The operation of the driver state estimation device 1a proceeds to step ST805.
• the estimation unit 132a estimates whether or not the driver is in a normal state based on the driver-related information collected by the information collection unit 11 in step ST801 and the machine learning model stored in the model storage unit 17 (step ST805). Specifically, the estimation unit 132a inputs the driver-related information collected by the information collection unit 11 into the machine learning model, and estimates whether or not the driver is in a normal state. For example, if the normality output from the machine learning model is equal to or greater than the degree determination threshold value, the estimation unit 132a estimates that the driver is in a normal state. The estimation unit 132a outputs the estimation result of whether or not the driver is in a normal state to the output unit 15.
• the operations of step ST801 and steps ST805 to ST806 are basically always performed while the driver is driving the vehicle.
• the operations of steps ST802 to ST804 are performed at appropriate timings. Therefore, when it is not a timing at which the operations of steps ST802 to ST804 are performed, the driver state estimation device 1a skips the operations of steps ST802 to ST804. Further, the operation of the driver state estimation device 1a described in the flowchart of FIG. 8 is repeated, for example, from when the driver starts driving the vehicle until the driver finishes driving the vehicle.
  • the driver state estimation device 1a relearns a machine learning model stored in advance based on the result of determining the normal state, thereby making the machine learning model a machine learning model suitable for the driver. By repeating the re-learning of the machine learning model, the accuracy of the machine learning model can be improved and the model can be made more suitable for the driver.
• the driver state estimation device 1a determines whether the collected driver-related information is the driver-related information collected when the driver is in the normal state, based on the reaction of the driver.
• When the driver state estimation device 1a determines that the collected driver-related information is normal state driver-related information, the machine learning model is retrained based on the determination result and the driver-related information corresponding to the determination result.
  • the driver state estimation device 1a estimates whether or not the driver is in a normal state based on the collected driver-related information and the relearned machine learning model. As a result, the driver state estimation device 1a can determine that the driver is driving in a normal state with higher accuracy than the case of determining from the biological information.
  • the model storage unit 17 may be provided in a place outside the driver state estimation device 1a where the driver state estimation device 1a can be referred to.
• For example, the model storage unit 17 may be provided in a server (not shown) connected to the driver state estimation device 1a via a network, and the driver state estimation device 1a may acquire the machine learning model from the server.
  • the server is connected to a plurality of driver state estimation devices 1a, and the model storage unit 17 on the server uses a plurality of machine learning models relearned by the plurality of driver state estimation devices 1a. It may be something that you remember.
• the driver state estimation device 1a may select a machine learning model from the model storage unit 17 on the server, and estimate whether or not the driver is in a normal state based on the selected machine learning model.
• Specifically, the driver state estimation device 1a selects from the model storage unit 17 the machine learning model relearned by the learning unit 16 when the driver drove in the past, and estimates whether or not the driver is in a normal state.
  • the model storage unit 17 stores the machine learning model in association with information that can identify the driver.
  • the driver state estimation device 1a identifies the machine learning model to be selected based on the information that can identify the driver.
• Since the driver state estimation device 1a selects from the model storage unit 17 on the server the machine learning model that was relearned when the driver drove in the past, even when the vehicle the driver drives is changed, it is possible to estimate whether or not the driver is in a normal state by using a machine learning model relearned for that driver.
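• Storing and selecting machine learning models keyed by driver-identifying information, as described above, can be sketched as follows; the dict standing in for the model storage unit 17 on the server, and the function names, are hypothetical.

```python
# A dict stands in for the model storage unit 17 on the server:
# machine learning models stored in association with information
# that can identify the driver.
model_storage = {}

def store_model(driver_id, model):
    """Store a relearned model under the driver's identifier."""
    model_storage[driver_id] = model

def select_model(driver_id, default_model):
    """Select the model relearned for this driver if one exists;
    otherwise fall back to the pre-shipped default model."""
    return model_storage.get(driver_id, default_model)
```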
  • the learning unit 16 uses a known algorithm for supervised learning as a learning algorithm to train a machine learning model, but this is only an example.
• For example, the learning unit 16 may use deep learning, which learns the extraction of the feature amount itself, as a learning algorithm, or may train the machine learning model according to another known method.
  • Other known methods are, for example, genetic programming, functional logic programming, or support vector machines.
  • the learning unit 16 is provided in the driver state estimation device 1a, but this is only an example.
  • the learning unit 16 may be provided in an external device of the driver state estimation device 1a, which is connected to the driver state estimation device 1a.
• FIG. 9 is a diagram showing a configuration example of the driver state estimation device 1b and the learning device 2 in the case where, in the second embodiment, the learning unit 16 is provided in a learning device 2 outside the driver state estimation device 1b.
  • the learning device 2 is mounted on the vehicle like the driver state estimation device 1b.
  • the learning device 2 may include a learning unit 16, an information collecting unit 11, a state determination unit 12, and a model storage unit 17.
  • the driver state estimation device 1b and the learning device 2 are connected via a network.
  • the driver state estimation device 1b and the learning device 2 each include an information collecting unit 11, but this is only an example.
• Alternatively, the learning device 2 may not include the information collection unit 11, and may instead include an information acquisition unit (not shown) that acquires the collected driver-related information from the information collection unit 11 of the driver state estimation device 1b.
• In FIG. 9, the same reference numerals are given to configurations similar to those of the driver state estimation device 1a shown in FIG. 6, and duplicate explanations are omitted.
  • the state determination unit 12 and the model storage unit 17 are provided in the learning device 2, but this is only an example.
  • the state determination unit 12 and the model storage unit 17 may be provided in, for example, the driver state estimation device 1b.
  • the model storage unit 17 may be provided in, for example, a server connected to the driver state estimation device 1b and the learning device 2 via a network.
• As described above, the driver state estimation device 1b according to the second embodiment includes: the information collection unit 11 that collects driver-related information related to the driver of the vehicle; the state determination unit 12 that determines, based on the driver's reaction, whether the driver-related information collected by the information collection unit 11 is driver-related information collected when the driver is in a normal state; and the state estimation unit 13a that inputs the driver-related information collected by the information collection unit 11 into the machine learning model relearned based on the determination result by the state determination unit 12 that the driver-related information is driver-related information collected when the driver is in the normal state and the driver-related information corresponding to the determination result, and estimates whether or not the driver is in a normal state. Therefore, the driver state estimation device 1b can determine that the driver is driving in a normal state with higher accuracy than when determining from biological information.
• In the above description, the driver state estimation devices 1, 1a, and 1b or the learning device 2 are in-vehicle devices mounted on the vehicle, and the information collection unit 11, the state determination unit 12, the state estimation units 13 and 13a, the output unit 15, and the learning unit 16 are provided in the driver state estimation devices 1, 1a, and 1b or the learning device 2.
• Alternatively, some of the information collection unit 11, the state determination unit 12, the state estimation units 13 and 13a, the output unit 15, and the learning unit 16 may be mounted on the in-vehicle device of the vehicle while the others are provided in a server connected to the in-vehicle device via a network, so that the in-vehicle device and the server constitute a driver state estimation system. Further, the information collection unit 11, the state determination unit 12, the state estimation units 13 and 13a, the output unit 15, and the learning unit 16 may all be provided in the server. In this case, for example, the information collection unit 11 collects driver-related information from the in-vehicle device via the network, and the output unit 15 outputs the estimation result of whether or not the driver is in a normal state to the in-vehicle device via the network.
  • FIGS. 10A and 10B are diagrams showing an example of the hardware configuration of the driver state estimation devices 1 and 1a according to the first and second embodiments.
  • The functions of the information collecting unit 11, the state determination unit 12, the state estimation units 13, 13a, the output unit 15, and the learning unit 16 are realized by a processing circuit 1001. That is, the driver state estimation devices 1 and 1a include the processing circuit 1001 for performing control to determine, based on the driver's reaction, whether or not the collected driver-related information is driver-related information collected when the driver is in the normal state, and to estimate whether or not the driver is in the normal state based on an inference rule reset based on the determination result or a machine learning model relearned based on the determination result.
  • The processing circuit 1001 may be dedicated hardware as shown in FIG. 10A, or may be a CPU (Central Processing Unit) 1005 that executes a program stored in the memory 1006, as shown in FIG. 10B.
  • The processing circuit 1001 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these.
  • When the processing circuit is the CPU 1005, the functions of the information collecting unit 11, the state determination unit 12, the state estimation units 13, 13a, the output unit 15, and the learning unit 16 are realized by software, firmware, or a combination of software and firmware. That is, the information collecting unit 11, the state determination unit 12, the state estimation units 13, 13a, the output unit 15, and the learning unit 16 are realized by the processing circuit 1001, such as the CPU 1005 that executes a program stored in an HDD (Hard Disk Drive) 1002, the memory 1006, or the like, or a system LSI (Large-Scale Integration).
  • It can also be said that the program stored in the HDD 1002, the memory 1006, or the like causes a computer to execute the procedures or methods of the information collecting unit 11, the state determination unit 12, the state estimation units 13, 13a, the output unit 15, and the learning unit 16.
  • Here, the memory 1006 corresponds to, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory), a magnetic disk, a flexible disk, an optical disk, a compact disc, a mini disc, a DVD (Digital Versatile Disc), or the like.
  • Some of the functions of the information collecting unit 11, the state determination unit 12, the state estimation units 13, 13a, the output unit 15, and the learning unit 16 may be realized by dedicated hardware, and others by software or firmware.
  • For example, the functions of the information collecting unit 11 and the output unit 15 can be realized by the processing circuit 1001 as dedicated hardware, and the functions of the state determination unit 12, the state estimation units 13, 13a, and the learning unit 16 can be realized by the processing circuit 1001 reading and executing a program stored in the memory 1006.
  • The storage unit 14 and the model storage unit 17 use the memory 1006. Note that this is an example; the storage unit 14 and the model storage unit 17 may instead be composed of an HDD 1002, an SSD (Solid State Drive), a DVD, or the like.
  • The driver state estimation devices 1 and 1a also include an input interface device 1003 and an output interface device 1004 that perform wired or wireless communication with a device such as a server (not shown).
  • Since the driver state estimation device of the present disclosure is configured to be able to determine that the driver is driving in the normal state with higher accuracy than when the determination is made from biological information, it can be applied to a driver state estimation device that estimates the driver's state.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

A driver state estimation device according to the present invention comprises: an information collecting unit (11) that collects driver-related information relating to a driver of a vehicle; a state determination unit (12) that determines, based on a reaction of the driver, whether or not the driver-related information collected by the information collecting unit (11) is driver-related information collected when the driver is in a normal state; and a state estimation unit (13) that estimates whether or not the driver is in the normal state on the basis of the driver-related information collected by the information collecting unit (11) and estimation information that has been reset based on a determination result from the state determination unit (12) indicating that the driver-related information is driver-related information collected when the driver is in the normal state.
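The three units of the abstract can be sketched as a toy pipeline. All function names, signal names, and threshold values here are hypothetical illustrations of the collect → determine (from the driver's reaction) → reset estimation information → estimate flow, not the claimed implementation.

```python
# Illustrative sketch only (hypothetical names and values): the three units of
# the abstract as plain functions.

def determine_state(reaction_time_s: float, limit_s: float = 1.5) -> bool:
    """State determination unit (12): a prompt answered quickly -> normal state."""
    return reaction_time_s <= limit_s


def reset_estimation_info(normal_eye_open_ratios) -> float:
    """Reset estimation information (a baseline) from data judged normal-state."""
    return sum(normal_eye_open_ratios) / len(normal_eye_open_ratios)


def estimate_is_normal(eye_open_ratio: float, baseline: float,
                       tol: float = 0.2) -> bool:
    """State estimation unit (13): compare collected info against the baseline."""
    return abs(eye_open_ratio - baseline) <= tol


# Information collecting unit (11): hypothetical (eye-open ratio, reaction time)
# pairs of driver-related information.
collected = [(0.82, 0.9), (0.78, 1.2), (0.80, 1.0)]
normal = [eye for eye, rt in collected if determine_state(rt)]
baseline = reset_estimation_info(normal)

print(estimate_is_normal(0.79, baseline))  # near the reset baseline
print(estimate_is_normal(0.30, baseline))  # well below it
```

The design point the abstract makes is that the estimation information is not fixed: it is reset from data that the driver's own reaction has verified as normal-state data.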
PCT/JP2020/009315 2020-03-05 2020-03-05 Driver state estimation device and driver state estimation method WO2021176633A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/009315 WO2021176633A1 (fr) 2020-03-05 2020-03-05 Driver state estimation device and driver state estimation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/009315 WO2021176633A1 (fr) 2020-03-05 2020-03-05 Driver state estimation device and driver state estimation method

Publications (1)

Publication Number Publication Date
WO2021176633A1 true WO2021176633A1 (fr) 2021-09-10

Family

ID=77613273

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/009315 WO2021176633A1 (fr) 2020-03-05 2020-03-05 Driver state estimation device and driver state estimation method

Country Status (1)

Country Link
WO (1) WO2021176633A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009082655A (ja) * 2007-10-03 2009-04-23 Toyota Motor Corp Physiological information detection device, physiological information calculation unit, and physiological information detection method
JP2009205645A (ja) * 2008-02-29 2009-09-10 Equos Research Co Ltd Driver model creation device
WO2010032491A1 (fr) * 2008-09-19 2010-03-25 Panasonic Corporation Inattention detection device, inattention detection method, and computer program
JP2016007989A (ja) * 2014-06-26 2016-01-18 Clarion Co., Ltd. Vehicle control device and vehicle control method
JP2019021229A (ja) * 2017-07-21 2019-02-07 Sony Semiconductor Solutions Corporation Vehicle control device and vehicle control method

Similar Documents

Publication Publication Date Title
CN111741884B (zh) Traffic distress and road rage detection method
US11937929B2 Systems and methods for using mobile and wearable video capture and feedback platforms for therapy of mental disorders
JP6577642B2 (ja) Computer-based method and system for providing active and automatic personal assistance using an automobile or a portable electronic device
EP3583485B1 (fr) Computationally efficient human-identifying smart assistant computer
JP6761598B2 (ja) Emotion estimation system and emotion estimation model generation system
US9165280B2 (en) Predictive user modeling in user interface design
US20190370580A1 (en) Driver monitoring apparatus, driver monitoring method, learning apparatus, and learning method
US20200310528A1 (en) Vehicle system for providing driver feedback in response to an occupant's emotion
EP3751474A1 (fr) Evaluation device, action control device, evaluation method, and evaluation program
JP2017168097A (ja) System and method for providing situation-specific vehicle driver communication
KR102455056B1 (ko) Electronic device for providing output information of an event according to context and control method thereof
WO2019208450A1 (fr) Driving assistance device, driving assistance method, and program
EP3296944A1 (fr) Information processing device, information processing method, and program
JP7303901B2 (ja) Proposal system for selecting a driver from a plurality of candidates
JP7180139B2 (ja) Robot, robot control method, and program
US20170209796A1 (en) Human social development tool
US11697420B2 (en) Method and device for evaluating a degree of fatigue of a vehicle occupant in a vehicle
JP6552548B2 (ja) Point proposal device and point proposal method
US11430231B2 (en) Emotion estimation device and emotion estimation method
KR102499379B1 (ko) Electronic device and method of acquiring feedback information thereof
WO2021176633A1 (fr) Driver state estimation device and driver state estimation method
US20230404456A1 (en) Adjustment device, adjustment system, and adjustment method
JP6657048B2 (ja) Processing result abnormality detection device, processing result abnormality detection program, processing result abnormality detection method, and moving body
JP6099845B1 (ja) Information device, navigation device, work procedure guidance device, and load status determination method
US20220036048A1 (en) Emotion-recognition-based service provision apparatus for vehicle and method of controlling the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20922950

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20922950

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP