WO2022172724A1 - Information processing device, information processing method, and information processing program - Google Patents
Information processing device, information processing method, and information processing program
- Publication number: WO2022172724A1 (PCT/JP2022/002127)
- Authority: WIPO (PCT)
- Prior art keywords
- driver
- unit
- vehicle
- eyeball
- information processing
- Prior art date
Classifications
- A61B3/113 — Apparatus for testing the eyes; objective types (independent of the patient's perceptions or reactions) for determining or recording eye movement
- A61B5/16 — Measuring for diagnostic purposes; devices for psychotechnics, testing reaction times, or evaluating the psychological state
- A61B5/163 — Evaluating the psychological state by tracking eye movement, gaze, or pupil change
- A61B5/18 — Devices for psychotechnics or evaluating the psychological state, for vehicle drivers or machine operators
- B60W50/08 — Details of control systems for road vehicle drive control; interaction between the driver and the control system
- B60W60/0053 — Drive control systems specially adapted for autonomous road vehicles; handover processes from vehicle to occupant
Definitions
- the present disclosure relates to an information processing device, an information processing method, and an information processing program.
- the above system determines the driver's readiness level for returning to the manual driving mode, and the switch to manual driving should be executed only if it is determined that such a return is possible. Therefore, for example, in the above system, it is conceivable to detect the driver's arousal level by analyzing eyeball behavior, which is considered to reflect the results of activities such as recognition in the human brain, and to use this analysis as one of the means for determining the driver's readiness to return to the manual driving mode.
- EVS: event vision sensor
- the present disclosure proposes an information processing device, an information processing method, and an information processing program capable of accurately observing eyeball behavior while suppressing an increase in the amount of data.
- according to the present disclosure, an information processing device is provided that includes an event vision sensor that captures an image of the interior of a mobile body and a sensor control unit that controls the event vision sensor. The event vision sensor has a pixel array section with a plurality of pixels arranged in a matrix and an event detection section that detects that the amount of luminance change due to incident light in each pixel exceeds a predetermined threshold. The sensor control unit changes the value of the predetermined threshold when the event vision sensor captures the eyeball behavior of a driver sitting in the driver's seat of the mobile body.
- also provided is an information processing method executed by an information processing device that includes an event vision sensor that captures an image of the interior of a mobile body and a sensor control unit that controls the event vision sensor, the event vision sensor having a pixel array section with a plurality of pixels arranged in a matrix and an event detection section that detects that the amount of luminance change due to incident light in each pixel exceeds a predetermined threshold. The method includes changing the value of the predetermined threshold when the event vision sensor captures the eyeball behavior of a driver sitting in the driver's seat of the mobile body.
- further provided is an information processing program that causes a computer to execute a function of controlling an event vision sensor that captures an image of the interior of a mobile body, the event vision sensor having a pixel array section with a plurality of pixels arranged in a matrix and an event detection section that detects that the amount of luminance change due to incident light in each pixel exceeds a predetermined threshold. The program executes a function of changing the value of the predetermined threshold when the event vision sensor captures the eyeball behavior of a driver sitting in the driver's seat of the mobile body.
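The threshold-switching idea in the claims above can be sketched in Python. This is an illustrative toy model, not the patent's sensor circuit: each pixel fires an event when its log-luminance change since the last event exceeds the threshold, and a controller lowers the threshold (the values 0.30 and 0.05 are hypothetical) while eyeball behavior is being captured, so that small, fast movements such as microsaccades produce events.

```python
import math

ON, OFF = 1, -1  # event polarity: luminance increase / decrease


class EventVisionSensor:
    """Toy model of an EVS pixel array: each pixel emits an event when the
    log-luminance change since its last event exceeds a threshold."""

    def __init__(self, height, width, threshold):
        self.threshold = threshold  # the "predetermined threshold" of the claims
        # per-pixel reference log-luminance, updated whenever a pixel fires
        self.reference = [[0.0] * width for _ in range(height)]

    def process(self, luminance):
        """luminance: 2-D list of pixel values; returns (row, col, polarity) events."""
        events = []
        for r, row in enumerate(luminance):
            for c, value in enumerate(row):
                diff = math.log1p(value) - self.reference[r][c]
                if abs(diff) > self.threshold:
                    events.append((r, c, ON if diff > 0 else OFF))
                    self.reference[r][c] += diff  # reset the reference at the event
        return events


class SensorController:
    """Switches the EVS threshold between a coarse cabin-monitoring value and a
    finer value while capturing eyeball behavior (both values hypothetical)."""

    def __init__(self, sensor, normal_threshold=0.30, eyeball_threshold=0.05):
        self.sensor = sensor
        self.normal_threshold = normal_threshold
        self.eyeball_threshold = eyeball_threshold
        sensor.threshold = normal_threshold

    def set_eyeball_mode(self, enabled):
        self.sensor.threshold = (
            self.eyeball_threshold if enabled else self.normal_threshold
        )
```

Lowering the threshold increases event volume, which is why the disclosure couples it to the eyeball-observation phase only: at the coarse threshold the sensor suppresses data while still reporting large cabin changes.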
- FIG. 1 is an explanatory diagram for explaining an example of automatic driving levels according to an embodiment of the present disclosure.
- FIG. 2 is a flowchart for explaining an example of traveling according to an embodiment of the present disclosure.
- FIG. 3 is an explanatory diagram for explaining an example of automatic driving level transitions according to an embodiment of the present disclosure.
- 4 is a flowchart illustrating an example of monitoring operation according to an embodiment of the present disclosure
- 1 is an explanatory diagram for describing an example of a detailed configuration of a vehicle control system 100 according to an embodiment of the present disclosure
- FIG. 4A and 4B are diagrams showing examples of installation positions of imaging devices included in the sensor unit 113.
- FIG. 3 is an explanatory diagram for explaining examples of various sensors included in the sensor unit 113;
- FIG. 4 is an explanatory diagram for explaining an example of a unit that executes determination of a driver's alertness level, according to an embodiment of the present disclosure
- FIG. 4 is an explanatory diagram for explaining details of an operation example of an eyeball behavior analysis unit 300 according to an embodiment of the present disclosure
- FIG. 10 is a block diagram showing an example configuration of the EVS 400 used in an embodiment of the present disclosure
- FIG. 11 is a block diagram showing an example of a configuration of a pixel 502 located in the pixel array section 500 of the EVS 400 shown in FIG. 10
- FIG. 12 is an explanatory diagram (part 1) for explaining installation positions of imaging devices 700 and 702 according to an embodiment of the present disclosure
- FIG. 13 is an explanatory diagram (part 2) for explaining installation positions of the imaging devices 700 and 702 according to the embodiment of the present disclosure
- FIG. 14 is an explanatory diagram (part 3) for explaining installation positions of the imaging devices 700 and 702 according to the embodiment of the present disclosure
- FIG. 4 is an explanatory diagram for explaining an example of a unit that observes eyeball behavior of a driver, according to an embodiment of the present disclosure
- 1 is a flowchart (Part 1) illustrating an example of an information processing method according to an embodiment of the present disclosure
- 2 is a flowchart (part 2) illustrating an example of an information processing method according to an embodiment of the present disclosure
- FIG. 7 is an explanatory diagram for explaining an example of observation data observed by an imaging device 702;
- FIG. 3 is an explanatory diagram for explaining a partial configuration of the vehicle control system 100 for executing the process of determining the awakening level of the driver;
- FIG. 10 is an explanatory diagram for explaining an example of the trajectory of a driver's eyeball behavior when a visual task of viewing information is presented;
- FIG. 10 is a flowchart (part 1) of an information processing method of a driver's awakening level determination process;
- FIG. 2 is a flowchart (part 2) of an information processing method for driver's awakening level determination processing;
- 2 is a hardware configuration diagram showing an example of a computer 1000 that implements at least part of the functions of an automatic driving control unit 112.
- in the embodiment of the present disclosure, a case where it is applied to automatic driving of an automobile will be described as an example; however, the embodiment of the present disclosure is not limited to automobiles, and can be applied to moving bodies such as electric vehicles, hybrid electric vehicles, motorcycles, personal mobility devices, airplanes, ships, construction machinery, and agricultural machinery (tractors). Furthermore, in the embodiment of the present disclosure, the steering mode of the moving body can be switched between a manual driving mode and an automatic driving mode in which one or more driving tasks are automatically performed. Moreover, the present technology may be widely applied not only to mobile bodies but also to automatic control devices in which a monitoring operator needs to intervene as appropriate.
- FIG. 1 is an explanatory diagram for explaining an example of automatic driving levels.
- FIG. 1 shows automated driving levels defined by the SAE (Society of Automotive Engineers).
- the automatic driving levels defined by the SAE are basically referred to.
- however, since the issues involved in and the validity of the widespread adoption of automated driving technology have not yet been thoroughly examined, some points below are not necessarily explained in accordance with the SAE definitions.
- vehicle driving is not roughly divided into manual driving and automatic driving as described above, but is classified step by step according to the content of the tasks automatically performed by the system side.
- the automatic driving level is classified into five stages, for example, from level 0 to level 4.
- automated driving level 0 is manual driving (direct steering by the driver) without driving assistance by the vehicle control system; the driver performs all driving tasks and also performs the monitoring related to safe driving (e.g., avoiding danger).
- automatic driving level 1 is manual driving (direct steering) in which driving assistance by the vehicle control system (automatic braking, ACC (Adaptive Cruise Control), LKAS (Lane Keeping Assist System), etc.) can be executed; the driver performs all driving tasks other than the assisted single function and also performs the monitoring related to safe driving.
- automated driving level 2 is also called "partial driving automation"; under certain conditions, the vehicle control system executes subtasks of the driving task related to vehicle control in both the longitudinal and lateral directions of the vehicle.
- the vehicle control system controls both steering operation and acceleration/deceleration in cooperation (for example, cooperation between ACC and LKAS).
- the subject of driving task execution is basically the driver, and the subject of monitoring related to safe driving is also the driver.
- automated driving level 3 is also called "conditional automated driving"; the vehicle control system can execute all driving tasks within a limited area, under conditions that the functions installed in the vehicle can handle.
- the subject of driving task execution is the vehicle control system
- the subject of monitoring related to safe driving is also basically the vehicle control system.
- the vehicle control system is not required to deal with all situations.
- the user (driver) is expected to respond appropriately to requests for intervention from the vehicle control system, and is also required to deal with system failures called silent failures.
- automated driving level 4 is also called "advanced driving automation"; the vehicle control system performs all driving tasks within a limited area.
- the subject of driving tasks is the vehicle control system
- the subject of monitoring related to safe driving is also the vehicle control system.
- at automated driving level 4, the driver is not expected to perform driving operations (manual driving) in response to requests from the vehicle control system side due to system failure or the like. Therefore, at automated driving level 4, the driver can perform secondary tasks as described above and, depending on the situation, can, for example, take a temporary nap in a section where the conditions are met.
- the vehicle will run in an automated driving mode in which the vehicle control system performs all driving tasks.
- however, since the situation changes dynamically depending on the actual state of road infrastructure maintenance and the like, it may become clear during travel that automatic driving level 4 cannot be applied to part of the travel route.
- it is required to set and transition to automatic driving level 2 or lower, which is permitted depending on the conditions.
- the driver is required to proactively execute the driving task.
- since the situation changes moment by moment during the itinerary as described above, a transition to automatic driving level 2 or lower can occur. Therefore, after being notified of the transition of the automatic driving level at an appropriate advance-notice timing, the driver is required to move from the secondary task to a ready state in which a return to manual driving is possible.
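The division of roles described above can be encoded roughly as follows. This is a sketch: the enum names and helper functions are ours, grounded only in the statements above about levels 0 to 4 (secondary tasks are confirmed as permitted only at level 4 and as violations at level 2 or below, so level 3 is treated conservatively).

```python
from enum import IntEnum


class AdLevel(IntEnum):
    """SAE-style automated driving levels as described above (levels 0-4)."""
    L0_NO_AUTOMATION = 0
    L1_DRIVER_ASSISTANCE = 1
    L2_PARTIAL_AUTOMATION = 2
    L3_CONDITIONAL_AUTOMATION = 3
    L4_ADVANCED_AUTOMATION = 4


def driving_task_subject(level):
    """Who primarily executes the driving task at each level."""
    return "system" if level >= AdLevel.L3_CONDITIONAL_AUTOMATION else "driver"


def secondary_tasks_allowed(level):
    """Secondary tasks such as reading become violations at level 2 or below;
    the text confirms them as permitted only at level 4."""
    return level == AdLevel.L4_ADVANCED_AUTOMATION


def driver_must_stay_ready(level):
    """At level 3 the driver must remain ready to respond to intervention
    requests; at level 4 no such response is expected."""
    return level == AdLevel.L3_CONDITIONAL_AUTOMATION
```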
- FIG. 2 is a flowchart for explaining an example of running according to the embodiment of the present disclosure.
- the vehicle control system executes steps from step S11 to step S18, for example. Details of each of these steps are described below.
- the vehicle control system executes driver authentication (step S11).
- the driver authentication can be carried out by possession-based authentication such as a driver's license or a vehicle key (including a portable wireless device), knowledge-based authentication such as a password or personal identification number, or biometric authentication such as face, fingerprint, iris, or voiceprint.
- the driver authentication may be performed using all of property authentication, knowledge authentication, and biometrics authentication, or two of them in combination.
- driver authentication is performed before starting driving so that, even when a plurality of drivers drive the same vehicle, information unique to each driver, such as each driver's eyeball behavior and history, can be obtained in association with that driver.
- the driver or the like operates, for example, the input unit 101 (see FIG. 3), which will be described later, to set the destination (step S12).
- the vehicle control system may set the destination in advance based on destination information and calendar information manually input to a smartphone or the like (assuming communication with the vehicle control system is possible) before boarding the vehicle.
- alternatively, the vehicle control system may acquire schedule information or the like stored in advance on a smartphone or a cloud server (assuming communication with the vehicle control system is possible) via a concierge service, and the destination may be preset on that basis.
- the vehicle control system performs pre-planning settings such as the travel route based on the set destination. Furthermore, the vehicle control system acquires and updates local dynamic map (LDM) information and the like, which is constantly updated high-density travel map information for the roads on which the vehicle travels, such as information on the road environment of the set travel route. At this time, the vehicle control system repeatedly acquires the LDM and the like corresponding to the section to be traveled next, for each fixed section, as travel proceeds along the itinerary. In addition, the vehicle control system appropriately updates and resets the automatic driving level appropriate for each section on the travel route based on the latest acquired LDM information and the like.
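The per-section replanning described in this step can be sketched as follows. `RouteSection` and the LDM lookup are hypothetical names; the only behavior taken from the text is clamping each section's planned automatic driving level to what the freshly acquired LDM permits, and noting where a takeover to a lower level will be needed.

```python
from dataclasses import dataclass


@dataclass
class RouteSection:
    name: str
    planned_level: int  # automated driving level currently planned for this section


def refresh_plan(sections, ldm_allowed_level):
    """Clamp each section's planned level to the latest LDM-derived allowance.

    ldm_allowed_level: mapping from section name to the maximum automated
    driving level the freshly acquired LDM permits there (hypothetical API).
    Returns the sections whose plan had to be downgraded, i.e. where a
    transition to a lower level (possibly manual driving) will be required.
    """
    downgraded = []
    for section in sections:
        # unknown section: conservatively assume manual driving (level 0)
        allowed = ldm_allowed_level.get(section.name, 0)
        if allowed < section.planned_level:
            section.planned_level = allowed
            downgraded.append(section)
    return downgraded
```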
- the vehicle control system will start displaying the travel section on the travel route. Then, the vehicle control system starts running according to the set automatic driving level (step S13). Note that when the vehicle starts running, the display of the running section is updated based on the position information of the vehicle (own vehicle) and the acquired LDM update information.
- this driving includes automatic safety measures for cases where the driver cannot return from automatic driving to manual driving; more specifically, it also includes, for example, stops by an MRM (Minimal Risk Maneuver) or the like determined by the vehicle control system.
- the vehicle control system appropriately executes monitoring (observation) of the driver's condition (step S14).
- monitoring is performed, for example, to obtain training data for determining the driver's recovery readiness level.
- this monitoring is executed, in light of changes in the driving environment over time, in situations such as the following: checking in advance the driver's condition, which is needed in order to switch the driving mode according to the automatic driving level set for each section on the travel route (including a request to return to manual driving that arises unexpectedly after the start of the itinerary); checking whether the return notification was given at the appropriate timing; and checking whether the driver responded to the notification or warning and carried out the return action appropriately.
- in step S15, when the vehicle reaches a switching point from the automatic driving mode to the manual driving mode based on the automatic driving level set for each section on the driving route, the vehicle control system determines whether or not the driving mode can be switched (step S15). When the vehicle control system determines that the driving mode can be switched (step S15: Yes), the process proceeds to step S16; when it determines that the driving mode cannot be switched (step S15: No), the process proceeds to, for example, step S18.
- in step S16, the vehicle control system switches the driving mode (step S16). Further, the vehicle control system determines whether or not the vehicle (own vehicle) has arrived at the destination (step S17). If the vehicle has arrived at the destination (step S17: Yes), the vehicle control system ends the process; if the vehicle has not arrived at the destination (step S17: No), the process returns to step S13. Thereafter, the vehicle control system repeats the processes from step S13 to step S17 as appropriate until the vehicle arrives at the destination. Moreover, when the driving mode cannot be switched from automatic driving to manual driving, the vehicle control system may execute an emergency stop by MRM or the like (step S18).
- note that a detailed description of step S13 is omitted, as it also includes a series of coping processes that are performed automatically when the driver is unable to return.
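The S11-S18 flow above can be summarized as a small control skeleton. All callbacks here are hypothetical hooks standing in for the subsystems described in the text (authentication, LDM-based driving, driver monitoring, MRM, and so on):

```python
def run_trip(authenticate, set_destination, drive_section, monitor_driver,
             at_switch_point, can_switch, switch_mode, arrived, emergency_stop):
    """Skeleton of the S11-S18 trip flow; every argument is a callback hook."""
    authenticate()                    # S11: driver authentication
    set_destination()                 # S12: destination setting
    while True:
        drive_section()               # S13: travel at the set automated driving level
        monitor_driver()              # S14: observe the driver's condition
        if at_switch_point():
            if can_switch():          # S15: can the driving mode be switched?
                switch_mode()         # S16: switch the driving mode
            else:
                emergency_stop()      # S18: e.g. a stop by MRM
                return "stopped"
        if arrived():                 # S17: arrived at the destination?
            return "arrived"
```

A caller would supply real subsystem calls for each hook; the structure simply mirrors the branch points of the flowchart.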
- the allowable automated driving level can change from moment to moment depending on vehicle performance, road conditions, weather, and the like.
- the allowable Operational Design Domain may change depending on the deterioration of the detection performance due to temporary contamination of equipment mounted on the vehicle or contamination of sensors. Therefore, the permitted autonomous driving level may also change while driving from the starting point to the destination.
- furthermore, a takeover section may be set in order to respond to such changes. Therefore, in the embodiment of the present disclosure, the ODD (Operational Design Domain) is set and updated based on various information that changes from moment to moment.
- the range of secondary tasks permitted to the driver also changes accordingly.
- the range of driver behaviors that violate traffic rules changes as well. For example, even if secondary tasks such as reading are permitted at automatic driving level 4, such tasks become violations once the vehicle transitions to automatic driving level 2.
- in automatic driving there can also be sudden transitions between automatic driving levels, so depending on the situation the driver is required to remain prepared to return immediately from the secondary task to manual driving.
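How the set of permitted secondary tasks shrinks as the level drops can be pictured as a per-level allow-list. The concrete task sets below are assumptions for illustration only, not taken from the disclosure:

```python
# Hypothetical allow-list: which secondary tasks a driver may perform
# at each automatic driving level. The concrete sets are assumptions.
PERMITTED_TASKS = {
    4: {"reading", "video", "nap"},
    3: {"reading"},          # must remain able to respond to an RTI
    2: set(),                # hands-on supervision: no secondary tasks
    1: set(),
    0: set(),
}

def newly_violating_tasks(old_level, new_level):
    """Tasks allowed at old_level that become violations at new_level."""
    return PERMITTED_TASKS[old_level] - PERMITTED_TASKS[new_level]
```

With this table, the reading example in the text corresponds to `newly_violating_tasks(4, 2)` containing `"reading"`.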
- FIG. 3 is an explanatory diagram for explaining an example of automatic driving level transitions according to the embodiment of the present disclosure.
- switching from the automatic driving mode (the lower range in FIG. 3) to the manual driving mode (the upper range in FIG. 3) is assumed to be executed, for example, when transitioning from a section of automatic driving level 3 or 4 to a section of automatic driving level 0, 1, or 2.
- the driver may be immersed in secondary tasks such as sleeping (nap), watching television or video, or playing games.
- the driver may simply have let go of the steering wheel and, as in manual driving, may be gazing at the front or surroundings of the vehicle, or may be reading a book or falling asleep.
- the driver's arousal level differs depending on the difference in these secondary tasks.
- the driver's level of consciousness and decision-making will be lowered, that is, the level of arousal will be lowered.
- the driver cannot perform normal manual driving. If the driving mode is switched to manual in that state, in the worst case an accident may occur. Therefore, even if the driver's arousal level has dropped, the driver is required to return, immediately before the switch to the manual driving mode, to a high arousal state (internal arousal return state) in which the vehicle can be driven with normal consciousness.
- such switching of the driving mode is assumed to be executable only when an active response can be observed indicating that the driver is at a level corresponding to a return to the manual driving mode, that is, that the driver's internal arousal state has recovered (shown in the center of FIG. 3).
- the system is switched to an emergency evacuation mode such as MRM (Minimal Risk Maneuver).
- the transition from automatic driving level 4 to automatic driving level 3 does not involve switching the driving mode, so the active response indicating a return to internal arousal as described above is not itself observed.
- the present embodiment is not limited to the example shown in FIG. 3. It should be noted that safety is a prerequisite for the handover of steering, since the presence of an active response does not necessarily mean that the driver is aware of all relevant situations.
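The gate described above can be condensed into a single decision: hand over only when both an active response and safety are confirmed, otherwise fall back to the evacuation mode. The function and flag names are illustrative assumptions:

```python
# Sketch of the handover gate described above: the driving mode may
# switch from automatic to manual only when an active response
# indicating internal arousal recovery was observed AND safety is
# confirmed; otherwise the system falls back to an emergency
# evacuation mode such as MRM. Names are illustrative.

def decide_handover(active_response_observed, safety_confirmed):
    if active_response_observed and safety_confirmed:
        return "manual"   # hand steering back to the driver
    return "MRM"          # minimal-risk maneuver / evacuation mode
```

The asymmetry is deliberate: an active response alone is never sufficient, mirroring the safety-prerequisite remark above.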
- the driver may be legally obliged to return to manual driving. Even so, the driver is not necessarily in a state in which he or she can respond appropriately to a return request RTI (Request to Intervene) issued by the vehicle control system at automatic driving level 3. More specifically, in response to the level-3 return request RTI, the driver cannot always recover arousal in the brain and free the body from numbness and the like so as to return to a physical state in which manual driving is possible.
- a preventive dummy wake-up request RTI may therefore be issued from time to time, and an active response indicating the recovery of the driver's internal arousal may be observed.
- regarding each arrow indicating an automatic driving level transition illustrated in FIG. 3, allowing the reverse transition to occur automatically is not recommended, as it would cause the driver to misunderstand the state of the vehicle control system. That is, in the vehicle control system according to the embodiment of the present disclosure, once the system has transitioned from the automatic driving mode to the manual driving mode in which the driver intervenes, it is desirable that the system be designed not to return automatically to the automatic driving mode without an active, explicit instruction from the driver. Giving directionality (irreversibility) to the switching of driving modes in this way means designing the system so that it cannot switch to the automatic driving mode without the driver's clear intention.
- since the automatic driving mode cannot be activated unless the driver expresses a clear intention, it is possible to prevent acts such as the driver casually starting a secondary task in the mistaken belief that the vehicle is in the automatic driving mode when it is not.
- FIG. 4 is a flow chart illustrating an example of monitoring operation according to an embodiment of the present disclosure.
- the vehicle control system executes steps S21 to S27 when switching, for example, from the automatic driving mode to the manual driving mode. Details of each of these steps are described below.
- since the driver is traveling in the automatic driving mode, it is assumed that the driver has completely disengaged from steering.
- the driver may be performing secondary tasks such as taking a nap, watching videos, playing immersive games, or working with visual tools such as tablets, smartphones, and the like.
- work using visual tools such as tablets and smartphones may be performed, for example, with the driver's seat shifted or in a seat other than the driver's seat.
- the vehicle control system appropriately intermittently performs passive monitoring and/or active monitoring of the driver (step S21).
- active monitoring and passive monitoring are described.
- active monitoring is an observation method in which the vehicle control system actively inputs information to the driver and observes the driver's conscious response to that input.
- active information input to the driver can include visual, auditory, tactile, olfactory, and (gustatory) information.
- Such active information induces the driver's perception and cognitive behavior, and if the information affects the risk, the driver will execute (response) judgment and action according to the risk.
- as active monitoring, for example, the vehicle control system may apply, as pseudo-active information, a steering control input small enough not to affect the safe traveling of the vehicle, in order to prompt feedback from the driver.
- the act of returning the steering wheel by an appropriate amount is expected as the conscious response (if the driver is normally awake). Specifically, the driver perceives and recognizes the unnecessary steering amount, decides to correct it, and acts accordingly, so the action corresponding to the above response (returning the steering by an appropriate amount) is elicited. By observing this response, it therefore becomes possible to determine the state of perception, cognition, judgment, and action in the driver's brain. While driving in the manual driving mode, the driver constantly performs such a series of perception, recognition, judgment, and action with respect to the road environment in order to steer, so the driver's conscious response can be observed without actively inputting information (that is, the steering itself can be regarded as the response).
- the frequency of active information input is preferably set to an appropriate frequency to avoid "habituation" as described above.
- passive monitoring of the driver's condition includes various observation methods, for example, observation of the driver's biological information. More specifically, if the driver is seated in the driver's seat in a posture that allows driving, passive monitoring is expected to evaluate in detail PERCLOS (percentage of eyelid closure)-related indices, head posture behavior, eyeball behavior (saccades (rapid eye movements), fixation, microsaccades, etc.), blinks, facial expressions, facial orientation, and the like.
- heart rate, pulse rate, blood flow, respiration, brain waves, perspiration state, and the depth of drowsiness estimated from heart rate and respiration can be observed using wearable devices, extending the range of observation. Furthermore, passive monitoring may observe the driver's sitting in the driver's seat, leaving the seat, movement, destination, posture, and so on. The steering amount associated with the driver's careful driving state (the state in which the driver performs manual driving while maintaining proper attention) may also be observed directly.
- the information observed by passive monitoring can be used to estimate the time required for the driver to return to manual driving when the vehicle control system issues a driving-mode switching notification or warning while traveling in the automatic driving mode. It can also be used to determine whether to switch to the emergency evacuation mode when the driver cannot be expected to return to manual driving within a predetermined time.
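One of the passive indices named above, PERCLOS, can be computed roughly as the fraction of time the eyes are closed over an observation window. The 80%-closure convention and threshold value below are common assumptions for illustration, not values taken from the disclosure:

```python
def perclos(eye_openness, closed_threshold=0.2):
    """Fraction of samples in which the eye counts as 'closed'.

    eye_openness: sequence of per-frame openness values in [0, 1]
    (1.0 = fully open). A sample counts as closed when openness falls
    below closed_threshold (i.e. the eyelid is more than 80% closed),
    a common convention assumed here for illustration.
    """
    if not eye_openness:
        return 0.0
    closed = sum(1 for o in eye_openness if o < closed_threshold)
    return closed / len(eye_openness)
```

A rising PERCLOS over the window would then feed the time-to-return estimate and the evacuation-mode decision described above.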
- the vehicle control system notifies the driver of a return request RTI to manual driving (step S22).
- the driver is notified of the return request RTI to manual driving by dynamic haptics such as vibration or visually or audibly.
- the driver, if in a normal wakefulness state, returns to the driver's seat and recovers a high wakefulness state in which the vehicle can be driven with normal consciousness.
- the return request RTI may be issued multiple times in stages, and different means may be used at each stage.
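A staged RTI with different means per stage can be sketched as an escalation table. The particular stage ordering below is an assumption for illustration:

```python
# Sketch of a staged return request (RTI): repeated notifications
# escalate through different means. The ordering is an assumption.
RTI_STAGES = ["visual", "auditory", "haptic_vibration"]

def next_rti_stage(attempts_so_far):
    """Pick the notification means for the next RTI attempt,
    escalating with each unanswered attempt and saturating at the
    strongest means."""
    idx = min(attempts_so_far, len(RTI_STAGES) - 1)
    return RTI_STAGES[idx]
```

Each unanswered attempt moves to a stronger modality, matching the "different means in each stage" remark above.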
- the vehicle control system monitors the driver's seated state, seated posture, etc. (step S23).
- the vehicle control system then performs active monitoring centered on the properly seated driver (step S24). Examples of such active monitoring include active information inputs such as issuing a warning to the driver or injecting pseudo-noise steering into the manual steering control, in order to encourage the driver to return to a highly alert state in which the vehicle can be driven with normal consciousness.
- the vehicle control system intensively monitors the driver's face and eyeball behavior such as saccades (eyeball behavior centralized monitoring) (step S25).
- part of eyeball behavior appears as a biological reflex responding to event changes, an adaptive response in loops that do not involve thinking. Examples include the gradual convergence and rapid return divergence movements that occur as the background approaches while the vehicle advances, and the smooth pursuit eye movement that cancels the rotation of one's own body and head to follow an object in the target direction.
- eyeball behavior also includes, beyond such reflexive responses, feature-tracking behavior that grasps the features of a visual object to advance understanding. In other words, many phenomena that appear to reflect neural transmission and processing in the brain are also seen in eyeball behavior, so it is conceivable that the results of activities such as cognition of the fixation target, referenced against the brain's memory, are reflected in it.
- by exploiting the fact that cognitive activity in the brain is reflected in eyeball behavior, the driver's arousal level can be estimated with high accuracy based on analysis of eyeball behavior. That is, by observing eyeball behavior when switching from the automatic driving mode to the manual driving mode (more precisely, just before the switch), it is possible to observe indirectly whether the driver has returned to a high arousal level at which the vehicle can be driven with normal consciousness. In particular, when the driver returns to steering after time has elapsed since leaving the steering task, the driver is not considered to retain sufficient memory of the surrounding and vehicle conditions necessary for returning to manual driving.
- the driver therefore visually confirms the situation ahead on the road, or visually confirms the factors behind the return-to-manual-driving request RTI from the vehicle control system, and tries to quickly grasp the information that needs to be grasped. Such information-grasping acts are reflected in the driver's eyeball behavior.
- eyeball behavior shows characteristics unique to each person and to each state of that person, so the driver's arousal level can be determined accurately by analyzing the eyeball behavior.
- it is therefore required to continually grasp and learn the eyeball behavior peculiar to each driver, and to determine the driver's arousal level based on such learning.
- such confirmation is based on the driver's memory of past risk experiences, and is also greatly affected by factors such as road conditions and driving speed, so it changes with the situation. The eyeball behavior therefore not only exhibits characteristics peculiar to each person but also changes under the influence of memories based on the driver's various experiences.
- by intermittently learning each driver's recovery-ability determination in active observation intervals, a determination better suited to each driver becomes possible.
- the vehicle control system determines the driver's recovery response level by determining the driver's arousal level based on the monitoring in step S25 described above (step S26), and determines whether the driver is at a recovery response level at which a return to manual driving is possible. Since the vehicle control system according to the present embodiment observes the driver's return process in stages and observes the driver's response at each stage, a composite determination is possible. When the vehicle control system determines, with a predetermined degree of certainty, that a return to manual driving is possible based on the driver's recovery of internal arousal and confirmed capability for manual driving behavior, it switches from the automatic driving mode to the manual driving mode (step S27). The characteristics of the observed temporal changes of the driver's various eyeball behaviors are recognized, the final handover is performed using those characteristics, and the characteristics are labeled according to the handover quality, that is, the success or failure of each handover. Details of this will be described later.
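The determination in step S26 can be pictured as comparing observed eyeball-behavior features against a learned per-driver alert baseline. The features, weighting, and threshold below are hypothetical; the disclosure learns driver-specific characteristics rather than using fixed values like these:

```python
# Illustrative sketch of step S26: combine eyeball-behavior features
# (saccade rate, fixation ratio, microsaccade rate) observed in step
# S25 into a recovery-response decision. Thresholds and the equal
# weighting are assumptions; baseline values are assumed positive.

def recovery_response_level(features, baseline):
    """Score how close observed eyeball behavior is to the driver's
    learned alert baseline; return (score, ok_to_hand_over)."""
    score = 0.0
    for key in ("saccade_rate", "fixation_ratio", "microsaccade_rate"):
        b = baseline[key]
        # penalize relative deviation from the learned alert baseline
        score += max(0.0, 1.0 - abs(features[key] - b) / b) / 3.0
    return score, score >= 0.8   # handover threshold is an assumption
```

Behavior matching the baseline scores near 1.0 and permits handover; strongly deviating (e.g. drowsy) behavior scores low and blocks it, in which case the system would fall back to the evacuation path.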
- steps in FIG. 4 do not necessarily have to be processed in the described order, the order may be changed as appropriate, and some steps may be processed in parallel.
- active monitoring in step S24 and intensive eyeball behavior monitoring in step S25 may be performed in parallel, or the order shown in FIG. 4 may be switched.
- FIG. 5 is an explanatory diagram for explaining an example of the detailed configuration of the vehicle control system 100 according to this embodiment.
- when the vehicle provided with the vehicle control system 100 is to be distinguished from other vehicles, it is referred to as the own vehicle.
- the vehicle control system 100 mainly includes an input unit 101, a data acquisition unit 102, a communication unit 103, an in-vehicle device 104, an output control unit 105, an output unit 106, a drive system control unit 107, a drive system 108, a body system control unit 109, a body system 110, a storage unit 111, an automatic driving control unit 112, and a sensor unit 113.
- the input unit 101, the data acquisition unit 102, the communication unit 103, the output control unit 105, the drive system control unit 107, the body system control unit 109, the storage unit 111, and the automatic driving control unit 112 are interconnected via the communication network 121.
- the communication network 121 is, for example, an in-vehicle communication network, bus, or the like conforming to an arbitrary standard such as CAN (Controller Area Network), LIN (Local Interconnect Network), LAN (Local Area Network), or FlexRay (registered trademark). Each unit of the vehicle control system 100 may also be directly connected without the communication network 121.
- the input unit 101 is composed of a device used by passengers such as the driver to input various data and instructions.
- the input unit 101 includes operation devices such as a touch panel, buttons, microphones, switches, and levers, as well as operation devices that allow input by methods other than manual operation, such as voice and gestures.
- the input unit 101 may be a remote control device using infrared rays or other radio waves, or an external connection device such as a mobile device or wearable device corresponding to the operation of the vehicle control system 100 .
- the input unit 101 can generate an input signal based on data, instructions, etc. input by the passenger, and supply the input signal to each function unit of the vehicle control system 100 .
- the data acquisition unit 102 can acquire data used for processing of the vehicle control system 100 from the sensor unit 113 having various sensors and the like, and supply the data to each functional unit of the vehicle control system 100 .
- the sensor unit 113 has various sensors for detecting the situation of the vehicle (own vehicle). Specifically, for example, the sensor unit 113 includes a gyro sensor, an acceleration sensor, an inertial measurement unit (IMU), an operation amount of an accelerator pedal, an operation amount of a brake pedal, a steering angle of a steering wheel, an engine speed, a motor It has a sensor or the like for detecting the number of revolutions or the rotational speed of the wheels.
- the sensor unit 113 may have various sensors for detecting information on the outside of the vehicle (own vehicle).
- the sensor unit 113 may have an imaging device such as a ToF (Time of Flight) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras.
- the sensor unit 113 may include an environment sensor for detecting weather or the like, an ambient information detection sensor for detecting objects around the vehicle, and the like. Examples of such environmental sensors include raindrop sensors, fog sensors, sunshine sensors, and snow sensors. Examples of the ambient information detection sensor include ultrasonic sensors, radar, LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging), and sonar.
- the sensor unit 113 may have various sensors for detecting the current position of the vehicle (own vehicle). Specifically, for example, the sensor unit 113 may have a GNSS receiver or the like that receives GNSS signals from GNSS (Global Navigation Satellite System) satellites. In addition, the current position detected by the sensor unit 113 may be complemented by correcting a reference point using position information from SLAM (Simultaneous Localization and Mapping), which performs self-localization and environment-map creation simultaneously, or position information detected by LiDAR (Light Detection and Ranging), millimeter-wave radar, or the like.
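The complementing described above can be sketched as a simple fallback: use the GNSS fix when available, otherwise shift the SLAM/dead-reckoned estimate by the offset observed at the last reference point. All names and the 2-D representation are illustrative assumptions:

```python
# Rough sketch of the position complement described above. A real
# system would fuse these estimates continuously; this shows only the
# reference-point correction idea. Names are illustrative.

def complement_position(gnss_fix, slam_estimate, reference_offset):
    """Return an (x, y) position: the GNSS fix if present, else the
    SLAM estimate shifted by the offset measured at the last point
    where both sources were available."""
    if gnss_fix is not None:
        return gnss_fix
    sx, sy = slam_estimate
    ox, oy = reference_offset
    return (sx + ox, sy + oy)
```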
- the sensor unit 113 may have various sensors for detecting information inside the vehicle.
- the sensor unit 113 can include an imaging device (ToF camera, stereo camera, monocular camera, infrared camera, etc.) that captures images of the driver, a biological information sensor that detects the driver's biological information, a microphone or the like that collects sound inside the vehicle, and so on.
- a biological information sensor is provided, for example, on the seat surface or the steering wheel, and can detect the biological information of a passenger sitting on a seat or of the driver holding the steering wheel.
- examples of the driver's biological information include heart rate, pulse rate, blood flow, respiration, electroencephalogram, skin temperature, skin resistance, sweating state, head posture behavior, and eyeball behavior (gazing, blinking, saccades, microsaccades, fixation, drift, gaze, iris pupillary reaction, etc.).
- such biological information can be detected by using, singly or in combination, contact-type observable signals such as the electric potential between predetermined positions on the driver's body surface or blood-flow measurement using infrared light; non-contact observable signals using microwaves, millimeter waves, or FM (frequency modulation) waves; detection of eyeball behavior by an imaging device (monitoring unit) using infrared wavelengths; and overload torque measurement information of the steering and pedal equipment used to check steering responsiveness.
- the communication unit 103 communicates with the in-vehicle device 104 as well as various devices, servers, and base stations outside the vehicle; it transmits data supplied from each functional unit of the vehicle control system 100 and supplies received data to each functional unit of the vehicle control system 100.
- the communication protocol supported by the communication unit 103 is not particularly limited, and the communication unit 103 can also support multiple types of communication protocols.
- the communication unit 103 can perform wireless communication with the in-vehicle device 104 using a wireless LAN, Bluetooth (registered trademark), NFC (Near Field Communication), WUSB (Wireless Universal Serial Bus), or the like. The communication unit 103 can also perform wired communication with the in-vehicle device 104 via a connection terminal (and cable if necessary), not shown, using USB, HDMI (High-Definition Multimedia Interface) (registered trademark), MHL (Mobile High-Definition Link), or the like.
- the communication unit 103 communicates with a device (such as an application server or control server) existing on an external network (such as the Internet, a cloud network, or an operator-specific network) via a base station or an access point. can communicate.
- the communication unit 103 can use P2P (Peer To Peer) technology to communicate with terminals (for example, pedestrian or store terminals, terminals carried by regulators, or MTC (Machine Type Communication) terminals).
- the communication unit 103 may perform V2X communication, such as vehicle-to-vehicle communication, vehicle-to-infrastructure communication, vehicle-to-home communication, and vehicle-to-pedestrian communication.
- the communication unit 103 may have a beacon receiving unit that receives radio waves or electromagnetic waves transmitted from wireless stations installed on the road and obtains information such as the current position, congestion, traffic restrictions, and required time.
- pairing may be performed with a preceding vehicle traveling in the section that can act as the leading vehicle, and the information obtained by the data acquisition unit mounted in that preceding vehicle may be acquired as pre-travel information to complement the data acquired by the data acquisition unit 102 of the own vehicle; this can be a means of ensuring the safety of the following platoon, especially in platooning behind the leading vehicle.
- the in-vehicle device 104 can include, for example, a mobile or wearable device possessed by a passenger, an information device carried into or attached to the vehicle, and a navigation device that searches for a route to an arbitrary destination. Considering that, with the spread of automatic driving, the passenger is not necessarily fixed in a seating position, the in-vehicle device 104 can be extended to video players, game devices, and other devices that can be attached to and detached from the vehicle installation.
- the output control unit 105 can control the output of various information to the passengers of the own vehicle or to the outside of the vehicle.
- the output control unit 105 generates an output signal including at least one of visual information (e.g., image data) and auditory information (e.g., audio data) and supplies it to the output unit 106, thereby controlling the output of visual and auditory information from the output unit 106.
- the output control unit 105 combines image data captured by different imaging devices of the sensor unit 113 to generate a bird's-eye view image, a panoramic image, or the like, and supplies an output signal including the generated image to the output unit 106.
- the output control unit 105 generates audio data including a warning sound or a warning message for dangers such as collision, contact, or entry into a dangerous area, and supplies an output signal including the generated audio data to the output unit 106.
- the output unit 106 can have a device capable of outputting visual information or auditory information to passengers in the vehicle or outside the vehicle.
- the output unit 106 includes a display device, an instrument panel, an audio speaker, headphones, a wearable device such as an eyeglass-type display worn by a passenger, a projector, a lamp, and the like.
- the display device of the output unit 106 may be, in addition to a device having a normal display, a display device that presents visual information within the driver's field of view, such as a head-up display, a transmissive display, or a device with an AR (Augmented Reality) display function.
- the output unit 106 can include various devices that provide olfactory stimulation (providing a predetermined odor) or tactile stimulation (such as providing cold air, vibration, or electrical stimulation). Furthermore, the output unit 106 may include a device that provides a bodily discomfort stimulus, such as moving the backrest of the driver's seat to force the driver into an uncomfortable posture.
- the drive system control unit 107 can control the drive system 108 by generating various control signals and supplying them to the drive system 108. Further, the drive system control unit 107 may supply control signals to functional units other than the drive system 108 as necessary to notify them of the control status of the drive system 108 and the like.
- the drive system 108 can have various devices related to the drive train of the own vehicle.
- for example, the drive system 108 includes a driving force generator for generating driving force, such as an internal combustion engine or drive motor; a driving force transmission mechanism for transmitting the driving force to the wheels; a steering mechanism for adjusting the steering angle; a braking device that generates braking force; an ABS (Antilock Brake System); an ESC (Electronic Stability Control); an electric power steering device; and the like.
- the body system control unit 109 can control the body system 110 by generating various control signals and supplying them to the body system 110 .
- the body system control unit 109 may supply a control signal to each function unit other than the body system 110 as necessary to notify the control status of the body system 110 or the like.
- the body system 110 can have various body-system devices mounted on the vehicle body.
- the body system 110 includes a keyless entry system, a smart key system, a power window device, power seats, a steering wheel, an air conditioner, various lamps (e.g., head lamps, back lamps, brake lamps, turn signals, fog lamps), and the like.
- the storage unit 111 can include, for example, a ROM (Read Only Memory), a RAM (Random Access Memory), a magnetic storage device such as an HDD (Hard Disc Drive), a semiconductor storage device, an optical storage device, a magneto-optical storage device, and the like. The storage unit 111 can store various programs, data, and the like used by each functional unit of the vehicle control system 100. For example, the storage unit 111 stores map data such as a three-dimensional high-precision map such as a dynamic map, a global map covering a wide area with lower accuracy than the high-precision map, and a local map including information about the surroundings of the vehicle.
- the automatic driving control unit 112 can perform control related to automatic driving, such as autonomous driving or driving support. Specifically, for example, the automatic driving control unit 112 performs cooperative control intended to realize ADAS (Advanced Driver Assistance System) functions such as collision avoidance or impact mitigation for the own vehicle, follow-up driving based on the inter-vehicle distance, constant-speed driving, collision warning, or lane departure warning. The automatic driving control unit 112 can also perform cooperative control aimed at automatic driving in which the vehicle travels autonomously without depending on the driver's operation. Specifically, the automatic driving control unit 112 has a detection unit 131, a self-position estimation unit 132, a situation analysis unit 133, a planning unit 134, and an operation control unit 135.
- the detection unit 131 can detect various types of information necessary for controlling automatic operation.
- The detection unit 131 has a vehicle exterior information detection unit 141, an in-vehicle information detection unit 142, and a vehicle state detection unit 143.
- the vehicle exterior information detection unit 141 can perform processing for detecting information outside the own vehicle based on data or signals from each unit of the vehicle control system 100 .
- the vehicle exterior information detection unit 141 performs detection processing, recognition processing, and tracking processing of objects around the own vehicle, and detection processing of the distance to the object.
- Objects to be detected include, for example, vehicles, people, obstacles, structures, roads, traffic lights, traffic signs, road markings, and the like.
- the vehicle exterior information detection unit 141 performs processing for detecting the environment around the own vehicle.
- the ambient environment to be detected includes, for example, weather, temperature, humidity, brightness, and road conditions.
- The vehicle exterior information detection unit 141 supplies data indicating the result of the detection processing to the self-position estimation unit 132; to the map analysis unit 151, the traffic rule recognition unit 152, and the situation recognition unit 153 of the situation analysis unit 133; to the emergency avoidance unit 171 of the operation control unit 135; and the like.
- If the driving section is one for which a constantly updated LDM (Local Dynamic Map) is provided by the infrastructure as a section in which automatic driving is possible, the information acquired by the vehicle exterior information detection unit 141 is mainly provided by that infrastructure.
- Alternatively, the above information can be received from a vehicle or a group of vehicles traveling in the section, prior to entering the section.
- the vehicle exterior information detection unit 141 may receive the road environment information via the leading vehicle that has previously entered the corresponding section.
- Whether a section can be driven automatically is determined by the presence or absence of advance information provided by the infrastructure corresponding to that section.
- Here, the vehicle exterior information detection unit 141 is shown and described on the premise that it is mounted on the own vehicle and receives information directly from the infrastructure, but the present embodiment is not limited to this.
- For example, the vehicle exterior information detection unit 141 may receive and use information that a vehicle traveling ahead has perceived as "information"; this can further increase the predictability, in this embodiment, of dangers that may occur while driving.
- the in-vehicle information detection unit 142 can detect information in the vehicle based on data or signals from each functional unit of the vehicle control system 100 .
- the in-vehicle information detection unit 142 performs driver authentication processing and recognition processing, driver state detection processing, passenger detection processing, and in-vehicle environment detection processing.
- the state of the driver to be detected includes, for example, physical condition, wakefulness, concentration, fatigue, line of sight direction, degree of influence of alcohol, drugs, etc., detailed eyeball behavior, and the like.
- the in-vehicle environment to be detected includes, for example, temperature, humidity, brightness, and odor.
- The in-vehicle information detection unit 142 supplies data indicating the result of the detection processing to the situation recognition unit 153 of the situation analysis unit 133, the emergency avoidance unit 171 of the operation control unit 135, and the like. It should be noted that if, after the driver is notified of the request RTI to return to manual driving, it is estimated or found that the driver cannot achieve manual driving within the predetermined time limit, and it is determined that the return to manual driving will not be in time even if deceleration control is performed to gain time, the in-vehicle information detection unit 142 may instruct the emergency avoidance unit 171 and the like to decelerate the vehicle for evacuation and to start an evacuation/stop procedure.
- The above-described in-vehicle information detection unit 142 has two main roles: the first is passive monitoring of the driver's condition during driving, and the second is active monitoring, which detects and determines, based on the driver's conscious responses, whether or not the driver is at a return reaction level at which manual driving is possible after the return request RTI is notified.
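The two monitoring roles described above can be expressed as a small sketch. This is an illustrative Python sketch, not the patented implementation; the names (`MonitoringRole`, `handover_permitted`) and the simple boolean inputs are assumptions for illustration.

```python
from enum import Enum, auto

# Sketch of the two roles of the in-vehicle information detection unit 142:
# passive observation runs continuously, and active monitoring (checking the
# driver's conscious response) takes over once the return request RTI is issued.

class MonitoringRole(Enum):
    PASSIVE = auto()   # steady-state observation of the driver's condition
    ACTIVE = auto()    # conscious-response check after the RTI notification

def current_role(rti_notified: bool) -> MonitoringRole:
    return MonitoringRole.ACTIVE if rti_notified else MonitoringRole.PASSIVE

def handover_permitted(rti_notified: bool, conscious_response_ok: bool) -> bool:
    """Hand over to manual driving only after the RTI is issued AND the
    active check confirms a sufficient return-reaction level."""
    return rti_notified and conscious_response_ok
```

If the active check fails, the flow described later applies: deceleration to gain time, and an evacuation/stop procedure as a fallback.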
- the vehicle state detection unit 143 can detect the state of the vehicle (own vehicle) based on data or signals from each unit of the vehicle control system 100 .
- The state of the own vehicle to be detected includes, for example, the speed, acceleration, steering angle, self-diagnosis status and contents such as the presence or absence of an abnormality, the driving operation state, the power seat position and tilt, the door lock state, and the status of other in-vehicle equipment.
- the vehicle state detection unit 143 supplies data indicating the result of the detection processing to the situation recognition unit 153 of the situation analysis unit 133, the emergency avoidance unit 171 of the operation control unit 135, and the like.
- The state of the vehicle (own vehicle) to be recognized includes, for example, the position, attitude, and movement (e.g., speed, acceleration, moving direction, etc.) of the vehicle, as well as conditions that determine the motion characteristics of the vehicle: the amount of loaded cargo, the movement of the center of gravity of the vehicle body due to cargo loading, the tire pressure, changes in braking distance due to the wear condition of the brake pads, the maximum allowable deceleration for braking that prevents load shifting caused by braking with cargo, and the centrifugal relaxation limit speed when traveling on a curve with liquid cargo.
- Even when the same vehicle control is required, the return (recovery) start timing to be used differs depending on conditions peculiar to the vehicle, conditions peculiar to the loaded cargo, and road conditions such as the friction coefficient of the road surface, curves, and slopes. Therefore, in the present embodiment, it is required that these various conditions be collected and learned, and that the learning results always be reflected in the estimation of the optimum timing for performing control.
- Furthermore, since the conditions that determine these actual vehicle controls differ depending on the type of cargo loaded on the vehicle, the consequences of an actual accident or the like will also vary accordingly; it is therefore desirable to operate with a bias that reflects such conditions, rather than using the results obtained from uniform learning as they are.
- The self-position estimation unit 132 can perform estimation processing of the position, posture, and the like of the vehicle (own vehicle) based on data or signals from each functional unit of the vehicle control system 100, such as the vehicle exterior information detection unit 141 and the situation recognition unit 153 of the situation analysis unit 133. In addition, the self-position estimation unit 132 can generate a local map (hereinafter referred to as a self-position estimation map) used for self-position estimation, if necessary.
- The map for self-position estimation is, for example, a highly accurate map using techniques such as SLAM (Simultaneous Localization and Mapping).
- The self-position estimation unit 132 supplies data indicating the result of the estimation processing to the map analysis unit 151, the traffic rule recognition unit 152, the situation recognition unit 153, and the like of the situation analysis unit 133. Also, the self-position estimation unit 132 can store the map for self-position estimation in the storage unit 111.
- the situation analysis unit 133 can analyze the situation of the vehicle (own vehicle) and its surroundings.
- the situation analysis section 133 has a map analysis section 151 , a traffic rule recognition section 152 , a situation recognition section 153 and a situation prediction section 154 .
- The map analysis unit 151 can perform processing for analyzing the various maps stored in the storage unit 111, using data or signals from each functional unit of the vehicle control system 100, such as the self-position estimation unit 132 and the vehicle exterior information detection unit 141, as necessary, and can build a map that includes information necessary for automatic driving processing.
- The map analysis unit 151 supplies the constructed map to the traffic rule recognition unit 152, the situation recognition unit 153, the situation prediction unit 154, and the route planning unit 161, the action planning unit 162, and the operation planning unit 163 of the planning unit 134, among others.
- The traffic rule recognition unit 152 can perform recognition processing of the traffic rules around the vehicle (own vehicle) based on data or signals from each unit of the vehicle control system 100, such as the self-position estimation unit 132, the vehicle exterior information detection unit 141, and the map analysis unit 151. By this recognition processing, for example, the positions and states of traffic signals around the vehicle, the details of traffic regulations around the vehicle, the lanes in which the vehicle can travel, and the like are recognized. The traffic rule recognition unit 152 supplies data indicating the result of the recognition processing to the situation prediction unit 154 and the like.
- The situation recognition unit 153 can perform processing for recognizing the situation regarding the vehicle (own vehicle) based on data or signals from each unit of the vehicle control system 100. For example, the situation recognition unit 153 performs recognition processing of the situation of the vehicle (own vehicle), the situation around the vehicle, the situation of the driver of the vehicle, and the like. In addition, the situation recognition unit 153 generates, as necessary, a local map (hereinafter referred to as a situation recognition map) used for recognizing the situation around the vehicle.
- the situation recognition map can be, for example, an occupancy grid map.
- the situation recognition section 153 supplies data indicating the result of recognition processing (including a situation recognition map, if necessary) to the self-position estimation section 132, the situation prediction section 154, and the like.
- the situation recognition unit 153 causes the storage unit 111 to store the map for situation recognition.
- The situation prediction unit 154 can perform prediction processing of the situation regarding the vehicle (own vehicle) based on data or signals from each unit of the vehicle control system 100, such as the map analysis unit 151, the traffic rule recognition unit 152, and the situation recognition unit 153.
- the situation prediction unit 154 performs prediction processing of the situation of the vehicle (own vehicle), the surrounding situation of the vehicle (own vehicle), the situation of the driver, and the like.
- the situation of the vehicle (own vehicle) to be predicted includes, for example, the behavior of the vehicle (own vehicle), the occurrence of an abnormality, and the travelable distance.
- the circumstances around the vehicle (own vehicle) to be predicted include, for example, the behavior of moving objects around the vehicle (own vehicle), changes in signal conditions, and environmental changes such as weather.
- The driver's situation to be predicted includes, for example, the behavior and physical condition of the driver. Then, the situation prediction unit 154 supplies data indicating the result of the prediction processing, together with the data from the traffic rule recognition unit 152 and the situation recognition unit 153, to the route planning unit 161, the action planning unit 162, and the operation planning unit 163 of the planning unit 134, and the like.
- the route planning unit 161 can plan a route to the destination based on data or signals from each functional unit of the vehicle control system 100 such as the map analysis unit 151 and the situation prediction unit 154. For example, the route planning unit 161 sets a route from the current position to the designated destination based on the global map. Also, the route planning unit 161 sets the automatic driving level for each section on the travel route based on the LDM or the like. Further, for example, the route planning unit 161 may appropriately change the route based on conditions such as traffic jams, accidents, traffic restrictions, construction work, and the physical condition of the driver. The route planning unit 161 supplies data indicating the planned route to the action planning unit 162 and the like.
- The action planning unit 162 can plan actions of the vehicle (own vehicle) for safely traveling the route planned by the route planning unit 161 within the planned time, based on data or signals from each functional unit of the vehicle control system 100 such as the map analysis unit 151 and the situation prediction unit 154. For example, the action planning unit 162 plans starting, stopping, the direction of travel (e.g., forward, backward, left turn, right turn, direction change, etc.), the driving lane, the driving speed, overtaking, and the like. The action planning unit 162 supplies data indicating the planned actions of the vehicle to the operation planning unit 163 and the like.
- In addition, open flat spaces in which the vehicle can physically travel (vacant land, etc.) and dangerous spaces whose entry should be avoided (cliffs, pedestrian-crowded places such as station exits, etc.) may be managed as supplementary information and reflected in control as candidate evacuation spaces during emergency control.
- The operation planning unit 163 can plan operations of the vehicle (own vehicle) for realizing the actions planned by the action planning unit 162. For example, the operation planning unit 163 plans acceleration, deceleration, the travel trajectory, and the like. In addition, the operation planning unit 163 can set the driving mode and plan the timing for executing switching. The operation planning unit 163 supplies data indicating the planned operations of the vehicle to the acceleration/deceleration control unit 172, the direction control unit 173, and the like of the operation control unit 135.
- the operation control unit 135 can control the operation of the vehicle (own vehicle).
- The operation control unit 135 has an emergency avoidance unit 171, an acceleration/deceleration control unit 172, and a direction control unit 173.
- Based on the detection results of the vehicle exterior information detection unit 141, the in-vehicle information detection unit 142, and the vehicle state detection unit 143, the emergency avoidance unit 171 can perform processing for detecting emergencies such as a collision, contact, entry into a dangerous zone, a driver abnormality, or a vehicle abnormality.
- the emergency avoidance unit 171 supplies data indicating the planned operation of the vehicle to the acceleration/deceleration control unit 172, the direction control unit 173, and the like.
- the acceleration/deceleration control unit 172 can perform acceleration/deceleration control for realizing the operation of the vehicle (own vehicle) planned by the operation planning unit 163 or the emergency avoidance unit 171 .
- For example, the acceleration/deceleration control unit 172 calculates a control target value of the driving force generating device or the braking device for realizing the planned acceleration, deceleration, or sudden stop, and supplies a control command indicating the calculated control target value to the drive system control unit 107.
- Here, there are mainly two cases in which an emergency can occur.
- One is a case in which, during automatic driving on a road originally considered safe according to the LDM or the like obtained from the infrastructure on the driving route in the automatic driving mode, an unexpected accident or an accident-inducing factor occurs for a sudden reason, and the driver's emergency return is not in time.
- The other is a case in which it becomes difficult, for some reason, to switch from the automatic driving mode to the manual driving mode.
- the direction control unit 173 can perform direction control for realizing the operation of the vehicle (own vehicle) planned by the operation planning unit 163 or the emergency avoidance unit 171 .
- For example, the direction control unit 173 calculates a control target value of the steering mechanism for realizing the traveling trajectory or sharp turn planned by the operation planning unit 163 or the emergency avoidance unit 171, and supplies a control command indicating the calculated control target value to the drive system control unit 107.
- FIG. 6 is a diagram showing an example of the installation positions of the imaging devices of the sensor unit 113.
- The imaging units 7910, 7912, 7914, 7916, and 7918 shown in FIG. 6, to which the imaging devices can be applied, are provided at, for example, at least one of the following positions of the vehicle 7900: the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield in the vehicle interior.
- An imaging unit 7910 installed on the front nose and an imaging unit 7918 installed on the upper part of the windshield in the vehicle interior mainly acquire images in front of the vehicle 7900 .
- Imaging units 7912 and 7914 installed in the side mirrors mainly acquire side images of the vehicle 7900 .
- An imaging unit 7916 installed on a rear bumper or a back door mainly acquires an image behind the vehicle 7900 .
- An imaging unit 7918 installed above the windshield in the passenger compartment is mainly used for detecting preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like. Further, in future automatic driving, when the vehicle turns right or left, its use may be expanded to cover a wide range of pedestrians crossing the road onto which the vehicle is turning, and even the range of objects approaching the crossing.
- FIG. 6 shows an example of the imaging range of each of the imaging units 7910, 7912, 7914, and 7916.
- In FIG. 6, the imaging range a indicates the imaging range of the imaging unit 7910 provided on the front nose, the imaging ranges b and c indicate the imaging ranges of the imaging units 7912 and 7914 provided on the side mirrors, respectively, and the imaging range d indicates the imaging range of the imaging unit 7916 provided on the rear bumper or back door. For example, by superimposing the image data captured by the imaging units 7910, 7912, 7914, and 7916, a bird's-eye view image of the vehicle 7900 viewed from above, an all-surrounding stereoscopic display image surrounding the vehicle periphery with a curved plane, and the like can be obtained.
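The superimposition described above can be illustrated with a minimal sketch. It assumes each camera image has already been projected onto the ground plane (inverse perspective mapping); the canvas size, tile placements, and averaging rule are illustrative assumptions, not the actual imaging pipeline.

```python
import numpy as np

# Sketch: paste the four top-down tiles (imaging ranges a-d) onto one canvas
# around the vehicle and average pixel values where the ranges overlap.

CANVAS = (200, 120)  # rows x cols of the top-down view, vehicle at the center

def compose_birds_eye(front, rear, left, right):
    acc = np.zeros(CANVAS, dtype=float)    # sum of pixel values
    cnt = np.zeros(CANVAS, dtype=float)    # how many cameras cover each cell
    # placement of each tile on the canvas (illustrative regions only)
    slots = {
        "front": (slice(0, 80), slice(0, 120)),       # imaging range a
        "rear":  (slice(120, 200), slice(0, 120)),    # imaging range d
        "left":  (slice(40, 160), slice(0, 40)),      # imaging range b
        "right": (slice(40, 160), slice(80, 120)),    # imaging range c
    }
    for name, tile in (("front", front), ("rear", rear),
                       ("left", left), ("right", right)):
        rs, cs = slots[name]
        acc[rs, cs] += tile
        cnt[rs, cs] += 1.0
    # average where covered, leave 0 where no camera sees the ground
    out = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
    return out, cnt
```

In a real system the per-camera homographies and the blending weights would come from calibration; the coverage count `cnt` also shows where the LiDAR/radar results mentioned below could fill gaps.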
- the vehicle exterior information detection units 7920, 7922, 7924, 7926, 7928, and 7930 provided on the front, rear, sides, corners, and above the windshield in the vehicle interior of the vehicle 7900 may be, for example, ultrasonic sensors or radar devices.
- The vehicle exterior information detection units 7920, 7926, and 7930 provided on the front nose, the rear bumper, the back door, and above the windshield in the vehicle interior of the vehicle 7900 may be, for example, LiDAR devices.
- These vehicle exterior information detection units 7920 to 7930 are mainly used to detect preceding vehicles, pedestrians, obstacles, and the like. These detection results may be applied to improve the three-dimensional object display of the bird's-eye view display and all-surrounding three-dimensional display.
- FIG. 7 is an explanatory diagram showing examples of the various sensors included in the sensor unit 113 according to the present embodiment for obtaining information about the driver in the vehicle.
- Specifically, the sensor unit 113 has a position/posture detection unit 200 as a detector for detecting the position and posture of the driver, which includes, for example, a ToF camera, a stereo camera, a seat strain gauge, and the like.
- the sensor unit 113 also has a face recognition unit 202, a face tracking unit 204, and an eyeball tracking unit (monitoring unit) 206 as detectors for obtaining biological information of the driver. Details of various sensors included in the sensor unit 113 according to the present embodiment will be sequentially described below.
- the face recognition unit 202, the face tracking unit 204, and the eyeball tracking unit (monitoring unit) 206 can be composed of various sensors such as imaging devices, for example.
- the face recognition unit 202 recognizes and detects the face of the driver or the like from the captured image, and outputs the detected information to the face tracking unit 204 .
- a face tracking unit 204 detects movements of the driver's face and head based on the information detected by the face recognition unit 202 .
- the eyeball tracking unit 206 detects the eyeball behavior of the driver. Details of the eyeball tracking unit 206 according to the embodiment of the present disclosure will be described later.
- the sensor unit 113 may have a biological information detection unit 208 as another detector for obtaining biological information of the driver. Further, the sensor unit 113 may have an authentication unit 210 that authenticates the driver.
- the authentication method of the authentication unit 210 is not particularly limited, and may be biometric authentication using face, fingerprint, iris of eyes, voiceprint, etc., in addition to knowledge authentication using a password or personal identification number. In the above description, main sensors included in the sensor unit 113 have been described, but the sensor unit 113 may include various other sensors.
- FIG. 8 is an explanatory diagram for explaining an example of a unit that executes determination of a driver's arousal level according to the present embodiment.
- The unit that executes determination of the driver's arousal level includes part of the in-vehicle information detection unit 142 of the detection unit 131 shown in FIG. 5. More specifically, FIG. 8 shows an eyeball behavior analysis unit 300, an eyeball behavior learner 302, a determination unit 320, and a database (DB) 310, which work together to determine the arousal level of the driver.
- the eyeball behavior analysis unit 300 acquires the driver's eyeball behavior detected by the eyeball tracking unit 206 of the sensor unit 113 via the data acquisition unit 102 and analyzes the acquired eyeball behavior. For example, the eyeball behavior analysis unit 300 detects and analyzes eyeball behaviors such as saccades (eyeball rotation), fixation, and microsaccades (eyeball minute rotations) of the driver's eyes. Eyeball behavior information analyzed by the eyeball behavior analysis unit 300 is output to an eyeball behavior learning device 302 and a determination unit 320, which will be described later.
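The first step of such an analysis can be sketched as a simple classifier that separates saccades from fixation. This is an illustrative sketch, not the patented method: the gaze input, the sampling rate, and the 30 deg/s velocity threshold are assumptions.

```python
# Sketch: classify adjacent gaze samples into "fixation" vs "saccade" by an
# angular-velocity threshold. `gaze` is a list of gaze angles in degrees
# sampled at `fs` Hz (hypothetical input of the eyeball tracking unit 206).

def classify_gaze(gaze, fs=250.0, saccade_thresh_deg_s=30.0):
    """Return one 'saccade' or 'fixation' label per inter-sample interval."""
    labels = []
    for a, b in zip(gaze, gaze[1:]):
        velocity = abs(b - a) * fs          # deg/s between adjacent samples
        labels.append("saccade" if velocity >= saccade_thresh_deg_s else "fixation")
    return labels
```

Microsaccades would be detected the same way but at much smaller amplitudes, which is why the high-frame-rate mode described below matters.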
- the eyeball behavior analysis unit 300 can dynamically switch the analysis mode according to the driving mode or the like.
- Specifically, the eyeball behavior analysis unit 300 can switch between at least two analysis modes (a first analysis mode and a second analysis mode). In one analysis mode (the first analysis mode), analysis is performed at a high frame rate (first frame rate) such as 250 fps or more, and in the other analysis mode (the second analysis mode), analysis is performed at a low frame rate (second frame rate) such as 60 fps.
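The mode switching can be sketched as follows. This is a minimal illustrative sketch: the phase names and the rule that an arousal drop also triggers the first mode are assumptions drawn from the surrounding description, not the actual control logic.

```python
from enum import Enum

# Sketch: pick the analysis mode from the current driving phase, mirroring the
# two modes in the text: a high-frame-rate first mode (e.g. 250 fps) for the
# handover preparation period, and a low-frame-rate second mode (e.g. 60 fps)
# for ordinary passive monitoring.

class Phase(Enum):
    AUTOMATED_CRUISING = "automated_cruising"   # passive monitoring only
    HANDOVER_PREPARATION = "handover_prep"      # preparation mode for switching
    MANUAL_DRIVING = "manual_driving"

def select_analysis_mode(phase: Phase, arousal_drop_detected: bool = False):
    """Return (mode, frame_rate_fps) for the eyeball behavior analysis."""
    if phase is Phase.HANDOVER_PREPARATION:
        return ("first", 250)   # intensive microsaccade/fixation sampling
    if arousal_drop_detected:
        # also used to collect "arousal lowered" teacher data
        return ("first", 250)
    return ("second", 60)       # PERCLOS, saccades, fixation (passive monitoring)
```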
- For example, during the period of the preparation mode for switching the driving mode (the period of the preparation mode for the driving mode change), the eyeball behavior analysis unit 300 intensively samples and analyzes the eyeball behavior at the high frame rate (first analysis mode). In this eyeball behavior analysis, the eyeball behavior analysis unit 300 observes (samples) and analyzes the driver's microsaccades and involuntary eye movements. Then, the determination unit 320, which will be described later, determines the driver's arousal level (recovery response level) based on the analysis result.
- The time length of the eyeball behavior analysis period in the above-described preparation mode period is preferably determined so as to secure sufficient time to determine the driver's arousal level (that is, the driver's response level for returning to manual driving) with high accuracy before the vehicle reaches the switching point of the driving mode set on the route according to the automatic driving level based on the LDM or the like. Therefore, in the present embodiment, the starting point (monitoring point) of the eyeball behavior analysis period in the preparation mode is determined based on the schedule (journey), the LDM, road conditions, the driving speed, the vehicle type (trailer, general passenger car, etc.), the seated state of the driver (state information obtained by steady-cycle monitoring), and the like; that is, the time length of this period changes dynamically.
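The timing constraint above can be sketched as a simple budget calculation. All parameters here are illustrative assumptions (a fixed analysis duration and decision margin); the embodiment derives them dynamically from the LDM, vehicle type, and driver state.

```python
# Sketch: pick the latest start time of the high-frame-rate analysis so that
# sampling and the arousal determination finish before the vehicle reaches the
# driving-mode switching point.

def analysis_start_time(distance_to_switch_m: float, speed_mps: float,
                        analysis_s: float, decision_margin_s: float = 5.0) -> float:
    """Seconds from now at which first-analysis-mode sampling must begin
    (0.0 means it must start immediately)."""
    time_to_switch = distance_to_switch_m / speed_mps
    return max(0.0, time_to_switch - (analysis_s + decision_margin_s))

# e.g. 2 km to the switching point at 25 m/s with 20 s of analysis needed:
# 80 s to the point, so sampling must begin within 55 s.
```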
- In other periods, the eyeball behavior analysis unit 300 samples and analyzes the eyeball behavior at the low frame rate (second analysis mode). This eyeball behavior analysis is performed as the passive monitoring described above: for example, PERCLOS (percentage of eyelid closure), saccades, fixation, and the like are observed and analyzed in order to determine the drowsiness and fatigue of the driver.
- In addition, the analysis frequency may be dynamically switched according to the automatic driving level (automatic driving level 3 or automatic driving level 4); for example, the analysis may be executed more frequently at automatic driving level 3 than at automatic driving level 4. As explained earlier, at automatic driving level 3, in order to ensure safe driving, the driver is expected to always be in a state of readiness in which he or she can immediately return to manual driving. Therefore, at automatic driving level 3, it is preferable that eyeball behavior analysis be performed frequently in order to detect drowsiness and fatigue and to determine whether the driver can immediately return to manual driving.
- Furthermore, in order to acquire teacher data for learning by the eyeball behavior learner 302, which will be described later, the eyeball behavior analysis unit 300 may, for example, perform eyeball behavior analysis at a high frame rate or a low frame rate in a period whose time length is short compared to the time length of the eyeball behavior analysis period in the above-described preparation mode period.
- In addition, the eyeball behavior analysis unit 300 may execute eyeball behavior analysis at the high frame rate (first analysis mode) when a decrease in the driver's arousal level is detected by passive monitoring in the automatic driving mode.
- the analysis result in this case becomes teacher data (teacher data labeled as eyeball behavior when the arousal level is lowered) for learning by the eyeball behavior learner 302, which will be described later.
- Further, when the driving mode is the manual driving mode, the eyeball behavior analysis unit 300 analyzes the eyeball behavior at the low frame rate (second analysis mode).
- This eyeball behavior analysis is performed as the passive monitoring described above: for example, PERCLOS, saccades, fixation, and the like are observed and analyzed in order to determine the drowsiness and fatigue of the driver.
- In addition, in a situation where it is recognized, based on the driver's driving behavior, that the driver is driving manually in a normal manner, the eyeball behavior analysis unit 300 may perform eyeball behavior analysis in order to acquire teacher data for learning by the eyeball behavior learner 302, which will be described later (teacher data labeled as eyeball behavior when the arousal level is normal).
- In order to acquire this teacher data, the eyeball behavior analysis unit 300 performs, for example, eyeball behavior analysis at a high frame rate or a low frame rate in a period whose time length is short compared to the time length of the eyeball behavior analysis period in the preparation mode period. Further, even in the manual driving mode, when a decrease in the driver's arousal level is detected, the eyeball behavior analysis unit 300 may execute eyeball behavior analysis at the high frame rate (first analysis mode). The analysis result in this case also becomes teacher data for learning by the eyeball behavior learner 302 described later (teacher data labeled as eyeball behavior when the arousal level is lowered).
- As described above, in the present embodiment, the eyeball behavior analysis unit 300 does not always perform eyeball behavior analysis at the high frame rate, so the drive load induced by the analysis can be reduced. Furthermore, in the present embodiment, the eyeball behavior analysis unit 300 executes eyeball behavior analysis at a high detection rate when necessary, so that the driver's arousal level (recovery response level) can be determined accurately. Continuously driving the imaging unit at high speed at unnecessary timings is not only wasteful: the heat generated by the imaging device and the signal transmission becomes a noise factor, with the detrimental effect of reducing imaging sensitivity at the very timing when the necessary high-speed imaging observation is required.
- The eyeball behavior learner 302 learns, as teacher data, analysis results of the driver's eyeball behavior labeled with each arousal level acquired in the past, generates a database (DB) 310 for the determination by the determination unit 320 described later, and outputs it to the storage unit 111 (see FIG. 5).
- The eyeball behavior learner 302 can be a supervised learner such as support vector regression or a deep neural network.
- Specifically, the analysis result (eyeball behavior) and the labeled arousal level (normal or lowered) are input to the eyeball behavior learner 302 as an input signal and a teacher signal, respectively, and the eyeball behavior learner 302 performs machine learning on the relationship between these pieces of input information according to predetermined rules. The eyeball behavior learner 302 receives a plurality of such pairs of input signals and teacher signals, performs machine learning on them, and generates a database (DB) 310 that stores relationship information indicating the relationship between the analysis result (eyeball behavior) and the arousal level.
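The input-signal/teacher-signal pairing above can be sketched with a tiny learner. The text names support vector regression or a deep neural network; the logistic regression below is a simplified stand-in playing the same role, and the two features (microsaccade rate, mean fixation time) and toy labels are assumptions for illustration.

```python
import numpy as np

# Sketch of the eyeball behavior learner 302: map eyeball-behavior features to
# arousal "normal" (1) vs "lowered" (0) from labeled teacher data.

def train_arousal_model(X, y, lr=0.5, epochs=2000):
    """X: (n, d) feature rows, e.g. [microsaccade rate, mean fixation time].
    y: 1 = normal arousal, 0 = lowered arousal (the teacher signal)."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid prediction
        g = p - y                                 # gradient of the log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def predict_arousal(w, b, x):
    """P(normal arousal) for one feature row."""
    return 1.0 / (1.0 + np.exp(-(np.dot(x, w) + b)))

# Toy teacher data: high microsaccade rate ~ alert, low ~ drowsy (assumption).
X = np.array([[1.2, 0.3], [1.0, 0.4], [0.2, 1.1], [0.3, 0.9]])
y = np.array([1, 1, 0, 0])
w, b = train_arousal_model(X, y)
```

The fitted parameters stand in for one driver's entry in the DB 310; per-driver models follow directly by training on that driver's labeled history.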
- The generated DB 310 is not limited to being stored in the storage unit 111, and may be stored in a cloud server (not shown) in association with identification information for identifying the driver.
- In this way, the stored DB 310 can be used even in a different vehicle, for example when the driver drives a commercial vehicle or uses a shared car or a rental car. Furthermore, it is preferable that the information in the DB 310 be always kept up to date regardless of where it is stored.
- the evaluation criterion may be further normalized according to the vehicle.
- Note that, for the sake of explanation, the learning and the generation of the database (DB) are described here as a binary choice between a normal and a lowered arousal level; however, the labels may be further subdivided and classified by recovery quality, and learning may be performed in association with other driver-state-transition information, in order to improve the accuracy of predicting the driver's awakening and recovery quality from observable information acquired by observation means other than eyeball behavior.
- The determination unit 320 determines the driver's arousal level (recovery response level) based on the eyeball behavior analysis result from the eyeball behavior analysis unit 300. For example, if it is confirmed that the driver is performing saccades, fixation, microsaccades, or other eyeball behaviors directed at problem solving, the determination unit 320 can determine that the driver's arousal level is high. On the other hand, if these eyeball behaviors are not observed, or are few, the determination unit 320 can determine that the driver's arousal level is low.
- eyeball behavior shows different behavior when a person is in a normal arousal state and when a person is in a state of reduced consciousness and arousal. Furthermore, each person exhibits characteristic behavior. Therefore, in the present embodiment, the determination unit 320 performs determination by referring to the database (DB) 310 generated by the eyeball behavior learning device 302 in association with each driver. More specifically, in the present embodiment, the determination unit 320 stores the analysis result of the eyeball behavior of the driver during the preparation mode period in a database ( DB) 310 to determine the arousal level (recovery response level). Therefore, in the present embodiment, determination is made by referring to the eyeball behavior specific to each driver obtained based on learning, so that the accuracy of the determination can be improved. In addition, the difference in the characteristics of each person may be due to the visual characteristics of each person, or because visual confirmation works based on the visual memory of past risk factors, the manifestation of such differences is infinitely different.
- the determination unit 320 can output its determination results to the situation prediction unit 154 (see FIG. 5) and the planning unit (moving body operation control unit) 134 (see FIG. 5).
- the planning unit 134 may then plan switching of the operation mode based on the determination result of the determination unit 320.
- FIG. 9 is an explanatory diagram for explaining details of an operation example of the eyeball behavior analysis unit 300 according to the embodiment of the present disclosure.
- in FIG. 9, the left end is the starting point (departure point), and the right end is the destination point (destination).
- passive monitoring is performed intermittently at a predetermined frequency regardless of the operating mode.
- passive monitoring includes eyeball behavior analysis at a low frame rate; this analysis includes, for example, PERCLOS evaluation by observing the entire eye, also observes detailed eyeball behaviors such as saccades and fixations, and is performed to determine driver drowsiness and fatigue.
- in contrast, for detailed eyeball behavior analysis (for example, of microsaccades), the eyeball behavior analysis unit 300 executes the analysis at a high frame rate.
- when passive monitoring detects a decrease in the driver's arousal level in a driving section in the manual driving mode of automatic driving level 2, shown on the left side of FIG. 9, passive monitoring may be performed more frequently, and eyeball behavior analysis (for example, of microsaccades) may be performed at a higher frame rate.
- the analysis result of the eyeball behavior in this case becomes teacher data for the eyeball behavior learner 302 (teacher data labeled as eyeball behavior when the arousal level is lowered).
- a warning or notification may be issued to the driver upon detection of a decrease in the driver's arousal level, and active monitoring can then be performed by observing the driver's conscious response to the warning or notification.
- for example, the notification may be performed by the vehicle control system 100 executing steering control with an unnecessary steering amount as active information; in that case, the act of the driver adjusting the steering back to an appropriate steering amount is a conscious response.
- active monitoring is not limited to being executed when the driver's arousal level is lowered; as indicated by the dashed line in FIG. 9, it may also be executed at other times.
- in this section, the driver is not constantly required to make the high-level road-environment perception judgments needed for manual driving.
- therefore, eyeball behavior analysis may be performed periodically.
- the driver may leave the driver's seat in the driving section in the automatic driving mode of automatic driving level 4. During this period it is difficult to perform regular eyeball behavior analysis (a difficult observation period), so passive monitoring may be performed more frequently.
- eyeball behavior analysis is then performed intensively at a high frame rate. The time length of the eyeball behavior analysis period is preferably determined so that there is sufficient time to determine the driver's arousal level with high accuracy before the vehicle reaches the driving mode switching point set on the route, according to the automatic driving level based on the LDM and the like. Therefore, in the present embodiment, the starting point (monitoring point) of the eyeball behavior analysis period is set according to the schedule (journey), the LDM, road conditions, driving speed, vehicle type (trailer, general passenger car), the driver's seating state (status information), and so on; that is, the time length of the period changes dynamically.
- because eyeball behavior analysis is not always performed at a high frame rate, loads such as imaging processing and analysis processing can be reduced. Furthermore, since the analysis is performed at a high frame rate when necessary, the driver's arousal level (recovery reaction level) can be determined with high accuracy.
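The dynamic choice of the monitoring point described above can be sketched as a simple lead-time calculation; the function name, the factor parameters, and all numeric values below are hypothetical illustrations, not values from the disclosure.

```python
# Hypothetical sketch of choosing the "monitoring point": start intensive
# high-frame-rate eyeball analysis early enough that the arousal determination
# finishes with margin before the driving-mode switching point.

def monitoring_start(switch_eta_s, base_analysis_s=20.0,
                     vehicle_factor=1.0, driver_factor=1.0, margin_s=10.0):
    """Seconds before the switching point at which intensive analysis should
    begin (clamped so that it cannot start earlier than the present moment)."""
    lead = base_analysis_s * vehicle_factor * driver_factor + margin_s
    return min(switch_eta_s, lead)

# A heavy trailer with a driver who had left the seat gets a longer lead time.
print(monitoring_start(300.0, vehicle_factor=1.5, driver_factor=2.0))  # -> 70.0
```

In practice the factors would be derived from the LDM, road conditions, speed, vehicle type, and seating state mentioned above.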
- various observations of the driver are continuously performed, trends are learned, and the recovery reaction level such as the arousal level is determined after considering the changes in the trends from time to time.
- the experience and history of the driver affect the driver's perception, cognition, judgment, behavior, and the like.
- the observed eyeball behavior changes greatly depending on how the driver perceives the necessity of returning (switching) from automatic driving to manual driving.
- when the driver lacks the information needed for a judgment, the search for visually available information increases; when sufficient information has already been obtained, the driver can move from search to action without searching for further information. Thus, when information is lacking, the driver directs his or her gaze to the target and searches for the missing information through recognition of individual pieces of visual information, repeating fixations and the like.
- when the driver makes a risk judgment on returning from automatic driving to manual driving, the driver visually searches for information that remains in memory, has a high risk, and is insufficient for making the judgment. For example, if the driver has been watching videos or operating a mobile device without looking ahead for a while, the driver first checks the front of the vehicle to grasp the situation, checks the lanes and obstacles that affect the direction of travel, keeps an eye on the movements of vehicles running in parallel and oncoming vehicles, performs procedures such as fixations to understand the situation, and confirms the message information of the return request RTI (notification). In addition, for example, in urban areas where pedestrians partly walk on the road, or in school zones where kindergarten children may run out, line-of-sight behavior for checking whether people are entering the road from its periphery becomes dominant.
- the system by which humans temporarily store and process information while performing cognitive tasks is called working memory.
- information necessary for human action judgment is accumulated and processed in working memory, but the capacity and retention period are believed to be limited. Specifically, information stored in working memory decays over time, and information of lower importance decays sooner, so working memory behaves like a dynamic cache memory.
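The dynamic-cache analogy can be made concrete with a toy decay model; the exponential form, time constant, and retention threshold here are illustrative assumptions for the sketch, not a claim about human memory.

```python
# Toy model of working memory as a decaying cache: each item's retention
# decays over time, and low-risk-importance items fall below a retention
# threshold sooner than high-importance ones.
import math

def retention(importance, age_s, tau=60.0):
    """Exponential decay scaled by risk importance (0..1); illustrative only."""
    return importance * math.exp(-age_s / tau)

def still_remembered(importance, age_s, threshold=0.2):
    return retention(importance, age_s) >= threshold

print(still_remembered(0.9, 30))  # high-risk item survives 30 s -> True
print(still_remembered(0.3, 30))  # low-risk item has already faded -> False
```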
- as the usable operational design domain expands, the need for drivers to constantly check the surrounding environment information necessary for safe manual driving will gradually decrease. As a result, the number of times the driver needs to check the area ahead of the vehicle while driving is reduced or even eliminated.
- as a result, the preliminary visual information about the road ahead required for judgment decreases.
- drivers regularly look ahead at the road during manual driving because stimuli weighted by risk importance, which can become risk factors, are present; when the need for such confirmation diminishes, the checking behavior is refrained from.
- as work unrelated to the driving and steering task increases, the fading working memory comes to lack risk factors, and as a result the need to regularly observe changes in the situation diminishes, as does the observation behavior itself. In the present embodiment, considering the characteristic of human working memory that information of low risk importance fades from storage over time, information prompting appropriate manual driving is provided and notified to the driver at appropriate timing, and the driver's condition or response is observed.
- visual information acquisition also includes the act of acquiring information for providing feedback on behavior control.
- the information necessary for action determination can be obtained by using a human-machine interface such as the means disclosed in Patent Document 5 filed by the present applicant.
- in this means, the driving route is divided into various sections (manual driving section, driver-intervention-required section, etc.), and the route is displayed to the driver with a different color and a different width for each section.
- in addition, approach information along the route is constantly updated and provided to the driver as the vehicle proceeds along its itinerary.
- as a result, the driver can perceive an approaching section as an imminent risk, because it is visually captured in the working memory of thinking as a risk.
- in other words, providing visual approach information containing semantic information serves as a stimulus, given to working memory, for confirming the situation before the takeover.
- the manner in which this update information is provided also affects the driver's visual behavior.
- information provided to the driver may be factored in as an influencing factor to evaluate the observed behavior.
- the driver will recognize the importance of the need to return from automatic driving to manual driving, accompanied by a sense of its approach in time.
- visual information necessary for return is accumulated in the working memory of the driver.
- then, the act of acquiring the missing information before moving to action is executed.
- that is, the presentation information of the human-machine interface effectively plants (memorizes) the prediction information required for driving in the driver's working memory, and by appropriately providing information that makes the driver feel the necessity of reconfirmation along the way, the decline of the driver's consciousness can be kept shallow. As a result, the driver's judgment is hastened, leading to a reduction in the driver's recovery delay time (time until manual driving recovery becomes possible) disclosed in Patent Document 3 above.
- the description above has focused mainly on the analysis of eyeball behavior; however, if the driver's arousal state is insufficient, feedback of the acquired information may be performed incompletely as described above, various behaviors other than eyeball behavior appear, and in some cases this may lead to excessive steering by the driver. A driver who uses the automatic driving function while regularly checking forecast information that predicts situations such as the point where automatic driving switches to manual driving, and a driver who entirely neglects such regular confirmation, will show different eyeball behaviors in response to the return request RTI (notification). Furthermore, if the driver does not have enough time to return to manual driving, so that the grasp of the situation is incomplete for manual driving when the driver shifts to steering behavior, feedback on the steering behavior will also be incomplete, and overshooting with an inappropriate steering amount, such as excessive steering wheel operation, may occur.
- that is, the vehicle control system 100 is built as a system in which the approach information presented along the itinerary, the additional information added to the presented risk information, the notifications to the driver, the actual return actions and the eyeball behaviors induced by these, the steering stability, and so on influence one another and together determine the driver's return behavior. Note that, although the explanation of observation learning here is mainly based on observation learning of eyeball behavior, the state estimation may be extended to use information other than eyeball behavior.
- in the vehicle control system 100, it is required to grasp the state of the driver's brain activity in order to determine the recovery response level. For example, if the driver temporarily takes a nap before returning to manual driving and his or her consciousness is completely separated from steering, the activity of referring to and judging from the memory necessary for grasping the situation (necessary for steering judgment) in the brain will decrease. To ascertain whether the activity for grasping the situation in the brain of a driver whose consciousness has once moved away from steering has returned to the level of consciousness present during steering, it would be necessary to directly observe activity in the brain.
- means for directly observing brain activity include fMRI (functional Magnetic Resonance Imaging) and EEG (electroencephalogram), but such large-scale means are difficult to use inside a vehicle.
- on the other hand, part of the brain activity for making judgments appears in the recognition and memorization of visual information. Therefore, by observing the driver's eyeball behavior (saccades, microsaccades, etc.), it is possible to estimate a part of the driver's brain activity.
- the central visual field used to see details is narrow, so when information is obtained from peripheral vision, sound, or the like, humans turn the central visual field in the corresponding direction.
- the inertia of the head and eyeballs is smaller than that of the entire body, so the time required for them to change direction is short, and eye movement can therefore be high-speed movement. Accurate observation of this high-speed eye movement is effective for accurately estimating brain activity. For example, analyzing saccadic behavior requires observing changes in the line of sight caused by high-speed rotation of the eyeball, which in turn requires coordinate detection with high temporal resolution.
- in view of the above, the present inventor came up with the idea of using an event vision sensor (EVS) to observe the eyeball behavior whose analysis determines the driver's return reaction level (awakening level).
- the EVS is an image sensor that sensitively detects changes in luminance. Owing to the logarithmic conversion characteristics of its photoelectric conversion, it has a wider dynamic range than general RGB sensors and IR sensors. Therefore, it can easily obtain edge information (the changing points of the luminance boundaries of a subject) over a wide range from dark to bright, which is difficult for frame-accumulation RGB sensors and IR sensors.
- the EVS has no concept of frame rate, and can output corresponding address information at a high data rate according to frequent changes in brightness.
- since the EVS sparsely outputs timestamp information and pixel information (coordinate information) only when a luminance change exceeds the threshold, its data amount is smaller than that of RGB sensors and IR sensors, and the burden of data transmission and arithmetic processing can be lightened.
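A rough comparison illustrates why the sparse output lightens the transmission burden; the resolution, frame rate, event rate, and byte sizes below are invented for this sketch and are not from the disclosure.

```python
# Back-of-envelope comparison of data volume: a frame sensor sends every pixel
# each frame, while an EVS sends only the pixels whose luminance changed.

def frame_sensor_bytes(width, height, fps, bytes_per_px=1, seconds=1.0):
    return int(width * height * fps * bytes_per_px * seconds)

def evs_bytes(events_per_s, bytes_per_event=8, seconds=1.0):
    # e.g. ~8 bytes for a timestamp + x/y coordinates + polarity
    return int(events_per_s * bytes_per_event * seconds)

full = frame_sensor_bytes(640, 480, 60)   # VGA frames at 60 fps
sparse = evs_bytes(200_000)               # 200k events/s around the pupil edge
print(full, sparse, full // sparse)       # -> 18432000 1600000 11
```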
- FIG. 10 is a block diagram showing an example configuration of the EVS 400 used in the embodiment of the present disclosure, and FIG. 11 is a block diagram showing an example configuration of a pixel 502 located in the pixel array section 500 of the EVS 400 shown in FIG. 10.
- the EVS 400 has a pixel array section 500 configured by arranging a plurality of pixels 502 (see FIG. 11) in a matrix.
- Each pixel 502 can generate a voltage corresponding to a photocurrent generated by photoelectric conversion as a pixel signal.
- each pixel 502 can detect the presence or absence of an event by comparing a change in photocurrent corresponding to a change in luminance of incident light with a predetermined threshold. In other words, pixel 502 can detect an event based on a luminance change exceeding a predetermined threshold.
- the EVS 400 has a drive circuit 411, an arbiter section (arbitration section) 413, a column processing section 414, and a signal processing section 412 as peripheral circuit sections of the pixel array section 500.
- when detecting an event, each pixel 502 can output to the arbiter unit 413 a request to output event data representing the occurrence of the event. Then, each pixel 502 outputs the event data to the drive circuit 411 and the signal processing unit 412 when it receives, from the arbiter unit 413, a response indicating permission to output the event data. In addition, the pixel 502 that has detected the event outputs a pixel signal generated by photoelectric conversion to the column processing unit 414.
- the driving circuit 411 can drive each pixel 502 of the pixel array section 500 .
- specifically, the drive circuit 411 drives the pixel 502 that has detected an event and output event data, so that the pixel signal of that pixel 502 is output to the column processing unit 414.
- the arbiter unit 413 arbitrates the requests for output of event data supplied from each pixel 502, responds based on the arbitration result (permission or non-permission of event data output), and can send a reset signal for resetting event detection to the pixel 502.
- the column processing unit 414 can perform processing for converting analog pixel signals output from the pixels 502 of the corresponding column into digital signals for each column of the pixel array unit 500 .
- the column processing unit 414 can also perform CDS (Correlated Double Sampling) processing on digitized pixel signals.
- the signal processing unit 412 performs predetermined signal processing on the digitized pixel signals supplied from the column processing unit 414 and on the event data output from the pixel array unit 500, and can output the signal-processed event data (timestamp information, etc.) and pixel signals.
- a change in the photocurrent generated by the pixel 502 can be regarded as a change in the amount of light (luminance change) incident on the pixel 502 . Therefore, an event can also be said to be a luminance change of pixel 502 exceeding a predetermined threshold. Furthermore, the event data representing the occurrence of an event can include at least positional information such as coordinates representing the position of the pixel 502 where the light intensity change as the event has occurred.
- each pixel 502 has a light receiving section 504 , a pixel signal generation section 506 and a detection section (event detection section) 508 .
- the light receiving unit 504 can photoelectrically convert incident light to generate a photocurrent. Then, the light receiving unit 504 can supply a voltage signal corresponding to the photocurrent to either the pixel signal generating unit 506 or the detecting unit 508 under the control of the driving circuit 411 .
- the pixel signal generation unit 506 can generate the signal supplied from the light receiving unit 504 as a pixel signal. Then, the pixel signal generation unit 506 can supply the generated analog pixel signals to the column processing unit 414 via vertical signal lines VSL (not shown) corresponding to columns of the pixel array unit 500 .
- the detection unit 508 can detect whether an event has occurred based on whether the amount of change in photocurrent from the light receiving unit 504 (that is, the amount of change in luminance of the light incident on the pixel 502) exceeds a predetermined threshold.
- the events can include, for example, an ON event indicating that the amount of change in photocurrent has exceeded the upper threshold, and an OFF event indicating that the amount of change has fallen below the lower threshold. Note that the detection unit 508 may detect only on-events.
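A minimal sketch of this per-pixel ON/OFF decision follows, assuming a log-luminance change compared against symmetric thresholds; the field names and the threshold value are illustrative choices, not the sensor's actual interface.

```python
# Sketch of per-pixel event detection: an ON event when the luminance change
# exceeds the upper threshold, an OFF event when it falls below the lower one,
# and nothing otherwise (no fixed frame rate is involved).
from dataclasses import dataclass

@dataclass
class Event:
    t_us: int       # timestamp in microseconds
    x: int          # pixel column
    y: int          # pixel row
    polarity: int   # +1 ON event, -1 OFF event

def emit_events(pixel_log_lum, last_log_lum, t_us, x, y, threshold=0.15):
    """Return [] or a one-element event list for a single pixel update."""
    delta = pixel_log_lum - last_log_lum
    if delta >= threshold:
        return [Event(t_us, x, y, +1)]
    if delta <= -threshold:
        return [Event(t_us, x, y, -1)]
    return []

print(emit_events(1.30, 1.10, 1000, 12, 7))   # ON event
print(emit_events(1.12, 1.10, 1001, 12, 7))   # below threshold -> no event
```

A detector configured for ON events only, as the text permits, would simply drop the second branch.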
- the detection unit 508 can output to the arbiter unit 413 a request to output event data representing the occurrence of the event. Then, when receiving a response to the request from the arbiter unit 413 , the detection unit 508 can output event data to the drive circuit 411 and the signal processing unit 412 .
- when the analysis is specialized for detecting the pupils among the parts of the driver's face and is used for behavior analysis of that area, the arbiter unit 413 may be provided with a readout function limited to an ROI (Region of Interest) that it defines and allocates.
- furthermore, since the target area is a quasi-circular ellipse, the target area of the search arbiter may be further defined by a center point and a radius, reducing the amount of read data.
- EVS 400 detects an event based on a change in luminance exceeding a predetermined threshold, as described above. Therefore, for example, when the threshold value is decreased, the EVS 400 detects even a small change in luminance as an event, so that many detected events occur and the data amount of the detected events increases. As a result, the amount of data exceeds the transmission band of the interface, and a problem may arise in that the data cannot be transmitted to other functional blocks or the like.
- conversely, when the threshold value is increased, the events detected by the EVS 400 become sparse with respect to the rotation of the eyeball. Since the observation consists of a group of coordinate points that moves over time with the movement of the pupil boundary line, a problem may occur in that the movement of the point cloud cannot be observed with high accuracy.
- the inventor of the present invention has created the embodiment of the present disclosure in view of such circumstances.
- specifically, in the present embodiment, the observation of eyeball behavior can be optimized by adjusting the threshold that defines the frequency of event detection of the EVS 400.
- in addition, the lighting devices that illuminate the driver may be dynamically adjusted accordingly.
- since the illumination device can be adjusted arbitrarily, for example by a shutter of the light source device, it has the merit of high flexibility in performing such optimization adjustment.
- in the present embodiment, the threshold of event detection of the EVS 400 is adjusted at the timing when the eyeball behavior needs to be observed. By doing so, saccade behavior and microsaccade behavior can be captured efficiently and accurately while suppressing an increase in the amount of data and reducing the load of arithmetic processing and transmission.
- furthermore, in the present embodiment, at the timing when the eyeball behavior needs to be observed, a filter that selectively transmits narrow-band wavelength light is used to illuminate the driver's face with light of that narrow-band wavelength.
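The threshold adjustment described above could, for example, take the form of a simple feedback rule; the gains, bounds, and target rates below are assumptions for illustration, not values from the disclosure.

```python
# Hypothetical feedback rule for the event threshold: raise it when the
# measured event rate would exceed the transmission budget, lower it when
# events become too sparse to track the pupil boundary, and clamp to limits.

def adapt_threshold(threshold, event_rate, target_rate,
                    step=0.01, t_min=0.05, t_max=0.5):
    if event_rate > 1.2 * target_rate:      # too dense: risk of saturating the link
        threshold += step
    elif event_rate < 0.8 * target_rate:    # too sparse: point cloud untrackable
        threshold -= step
    return min(max(threshold, t_min), t_max)

t = adapt_threshold(0.15, event_rate=400_000, target_rate=200_000)
print(round(t, 3))  # -> 0.16
```

Run once per readout interval, such a loop keeps the event rate near a budget chosen for the interface bandwidth.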
- FIGS. 12 to 14 are explanatory diagrams for explaining the installation positions of the imaging devices 700 and 702 according to this embodiment.
- the imaging device 700 is an imaging device that mainly observes the position of the driver 900 inside the vehicle 600, and also observes the face and sitting posture of the driver 900 sitting in the driver's seat 602 (see FIG. 13).
- the imaging device 700 can be an RGB camera (visible light camera) or a ToF camera.
- the imaging device 700 may be the EVS 400 .
- the imaging device 700 is preferably provided on the upper front side of the driver's seat 602 inside the vehicle 600 .
- the imaging device 702 is an imaging device that captures an image of the inside of the vehicle 600; more specifically, it is an imaging device consisting of the EVS 400 that mainly observes the eyeball behavior (saccades, fixations, microsaccades, etc.) of the driver 900. Note that in this embodiment, the imaging device 702 may observe not only the eyeball behavior but also the condition of the face of the driver 900 and various conditions inside the vehicle 600. As shown in FIG. 12, the imaging device 702 is preferably provided in the vehicle 600 near the lower front of the driver's seat 602, below the steering wheel.
- as shown in FIG. 13, the imaging device 700 is provided above and in front of the driver's seat 602 in the vehicle 600, and has a wide angle of view so that it can mainly capture the figure of the driver 900 seated in the driver's seat 602.
- on the other hand, the imaging device 702 is provided facing the face of the driver 900 so that it can mainly capture the face of the driver 900 seated in the driver's seat 602. More specifically, as shown in FIG. 13, the imaging device 702 is preferably arranged below the driver's horizontal, far-infinity line-of-sight direction 902. Furthermore, so that the imaging device 702 can observe the pupils of the driver 900 from the front, the angle θ between the line segment connecting the eyeballs of the driver 900 with the imaging device 702 and the line-of-sight direction 902 is preferably set to 10 degrees or more and less than 30 degrees.
- note that if the camera can be arranged without obstructing the driver's 900 direct field of vision using a half mirror, a wavelength-selective mirror, or the like, the angle θ formed with the optical axis of the imaging device does not have to be 10 degrees or more.
- the imaging device 702 preferably has an angle of view that can capture at least the face of the driver 900 seated in the driver's seat 602 when viewed from above the vehicle 600 .
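The recommended geometry (angle θ of 10 degrees or more and less than 30 degrees below the line of sight) can be checked with a small calculation; the coordinate convention and the example positions are assumptions made for the sketch.

```python
# Check whether a candidate camera position satisfies the depression-angle
# recommendation relative to the driver's horizontal forward gaze.
import math

def depression_angle_deg(eye_xyz, cam_xyz):
    """Angle below the horizontal forward gaze from the eyes to the camera."""
    dx = cam_xyz[0] - eye_xyz[0]   # forward distance to the camera
    dz = eye_xyz[2] - cam_xyz[2]   # how far the camera sits below the eyes
    return math.degrees(math.atan2(dz, dx))

def placement_ok(eye_xyz, cam_xyz):
    a = depression_angle_deg(eye_xyz, cam_xyz)
    return 10.0 <= a < 30.0

# Eyes 1.2 m above the floor; camera 0.8 m ahead at 0.9 m height (below the wheel).
print(placement_ok((0.0, 0.0, 1.2), (0.8, 0.0, 0.9)))  # -> True (about 20.6 degrees)
```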
- FIG. 15 is an explanatory diagram for explaining an example of a unit that observes the eyeball behavior of the driver 900 according to this embodiment.
- the unit that observes the eyeball behavior of the driver 900 includes the imaging device 702 described above, a part of the in-vehicle information detection unit 142, and an illumination unit 710 that illuminates at least the face of the driver 900.
- with these functional blocks, observation of the eyeball behavior of the driver 900 according to the present embodiment is performed.
- Each functional block shown in FIG. 15 will be sequentially described below.
- the imaging device 702 is an imaging device that observes the eyeball behavior of the driver 900 as described above, and is composed of the EVS 400 .
- the imaging device 702 is controlled by the sensor control unit 330 to be described later, and outputs observed data to the eyeball behavior analysis unit 300 .
- Illumination unit 710 is provided in vehicle 600 and can irradiate the face of driver 900 with light of a predetermined wavelength (for example, near-infrared light) when capturing eyeball behavior of driver 900 .
- more specifically, in the present embodiment, a filter that mainly transmits light with a wavelength of 940 nm or more and 960 nm or less is disposed in the lens of the imaging device, and the illumination unit 710 irradiates the face of the driver 900 with light of this narrow-band wavelength at the timing when the eyeball behavior needs to be observed, so that the influence of external light can be suppressed.
- since near-infrared light contains little external-light (sunlight) noise, it can make the shading boundary between the iris and the pupil of the eye distinct, so eyeball behavior (saccade behavior) can be observed selectively, efficiently, and accurately.
- in addition, the intensity, irradiation time, irradiation interval, and the like of the light emitted by the illumination unit 710 may be adjusted by the illumination control unit 332, described later, so that the eyeball behavior of the driver 900 can be captured more easily.
- note that wavelength settings other than the above may be used in accordance with the type of light source used and the wavelength sensitivity distribution of the light receiving section of the imaging device.
- when the imaging device 702 is used as an imaging device that only observes eyeball behavior, the above filter may be applied to the imaging device 702; by doing so, the imaging device 702 can capture only the reflection of near-infrared light, which contains little external-light noise.
- the sensor control section 330 is provided inside the in-vehicle information detection section 142 and controls the imaging device 702. Specifically, when capturing the eyeball behavior of the driver 900, the sensor control unit 330 changes the threshold that defines the event detection frequency of the imaging device 702, which is the EVS 400. By doing so, the imaging device 702 can perform event detection at the optimum frequency for accurately capturing eyeball behaviors such as saccades, microsaccades and drifts, blink detection, changes in facial expressions, and the like, while suppressing an increase in the amount of data.
- even if the imaging device 702 has the function of capturing not only the eyeball behavior of the driver 900 but also the position and posture of the driver 900, temporarily optimizing the threshold for observing eyeball behavior makes it possible to observe the eyeball behavior with high accuracy.
- the sensor control unit 330 may, for example, set as an ROI (Region of Interest) a region in the pixel array unit 500 of the imaging device 702 that captures the face or eyeballs of the driver 900, and change the threshold of the pixels 502 in that region so that only the data of those pixels 502 are output. Further, the sensor control section 330 may have the eyeball behavior analysis section 300 analyze the observed data and change the threshold based on the analysis result received as feedback.
- the illumination control unit 332 is provided in the in-vehicle information detection unit 142 and controls the illumination unit 710. Specifically, the illumination control unit 332 controls the intensity, irradiation time, or irradiation interval of the light from the illumination unit 710 when capturing the eyeball behavior of the driver 900. By doing so, it becomes easier for the imaging device 702 to capture the eyeball behavior of the driver 900. Furthermore, the illumination control unit 332 may have the eyeball behavior analysis unit 300 analyze the observed data, and control the intensity, irradiation time, or irradiation interval of the light from the illumination unit 710 based on the analysis result received as feedback.
- the eyeball behavior analysis unit 300 analyzes the data observed by the imaging device 702. Specifically, the eyeball behavior analysis unit 300 can distinguish the type of eyeball behavior (saccade, microsaccade, etc.) based on the shape of the point distribution consisting of the position coordinates of the pixels 502 of the imaging device 702 that detected events. More specifically, the eyeball behavior analysis unit 300 determines that a microsaccade has occurred when the appearance of a point distribution having a crescent shape is detected. Furthermore, in the present embodiment, the analysis result of the eyeball behavior analysis unit 300 is used for the determination by the determination unit 320 (determination of the driver's return response level), and the driving mode is switched based on the determination result of the determination unit 320.
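One way the crescent-shaped point distribution could be tested is by measuring how much of the pupil circumference the event points cover: during a rapid shift, events cluster on one side of the pupil edge. The angular-coverage heuristic, the assumed pupil center, and the 240-degree limit below are illustrative choices, not the disclosed analysis.

```python
# Toy crescent test: a microsaccade-like burst leaves events on only part of
# the pupil circumference, so their angular coverage around the pupil center
# is far below a full circle.
import math

def angular_coverage_deg(points, center, bin_deg=10):
    """Degrees of the full circle occupied by event points, in bin_deg buckets."""
    cx, cy = center
    bins = set()
    for x, y in points:
        ang = math.degrees(math.atan2(y - cy, x - cx)) % 360.0
        bins.add(int(ang // bin_deg))
    return len(bins) * bin_deg

def looks_like_microsaccade(points, center, max_coverage_deg=240):
    return angular_coverage_deg(points, center) <= max_coverage_deg

# Events only on one side of the pupil edge form a crescent.
crescent = [(10 + 5 * math.cos(a), 10 + 5 * math.sin(a))
            for a in [i * 0.1 - 0.8 for i in range(17)]]
print(looks_like_microsaccade(crescent, (10, 10)))  # -> True
```

A full ring of events (e.g. a slow pupil-size change) would cover most of the circle and fail the test.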
- FIGS. 16 and 17 are flowcharts for explaining an example of the information processing method according to this embodiment, and FIG. 18 is an explanatory diagram for explaining an example of observation data observed by the imaging device 702.
- the information processing method according to this embodiment can include steps from step S31 to step S44. Details of each of these steps according to the present embodiment will be described below.
- the vehicle control system 100 evaluates the transition of the driver 900 from leaving the seat to returning to the driving posture in the driver's seat 602 (step S31).
- The vehicle control system 100 determines whether the driver 900 is seated on the driver's seat 602 and has returned to the driving posture (step S32). If the driver 900 is seated on the driver's seat 602 and has returned to the driving posture (step S32: Yes), the process proceeds to step S33; if the driver 900 is not seated on the driver's seat 602 or has not returned to the driving posture (step S32: No), the process returns to step S31.
- the position and posture of the driver 900 may be specified by detecting the face of the driver 900 from observation data observed by the imaging devices 700 and 702 described above.
- the vehicle control system 100 confirms the driving posture of the driver 900 and identifies the positions of the driver's 900 face and eye regions (step S33).
- In this way, the positions of the face and eyeballs of the driver 900 may be specified.
- Next, the vehicle control system 100 sets, as an ROI (Region of Interest), the region within the pixel array section 500 of the imaging device 702 that captures the face or eyeballs, according to the position of the face or eyeballs of the driver 900 (step S34).
- Since the imaging device 702 has the function of capturing not only the eyeball behavior of the driver 900 but also the position and posture of the driver 900, temporarily optimizing the threshold for observing eyeball behavior makes it possible to observe the eyeball behavior with high accuracy.
- Next, the vehicle control system 100 adjusts the threshold (event detection threshold) that defines the event detection frequency of the imaging device 702 (step S35). At this time, only the threshold values of the pixels 502 corresponding to the ROI set in step S34 described above may be adjusted.
- The threshold may be set to a predetermined value according to the eyeball behavior (saccade, microsaccade) to be observed, or may be set according to attribute information of the driver 900 (iris color, age, gender, etc.).
- The threshold value may also be set according to the time of day of driving, the weather, and the influence of outside light expected from the surrounding conditions of the road (topography, tunnels, etc.) during driving. Furthermore, in the present embodiment, the threshold may be set based on tendencies in the observation data of the eyeball behavior of the driver 900 that are machine-learned over successive drives.
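As an aid to understanding, the ROI-restricted threshold adjustment of steps S34 and S35 can be sketched in a few lines of code. The array layout, function names, and numeric values below are illustrative assumptions for explanation and are not part of the embodiment itself; in practice the values would come from the attribute information and learned tendencies described above.

```python
# Sketch of ROI-restricted event-detection threshold adjustment (steps S34-S35).
# Sensor geometry, default threshold, and API are illustrative assumptions.

def make_threshold_map(width, height, default):
    """Per-pixel event detection thresholds for the pixel array section 500."""
    return [[default for _ in range(width)] for _ in range(height)]

def set_roi_threshold(thresholds, roi, value):
    """Adjust the threshold only for pixels 502 inside the ROI.

    roi is (x, y, w, h) in pixel coordinates; pixels outside keep their value.
    """
    x, y, w, h = roi
    for row in range(y, y + h):
        for col in range(x, x + w):
            thresholds[row][col] = value
    return thresholds

# Example: an 8x8 array with default threshold 50; the eye-region ROI is
# lowered to 20 so that small luminance changes there are reported as events.
tmap = make_threshold_map(8, 8, 50)
set_roi_threshold(tmap, (2, 3, 4, 2), 20)
```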
- the imaging device 702 detects, as an event, the change in luminance caused by this movement.
- Since the imaging device 702, which is the EVS 400, is set to a threshold value suitable for observing microsaccades, the microsaccades can be captured with high accuracy from the event point distribution (point cloud).
- Microsaccades are caused by the extraocular skeletal muscles that rotate the eyeball: the superior/inferior rectus muscles, the lateral/medial rectus muscles, and the superior/inferior oblique muscles.
- Lateral movement of the pupil is caused mainly by the superior/inferior rectus muscles and the lateral/medial rectus muscles. What is important here is to capture microsaccades, which are exploratory movements, as a manifestation of the thinking activity necessary for brain activity based on visual information.
- Next, the vehicle control system 100 adjusts the illumination unit (light source) (step S36).
- Specifically, the vehicle control system 100 adjusts the intensity, irradiation time, or irradiation interval of the light from the illumination unit 710. In this embodiment, doing so makes it easier for the imaging device 702 to capture the eyeball behavior of the driver 900.
- Next, the vehicle control system 100 determines whether event data distributed in a crescent shape is detected as eyeball behavior from the observed data (step S37). For example, when the eyeball rotates at high speed in a microsaccade as shown in FIG. 18, the set of event points generated during that period is detected as a crescent-shaped distribution. As the eyeball rotates, the region with the largest amount of crossing movement on the imaging plane is the thick central part of the crescent, and the sum of the events that occurred appears as such a crescent-shaped region. For microsaccades, the individual rotations are primarily random wiggles within the fixation. If event data distributed in a crescent shape is detected (step S37: Yes), the process proceeds to step S38.
- If event data distributed in a crescent shape is not detected (step S37: No), the process returns to steps S35 and S36.
- The reason the set of points generated by the movement of the eyeball is distributed as a crescent is that the pupil boundary sweeps across the imaging plane over a certain period of time, and the generated event points accumulate along that sweep.
- When illustrated graphically with a finer time resolution, the distribution becomes a circular arc with a varying line width, but it is expressed as a crescent moon here in order to explain the principle of the present embodiment in an easy-to-understand manner.
- The actual pupil color, shape, and sharpness of the boundaries vary from person to person, and also vary depending on lighting conditions, usable light source wavelengths, the passband widths of narrow-band filters, and other factors.
- Therefore, the time-series set of detected point clouds may not form a crescent with a clear contour. In this description, "detecting a crescent-shaped distribution" is used to include detecting the events that occur during such micro eyeball behavior even when a distinct crescent region does not appear. That is, in the present embodiment, the shape of the detected event data distribution may be, for example, a crescent shape or another shape, and is not particularly limited.
- FIG. 18 schematically shows a crescent-shaped dark-to-light (left) point cloud and a crescent-shaped light-to-dark (right) point cloud generated by microsaccades. While the driver 900 repeatedly refers to and interprets memory, fixation with the random eye movement of microsaccades causes the imaging device 702 to capture a crescent-shaped point cloud, and the boundary line of the pupil is observed in its vicinity. In other words, in the case of microsaccades, a point cloud with a shape different from the wide point cloud (for example, with the width of the pupil) caused by a large saccade movement between normal fixations is observed. By observing the crescent-shaped point cloud, it is possible to observe the expression of thinking activity in the brain.
- Saccades and microsaccades can be distinguished by the shape of the observed point cloud.
- A saccade, which moves the direction of fixation to an object at a significantly different position, is observed as a thick belt-shaped point cloud extending in a certain direction along the movement of the saccade.
- In contrast, a microsaccade during fixation in a detailed information search is observed as a crescent-shaped point cloud that fluctuates rapidly around the fixation direction. Therefore, saccades and microsaccades can be distinguished by the shape of the observed point cloud.
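The distinction just described, a thick belt-shaped point cloud for a saccade versus a compact arc-like crescent for a microsaccade, can be illustrated with a toy classifier based on the elongation of the point cloud. The aspect-ratio feature and its threshold are simplifying assumptions for illustration only, not the actual analysis performed by the eyeball behavior analysis unit 300.

```python
import math

def principal_spreads(points):
    """Return (major, minor) standard deviations of a 2-D point cloud,
    from the closed-form eigenvalues of its 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    a = sum((p[0] - mx) ** 2 for p in points) / n
    c = sum((p[1] - my) ** 2 for p in points) / n
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    root = math.sqrt((a - c) ** 2 + 4 * b * b)
    l1, l2 = (a + c + root) / 2, (a + c - root) / 2
    return math.sqrt(l1), math.sqrt(max(l2, 0.0))

def classify_cloud(points, aspect_threshold=4.0):
    """Highly elongated (belt-like) clouds -> saccade; compact arc-like
    clouds -> microsaccade. The threshold is an illustrative assumption."""
    major, minor = principal_spreads(points)
    return "saccade" if major / max(minor, 1e-9) >= aspect_threshold else "microsaccade"

# A long thin belt along x (large saccade) vs. a crescent-like arc (microsaccade)
belt = [(t, 0.3 * (t % 2)) for t in range(60)]
arc = [(10 * math.cos(math.radians(d)), 10 * math.sin(math.radians(d)))
       for d in range(0, 181, 5)]
```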
- As described above, when event data distributed in a crescent shape is not detected (step S37: No), the event detection threshold of the imaging device 702, and the intensity, irradiation time, or irradiation interval of the light from the illumination unit 710, may be adjusted again based on the analysis result of the observation data from the imaging device 702.
- That is, the event detection threshold and the illumination light source are adjusted so that events occur at a frequency at which the movement of the pupil boundary accompanying the pupil rotation caused by a microsaccade can be grasped as a crescent-shaped distribution.
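The adjustment loop of steps S35 to S37 (tune the event detection threshold and the illumination, then test whether a crescent-shaped distribution appears) can be sketched as a simple search over candidate settings. The candidate values and the detector callback below are hypothetical placeholders, not parameters taken from the embodiment.

```python
def calibrate_event_detection(crescent_detected, thresholds, intensities):
    """Try (threshold, illumination intensity) pairs until the observation
    yields a crescent-shaped event distribution (step S37: Yes).

    crescent_detected: callback standing in for the shape analysis of step S37.
    Returns the first working pair, or None if no candidate works (which would
    correspond to falling back to the time-budget handling of step S42).
    """
    for th in thresholds:
        for lux in intensities:
            if crescent_detected(th, lux):
                return th, lux
    return None

# Hypothetical detector: events resolve only with a low threshold and bright light
result = calibrate_event_detection(
    lambda th, lux: th <= 30 and lux >= 80,
    thresholds=[50, 40, 30, 20],
    intensities=[60, 80, 100],
)
```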
- the imaging device 702 of the vehicle control system 100 outputs a coordinate point group (point cloud) of luminance change pixels (step S38).
- Next, the vehicle control system 100 performs a detailed analysis of the point group that constitutes the eyeball behavior, based on the point cloud output in step S38 (step S39).
- Next, the vehicle control system 100 performs a detailed evaluation of eyeball behaviors such as saccades (eyeball rotation), fixation, and microsaccades (minute eyeball rotation) (step S40).
- Since this analysis directly uses the limited data actually output to estimate (analytically evaluate) brain activity, it does not require high-load arithmetic processing such as the usual analysis of a series of multiple frame images in which the entire image is searched to detect the pupil.
- The difference between a saccade and a microsaccade is that the former extends continuously from its starting point as a belt-shaped point cloud toward the target direction.
- Next, the vehicle control system 100 determines whether the eyeball behavior of the driver 900 corresponds to eyeball behavior for visual confirmation, that is, to a saccade (eyeball rotation), fixation, or a microsaccade (minute eyeball rotation) occurring during fixation (step S41). If eyeball behavior for visual confirmation is detected (step S41: Yes), the process proceeds to step S43; if it is not detected (step S41: No), the process proceeds to step S42.
- Next, the vehicle control system 100 determines whether the time budget allowed for the wakefulness recovery determination (the allowable limit time beyond which there is a risk of exceeding the takeover limit point while automatic driving is maintained without activating the MRM) has reached its upper limit (step S42). If the retry allowance has reached its upper limit (step S42: Yes), the process proceeds to step S44; if not (step S42: No), the process returns to step S38. Then, the vehicle control system 100 determines whether the driver 900 has an arousal level at which manual driving can be resumed, and records the observation data as the cognitive response of the driver 900 (step S43). Alternatively, the vehicle control system 100 determines that it cannot be confirmed within the remaining time budget that the driver 900 has an arousal level at which manual driving can be resumed, and the vehicle 600 performs a process such as an emergency stop before entering the manual driving section (step S44).
- Step S43 is processed by a flow having the substeps shown in FIG. 17.
- Step S43 may include steps S51 to S55. Details of each of these substeps are described below.
- First, the vehicle control system 100 checks whether the determination of whether the driver 900 has an arousal level at which manual driving can be resumed has been performed (step S51). If the determination has been performed (step S51: Yes), the process proceeds to step S52; if it has not been performed (step S51: No), the above-described step S43 is performed and the process returns to step S51.
- Next, the vehicle control system 100 determines whether it has been determined that the driver 900 has an arousal level at which manual driving can be resumed (step S52). If it is determined that the driver 900 has such an arousal level (step S52: Yes), the process proceeds to step S53; if it is determined that the driver 900 does not (step S52: No), the process proceeds to step S54.
- the vehicle control system 100 switches from the automatic driving mode to the manual driving mode (more specifically, it starts preparatory processing for switching) (step S53).
- Next, the vehicle control system 100 determines whether the predetermined upper limit on the number of retries of the determination process has been reached (step S54). If the upper limit has been reached (step S54: Yes), the process proceeds to step S55; if not (step S54: No), the process returns to step S51. Note that the number of retries is set so as not to exceed the handover standby allowable limit point.
- Then, the vehicle control system 100 determines that the arousal level evaluation has failed, that is, that it cannot be confirmed that the driver 900 has an arousal level allowing the return to manual driving, and ends the process. In this case, the return to manual driving is not permitted, and the vehicle performs processing such as an emergency stop to avoid entering the manual driving section (step S55).
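Steps S51 to S55 amount to a bounded retry loop over the arousal determination. A minimal sketch, assuming a boolean evaluation callback and illustrative return codes (the names and codes below are assumptions for explanation, not the actual interfaces of the vehicle control system 100):

```python
def takeover_decision(evaluate_arousal, max_retries):
    """Bounded retry loop corresponding to steps S51-S55.

    evaluate_arousal() returns True when the driver 900 is judged to have an
    arousal level sufficient to resume manual driving, False otherwise.
    Returns "manual_driving" (step S53) or "emergency_stop" (step S55).
    """
    for _ in range(max_retries):
        if evaluate_arousal():          # steps S51-S52: determination performed
            return "manual_driving"     # step S53: switch driving modes
    return "emergency_stop"             # step S55: handover not permitted

# Hypothetical evaluations: the driver is judged awake on the third attempt
attempts = iter([False, False, True])
ok = takeover_decision(lambda: next(attempts), max_retries=5)

# Hypothetical evaluations: never judged awake within the retry limit
attempts2 = iter([False, False, False])
ng = takeover_decision(lambda: next(attempts2), max_retries=3)
```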
- The basis of this setting is a measure to prevent handover in a situation where the recovery of wakefulness is ambiguous; it does not unconditionally disable the override function. Rather, the basic purpose is to set a limit point up to which the situation can still be handled when an early recovery cannot be confirmed, and to invalidate the override beyond that point, thereby preventing accidents induced by a hasty return.
- FIG. 19 is an explanatory diagram for explaining a partial configuration of the vehicle control system 100 for executing the process of determining the awakening level of the driver 900.
- FIG. 19 shows the face tracking unit 204 and the eyeball behavior analysis unit 300 described above, the display information DB (Data Base) 800 stored in the storage unit 111, and the display information generation unit 802 and display unit 804 provided in the output control unit 105.
- The face tracking unit (Driver Facial Tracker) 204 detects movement information of the face and head of the driver 900 and outputs it to the display information generation unit 802.
- The display information generation unit 802 generates tasks to be displayed on the display unit 804 after confirming that the driver 900 is seated in the driver's seat 602. Specifically, for example, a task is generated that asks the driver to answer the number of small-animal silhouettes among a plurality of displayed silhouettes.
- The display information DB 800 stores data that can be used to generate various tasks. Specific examples of tasks are described later. Presenting tasks to the driver is not necessarily an essential step in normal use; however, some drivers neglect to check the surroundings even though they are awake, and some start the handover aimlessly without visually confirming the surrounding situation. By artificially encouraging the driver to grasp the situation again, microsaccades appear toward the visual confirmation targets required at that time, so this task generation is effective for estimating activity in the brain through the detection of microsaccades. In other words, this is an embodiment in which the task is presented artificially when the line-of-sight checking behavior for confirming the surroundings of the vehicle, which should occur naturally, cannot be expected.
- the driver 900 moves his or her line of sight to the task in order to obtain the answer to the task.
- the display unit 804 displays a visual task that requires judgment such as answering the number of silhouettes of small animals among a plurality of silhouettes.
- The driver 900 performs eye movements to acquire the information necessary to answer this task. For example, eyeball behaviors such as saccades (eyeball rotation), fixation, and microsaccades (minute eyeball rotation) are performed.
- the eye behavior analysis unit 300 recognizes the task generated by the display information generation unit 802 and then analyzes the eye behavior of the driver 900 observing the task displayed on the display unit 804 .
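One conceivable way for the analysis to relate the observed gaze to the displayed task is to count the fixations falling inside each task region (question text, silhouette grid, answer area). The screen layout, region names, and coordinates below are assumed purely for illustration and do not appear in the embodiment.

```python
def fixations_per_region(fixations, regions):
    """Count gaze fixation points inside each named task region.

    fixations: list of (x, y) fixation coordinates on the display unit 804.
    regions: dict name -> (x, y, w, h) bounding box (assumed coordinates).
    """
    counts = {name: 0 for name in regions}
    for fx, fy in fixations:
        for name, (x, y, w, h) in regions.items():
            if x <= fx < x + w and y <= fy < y + h:
                counts[name] += 1
    return counts

# Hypothetical layout: question text at top, silhouette grid centre, answers bottom
regions = {
    "question": (0, 0, 200, 40),
    "silhouettes": (0, 40, 200, 120),
    "answers": (0, 160, 200, 40),
}
gaze = [(100, 20), (50, 80), (120, 100), (90, 130), (100, 175)]
visits = fixations_per_region(gaze, regions)
```

A driver whose gaze never visits the silhouette grid, for example, could not have executed the search required to answer, regardless of the answer given.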
- The display unit 804 preferably faces the face of the driver 900 and is positioned below the line of sight when the driver 900 looks at an object located at infinity ahead.
- This eyeball behavior differs between when a person is in a normal arousal state and when a person's consciousness and arousal are reduced.
- A person performs a large eyeball rotation called a saccade, directs the eyeball (more precisely, the central visual field) to a predetermined visual point, and in its vicinity performs fixation and eye movement accompanied by microsaccades, which are minute local eyeball rotation movements.
- FIG. 20 is an explanatory diagram for explaining an example of the trajectory of the eyeball behavior of the driver 900 when the visual task of viewing information is presented.
- FIG. 20(a) shows a task, specifically, the task of counting the number of silhouettes of small animals among a plurality of silhouettes.
- The order of gaze differs from viewer to viewer: some subjects look at the question text "Q" first, while others look at the answer options "Ans" first, or glance quickly over the entire array before looking at the question. However, what is important in brain activity evaluation is to evaluate whether the driver 900, who is the subject of the evaluation, expresses the search and fixation behavior required to confirm the acquisition of the information necessary for the answer at that moment.
- The observation data shown in FIGS. 20(b) and 20(c) will be described as an example.
- FIG. 20(b) shows the eyeball behavior when coping with the task in a state of high arousal.
- FIG. 20(c) shows an example of the trajectory of the eyeball behavior when the visual task coping ability is reduced.
- When the visual task coping ability is reduced, visual information search including saccades shows a remarkable tendency toward so-called "swimming" of the eyes. This is influenced by individual tendencies, such as behavioral traits including strabismus and eye dominance, and by changes in visual acuity due to the physical condition of the day, so it is better to identify each individual and make the judgment in consideration of that individual's characteristics.
- the system presents the driver 900 with symbols that require some mental judgment, for example, and observes eye movements.
- If the expected eyeball behavior is observed, it is inferred that the driver 900 is preferentially executing the thinking activity in the brain needed to deal with the task, and is not immersed in other secondary actions.
- The system may detect that the line of sight of the driver 900 has turned to the presented information, and may further determine that recognition is complete upon confirming visual recognition.
- When the driver is driving manually, there is an obligation to monitor the surroundings. Although it is not always necessary to generate target visual information in the display information generation unit 802, including a means of displaying it makes it possible to grasp the driver's attention state more flexibly, even on a monotonous road that does not attract the driver's attention.
- the learning data is generated, for example, by a learning device (not shown) included in the vehicle control system 100 or an external server.
- The learning device acquires the eyeball behavior information analyzed by the eyeball behavior analysis unit 300, the arousal level information of the driver 900 determined by the determination unit 320 based on that information, and also the driving/steering information of the driver 900. Based on the acquired information, the learning device learns the correct correspondence relationship between the eyeball behavior of the driver 900 and the arousal level of the driver 900 at the time of that behavior, and stores it in the storage unit as learning data.
- The learning device may also perform context-adaptive determination by learning jointly with input influence factors, such as the correlation with the biosignals of the driver 900 from other biosensors (not shown) and the time of day of use, to improve the accuracy of the determination.
- The determination unit 320 acquires the eyeball behavior information analyzed by the eyeball behavior analysis unit 300, and uses that information together with the learning data generated by the learning device to perform a more accurate wakefulness determination.
- Alternatively, a configuration may be adopted in which the arousal level is determined using correspondence data between general eyeball behavior and arousal level, without using the learning data.
- Specifically, the eyeball behavior analysis unit 300 acquires the eyeball behaviors performed by the driver 900 for problem solving, such as saccades (eyeball rotation), fixation, and microsaccades (minute eyeball rotation). The learning device repeatedly acquires the behavioral characteristics corresponding to the arousal level of the driver 900, performs cumulative learning, and builds a dictionary for judging the arousal level from the eyeball behavior. This dictionary is used to estimate the arousal state of the driver at the time of observation from newly observed eyeball behavior characteristics.
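The cumulative dictionary described above can be sketched as a simple frequency table mapping an observed behavior characteristic to the most frequently co-observed arousal label. The feature encoding, labels, and class design are illustrative assumptions, not the learning device's actual model.

```python
from collections import defaultdict

class ArousalDictionary:
    """Toy stand-in for the learning device's judgment dictionary."""

    def __init__(self):
        # behavior characteristic -> {arousal label: observation count}
        self._counts = defaultdict(lambda: defaultdict(int))

    def learn(self, feature, arousal_label):
        """Accumulate one observed (eyeball behavior, arousal) correspondence."""
        self._counts[feature][arousal_label] += 1

    def estimate(self, feature):
        """Estimate arousal from a newly observed behavior characteristic."""
        labels = self._counts.get(feature)
        if not labels:
            return None  # no accumulated experience for this characteristic
        return max(labels, key=labels.get)

# Hypothetical accumulated observations for one driver
d = ArousalDictionary()
for _ in range(3):
    d.learn("frequent_microsaccades", "high")
d.learn("frequent_microsaccades", "low")
d.learn("sparse_fixation", "low")
```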
- FIGS. 21 and 22 are flowcharts of the information processing method for the awakening level determination process of the driver 900. Hereinafter, the processing of each step shown in FIG. 21 will be described in order.
- First, the movement of the face of the driver 900 is acquired by the face tracking unit (Driver Facial Tracker) 204. Based on the acquired facial movement, it is determined whether the driver 900 is seated in the driver's seat 602 and has returned to the driving posture (step S61). If the driver 900 is seated on the driver's seat 602 and has returned to the driving posture (step S61: Yes), the process proceeds to step S62; if the driver 900 is not seated on the driver's seat 602 or has not returned to the driving posture (step S61: No), the process returns to step S61.
- Next, it is determined whether the eyeball behavior analysis process can be executed for the driver 900 (step S62). For example, if the driver 900 is not at a position from which the display unit 804 can be seen, the task cannot be seen even if it is displayed on the display unit 804. If the eyeball behavior analysis process is executable (step S62: Yes), the process proceeds to step S63; if it is not executable (step S62: No), the process returns to step S62.
- Next, the display information to be displayed to the driver 900, that is, the task, is generated (step S63).
- The display information generated in step S63, that is, the task, is displayed on the display unit 804 (step S64).
- The eyeball behavior of the driver 900 induced by the task displayed in step S64 is analyzed (step S65).
- Then, the wakefulness determination processing is executed (step S66).
- In step S64 described above, when the task generated by the display information generation unit 802 is displayed on the display unit 804, the driver 900 moves his or her line of sight to the task in order to obtain the answer.
- The driver 900 then performs eyeball behaviors to acquire the necessary information, for example, saccades (eyeball rotation), fixation, and microsaccades (minute eyeball rotation).
- the eyeball behavior analysis unit 300 analyzes these eyeball behaviors of the driver 900 .
- the eye behavior information analyzed by the eye behavior analysis unit 300 is output to the determination unit 320 .
- the determination unit 320 determines the wakefulness of the driver 900 based on the eye behavior information analyzed by the eye behavior analysis unit 300 .
- When these eyeball behaviors are observed, the determination unit 320 determines that the driver 900 is highly alert. On the other hand, when these eyeball behaviors are not observed, or when they are few, the determination unit 320 determines that the arousal level of the driver 900 is low.
- Next, the details of the processing of step S65 described above will be described.
- the eyeball behavior analysis unit 300 acquires observation data of the eyeball behavior of the driver 900 after the task is displayed (step S71). For example, the eye movement analysis unit 300 acquires acceleration data of eye movement of the driver 900 detected by the eye tracking unit (Driver Eye Tracker) 206 .
- Next, the eyeball behavior analysis unit 300 acquires eyeball behavior information, such as saccades (eyeball rotation), fixation, and microsaccades (minute eyeball rotation), from the observation data acquired in step S71 (step S72).
- The driver 900 performs eye movements to acquire the information necessary for solving the task, for example, saccades (eyeball rotation), fixation, and microsaccades (minute eyeball rotation).
- the eyeball behavior analysis unit 300 extracts the eyeball behavior information of the driver 900 from the observation data.
- Next, the eyeball behavior analysis unit 300 determines whether sufficient observation data has been acquired for the arousal determination (step S73). Specifically, the eyeball behavior analysis unit 300 determines whether the eyeball behavior information extracted from the observation data of the driver 900, such as saccades (eyeball rotation), fixation, and microsaccades (minute eyeball rotation), is sufficient to determine whether the data corresponds to the problem-solving process. If it is determined to be sufficient (step S73: Yes), the process proceeds to step S74; if it is determined not to be sufficient (step S73: No), the process returns to step S72.
- Then, the determination unit 320 determines the arousal level of the driver 900 based on the analysis result of the observation data of the driver's eyeball behavior after the task presentation (step S74). Specifically, the determination unit 320 analyzes whether the driver 900 has executed eyeball behaviors such as saccades (eyeball rotation), fixation, and microsaccades (minute eyeball rotation) in order to solve the task. If the eyeball behavior of the driver 900 corresponds to saccades, fixation, or microsaccades for problem solving, the determination unit 320 determines that the awakening level of the driver 900 is high.
- As described above, the information processing method presents a visual task to the driver 900 before the return to manual driving, and analyzes the eyeball behavior of the driver 900 that occurs while solving the task.
- The task induces specific eyeball behaviors of the driver 900 for problem solving, such as saccades (eyeball rotation), fixation, and microsaccades (minute eyeball rotation).
- By analyzing these eyeball behaviors, it is determined whether the driver 900 is in a state of internal arousal in the brain sufficient to start the return to manual driving. Specifically, when it is determined from the analysis of these eyeball behaviors that the driver 900 has sufficiently recovered wakefulness, it is determined that the driver 900 has an arousal level high enough for manual driving, and the start of manual driving is permitted. On the other hand, if it is determined that these eyeball behaviors have not sufficiently occurred, it is determined that the driver 900 does not have such an arousal level, and the start of manual driving is not permitted. In this case, emergency evacuation processing, such as stopping before entering the manual driving section, is performed.
- The process from confirming the visual information of the actual task to finding its solution may differ greatly depending on various factors, such as the state of the driver 900 at that time, repeated performance of the same task, the behavioral trait of checking the question after looking at the answer options, the degree of fatigue, eyesight, visual fatigue, external light interference, and mental well-being. Therefore, to make a highly accurate judgment, it is preferable to use recovery prediction dictionary data specific to the driver 900, generated by learning processing over long-term repeated use from factors such as the recovery quality at each handover (normal recovery, delayed recovery, recovery abandonment, system emergency response). Furthermore, as described repeatedly, it is desirable to use this driver-specific recovery prediction dictionary data to predict normal recovery based on the eyeball behavior characteristic analysis results. These processes make it possible to start manual driving safely.
- The data used for determining whether the driver 900 is ready to start safe manual driving, and the data input to the learning device, preferably include the vehicle and road environment information, the status obtained from the driver's biosignals, and history information.
- In the above, an example has been described in which it is determined, based on the eyeball behavior of the driver 900 of a mobile device capable of switching between automatic driving and manual driving, whether the driver 900 has an arousal level that allows the return to manual driving.
- However, since the analysis of eyeball behavior analyzes the brain activity of a subject using observation data that can be observed from the outside, it can also be used in various ways other than determining the state of the driver 900 at the time of handover from automatic driving to manual driving.
- That is, the eyeball behavior analysis method described above observes the result of correlating a task with stored information, and can be used in a variety of ways by observing and judging the reaction to a presented task.
- The process of obtaining the answer reflects the subject's state and psychology. Therefore, this method can also be applied to determining the authenticity of respondents when presenting a reporting task, such as a drinking report or an overwork report.
- the above-described task presentation need not be limited to the operation of the vehicle 600.
- For example, it can also be applied to various events and occupations, such as aircraft operation, train operation, crane operation, air traffic control, and remote automated driving control, and can further be extended to authenticity evaluation by psychological analysis at the time of self-reporting.
- it is known that the superior temporal sulcus of the temporal lobe activates when the subject selects the visual information needed to solve a task, that the intraparietal sulcus activates when he/she turns his/her attention, and that the frontal eye field activates when he/she moves the eyes.
- the hippocampus, inside the temporal lobe, works to recall what is remembered.
- the eyeball behavior analysis processing described above can also be used for verifying and monitoring the mental health of a subject such as the driver 900. Specifically, by using it to grasp the condition and manage the health of the driver 900 of a commercial vehicle such as a bus or a taxi, safe operation can be made possible.
- when observing the eyeball behavior of the driver 900 using the EVS 400, the observation can be optimized by adjusting the threshold that defines the event detection frequency of the EVS 400. Also, in embodiments of the present disclosure, the lighting devices that illuminate the driver 900 may be adjusted. By doing so, event detection can be performed at the optimum frequency for accurately capturing the changes that are the object of observation, such as eyeball behavior including saccades, microsaccades, and drift, blink detection, and changes in facial expression, while suppressing an increase in the amount of data.
- the threshold for event detection of the EVS 400 is adjusted at the timing when the eyeball behavior needs to be observed. By doing so, saccade behavior and microsaccade behavior can be captured efficiently and accurately while suppressing an increase in the amount of data and reducing the load on arithmetic processing and transmission.
- at the timing when the eyeball behavior needs to be observed, a filter that selectively transmits narrow-band wavelength light is used to illuminate the driver's 900 face with light of that narrow-band wavelength. By doing so, the boundary shading formed by the iris and the pupil of the eyeball becomes conspicuous, so the saccade behavior can be observed selectively, efficiently, and accurately.
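The threshold switching described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `EvsConfig`, `SensorController`, and the numeric threshold values are all assumptions introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class EvsConfig:
    # Contrast threshold: a pixel fires an event when the magnitude of its
    # luminance change exceeds this value. Smaller -> more events, more data.
    threshold: float = 0.25

class SensorController:
    """Hypothetical controller mirroring the role of the sensor control unit 330."""
    NORMAL_THRESHOLD = 0.25    # coarse face/posture tracking
    EYE_OBS_THRESHOLD = 0.10   # finer threshold to catch saccades/microsaccades

    def __init__(self, config: EvsConfig):
        self.config = config

    def begin_eye_observation(self) -> None:
        # Lower the threshold only while eyeball behavior must be observed,
        # so the high-rate event output is confined to that window.
        self.config.threshold = self.EYE_OBS_THRESHOLD

    def end_eye_observation(self) -> None:
        self.config.threshold = self.NORMAL_THRESHOLD

cfg = EvsConfig()
ctl = SensorController(cfg)
ctl.begin_eye_observation()
print(cfg.threshold)  # finer threshold is active during the observation window
ctl.end_eye_observation()
```

Confining the finer threshold to the observation window is what keeps the average event data rate, and thus the transmission load, bounded.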
- an automobile was described as an example, but the present embodiment is not limited to automobiles; it can also be applied to electric vehicles, hybrid electric vehicles, motorcycles, personal mobility devices, airplanes, ships, construction machines, agricultural machines (tractors), and the like.
- since eye movement is not the conscious behavior of the subject but a reflexive action to visual information combined with an intentional correction of that reflexive action, a delay appears in the behavior until the correction.
- the embodiments of the present disclosure can also be applied to remote steering operations of various mobile objects and the like.
- FIG. 23 is a hardware configuration diagram showing an example of a computer 1000 that implements at least part of the functions of the automatic driving control unit 112.
- the computer 1000 has a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input/output interface 1600.
- each part of the computer 1000 is connected by a bus 1050.
- the CPU 1100 operates based on programs stored in the ROM 1300 or HDD 1400 and controls each section. For example, the CPU 1100 loads programs stored in the ROM 1300 or HDD 1400 into the RAM 1200 and executes processes corresponding to various programs.
- the ROM 1300 stores a boot program such as BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 is started, and programs dependent on the hardware of the computer 1000.
- the HDD 1400 is a computer-readable recording medium that non-transitorily records programs executed by the CPU 1100 and data used by such programs.
- the HDD 1400 is a recording medium that records the information processing program according to the present disclosure, which is an example of the program data 1450.
- a communication interface 1500 is an interface for connecting the computer 1000 to an external network 1550 (for example, the Internet).
- the CPU 1100 receives data from another device via the communication interface 1500, and transmits data generated by the CPU 1100 to another device.
- the input/output interface 1600 is an interface for connecting the input/output device 1650 and the computer 1000 .
- the CPU 1100 receives data from an input/output device 1650 such as a keyboard, mouse, and microphone via the input/output interface 1600 .
- the CPU 1100 transmits data to an output device such as a display, a speaker, or a printer via the input/output interface 1600 .
- the input/output interface 1600 may function as a media interface for reading a program or the like recorded on a predetermined recording medium (media).
- media include, for example, optical recording media such as DVDs (Digital Versatile Discs) and PDs (Phase change rewritable Disks), magneto-optical recording media such as MOs (Magneto-Optical disks), tape media, magnetic recording media, and semiconductor memories.
- the CPU 1100 of the computer 1000 executes the program loaded into the RAM 1200, thereby realizing the functions of the sensor control unit 330 and the lighting control unit 332.
- the HDD 1400 also stores an information processing program and the like according to the present disclosure.
- while the CPU 1100 reads the program data 1450 from the HDD 1400 and executes it, as another example, these programs may be obtained from another device via the external network 1550.
- the sensor control unit 330 and the lighting control unit 332 according to the present embodiment may be applied to a system composed of a plurality of devices premised on connection to a network (or communication between devices), such as cloud computing. That is, the information processing apparatus according to the present embodiment described above can be realized, for example, by a plurality of apparatuses as an information processing system according to the present embodiment.
- An example of the hardware configuration of at least part of the automatic driving control unit 112 has been described above.
- Each component described above may be configured using general-purpose members, or may be configured by hardware specialized for the function of each component. Such a configuration can be changed as appropriate according to the technical level of implementation.
- the above-described embodiments of the present disclosure may include, for example, an information processing method executed by an information processing apparatus or information processing system as described above, a program for operating the information processing apparatus, and a non-transitory tangible medium on which the program is recorded. The program may also be distributed via a communication line (including wireless communication) such as the Internet.
- each step in the information processing method according to the embodiment of the present disclosure described above does not necessarily have to be processed in the described order.
- each step may be processed in an appropriately changed order.
- each step may be partially processed in parallel or individually instead of being processed in chronological order.
- the processing of each step does not necessarily have to be processed in accordance with the described method, and may be processed by another method by another functional unit, for example.
- the details above are described based on the automatic driving levels defined by SAE, but the concept of classifying the use of automatic driving by automatic driving level is a classification made from the design viewpoint of the vehicle 600.
- from the user's point of view, it cannot be said that it is necessarily easy for the user to always correctly understand the permitted autonomous driving level in the operational design domain where operation at each level is allowed, nor for the driver 900 to drive in accordance with the automatic driving level of the vehicle 600.
- the situations that the vehicle system can handle change dynamically over time due to various external and internal factors, so the level of automated driving during driving cannot be determined uniquely based only on physical road sections; the driver 900 is thus required to respond subordinately to the level that the vehicle control system 100 is permitted to follow depending on the road conditions.
- looking at the relationship between the driver 900 and the vehicle control system 100 from an ergonomic point of view, in order to achieve the purpose of using the vehicle 600, which is movement, and to obtain secondary benefits during that time, behavioral decisions are made by weighing the burden of driving against the various risks that accompany it.
- here, the burden refers to the task of steering the vehicle 600 for movement and the constant risk incurred in doing so.
- the advantage of automatic driving from the driver's 900 point of view is to be freed from the restraint of driving and to be able to use that time meaningfully without being involved in driving.
- it can therefore be said that the concept supporting automated driving control needs to be converted to Human Centered Design, which reverses the relationship of the conventional Machine Centered Design.
- from an ergonomic point of view, it is desirable to permit the use of the various actual autonomous driving functions within the "operational design domain" of the vehicle 600 depending on the arousal and physical readiness with which the driver 900 can respond at each level of autonomous driving.
- the driver 900 learns behavior so that he/she can make appropriate preparations for return according to the upper limit of the autonomous driving steering environment allowed on each road. For a driver 900 who has progressed in such behavioral learning, withdrawal from driving and steering work at a higher automatic driving level such as level 4, that is, advanced automatic driving that can provide the benefit of performing NDRA and the like, is allowed. On the other hand, when the expected appropriate return cannot be observed in the driver 900, the observed state of the driver 900 is evaluated by referring to the driver's 900 past return response history and learning data.
- this automatic driving control provides a mode of use of the vehicle 600 that is easy for people to use. That is, by changing the control concept of the automatic driving system of the vehicle 600 from so-called Machine Centered Design to Human Centered Design, it becomes possible to provide a user-friendly mode of use through automatic driving control.
- the adaptive control using the driver's 900 state observation means has been described above based on the former Machine Centered Design. However, even if it is replaced with Human Centered Design, adaptive control can similarly be implemented using the driver's 900 state observation means at the time of switching (taking over) from automatic driving to manual driving.
- (1) An information processing apparatus comprising: an event vision sensor that images the interior of a mobile body; and a sensor control unit that controls the event vision sensor, wherein the event vision sensor has a pixel array unit having a plurality of pixels arranged in a matrix, and an event detection unit that detects, in each of the pixels, that an amount of luminance change due to incident light exceeds a predetermined threshold, and the sensor control unit changes the value of the predetermined threshold when the event vision sensor captures the eyeball behavior of a driver sitting in the driver's seat of the mobile body.
- (2) The information processing apparatus according to (1), wherein the eyeball behavior includes at least one of a saccade, a fixation, and a microsaccade of the eyeball.
- (10) The information processing apparatus according to (9), wherein the irradiation unit includes a filter that transmits light with a wavelength of 940 nm or more and 960 nm or less.
- (11) The information processing apparatus according to (9) or (10), further comprising an irradiation control unit that controls the intensity, irradiation time, or irradiation interval of light from the irradiation unit.
- (12) The information processing apparatus according to (11), wherein the irradiation control unit performs control based on an analysis result of the eyeball behavior analysis unit.
- (13) The information processing apparatus according to any one of (5) to (12), further comprising a display information generation unit that generates a task and displays it on a display unit, wherein the eyeball behavior analysis unit analyzes the eyeball behavior of the driver observing the task displayed on the display unit.
- (14) The information processing apparatus according to (13), wherein the display unit faces the driver's face and is positioned below the line-of-sight direction when the driver looks at an object positioned at infinity ahead.
- (15) The information processing apparatus according to (14), wherein the event vision sensor faces the driver's face and is positioned below the line-of-sight direction, and a line segment connecting the driver's eyeball and the event vision sensor forms an angle of 10 degrees or more and less than 30 degrees with the line-of-sight direction.
- (16) The information processing apparatus according to (15), wherein the event vision sensor has an angle of view capable of capturing at least the driver's face.
- (19) An information processing method executed by an information processing apparatus comprising an event vision sensor that images the interior of a mobile body and a sensor control unit that controls the event vision sensor, wherein the event vision sensor has a pixel array unit having a plurality of pixels arranged in a matrix and an event detection unit that detects, in each of the pixels, that an amount of luminance change due to incident light exceeds a predetermined threshold, the method including changing the value of the predetermined threshold when capturing, with the event vision sensor, the eyeball behavior of a driver sitting in the driver's seat of the mobile body.
- (20) An information processing program that causes a computer to execute a control function of an event vision sensor that images the interior of a mobile body, wherein the event vision sensor has a pixel array unit having a plurality of pixels arranged in a matrix and an event detection unit that detects, in each of the pixels, that an amount of luminance change due to incident light exceeds a predetermined threshold, the program causing the computer to execute a function of changing the value of the predetermined threshold.
- vehicle control system 101 input unit 102 data acquisition unit 103 communication unit 104 in-vehicle equipment 105 output control unit 106 output unit 107 drive system control unit 108 drive system system 109 body system control unit 110 body system system 111 storage unit 112 automatic operation control unit 113 sensor unit 121 communication network 131, 508 detection unit 132 self-position estimation unit 133 situation analysis unit 134 planning unit 135 operation control unit 141 vehicle exterior information detection unit 142 vehicle interior information detection unit 143 vehicle state detection unit 151 map analysis unit 152 traffic rule recognition Section 153 Situation Recognition Section 154 Situation Prediction Section 161 Route Planning Section 162 Action Planning Section 163 Motion Planning Section 171 Emergency Avoidance Section 172 Acceleration/Deceleration Control Section 173 Direction Control Section 200 Position/Posture Detection Section 202 Face Recognition Section 204 Face Tracking Section 206 Eye tracking unit 208 Biometric information detection unit 210 Authentication unit 300 Eye behavior analysis unit 302 Eye behavior learning device 310 DB 320 determination unit 330 sensor control unit 332 illumination control unit 400 EVS 411 drive
Abstract
Description
1. Examples of automated driving levels
2. Example of traveling
3. Example of transitions between automated driving levels
4. Example of monitoring
5. Detailed configuration of the vehicle control system
6. Schematic configuration of the sensor unit 113
7. Schematic configuration of the unit that determines the driver's arousal level
8. Operation example of the eyeball behavior analysis unit 300
9. Background leading to the creation of the embodiments of the present disclosure
9.1 Use of the EVS
9.2 About the EVS
9.3 Background leading to the creation of the embodiments of the present disclosure
10. Embodiments
10.1 Installation positions
10.2 Configuration of the unit
10.3 Information processing method
10.4 Examples
11. Summary
12. Hardware configuration
13. Supplement
First, before describing the embodiments of the present disclosure in detail, the automated driving levels of automated driving technology will be described with reference to FIG. 1. FIG. 1 is an explanatory diagram for explaining examples of automated driving levels, showing the levels defined by SAE (Society of Automotive Engineers). The following description basically refers to the levels defined by the SAE. However, since the issues and validity of those levels in the case of widespread adoption of automated driving technology have not been fully examined, some parts of the following description, in light of such issues, are not necessarily interpreted exactly as defined by the SAE.
Next, based on the automated driving levels described above, an example of traveling according to an embodiment of the present disclosure will be described with reference to FIG. 2. FIG. 2 is a flowchart for explaining an example of traveling according to the embodiment of the present disclosure. As shown in FIG. 2, in traveling according to the embodiment, the vehicle control system executes, for example, steps S11 to S18. Details of each of these steps are described below.
Next, an example of transitions between automated driving levels according to an embodiment of the present disclosure will be described in more detail with reference to FIG. 3. FIG. 3 is an explanatory diagram for explaining an example of such transitions.
An example of monitoring (observation) at the time of switching from the automated driving mode to the manual driving mode will now be described with reference to FIG. 4. FIG. 4 is a flowchart illustrating an example of the monitoring operation according to the embodiment of the present disclosure. As shown in FIG. 4, in the embodiment, when switching from the automated driving mode to the manual driving mode, the vehicle control system executes, for example, steps S21 to S27. Details of each of these steps are described below.
Next, the detailed configuration of the vehicle control system 100 according to the embodiment of the present disclosure will be described with reference to FIG. 5. FIG. 5 is an explanatory diagram for explaining an example of the detailed configuration of the vehicle control system 100 according to the present embodiment. Hereinafter, when the vehicle provided with the vehicle control system 100 is to be distinguished from other vehicles, it is referred to as the own car or own vehicle.
Next, examples of the various sensors included in the sensor unit 113 for obtaining information on the driver in the vehicle will be described with reference to FIG. 7. FIG. 7 is an explanatory diagram for explaining examples of the various sensors included in the sensor unit 113 according to the present embodiment. As shown in FIG. 7, the sensor unit 113 has a position/posture detection unit 200 composed of, for example, a ToF camera, a stereo camera, a seat strain gauge, and the like, as detectors for detecting the driver's position and posture. The sensor unit 113 also has a face recognition unit 202, a face tracking unit 204, and an eyeball tracking unit (monitoring unit) 206 as detectors for obtaining the driver's biometric information. Details of the various sensors included in the sensor unit 113 according to the present embodiment are described in order below.
Next, a configuration example of the unit that determines the driver's arousal level (recovery response level) according to the embodiment of the present disclosure will be described with reference to FIG. 8. FIG. 8 is an explanatory diagram for explaining an example of the unit that determines the driver's arousal level according to the present embodiment. Specifically, the unit that determines the driver's arousal level includes part of the in-vehicle information detection unit 142 of the detection unit 131 shown in FIG. 5, the situation recognition unit 153 of the situation analysis unit 133, and the storage unit 111. More specifically, FIG. 8 shows the eyeball behavior analysis unit 300 and the eyeball behavior learning device 302 included in the in-vehicle information detection unit 142, the determination unit 320 included in the situation recognition unit 153, and the database (DB) 310 stored in the storage unit 111; these cooperate to determine the driver's arousal level. Each functional block shown in FIG. 8 is described in order below.
The eyeball behavior analysis unit 300 acquires, via the data acquisition unit 102, the driver's eyeball behavior detected by the eyeball tracking unit 206 of the sensor unit 113, and analyzes it. For example, the eyeball behavior analysis unit 300 detects and analyzes eyeball behavior such as saccades (eyeball rotations), fixations, and microsaccades (minute eyeball rotations). The eyeball behavior information analyzed by the eyeball behavior analysis unit 300 is output to the eyeball behavior learning device 302 and the determination unit 320, which are described later.
The eyeball behavior learning device 302 learns, as teacher data, previously acquired analysis results of the driver's eyeball behavior labeled with each arousal level, generates a database 310 for the determination by the determination unit 320 described later, and outputs it to the storage unit 111 (see FIG. 5). In the present embodiment, the eyeball behavior learning device 302 can be, for example, a supervised learner such as support vector regression or a deep neural network. In this case, the analysis result (eyeball behavior) and the arousal level labeled to it (normal or reduced) are input to the eyeball behavior learning device 302 as an input signal and a teacher signal, respectively, and the learning device performs machine learning on the relationship between these inputs according to a predetermined rule. A plurality of such pairs of input and teacher signals are input, and by performing machine learning on them, the eyeball behavior learning device 302 generates a database (DB) 310 storing relation information indicating the relationship between the analysis result (eyeball behavior) and the arousal level. The generated DB 310 is not limited to being stored in the storage unit 111; it may be stored on a server on the cloud (not shown) linked to identification information for identifying the driver. The stored DB 310 can then be used in a different vehicle when the driver changes commercial vehicles or uses a shared car, rental car, or the like. Furthermore, the information in the DB 310, wherever it is stored, is preferably always kept up to date. If the return requirements demanded by motion characteristics and the like differ depending on the vehicle type, the evaluation criteria may be further normalized according to the vehicle. Although the learning and generation of the database (DB) has been described above as a binary choice between a normal and a reduced arousal level, learning may be performed with finer subdivision into recovery qualities, linked to other driver-state transition information, to improve the accuracy of predicting the driver's arousal and recovery quality together with observable information acquired by observation means other than eyeball behavior.
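A minimal sketch of the supervised learning just described, assuming a toy feature encoding (saccade rate, microsaccade rate, fixation ratio) and a nearest-centroid learner standing in for the support vector regression or deep neural network mentioned in the text; all function names and numbers are illustrative assumptions:

```python
from statistics import mean

def train_db(samples):
    """Build a lookup ("database") mapping arousal labels to feature centroids.
    samples: list of ((saccade_rate, microsaccade_rate, fixation_ratio), label)."""
    by_label = {}
    for feats, label in samples:
        by_label.setdefault(label, []).append(feats)
    # One centroid per arousal label, playing the role of the relation
    # information stored in DB 310.
    return {label: tuple(mean(c) for c in zip(*rows))
            for label, rows in by_label.items()}

def predict(db, feats):
    """Return the label whose centroid is nearest to the observed features."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(db, key=lambda label: dist(db[label], feats))

# Labeled history: eye-behavior features tagged "normal" or "reduced" arousal.
history = [((3.2, 1.1, 0.6), "normal"), ((3.0, 0.9, 0.7), "normal"),
           ((0.4, 0.1, 0.2), "reduced"), ((0.6, 0.2, 0.3), "reduced")]
db = train_db(history)
print(predict(db, (2.8, 1.0, 0.65)))  # -> normal
```

The dictionary returned by `train_db` can be serialized per driver, matching the text's note that the DB may live in the vehicle or on a cloud server keyed by driver identification.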
The determination unit 320 determines the driver's arousal level (recovery response level) based on the analysis result of the eyeball behavior analyzed by the eyeball behavior analysis unit 300. For example, when it is confirmed that the driver is executing eyeball behavior for problem solving, such as saccades, fixations, or microsaccades, the determination unit 320 can determine that the driver's arousal level is high. On the other hand, when such eyeball behavior is not observed, or is rarely observed, the determination unit 320 can determine that the driver's arousal level is low.
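The presence-or-absence rule above can be hedged as a simple counting sketch; the function name and the `min_events` cutoff are assumptions for illustration, not values from the disclosure:

```python
def arousal_level(saccades: int, fixations: int, microsaccades: int,
                  min_events: int = 3) -> str:
    """Rule-of-thumb determination in the spirit of the determination unit 320:
    task-driven eye movements present in sufficient number -> driver judged alert."""
    total = saccades + fixations + microsaccades
    return "high" if total >= min_events else "low"

print(arousal_level(2, 1, 1))  # -> high
print(arousal_level(0, 1, 0))  # -> low
```

In practice such a cutoff would be replaced by the learned driver-specific database described above.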
Next, details of an operation example of the eyeball behavior analysis unit 300 according to the embodiment of the present disclosure will be further described with reference to FIG. 9. FIG. 9 is an explanatory diagram for explaining details of an operation example of the eyeball behavior analysis unit 300 according to the embodiment of the present disclosure. In FIG. 9, the left end is assumed to be the departure point and the right end the destination, and the following description proceeds from the departure point toward the destination.
<9.1 Use of the EVS>
Before describing the embodiments of the present disclosure in detail, the background that led the present inventor to create the embodiments of the present disclosure will be described.
The Event Vision Sensor (EVS) 400 will now be described with reference to FIGS. 10 and 11. FIG. 10 is a block diagram showing an example of the configuration of the EVS 400 used in the embodiments of the present disclosure, and FIG. 11 is a block diagram showing an example of the configuration of a pixel 502 located in the pixel array unit 500 of the EVS 400 shown in FIG. 10.
However, while independently studying the vehicle control system 100, the present inventor recognized that the following problems can arise when attempting to acquire observation data, such as the driver's eyeball behavior, using the EVS 400. As described above, the EVS 400 detects an event based on a luminance change exceeding a predetermined threshold. Accordingly, if the threshold is made small, the EVS 400 detects even small luminance changes as events, so events occur frequently and the amount of event data increases. As a result, the data volume may exceed the transmission bandwidth of the interface, making it impossible to transmit the data to other functional blocks. In such a case, not all of the data detected by the EVS 400 can be transmitted, and consequently the desired events that were originally to be detected cannot be detected. On the other hand, if the threshold is made large, the EVS 400 may fail to detect the desired events in the first place.
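The trade-off described here can be illustrated with a toy simulation; the Gaussian model of per-pixel luminance change and all numeric values are assumptions made for the example, not properties of the EVS 400:

```python
import random

def event_count(threshold: float, n_pixels: int = 10_000, seed: int = 0) -> int:
    """Count pixels whose simulated |luminance change| exceeds the threshold.
    A small threshold fires on tiny changes (data can exceed the interface
    bandwidth); a large one suppresses events, including the wanted ones."""
    rng = random.Random(seed)
    changes = [abs(rng.gauss(0.0, 0.2)) for _ in range(n_pixels)]
    return sum(c > threshold for c in changes)

low_thr_events = event_count(0.05)   # floods the interface
high_thr_events = event_count(0.6)   # risks missing desired events
print(low_thr_events, high_thr_events)
```

The monotone drop in event count as the threshold grows is exactly the dial the sensor control unit turns during eye-behavior observation windows.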
<10.1 Installation positions>
First, examples of the installation positions of the EVS 400 in the embodiments of the present disclosure will be described with reference to FIGS. 12 to 14. FIGS. 12 to 14 are explanatory diagrams for explaining the installation positions of the imaging devices 700 and 702 according to the present embodiment.
Next, a configuration example of the unit that observes the eyeball behavior of the driver 900 according to the embodiment of the present disclosure will be described with reference to FIG. 15. FIG. 15 is an explanatory diagram for explaining an example of the unit that observes the eyeball behavior of the driver 900 according to the present embodiment.
As described above, the imaging device 702 is an imaging device that observes the eyeball behavior of the driver 900 and consists of the EVS 400. The imaging device 702 is controlled by the sensor control unit 330 described later, and outputs observed data to the eyeball behavior analysis unit 300.
The illumination unit 710 is provided in the vehicle 600 and can irradiate the face of the driver 900 with light of a predetermined wavelength (for example, near-infrared light) when capturing the eyeball behavior of the driver 900. For example, by arranging on the imaging device lens a filter that mainly transmits light with wavelengths of 940 nm or more and 960 nm or less, the influence of external light can be suppressed by illuminating the driver's 900 face with light of that narrow-band wavelength at the timing when eyeball behavior needs to be observed. In the present embodiment, near-infrared light, which suffers little noise from external light (sunlight), can make the boundary shading between the iris and the pupil of the eyeball conspicuous, so the imaging device 702 can observe the eyeball behavior (saccade behavior) of the driver 900 selectively, efficiently, and accurately. Furthermore, the intensity, irradiation time, irradiation interval, and the like of the emitted light may be adjusted by the illumination control unit 332 described later so that the eyeball behavior of the driver 900 can be captured more easily. Wavelength sensitivities other than the above setting may also be used, according to the type of light source used and the wavelength sensitivity distribution of the light receiving unit of the imaging device.
The sensor control unit 330 is provided in the in-vehicle information detection unit 142 and controls the imaging device 702. Specifically, when capturing the eyeball behavior of the driver 900, the sensor control unit 330 changes the threshold that defines the event detection frequency of the imaging device 702 consisting of the EVS 400. In this way, the imaging device 702 can perform event detection at the optimum frequency for accurately capturing the changes that are the object of observation, such as eyeball behavior including saccades, microsaccades, and drift, blink detection, and changes in facial expression, while suppressing an increase in the amount of data. Furthermore, when the imaging device 702 also has the function of capturing not only the eyeball behavior of the driver 900 but also the driver's position and posture, the eyeball behavior can be observed with high accuracy by temporarily optimizing the threshold for eyeball behavior observation.
The illumination control unit 332 is provided in the in-vehicle information detection unit 142 and controls the illumination unit 710. Specifically, when capturing the eyeball behavior of the driver 900, the illumination control unit 332 controls the intensity, irradiation time, or irradiation interval of the light from the illumination unit 710. This makes it easier for the imaging device 702 to capture the eyeball behavior of the driver 900. Furthermore, the illumination control unit 332 may control the intensity, irradiation time, or irradiation interval of the light from the illumination unit 710 based on feedback of the analysis results obtained by having the eyeball behavior analysis unit 300 analyze the observed data.
The eyeball behavior analysis unit 300 analyzes the data observed by the imaging device 702. Specifically, the eyeball behavior analysis unit 300 can distinguish the type of eyeball behavior (saccade, microsaccade, etc.) based on the shape of the point distribution consisting of the position coordinates of the pixels 502 of the imaging device 702 that detected events. More specifically, when the eyeball behavior analysis unit 300 detects the appearance of a point distribution with a crescent shape, it analyzes that a microsaccade has occurred. Furthermore, in the present embodiment, the analysis result of the eyeball behavior analysis unit 300 is used for the determination by the determination unit 320 (determination of the driver's recovery response level), and the driving mode is switched based on the determination result of the determination unit 320.
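The crescent-shape criterion can be sketched with a crude angular-coverage heuristic; this is an illustrative assumption, not the patent's actual shape analysis. Events hugging only an arc of the iris/pupil boundary form a crescent-like cloud, which this sketch flags as a microsaccade signature:

```python
import math

def classify_event_cloud(points, pupil_center, full_circle_deg=300):
    """Crude shape heuristic (an assumption): measure how much of the circle
    around the pupil center the event points cover. A partial arc (crescent)
    is flagged as a microsaccade signature."""
    cx, cy = pupil_center
    angles = sorted(math.degrees(math.atan2(y - cy, x - cx)) % 360
                    for x, y in points)
    # Largest angular gap between consecutive event points (with wraparound).
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(360 - angles[-1] + angles[0])
    covered = 360 - max(gaps)
    return "microsaccade (crescent)" if covered < full_circle_deg else "saccade/other"

# Event points on only half of the iris boundary -> crescent-like cloud.
half_ring = [(math.cos(math.radians(a)), math.sin(math.radians(a)))
             for a in range(0, 180, 10)]
print(classify_event_cloud(half_ring, (0.0, 0.0)))
```

A production analysis unit would presumably also use event polarity and timing, which this geometric sketch ignores.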
Next, the information processing method according to the embodiment of the present disclosure will be described with reference to FIGS. 16 to 18. FIGS. 16 and 17 are flowcharts for explaining an example of the information processing method according to the present embodiment, and FIG. 18 is an explanatory diagram for explaining an example of the observation data observed by the imaging device 702.
A specific example of executing the process of determining the arousal level of the driver 900 will now be described. For example, it is executed by presenting the driver 900 with a task such as that described below.
As described above, according to the embodiment of the present disclosure, when observing the eyeball behavior of the driver 900 using the EVS 400, the observation of the eyeball behavior can be optimized by adjusting the threshold that defines the event detection frequency of the EVS 400. In addition, in the embodiment of the present disclosure, the illumination device that illuminates the driver 900 may be adjusted. By doing so, according to the embodiment of the present disclosure, event detection can be performed at the optimum frequency for accurately capturing the changes that are the object of observation, such as eyeball behavior including saccades, microsaccades, and drift, blink detection, and changes in facial expression, while suppressing an increase in the amount of data.
The whole or part of the automatic driving control unit 112 according to each of the embodiments described above is realized by, for example, a computer 1000 configured as shown in FIG. 23. FIG. 23 is a hardware configuration diagram showing an example of the computer 1000 that implements at least part of the functions of the automatic driving control unit 112. The computer 1000 has a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input/output interface 1600. The units of the computer 1000 are connected by a bus 1050.
The embodiments of the present disclosure described above may include, for example, an information processing method executed by an information processing apparatus or an information processing system as described above, a program for causing the information processing apparatus to function, and a non-transitory tangible medium on which the program is recorded. The program may also be distributed via a communication line (including wireless communication) such as the Internet.
(1)
An information processing apparatus comprising: an event vision sensor that images the interior of a mobile body; and a sensor control unit that controls the event vision sensor, wherein the event vision sensor has a pixel array unit having a plurality of pixels arranged in a matrix, and an event detection unit that detects, in each of the pixels, that an amount of luminance change due to incident light exceeds a predetermined threshold, and the sensor control unit changes the value of the predetermined threshold when capturing, with the event vision sensor, the eyeball behavior of a driver seated in the driver's seat of the mobile body.
(2)
The information processing apparatus according to (1), wherein the eyeball behavior includes at least one of a saccade, a fixation, and a microsaccade of the eyeball.
(3)
The information processing apparatus according to (1) or (2), wherein the sensor control unit changes the predetermined threshold corresponding to each of the pixels according to the position within the pixel array unit.
(4)
The information processing apparatus according to (3), wherein the sensor control unit sets a predetermined region within the pixel array unit according to the position of the driver's face or eyeballs, and changes the predetermined threshold corresponding to each of the pixels within the predetermined region.
(5)
The information processing apparatus according to any one of (1) to (4), further comprising an eyeball behavior analysis unit that analyzes data observed by the event vision sensor.
(6)
The information processing apparatus according to (5), wherein the eyeball behavior analysis unit distinguishes the type of the eyeball behavior based on the shape of a point distribution observed by the event vision sensor.
(7)
The information processing apparatus according to (6), wherein, when the eyeball behavior analysis unit detects the appearance of the point distribution having a crescent shape, it analyzes that a microsaccade has occurred.
(8)
The information processing apparatus according to any one of (5) to (7), wherein the sensor control unit changes the predetermined threshold based on an analysis result of the eyeball behavior analysis unit.
(9)
The information processing apparatus according to any one of (5) to (8), further comprising an irradiation unit that irradiates the driver's face with light of a predetermined wavelength when capturing the driver's eyeball behavior with the event vision sensor.
(10)
The information processing apparatus according to (9), wherein the irradiation unit has a filter that transmits light with a wavelength of 940 nm or more and 960 nm or less.
(11)
The information processing apparatus according to (9) or (10), further comprising an irradiation control unit that controls the intensity, irradiation time, or irradiation interval of the light from the irradiation unit.
(12)
The information processing apparatus according to (11), wherein the irradiation control unit performs control based on an analysis result of the eyeball behavior analysis unit.
(13)
The information processing apparatus according to any one of (5) to (12), further comprising a display information generation unit that generates a task and displays it on a display unit, wherein the eyeball behavior analysis unit analyzes the eyeball behavior of the driver observing the task displayed on the display unit.
(14)
The information processing apparatus according to (13), wherein the display unit faces the driver's face and is positioned below the line-of-sight direction when the driver looks at an object positioned at infinity ahead.
(15)
The information processing apparatus according to (14), wherein the event vision sensor faces the driver's face and is positioned below the line-of-sight direction, and the angle formed by the line segment connecting the driver's eyeball and the event vision sensor with the line-of-sight direction is 10 degrees or more and less than 30 degrees.
(16)
The information processing apparatus according to (15), wherein the event vision sensor has an angle of view capable of capturing at least the driver's face.
(17)
The information processing apparatus according to any one of (5) to (16), further comprising a determination unit that determines the driver's recovery response level for returning to manual driving, based on the analysis result of the eyeball behavior.
(18)
The information processing apparatus according to (17), further comprising a mobile-body driving control unit that switches the driving mode of the mobile body based on the determination result of the recovery response level.
(19)
An information processing method executed by an information processing apparatus comprising an event vision sensor that images the interior of a mobile body and a sensor control unit that controls the event vision sensor, wherein the event vision sensor has a pixel array unit having a plurality of pixels arranged in a matrix and an event detection unit that detects, in each of the pixels, that an amount of luminance change due to incident light exceeds a predetermined threshold, the method including changing the value of the predetermined threshold when capturing, with the event vision sensor, the eyeball behavior of a driver seated in the driver's seat of the mobile body.
(20)
An information processing program that causes a computer to execute a control function of an event vision sensor that images the interior of a mobile body, wherein the event vision sensor has a pixel array unit having a plurality of pixels arranged in a matrix and an event detection unit that detects, in each of the pixels, that an amount of luminance change due to incident light exceeds a predetermined threshold, the program causing the computer to execute a function of changing the value of the predetermined threshold when capturing, with the event vision sensor, the eyeball behavior of a driver seated in the driver's seat of the mobile body.
101 input unit
102 data acquisition unit
103 communication unit
104 in-vehicle equipment
105 output control unit
106 output unit
107 drive system control unit
108 drive system
109 body system control unit
110 body system
111 storage unit
112 automatic driving control unit
113 sensor unit
121 communication network
131, 508 detection unit
132 self-position estimation unit
133 situation analysis unit
134 planning unit
135 operation control unit
141 vehicle-exterior information detection unit
142 in-vehicle information detection unit
143 vehicle state detection unit
151 map analysis unit
152 traffic rule recognition unit
153 situation recognition unit
154 situation prediction unit
161 route planning unit
162 action planning unit
163 motion planning unit
171 emergency avoidance unit
172 acceleration/deceleration control unit
173 direction control unit
200 position/posture detection unit
202 face recognition unit
204 face tracking unit
206 eyeball tracking unit
208 biometric information detection unit
210 authentication unit
300 eyeball behavior analysis unit
302 eyeball behavior learning device
310 DB
320 determination unit
330 sensor control unit
332 illumination control unit
400 EVS
411 drive circuit
412 signal processing unit
413 arbiter unit
414 column processing unit
500 pixel array unit
502 pixel
504 light receiving unit
506 pixel signal generation unit
600 vehicle
602 driver's seat
700, 702 imaging device
710 illumination unit
800 display information DB
802 display information generation unit
804 display unit
900 driver
902 line-of-sight direction
Claims (20)
- An information processing apparatus comprising: an event vision sensor that images the interior of a mobile body; and a sensor control unit that controls the event vision sensor, wherein the event vision sensor has a pixel array unit having a plurality of pixels arranged in a matrix, and an event detection unit that detects, in each of the pixels, that an amount of luminance change due to incident light exceeds a predetermined threshold, and the sensor control unit changes the value of the predetermined threshold when capturing, with the event vision sensor, the eyeball behavior of a driver seated in the driver's seat of the mobile body.
- The information processing apparatus according to claim 1, wherein the eyeball behavior includes at least one of a saccade, a fixation, and a microsaccade of the eyeball.
- The information processing apparatus according to claim 1, wherein the sensor control unit changes the predetermined threshold corresponding to each of the pixels according to the position within the pixel array unit.
- The information processing apparatus according to claim 3, wherein the sensor control unit sets a predetermined region within the pixel array unit according to the position of the driver's face or eyeballs, and changes the predetermined threshold corresponding to each of the pixels within the predetermined region.
- The information processing apparatus according to claim 1, further comprising an eyeball behavior analysis unit that analyzes data observed by the event vision sensor.
- The information processing apparatus according to claim 5, wherein the eyeball behavior analysis unit distinguishes the type of the eyeball behavior based on the shape of a point distribution observed by the event vision sensor.
- The information processing apparatus according to claim 6, wherein, when the eyeball behavior analysis unit detects the appearance of the point distribution having a crescent shape, it analyzes that a microsaccade has occurred.
- The information processing apparatus according to claim 5, wherein the sensor control unit changes the predetermined threshold based on an analysis result of the eyeball behavior analysis unit.
- The information processing apparatus according to claim 5, further comprising an irradiation unit that irradiates the driver's face with light of a predetermined wavelength when capturing the driver's eyeball behavior with the event vision sensor.
- The information processing apparatus according to claim 9, wherein the irradiation unit has a filter that transmits light with a wavelength of 940 nm or more and 960 nm or less.
- The information processing apparatus according to claim 9, further comprising an irradiation control unit that controls the intensity, irradiation time, or irradiation interval of the light from the irradiation unit.
- The information processing apparatus according to claim 11, wherein the irradiation control unit performs control based on an analysis result of the eyeball behavior analysis unit.
- The information processing apparatus according to claim 5, further comprising a display information generation unit that generates a task and displays it on a display unit, wherein the eyeball behavior analysis unit analyzes the eyeball behavior of the driver observing the task displayed on the display unit.
- The information processing apparatus according to claim 13, wherein the display unit faces the driver's face and is positioned below the line-of-sight direction when the driver looks at an object positioned at infinity ahead.
- The information processing apparatus according to claim 14, wherein the event vision sensor faces the driver's face and is positioned below the line-of-sight direction, and the angle formed by the line segment connecting the driver's eyeball and the event vision sensor with the line-of-sight direction is 10 degrees or more and less than 30 degrees.
- The information processing apparatus according to claim 15, wherein the event vision sensor has an angle of view capable of capturing at least the driver's face.
- The information processing apparatus according to claim 5, further comprising a determination unit that determines the driver's recovery response level for returning to manual driving, based on the analysis result of the eyeball behavior.
- The information processing apparatus according to claim 17, further comprising a mobile-body driving control unit that switches the driving mode of the mobile body based on the determination result of the recovery response level.
- An information processing method executed by an information processing apparatus comprising an event vision sensor that images the interior of a mobile body and a sensor control unit that controls the event vision sensor, wherein the event vision sensor has a pixel array unit having a plurality of pixels arranged in a matrix and an event detection unit that detects, in each of the pixels, that an amount of luminance change due to incident light exceeds a predetermined threshold, the method including changing the value of the predetermined threshold when capturing, with the event vision sensor, the eyeball behavior of a driver seated in the driver's seat of the mobile body.
- An information processing program that causes a computer to execute a control function of an event vision sensor that images the interior of a mobile body, wherein the event vision sensor has a pixel array unit having a plurality of pixels arranged in a matrix and an event detection unit that detects, in each of the pixels, that an amount of luminance change due to incident light exceeds a predetermined threshold, the program causing the computer to execute a function of changing the value of the predetermined threshold when capturing, with the event vision sensor, the eyeball behavior of a driver seated in the driver's seat of the mobile body.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022581291A JPWO2022172724A1 (ja) | 2021-02-12 | 2022-01-21 | |
DE112022001065.7T DE112022001065T5 (de) | 2021-02-12 | 2022-01-21 | Informationsverarbeitungseinrichtung, informationsverarbeitungsverfahren und informationsverarbeitungsprogramm |
CN202280008325.7A CN116685516A (zh) | 2021-02-12 | 2022-01-21 | 信息处理装置、信息处理方法和信息处理程序 |
US18/260,191 US20240051585A1 (en) | 2021-02-12 | 2022-01-21 | Information processing apparatus, information processing method, and information processing program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021020377 | 2021-02-12 | ||
JP2021-020377 | 2021-02-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022172724A1 true WO2022172724A1 (ja) | 2022-08-18 |
Family
ID=82838756
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/002127 WO2022172724A1 (ja) | 2021-02-12 | 2022-01-21 | 情報処理装置、情報処理方法及び情報処理プログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240051585A1 (ja) |
JP (1) | JPWO2022172724A1 (ja) |
CN (1) | CN116685516A (ja) |
DE (1) | DE112022001065T5 (ja) |
WO (1) | WO2022172724A1 (ja) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019103744A (ja) * | 2017-12-14 | 2019-06-27 | オムロン株式会社 | 瞳孔検出装置および検出システム |
JP2020536309A (ja) * | 2017-09-28 | 2020-12-10 | アップル インコーポレイテッドApple Inc. | イベントカメラデータを使用するアイトラッキングの方法及び装置 |
US20210035298A1 (en) * | 2019-07-30 | 2021-02-04 | Apple Inc. | Utilization of luminance changes to determine user characteristics |
-
2022
- 2022-01-21 US US18/260,191 patent/US20240051585A1/en active Pending
- 2022-01-21 CN CN202280008325.7A patent/CN116685516A/zh active Pending
- 2022-01-21 DE DE112022001065.7T patent/DE112022001065T5/de active Pending
- 2022-01-21 JP JP2022581291A patent/JPWO2022172724A1/ja active Pending
- 2022-01-21 WO PCT/JP2022/002127 patent/WO2022172724A1/ja active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US20240051585A1 (en) | 2024-02-15 |
DE112022001065T5 (de) | 2023-12-28 |
CN116685516A (zh) | 2023-09-01 |
JPWO2022172724A1 (ja) | 2022-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7204739B2 (ja) | 情報処理装置、移動装置、および方法、並びにプログラム | |
JP7155122B2 (ja) | 車両制御装置及び車両制御方法 | |
JP7288911B2 (ja) | 情報処理装置、移動装置、および方法、並びにプログラム | |
JP7299840B2 (ja) | 情報処理装置および情報処理方法 | |
JP7080598B2 (ja) | 車両制御装置および車両制御方法 | |
JP7324716B2 (ja) | 情報処理装置、移動装置、および方法、並びにプログラム | |
JP7273031B2 (ja) | 情報処理装置、移動装置、情報処理システム、および方法、並びにプログラム | |
WO2021145131A1 (ja) | 情報処理装置、情報処理システム、情報処理方法及び情報処理プログラム | |
JPWO2020100539A1 (ja) | 情報処理装置、移動装置、および方法、並びにプログラム | |
JP7431223B2 (ja) | 情報処理装置、移動装置、および方法、並びにプログラム | |
JP7357006B2 (ja) | 情報処理装置、移動装置、および方法、並びにプログラム | |
WO2021049219A1 (ja) | 情報処理装置、移動装置、情報処理システム、および方法、並びにプログラム | |
JP2021128349A (ja) | 情報処理装置、情報処理システム、および情報処理方法、並びにプログラム | |
WO2022172724A1 (ja) | 情報処理装置、情報処理方法及び情報処理プログラム | |
JP7238193B2 (ja) | 車両制御装置および車両制御方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22752555 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202280008325.7 Country of ref document: CN |
|
ENP | Entry into the national phase |
Ref document number: 2022581291 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18260191 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 112022001065 Country of ref document: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22752555 Country of ref document: EP Kind code of ref document: A1 |