JP6958886B2 - Operator state estimator - Google Patents


Info

Publication number
JP6958886B2
Authority
JP
Japan
Prior art keywords
driver
operator
model
vehicle
monitored object
Prior art date
Legal status
Active
Application number
JP2016200328A
Other languages
Japanese (ja)
Other versions
JP2018063489A (en)
Inventor
浩 得竹 (Hiroshi Tokutake)
翔一朗 寺西 (Shoichiro Teranishi)
Current Assignee
Kanazawa University NUC
Original Assignee
Kanazawa University NUC
Priority date
Filing date
Publication date
Application filed by Kanazawa University NUC
Priority to JP2016200328A
Publication of JP2018063489A
Application granted
Publication of JP6958886B2
Status: Active

Landscapes

  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Traffic Control Systems (AREA)

Description

本発明は、自動車などの操縦対象体を操作する操縦者の注意力低下などの状態を推定する操縦者状態推定装置に関する。 The present invention relates to an operator state estimation device that estimates states such as reduced attention in an operator who operates a controlled object such as an automobile.

漫然運転などのヒューマンエラーが交通事故の主な原因となっている。このような事故を低減することなどを目指した自動走行システムの研究開発が活発に行われている。内閣府は2020年代前半を目途に準自動走行システムの市場化を掲げている。準自動走行システムでは、システムが要請したときに操縦を全自動走行システムからドライバ(操縦者)に切り替える。その切り替えの際は、ドライバが十分な操縦を行える状態であることを確認したのちに切り替えることが望ましい。また、ドライバが運転している場合(手動運転時)でもドライバの状態を確認する技術は漫然運転や脇見運転の防止の点で有用である。 Human errors such as inattentive driving are a main cause of traffic accidents. Research and development of automated driving systems aimed at reducing such accidents is being actively carried out. The Cabinet Office of Japan aims to bring semi-automated driving systems to market by the first half of the 2020s. In a semi-automated driving system, control is handed over from the fully automatic driving system to the driver (operator) when the system requests it. At that handover, it is desirable to first confirm that the driver is in a state capable of adequate control. A technique for checking the driver's state is also useful while the driver is driving (manual driving), for preventing inattentive and distracted driving.

発明者らはビークル(操縦対象体)を操縦するオペレータモデル(ドライバモデル)を陽に用いたビークルダイナミクスの解析手法や制御系設計手法を構築してきた(非特許文献1〜5)。またビークルを操縦するオペレータ(操縦者)状態推定のためのオペレータモデルのリアルタイム同定手法と同定モデルからオペレータ状態を推定する手法を開発してきた(非特許文献6〜8)。オペレータモデルを用いることで、オペレータの振る舞いを直接考慮した解析、設計が可能となった。 The inventors have constructed vehicle dynamics analysis methods and control system design methods that explicitly use an operator model (driver model) of the operator steering a vehicle (controlled object) (Non-Patent Documents 1 to 5). They have also developed a real-time identification method of an operator model for estimating the state of the operator (driver) steering a vehicle, and a method of estimating the operator state from the identified model (Non-Patent Documents 6 to 8). Using an operator model has made possible analysis and design that directly take the operator's behavior into account.

また、特許文献1には、車両を運転するドライバの顔が映る画像に基づいて、ドライバの両眼の視線によって規定される輻輳角を順次算出し、算出した輻輳角の標準偏差に基づいて、車両を運転するドライバの覚醒度が低下した状態であるか否かを判定する覚醒度判定装置が開示されている。
特許文献2には、運転者の覚醒時の心電図波形における隣接するR波の間隔であるRRIデータを取得し、取得されたRRIデータに基づいて心拍変動を解析し、標準化された覚醒時HRV指標データに基づいて多変量統計的プロセス管理を用いて運転者毎の眠気検出モデルを構築する眠気検出方法及び装置が開示されている。
特許文献3には、入力される生体情報又は車両情報を示す複数の特徴量をそれぞれ2値的に識別する弱識別手段と、弱識別手段による複数の特徴量の識別結果に基づいて運転者の注意力低下を推定する状態推定手段と、複数の特徴量と運転者の注意力低下との関係をAdaBoostを用いて学習する学習手段とを備える運転者状態推定装置が開示されている。
特許文献4には、顔の特定部位の一時的な変化が所定時間以上継続しているとき、一時的な変化の所定周期内での回数が所定回数以上であるときに、運転者が漫然状態であると判定する運転者状態検出方法及び装置が開示されている。
Patent Document 1 discloses an arousal-level determination device that sequentially calculates the convergence angle defined by the lines of sight of the driver's two eyes from images showing the face of the driver, and determines from the standard deviation of the calculated convergence angles whether the driver's arousal level has decreased.
Patent Document 2 discloses a drowsiness detection method and device that acquire RRI data, the intervals between adjacent R waves in the driver's electrocardiogram while awake, analyze heart rate variability from the acquired RRI data, and build a per-driver drowsiness detection model using multivariate statistical process management based on standardized waking-HRV index data.
Patent Document 3 discloses a driver state estimation device comprising weak classifier means that binarily classify a plurality of feature quantities representing input biological or vehicle information, state estimation means that estimate the driver's reduced attention from those classification results, and learning means that learn the relationship between the feature quantities and the driver's reduced attention using AdaBoost.
Patent Document 4 discloses a driver state detection method and device that judge the driver to be in an absent-minded state when a temporary change in a specific facial part continues for a predetermined time or longer, or when the number of such temporary changes within a predetermined period is a predetermined number or more.

特開2012−113450号公報 (Japanese Unexamined Patent Publication No. 2012-113450)
特開2015−226696号公報 (Japanese Unexamined Patent Publication No. 2015-226696)
特開2009−301367号公報 (Japanese Unexamined Patent Publication No. 2009-301367)
特開2016−71577号公報 (Japanese Unexamined Patent Publication No. 2016-71577)

1. Fujinaga, J., Tokutake, H. and Miura, Y.: Pilot-in-the-Loop Analysis of Aileron Operation and Flight Simulator Experiments, Transactions of the Japan Society for Aeronautical and Space Sciences, vol. 50, p. 193-200 (2007)
2. Miura, Y., Tokutake, H. and Fukui, K.: Handling qualities evaluation method based on actual driver characteristics, Vehicle System Dynamics, vol. 45, p. 807-817 (2007)
3. Tokutake, H., Fujinaga, J. and Miura, Y.: Lateral-Directional controller design using a pilot model and flight simulator experiments, The Aeronautical Journal, vol. 112, p. 213-218 (2008)
4. Tokutake, H., Miura, Y. and Okubo, H.: Workload analysis method via optimal drive model, SAE Technical Papers (2004), 2004-01-3536, doi:10.4271/2004-01-3536
5. Tokutake, H. and Sato, M.: Controller design using standard operator model, Journal of Guidance, Control and Dynamics, vol. 28, p. 872-877 (2005)
6. Tokutake, H., Sugimoto, Y. and Shirakata, T.: Real-time Identification Method of Driver Model with Steering Manipulation, Vehicle System Dynamics, vol. 51, p. 109-121 (2012)
7. Tokutake, H.: Drowsiness Estimation from Identified Driver Model, Systems Science & Control Engineering, vol. 3, p. 381-390 (2015)
8. 寺西翔一朗, 得竹浩 (Shoichiro Teranishi, Hiroshi Tokutake): Proceedings of the 53rd Airplane Symposium, 3G05 (2015)

非特許文献6〜8で提案したオペレータ状態を推定する手法は、車両運動と操縦履歴のみを利用するため、一般に行われている生体信号を用いたドライバ状態推定手法より低価格で実装・運用できる。しかし、ドライバは自動走行時に操縦を行わないため、操縦履歴を利用した手法はドライバが操縦している状況しか適用できない。
また、特許文献1〜4は、輻輳角、心拍変動、視線移動量又は口周りの変化等の生体信号のみを利用して運転者の状態を判断するものであり、ドライバが自車両周辺の注視すべき対象(被監視物)にどれほど注意を払っているかは把握できない。
Since the method for estimating the operator state proposed in Non-Patent Documents 6 to 8 uses only the vehicle motion and the steering history, it can be implemented and operated at lower cost than the commonly used driver state estimation methods based on biological signals. However, since the driver does not steer during automatic driving, methods based on the steering history are applicable only while the driver is actually steering.
Further, Patent Documents 1 to 4 judge the driver's state using only biological signals such as convergence angle, heart rate variability, gaze movement, or changes around the mouth, and cannot determine how much attention the driver is paying to the objects around the own vehicle that should be watched (the monitored objects).

そこで本発明は、手動運転時と自動走行時とに関わらず操縦者の状態を推定することができ、操縦者が操縦対象体周辺の被監視物にどれほど注意しているかを把握できる操縦者状態推定装置を提供することを目的とする。 An object of the present invention is therefore to provide an operator state estimation device that can estimate the operator's state regardless of whether driving is manual or automatic, and that can grasp how much attention the operator is paying to the monitored objects around the controlled object.

請求項1記載の本発明の操縦者状態推定装置は、操縦対象体を操縦する操縦者の注意力低下状態を推定する操縦者状態推定装置であって、被監視物位置測定手段と、注視点測定手段と、操縦者モデル同定手段と、操縦者状態推定手段とを備え、前記被監視物位置測定手段は、前記操縦者が監視する被監視物の位置を継続的に測定し、前記注視点測定手段は、前記被監視物を監視する前記操縦者の注視点を継続的に測定し、前記操縦者モデル同定手段は、前記操縦者の入出力関係を、前記被監視物の継続的な変位を入力とし、前記操縦者の前記注視点の継続的な変位を出力として示す操縦者モデルを同定し、前記操縦者状態推定手段は、前記操縦者モデルの時定数を、前記操縦者の以前の操縦者モデルの時定数、又は規範的な操縦者モデルの時定数と比較し、前記操縦者モデルの時定数が、前記操縦者の以前の操縦者モデルの時定数、又は前記規範的な操縦者モデルの時定数よりも大きい場合、現在の前記操縦者が前記注意力低下状態であると推定することを特徴とする。
請求項2記載の本発明は、請求項1又は請求項2に記載の操縦者状態推定装置において、前記操縦者モデル同定手段は、前記操縦者の前記注視点が前記被監視物から所定範囲内にある場合の前記被監視物の継続的な変位及び前記操縦者の前記注視点の継続的な変位を前記操縦者モデルの同定に用いることを特徴とする。
請求項3記載の本発明は、請求項1又は請求項2に記載の操縦者状態推定装置において、前記操縦対象体は、道路を走行する車両であり、前記被監視物は、前記車両の外に存在する静止物又は移動物であることを特徴とする。
請求項4記載の本発明は、請求項3に記載の操縦者状態推定装置において、前記車両は、前記操縦者が運転を行う手動運転モードと、加速、操舵及び制動の全て又は複数の操作が自動で行われる自動走行モードとを有する準自動走行システム車であり、前記操縦者状態推定手段は、前記手動運転モードと前記自動走行モードを切り替える前に前記操縦者の前記注意力低下状態を推定することを特徴とする。
The operator state estimation device of the present invention according to claim 1 is an operator state estimation device that estimates a reduced-attention state of an operator steering a controlled object, comprising monitored-object position measuring means, gazing-point measuring means, operator model identification means, and operator state estimation means, wherein the monitored-object position measuring means continuously measures the position of a monitored object watched by the operator; the gazing-point measuring means continuously measures the gazing point of the operator monitoring the monitored object; the operator model identification means identifies an operator model expressing the operator's input-output relationship, with the continuous displacement of the monitored object as input and the continuous displacement of the operator's gazing point as output; and the operator state estimation means compares the time constant of the operator model with the time constant of a previous operator model of the operator, or of a normative operator model, and estimates that the operator is currently in the reduced-attention state when the time constant of the operator model is larger.
The present invention according to claim 2 is characterized in that, in the operator state estimation device according to claim 1, the operator model identification means uses, for identification of the operator model, the continuous displacement of the monitored object and the continuous displacement of the operator's gazing point while the operator's gazing point is within a predetermined range of the monitored object.
The present invention according to claim 3 is characterized in that, in the operator state estimation device according to claim 1 or claim 2, the controlled object is a vehicle traveling on a road, and the monitored object is a stationary or moving object outside the vehicle.
The present invention according to claim 4 is characterized in that, in the operator state estimation device according to claim 3, the vehicle is a semi-automated driving system vehicle having a manual driving mode in which the operator drives and an automatic driving mode in which all or several of acceleration, steering, and braking are performed automatically, and the operator state estimation means estimates the operator's reduced-attention state before switching between the manual driving mode and the automatic driving mode.

本発明は、周辺環境情報(被監視物の変位)とドライバ(操縦者)の注視点移動を利用することで走行時のドライバの視点移動をモデル化し、得られたドライバモデル(操縦者モデル)の特徴量によりドライバ状態(操縦者状態)を推定する。したがって本発明によれば、手動運転時と自動走行時とに関わらず操縦者の状態を推定できると共に、操縦者が自車両(操縦対象体)周辺のどの被監視物にどれほど注意しているかを把握できる操縦者状態推定装置を提供できる。 The present invention models the driver's gazing-point movement during traveling using surrounding-environment information (the displacement of monitored objects) together with the gazing-point movement of the driver (operator), and estimates the driver (operator) state from feature quantities of the resulting driver (operator) model. According to the present invention, therefore, it is possible to provide a driver state estimation device that can estimate the driver's state regardless of whether driving is manual or automatic, and that can grasp which monitored objects around the own vehicle (controlled object) the driver is attending to, and to what degree.

  • 本発明の一実施例による操縦者状態推定装置を用いたヒューマンエラー防止システムの概略構成図 / Schematic configuration of a human error prevention system using the operator state estimation device according to one embodiment of the present invention
  • ドライバの注視点移動モデルを示す図 / Diagram showing the driver's gazing-point movement model
  • 車両を運転するドライバ、車両、環境からなる閉ループ系を示す図 / Diagram showing the closed-loop system consisting of the driver driving the vehicle, the vehicle, and the environment
  • 実験装置概略図 / Schematic diagram of the experimental apparatus
  • 前方車両位置のxy座標系への変換図 / Conversion of the forward vehicle's position into the xy coordinate system
  • 自動走行中の各被験者の瞳孔径の平均値、標準偏差を示す図 / Mean and standard deviation of each subject's pupil diameter during automatic driving
  • 自動走行中のデータから同定したドライバモデルから得られた各被験者の(a)残差、(b)ゲイン、(c)時定数を示す図(4点) / (a) Residual, (b) gain, and (c) time constant of each subject, obtained from the driver model identified from data during automatic driving (four figures)
  • 各被験者の(a)残差、(b)ゲイン、(c)時定数の自動走行時の平均値、標準偏差を示す図 / Mean and standard deviation during automatic driving of each subject's (a) residual, (b) gain, and (c) time constant
  • 副次課題がある場合とない場合の、ドライバの反応時間と運転切り替え前の20秒間に得られた時定数の平均値の関係を示す図 / Relationship between the driver's reaction time and the mean of the time constants obtained during the 20 s before the driving handover, with and without a secondary task

本発明の第1の実施の形態による操縦者状態推定装置は、被監視物位置測定手段と、注視点測定手段と、操縦者モデル同定手段と、操縦者状態推定手段とを備え、被監視物位置測定手段は、操縦者が監視する被監視物の位置を継続的に測定し、注視点測定手段は、被監視物を監視する操縦者の注視点を継続的に測定し、操縦者モデル同定手段は、操縦者の入出力関係を、被監視物の継続的な変位を入力とし、操縦者の注視点の継続的な変位を出力として示す操縦者モデルを同定し、操縦者状態推定手段は、操縦者モデルの時定数を、操縦者の以前の操縦者モデルの時定数、又は規範的な操縦者モデルの時定数と比較し、操縦者モデルの時定数が、操縦者の以前の操縦者モデルの時定数、又は規範的な操縦者モデルの時定数よりも大きい場合、現在の操縦者が注意力低下状態であると推定するものである。本実施の形態によれば、操縦者の被監視物に対する注視点移動を利用した操縦者モデルから操縦者状態を推定するため、手動運転時と自動走行時とに関わらずドライバの状態を推定することができ、ドライバが自車両周辺のどの被監視物にどれほど注意しているかを把握できる。
本発明の第2の実施の形態は、第1の実施の形態による操縦者状態推定装置において、操縦者モデル同定手段は、操縦者の注視点が被監視物から所定範囲内にある場合の被監視物の継続的な変位及び操縦者の注視点の継続的な変位を操縦者モデルの同定に用いるものである。本実施の形態によれば、注視点が被監視物から所定範囲外にあるときのデータは操縦者モデルの同定に用いないことで、より正確にモデル化を行うことができる。
本発明の第3の実施の形態は、第1又は第2の実施の形態による操縦者状態推定装置において、操縦対象体は、道路を走行する車両であり、被監視物は、車両の外に存在する静止物又は移動物とするものである。本実施の形態によれば、車両(自動車)に適用できる。また、被監視物を自車両周辺の他車両、信号機、標識及び車線等の静止物又は移動物とすることで、ドライバが自車両周辺のどの被監視物にどれほど注意しているかを把握できる。
本発明の第4の実施の形態は、第3の実施の形態による操縦者状態推定装置において、車両は、操縦者が運転を行う手動運転モードと、加速、操舵及び制動の全て又は複数の操作が自動で行われる自動走行モードとを有する準自動走行システム車であり、操縦者状態推定手段は、手動運転モードと自動走行モードとを切り替える前に操縦者の注意力低下状態を推定するものである。本実施の形態によれば、モード切り替え前に操縦者の状態を推定するので、自動走行から手動運転に切り替える際の事故を減らすことができる。
The operator state estimation device according to the first embodiment of the present invention includes monitored-object position measuring means, gazing-point measuring means, operator model identification means, and operator state estimation means. The monitored-object position measuring means continuously measures the position of the monitored object watched by the operator, and the gazing-point measuring means continuously measures the gazing point of the operator monitoring that object. The operator model identification means identifies an operator model expressing the operator's input-output relationship, with the continuous displacement of the monitored object as input and the continuous displacement of the operator's gazing point as output. The operator state estimation means compares the time constant of the identified operator model with that of a previous operator model of the same operator, or of a normative operator model, and estimates that the operator is currently in a reduced-attention state when the identified time constant is larger. According to this embodiment, because the operator state is estimated from an operator model based on gazing-point movement toward monitored objects, the state can be estimated regardless of whether driving is manual or automatic, and it is possible to grasp which monitored objects around the own vehicle the driver is attending to, and to what degree.
In the second embodiment of the present invention, in the operator state estimation device according to the first embodiment, the operator model identification means uses, for identification of the operator model, the continuous displacement of the monitored object and the continuous displacement of the operator's gazing point while the gazing point is within a predetermined range of the monitored object. According to this embodiment, data taken while the gazing point is outside that range are excluded from identification, which makes the modeling more accurate.
In the third embodiment of the present invention, in the operator state estimation device according to the first or second embodiment, the controlled object is a vehicle traveling on a road, and the monitored object is a stationary or moving object outside the vehicle. This embodiment is applicable to vehicles (automobiles). By taking monitored objects to be stationary or moving objects around the own vehicle, such as other vehicles, traffic lights, signs, and lanes, it is possible to grasp which of them the driver is attending to, and to what degree.
In the fourth embodiment of the present invention, in the operator state estimation device according to the third embodiment, the vehicle is a semi-automated driving system vehicle having a manual driving mode, in which the operator drives, and an automatic driving mode, in which all or several of acceleration, steering, and braking are performed automatically; the operator state estimation means estimates the operator's reduced-attention state before switching between the manual and automatic modes. Because the operator's state is estimated before the mode switch, accidents on handover from automatic to manual driving can be reduced.

以下、本発明の一実施例による操縦者状態推定装置について説明する。
図1は、本実施例による操縦者状態推定装置を用いたヒューマンエラー防止システムの概略構成図である。
操縦者状態推定装置は、準自動走行システムを備えた車両(操縦対象体)に搭載されている。ここで、「準自動走行システム」とは、加速、操舵及び制動を全て操縦者が行う手動運転モードと、加速、操舵及び制動のうち少なくとも一つの操作が自動で行われる自動走行モードとを有し、所定条件下で手動運転モードと自動走行モードが切り替わるシステムをいう。
操縦者状態推定装置は、被監視物位置測定手段10と、注視点測定手段20と、ドライバモデル同定手段(操縦者モデル同定手段)30と、ドライバ状態推定手段(操縦者状態推定手段)40を備える。
被監視物位置測定手段10は、ドライバ(操縦者)が監視する被監視物の位置を継続的に測定し、ドライバモデル同定手段30に送信する。被監視物は、他車両、信号機、歩行者、標識又は車線など、ドライバが自車両を操縦するにあたって視線を向けるべき車外の静止物又は移動物である。
注視点測定手段20は、被監視物を監視するドライバの視線を追跡することにより注視点を継続的に測定し、ドライバモデル同定手段30に送信する。
ドライバモデル同定手段30は、ドライバの入出力関係を、被監視物位置測定手段10から送信された被監視物の位置に基づいて求めた被監視物の変位を入力とし、注視点測定手段20から送信されたドライバの注視点に基づいて求めたドライバの注視点の変位を出力として示すドライバモデルを同定し、同定したドライバモデルをドライバ状態推定手段40に送信する。
ドライバ状態推定手段40は、ドライバモデル同定手段30によって同定されたドライバモデルの特徴量を、操縦者の以前の操縦者モデルの特徴量、又は規範的な操縦者モデルの特徴量と比較することにより、現在の操縦者の状態を推定する。比較に用いる特徴量は、特に限定はないが、ゲイン、時定数又は残差とすることが好ましい。この場合例えば、ドライバモデル同定手段30によって同定されたドライバモデルの時定数の所定時間の平均値を、記憶手段60に記憶されている規範的なドライバモデルの時定数の所定時間の平均値と比較することにより、ドライバの注意力低下の有無を推定する。なお、ドライバモデル同定手段30によって同定されたドライバモデルを記憶手段60にも送信して保存しておき、最新のドライバモデルの特徴量を、保存された以前のドライバモデルの特徴量と比較することにより注意力低下の有無を推定してもよい。また、ゲイン、時定数及び残差について別のドライバモデルとそれぞれ比較し、総合的に注意力低下の有無を推定してもよい。
また、本実施例によるヒューマンエラー防止システムは、警告・操縦介入手段50を備えている。警告・操縦介入手段50は、ドライバ状態推定手段40がドライバの注意力が低下していると推定したとき、警告を発する。また、自動走行モードにあるときは、手動運転への切り替えを行わず自動走行モードを継続する。これにより、漫然運転などのヒューマンエラーに起因する事故を防止できる。
このように、ドライバの被監視物に対する注視点移動を利用したドライバモデルからドライバ状態を推定するため、手動運転時と自動走行時とに関わらずドライバの状態を推定することができ、ドライバが自車両周辺のどの被監視物にどれほど注意しているかを把握できる。
Hereinafter, the operator state estimation device according to an embodiment of the present invention will be described.
FIG. 1 is a schematic configuration diagram of a human error prevention system using the operator state estimation device according to the present embodiment.
The operator state estimation device is mounted on a vehicle (controlled object) equipped with a semi-automated driving system. Here, a "semi-automated driving system" means a system that has a manual driving mode, in which the operator performs all of the acceleration, steering, and braking, and an automatic driving mode, in which at least one of acceleration, steering, and braking is performed automatically, and that switches between the two modes under predetermined conditions.
The operator state estimation device comprises monitored-object position measuring means 10, gazing-point measuring means 20, driver model identification means (operator model identification means) 30, and driver state estimation means (operator state estimation means) 40.
The monitored object position measuring means 10 continuously measures the position of the monitored object monitored by the driver (operator) and transmits the position to the driver model identification means 30. The monitored object is a stationary object or a moving object outside the vehicle, such as another vehicle, a traffic light, a pedestrian, a sign, or a lane, to which the driver should direct his / her line of sight when maneuvering his / her own vehicle.
The gazing point measuring means 20 continuously measures the gazing point by tracking the line of sight of the driver who monitors the monitored object, and transmits the gazing point to the driver model identification means 30.
The driver model identification means 30 identifies a driver model expressing the driver's input-output relationship, with, as input, the displacement of the monitored object obtained from the position transmitted by the monitored-object position measuring means 10 and, as output, the displacement of the driver's gazing point obtained from the gazing point transmitted by the gazing-point measuring means 20, and transmits the identified driver model to the driver state estimation means 40.
The driver state estimation means 40 estimates the current state of the operator by comparing feature quantities of the driver model identified by the driver model identification means 30 with those of a previous driver model of the same operator, or of a normative driver model. The feature quantities used for comparison are not particularly limited, but the gain, time constant, or residual is preferred. In this case, for example, the average over a predetermined time of the time constant of the driver model identified by the driver model identification means 30 is compared with the corresponding average for the normative driver model stored in the storage means 60, to estimate whether the driver's attention has decreased. The driver model identified by the driver model identification means 30 may also be transmitted to and saved in the storage means 60, and reduced attention may be estimated by comparing the feature quantities of the latest driver model with those of previously saved models. Alternatively, the gain, time constant, and residual may each be compared with a different driver model, and the presence or absence of reduced attention estimated comprehensively.
The human error prevention system according to this embodiment further includes warning/control intervention means 50. The warning/control intervention means 50 issues a warning when the driver state estimation means 40 estimates that the driver's attention has decreased. When the vehicle is in the automatic driving mode, the system continues in that mode without switching to manual driving. This prevents accidents caused by human errors such as inattentive driving.
In this way, because the driver state is estimated from a driver model based on the driver's gazing-point movement toward monitored objects, the driver's state can be estimated regardless of whether driving is manual or automatic, and it is possible to grasp which monitored objects around the own vehicle the driver is attending to, and to what degree.
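The comparison step described above can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not the patented implementation: the function name and the `margin` tuning factor are introduced here for illustration only.

```python
import numpy as np

def is_attention_reduced(recent_time_constants, reference_time_constants, margin=1.0):
    """Estimate a reduced-attention state by comparing the mean time
    constant of recently identified driver models against a reference
    (a previous model of the same driver, or a normative model).

    `margin` (> 1 makes the test more conservative) is a hypothetical
    tuning factor, not part of the patent text."""
    recent_mean = float(np.mean(recent_time_constants))
    reference_mean = float(np.mean(reference_time_constants))
    # A larger time constant means slower gaze response -> lower attention.
    return recent_mean > margin * reference_mean
```

For example, `is_attention_reduced([0.9, 1.1], [0.5, 0.6])` flags reduced attention because the recent mean time constant exceeds the reference mean.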

また、ドライバモデル同定手段30は、ドライバの注視点が被監視物から所定範囲内にある場合の被監視物の変位及びドライバの注視点の変位をドライバモデルの同定に用いる。
注視点測定手段20によって測定されたドライバの注視点データには、被監視物を監視していないときの注視点の移動も含まれる可能性がある。そのため注視点が被監視物から所定範囲外にあるときのデータはドライバモデルの同定に用いないことで、より正確にモデル化を行うことができる。
Further, the driver model identification means 30 uses the displacement of the monitored object and the displacement of the driver's gazing point when the gaze point of the driver is within a predetermined range from the monitored object to identify the driver model.
The driver's gaze point data measured by the gaze point measuring means 20 may include the movement of the gaze point when the monitored object is not being monitored. Therefore, the data when the gazing point is outside the predetermined range from the monitored object is not used for the identification of the driver model, so that the modeling can be performed more accurately.
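The gating rule above can be expressed as a boolean mask over synchronized samples. A minimal sketch, assuming 2-D coordinates for both the monitored object and the gazing point and a hypothetical distance threshold; all names are illustrative.

```python
import numpy as np

def tracking_mask(target_xy, gaze_xy, radius):
    """Return True for samples where the gazing point lies within
    `radius` of the monitored object's position; only those samples
    would be fed to the driver-model identification."""
    target_xy = np.asarray(target_xy, dtype=float)
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    # Euclidean distance between gaze and target at each time step
    dist = np.linalg.norm(gaze_xy - target_xy, axis=1)
    return dist <= radius
```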

次に、ドライバの注視点移動モデルについて説明する。
図2は、ドライバの注視点移動モデルを示す図である。ドライバモデル同定手段30は、周辺環境(被監視物の変位)を入力、ドライバの注視点の変位を出力とするドライバの入出力関係を示すドライバモデルを同定する。
ドライバは自動走行中、注視点を移動させることで周辺環境の変化を察知する。そのドライバの振る舞いを以下のモデルにより表現する。

〔数式画像:被監視物の変位を入力、ドライバの注視点の変位を出力とするドライバモデルの一般形〕
本実施例では解析を簡単にするため1次遅れ系をドライバモデルとして採用する。
〔数式画像(式(1)、(2)):ゲインK、時定数Tをもつ1次遅れ系のドライバモデル(G(s) = K/(Ts+1) に相当)〕
(1)、(2)式で表現されるドライバモデルからドライバ状態を推定する。そのため残差の大きさJres(式(3))、ゲインK、時定数Τを評価量として採用する。
〔数式画像(式(3)):残差の大きさJres ― 各データセット区間における計測注視点変位とモデル出力の二乗誤差を区間幅で正規化した量〕
iはデータセット番号、t_iはi番目の切り分けたデータセットの開始時間、L_iはi番目の切り分けたデータセットの時間幅である。残差からは線形ドライバモデルで表現できない注視点移動の非線形要素や周辺環境の変化と相関のない注視点移動量、注視点分布の範囲の大きさを評価できる。 Next, the driver's gazing-point movement model will be described.
FIG. 2 is a diagram showing a gaze point movement model of the driver. The driver model identification means 30 identifies a driver model that indicates the input / output relationship of the driver by inputting the surrounding environment (displacement of the monitored object) and outputting the displacement of the gaze point of the driver.
The driver detects changes in the surrounding environment by moving the gazing point during automatic driving. The behavior of the driver is expressed by the following model.
[Equation image: general form of the driver model, taking the displacement of the monitored object as input and the displacement of the driver's gazing point as output]
In this embodiment, a first-order lag system is adopted as a driver model in order to simplify the analysis.
[Equation images (1), (2): first-order-lag driver model with gain K and time constant T, corresponding to G(s) = K/(Ts+1)]
The driver state is estimated from the driver model expressed by the equations (1) and (2). Therefore, the magnitude of the residual Jres (Equation (3)), the gain K, and the time constant Τ are adopted as the evaluation quantities.
[Equation image (3): the residual magnitude Jres, the squared error between the measured and model-predicted gazing-point displacement, normalized by the width of each data segment]
i is the data-set number, t_i is the start time of the i-th segmented data set, and L_i is the time width of the i-th segmented data set. From the residual, it is possible to evaluate nonlinear components of gazing-point movement that cannot be expressed by the linear driver model, the amount of gazing-point movement uncorrelated with changes in the surrounding environment, and the size of the gazing-point distribution range.
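As an illustration of how such a first-order-lag driver model could be identified from sampled data, the sketch below fits the continuous-time relation T·dy/dt = K·u − y by least squares. The identification algorithm, the zero initial condition, and the simulation-based residual are assumptions of this sketch, not taken from the patent.

```python
import numpy as np

def identify_first_order_lag(u, y, dt):
    """Fit the first-order-lag model T*dy/dt = K*u - y to sampled input
    u (monitored-object displacement) and output y (gazing-point
    displacement). Returns (K, T, Jres)."""
    u = np.asarray(u, dtype=float)
    y = np.asarray(y, dtype=float)
    # Numerical derivative of the output
    dy = np.gradient(y, dt)
    # Linear regression: dy = (K/T)*u - (1/T)*y
    A = np.column_stack([u, y])
    (a, b), *_ = np.linalg.lstsq(A, dy, rcond=None)
    T = -1.0 / b           # time constant
    K = a * T              # gain
    # Residual: mean squared error between y and the simulated model
    # response (zero initial condition assumed)
    y_model = np.zeros_like(y)
    for k in range(len(y) - 1):
        y_model[k + 1] = y_model[k] + dt * (K * u[k] - y_model[k]) / T
    Jres = float(np.mean((y - y_model) ** 2))
    return float(K), float(T), Jres
```

On data generated by a known first-order lag, the fitted gain and time constant recover the true values to good accuracy, and Jres is near zero.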

次に、注意力低下とドライバモデルの関係性について説明する。
図3に車両を運転するドライバ、車両、環境(被監視物)からなる閉ループ系を示す。ドライバは操縦するとき周辺環境の変化を認知し、どのように操縦するか判断する。そして手動運転時は操縦を、自動運転時は周辺環境の監視を行う。ドライバの注意力が低下したとき、認知の遅れやミスなどが発生し、ドライバは判断や操作を誤ることとなる。
既存の研究では模擬運転状況で注意力が低下したとき、刺激提示に対するドライバの反応が遅れることが実験的に確認されている(内田信行ほか:携帯電話会話時における運転者の注意状態評価について,IATSS Review,vol.30,No.3,p.57-65(2005).)。そのため、周囲環境の変化に対する注視点移動の時間的な遅れを表現するドライバモデルの時定数は注意力低下により増加する。また、手動運転に切り替えた直後の操縦成績は、モデル特徴量から予測可能である。
Next, the relationship between attention loss and the driver model will be described.
FIG. 3 shows a closed loop system consisting of a driver driving a vehicle, a vehicle, and an environment (object to be monitored). When maneuvering, the driver recognizes changes in the surrounding environment and decides how to maneuver. Then, during manual driving, maneuvering is performed, and during automatic driving, the surrounding environment is monitored. When the driver's attention is reduced, recognition delays and mistakes occur, and the driver makes mistakes in judgment and operation.
Existing research has experimentally confirmed that the driver's response to a presented stimulus is delayed when attention decreases in a simulated driving situation (Nobuyuki Uchida et al.: Evaluation of the driver's attention state during mobile phone conversation, IATSS Review, vol. 30, No. 3, p. 57-65 (2005)). Therefore, the time constant of the driver model, which expresses the time delay of gazing-point movement with respect to changes in the surrounding environment, increases as attention decreases. In addition, the steering performance immediately after switching to manual driving can be predicted from the model feature quantities.

Next, an experiment using the operator state estimation device of the present invention will be described.
In this experiment, a reduced-attention state is simulated by imposing a mental load through a secondary task while the driver monitors the surrounding environment. The fluctuation of the pupil diameter is used as a reference to confirm that the driver is under mental load.
The pupil's sphincter muscle is controlled by the parasympathetic nervous system and its dilator muscle by the sympathetic nervous system (Kenji Ito, Sonoko Kuwano, Akinori Komatsubara: Ergonomics Handbook, Tokyo, Asakura Shoten, 2003, p. 363). The pupil diameter therefore serves as an objective index of a person's psychological state (Yamanaka, K. and Kawakami, M.: Convenient Evaluation of Mental Stress with Pupil Diameter, International Journal of Occupational Safety and Ergonomics, vol. 15, No. 4, p. 447-450 (2009)). In this experiment, blinks, points where the pupil diameter changed by 0.1 mm within 1/30 second (Sandra, P. M.: U.S. Patent US6090051A (2000)), and points where the pupil diameter fell outside the range of 2 mm to 8 mm (National Institute of Advanced Industrial Science and Technology, Human Welfare and Medical Engineering: Human Measurement Handbook, Tokyo, Asakura Shoten, 2003, p. 113-115) were removed from the measured data, and the cleaned pupil diameter was used as an index of mental load.
The pupil diameter was measured with the FOVIO eye tracker manufactured by Seeing Machines.
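The screening rules quoted above (blink-related jumps of at least 0.1 mm per 1/30 s frame, and values outside the physiological 2-8 mm range) can be sketched as a simple keep-mask over a 30 Hz pupil-diameter record. Function and parameter names are illustrative.

```python
import numpy as np

def pupil_keep_mask(diam_mm, max_step_mm=0.1, lo_mm=2.0, hi_mm=8.0):
    """Boolean mask over a 30 Hz pupil-diameter series [mm].

    A sample is dropped if it lies outside the 2-8 mm range or if it
    jumps by >= 0.1 mm from the previous frame (blink / artefact).
    The first sample has no predecessor and is given a step of 0.
    """
    d = np.asarray(diam_mm, dtype=float)
    in_range = (d >= lo_mm) & (d <= hi_mm)
    step = np.abs(np.diff(d, prepend=d[0]))
    return in_range & (step < max_step_mm)
```

Applying the mask before averaging leaves only samples that satisfy both screening rules.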

FIG. 4 shows an outline of the experimental apparatus. The driver X watches the image projected on the screen 2 by the projector 1, monitors the surrounding environment during automatic driving, and steers with the steering wheel 3 and pedals 4 during manual driving. The pedals 4 consist of an accelerator pedal 4A and a brake pedal 4B. There is no force feedback on the steering wheel 3. A non-contact eye tracker (FOVIO, manufactured by Seeing Machines) is used as the gazing point measuring means 20 to measure the gazing point and the pupil diameter. The driving simulator was built with UC-win/Road from FORUM8 Co., Ltd.; using the UC-win/Road SDK and Delphi XE2, experimental conditions such as arbitrary controlled-object dynamics and disturbances can be programmed. The computer 5 operates and controls the driving simulator.
The monitored object position measuring means 10, the driver model identification means 30, and the driver state estimation means 40 are not shown.

In this experiment, the only object the driver gazes at is the meandering preceding vehicle. To further simplify the analysis, the driving environment was a straight road of constant height. In addition, the distance between the own vehicle and the preceding vehicle was kept constant, so that the position of the preceding vehicle did not change in the depth or vertical direction as seen from the subject (driver X). Under these experimental conditions, only the subject's gaze movement parallel to the road surface is considered.
At the start of the experiment, the subject sits in the driver's seat of a vehicle that drives automatically so as to keep a constant distance from the meandering preceding vehicle. About 120 seconds after the start, another vehicle cuts in between the preceding vehicle and the own vehicle. The moment the cutting-in vehicle appears on the screen 2, the system switches from automatic to manual driving; the subject then drives for 60 seconds, after which the experiment ends.

In this experiment, in order to model the driver's gaze movement in response to the movement of the preceding vehicle, the subject was instructed, as the primary task, to monitor the preceding vehicle at all times. To simulate a reduced-attention state, a mental load was imposed by having the subject perform a secondary task during automatic driving. The secondary task was an N-back task, in which the subject states aloud the number presented N items earlier in a continuous stream of digits. The subject was further instructed to step on the brake pedal 4B as soon as the cutting-in vehicle was recognized.

The own vehicle drives automatically under feedback control so that its distance to the meandering preceding vehicle remains constant. Operating the steering wheel 3 or pedals 4 during automatic driving has no effect on the own vehicle. The preceding vehicle meanders randomly so that the subject cannot predict its next movement. The cutting-in vehicle enters at a speed 20 km/h higher than the own vehicle, so no collision occurs unless the subject steers inappropriately.
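The distance-keeping automatic driving described here can be sketched as a kinematic proportional feedback on the gap error. The control law, gain, and sampling period are assumptions for illustration; the patent does not specify the controller used in the simulator.

```python
def follow_gap(lead_positions, gap_ref=20.0, kp=0.5, dt=0.1, x0=0.0):
    """Kinematic distance-keeping: the own vehicle's speed is the lead
    vehicle's speed plus a proportional correction on the gap error,
    driving the gap exponentially toward gap_ref [m].
    Returns the own-vehicle position history."""
    xs = [x0]
    for k in range(1, len(lead_positions)):
        v_lead = (lead_positions[k] - lead_positions[k - 1]) / dt
        gap_error = (lead_positions[k - 1] - xs[-1]) - gap_ref
        xs.append(xs[-1] + (v_lead + kp * gap_error) * dt)
    return xs
```

With this law the gap error shrinks by a factor (1 - kp·dt) each step, so the own vehicle settles at the reference distance behind the lead vehicle.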

Four subjects, who had the content and purpose of the experiment fully explained to them in advance and gave their consent, participated in the driving experiment. After sufficient driving practice, the experiment with the primary and secondary tasks was conducted. The answers to the secondary task were given orally and recorded by an experimental assistant.
The measured gazing point position and the position of the preceding vehicle are converted into the xy coordinate system shown in FIG. 5 and used for model identification. This coordinate system moves forward at a constant speed so that the preceding vehicle always lies in the xy plane.

In this experiment, the subject was instructed to step on the brake pedal 4B as soon as the cutting-in vehicle was recognized. Therefore, the time from when the cutting-in vehicle appears on the screen 2 until the brake pedal 4B is depressed (hereinafter, the "driver's reaction time" or simply "reaction time") was used as the measure of maneuvering performance: the shorter the reaction time, the better the performance.

Next, data processing will be described.
The driver model is shown below.

Figure 0006958886

The outputs xδ and yδ are the gazing point positions with their mean removed, the inputs xe and ye are the positions of the preceding vehicle with their mean removed, and wx and wy are the residuals. Since the preceding vehicle moves mainly left and right, Hx and wx are the objects of analysis: the gain and time constant of the driver model Hx and the time variation of the residual wx in the x direction are examined.
The obtained data also include gaze movements made while the preceding vehicle was not being monitored. Data in which the gazing point is 5 m or more from the preceding vehicle are therefore regarded as not monitoring the preceding vehicle and are excluded from model identification. The remaining data were divided into 10-second segments; segments shorter than 10 seconds were still used for identification as long as they lasted at least 2 seconds.
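The segmentation rules just described (discard samples where the gaze is 5 m or more from the preceding vehicle, cut the remaining runs into 10 s pieces, keep pieces of at least 2 s) can be sketched as follows. The sampling rate and all names are assumptions for illustration.

```python
import numpy as np

def segment_for_identification(dist_m, fs=30.0, gate_m=5.0,
                               chunk_s=10.0, min_s=2.0):
    """Split per-sample gaze-to-target distances into identification
    segments: samples with distance >= gate_m are discarded, contiguous
    runs are cut into chunk_s pieces, and pieces shorter than min_s are
    dropped. Returns a list of (start_index, length) pairs."""
    ok = np.asarray(dist_m, dtype=float) < gate_m
    chunk, min_len = int(chunk_s * fs), int(min_s * fs)
    segments = []
    start = None
    for i, good in enumerate(list(ok) + [False]):  # sentinel flushes last run
        if good and start is None:
            start = i
        elif not good and start is not None:
            for s in range(start, i, chunk):       # cut the run into pieces
                length = min(chunk, i - s)
                if length >= min_len:
                    segments.append((s, length))
            start = None
    return segments
```

For example, with a 1 Hz series for readability, 25 consecutive in-gate samples yield two full 10 s segments plus one 5 s remainder, while an isolated 1 s fragment is discarded.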

FIG. 6 shows the mean and standard deviation of the pupil diameter of each subject during automatic driving.
FIGS. 7 to 10 show (a) the residual, (b) the gain, and (c) the time constant of each subject, obtained from the driver model Hx identified from the data during automatic driving. The horizontal axis is the elapsed time [s]; as described above, the first approximately 120 seconds are automatic driving and the remainder is manual driving. FIG. 7 is for subject A, FIG. 8 for subject B, FIG. 9 for subject C, and FIG. 10 for subject D.
FIG. 11 shows the means and standard deviations during automatic driving of (a) the residual, (b) the gain, and (c) the time constant of each subject. When the identified model was unstable, it was excluded from the analysis as not behaving in correlation with the input.
In FIGS. 6 to 11, the solid line indicates the case without the secondary task and the broken line the case with the secondary task.
Table 1 shows the reaction time [s] of each subject with and without the secondary task, and Table 2 shows the rate of correct answers [%] to the secondary task.

Figure 0006958886
Figure 0006958886
Figure 0006958886
Figure 0006958886

FIG. 6 shows that the pupil diameters of subjects A, B, and D increased with the secondary task during automatic driving, confirming that the secondary task imposed a mental load. Table 1 shows that the reaction times of these three subjects increased when they performed the secondary task, indicating that the secondary task produced a reduced-attention state. For subject C no increase in pupil diameter was observed, but the reaction time increased by 0.36 seconds, so it can be estimated that subject C was also in a reduced-attention state due to the secondary task.
Note that the model features of subject B with the secondary task were not obtained after about 80 seconds from the start of the experiment (see FIG. 8). This is because the gazing point was often 5 m or more from the preceding vehicle, so no model could be identified.
In three of the four subjects, A, C, and D, the time constant was larger with the secondary task than without it (see FIG. 11(c)). No effect of the secondary task on the time constant of subject B was observed: in subject B's secondary-task trial, the gazing point increasingly strayed 5 m or more from the preceding vehicle as time passed and no identified model could be obtained. The mean was therefore computed only from models identified immediately after the start of the experiment, which were little affected by the secondary task, and this presumably explains why little change appeared.
FIG. 12 shows the relationship, with and without the secondary task, between the driver's reaction time and the mean time constant obtained in the 20 seconds before the switch to manual driving. "●" denotes subject A with the secondary task, "○" subject A without it, "▲" subject C with it, "△" subject C without it, "■" subject D with it, and "□" subject D without it. Subject B is not plotted because no model was obtained in the 20 seconds before the switch when the secondary task was present. FIG. 12 shows that for subjects A, C, and D there is a positive correlation between the time constant during automatic driving and the driver's reaction time. Thus, when the time constant of the driver model representing gaze movement during automatic driving is large, a reduced-attention state can be predicted. For subject B as well, the gaze movement is clearly strongly affected by the secondary task, so a prediction algorithm can be constructed.
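One simple way to quantify the positive relation reported for FIG. 12 is a Pearson correlation between the identified time constants and the brake reaction times. The sketch below uses made-up numbers in its usage note, not the patent's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series,
    e.g. per-trial mean time constants vs. brake reaction times."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x = x - x.mean()
    y = y - y.mean()
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))
```

Feeding in one (time constant, reaction time) pair per trial, a value of r near +1 would correspond to the positive correlation observed for subjects A, C, and D.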

Although the above description takes as an example a vehicle equipped with a semi-automatic driving system, the operator state estimation device according to the present invention is also applicable to manually driven vehicles without such a system. It can further be applied to other vehicles such as aircraft, motorcycles, ships, and trains, or to remotely piloted unmanned aerial vehicles (so-called "drones").
The above equations (1) to (4) are examples given to facilitate understanding of the present invention and may be modified in various ways without departing from its spirit.

The operator state estimation device according to the present invention models the gaze movement of the person operating a maneuvering object and effectively estimates states such as reduced driver attention from the features of the obtained operator model. Applied to automobiles and the like, it contributes to preventing accidents caused by operator human error.

1 Projector
2 Screen
3 Steering wheel
4 Pedals
5 Computer
10 Monitored object position measuring means
20 Gazing point measuring means
30 Driver model identification means (operator model identification means)
40 Driver state estimation means (operator state estimation means)
50 Warning / maneuvering intervention means
60 Storage means
X Driver (operator)

Claims (4)

A driver state estimation device for estimating a reduced-attention state of an operator who operates a maneuvering object, comprising:
monitored object position measuring means;
gazing point measuring means;
operator model identification means; and
operator state estimation means,
wherein the monitored object position measuring means continuously measures the position of a monitored object watched by the operator,
the gazing point measuring means continuously measures the gazing point of the operator monitoring the monitored object,
the operator model identification means identifies an operator model expressing the operator's input-output relationship, with the continuous displacement of the monitored object as input and the continuous displacement of the operator's gazing point as output, and
the operator state estimation means compares the time constant of the operator model with the time constant of a previous operator model of the operator or of a normative operator model, and estimates that the operator is currently in the reduced-attention state when the time constant of the operator model is larger than the time constant of the previous operator model or the normative operator model.

The operator state estimation device according to claim 1, wherein the operator model identification means uses, for identification of the operator model, the continuous displacement of the monitored object and the continuous displacement of the operator's gazing point obtained while the operator's gazing point is within a predetermined range of the monitored object.

The operator state estimation device according to claim 1 or 2, wherein the maneuvering object is a vehicle traveling on a road, and the monitored object is a stationary or moving object existing outside the vehicle.

The operator state estimation device according to claim 3, wherein the vehicle is a semi-automatic driving system vehicle having a manual driving mode in which the operator drives and an automatic driving mode in which at least one of acceleration, steering, and braking is performed automatically, and the operator state estimation means estimates the reduced-attention state of the operator before switching between the manual driving mode and the automatic driving mode.
JP2016200328A 2016-10-11 2016-10-11 Operator state estimator Active JP6958886B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2016200328A JP6958886B2 (en) 2016-10-11 2016-10-11 Operator state estimator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2016200328A JP6958886B2 (en) 2016-10-11 2016-10-11 Operator state estimator

Publications (2)

Publication Number Publication Date
JP2018063489A JP2018063489A (en) 2018-04-19
JP6958886B2 true JP6958886B2 (en) 2021-11-02

Family

ID=61967841

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2016200328A Active JP6958886B2 (en) 2016-10-11 2016-10-11 Operator state estimator

Country Status (1)

Country Link
JP (1) JP6958886B2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7164275B2 (en) * 2018-10-31 2022-11-01 株式会社豊田中央研究所 Centralized state estimator
US11912307B2 (en) 2020-03-18 2024-02-27 Waymo Llc Monitoring head movements of drivers tasked with monitoring a vehicle operating in an autonomous driving mode
JP7409184B2 (en) * 2020-03-19 2024-01-09 マツダ株式会社 state estimation device
JP7002063B1 (en) 2021-04-30 2022-02-14 国立大学法人東北大学 Traffic accident estimation device, traffic accident estimation method, and traffic accident estimation program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07117593A (en) * 1993-10-21 1995-05-09 Mitsubishi Electric Corp Alarm device for vehicle
JP6218618B2 (en) * 2014-01-21 2017-10-25 アルパイン株式会社 Driving support device, driving support method, and driving support program
JP6323318B2 (en) * 2014-12-12 2018-05-16 ソニー株式会社 Vehicle control apparatus, vehicle control method, and program

Also Published As

Publication number Publication date
JP2018063489A (en) 2018-04-19

Similar Documents

Publication Publication Date Title
JP6958886B2 (en) Operator state estimator
EP3068672B1 (en) Changing of the driving mode for a driver assistance system
Wada et al. Characterization of expert drivers' last-second braking and its application to a collision avoidance system
US10059346B2 (en) Driver competency during autonomous handoff
Wickens et al. Workload.
Gold et al. Taking over control from highly automated vehicles
US9251704B2 (en) Reducing driver distraction in spoken dialogue
US11603104B2 (en) Driver abnormality determination system, method and computer program
EP3588372B1 (en) Controlling an autonomous vehicle based on passenger behavior
Wulf et al. Recommendations supporting situation awareness in partially automated driver assistance systems
KR20200113202A (en) Information processing device, mobile device, and method, and program
Funkhouser et al. Reaction times when switching from autonomous to manual driving control: a pilot investigation
CN112180921A (en) Automatic driving algorithm training system and method
WO2016158341A1 (en) Driving assistance device
CN104174161A (en) Apparatus and method for safe drive inducing game
US20220081009A1 (en) Information processing apparatus, moving apparatus, method and program
JP7154959B2 (en) Apparatus and method for recognizing driver's state based on driving situation judgment information
Graf et al. The Predictive Corridor: A Virtual Augmented Driving Assistance System for Teleoperated Autonomous Vehicles.
Sentouh et al. Toward a shared lateral control between driver and steering assist controller
Hirose et al. A study on modeling of driver's braking action to avoid rear-end collision with time delay neural network
Li et al. Bayesian network-based identification of driver lane-changing intents using eye tracking and vehicle-based data
Feleke et al. Detection of driver emergency steering intention using EMG signal
Zhou et al. Influence of cognitively distracting activity on driver’s eye movement during preparation of changing lanes
KR102479484B1 (en) System and Method for Improving Traffic for Autonomous Vehicles at Non Signalized Intersections
Berberian et al. MINIMA project: detecting and mitigating the negative impact of automation

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20190920

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20200727

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20200908

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20201029

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20210413

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20210521

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20210914

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20210930

R150 Certificate of patent or registration of utility model

Ref document number: 6958886

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150