JP2018063489A - Operator state estimation device - Google Patents
- Publication number
- JP2018063489A (application JP2016200328A)
- Authority
- JP
- Japan
- Prior art keywords
- pilot
- driver
- operator
- model
- state
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
Description
The present invention relates to an operator state estimation device that estimates states such as reduced attention in an operator who is maneuvering a target object such as an automobile.
Human error such as inattentive driving is a leading cause of traffic accidents. Research and development of automated driving systems aimed at reducing such accidents is being actively pursued, and the Cabinet Office of Japan has targeted commercialization of semi-automated driving systems for the first half of the 2020s. In a semi-automated driving system, control is handed over from the fully automated system to the driver (operator) when the system requests it. Before such a handover, it is desirable to confirm that the driver is in a state fit to take over. Technology that checks the driver's state is also useful during manual driving, for preventing inattentive or distracted driving.
The inventors have developed vehicle-dynamics analysis methods and control-system design methods that explicitly use an operator model (driver model) of the person maneuvering the vehicle (the target object) (Non-Patent Documents 1-5), as well as a real-time identification method for such an operator model and a method for estimating the operator's state from the identified model (Non-Patent Documents 6-8). Using an operator model makes possible analysis and design that directly account for the operator's behavior.
Patent Document 1 discloses a wakefulness determination device that, from images of the face of the driver of a vehicle, sequentially computes the convergence angle defined by the lines of sight of the driver's two eyes and determines from the standard deviation of that angle whether the driver's wakefulness has decreased.
Patent Document 2 discloses a drowsiness detection method and apparatus that acquire RRI data (the intervals between adjacent R waves in the driver's electrocardiogram while awake), analyze heart rate variability from those data, and build a per-driver drowsiness detection model by multivariate statistical process control on standardized waking HRV indices.
Patent Document 3 discloses a driver state estimation device comprising weak classifiers that each make a binary decision on one of several features of input biological or vehicle information, a state estimator that infers reduced driver attention from the classifiers' outputs, and a learning unit that learns the relationship between the features and reduced attention using AdaBoost.
Patent Document 4 discloses a driver state detection method and apparatus that judge the driver to be in an inattentive state when a temporary change in a specific facial region persists for a predetermined time or longer, or recurs a predetermined number of times or more within a predetermined period.
Because the operator state estimation methods proposed in Non-Patent Documents 6-8 use only vehicle motion and steering history, they can be implemented and operated at lower cost than the commonly used driver state estimation methods based on biological signals. However, since the driver does not steer during automated driving, methods based on steering history apply only while the driver is actually steering.
Patent Documents 1-4, for their part, judge the driver's state solely from biological signals such as convergence angle, heart rate variability, gaze movement, or changes around the mouth, and therefore cannot determine how much attention the driver is paying to the objects around the vehicle that should be monitored.
The object of the present invention is therefore to provide an operator state estimation device that can estimate the operator's state during both manual driving and automated driving, and that can determine how much attention the operator is paying to the monitored objects around the target object.
The operator state estimation device of claim 1 of the present invention estimates states such as reduced attention in an operator maneuvering a target object. It comprises monitored-object position measurement means, gaze point measurement means, operator model identification means, and operator state estimation means. The monitored-object position measurement means measures the position of the object the operator is monitoring; the gaze point measurement means measures the gaze point of the operator monitoring that object; the operator model identification means identifies an operator model that expresses the operator's input-output relationship, with the displacement of the monitored object as input and the displacement of the operator's gaze point as output; and the operator state estimation means estimates the operator's current state by comparing feature quantities of the identified model with those of the operator's earlier models or of a normative operator model.
Claim 2 specifies that in the device of claim 1 the feature quantity is a gain, a time constant, or a residual.
Claim 3 specifies that in the device of claim 1 or 2 the operator model identification means uses, for identification, only those displacements of the monitored object and of the operator's gaze point measured while the gaze point lies within a predetermined range of the monitored object.
Claim 4 specifies that in the device of any one of claims 1 to 3 the target object is a vehicle traveling on a road and the monitored object is a stationary or moving object outside the vehicle.
Claim 5 specifies that in the device of claim 4 the vehicle is a semi-automated driving system vehicle having a manual driving mode, in which the operator drives, and an automated driving mode, in which all or some of acceleration, steering, and braking are performed automatically, and that the operator state estimation means estimates the operator's state before switching between the manual driving mode and the automated driving mode.
The present invention models the driver's gaze movement while driving from surrounding-environment information (the displacement of the monitored object) and the driver's (operator's) gaze point movement, and estimates the driver's (operator's) state from feature quantities of the resulting driver (operator) model. The invention therefore provides an operator state estimation device that can estimate the operator's state during both manual driving and automated driving, and that can determine how much attention the operator is paying to each monitored object around the vehicle (the target object).
The operator state estimation device of the first embodiment of the present invention comprises monitored-object position measurement means, gaze point measurement means, operator model identification means, and operator state estimation means. The monitored-object position measurement means measures the position of the object the operator is monitoring; the gaze point measurement means measures the gaze point of the operator monitoring that object; the operator model identification means identifies an operator model expressing the operator's input-output relationship, with the displacement of the monitored object as input and the displacement of the operator's gaze point as output; and the operator state estimation means estimates the operator's current state by comparing feature quantities of the identified model with those of the operator's earlier models or of a normative operator model. Because the operator's state is estimated from a model of gaze point movement toward monitored objects, the state can be estimated during both manual driving and automated driving, and it can be determined how much attention the driver is paying to each monitored object around the vehicle.
In the second embodiment, the feature quantity of the first embodiment is a gain, a time constant, or a residual, so the operator's state can be estimated from any of these quantities.
In the third embodiment, the operator model identification means of the first or second embodiment uses for identification only the displacements of the monitored object and of the operator's gaze point measured while the gaze point is within a predetermined range of the monitored object. Excluding the data taken while the gaze point is outside that range makes the modeling more accurate.
In the fourth embodiment, the target object of any of the first to third embodiments is a vehicle traveling on a road and the monitored object is a stationary or moving object outside the vehicle, so the invention can be applied to automobiles. Taking stationary or moving objects around the vehicle, such as other vehicles, traffic lights, signs, and lanes, as monitored objects makes it possible to determine how much attention the driver is paying to each of them.
In the fifth embodiment, the vehicle of the fourth embodiment is a semi-automated driving system vehicle having a manual driving mode, in which the operator drives, and an automated driving mode, in which all or some of acceleration, steering, and braking are performed automatically, and the operator state estimation means estimates the operator's state before switching between the two modes. Because the state is estimated before a mode switch, accidents at the handover from automated to manual driving can be reduced.
An operator state estimation device according to one example of the present invention is described below.
FIG. 1 is a schematic diagram of a human-error prevention system using the operator state estimation device of this example.
The operator state estimation device is mounted on a vehicle (the target object) equipped with a semi-automated driving system. Here, a "semi-automated driving system" is a system that has a manual driving mode, in which the operator performs all acceleration, steering, and braking, and an automated driving mode, in which at least one of these operations is performed automatically, and that switches between the two modes under predetermined conditions.
The device comprises monitored-object position measurement means 10, gaze point measurement means 20, driver model identification means (operator model identification means) 30, and driver state estimation means (operator state estimation means) 40.
The monitored-object position measurement means 10 continuously measures the position of the object the driver (operator) is monitoring and sends it to the driver model identification means 30. A monitored object is a stationary or moving object outside the vehicle toward which the driver should direct the line of sight while driving, such as another vehicle, a traffic light, a pedestrian, a sign, or a lane.
The gaze point measurement means 20 continuously measures the driver's gaze point by tracking the line of sight of the driver monitoring the object, and sends it to the driver model identification means 30.
The driver model identification means 30 identifies a driver model expressing the driver's input-output relationship, taking as input the displacement of the monitored object computed from the positions sent by the monitored-object position measurement means 10, and as output the displacement of the driver's gaze point computed from the gaze points sent by the gaze point measurement means 20, and sends the identified model to the driver state estimation means 40.
The driver state estimation means 40 estimates the driver's current state by comparing feature quantities of the identified driver model with those of the driver's earlier models or of a normative driver model. The feature quantity used for comparison is not particularly limited, but a gain, a time constant, or a residual is preferable. For example, the average of the identified model's time constant over a predetermined period can be compared with the corresponding average for a normative driver model stored in storage means 60 to infer whether the driver's attention has decreased. Alternatively, each identified model may also be sent to and stored in the storage means 60 so that the feature quantities of the latest model can be compared with those of earlier stored models; or the gain, time constant, and residual may each be compared against a separate reference model and the results combined into an overall judgment.
The human-error prevention system of this example further comprises warning/steering intervention means 50. When the driver state estimation means 40 estimates that the driver's attention has decreased, the warning/steering intervention means 50 issues a warning; if the vehicle is in automated driving mode, it keeps that mode active instead of switching to manual driving. This prevents accidents caused by human error such as inattentive driving.
Because the driver's state is thus estimated from a driver model based on gaze point movement toward monitored objects, the state can be estimated during both manual driving and automated driving, and it can be determined how much attention the driver is paying to each monitored object around the vehicle.
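The comparison described above, a recent average of the identified time constant tested against a normative value, can be sketched as follows. The window length and alert ratio are illustrative assumptions; the patent fixes neither.

```python
import numpy as np

def attention_reduced(time_constants, normative_T, window=10, ratio=1.5):
    """Flag reduced attention when the mean of the most recent identified
    time constants exceeds the normative time constant by `ratio`.
    `window` and `ratio` are hypothetical tuning values, not from the patent."""
    recent = np.asarray(time_constants[-window:], dtype=float)
    return bool(recent.mean() > ratio * normative_T)

# Time constants near the normative 0.5 s: no alert.
alert_ok = attention_reduced([0.50, 0.55, 0.48, 0.52], normative_T=0.5)
# Markedly slower gaze response: alert.
alert_bad = attention_reduced([0.90, 1.00, 0.95, 1.10], normative_T=0.5)
```

The same pattern applies when the reference is the driver's own earlier models rather than a normative one: only the source of `normative_T` changes.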
The driver model identification means 30 uses for identification only the displacements of the monitored object and of the driver's gaze point measured while the gaze point is within a predetermined range of the monitored object.
The gaze point data measured by the gaze point measurement means 20 may include gaze movements made while the driver is not monitoring the object. Excluding the data taken while the gaze point is outside the predetermined range from identification therefore makes the modeling more accurate.
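A minimal sketch of this gating, assuming a Euclidean distance test against a hypothetical threshold (the patent does not specify the size of the predetermined range):

```python
import numpy as np

def gate_samples(gaze, target, max_dist):
    """Keep only samples where the gaze point lies within `max_dist` of the
    monitored object; the rest are excluded from model identification.
    The distance metric and threshold are assumptions."""
    gaze = np.asarray(gaze, dtype=float)
    target = np.asarray(target, dtype=float)
    mask = np.linalg.norm(gaze - target, axis=1) <= max_dist
    return gaze[mask], target[mask], mask

# Middle sample looks far away from the object, so it is dropped.
gaze_xy = [[0.0, 0.0], [5.0, 5.0], [0.2, -0.1]]
target_xy = [[0.1, 0.0], [0.0, 0.0], [0.0, 0.0]]
g, t, m = gate_samples(gaze_xy, target_xy, max_dist=1.0)
```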
The driver's gaze point movement model is described next.
FIG. 2 shows the gaze point movement model. The driver model identification means 30 identifies a driver model expressing the driver's input-output relationship, with the surrounding environment (the displacement of the monitored object) as input and the displacement of the driver's gaze point as output.
During automated driving, the driver perceives changes in the surrounding environment by moving the gaze point. This behavior is expressed by the following model.
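The model equations themselves appear only in the patent's FIG. 2, which is not reproduced in this text. As a hedged illustration, a first-order lag from object displacement to gaze displacement is one plausible minimal form, and its gain and time constant (the feature quantities named in claim 2) can then be identified by least squares:

```python
import numpy as np

def identify_first_order(u, y, dt):
    """Least-squares fit of y[k+1] = a*y[k] + b*u[k], then recovery of the
    continuous-time gain K and time constant T. The first-order form is an
    assumption; the patent's actual model is given in its FIG. 2."""
    Phi = np.column_stack([y[:-1], u[:-1]])
    a, b = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
    T = -dt / np.log(a)      # time constant from the discrete pole
    K = b / (1.0 - a)        # steady-state gain
    return K, T

# Self-check: simulate a known lag (K=1, T=0.5 s at 20 Hz) and re-identify it.
dt, T_true, K_true = 0.05, 0.5, 1.0
a_true = np.exp(-dt / T_true)
b_true = K_true * (1.0 - a_true)
u = np.random.default_rng(0).standard_normal(2000)  # object displacement
y = np.zeros_like(u)
for k in range(len(u) - 1):
    y[k + 1] = a_true * y[k] + b_true * u[k]        # simulated gaze displacement
K_hat, T_hat = identify_first_order(u, y, dt)
```

Running such an identification over a sliding window yields the time-constant sequence that the state estimation means compares against a reference.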
Next, the relationship between reduced attention and the driver model is described.
FIG. 3 shows the closed-loop system formed by the driver, the vehicle, and the environment (the monitored objects). While maneuvering, the driver perceives changes in the surrounding environment and decides how to act: the driver steers during manual driving and monitors the surroundings during automated driving. When the driver's attention decreases, delays and errors in perception occur, and the driver's judgments and operations become faulty.
Existing research has experimentally confirmed that, in simulated driving, the driver's response to a presented stimulus is delayed when attention is reduced (Nobuyuki Uchida et al.: On evaluating drivers' attentional state during mobile-phone conversation, IATSS Review, vol. 30, no. 3, pp. 57-65 (2005)). The time constant of the driver model, which expresses the lag of gaze point movement behind changes in the surrounding environment, therefore increases as attention decreases. Moreover, steering performance immediately after switching to manual driving can be predicted from the model's feature quantities.
An experiment using the operator state estimation device of the present invention is described next.
In this experiment, a state of reduced attention is simulated by imposing a mental load through a secondary task while the driver monitors the surroundings. Variation of the pupil diameter is used as a reference to confirm that the mental load is in fact being applied.
The pupillary sphincter is innervated by the parasympathetic nervous system and the dilator muscle by the sympathetic nervous system (Kenji Ito, Sonoko Kuwano, Akinori Komatsubara: Ergonomics Handbook, Tokyo, Asakura Shoten, 2003, p. 363), so pupil diameter is an objective index of a person's psychological state (Yamanaka, K. and Kawakami, M.: Convenient Evaluation of Mental Stress with Pupil Diameter, International Journal of Occupational Safety and Ergonomics, vol. 15, no. 4, pp. 447-450 (2009)). In this experiment, blinks and samples in which the pupil diameter changed by 0.1 mm within 1/30 s (Sandra, P.M.: U.S. Patent US6090051A (2000)), as well as samples outside the 2-8 mm range (AIST Human Welfare and Medical Engineering: Human Measurement Handbook, Tokyo, Asakura Shoten, 2003, pp. 113-115), are removed from the pupil-diameter measurements, and the cleaned signal is used as the index of mental load.
Pupil diameter was measured with a SeeingMachines FOVIO eye tracker.
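The artifact-removal criteria quoted above can be sketched directly. The 30 Hz sampling rate and the choice to invalidate both samples adjacent to a jump are assumptions:

```python
import numpy as np

def clean_pupil(d):
    """Mask pupil-diameter samples (mm, assumed sampled at 30 Hz) judged
    artifactual by the cited criteria: a change of 0.1 mm between consecutive
    1/30-s samples (blinks), or a value outside the 2-8 mm range."""
    d = np.asarray(d, dtype=float)
    valid = (d >= 2.0) & (d <= 8.0)        # physiological range
    jump = np.abs(np.diff(d)) >= 0.1       # blink-like step between samples
    valid[:-1] &= ~jump                    # a jump invalidates both samples
    valid[1:] &= ~jump                     # on either side of the step
    return valid

# 9.0 mm is out of range, and the steps around it are blink-like jumps.
mask = clean_pupil([3.00, 3.02, 3.04, 3.03, 9.00, 3.00])
```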
FIG. 4 outlines the experimental apparatus. Driver X watches the scene projected on screen 2 by projector 1, monitoring the surroundings during automated driving and maneuvering with steering wheel 3 and pedals 4 during manual driving. The pedals 4 consist of an accelerator pedal 4A and a brake pedal 4B; the steering wheel 3 provides no force feedback. Gaze point and pupil diameter are measured with a non-contact eye tracker (SeeingMachines FOVIO) serving as the gaze point measurement means 20. The driving simulator was built with FORUM8's UC-win/Road; arbitrary experimental conditions such as vehicle dynamics and disturbances can be programmed with the UC-win/Road SDK and Delphi XE2. Computer 5 runs and controls the simulator.
The monitored-object position measurement means 10, driver model identification means 30, and driver state estimation means 40 are omitted from the figure.
In this experiment, the only object the driver watches is the meandering lead vehicle. To simplify the analysis, the driving environment is a straight road of constant elevation, the gap between the host vehicle and the lead vehicle is held constant, and the lead vehicle's position as seen by the subject (driver X) does not change in the depth or vertical directions. Under these conditions, only gaze point movement parallel to the road surface is analyzed.
At the start of a trial, the subject sits in the driver's seat of the host vehicle, which is driving automatically so as to keep a constant gap to the meandering lead vehicle. About 120 s after the start, another vehicle cuts in between the lead vehicle and the host vehicle. The moment the cut-in vehicle appears on screen 2, the system switches from automated to manual driving; the subject then drives for 60 s, after which the trial ends.
In this experiment, in order to model the driver's gaze-point movement in response to the movement of the preceding vehicle, subjects were instructed, as the primary task, to monitor the preceding vehicle at all times. To simulate a state of reduced driver attention, a mental load was imposed by having subjects perform a secondary task during automatic driving: an N-back task in which they had to speak aloud the digit presented N items earlier in a continuously flowing stream of digits. Subjects were further instructed to press the brake pedal 4B as soon as they recognized the cut-in vehicle.
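The secondary task described above can be sketched as a minimal N-back routine. This is purely illustrative, assuming the correct response to each new digit is the digit presented N items earlier; the exact task parameters are not given in the text.

```python
from collections import deque

def n_back_answers(stream, n):
    """Return the expected answers for an N-back task: for each new digit,
    the correct spoken response is the digit presented n items earlier
    (None while fewer than n + 1 digits have been shown)."""
    history = deque(maxlen=n + 1)
    answers = []
    for digit in stream:
        history.append(digit)
        answers.append(history[0] if len(history) == n + 1 else None)
    return answers
```

For example, with a 2-back task the subject must always hold the two most recent digits in memory while speaking the one before them, which is what creates the sustained mental load.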
The host vehicle travels automatically under feedback control so that its distance to the meandering preceding vehicle remains constant. During automatic driving, operating the steering 3 or pedals 4 has no effect on the host vehicle. The preceding vehicle meanders randomly so that the subject cannot predict its next movement. Because the cut-in vehicle cuts in while traveling 20 km/h faster than the host vehicle, no collision occurs unless the subject maneuvers inappropriately.
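The distance-keeping behavior can be illustrated with a minimal proportional feedback sketch. The patent does not specify the control law; the gain `kp`, the use of the lead vehicle's speed as a base speed, and the scenario values below are all assumptions for illustration.

```python
def host_speed_command(gap, target_gap, base_speed, kp=0.2):
    """One step of a proportional feedback law: command a speed above the
    base speed when the gap to the preceding vehicle is too large, and
    below it when the gap is too small."""
    return base_speed + kp * (gap - target_gap)

# Closed loop: starting 10 m too far back, the gap converges toward 50 m.
gap, lead_speed, dt = 60.0, 20.0, 0.1
for _ in range(200):
    v = host_speed_command(gap, target_gap=50.0, base_speed=lead_speed)
    gap += (lead_speed - v) * dt  # gap grows when the host is slower
```

With this law the gap error decays geometrically (by a factor of 1 - kp*dt per step), so the inter-vehicle distance settles at the target without overshoot.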
Four subjects, to whom the content and purpose of the experiment had been fully explained in advance and who had given their consent, took part in the driving experiment. After sufficient driving practice, the experiment was conducted with the primary and secondary tasks. Subjects gave their answers to the secondary task verbally, and an experiment assistant recorded them.
The measured gaze-point position and preceding-vehicle position are converted into the xy coordinate system shown in FIG. 5 and used for model identification. This is a coordinate system that advances at constant speed so that the preceding vehicle always lies in the xy plane.
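The conversion into the constant-speed moving frame might look like the following sketch. The exact axis conventions of FIG. 5 are not reproduced in this excerpt, so the choice of a forward x axis and lateral y axis is an assumption.

```python
def to_moving_frame(point_world, t, frame_speed):
    """Convert a world-coordinate point (x_forward, y_lateral) into a frame
    whose origin advances along the forward axis at constant frame_speed,
    so that the preceding vehicle remains at a fixed depth in the xy plane.
    Axis layout is an illustrative assumption, not taken from FIG. 5."""
    x_world, y_world = point_world
    return (x_world - frame_speed * t, y_world)
```

In this frame the preceding vehicle's depth coordinate stays constant (matching the constant inter-vehicle distance), and only its lateral meandering and the gaze point's lateral tracking remain as signals for identification.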
In this experiment, subjects were instructed to press the brake pedal 4B immediately upon recognizing the cut-in vehicle. The time from the cut-in vehicle appearing on the screen 2 until the brake pedal 4B is pressed (hereinafter the "driver's reaction time" or "reaction time") was therefore used as the driving performance measure: the shorter the driver's reaction time, the better the driving performance.
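Extracting the reaction time from logged signals can be sketched as below; the sampling scheme and the boolean brake signal are illustrative assumptions, as the patent does not describe its logging format.

```python
def reaction_time(t_cut_in, brake_pressed, dt):
    """Return the driver's reaction time: elapsed time from the cut-in
    vehicle appearing on screen (t_cut_in, seconds) to the first sample at
    which the brake pedal is pressed. brake_pressed is a list of booleans
    sampled every dt seconds; returns None if the brake is never pressed."""
    for i, pressed in enumerate(brake_pressed):
        t = i * dt
        if t >= t_cut_in and pressed:
            return t - t_cut_in
    return None
```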
Next, data processing will be described.
The driver model is shown below.
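The model equations (1) to (4) are not reproduced in this excerpt. Since the features discussed later are a gain and a time constant, the sketch below treats the driver model Hx as a first-order lag K/(Ts+1), identified by least squares from gaze and vehicle data. This structure is an assumption made purely for illustration.

```python
import math
import numpy as np

def identify_first_order(u, y, dt):
    """Least-squares fit of a discrete first-order lag
    y[k] = a*y[k-1] + b*u[k-1], converted to a continuous-time gain K and
    time constant T. Here u is the lateral displacement of the preceding
    vehicle and y the lateral displacement of the gaze point, sampled
    every dt seconds. The first-order structure is an assumption; the
    patent's actual equations (1)-(4) are not in this excerpt."""
    u, y = np.asarray(u), np.asarray(y)
    phi = np.column_stack([y[:-1], u[:-1]])      # regressor matrix
    a, b = np.linalg.lstsq(phi, y[1:], rcond=None)[0]
    T = -dt / math.log(a)                        # time constant [s]
    K = b / (1.0 - a)                            # static gain
    return K, T
```

A larger identified T means the gaze point lags further behind the preceding vehicle's motion, which is consistent with the later use of the time constant as an attention indicator.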
The obtained data also includes gaze-point movements made while the driver was not monitoring the preceding vehicle. Data in which the gaze point is 5 m or more away from the preceding vehicle is therefore regarded as not monitoring the vehicle and is excluded from model identification. The remaining data is divided into 10-second segments; segments shorter than 10 seconds but at least 2 seconds long were also used for identification.
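The segmentation rule just described can be expressed as a short routine. The sampling interval `dt` and the returned (start, end) index format are illustrative assumptions.

```python
def select_segments(distance, dt, max_dist=5.0, seg_len=10.0, min_len=2.0):
    """Select data segments usable for model identification, per the rule in
    the text: samples where the gaze point is 5 m or more from the preceding
    vehicle are discarded; each remaining run is cut into 10-second pieces,
    and pieces of at least 2 seconds are kept. distance is gaze-to-vehicle
    distance sampled every dt seconds; returns (start, end) sample indices."""
    segments, start = [], None
    for i, d in enumerate(list(distance) + [max_dist]):  # sentinel closes last run
        if d < max_dist and start is None:
            start = i
        elif d >= max_dist and start is not None:
            run = list(range(start, i))
            step = int(round(seg_len / dt))
            for j in range(0, len(run), step):
                piece = run[j:j + step]
                if len(piece) * dt >= min_len:
                    segments.append((piece[0], piece[-1]))
            start = None
    return segments
```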
FIG. 6 shows the mean and standard deviation of each subject's pupil diameter during automatic driving.
FIGS. 7 to 10 show (a) the residual, (b) the gain, and (c) the time constant for each subject, obtained from the driver model Hx identified from data during automatic driving. The horizontal axis is elapsed time [seconds]; as described above, driving is automatic until about 120 seconds after the start of the experiment and manual thereafter. FIG. 7 is for subject A, FIG. 8 for subject B, FIG. 9 for subject C, and FIG. 10 for subject D.
FIG. 11 shows, for each subject, the mean and standard deviation during automatic driving of (a) the residual, (b) the gain, and (c) the time constant. Where the identified model was unstable, it was excluded from the analysis on the grounds that its behavior was not correlated with the input.
In FIGS. 6 to 11, the solid lines indicate the case without the secondary task and the broken lines the case with the secondary task.
Table 1 shows each subject's reaction time [seconds] with and without the secondary task, and Table 2 shows the correct-answer rate [%] for the secondary task.
FIG. 6 shows that the pupil diameters of subjects A, B, and D increased with the secondary task during automatic driving, indicating that the secondary task imposed a mental load. Table 1 shows that the reaction times of these three subjects increased when performing the secondary task, indicating that it induced a state of reduced attention. For subject C no increase in pupil diameter was observed, but the reaction time increased by 0.36 seconds, so subject C can also be presumed to have entered a state of reduced attention due to the secondary task.
Note that, in the condition with the secondary task, no model feature quantities were obtained for subject B after about 80 seconds from the start of the experiment (see FIG. 8). This is because the gaze point was frequently 5 m or more away from the preceding vehicle, so no model could be identified.
For three of the four subjects (A, C, and D), the time constant is larger with the secondary task than without it (see FIG. 11(c)). No effect of the secondary task on subject B's time constant is seen. In subject B's secondary-task runs, the gaze point increasingly strayed more than 5 m from the preceding vehicle as time passed, so identified models could not be obtained. The mean was therefore computed only from models identified shortly after the start of the experiment, when the secondary task had little effect, which presumably explains why little change appears.
FIG. 12 shows the relationship, with and without the secondary task, between the driver's reaction time and the mean of the time constants obtained during the 20 seconds before the driving-mode switch. "●" is subject A with the secondary task and "○" subject A without it; "▲" is subject C with and "△" subject C without; "■" is subject D with and "□" subject D without. Subject B is not plotted because, with the secondary task, no model was obtained during the 20 seconds before the switch. FIG. 12 shows a positive correlation for subjects A, C, and D between the time constant during automatic driving and the driver's reaction time. Thus, when the time constant of the driver model representing gaze-point movement during automatic driving is large, a state of reduced attention can be predicted. For subject B as well, the gaze-point movement is clearly strongly influenced by the secondary task, so a prediction algorithm can still be constructed.
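The positive relationship read off FIG. 12 corresponds to a Pearson correlation between time constants and reaction times. A minimal computation is sketched below; the sample values in the test are illustrative, not data from the patent.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. per-trial driver-model time constants and the corresponding
    reaction times. A value near +1 indicates the positive correlation
    reported for subjects A, C, and D."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```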
Although the above description took as its example a vehicle equipped with a semi-automatic driving system, the pilot state estimation device according to the present invention is also applicable to manual-driving-only vehicles without such a system. It can further be applied to other vehicles such as aircraft, motorcycles, ships, and trains, or to remotely piloted unmanned aircraft (so-called "drones").
The above formulas (1) to (4) are examples for facilitating the understanding of the present invention, and various modifications can be made without departing from the spirit of the present invention.
The pilot state estimation device according to the present invention models the gaze movement of the person piloting an object and effectively estimates states such as reduced driver attention from feature quantities of the resulting pilot model; applied to automobiles and the like, it therefore contributes to preventing accidents caused by pilot human error.
1 Projector
2 Screen
3 Steering
4 Pedal
5 Computer
10 Monitored-object position measurement means
20 Gaze-point measurement means
30 Driver model identification means (pilot model identification means)
40 Driver state estimation means (pilot state estimation means)
50 Warning/steering intervention means
60 Storage means
X Driver (pilot)
Claims (5)
A pilot state estimation device for estimating a state, such as reduced attention, of a pilot who operates an object to be piloted, comprising:
monitored-object position measurement means;
gaze-point measurement means;
pilot model identification means; and
pilot state estimation means,
wherein the monitored-object position measurement means measures the position of a monitored object that the pilot monitors,
the gaze-point measurement means measures the gaze point of the pilot monitoring the monitored object,
the pilot model identification means identifies a pilot model expressing the pilot's input/output relationship, with the displacement of the monitored object as input and the displacement of the pilot's gaze point as output, and
the pilot state estimation means estimates the pilot's current state by comparing a feature quantity of the pilot model with a feature quantity of a previous pilot model of the same pilot or of a normative pilot model.
The pilot state estimation device according to any one of claims 1 to 3, wherein the object to be piloted is a vehicle traveling on a road, and the monitored object is a stationary or moving object existing outside the vehicle.
The pilot state estimation device according to claim 4, wherein the vehicle is a semi-automatic driving vehicle having a manual driving mode in which the pilot drives and an automatic driving mode in which at least one of acceleration, steering, and braking is performed automatically, and wherein the pilot state estimation means estimates the pilot's state before switching between the manual driving mode and the automatic driving mode.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016200328A JP6958886B2 (en) | 2016-10-11 | 2016-10-11 | Operator state estimator |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016200328A JP6958886B2 (en) | 2016-10-11 | 2016-10-11 | Operator state estimator |
Publications (2)
Publication Number | Publication Date |
---|---|
JP2018063489A true JP2018063489A (en) | 2018-04-19 |
JP6958886B2 JP6958886B2 (en) | 2021-11-02 |
Family
ID=61967841
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2016200328A Active JP6958886B2 (en) | 2016-10-11 | 2016-10-11 | Operator state estimator |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP6958886B2 (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07117593A (en) * | 1993-10-21 | 1995-05-09 | Mitsubishi Electric Corp | Alarm device for vehicle |
JP2015138308A (en) * | 2014-01-21 | 2015-07-30 | アルパイン株式会社 | Driving support device, driving support method, and driving support program |
JP2016115023A (en) * | 2014-12-12 | 2016-06-23 | ソニー株式会社 | Vehicle control system, vehicle control method, and program |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020071705A (en) * | 2018-10-31 | 2020-05-07 | 株式会社豊田中央研究所 | Concentrated state estimation device |
WO2021188680A1 (en) * | 2020-03-18 | 2021-09-23 | Waymo Llc | Monitoring head movements of drivers tasked with monitoring a vehicle operating in an autonomous driving mode |
US11912307B2 (en) | 2020-03-18 | 2024-02-27 | Waymo Llc | Monitoring head movements of drivers tasked with monitoring a vehicle operating in an autonomous driving mode |
JP2021145974A (en) * | 2020-03-19 | 2021-09-27 | マツダ株式会社 | State estimation device |
JP7409184B2 (en) | 2020-03-19 | 2024-01-09 | マツダ株式会社 | state estimation device |
JP7002063B1 (en) * | 2021-04-30 | 2022-02-14 | 国立大学法人東北大学 | Traffic accident estimation device, traffic accident estimation method, and traffic accident estimation program |
WO2022230960A1 (en) * | 2021-04-30 | 2022-11-03 | 国立大学法人東北大学 | Traffic accident inference device, traffic accident inference method, and traffic accident inference program |
Also Published As
Publication number | Publication date |
---|---|
JP6958886B2 (en) | 2021-11-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wada et al. | Characterization of expert drivers' last-second braking and its application to a collision avoidance system | |
JP6958886B2 (en) | Operator state estimator | |
Markkula | Modeling driver control behavior in both routine and near-accident driving | |
WO2019151266A1 (en) | Information processing device, mobile apparatus, method, and program | |
US9251704B2 (en) | Reducing driver distraction in spoken dialogue | |
Gold et al. | Taking over control from highly automated vehicles | |
CN109032116A (en) | Vehicle trouble processing method, device, equipment and storage medium | |
KR20210088565A (en) | Information processing devices, mobile devices and methods, and programs | |
US9446729B2 (en) | Driver assistance system | |
CN112180921B (en) | Automatic driving algorithm training system and method | |
Kuehn et al. | Takeover times in highly automated driving (level 3) | |
JP7309339B2 (en) | Gaze-driven communication for assistance in mobility | |
US11538259B2 (en) | Toward real-time estimation of driver situation awareness: an eye tracking approach based on moving objects of interest | |
WO2020131803A4 (en) | Systems and methods for detecting and dynamically mitigating driver fatigue | |
US20220081009A1 (en) | Information processing apparatus, moving apparatus, method and program | |
JP7154959B2 (en) | Apparatus and method for recognizing driver's state based on driving situation judgment information | |
CN114030475A (en) | Vehicle driving assisting method and device, vehicle and storage medium | |
US20240000354A1 (en) | Driving characteristic determination device, driving characteristic determination method, and recording medium | |
Benloucif et al. | Multi-level cooperation between the driver and an automated driving system during lane change maneuver | |
CN116331221A (en) | Driving assistance method, driving assistance device, electronic equipment and storage medium | |
CN112172829B (en) | Lane departure warning method and device, electronic equipment and storage medium | |
Xing et al. | Advanced driver intention inference: Theory and design | |
CN109189567A (en) | Time-delay calculation method, apparatus, equipment and computer readable storage medium | |
Gonçalves et al. | Driver capability monitoring in highly automated driving: from state to capability monitoring | |
Hirose et al. | A study on modeling of driver's braking action to avoid rear-end collision with time delay neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A621 | Written request for application examination |
Free format text: JAPANESE INTERMEDIATE CODE: A621 Effective date: 20190920 |
|
A977 | Report on retrieval |
Free format text: JAPANESE INTERMEDIATE CODE: A971007 Effective date: 20200727 |
|
A131 | Notification of reasons for refusal |
Free format text: JAPANESE INTERMEDIATE CODE: A131 Effective date: 20200908 |
|
A521 | Request for written amendment filed |
Free format text: JAPANESE INTERMEDIATE CODE: A523 Effective date: 20201029 |
|
A131 | Notification of reasons for refusal |
Free format text: JAPANESE INTERMEDIATE CODE: A131 Effective date: 20210413 |
|
A521 | Request for written amendment filed |
Free format text: JAPANESE INTERMEDIATE CODE: A523 Effective date: 20210521 |
|
TRDD | Decision of grant or rejection written | ||
A01 | Written decision to grant a patent or to grant a registration (utility model) |
Free format text: JAPANESE INTERMEDIATE CODE: A01 Effective date: 20210914 |
|
A61 | First payment of annual fees (during grant procedure) |
Free format text: JAPANESE INTERMEDIATE CODE: A61 Effective date: 20210930 |
|
R150 | Certificate of patent or registration of utility model |
Ref document number: 6958886 Country of ref document: JP Free format text: JAPANESE INTERMEDIATE CODE: R150 |