CN103839046A - Automatic driver attention identification system and identification method thereof


Info

Publication number: CN103839046A
Authority: CN (China)
Legal status: Granted
Application number: CN201310731617.6A
Other languages: Chinese (zh)
Other versions: CN103839046B (en)
Inventors: Zhang Wei (张伟), Cheng Bo (成波)
Assignees: Suzhou Tsingtech Microvision Electronic Science & Technology Co Ltd; Suzhou Automotive Research Institute of Tsinghua University
Priority and filing date: 2013-12-26
Published as CN103839046A; application granted and published as CN103839046B
Current legal status: Active

Abstract

The invention discloses an automatic driver attention identification system comprising an image collection device, an image processing device and an alarm device. The image collection device obtains the facial image of the driver in real time and transmits the collected image to the image processing device; the image processing device obtains the relative attitude angle of the driver from the driver image and analyzes and judges the driver's attention state according to a driver attention judging model; the alarm device gives an alarm when the image processing device judges that the driver is in an inattentive state. The system judges the fatigue state of the driver according to the driver's attention state, which helps to build a real-time and accurate fatigue pre-warning system.

Description

Automatic driver attention recognition system and recognition method thereof
Technical Field
The invention belongs to the technical field of intelligent traffic, and particularly relates to an automatic driver attention recognition system and a recognition method thereof.
Background
Researchers at the Virginia Tech Transportation Institute and the National Highway Traffic Safety Administration completed a year-long survey and, through the study of thousands of hours of recorded traffic accident data, concluded that driver distraction is the most important cause of traffic accidents, both of accidents that actually occurred and of near-accidents such as collisions, scrapes and other minor friction incidents. Among car accidents that actually occurred, 80% were caused by inattention; among serious car accidents, 65% were caused by driver inattention. Driver inattention (for example, making and receiving phone calls while driving, talking with others, driving while short of sleep, or eating) has become the leading cause of car accidents. Methods and apparatus for monitoring the driver's attention state and preventing driver distraction have accordingly been developed.
Chinese patent publication No. CN203020194 discloses an apparatus based on image processing technology for preventing driver distraction, which includes a CCD camera, a DSP processor, a mobile phone shield, and an audible and visual alarm. The apparatus can recognize not only that the driver is using a phone, but also other distracting actions such as eating while driving; when the driver is eating or using a mobile phone, the DSP processor sends an instruction to the audible and visual alarm and to the mobile phone shield, reminding the driver not to use the phone or eat.
The above-described apparatus has a significant drawback: it can identify only a few specific distracting behaviors of the driver, such as phone use and eating. When the driver engages in other distracting behaviors that the apparatus cannot identify, accidents can occur because the dangerous behavior is not recognized in time.
Disclosure of Invention
The invention provides an automatic driver attention recognition system. Its aim is to judge changes in the driver's head posture so as to provide an automatic driver attention recognition system with better real-time performance and safety, and to warn in real time against dangerous, inattentive driving behavior.
In order to solve the problems in the prior art, the technical scheme provided by the invention is as follows:
a driver attention automatic identification system comprises an image acquisition device, an image processing device and an alarm device, and is characterized in that the image acquisition device is used for acquiring a face image of a driver in real time and transmitting the acquired image of the driver to the image processing device; the image processing device is used for acquiring a relative attitude angle of a driver through a driver image and analyzing and judging the attention state of the driver according to a driver attention discrimination model; the alarm device is used for giving an alarm when the image processing device judges that the driver is in the state of inattention.
The preferred technical scheme is as follows: the alarm device is selected from one of, or any combination of two or more of, the following alarm prompters: an LED lamp, an audible alarm, a safety-belt vibrator and a seat vibrator.
The preferred technical scheme is as follows: the image acquisition device is a camera, and the output end of the camera is connected with the input end of the image processing device.
The preferred technical scheme is as follows: the image processing device is a DSP processing system, and the output end of the DSP processing system is connected with the input end of the alarm device.
The preferred technical scheme is as follows: the image acquisition device is arranged above the vehicle instrument panel, positioned so that it can capture a frontal face image while the driver faces the front of the vehicle; this image serves as the installation reference.
Another object of the present invention is to provide a method for automatically recognizing attention of a driver, characterized in that the method comprises the steps of:
(1) collecting a face image of a driver;
(2) acquiring a relative attitude angle of a driver through a driver image, and analyzing and judging the attention state of the driver according to a driver attention discrimination model;
(3) when the image processing device judges that the driver is in the state of inattention, the alarm prompt is carried out.
The preferred technical scheme is as follows: in the step (2), the relative attitude angle of the driver is obtained by utilizing the projective transformation principle, and the method specifically comprises the following steps:
1) acquiring a face area through a driver image, and detecting angular points in the face area;
2) tracking the corner points in the face area by adopting an L-K optical flow tracking algorithm, and selecting points participating in attitude angle calculation;
3) solving the essential matrix and the translation vector from the epipolar equation, using the feature point set selected on the basis of face tracking;
4) obtaining the correct solution of the rotation matrix from the essential matrix, and obtaining from the rotation matrix the Euler angles of the driver's head motion about the 3 coordinate axes, i.e., the driver's relative attitude angle.
The preferred technical scheme is as follows: in the method, it is assumed that a point M on the driver's head moves, relative to the camera coordinate system, from its position $(x_k, y_k, z_k)$ at time $t_k$ to the position $(x_{k+1}, y_{k+1}, z_{k+1})$ at time $t_{k+1}$, and that its projection on the two-dimensional image plane moves from $(x'_k, y'_k)$ to $(x'_{k+1}, y'_{k+1})$. Let the rotation matrix and the translation vector be $R_k$ and $T_k$ respectively; the three-dimensional rigid motion model is then:

$$\begin{pmatrix} x_{k+1} \\ y_{k+1} \\ z_{k+1} \end{pmatrix} = \begin{pmatrix} r_{xx} & r_{xy} & r_{xz} \\ r_{yx} & r_{yy} & r_{yz} \\ r_{zx} & r_{zy} & r_{zz} \end{pmatrix} \begin{pmatrix} x_k \\ y_k \\ z_k \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix} = R_k \begin{pmatrix} x_k \\ y_k \\ z_k \end{pmatrix} + T_k;$$
According to the perspective projection model of the camera:

$$x'_k = F\,\frac{x_k}{z_k}, \qquad y'_k = F\,\frac{y_k}{z_k};$$

then:

$$x'_{k+1} = F\,\frac{x_{k+1}}{z_{k+1}} = F\,\frac{r_{xx}x_k + r_{xy}y_k + r_{xz}z_k + t_x}{r_{zx}x_k + r_{zy}y_k + r_{zz}z_k + t_z},$$

$$y'_{k+1} = F\,\frac{y_{k+1}}{z_{k+1}} = F\,\frac{r_{yx}x_k + r_{yy}y_k + r_{yz}z_k + t_y}{r_{zx}x_k + r_{zy}y_k + r_{zz}z_k + t_z}.$$
Under the normalized perspective projection (F = 1), the projected points satisfy the epipolar constraint, namely:

$$\begin{pmatrix} x'_{k+1} & y'_{k+1} & 1 \end{pmatrix} E \begin{pmatrix} x'_k \\ y'_k \\ 1 \end{pmatrix} = 0,$$

where the essential matrix E is:

$$E = [T_k]_\times R_k,$$

in which

$$[T_k]_\times = \begin{pmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{pmatrix}$$

is the antisymmetric matrix of the translation vector $T_k$.
The preferred technical scheme is as follows: in the method, with the translation vector denoted T and the essential matrix denoted E, the translation vector T is obtained by solving

$$\min_T \; T^T E E^T T \quad \text{subject to} \quad \|T\|_2 = 1,$$

i.e., T is the unit-norm eigenvector of $E E^T$ corresponding to its smallest eigenvalue.
The preferred technical scheme is as follows: in the method, the rotation matrix R is expressed by Euler angles and has the form:

$$R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix} = \begin{pmatrix} \cos\alpha\cos\beta & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\ \sin\alpha\cos\beta & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma \\ -\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma \end{pmatrix};$$

the attitude angles are then obtained from the rotation matrix R by:

$$\alpha = \tan^{-1}(r_{21}/r_{11}), \qquad \beta = \tan^{-1}\!\left(-r_{31}\Big/\sqrt{r_{32}^{\,2} + r_{33}^{\,2}}\right), \qquad \gamma = \tan^{-1}(r_{32}/r_{33}).$$
In addition, in the technical scheme of the invention, a driver attention discrimination model is preferably constructed, and the driver's attention state is judged by this model from the analysis of the collected face images. Distraction falls mainly into two types. In the first type, numerous or complicated sub-tasks require a longer total time to complete; its main characteristic is that the proportion of time the driver's head is turned away from the road increases. In the second type, the driver's head deviates from the road continuously for a period of time, during which the driver stops operating the vehicle, so lane departure readily occurs and traffic accidents easily happen.
The driver attention discrimination model is constructed as follows: if the duration of a single gaze deviation exceeds 2 s, a distraction alarm is raised; otherwise, monitoring proceeds within a fixed time window, and it is judged whether the proportion of time spent looking at the road directly ahead is less than 70%; if so, a distraction alarm is raised; otherwise attention is in the normal state. The model is built by collecting the driver's relative attitude angles under these conditions, and whether the driver is distracted is judged in combination with the time element.
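The following is a minimal Python sketch of this two-rule discrimination logic. The 2 s deviation limit and the 70% ahead-looking proportion come from the text above; the frame rate, the window length, and the `looking_ahead` input (derived upstream from the relative attitude angle) are illustrative assumptions, not part of the patent.

```python
from collections import deque

FPS = 25                    # assumed camera frame rate
WINDOW_SEC = 60             # assumed length of the fixed monitoring window
DEVIATION_LIMIT_SEC = 2.0   # rule 1: single gaze deviation longer than 2 s
AHEAD_RATIO_LIMIT = 0.70    # rule 2: ahead-looking time proportion below 70%

class AttentionModel:
    """Two-rule driver attention discrimination model."""

    def __init__(self):
        self.history = deque(maxlen=FPS * WINDOW_SEC)  # per-frame gaze flags
        self.deviation_frames = 0                      # current continuous deviation

    def update(self, looking_ahead: bool) -> bool:
        """Feed one frame's gaze state; return True if a distraction alarm fires."""
        self.history.append(looking_ahead)
        self.deviation_frames = 0 if looking_ahead else self.deviation_frames + 1

        # Rule 1: a single gaze deviation lasting more than 2 s.
        if self.deviation_frames > DEVIATION_LIMIT_SEC * FPS:
            return True
        # Rule 2: within the full window, ahead-looking proportion below 70%.
        if len(self.history) == self.history.maxlen:
            if sum(self.history) / len(self.history) < AHEAD_RATIO_LIMIT:
                return True
        return False
```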
In the prior art, most driver attention recognition devices target only a few specific distracting behaviors, and dangerous situations easily arise when inattentive behaviors occur that such devices cannot recognize. The invention judges the driver's attention state from changes in the driver's head posture and decides whether to issue a driver fatigue warning based on that state. In a driver fatigue early-warning system, judging the fatigue state of the person from the driver's attention state helps to build a real-time and accurate fatigue early-warning system.
The automatic driver attention recognition system comprises a camera, a Digital Signal Processor (DSP) system based on DM6437 and an alarm device; the output end of the camera is connected with the input end of the DSP processing system, and the output end of the DSP processing system is connected with the input end of the alarm device.
The method for automatically identifying the attention of the driver comprises the following steps:
1. Acquiring a video image: the camera can be installed above the vehicle instrument panel, using as reference a frontal face image taken while the driver faces the front of the vehicle; after the vehicle is started, the camera collects video images and acquires the driver's facial state information in real time. To let the system also work normally when driving at night, an infrared LED lamp is mounted on the camera; it switches on automatically according to the ambient light to compensate for insufficient illumination, and the invisible light it emits does not affect normal driving.
2. The DSP processing system analyzes the image: the video signal is converted and then transmitted to the DSP processing system for analysis.
The main task of the DSP processing system is to discriminate the driver's attention state. In the invention, the discrimination of the driver's distraction state is realized by analyzing the driver's head orientation over time, in the following two steps:
(1) calculating relative attitude angle of driver based on projective geometry principle
In the invention, the calculation of the relative attitude angle of the driver is completed by utilizing the projective transformation principle. The algorithm flow for resolving the relative attitude angle of the driver based on the projective geometry principle mainly comprises the following steps:
1) acquiring a face area through a driver image, and detecting angular points in the face area;
2) tracking the corner points in the face area by adopting an L-K optical flow tracking algorithm, and selecting points participating in attitude angle calculation;
3) solving the essential matrix and the translation vector from the epipolar equation, using the feature point set selected on the basis of face tracking;
4) obtaining the correct solution of the rotation matrix from the essential matrix, and obtaining from the rotation matrix the Euler angles of the driver's head motion about the 3 coordinate axes, i.e., the driver's relative attitude angle.
(2) Establishing a driver attention discrimination model, and judging the attention state of the driver according to the analysis result
Distraction falls mainly into two types. In the first type, numerous or complicated sub-tasks require a longer total time to complete; its main characteristic is that the proportion of time the driver's head is turned away from the road increases. In the second type, the driver's head deviates from the road continuously for a period of time, during which the driver stops operating the vehicle, so lane departure readily occurs and traffic accidents easily happen.
The attention discrimination model established by the invention is: if the duration of a single gaze deviation exceeds 2 s, a distraction alarm is raised; otherwise, monitoring proceeds within a fixed time window, and it is judged whether the proportion of time spent looking at the road directly ahead is less than 70%; if so, a distraction alarm is raised; otherwise attention is in the normal state.
3. Judging whether to alarm, and the specific alarm mode: when the driver is judged to be in an inattentive state, an alarm is given. The invention can signal the distraction state in various ways. Considering the acceptability and warning effect of the alarm mode, the following alarm modes can be adopted when distraction occurs: flashing LED lights, voice alarm, seat-belt vibration and seat vibration.
Compared with the scheme in the prior art, the invention has the advantages that:
1. Different distraction states of a driver show different behavioral characteristics, and existing devices recognize the driver's attention state only with respect to certain specific behaviors, which is a limitation; the automatic driver attention recognition system of the invention does not target specific distraction behaviors, but judges the driver's attention state from changes in the driver's head posture.
2. In the invention, the head attitude estimation of the driver is carried out by adopting an algorithm for resolving the relative attitude angle of the driver based on the projective geometry principle.
3. The face detection of the invention proceeds along two paths with different priorities. When the face is in a frontal position, a machine-learning algorithm composed of Adaboost and ASM acts as the first priority and completes face detection on its own; if the machine-learning algorithm fails in the next frame image because of posture or illumination changes, a discrimination model serving as the second priority is started to complete face detection.
4. The invention establishes a driver attention discrimination model and judges the driver attention state according to the analysis result of the estimation of the head posture of the driver.
5. The technology of the invention lays a foundation for further research work in the future. The automatic driver attention recognition is one of key technologies for driver fatigue detection, and in a driver fatigue early warning system, the fatigue state of a person is judged according to the attention state of the driver, so that the real-time and accurate fatigue early warning system is favorably constructed.
Drawings
The invention is further described with reference to the following figures and examples:
FIG. 1 is a schematic structural diagram of an automatic driver attention recognition system according to the present invention;
FIG. 2 is a flow chart of a face detection method in the automatic driver attention recognition method of the present invention;
FIG. 3 is a flowchart of a method for resolving a relative attitude angle of a driver based on a projective geometry principle in the automatic driver attention recognition method of the present invention;
FIG. 4 is a flowchart illustrating a method for determining whether attention is distracted in the automatic driver attention recognition method of the present invention.
Detailed Description
The above-described scheme is further illustrated below with reference to specific examples. It should be understood that these examples are for illustrative purposes and are not intended to limit the scope of the present invention. The conditions used in the examples may be further adjusted according to the conditions of the particular manufacturer, and the conditions not specified are generally the conditions in routine experiments.
Examples
Fig. 1 is a schematic structural view of an automatic driver attention recognition system according to the present invention. The system comprises an image acquisition device, an image processing device and an alarm device, wherein the image acquisition device is used for acquiring a face image of a driver in real time and transmitting the acquired image of the driver to the image processing device; the image processing device is used for acquiring a relative attitude angle of a driver through a driver image and analyzing and judging the attention state of the driver according to a driver attention discrimination model; the alarm device is used for giving an alarm when the image processing device judges that the driver is in the state of inattention.
In terms of system hardware, the image acquisition device is a camera, and the image processing device is a digital signal processor (DSP) processing system based on the DM6437; the alarm device uses flashing LED lamps, a voice alarm, seat-belt vibration, seat vibration and the like. The output end of the camera is connected with the input end of the DSP processing system, and the output end of the DSP processing system is connected with the input end of the alarm device.
When the attention of the driver is automatically recognized, the method comprises the following steps:
(1) acquiring a video image:
the camera can be installed above a vehicle instrument panel to take a front face image of a driver facing the front of a vehicle as a reference, and after the vehicle is started, the camera is adopted to collect a video image and acquire the face state information of the driver in real time. Wherein, in order to enable this system also can normally work when driving night, still installed infrared LED lamp on the camera, it can open infrared lamp according to the light environment of surrounding is automatic to remedy the not enough of light, and the invisible light of its institute's transmission does not influence the driver and normally drives in addition.
(2) Analyzing processed images
The images are analyzed and processed by the DSP processing system. The video signals collected by the camera are converted and then transmitted to the DSP processing system for analysis.
The main task of the DSP processing system is to discriminate the driver's attention state. The discrimination of the driver's distraction state is realized by analyzing the driver's head orientation over time, and can be divided into the following two specific steps:
1) calculating relative attitude angle of driver based on projective geometry principle
In the invention, the calculation of the relative attitude angle of the driver is completed by utilizing the projective transformation principle. The algorithm flow for resolving the relative attitude angle of the driver based on the projective geometry principle mainly comprises the following steps:
Step 1: detect the face region;
Step 2: detect corner points;
Step 3: track the corner points in the face region (L-K optical flow tracking algorithm);
Step 4: calculate the essential matrix;
Step 5: estimate the translation vector T;
Step 6: calculate the rotation matrix R;
Step 7: calculate the rotation Euler angles.
The first step is implemented as follows:
The face detection of the invention can proceed along two paths with different priorities. When the face is in a frontal position, a machine-learning algorithm composed of Adaboost and ASM acts as the first priority and completes face detection on its own; it separates the foreground from the background, performs adaptive skin-color modeling on the foreground and pure background modeling on the background, and thereby constructs a face discrimination model based on skin-color information and motion information. If the machine-learning algorithm fails in the next frame image because of posture or illumination changes, this discrimination model, serving as the second priority, is started to complete face detection. It should be noted that the discrimination model is designed around motion information combined with skin-color information, rather than skin-color information alone, because vehicle interiors are often finished in warm tones close to skin color, and a model based on skin color alone is easily disturbed by the environment in the cab. Skin-color information and motion information complement each other: together they overcome interference from scenery outside the window as well as from the cab's interior trim, improving face detection accuracy. Fig. 2 is a block diagram of the face detection algorithm of the invention. A code sketch of this two-priority idea is given below.
The second step is implemented as follows:
The method of solving the driver's relative attitude angle based on projective geometry is realized on the basis of tracking the corner points in the face region.
When two frames of a video are used to solve the relative change in the driver's attitude angle, the coordinates of a set of feature points on the spatial target must be known in both the earlier and the later frame image. If eight feature points (the inner and outer eye corner points, the nostril points and the mouth corner points) are known on the two frames, the face pose angle can be calculated according to projective geometry theory. However, it is difficult to detect these feature points accurately over a wide angle range; moreover, the inner and outer eye corners, the nostrils and the mouth corners each occupy a certain pixel range in the image, and the feature points detected when the driver is in the frontal posture are not guaranteed to be strong corner points. Tracking these isolated feature points therefore often fails, and the failure of any single feature point makes the attitude angle solution impossible.
To overcome this difficulty, in selecting the feature point set the invention does not track specific feature points (such as the eye corner points and mouth corner points mentioned above); instead it finds all points in the whole scene whose corner strength exceeds a certain threshold, and then, according to the detected face region, retains only the corner points inside the face region as candidate points.
The third step is implemented as follows:
The face region is divided into nine sub-regions, numbered 1-9. Within each sub-region, the optical flow vectors are decomposed along the x and y axes of the image coordinate system. Let the maximum and minimum flow values in the x direction be $u_{\max}$ and $u_{\min}$, and those in the y direction $v_{\max}$ and $v_{\min}$. The optical flow components in the x and y directions are discretized with step sizes $\Delta u = (u_{\max} - u_{\min})/5$ and $\Delta v = (v_{\max} - v_{\min})/5$, and histograms of the x components and of the y components are built. Suppose the corner points are most densely distributed in the interval $[u_0, u_0 + \Delta u]$ of the x-component histogram, and in the interval $[v_0, v_0 + \Delta v]$ of the y-component histogram. All corner points in the sub-region are traversed, and a corner point is selected as a point participating in the attitude angle calculation if its optical flow component lies within $[u_0, u_0 + \Delta u]$ in the x direction and within $[v_0, v_0 + \Delta v]$ in the y direction, as in the sketch below.
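A possible NumPy rendering of this per-sub-region filtering step; the five-bin discretization follows the text, while the array-based interface (1-D arrays of the corner flow components of one sub-region) is an assumption.

```python
import numpy as np

def filter_corners_by_flow(u, v):
    """Keep the corners of one sub-region whose optical-flow components
    fall in the dominant bin of both the x- and y-component histograms.

    u, v: 1-D arrays of per-corner flow components along image x and y.
    Returns a boolean mask over the corners."""
    def dominant_bin_mask(comp):
        step = (comp.max() - comp.min()) / 5.0  # delta-u / delta-v of the text
        if step == 0:
            return np.ones_like(comp, dtype=bool)
        bins = np.floor((comp - comp.min()) / step).clip(0, 4).astype(int)
        dominant = np.bincount(bins, minlength=5).argmax()
        return bins == dominant

    return dominant_bin_mask(u) & dominant_bin_mask(v)

# Usage: keep = filter_corners_by_flow(flow_x, flow_y); selected = corners[keep]
```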
The fourth step is implemented as follows:
The essence of solving the driver's attitude angle by projective transformation is to estimate the driver's three-dimensional motion parameters from a two-dimensional image sequence. The head pose problem can be described as recovering the motion of points in three-dimensional space from the motion of their projections on the two-dimensional image plane, converting the low-level, pixel-wise description of the image into the high-level concept of an angle through a series of processing steps. More specifically, assume that a point M on the driver's head moves, relative to the camera coordinate system, from its position $(x_k, y_k, z_k)$ at time $t_k$ to the position $(x_{k+1}, y_{k+1}, z_{k+1})$ at time $t_{k+1}$, and that its projection on the two-dimensional image plane moves from $(x'_k, y'_k)$ to $(x'_{k+1}, y'_{k+1})$. Let the rotation matrix and the translation vector be $R_k$ and $T_k$ respectively; the three-dimensional rigid motion model is then:
$$\begin{pmatrix} x_{k+1} \\ y_{k+1} \\ z_{k+1} \end{pmatrix} = \begin{pmatrix} r_{xx} & r_{xy} & r_{xz} \\ r_{yx} & r_{yy} & r_{yz} \\ r_{zx} & r_{zy} & r_{zz} \end{pmatrix} \begin{pmatrix} x_k \\ y_k \\ z_k \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix} = R_k \begin{pmatrix} x_k \\ y_k \\ z_k \end{pmatrix} + T_k.$$
let space point (x)k,yk,zk) The projection on the image plane is (x'k,y′k) According to the perspective projection model of the camera, the following can be obtained:
<math> <mrow> <msubsup> <mi>x</mi> <mi>k</mi> <mo>&prime;</mo> </msubsup> <mo>=</mo> <mi>F</mi> <mfrac> <msub> <mi>x</mi> <mi>k</mi> </msub> <msub> <mi>z</mi> <mi>k</mi> </msub> </mfrac> <mo>,</mo> <msubsup> <mi>y</mi> <mi>k</mi> <mo>&prime;</mo> </msubsup> <mo>=</mo> <mi>F</mi> <mfrac> <msub> <mi>y</mi> <mi>k</mi> </msub> <msub> <mi>z</mi> <mi>k</mi> </msub> </mfrac> <mo>;</mo> </mrow> </math>
then:

$$x'_{k+1} = F\,\frac{x_{k+1}}{z_{k+1}} = F\,\frac{r_{xx}x_k + r_{xy}y_k + r_{xz}z_k + t_x}{r_{zx}x_k + r_{zy}y_k + r_{zz}z_k + t_z},$$

$$y'_{k+1} = F\,\frac{y_{k+1}}{z_{k+1}} = F\,\frac{r_{yx}x_k + r_{yy}y_k + r_{yz}z_k + t_y}{r_{zx}x_k + r_{zy}y_k + r_{zz}z_k + t_z}.$$
Under the normalized perspective projection (F = 1), the projected points satisfy the epipolar constraint, namely:

$$\begin{pmatrix} x'_{k+1} & y'_{k+1} & 1 \end{pmatrix} E \begin{pmatrix} x'_k \\ y'_k \\ 1 \end{pmatrix} = 0.$$
This equation can be written compactly as:

$$\tilde{m}_{k+1}^{\,T}\, E\, \tilde{m}_k = 0,$$

where $\tilde{m}_k = (x'_k, y'_k, 1)^T$ and $\tilde{m}_{k+1} = (x'_{k+1}, y'_{k+1}, 1)^T$; E is called the essential matrix, i.e.:

$$E = T_k \times R_k.$$
For convenience of computation, the antisymmetric matrix $[T_k]_\times$ of the translation vector $T_k$ can be introduced:

$$[T_k]_\times = \begin{pmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{pmatrix}.$$
The essential matrix can then be expressed as:

$$E = [T_k]_\times R_k.$$

At this point the following conclusion can be drawn: the essential matrix E can be solved from the epipolar equation using the feature point set selected on the basis of face tracking. The driver pose estimation problem then becomes the problem of solving the rotation matrix $R_k$ and the translation vector $T_k$ from the essential matrix E according to the formula $E = [T_k]_\times R_k$. A sketch of this estimation step follows.
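In practice, estimating E from the tracked corner correspondences can be realized with standard tools. The sketch below uses OpenCV's `findEssentialMat` and `recoverPose` on normalized image points as one common realization; it is not the patent's own solver, and the identity camera matrix reflects the normalized projection F = 1 assumed above.

```python
import cv2
import numpy as np

def estimate_motion(pts_k, pts_k1):
    """Estimate (R, T) between frames k and k+1 from corner correspondences.

    pts_k, pts_k1: Nx2 float32 arrays of normalized image coordinates."""
    K = np.eye(3)  # normalized perspective projection (F = 1)
    E, inliers = cv2.findEssentialMat(pts_k, pts_k1, K,
                                      method=cv2.RANSAC, threshold=1e-3)
    # recoverPose applies the in-front-of-the-camera (cheirality) test,
    # playing the same role as the correct-solution check described later.
    _, R, T, _ = cv2.recoverPose(E, pts_k, pts_k1, K, mask=inliers)
    return R, T
```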
The fifth step comprises the following specific implementation steps:
The estimation of motion parameters from the essential matrix is currently solved, in general, by optimization algorithms. The translation vector T can be obtained by solving:

$$\min_T \; T^T E E^T T \quad \text{subject to} \quad \|T\|_2 = 1;$$

T is in fact the unit-norm eigenvector of the matrix $E E^T$ corresponding to its minimum eigenvalue (since $T^T E = 0$ for $E = [T_k]_\times R_k$). A NumPy sketch of this computation follows.
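A direct NumPy rendering of this eigenvalue computation; note that the sign of the recovered unit vector remains ambiguous and is fixed in practice by the in-front-of-the-camera check.

```python
import numpy as np

def translation_from_essential(E):
    """Unit translation direction as the eigenvector of E @ E.T
    belonging to the smallest eigenvalue (T^T E = 0 for E = [T]x R)."""
    _, V = np.linalg.eigh(E @ E.T)   # eigh returns ascending eigenvalues
    T = V[:, 0]                      # eigenvector of the smallest eigenvalue
    return T / np.linalg.norm(T)     # enforce ||T||_2 = 1
```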
The sixth step comprises the following specific implementation steps:
Because computing the rotation matrix R by an optimization algorithm is complex and the solution is often not unique, the invention draws on useful conclusions obtained by many researchers in studying the properties of the essential matrix, gives a method for obtaining an analytic solution of the rotation matrix from the essential matrix, and provides a rigorous mathematical proof.
Theorem 1: Up to a constant factor, the essential matrix has the singular value decomposition

$$E = U\,\mathrm{diag}(1, 1, 0)\,V^T.$$

Under this decomposition, the two possible solutions of the rotation matrix R satisfying the essential matrix constraint are:

$$R = U \begin{pmatrix} 0 & -k & 0 \\ k & 0 & 0 \\ 0 & 0 & d \end{pmatrix} V^T,$$

where $k = \pm 1$ and $d = \det(U)\det(V)$.
From this theorem it can be seen that, once a singular value decomposition of the essential matrix is performed, the rotation matrix R can be calculated by the above formula. Two problems remain, however. First, the singular value decomposition is not unique, so it must be asked whether the rotation matrix R calculated from the formula is nevertheless unique. Second, the formula yields two solutions for the rotation matrix, and it must be determined which of them is the true solution.
Studies in the literature have further yielded two useful properties of the essential matrix:
Theorem 2: Let the essential matrix E, determined up to a constant factor, have the singular value decompositions $E = U_n\,\mathrm{diag}(1, 1, 0)\,V_n^T$, where $(U_n, V_n)$, $n = 1, 2, 3, \ldots$, is any pair of orthogonal matrices satisfying the singular value decomposition of E. Then the two possible solutions of the rotation matrix R obtained from any such decomposition are correspondingly equal.
Theorem 3: Suppose the camera undergoes rigid motion and the corresponding points in the two image planes are unitized as $U_0$ and $U_2$. The two solutions R and R′ of the rotation matrix obtained from the singular value decomposition of the essential matrix satisfy

$$(R^T U_2 \times R'^T U_2) \cdot U_0 = 0,$$

and $R^T U_2$ and $R'^T U_2$ are symmetric about the unit translation vector $T_0$.
From Theorem 2 it follows that any singular value decomposition of the essential matrix E yields the same pair of rotation matrix solutions. From Theorem 3 it follows that the erroneous rotation solution rotates $U_2$ to the back of the camera, which is not physically realizable: the scene observed before and after the rigid motion of the camera should always lie in front of the camera. A method for judging the correct solution of the rotation matrix follows directly: compare the vector angle between $U_0$ and $R^T U_2$ with that between $U_0$ and $R'^T U_2$, and take the rotation matrix corresponding to the smaller angle as the correct solution. Both the decomposition and this check are sketched below.
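A NumPy sketch of the Theorem 1 decomposition and the Theorem 3 disambiguation; the two candidates come from k = ±1, and U0, U2 are the unitized corresponding directions defined above.

```python
import numpy as np

def rotations_from_essential(E):
    """Two candidate rotations from the SVD of E (Theorem 1)."""
    U, _, Vt = np.linalg.svd(E)
    d = np.linalg.det(U) * np.linalg.det(Vt.T)
    candidates = []
    for k in (1.0, -1.0):
        W = np.array([[0.0,  -k, 0.0],
                      [  k, 0.0, 0.0],
                      [0.0, 0.0,   d]])
        candidates.append(U @ W @ Vt)
    return candidates

def pick_correct_rotation(R1, R2, U0, U2):
    """Theorem 3 check: keep the rotation whose R^T U2 makes the
    smaller angle with the pre-motion unit direction U0."""
    a1 = np.arccos(np.clip(np.dot(R1.T @ U2, U0), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.dot(R2.T @ U2, U0), -1.0, 1.0))
    return R1 if a1 < a2 else R2
```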
The seventh step comprises the following specific implementation steps:
After the rotation matrix R has been solved, the Euler angles of the driver's head rotation about the 3 coordinate axes must be calculated. The rotation matrix R expressed in Euler angles has the form:
$$R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix} = \begin{pmatrix} \cos\alpha\cos\beta & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\ \sin\alpha\cos\beta & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma \\ -\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma \end{pmatrix}.$$

From this, the formulas for solving the attitude angles from the rotation matrix can be derived:

$$\alpha = \tan^{-1}(r_{21}/r_{11}), \qquad \beta = \tan^{-1}\!\left(-r_{31}\Big/\sqrt{r_{32}^{\,2} + r_{33}^{\,2}}\right), \qquad \gamma = \tan^{-1}(r_{32}/r_{33}).$$

These formulas translate directly into code, as in the sketch that follows.
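A minimal NumPy version of these three formulas; `arctan2` is used in place of a plain inverse tangent so that the quadrant is resolved correctly.

```python
import numpy as np

def euler_from_rotation(R):
    """Euler angles (alpha, beta, gamma) of the head rotation from R,
    following the formulas above; R is indexed so that R[i-1, j-1] = r_ij."""
    alpha = np.arctan2(R[1, 0], R[0, 0])                      # r21 / r11
    beta = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))   # -r31 / sqrt(r32^2 + r33^2)
    gamma = np.arctan2(R[2, 1], R[2, 2])                      # r32 / r33
    return alpha, beta, gamma
```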
2) establishing a driver attention discrimination model, and judging the attention state of the driver according to the analysis result
Distraction falls mainly into two types. In the first type, numerous or complicated sub-tasks require a longer total time to complete; its main characteristic is that the proportion of time the driver's head is turned away from the road increases. In the second type, the driver's head deviates from the road continuously for a period of time, during which the driver stops operating the vehicle, so lane departure readily occurs and traffic accidents easily happen.
For the first type, studies have shown that when attention is focused, the driver looks toward the road ahead 80%-90% of the time, whereas when attention is highly dispersed, the forward-looking proportion falls below 50%; for safe driving, the majority of glances (>70%) should be distributed on the road ahead. The first judgment rule adopted by the invention is therefore: within a fixed time window, attention is judged to be distracted when the proportion of time the gaze stays on the road ahead is less than 70%.
For the second type, Green reports that in normal driving the duration of a single gaze deviation is at most 1.2-1.85 s, and most deviations are shorter than 1.2 s; Kurokawa and Wierwille state that the duration of gaze away from the road should usually be 1-1.5 s; Rockwell proposed the 2 s rule, whereby the duration of gaze away from the road should be less than 2 s. The second judgment rule adopted is therefore: a single gaze deviation lasting longer than 2 s is judged to be distraction.
The attention discrimination model established by the invention is: if the duration of a single gaze deviation exceeds 2 s, a distraction alarm is raised; otherwise it is monitored whether, within a fixed time window, the proportion of time spent looking at the road directly ahead is less than 70%; if so, a distraction alarm is raised; otherwise attention is in the normal state.
(3) And alarming according to the judgment condition.
When the driver is judged to be in an inattentive state, an alarm is given. The invention can signal the distraction state in various ways. Considering the acceptability and warning effect of the alarm mode, the following alarm modes can be adopted when distraction occurs: flashing LED lights, voice alarm, seat-belt vibration and seat vibration.
The method judges the fatigue state of the driver according to the attention state of the driver, and is beneficial to constructing a real-time and accurate fatigue early warning system.
The above examples are only for illustrating the technical idea and features of the present invention, and the purpose thereof is to enable those skilled in the art to understand the content of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.

Claims (10)

1. A driver attention automatic identification system comprises an image acquisition device, an image processing device and an alarm device, and is characterized in that the image acquisition device is used for acquiring a face image of a driver in real time and transmitting the acquired image of the driver to the image processing device; the image processing device is used for acquiring a relative attitude angle of a driver through a driver image and analyzing and judging the attention state of the driver according to a driver attention discrimination model; the alarm device is used for giving an alarm when the image processing device judges that the driver is in the state of inattention.
2. The system according to claim 1, wherein the warning device is selected from one of, or any combination of two or more of, the following warning prompters: an LED lamp, an audible alarm, a safety-belt vibrator and a seat vibrator.
3. The system according to claim 1, wherein the image capturing device is selected from a camera, and an output terminal of the camera is connected to an input terminal of the image processing device.
4. The system according to claim 1, wherein the image processing device is a DSP processing system, and an output of the DSP processing system is connected to an input of the warning device.
5. The system according to claim 1, wherein the image capturing device is installed above the dashboard of the vehicle, with a frontal face image captured while the driver faces the front of the vehicle serving as the installation reference.
6. A method for automatically recognizing attention of a driver, characterized by comprising the steps of:
(1) collecting a face image of a driver;
(2) acquiring a relative attitude angle of a driver through a driver image, and analyzing and judging the attention state of the driver according to a driver attention discrimination model;
(3) when the image processing device judges that the driver is in the state of inattention, the alarm prompt is carried out.
7. The method for automatically identifying the attention of a driver according to claim 6, wherein in step (2) the relative attitude angle of the driver is obtained using the projective transformation principle, specifically comprising the following steps:
1) acquiring a face area through a driver image, and detecting angular points in the face area;
2) tracking the corner points in the face area by adopting an L-K optical flow tracking algorithm, and selecting points participating in attitude angle calculation;
3) solving the essential matrix and the translation vector from the epipolar equation, using the feature point set selected on the basis of face tracking;
4) obtaining the correct solution of the rotation matrix from the essential matrix, and obtaining from the rotation matrix the Euler angles of the driver's head motion about the 3 coordinate axes, i.e., the driver's relative attitude angle.
8. The method for automatically recognizing the attention of a driver according to claim 6, wherein it is assumed in the method that a point M on the driver's head moves, relative to the camera coordinate system, from its position $(x_k, y_k, z_k)$ at time $t_k$ to the position $(x_{k+1}, y_{k+1}, z_{k+1})$ at time $t_{k+1}$, and that its projection on the two-dimensional image plane moves from $(x'_k, y'_k)$ to $(x'_{k+1}, y'_{k+1})$; letting the rotation matrix and the translation vector be $R_k$ and $T_k$ respectively, the three-dimensional rigid motion model is obtained:

$$\begin{pmatrix} x_{k+1} \\ y_{k+1} \\ z_{k+1} \end{pmatrix} = \begin{pmatrix} r_{xx} & r_{xy} & r_{xz} \\ r_{yx} & r_{yy} & r_{yz} \\ r_{zx} & r_{zy} & r_{zz} \end{pmatrix} \begin{pmatrix} x_k \\ y_k \\ z_k \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix} = R_k \begin{pmatrix} x_k \\ y_k \\ z_k \end{pmatrix} + T_k;$$
According to the perspective projection model of the camera:

$$x'_k = F\,\frac{x_k}{z_k}, \qquad y'_k = F\,\frac{y_k}{z_k};$$

then:

$$x'_{k+1} = F\,\frac{x_{k+1}}{z_{k+1}} = F\,\frac{r_{xx}x_k + r_{xy}y_k + r_{xz}z_k + t_x}{r_{zx}x_k + r_{zy}y_k + r_{zz}z_k + t_z},$$

$$y'_{k+1} = F\,\frac{y_{k+1}}{z_{k+1}} = F\,\frac{r_{yx}x_k + r_{yy}y_k + r_{yz}z_k + t_y}{r_{zx}x_k + r_{zy}y_k + r_{zz}z_k + t_z}.$$
Under the normalized perspective projection (F = 1), the projected points satisfy the epipolar constraint, namely:

$$\begin{pmatrix} x'_{k+1} & y'_{k+1} & 1 \end{pmatrix} E \begin{pmatrix} x'_k \\ y'_k \\ 1 \end{pmatrix} = 0,$$

where the essential matrix E is:

$$E = [T_k]_\times R_k,$$

in which

$$[T_k]_\times = \begin{pmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{pmatrix}$$

is the antisymmetric matrix of the translation vector $T_k$.
9. The automatic driver attention recognition method according to claim 6, wherein, with the translation vector denoted $T$ and the essential matrix denoted $E$, the method solves the following system of equations:
[claim equation rendered only as image FDA0000447581360000031; the system is not recoverable from this extraction]
wherein the constraint condition is $\|T\|^{2} = 1$; the translation vector $T$ is thereby obtained.
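The imaged equation system itself cannot be reproduced here, but since $E = [T]_\times R$ implies $E^{T}T = 0$, standard epipolar geometry recovers the unit-norm $T$ as the left null vector of $E$. A minimal sketch under that assumption (NumPy usage illustrative, not the patent's prescribed solver):

    import numpy as np

    def skew(t):
        tx, ty, tz = t
        return np.array([[0.0, -tz,  ty],
                         [ tz, 0.0, -tx],
                         [-ty,  tx, 0.0]])

    # build an essential matrix from a known unit translation and a small z-rotation
    T_true = np.array([0.6, 0.0, 0.8])     # ||T||^2 = 1
    c, s = np.cos(0.2), np.sin(0.2)
    R = np.array([[c,  -s,  0.0],
                  [s,   c,  0.0],
                  [0.0, 0.0, 1.0]])
    E = skew(T_true) @ R

    # T solves E^T T = 0 with ||T||^2 = 1: it is the left singular vector of E
    # belonging to the (numerically) zero singular value, determined up to sign
    U, S, Vt = np.linalg.svd(E)
    T_est = U[:, 2]
    print(T_est, T_true)                   # equal up to sign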
10. The automatic driver attention recognition method according to claim 6, wherein the rotation matrix R in the method is expressed by Euler angles in the form:
$$R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix} = \begin{pmatrix} \cos\alpha\cos\beta & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\ \sin\alpha\cos\beta & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma \\ -\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma \end{pmatrix};$$
the calculation formulas of the attitude angles are then obtained from the rotation matrix $R$:
$$\alpha = \tan^{-1}(r_{21}/r_{11})$$
$$\beta = \tan^{-1}\!\left(-r_{31}\Big/\sqrt{r_{32}^{2} + r_{33}^{2}}\right)$$
$$\gamma = \tan^{-1}(r_{32}/r_{33}).$$
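A minimal sketch of this angle extraction, assuming the Z-Y-X Euler convention of claim 10; atan2 replaces the bare $\tan^{-1}$ of the formulas so the quadrant is resolved correctly (the function name is hypothetical):

    import numpy as np

    def attitude_angles(R):
        # Euler angles from a Z-Y-X rotation matrix, per the formulas above;
        # np.arctan2 keeps the correct quadrant where a plain arctan would not.
        alpha = np.arctan2(R[1, 0], R[0, 0])                      # tan^-1(r21/r11)
        beta  = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))  # tan^-1(-r31/sqrt(r32^2+r33^2))
        gamma = np.arctan2(R[2, 1], R[2, 2])                      # tan^-1(r32/r33)
        return alpha, beta, gamma

    # round-trip check with a pure yaw rotation of 0.3 rad about the z axis
    c, s = np.cos(0.3), np.sin(0.3)
    Rz = np.array([[c,  -s,  0.0],
                   [s,   c,  0.0],
                   [0.0, 0.0, 1.0]])
    print(attitude_angles(Rz))   # ~(0.3, 0.0, 0.0)

The check recovers the yaw angle of the synthetic rotation; β and γ come out zero because the test rotation is about the z axis only.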
CN201310731617.6A 2013-12-26 2013-12-26 Automatic driver attention identification system and identification method thereof Active CN103839046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310731617.6A CN103839046B (en) 2013-12-26 2013-12-26 Automatic driver attention identification system and identification method thereof

Publications (2)

Publication Number Publication Date
CN103839046A true CN103839046A (en) 2014-06-04
CN103839046B CN103839046B (en) 2017-02-01

Family

ID=50802525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310731617.6A Active CN103839046B (en) 2013-12-26 2013-12-26 Automatic driver attention identification system and identification method thereof

Country Status (1)

Country Link
CN (1) CN103839046B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1495658A (en) * 2002-06-30 2004-05-12 贺贵明 Driver's face image identification and alarm device and method
CN101593425A (en) * 2009-05-06 2009-12-02 深圳市汉华安道科技有限责任公司 A kind of fatigue driving monitoring method and system based on machine vision
CN102208125A (en) * 2010-03-30 2011-10-05 深圳市赛格导航科技股份有限公司 Fatigue driving monitoring system and method thereof
CN101950355A (en) * 2010-09-08 2011-01-19 中国人民解放军国防科学技术大学 Method for detecting fatigue state of driver based on digital video
CN102324166A (en) * 2011-09-19 2012-01-18 深圳市汉华安道科技有限责任公司 Fatigue driving detection method and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FAN XIAO ET AL.: "Yawning Detection for Monitoring Driver Fatigue", International Conference on Machine Learning and Cybernetics *
CHENG Bo et al.: "Research on Driver Fatigue State Monitoring and Warning Method Based on Multi-Source Information Fusion", Journal of Highway and Transportation Research and Development *
CHENG Bo et al.: "Research on Machine-Vision-Based Driver Attention State Monitoring Technology", Automotive Engineering *
GUO Keyou: "Vision-Based Driver Fatigue and Attention Monitoring Method", Journal of Highway and Transportation Research and Development *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106659441A (en) * 2014-06-17 2017-05-10 皇家飞利浦有限公司 Evaluating clinician attention
CN106659441B (en) * 2014-06-17 2019-11-26 皇家飞利浦有限公司 Evaluate doctor's attention
CN108205651A (en) * 2016-12-20 2018-06-26 中国移动通信有限公司研究院 A kind of recognition methods of action of having a meal and device
CN108205651B (en) * 2016-12-20 2021-04-06 中国移动通信有限公司研究院 Eating action recognition method and device
CN106781284A (en) * 2016-12-31 2017-05-31 中国铁道科学研究院电子计算技术研究所 Trainman's driver fatigue monitor system based on Multi-information acquisition
CN106919916A (en) * 2017-02-23 2017-07-04 上海蔚来汽车有限公司 For the face front attitude parameter method of estimation and device of driver status detection
CN107323460A (en) * 2017-07-12 2017-11-07 金俊如 A kind of driving safety auxiliary device based on electromyographic signal
CN107818310A (en) * 2017-11-03 2018-03-20 电子科技大学 A kind of driver attention's detection method based on sight
CN107818310B (en) * 2017-11-03 2021-08-06 电子科技大学 Driver attention detection method based on sight
CN110020583A (en) * 2017-12-22 2019-07-16 丰田自动车株式会社 Sleepiness apparatus for predicting
CN108482124B (en) * 2018-02-24 2021-01-01 宁波科达仪表有限公司 Motor vehicle instrument with monitoring function and working method thereof
CN108482124A (en) * 2018-02-24 2018-09-04 宁波科达仪表有限公司 A kind of vehicle meter and its working method with monitoring function
CN108961678A (en) * 2018-04-26 2018-12-07 华慧视科技(天津)有限公司 One kind being based on Face datection Study in Driver Fatigue State Surveillance System and its detection method
CN113168525A (en) * 2018-12-03 2021-07-23 法雷奥舒适驾驶助手公司 Device and method for detecting distraction of driver of vehicle
CN109733280A (en) * 2018-12-05 2019-05-10 江苏大学 Safety device of vehicle and its control method based on driver's facial characteristics
CN109733280B (en) * 2018-12-05 2021-07-20 江苏大学 Vehicle safety device based on facial features of driver and control method thereof
CN109949357A (en) * 2019-02-27 2019-06-28 武汉大学 A kind of stereopsis is to relative attitude restoration methods
CN109949357B (en) * 2019-02-27 2022-07-05 武汉大学 Method for recovering relative posture of stereo image pair
CN110466530A (en) * 2019-08-15 2019-11-19 广州小鹏汽车科技有限公司 Based reminding method and system, vehicle in a kind of driving procedure
CN110909611A (en) * 2019-10-29 2020-03-24 深圳云天励飞技术有限公司 Method and device for detecting attention area, readable storage medium and terminal equipment
CN110909611B (en) * 2019-10-29 2021-03-05 深圳云天励飞技术有限公司 Method and device for detecting attention area, readable storage medium and terminal equipment
CN114022871A (en) * 2021-11-10 2022-02-08 中国民用航空飞行学院 Unmanned aerial vehicle driver fatigue detection method and system based on depth perception technology

Also Published As

Publication number Publication date
CN103839046B (en) 2017-02-01

Similar Documents

Publication Publication Date Title
CN103839046B (en) Automatic driver attention identification system and identification method thereof
US9824286B2 (en) Method and apparatus for early detection of dynamic attentive states for providing an inattentive warning
US10235768B2 (en) Image processing device, in-vehicle display system, display device, image processing method, and computer readable medium
US9662977B2 (en) Driver state monitoring system
US10655978B2 (en) Controlling an autonomous vehicle based on passenger behavior
JP7407198B2 (en) Driving monitoring methods, systems and electronic equipment
CN108229345A A kind of driver's detecting system
García et al. Driver monitoring based on low-cost 3-D sensors
CN103824420A (en) Fatigue driving identification system based on heart rate variability non-contact measuring
KR101914362B1 (en) Warning system and method based on analysis integrating internal and external situation in vehicle
JP2010191793A (en) Alarm display and alarm display method
KR20120055011A (en) Method for tracking distance of eyes of driver
CN111832373A (en) Automobile driving posture detection method based on multi-view vision
CN114764912A (en) Driving behavior recognition method, device and storage medium
JP2022033805A Method, device, apparatus, and storage medium for identifying passenger's status in unmanned vehicle
EP3440592A1 (en) Method and system of distinguishing between a glance event and an eye closure event
CN110222616A Pedestrian's anomaly detection method, image processing apparatus and storage device
KR20120067890A (en) Apparatus for video analysis and method thereof
CN109685083A The multi-dimension testing method of driver's driving misuse of mobile phone
CN113401058A (en) Real-time display method and system for automobile A column blind area based on three-dimensional coordinates of human eyes
US20230230294A1 (en) Picture processing device, picture processing method and non-transitory computer-readable recording medium
JP2023147206A (en) Object information acquisition method and system for implementation
US11807264B2 (en) Driving assistance apparatus, driving assistance method, and medium
DE102017211555A1 (en) Method for monitoring at least one occupant of a motor vehicle, wherein the method is used in particular for monitoring and detecting possible dangerous situations for at least one occupant
JP7412514B1 (en) Cabin monitoring method and cabin monitoring system that implements the above cabin monitoring method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
PP01 Preservation of patent right

Effective date of registration: 20240705

Granted publication date: 20170201
