CN111160126B - Driving state determining method, driving state determining device, vehicle and storage medium - Google Patents
- Publication number
- CN111160126B (application number CN201911267783.9A)
- Authority
- CN
- China
- Prior art keywords
- driver
- vehicle
- frequency
- determining
- driving state
- Prior art date: 2019-12-11
- Legal status
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
Abstract
The application is applicable to the technical field of computers and provides a driving state determining method, comprising: acquiring target image frames and vehicle running state information within a preset duration, wherein the target image frames contain facial feature information of a driver; and determining a driving state of the driver based on the facial feature information and the vehicle running state information. Compared with the prior art, determining the driving state of the driver based on both the facial feature information and the vehicle running state information reduces the probability of misjudgment and improves the accuracy of driving state determination.
Description
Technical Field
The application belongs to the technical field of computers, and particularly relates to a driving state determining method, a driving state determining device, a vehicle and a storage medium.
Background
A conventional driver state monitoring system (Driver Status Monitor, DSM) is mainly based on computer vision technology: it obtains face images of the driver through a camera and judges whether the driver exhibits an abnormal state or behavior according to the detection results of a computer vision algorithm.
Disclosure of Invention
In view of this, the embodiments of the present application provide a driving state determining method, apparatus, vehicle, and storage medium, so as to solve the prior-art problem of erroneous judgment when the driving state is determined based on computer vision technology alone.
A first aspect of an embodiment of the present application provides a driving state determining method, including:
acquiring target image frames and vehicle running state information within a preset duration, wherein the target image frames contain facial feature information of a driver;
a driving state of the driver is determined based on the facial feature information and the vehicle running state information.
In an alternative implementation, the determining the driving state of the driver based on the facial feature information and the vehicle driving state information includes:
determining visual information of the driver based on the facial feature information;
based on the visual information, determining a single duration of the driver's sight line deviating from the vehicle driving direction and/or a frequency of the driver's sight line deviating from the vehicle driving direction within the preset duration;
determining the left-right deflection frequency of the vehicle during running based on the vehicle running state information;
if the left-right deflection frequency of the vehicle during running is greater than a preset first frequency threshold, and the single duration for which the driver's sight line deviates from the vehicle running direction within the preset duration is greater than a preset first time threshold, determining that the driver is in a distracted driving state;
and if the left-right deflection frequency of the vehicle during running is greater than the preset first frequency threshold, and the frequency with which the driver's sight line deviates from the vehicle running direction within the preset duration is greater than a preset second frequency threshold, determining that the driver is in the distracted driving state.
In an optional implementation manner, after the determining, based on the visual information, of the single duration for which the driver's sight line deviates from the vehicle running direction and/or the frequency with which the sight line deviates from the vehicle running direction within the preset duration, the method further includes:
if the single duration for which the driver's sight line deviates from the vehicle running direction within the preset duration is greater than a preset first limit time threshold, determining that the driver is in a distracted driving state; the first limit time threshold is equal to n times the first time threshold, and n is a positive integer greater than 1.
In an alternative implementation, the visual information includes eye state information;
the determining visual information of the driver based on the facial feature information includes:
analyzing the facial feature information by utilizing a visual analysis model which is trained in advance to obtain the eye state information;
correspondingly, the determining, based on the visual information, a single duration of the driver's line of sight deviating from the vehicle driving direction and/or a frequency of the line of sight deviating from the vehicle driving direction within the preset duration includes:
and determining, based on the eye state information, the single duration for which the driver's sight line deviates from the vehicle running direction and/or the frequency with which the sight line deviates from the vehicle running direction within the preset duration.
In an alternative implementation, the determining the driving state of the driver based on the facial feature information and the vehicle driving state information includes:
determining visual information of the driver based on the facial feature information;
determining at least one of a single eye closure duration, an eye closure frequency, and a yawning frequency of the driver within the preset duration based on the visual information;
determining the left-right deflection frequency of the vehicle during running based on the vehicle running state information;
if the left-right deflection frequency of the vehicle during running is greater than a preset third frequency threshold, and the single eye closure duration of the driver within the preset duration is greater than a preset second time threshold, determining that the driver is in a fatigue driving state;
or if the left-right deflection frequency of the vehicle during running is greater than the third frequency threshold and the eye closure frequency of the driver within the preset duration is greater than the preset third frequency threshold, determining that the driver is in the fatigue driving state;
or if the left-right deflection frequency of the vehicle during running is greater than the third frequency threshold and the yawning frequency of the driver within the preset duration is greater than a preset fourth frequency threshold, determining that the driver is in the fatigue driving state.
In an alternative implementation, after the determining, based on the visual information, of at least one of the single eye closure duration, the eye closure frequency, and the yawning frequency of the driver within the preset duration, the method further includes:
if the single eye closure duration of the driver within the preset duration is greater than a preset second limit time threshold, determining that the driver is in the fatigue driving state, wherein the second limit time threshold is equal to m times the second time threshold, and m is a positive integer greater than 1;
or if the eye closure frequency of the driver within the preset duration is greater than a preset third limit frequency threshold, determining that the driver is in the fatigue driving state, wherein the third limit frequency threshold is equal to k times the third frequency threshold, and k is a positive integer greater than 1;
or if the yawning frequency of the driver within the preset duration is greater than a preset fourth limit frequency threshold, determining that the driver is in the fatigue driving state, wherein the fourth limit frequency threshold is equal to g times the fourth frequency threshold, and g is a positive integer greater than 1.
In an alternative implementation, the vehicle running state information includes a direction angle of the vehicle running;
the determining the left-right yaw rate of the vehicle during traveling based on the vehicle traveling state information includes:
and determining the left-right deflection frequency of the vehicle in the driving process based on the driving direction angle of the vehicle.
A second aspect of the embodiments of the present application provides a driving state determining apparatus, including:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring target image frames and vehicle running state information in preset duration, and the target image frames contain facial feature information of a driver;
a determination module for determining a driving state of the driver based on the facial feature information and the vehicle running state information.
In an alternative implementation, the determining module includes:
a first determination unit configured to determine visual information of the driver based on the facial feature information;
a second determining unit configured to determine, based on the visual information, a single duration of a line of sight of the driver deviating from a vehicle traveling direction and/or a frequency of a line of sight deviating from the vehicle traveling direction within the preset duration;
a third determining unit configured to determine a left-right yaw rate of the vehicle during traveling based on the vehicle traveling state information;
a fourth determining unit, configured to determine that the driver is in a distracted driving state if a left-right yaw rate of the vehicle during driving is greater than a preset first frequency threshold, and a single duration of a line of sight of the driver, within the preset duration, deviating from a driving direction of the vehicle is greater than a preset first time threshold;
or if the left-right deflection frequency of the vehicle during running is greater than the preset first frequency threshold, and the frequency with which the driver's sight line deviates from the vehicle running direction within the preset duration is greater than a preset second frequency threshold, determining that the driver is in the distracted driving state.
In an alternative implementation, the determining module includes:
a fifth determining unit, configured to determine that the driver is in a distracted driving state if a single duration of the driver's line of sight deviating from a vehicle driving direction within the preset duration is greater than a preset first limit time threshold; the first limit time threshold is equal to n times the first time threshold, and n is a positive integer greater than 1.
In an alternative implementation, the visual information includes eye state information;
the first determination unit includes:
analyzing the facial feature information by utilizing a visual analysis model which is trained in advance to obtain the eye state information;
correspondingly, the second determining unit is configured to:
and determining, based on the eye state information, the single duration for which the driver's sight line deviates from the vehicle running direction and/or the frequency with which the sight line deviates from the vehicle running direction within the preset duration.
In an alternative implementation, the determining module includes:
a sixth determination unit configured to determine visual information of the driver based on the facial feature information;
a seventh determining unit, configured to determine, based on the visual information, at least one of the single eye closure duration, the eye closure frequency, and the yawning frequency of the driver within the preset duration;
an eighth determination unit configured to determine a left-right yaw rate of the vehicle during traveling based on the vehicle traveling state information;
a ninth determining unit, configured to determine that the driver is in a fatigue driving state if the left-right yaw rate of the vehicle during running is greater than a preset third frequency threshold, and the single eye-closure duration of the driver in the preset duration is greater than a preset second time threshold;
or if the left-right deflection frequency of the vehicle during running is greater than the third frequency threshold and the eye closure frequency of the driver within the preset duration is greater than the preset third frequency threshold, determining that the driver is in the fatigue driving state;
or if the left-right deflection frequency of the vehicle during running is greater than the third frequency threshold and the yawning frequency of the driver within the preset duration is greater than a preset fourth frequency threshold, determining that the driver is in the fatigue driving state.
In an alternative implementation, the determining module includes:
a tenth determining unit, configured to determine that the driver is in the fatigue driving state if the single eye closure duration of the driver within the preset duration is greater than a preset second limit time threshold, where the second limit time threshold is equal to m times the second time threshold, and m is a positive integer greater than 1;
or if the eye closure frequency of the driver within the preset duration is greater than a preset third limit frequency threshold, determining that the driver is in the fatigue driving state, where the third limit frequency threshold is equal to k times the third frequency threshold, and k is a positive integer greater than 1;
or if the yawning frequency of the driver within the preset duration is greater than a preset fourth limit frequency threshold, determining that the driver is in the fatigue driving state, where the fourth limit frequency threshold is equal to g times the fourth frequency threshold, and g is a positive integer greater than 1.
In an alternative implementation, the vehicle running state information includes a direction angle of the vehicle running;
the third determining unit, or the eighth determining unit, is specifically configured to:
And determining the left-right deflection frequency of the vehicle in the driving process based on the driving direction angle of the vehicle.
Compared with the prior art, the driving state determining method provided by the first aspect of the embodiment of the application can reduce the probability of erroneous judgment and improve the accuracy of driving state judgment by determining the driving state of the driver based on the facial feature information and the vehicle driving state information.
The second to fourth aspects of the embodiments of the present application provide the same beneficial effects over the prior art as the first aspect, which are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an implementation of a driving state determination method provided in a first embodiment of the present application;
FIG. 2 is a flowchart showing the implementation of S102 in FIG. 1 in the first embodiment;
FIG. 3 is a flowchart showing the implementation of S102 in FIG. 1 in a second embodiment;
FIG. 4 is a flowchart showing the implementation of S102 in FIG. 1 in a third embodiment;
FIG. 5 is a flowchart showing the implementation of S102 in FIG. 1 in the fourth embodiment;
fig. 6 is a schematic structural view of a driving state determining device provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be noted that a conventional vehicle-mounted driver state monitoring system (Driver Status Monitor, DSM) mainly relies on computer vision technology to analyze face images of the driver obtained by a camera, so as to determine whether the driver exhibits an abnormal driving state, such as distracted driving or fatigue driving, or abnormal behavior, such as smoking, making a phone call, or eating. A method that determines the driving state from the driver's face image alone exhibits a certain rate of misjudgment. For example, a driver who, in certain situations, frequently looks at the outside rear-view mirror may be misjudged as driving distractedly, and for drivers who naturally and habitually squint, computer vision analysis alone may not determine the driving state accurately.
A common vehicle-mounted inertial measurement unit (IMU) can measure the acceleration of the vehicle relative to the ground in real time and sense changes in the vehicle's motion track and in the stability of the vehicle body. Therefore, in the embodiments of the present application, the motion state of the vehicle is combined with the above computer vision technology to judge the driving state of the driver, which improves the accuracy of the judgment.
In order to illustrate the technical solutions described in the present application, specific embodiments are described below. As shown in fig. 1, a flowchart of an implementation of the driving state determining method provided in the first embodiment of the present application is shown; the embodiment may be executed by hardware or software inside a vehicle. The details are as follows:
S101, acquiring target image frames and vehicle running state information within a preset duration, wherein the target image frames contain facial feature information of a driver.
The target image frame can be captured by a camera pre-installed in the vehicle; it is understood that the camera is usually installed at a position that captures a frontal image of the driver's face as fully as possible. The vehicle running state information may be measured by an inertial measurement module pre-installed on the vehicle, such as a gyroscope or accelerometer.
In the present embodiment, the driving state is determined by acquiring the target image frame containing the facial feature information of the driver and the vehicle running state information within the preset time period, and combining the facial feature information of the driver and the vehicle running state information.
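As a minimal sketch of this acquisition step, the following Python assumes hypothetical `camera.read()` and `imu.read_heading_deg()` interfaces, which are not named in this application; the only requirements here are that the frames contain the driver's face and that the running-state samples include the vehicle's driving direction angle:

```python
import time

def acquire_window(camera, imu, window_s=10.0, rate_hz=10.0):
    """Buffer target image frames and heading-angle samples over a preset
    duration. `camera` and `imu` are hypothetical device handles."""
    frames, headings = [], []
    t_end = time.time() + window_s
    while time.time() < t_end:
        frames.append(camera.read())             # target image frame (driver's face)
        headings.append(imu.read_heading_deg())  # vehicle running-state sample
        time.sleep(1.0 / rate_hz)
    return frames, headings
```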
S102, determining the driving state of the driver based on the facial feature information and the vehicle driving state information.
It can be appreciated that if articles such as cigarettes, mobile phones, water cups, or food are present around the driver, a distracted driving state may occur; for example, the driver may drink water, smoke, make a call, or eat while driving. In addition, a driver in a fatigue driving state usually yawns or closes the eyes. When the driver is in a distracted or fatigued driving state, the driver's facial feature information changes, and the running state information of the vehicle controlled by the driver changes accordingly; that is, the driving state of the driver can be determined from the facial feature information together with the vehicle running state information.
Specifically, as shown in fig. 2, a flowchart of the implementation of S102 in fig. 1 in the first embodiment is shown. As can be seen from fig. 2, S102 includes:
S1021, determining visual information of the driver based on the facial feature information.
The facial feature information includes preset key facial feature information, such as at least one of eye feature information, mouth feature information, eyebrow feature information, and nose feature information. By way of example and not limitation, in this example the facial feature information includes eye feature information, the visual information includes eye state information, and the determining visual information of the driver based on the facial feature information includes:
and analyzing the eye characteristic information by utilizing a visual analysis model which is trained in advance to obtain the eye state information.
The input of the visual analysis model which is trained in advance is facial feature information, and the output is visual information; in this embodiment, the facial feature information is eye feature information, and the visual information is eye state information.
S1022, determining, based on the visual information, the single duration for which the driver's sight line deviates from the vehicle running direction and/or the frequency with which the sight line deviates from the vehicle running direction within the preset duration.
Correspondingly, the determining, based on the visual information, a single duration of the driver's line of sight deviating from the vehicle driving direction and/or a frequency of the line of sight deviating from the vehicle driving direction within the preset duration includes:
and determining, based on the eye state information, the single duration for which the driver's sight line deviates from the vehicle running direction and/or the frequency with which the sight line deviates from the vehicle running direction within the preset duration.
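To make this step concrete, here is a sketch under the assumption, not fixed by this application, that the visual analysis model emits one boolean per frame that is true when the driver's sight line deviates from the vehicle running direction:

```python
def gaze_deviation_metrics(deviated_flags, frame_dt):
    """Return the longest single deviation duration (seconds) and the number
    of deviation episodes in the window, from per-frame gaze flags."""
    max_single, episodes, run = 0.0, 0, 0
    for flag in deviated_flags:
        if flag:
            run += 1
            max_single = max(max_single, run * frame_dt)
        else:
            if run:
                episodes += 1  # a deviation episode just ended
            run = 0
    if run:
        episodes += 1          # episode still open at the end of the window
    return max_single, episodes
```

Here the episode count per window stands in for the deviation frequency.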
S1023, determining the left-right deflection frequency of the vehicle in the running process based on the vehicle running state information.
By way of example and not limitation, the vehicle travel state information includes a direction angle at which the vehicle is traveling. The S1023 includes:
and determining the left-right deflection frequency of the vehicle in the driving process based on the driving direction angle of the vehicle.
It should be noted that, in this embodiment, the vehicle running state information generally further includes an acceleration value or an angular velocity value. The frequencies of left and right deflection during driving are measured from successive driving direction angles of the vehicle; these measure the lane-keeping ability of the vehicle during driving, from which it is determined whether the driver exhibits abnormal driving behavior.
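One plausible reading of this computation, assuming the deflection frequency counts alternations between leftward and rightward heading changes (the application does not spell out the exact formula), is:

```python
def deflection_frequency(headings_deg, sample_dt, dead_band_deg=0.5):
    """Estimate how often the vehicle alternates between left and right
    deflection (reversals per second) from successive direction angles."""
    reversals, prev_sign = 0, 0
    for a, b in zip(headings_deg, headings_deg[1:]):
        delta = b - a
        if abs(delta) < dead_band_deg:  # ignore sensor noise within a dead band
            continue
        sign = 1 if delta > 0 else -1
        if prev_sign and sign != prev_sign:
            reversals += 1              # deflection direction reversed
        prev_sign = sign
    window_s = len(headings_deg) * sample_dt
    return reversals / window_s if window_s else 0.0
```

A low value indicates good lane keeping; a high value suggests the vehicle is weaving.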
S1024, if the left-right deflection frequency of the vehicle during running is greater than a preset first frequency threshold, and the single duration for which the driver's sight line deviates from the vehicle running direction within the preset duration is greater than a preset first time threshold, determining that the driver is in a distracted driving state;
or if the left-right deflection frequency of the vehicle during running is greater than the preset first frequency threshold, and the frequency with which the driver's sight line deviates from the vehicle running direction within the preset duration is greater than a preset second frequency threshold, determining that the driver is in the distracted driving state.
It will be appreciated that when the driver is not focused on driving, the eyes will deviate from the driving direction of the vehicle; however, judging whether the driver is in a distracted driving state only from this deviation may produce erroneous judgments, which is why the vehicle's left-right deflection frequency is considered as well.
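Putting S1024 together, a hedged sketch of the combined rule; all default threshold values below are illustrative placeholders, not values taken from this application:

```python
def is_distracted(deflection_freq, max_gaze_dev_s, gaze_dev_episodes,
                  first_freq_thr=0.5,   # illustrative: reversals per second
                  first_time_thr=2.0,   # illustrative: seconds of continuous deviation
                  second_freq_thr=3):   # illustrative: deviation episodes per window
    """Distraction requires abnormal vehicle deflection AND abnormal gaze."""
    if deflection_freq <= first_freq_thr:
        return False
    return (max_gaze_dev_s > first_time_thr
            or gaze_dev_episodes > second_freq_thr)
```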
As shown in fig. 3, a flowchart of the implementation of S102 in fig. 1 in the second embodiment is shown. As can be seen from fig. 3, compared with the embodiment shown in fig. 2, the specific implementation of S1031 to S1032 is the same as that of S1021 to S1022; the difference is that S1033 follows S1032, and the contents of S1023 and S1024 are not included. The details are as follows:
S1033, if the single duration for which the driver's sight line deviates from the vehicle running direction within the preset duration is greater than a preset first limit time threshold, determining that the driver is in a distracted driving state; the first limit time threshold is equal to n times the first time threshold, and n is a positive integer greater than 1.
It will be appreciated that in some special situations, such as when the inertial measurement module fails, the method of fig. 2 may miss a distracted driving determination; therefore, in this example, the accuracy of determining the distracted driving state is improved by presetting the first limit time threshold. It should be noted that the first limit time threshold is generally n times the first time threshold and needs to be determined from experience and preset experiments, comprehensively weighing the risk of an accident against the accuracy of the distraction determination.
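Layering the limit-threshold fallback of S1033 on top of the previous sketch (reusing `is_distracted` from above; n is a tunable multiplier, with the application only requiring n > 1):

```python
def is_distracted_with_fallback(deflection_freq, max_gaze_dev_s,
                                gaze_dev_episodes, first_time_thr=2.0, n=3):
    # Fallback: an extreme single gaze deviation alone triggers the
    # determination, covering e.g. an IMU failure that makes the
    # deflection frequency unreliable.
    if max_gaze_dev_s > n * first_time_thr:
        return True
    return is_distracted(deflection_freq, max_gaze_dev_s, gaze_dev_episodes)
```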
As shown in fig. 4, a flowchart of the implementation of S102 in fig. 1 in the third embodiment is shown. As can be seen from fig. 4, compared with the embodiment shown in fig. 2, the specific implementation of S1043 is the same as that of S1023; the difference lies in S1041, S1042, and S1044, which replace the contents of S1021, S1022, and S1024. The details are as follows:
S1041, determining visual information of the driver based on the facial feature information.
As an example and not by way of limitation, in this example, the facial feature information includes eye feature information and mouth feature information, and the visual information includes eye state information and mouth state information; the determining visual information of the driver based on the facial feature information includes:
Analyzing the eye feature information by utilizing a visual analysis model which is trained in advance to obtain the eye state information, and analyzing the mouth feature information by utilizing the visual analysis model to obtain the mouth state information.
In this example, the inputs of the visual analysis model trained beforehand are eye feature information and mouth feature information, and the outputs are eye state information and mouth state information.
S1042, determining, based on the visual information, at least one of the single eye closure duration, the eye closure frequency, and the yawning frequency of the driver within the preset duration.
Correspondingly, the determining, based on the visual information, at least one of a single duration of eye closure, an eye closure frequency, and a yawning frequency of the driver within the preset duration includes:
determining a single eye closure duration or an eye closure frequency of the driver within the preset duration based on the eye state information;
and determining the yawning frequency of the driver within the preset duration based on the mouth state information.
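A sketch of S1042 under the same per-frame assumption as before: the visual analysis model is taken to label each frame with eyes-closed and yawning booleans, a representation this application does not prescribe:

```python
def fatigue_metrics(eyes_closed_flags, yawning_flags, frame_dt):
    """Longest single eye closure duration (seconds), number of eye-closure
    episodes, and number of yawns within the window."""
    def longest_and_count(flags):
        longest, count, run = 0.0, 0, 0
        for f in flags:
            if f:
                run += 1
                longest = max(longest, run * frame_dt)
            else:
                if run:
                    count += 1
                run = 0
        if run:
            count += 1
        return longest, count

    max_closure_s, closure_episodes = longest_and_count(eyes_closed_flags)
    _, yawn_count = longest_and_count(yawning_flags)
    return max_closure_s, closure_episodes, yawn_count
```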
S1043, determining the left-right deflection frequency of the vehicle during running based on the vehicle running state information.
S1044, if the left-right deflection frequency of the vehicle during running is greater than a preset third frequency threshold, and the single eye closure duration of the driver within the preset duration is greater than a preset second time threshold, determining that the driver is in a fatigue driving state;
or if the left-right deflection frequency of the vehicle during running is greater than the third frequency threshold and the eye closure frequency of the driver within the preset duration is greater than the preset third frequency threshold, determining that the driver is in the fatigue driving state;
or if the left-right deflection frequency of the vehicle during running is greater than the third frequency threshold and the yawning frequency of the driver within the preset duration is greater than a preset fourth frequency threshold, determining that the driver is in the fatigue driving state.
It can be appreciated that during driving the driver may close the eyes or yawn because of abrupt changes in the surrounding environment, and, to prevent accidents, the preset thresholds for judging eye closure or yawning should not be too long. Therefore, in this example, when determining the fatigue driving state, the left-right deflection frequency of the vehicle during running is combined with the single eye closure duration of the driver within the preset duration, or with the eye closure frequency of the driver within the preset duration, or with the yawning frequency of the driver within the preset duration, so as to improve the accuracy of determining the fatigue driving state.
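A sketch of the combined fatigue rule of S1044. Note that the text compares the eye closure frequency against "the preset third frequency threshold", the same name used for the deflection threshold; the sketch assumes these are distinct values and uses a separate illustrative parameter for the eye closure frequency:

```python
def is_fatigued(deflection_freq, max_closure_s, closure_freq, yawn_freq,
                third_freq_thr=0.5,    # illustrative: deflection reversals per second
                second_time_thr=1.5,   # illustrative: seconds of continuous closure
                closure_freq_thr=5,    # illustrative: closure episodes per window
                fourth_freq_thr=3):    # illustrative: yawns per window
    """Fatigue requires abnormal deflection combined with any one facial cue."""
    if deflection_freq <= third_freq_thr:
        return False
    return (max_closure_s > second_time_thr
            or closure_freq > closure_freq_thr
            or yawn_freq > fourth_freq_thr)
```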
As shown in fig. 5, a flowchart of the implementation of S102 in fig. 1 in the fourth embodiment is shown. As can be seen from fig. 5, compared with the embodiment shown in fig. 4, the specific implementation of S1051 to S1052 is the same as that of S1041 to S1042; the difference is that S1054 follows, and the contents of S1043 and S1044 are not included. The details are as follows:
S1054, if the single eye closure duration of the driver within the preset duration is greater than a preset second limit time threshold, determining that the driver is in the fatigue driving state, wherein the second limit time threshold is equal to m times the second time threshold, and m is a positive integer greater than 1;
or if the eye closure frequency of the driver within the preset duration is greater than a preset third limit frequency threshold, determining that the driver is in the fatigue driving state, wherein the third limit frequency threshold is equal to k times the third frequency threshold, and k is a positive integer greater than 1;
or if the yawning frequency of the driver within the preset duration is greater than a preset fourth limit frequency threshold, determining that the driver is in the fatigue driving state, wherein the fourth limit frequency threshold is equal to g times the fourth frequency threshold, and g is a positive integer greater than 1.
It will be appreciated that in some special situations, such as when the inertial measurement module fails, the method of fig. 4 may miss a fatigue driving determination. Therefore, in this example, the accuracy of determining the fatigue driving state is improved by presetting at least one of the second limit time threshold and the third and fourth limit frequency thresholds. It should be noted that these limit thresholds are determined from experience and preset experiments, comprehensively weighing the risk of an accident against the accuracy of the fatigue determination.
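Finally, the limit thresholds of S1054 can wrap the previous sketch (reusing `is_fatigued` from above; m, k, and g are tunable multipliers greater than 1, as required, with illustrative defaults):

```python
def is_fatigued_with_fallback(deflection_freq, max_closure_s, closure_freq,
                              yawn_freq, second_time_thr=1.5, m=2,
                              closure_freq_thr=5, k=2,
                              fourth_freq_thr=3, g=2):
    # Limit thresholds: any extreme facial cue alone triggers the
    # determination, even if the deflection frequency is unavailable.
    if (max_closure_s > m * second_time_thr
            or closure_freq > k * closure_freq_thr
            or yawn_freq > g * fourth_freq_thr):
        return True
    return is_fatigued(deflection_freq, max_closure_s, closure_freq, yawn_freq)
```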
As can be seen from the above analysis, the driving state determining method provided in the embodiment of the present application includes: acquiring target image frames and vehicle running state information within a preset duration, wherein the target image frames contain facial feature information of a driver; a driving state of the driver is determined based on the facial feature information and the vehicle running state information. Compared with the prior art, the driving state of the driver is determined based on the facial feature information and the vehicle driving state information, so that the probability of misjudgment can be reduced, and the accuracy of driving state judgment can be improved.
Fig. 6 is a schematic structural diagram of a driving state determining device provided in an embodiment of the present application. As can be seen from fig. 6, the driving state determining device 6 provided in the present embodiment includes: the vehicle driving system comprises an acquisition module 601 and a determination module 602, wherein the acquisition module 601 is used for acquiring target image frames and vehicle driving state information within a preset duration, and the target image frames comprise facial feature information of a driver;
a determination module 602 for determining a driving state of the driver based on the facial feature information and the vehicle driving state information.
In an alternative implementation, the determining module 602 includes:
a first determination unit configured to determine visual information of the driver based on the facial feature information;
a second determining unit configured to determine, based on the visual information, a single duration of a line of sight of the driver deviating from a vehicle traveling direction and/or a frequency of a line of sight deviating from the vehicle traveling direction within the preset duration;
a third determining unit configured to determine a left-right yaw rate of the vehicle during traveling based on the vehicle traveling state information;
a fourth determining unit, configured to determine that the driver is in a distracted driving state if a left-right yaw rate of the vehicle during driving is greater than a preset first frequency threshold, and a single duration of a line of sight of the driver, within the preset duration, deviating from a driving direction of the vehicle is greater than a preset first time threshold;
or if the left-right deflection frequency of the vehicle during running is greater than the preset first frequency threshold, and the frequency with which the driver's sight line deviates from the vehicle running direction within the preset duration is greater than a preset second frequency threshold, determining that the driver is in the distracted driving state.
In an alternative implementation, the determining module 602 includes:
a fifth determining unit, configured to determine that the driver is in a distracted driving state if a single duration of the driver's line of sight deviating from a vehicle driving direction within the preset duration is greater than a preset first limit time threshold; the first limit time threshold is equal to n times the first time threshold, and n is a positive integer greater than 1.
In an alternative implementation, the visual information includes eye state information;
the first determination unit includes:
analyzing the facial feature information by utilizing a visual analysis model which is trained in advance to obtain the eye state information;
correspondingly, the second determining unit is configured to:
and determining, based on the eye state information, the single duration for which the driver's sight line deviates from the vehicle running direction and/or the frequency with which the sight line deviates from the vehicle running direction within the preset duration.
In an alternative implementation, the determining module 602 includes:
a sixth determination unit configured to determine visual information of the driver based on the facial feature information;
a seventh determining unit, configured to determine, based on the visual information, at least one of the single eye closure duration, the eye closure frequency, and the yawning frequency of the driver within the preset duration;
an eighth determination unit configured to determine a left-right yaw rate of the vehicle during traveling based on the vehicle traveling state information;
a ninth determining unit, configured to determine that the driver is in a fatigue driving state if the left-right yaw rate of the vehicle during running is greater than a preset third frequency threshold, and the single eye-closure duration of the driver in the preset duration is greater than a preset second time threshold;
or if the left-right deflection frequency of the vehicle during running is greater than the third frequency threshold and the eye closure frequency of the driver within the preset duration is greater than the preset third frequency threshold, determining that the driver is in the fatigue driving state;
or if the left-right deflection frequency of the vehicle during running is greater than the third frequency threshold and the yawning frequency of the driver within the preset duration is greater than a preset fourth frequency threshold, determining that the driver is in the fatigue driving state.
In an alternative implementation, the determining module 602 includes:
a tenth determining unit, configured to determine that the driver is in the fatigue driving state if the single eye closure duration of the driver within the preset duration is greater than a preset second limit time threshold, where the second limit time threshold is equal to m times the second time threshold, and m is a positive integer greater than 1;
or if the eye closure frequency of the driver within the preset duration is greater than a preset third limit frequency threshold, determining that the driver is in the fatigue driving state, where the third limit frequency threshold is equal to k times the third frequency threshold, and k is a positive integer greater than 1;
or if the yawning frequency of the driver within the preset duration is greater than a preset fourth limit frequency threshold, determining that the driver is in the fatigue driving state, where the fourth limit frequency threshold is equal to g times the fourth frequency threshold, and g is a positive integer greater than 1.
In an alternative implementation, the vehicle running state information includes a direction angle of the vehicle running;
the third determining unit, or the eighth determining unit, is specifically configured to:
And determining the left-right deflection frequency of the vehicle in the driving process based on the driving direction angle of the vehicle.
Fig. 7 is a schematic structural diagram of a vehicle provided in an embodiment of the present application. As shown in fig. 7, the vehicle 7 of this embodiment includes: a processor 70, a memory 71, and a computer program 72, such as a driving state determination program, stored in the memory 71 and executable on the processor 70. When the processor 70 executes the computer program 72, the steps of the driving state determination method embodiments described above, such as steps S101 to S102 shown in fig. 1, are implemented.
By way of example, the computer program 72 may be partitioned into one or more modules/units that are stored in the memory 71 and executed by the processor 70 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program 72 in the vehicle 7. For example, the computer program 72 may be divided into an acquisition module and a determination module (module in the virtual device), each of which functions specifically as follows:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring target image frames and vehicle running state information in preset duration, and the target image frames contain facial feature information of a driver;
A determination module for determining a driving state of the driver based on the facial feature information and the vehicle running state information.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not detailed or described in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of communication units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each method embodiment described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer readable medium may be adapted according to the requirements of legislation and patent practice in each jurisdiction; for example, in certain jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
Claims (6)
1. A driving state determination method, characterized by comprising:
acquiring target image frames and vehicle running state information within a preset duration, wherein the target image frames contain facial feature information of a driver;
determining a driving state of the driver based on the facial feature information and the vehicle running state information;
wherein the determining the driving state of the driver based on the facial feature information and the vehicle driving state information includes:
determining, based on the facial feature information, a single duration of the driver's line of sight deviating from the vehicle travel direction and/or a frequency of the line of sight deviating from the vehicle travel direction within the preset duration;
Determining a left-right deflection frequency of the vehicle in the running process based on the vehicle running state information, wherein the vehicle running state information comprises a direction angle and an angular speed value of the vehicle running;
if the left-right deflection frequency of the vehicle during running is greater than a preset first frequency threshold, and the single duration for which the driver's sight line deviates from the vehicle running direction within the preset duration is greater than a preset first time threshold, or the frequency with which the driver's sight line deviates from the vehicle running direction within the preset duration is greater than a preset second frequency threshold, determining that the driver is in a distracted driving state;
if the single duration for which the driver's sight line deviates from the vehicle running direction within the preset duration is greater than a preset first limit time threshold, determining that the driver is in the distracted driving state, wherein the first limit time threshold is equal to n times the first time threshold, and n is a positive integer greater than 1;
wherein the determining the driving state of the driver based on the facial feature information and the vehicle running state information further includes:
determining visual information of the driver based on the facial feature information;
Determining at least one of a single eye closure duration, an eye closure frequency, and a yawning frequency of the driver within the preset duration based on the visual information;
determining left and right deflection frequencies of the vehicle in the running process based on the vehicle running state information;
if the left-right deflection frequency of the vehicle in the running process is larger than a preset third frequency threshold value, and the single-time duration of the eye closure of the driver in the preset duration is larger than a preset second time threshold value, determining that the driver is in a fatigue driving state;
or if the left-right deflection frequency of the vehicle in the running process is greater than the third frequency threshold value and the eye closing frequency of the driver in the preset duration is greater than the preset third frequency threshold value, determining that the driver is in the fatigue driving state;
or if the left-right deflection frequency of the vehicle during running is greater than the third frequency threshold and the yawning frequency of the driver within the preset duration is greater than a preset fourth frequency threshold, determining that the driver is in the fatigue driving state;
if the single eye closure duration of the driver within the preset duration is greater than a preset second limit time threshold, determining that the driver is in the fatigue driving state, wherein the second limit time threshold is equal to m times the second time threshold, and m is a positive integer greater than 1;
or if the eye closure frequency of the driver within the preset duration is greater than a preset third limit frequency threshold, determining that the driver is in the fatigue driving state, wherein the third limit frequency threshold is equal to k times the third frequency threshold, and k is a positive integer greater than 1;
or if the yawning frequency of the driver within the preset duration is greater than a preset fourth limit frequency threshold, determining that the driver is in the fatigue driving state, wherein the fourth limit frequency threshold is equal to g times the fourth frequency threshold, and g is a positive integer greater than 1.
2. The driving state determination method according to claim 1, wherein the facial feature information includes eye feature information;
the determining, based on the facial feature information, of a single duration of the driver's line of sight deviating from the vehicle travel direction and/or a frequency of the line of sight deviating from the vehicle travel direction within the preset duration includes:
determining visual information of the driver based on the facial feature information, the visual information including eye state information;
analyzing the eye feature information by utilizing a visual analysis model which is trained in advance to obtain the eye state information;
and determining, based on the eye state information, the single duration for which the driver's sight line deviates from the vehicle running direction and/or the frequency with which the sight line deviates from the vehicle running direction within the preset duration.
3. The driving state determination method according to claim 1 or 2, characterized in that the vehicle running state information includes a direction angle of vehicle running;
the determining the left-right deflection frequency of the vehicle during running based on the vehicle running state information includes:
determining the left-right deflection frequency of the vehicle during running based on the direction angle of vehicle running.
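A minimal sketch of claim 3's computation, assuming the vehicle reports a sampled direction angle: a left-right deflection is counted each time the direction of heading change reverses. The dead-band parameter is an assumed noise filter, not anything recited in the claim.

```python
# Assumed sketch: estimate the left-right deflection frequency from a
# sequence of direction angles (degrees) sampled at a fixed rate.
def deflection_frequency(heading_deg, sample_rate_hz, deadband_deg=0.5):
    deflections = 0
    prev_sign = 0
    for a, b in zip(heading_deg, heading_deg[1:]):
        delta = b - a
        if abs(delta) < deadband_deg:
            continue                   # ignore noise-level heading changes
        sign = 1 if delta > 0 else -1
        if prev_sign and sign != prev_sign:
            deflections += 1           # the swing direction reversed
        prev_sign = sign
    window_s = len(heading_deg) / sample_rate_hz
    return deflections / window_s      # reversals per second
```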
4. A driving state determination device, characterized by comprising:
an acquisition module, configured to acquire target image frames and vehicle running state information within a preset duration, wherein the target image frames contain facial feature information of a driver;
a determination module, configured to determine the driving state of the driver based on the facial feature information and the vehicle running state information;
wherein the driving state determination means is for implementing the driving state determination method of any one of claims 1 to 3.
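The two-module structure of claim 4 could be organized as in the sketch below; the class and method names are assumptions, and the determination step would apply the claim-1 rules illustrated earlier.

```python
# Hypothetical sketch of the claim-4 device: an acquisition module that
# buffers data over the preset duration and a determination module that
# evaluates it. Names and types are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class DrivingStateDevice:
    preset_duration_s: float = 10.0
    frames: List[Any] = field(default_factory=list)          # target image frames
    vehicle_states: List[Any] = field(default_factory=list)  # running-state samples

    def acquire(self, frame: Any, vehicle_state: Any) -> None:
        """Acquisition module: collect one frame and one state sample."""
        self.frames.append(frame)
        self.vehicle_states.append(vehicle_state)

    def determine(self) -> bool:
        """Determination module: extract facial statistics and the vehicle
        deflection frequency from the buffers, then apply the fatigue rules
        (see the is_fatigued() sketch above)."""
        raise NotImplementedError  # feature extraction is out of scope here
```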
5. A vehicle comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, carries out the steps of the driving state determination method according to any one of claims 1 to 3.
6. A computer-readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the driving state determination method according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911267783.9A CN111160126B (en) | 2019-12-11 | 2019-12-11 | Driving state determining method, driving state determining device, vehicle and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111160126A CN111160126A (en) | 2020-05-15 |
CN111160126B (en) | 2023-12-19
Family
ID=70556721
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911267783.9A Active CN111160126B (en) | 2019-12-11 | 2019-12-11 | Driving state determining method, driving state determining device, vehicle and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111160126B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112883834A (en) * | 2021-01-29 | 2021-06-01 | Chongqing Changan Automobile Co., Ltd. | DMS system distraction detection method, DMS system distraction detection system, DMS vehicle, and storage medium |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002331849A (en) * | 2001-05-07 | 2002-11-19 | Nissan Motor Co Ltd | Driving behavior intention detector |
CN101984478A (en) * | 2010-08-03 | 2011-03-09 | Zhejiang University | Abnormal S-type driving warning method based on binocular vision lane marking detection |
CN102881116A (en) * | 2011-07-13 | 2013-01-16 | Shanghai Kuyuan Electric Technology Co., Ltd. | System and method for pre-warning of fatigue driving |
WO2019028798A1 (en) * | 2017-08-10 | 2019-02-14 | Beijing SenseTime Technology Development Co., Ltd. | Method and device for monitoring driving condition, and electronic device |
CN109803583A (en) * | 2017-08-10 | 2019-05-24 | Beijing SenseTime Technology Development Co., Ltd. | Driver monitoring method, apparatus and electronic equipment |
CN108021875A (en) * | 2017-11-27 | 2018-05-11 | Shanghai Lingzhi Technology Co., Ltd. | Personalized fatigue monitoring and early-warning method for a vehicle driver |
Similar Documents
Publication | Title
---|---
US10915769B2 (en) | Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium
US20210009150A1 (en) | Method for recognizing dangerous action of personnel in vehicle, electronic device and storage medium
CN111079476B (en) | Driving state analysis method and device, driver monitoring system and vehicle
JP7146959B2 (en) | Driving state detection method and device, driver monitoring system and vehicle
KR102305914B1 (en) | Driving management methods and systems, in-vehicle intelligent systems, electronic devices, media
US20210197849A1 (en) | Warning device and driving tendency analysis method
EP3113073A1 (en) | Determination device, determination method, and non-transitory storage medium
CN109584507A (en) | Driver behavior modeling method, apparatus, system, vehicle and storage medium
CN107405121B (en) | Method and device for detecting a fatigue state and/or a sleep state of a driver of a vehicle
KR20220004754A (en) | Neural networks for head pose and gaze estimation using photorealistic synthetic data
EP3310043B1 (en) | Head-mounted display and display control method
CN110909718B (en) | Driving state identification method and device and vehicle
DE102014206626A1 (en) | Fatigue detection using data glasses (HMD)
CN106945672B (en) | Method and controller for outputting a drowsiness warning
WO2023241358A1 (en) | Fatigue driving determination method and apparatus, and electronic device
US11427206B2 (en) | Vehicle operation assistance device, vehicle operation assistance method, and program
US20160314674A1 (en) | Vehicle operator impairment detection system and method
CN111160126B (en) | Driving state determining method, driving state determining device, vehicle and storage medium
CN112949345A (en) | Fatigue monitoring method and system, automobile data recorder and intelligent cabin
CN111062300A (en) | Driving state detection method, device, equipment and computer-readable storage medium
CN112513784B (en) | Data glasses for vehicles with automatic hiding of display content
JP7019394B2 (en) | Visual target detection device, visual target detection method, and program
CN116798189A (en) | State detection method, device and storage medium
CN112926364A (en) | Head posture recognition method and system, automobile data recorder and intelligent cabin
CN112633247B (en) | Driving state monitoring method and device
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant