CN111160126A - Driving state determination method and device, vehicle and storage medium - Google Patents

Driving state determination method and device, vehicle and storage medium

Info

Publication number
CN111160126A
CN111160126A (application number CN201911267783.9A; granted as CN111160126B)
Authority
CN
China
Prior art keywords
driver
vehicle
frequency
determining
driving state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911267783.9A
Other languages
Chinese (zh)
Other versions
CN111160126B (en)
Inventor
黄凯明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Streamax Technology Co Ltd
Original Assignee
Streamax Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Streamax Technology Co Ltd
Priority to CN201911267783.9A
Publication of CN111160126A
Application granted
Publication of CN111160126B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/166: Detection; Localisation; Normalisation using acquisition arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/19: Sensors therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application, which belongs to the field of computer technology, provides a driving state determination method comprising the following steps: acquiring a target image frame and vehicle running state information within a preset time length, wherein the target image frame comprises facial feature information of a driver; and determining the driving state of the driver based on the facial feature information and the vehicle running state information. Compared with the prior art, determining the driving state from both the facial feature information and the vehicle running state information reduces the probability of misjudgment and improves the accuracy of the driving state determination.

Description

Driving state determination method and device, vehicle and storage medium
Technical Field
The application belongs to the technical field of computers, and particularly relates to a driving state determination method and device, a vehicle and a storage medium.
Background
A conventional driver state monitor (DSM) is mainly based on computer vision technology: it acquires an image of the driver's face through a camera and judges whether the driver shows an abnormal state or behavior according to the detection result of a computer vision algorithm.
Disclosure of Invention
In view of this, embodiments of the present application provide a driving state determination method, an apparatus, a vehicle, and a storage medium, so as to address the misjudgment that arises in the prior art when the driving state is determined by computer vision technology alone.
A first aspect of an embodiment of the present application provides a driving state determination method, including:
acquiring a target image frame and vehicle running state information within a preset time length, wherein the target image frame comprises facial feature information of a driver;
determining a driving state of the driver based on the facial feature information and the vehicle driving state information.
In an optional implementation, the determining the driving state of the driver based on the facial feature information and the vehicle driving state information includes:
determining visual information of the driver based on the facial feature information;
determining a single duration of time that the line of sight of the driver deviates from the vehicle driving direction and/or a frequency that the line of sight deviates from the vehicle driving direction within the preset duration based on the visual information;
determining a left-right deflection frequency of the vehicle during running based on the vehicle running state information;
if the left and right deflection frequency of the vehicle in the running process is greater than a preset first frequency threshold value, and the single duration of the sight line of the driver deviating from the running direction of the vehicle in the preset time length is greater than a preset first time threshold value, determining that the driver is in a distracted driving state;
and if the left-right deflection frequency of the vehicle in the running process is greater than a preset first frequency threshold value, and the frequency of the visual line of the driver deviating from the running direction of the vehicle in the preset time length is greater than a preset second frequency threshold value, determining that the driver is in the distracted driving state.
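The two distraction conditions above can be sketched as a single predicate. This is a minimal sketch; the function name and the concrete threshold values (0.5 Hz, 2 s, 3 deviations) are hypothetical placeholders, not values taken from the application:

```python
def is_distracted(vehicle_yaw_freq_hz, single_gaze_off_s, gaze_off_count,
                  first_freq_thresh=0.5, first_time_thresh=2.0,
                  second_freq_thresh=3):
    """Distracted only when the vehicle is weaving AND either gaze
    metric (single off-road-gaze duration, or the number of such
    deviations in the window) exceeds its threshold."""
    if vehicle_yaw_freq_hz <= first_freq_thresh:
        return False  # lane keeping looks normal: no corroboration
    return (single_gaze_off_s > first_time_thresh
            or gaze_off_count > second_freq_thresh)
```

The guard on the yaw frequency reflects the core idea of the application: a gaze anomaly alone never triggers the distracted verdict; the vehicle's own motion must corroborate it.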
In an optional implementation manner, after the determining, based on the visual information, a single duration that the line of sight of the driver deviates from the vehicle driving direction within the preset time period and/or a frequency that the line of sight deviates from the vehicle driving direction, the method further includes:
if the single duration for which the line of sight of the driver deviates from the vehicle running direction within the preset duration is greater than a preset first limit time threshold, determining that the driver is in a distracted driving state; the first limit time threshold is equal to n times the first time threshold, n being a positive integer greater than 1.
In an alternative implementation, the visual information includes eye state information;
the determining visual information of the driver based on the facial feature information includes:
analyzing the facial feature information by using a pre-trained visual analysis model to obtain the eye state information;
correspondingly, the determining, based on the visual information, the single duration of the deviation of the sight line from the vehicle driving direction and/or the frequency of the deviation of the sight line from the vehicle driving direction by the driver within the preset time period includes:
and determining the single duration of the deviation of the sight line from the vehicle driving direction and/or the frequency of the deviation of the sight line from the vehicle driving direction of the driver in the preset duration based on the eye state information.
In an optional implementation, the determining the driving state of the driver based on the facial feature information and the vehicle driving state information includes:
determining visual information of the driver based on the facial feature information;
determining at least one of a single eye closure duration, a frequency of eye closure, and a frequency of yawning for the driver within the preset duration based on the visual information;
determining a left-right deflection frequency of the vehicle during running based on the vehicle running state information;
if the left-right deflection frequency of the vehicle in the driving process is larger than a preset third frequency threshold value, and the single eye closing duration of the driver in the preset duration is larger than a preset second time threshold value, determining that the driver is in a fatigue driving state;
or if the left-right deflection frequency of the vehicle in the running process is greater than the third frequency threshold value, and the eye closing frequency of the driver in the preset time length is greater than the preset third frequency threshold value, determining that the driver is in the fatigue driving state;
or if the left and right deflection frequency of the vehicle in the driving process is greater than the third frequency threshold value, and the yawning frequency of the driver in the preset duration is greater than a preset fourth frequency threshold value, determining that the driver is in the fatigue driving state.
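The fatigue branch follows the same pattern: the vehicle's left-right deflection acts as a gate, and any one of the three visual metrics can then trigger the verdict. A sketch under hypothetical threshold values; note that the claim text reuses the name "third frequency threshold" for both the deflection test and the eye-closure test, so the sketch gives them separate placeholder names for readability:

```python
def is_fatigued(vehicle_yaw_freq_hz, single_closure_s, closure_count,
                yawn_count, deflection_freq_thresh=0.5,
                second_time_thresh=1.0, closure_count_thresh=4,
                fourth_freq_thresh=2):
    """Fatigued only when the vehicle is weaving AND at least one of
    the three visual fatigue metrics exceeds its threshold."""
    if vehicle_yaw_freq_hz <= deflection_freq_thresh:
        return False  # no corroboration from the vehicle's motion
    return (single_closure_s > second_time_thresh
            or closure_count > closure_count_thresh
            or yawn_count > fourth_freq_thresh)
```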
In an optional implementation manner, after the determining, based on the visual information, the single duration of eye closure, the frequency of eye closure, or the frequency of yawning of the driver within the preset time period, the method further includes:
if the duration of a single eye closure time of the driver in the preset time length is greater than a preset second limit time threshold, determining that the driver is in the fatigue driving state, wherein the second limit time threshold is equal to m times of the second time threshold, and m is a positive integer greater than 1;
or if the eye closing frequency of the driver in the preset time length is greater than a preset third limit frequency threshold, determining that the driver is in the fatigue driving state, wherein the third limit frequency threshold is equal to k times of the third frequency threshold, and k is a positive integer greater than 1;
or if the yawning frequency of the driver in the preset time length is greater than a preset fourth limiting frequency threshold, determining that the driver is in the fatigue driving state, wherein the fourth limiting frequency threshold is equal to g times of the fourth frequency threshold, and g is a positive integer greater than 1.
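The limit-threshold fallback above can be sketched as follows; all base thresholds and the multipliers m, k, g are hypothetical placeholders:

```python
def fatigue_by_limit_thresholds(single_closure_s, closure_count, yawn_count,
                                second_time_thresh=1.0,
                                third_freq_thresh=4, fourth_freq_thresh=2,
                                m=3, k=2, g=2):
    """Fallback fatigue rule used without vehicle-state corroboration:
    each visual metric must exceed a stricter 'limit' threshold,
    defined as an integer multiple (m, k, g > 1) of its base value."""
    return (single_closure_s > m * second_time_thresh
            or closure_count > k * third_freq_thresh
            or yawn_count > g * fourth_freq_thresh)
```

Raising the bar this way trades sensitivity for fewer false alarms when only the camera evidence is available.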
In an alternative implementation, the vehicle driving state information includes a direction angle in which the vehicle is driven;
the determining the left and right deflection frequency of the vehicle during running based on the vehicle running state information comprises:
and determining the left and right deflection frequency of the vehicle during running based on the direction angle of the vehicle running.
A second aspect of the embodiments of the present application provides a driving state determination device, including:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a target image frame and vehicle running state information within a preset time length, and the target image frame comprises facial feature information of a driver;
a determination module to determine a driving state of the driver based on the facial feature information and the vehicle driving state information.
In an optional implementation manner, the determining module includes:
a first determination unit configured to determine visual information of the driver based on the facial feature information;
the second determination unit is used for determining the single duration of the deviation of the sight line from the vehicle running direction and/or the frequency of the deviation of the sight line from the vehicle running direction of the driver in the preset time length based on the visual information;
a third determination unit configured to determine a left-right yaw frequency of the vehicle during running based on the vehicle running state information;
the fourth determining unit is used for determining that the driver is in a distracted driving state if the left-right deflection frequency of the vehicle in the driving process is greater than a preset first frequency threshold value and the single duration of the driver deviating from the driving direction of the vehicle from the sight line within the preset time length is greater than a preset first time threshold value;
or if the left-right deflection frequency of the vehicle in the running process is greater than a preset first frequency threshold value, and the frequency of the line of sight of the driver deviating from the running direction of the vehicle in the preset time length is greater than a preset second frequency threshold value, determining that the driver is in the distracted driving state.
In an optional implementation manner, the determining module includes:
a fifth determining unit, configured to determine that the driver is in a distracted driving state if a single duration of time during which the line of sight of the driver deviates from the vehicle driving direction within the preset duration is greater than a preset first limit time threshold; the first limit time threshold is equal to n times the first time threshold, n being a positive integer greater than 1.
In an alternative implementation, the visual information includes eye state information;
the first determination unit includes:
analyzing the facial feature information by using a pre-trained visual analysis model to obtain the eye state information;
correspondingly, the second determining unit is configured to:
and determining the single duration of the deviation of the sight line from the vehicle driving direction and/or the frequency of the deviation of the sight line from the vehicle driving direction of the driver in the preset duration based on the eye state information.
In an optional implementation manner, the determining module includes:
a sixth determination unit configured to determine visual information of the driver based on the facial feature information;
a seventh determining unit that determines at least one of an eye-closure single duration, an eye-closure frequency, and a yawning frequency of the driver within the preset duration based on the visual information;
an eighth determining unit configured to determine a left-right yaw frequency of the vehicle during running based on the vehicle running state information;
a ninth determining unit, configured to determine that the driver is in a fatigue driving state if a left-right deflection frequency of the vehicle in a driving process is greater than a preset third frequency threshold, and a single eye-closing duration of the driver in the preset duration is greater than a preset second time threshold;
or if the left-right deflection frequency of the vehicle in the running process is greater than the third frequency threshold value, and the eye closing frequency of the driver in the preset time length is greater than the preset third frequency threshold value, determining that the driver is in the fatigue driving state;
or if the left and right deflection frequency of the vehicle in the driving process is greater than the third frequency threshold value, and the yawning frequency of the driver in the preset duration is greater than a preset fourth frequency threshold value, determining that the driver is in the fatigue driving state.
In an optional implementation manner, the determining module includes:
a tenth determining unit, configured to determine that the driver is in the fatigue driving state if a single duration of eye closure of the driver within the preset duration is greater than a preset second limit time threshold, where the second limit time threshold is equal to m times of the second time threshold, and m is a positive integer greater than 1;
or if the eye closing frequency of the driver in the preset time length is greater than a preset third limit frequency threshold, determining that the driver is in the fatigue driving state, wherein the third limit frequency threshold is equal to k times of the third frequency threshold, and k is a positive integer greater than 1;
or if the yawning frequency of the driver in the preset time length is greater than a preset fourth limiting frequency threshold, determining that the driver is in the fatigue driving state, wherein the fourth limiting frequency threshold is equal to g times of the fourth frequency threshold, and g is a positive integer greater than 1.
In an alternative implementation, the vehicle driving state information includes a direction angle in which the vehicle is driven;
the third determining unit or the eighth determining unit is specifically configured to:
and determining the left and right deflection frequency of the vehicle during running based on the direction angle of the vehicle running.
Compared with the prior art, the driving state determining method provided by the first aspect of the embodiment of the application can reduce the probability of misjudgment and improve the accuracy of driving state judgment by determining the driving state of the driver based on the facial feature information and the vehicle driving state information.
Compared with the prior art, the second to fourth aspects of the embodiments of the present application provide the same beneficial effects as the first aspect, which are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a flowchart of an implementation of a driving state determining method according to a first embodiment of the present application;
FIG. 2 is a flowchart illustrating an implementation of S102 in the first embodiment of FIG. 1;
FIG. 3 is a flowchart illustrating a second embodiment of S102 shown in FIG. 1;
FIG. 4 is a flowchart illustrating a third embodiment of S102 shown in FIG. 1;
FIG. 5 is a flowchart illustrating a fourth embodiment of S102 shown in FIG. 1;
fig. 6 is a schematic structural diagram of a driving state determination device provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be noted that a conventional vehicle-mounted driver status monitoring system (Driver Status Monitor, DSM; also called Driver Monitoring System, DMS) mainly analyzes the driver's face image captured by a camera based on computer vision technology, and determines whether the driver shows an abnormal driving state, such as distracted driving or fatigue driving, or abnormal behavior, such as smoking, making a phone call or eating. Because it analyzes only the driver's face image, this method of determining the driving state suffers a certain rate of misjudgment. For example, in certain situations a driver may frequently look at the outside rear-view mirrors, and that action may be misjudged as distracted driving; likewise, some drivers are inherently accustomed to squinting, and for such people the driving state cannot be accurately determined by computer vision analysis alone.
In the embodiment of the present application, the motion state of the vehicle itself is combined with the computer vision-based technology to determine the driving state of the driver, so as to improve the accuracy of the driving state determination.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples. As shown in fig. 1, it is a flowchart of an implementation of a driving state determining method provided in a first embodiment of the present application, and this embodiment may be implemented by hardware or software inside a vehicle. The details are as follows:
s101, acquiring a target image frame and vehicle running state information within a preset time length, wherein the target image frame comprises facial feature information of a driver.
The target image frame can be captured by a camera installed in the vehicle in advance; understandably, the camera is usually mounted at a position that best captures a frontal image of the driver's face. The vehicle driving state information may be measured by an inertial measurement module pre-installed on the vehicle, such as a gyroscope or an accelerometer.
In the present embodiment, the driving state is determined by acquiring the target image frame including the facial feature information of the driver and the vehicle driving state information within a preset time period and combining the facial feature information of the driver and the vehicle driving state information.
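A minimal sketch of the "preset time length" windowing in S101, assuming samples carry timestamps in seconds; the class name and the payload shape are illustrative, not from the application:

```python
from collections import deque


class RollingWindow:
    """Keeps only the samples whose timestamps fall within the last
    window_s seconds -- the 'preset time length' of S101."""

    def __init__(self, window_s):
        self.window_s = window_s
        self._samples = deque()  # (timestamp_s, payload) pairs

    def push(self, t, payload):
        """Append a sample and evict everything older than the window."""
        self._samples.append((t, payload))
        while self._samples and t - self._samples[0][0] > self.window_s:
            self._samples.popleft()

    def __len__(self):
        return len(self._samples)
```

Both the camera frames and the inertial samples can be kept in such windows so that S102 always evaluates its metrics over the same time span.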
S102, determining the driving state of the driver based on the facial feature information and the vehicle driving state information.
It can be understood that if there are objects such as cigarettes, mobile phones, cups or food around the driver, the driver may fall into a distracted driving state, for example drinking water, smoking, making a phone call or eating while driving. In addition, a driver in a fatigue driving state usually yawns or closes the eyes. In either the distracted or the fatigued state, the facial feature information of the driver changes, and the driving state information of the vehicle controlled by the driver changes correspondingly; that is, the driving state of the driver can be determined from the facial feature information together with the vehicle driving state information.
Specifically, as shown in fig. 2, it is a flowchart of the first embodiment of S102 in fig. 1. As shown in fig. 2, S102 includes:
s1021, determining visual information of the driver based on the facial feature information.
The facial feature information includes preset key facial features, such as at least one of eye feature information, mouth feature information, eyebrow feature information and nose feature information. By way of example and not limitation, in this example the facial feature information includes eye feature information, the visual information includes eye state information, and the determining the visual information of the driver based on the facial feature information includes:
and analyzing the eye characteristic information by using a pre-trained vision analysis model to obtain the eye state information.
The input of the visual analysis model trained in advance is facial feature information, and the output is visual information; in this embodiment, the facial feature information is eye feature information, and the visual information is eye state information.
S1022, determining the single duration of the deviation of the sight line from the vehicle running direction and/or the frequency of the deviation of the sight line from the vehicle running direction of the driver in the preset time length based on the visual information.
Correspondingly, the determining, based on the visual information, the single duration of the deviation of the sight line from the vehicle driving direction and/or the frequency of the deviation of the sight line from the vehicle driving direction by the driver within the preset time period includes:
and determining the single duration of the deviation of the sight line from the vehicle driving direction and/or the frequency of the deviation of the sight line from the vehicle driving direction of the driver in the preset duration based on the eye state information.
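Assuming the eye state information reduces to one boolean per frame (True when the line of sight is off the driving direction), the single duration and the deviation frequency of S1022 can be sketched as a run-length pass over the window; the representation is a hypothetical simplification:

```python
def gaze_deviation_metrics(off_road_flags, fps):
    """Per-frame booleans (True = line of sight off the driving
    direction) -> (longest single deviation in seconds, number of
    deviation episodes in the window)."""
    longest = run = episodes = 0
    for off in off_road_flags:
        if off:
            run += 1
            longest = max(longest, run)
        else:
            if run:
                episodes += 1
            run = 0
    if run:                      # window ended mid-deviation
        episodes += 1
    return longest / fps, episodes
```

The same pass, applied to closed-eye or yawning flags, yields the fatigue metrics used later in S1042.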
And S1023, determining the left and right deflection frequency of the vehicle in the running process based on the vehicle running state information.
By way of example and not limitation, the vehicle travel state information includes a direction angle at which the vehicle travels. The S1023 includes:
and determining the left and right deflection frequency of the vehicle during running based on the direction angle of the vehicle running.
It should be noted that the vehicle driving state information usually further includes an acceleration value or an angular velocity value. In this embodiment, the frequency of left and right deflection of the vehicle during driving is measured from a plurality of direction angles of the vehicle; this frequency reflects the lane-keeping ability of the vehicle, from which it is judged whether the driver shows abnormal driving behavior.
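One simple way to turn a sequence of direction angles into a left-right deflection frequency is to count sign reversals of the heading change, ignoring changes below a noise dead band. This is a sketch; the dead-band value is a hypothetical placeholder:

```python
def lateral_deflection_frequency(headings_deg, sample_rate_hz,
                                 dead_band_deg=1.0):
    """Count left/right heading reversals per second from direction-
    angle samples; changes below dead_band_deg are treated as noise."""
    reversals, last_sign = 0, 0
    for prev, cur in zip(headings_deg, headings_deg[1:]):
        delta = cur - prev
        if abs(delta) < dead_band_deg:
            continue
        sign = 1 if delta > 0 else -1
        if last_sign and sign != last_sign:
            reversals += 1   # the yaw direction flipped once
        last_sign = sign
    duration_s = len(headings_deg) / sample_rate_hz
    return reversals / duration_s if duration_s else 0.0
```

A steadily tracking vehicle yields a near-zero value, while frequent lane wander drives the value up, which is what the first and third frequency thresholds test against.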
S1024, if the left-right deflection frequency of the vehicle in the running process is larger than a preset first frequency threshold value, and the single duration of the sight line of the driver deviating from the running direction of the vehicle in the preset duration is larger than a preset first time threshold value, determining that the driver is in a distracted driving state;
or if the left-right deflection frequency of the vehicle in the running process is greater than a preset first frequency threshold value, and the frequency of the line of sight of the driver deviating from the running direction of the vehicle in the preset time length is greater than a preset second frequency threshold value, determining that the driver is in the distracted driving state.
It can be understood that when the driver is inattentive during driving, the line of sight usually deviates from the driving direction of the vehicle; however, judging whether the driver is distracted from the gaze deviation alone may lead to misjudgment.
As shown in fig. 3, it is a flowchart of the second specific implementation of S102 in fig. 1. As can be seen from fig. 3, compared with the embodiment shown in fig. 2, the specific implementation processes of S1031 and S1032 are the same as those of S1021 and S1022; the difference is that S1033 follows S1032, and the contents of S1023 and S1024 are not included. The details are as follows:
S1033, if the single duration for which the line of sight of the driver deviates from the vehicle running direction within the preset duration is greater than a preset first limit time threshold, determining that the driver is in a distracted driving state; the first limit time threshold is equal to n times the first time threshold, n being a positive integer greater than 1.
It will be appreciated that under certain special conditions, such as a failure of the inertial measurement module, the method of fig. 2 may fail to detect distracted driving; therefore, in this example, the accuracy of the distraction determination is improved by presetting the first limit time threshold. It should be noted that the first limit time threshold is generally n times the first time threshold, where n must be determined empirically through preset experiments, weighing the risk of an accident against the accuracy of the distraction judgment.
Fig. 4 is a flowchart of the third specific implementation of S102 in fig. 1. As can be seen from fig. 4, compared with the embodiment shown in fig. 2, the specific implementation process of S1043 is the same as that of S1023, while S1041, S1042 and S1044 differ from S1021, S1022 and S1024. The details are as follows:
s1041, determining visual information of the driver based on the facial feature information.
By way of example and not limitation, in this example, the facial feature information includes eye feature information and mouth feature information, and the visual information includes eye state information and mouth state information; the determining visual information of the driver based on the facial feature information includes:
and analyzing the eye characteristic information by using a pre-trained visual analysis model to obtain the eye state information, and analyzing the mouth characteristic information by using the visual analysis model to obtain the mouth state information.
In this example, the input of the visual analysis model trained in advance is eye feature information and mouth feature information, and the output is eye state information and mouth state information.
S1042, determining at least one of single eye-closure duration, eye-closure frequency and yawning frequency of the driver in the preset duration based on the visual information.
Correspondingly, the determining, based on the visual information, at least one of the single duration of eye closure, the frequency of eye closure, and the frequency of yawning by the driver for the preset duration comprises:
determining the single duration of eye closure or the frequency of eye closure of the driver within the preset time length based on the eye state information;
determining a yawning frequency of the driver within the preset time period based on the mouth state information.
And S1043, determining the left and right deflection frequency of the vehicle in the driving process based on the vehicle driving state information.
S1044, if the left-right deflection frequency of the vehicle during driving is greater than a preset third frequency threshold and the single eye-closure duration of the driver within the preset duration is greater than a preset second time threshold, determining that the driver is in a fatigue driving state;
or, if the left-right deflection frequency of the vehicle during driving is greater than the third frequency threshold and the eye-closure frequency of the driver within the preset duration is greater than the preset third frequency threshold, determining that the driver is in the fatigue driving state;
or, if the left-right deflection frequency of the vehicle during driving is greater than the third frequency threshold and the yawning frequency of the driver within the preset duration is greater than a preset fourth frequency threshold, determining that the driver is in the fatigue driving state.
It can be understood that, during driving, the driver's eyes may close, or the driver may yawn, because of a sudden change in the surrounding environment, and the preset thresholds for judging eye closure or yawning therefore should not be set too high, in order to prevent accidents. Accordingly, in this example, when determining the fatigue driving state, the left-right deflection frequency of the vehicle during driving is combined with the single eye-closure duration of the driver within the preset duration, or with the eye-closure frequency of the driver within the preset duration, or with the yawning frequency of the driver within the preset duration, thereby improving the accuracy of determining the fatigue driving state.
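The combined judgment of S1044 can be sketched as a short decision rule. This Python sketch is illustrative only and is not the patent's implementation; all threshold values are placeholder assumptions, since the patent does not prescribe concrete values. Note that the eye-closure frequency is compared against the same third frequency threshold used for the vehicle deflection frequency, as the text above states.

```python
# Placeholder thresholds (assumed values, not prescribed by the patent).
THIRD_FREQ_THRESHOLD = 4     # vehicle left-right deflections per window
SECOND_TIME_THRESHOLD = 1.5  # seconds, longest single eye closure
FOURTH_FREQ_THRESHOLD = 3    # yawns per window


def is_fatigued(yaw_freq, single_closure_s, closure_freq, yawn_freq):
    """Mirror of S1044: fatigue is flagged only when abnormal vehicle
    deflection coincides with at least one abnormal driver metric."""
    if yaw_freq <= THIRD_FREQ_THRESHOLD:
        return False  # no abnormal vehicle deflection: this rule does not fire
    return (single_closure_s > SECOND_TIME_THRESHOLD
            or closure_freq > THIRD_FREQ_THRESHOLD  # threshold reused per the text
            or yawn_freq > FOURTH_FREQ_THRESHOLD)
```

With these assumed values, a driver metric alone never triggers this rule; a prolonged eye closure counts only while the vehicle is also deflecting abnormally, which is exactly the false-positive protection described above.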
Fig. 5 is a flowchart illustrating a specific implementation of S102 in Fig. 1 according to the fourth embodiment. As can be seen from Fig. 5, compared with the embodiment shown in Fig. 4, the implementation processes of S1051 to S1053 are the same as those of S1041 to S1042; the difference is that S1054 is further included after S1053, while the contents of S1043 and S1044 are not included. This is described in detail below.
S1054, if the single eye-closure duration of the driver within the preset duration is greater than a preset second limit time threshold, determining that the driver is in the fatigue driving state, wherein the second limit time threshold is equal to m times the second time threshold, and m is a positive integer greater than 1;
or, if the eye-closure frequency of the driver within the preset duration is greater than a preset third limit frequency threshold, determining that the driver is in the fatigue driving state, wherein the third limit frequency threshold is equal to k times the third frequency threshold, and k is a positive integer greater than 1;
or, if the yawning frequency of the driver within the preset duration is greater than a preset fourth limit frequency threshold, determining that the driver is in the fatigue driving state, wherein the fourth limit frequency threshold is equal to g times the fourth frequency threshold, and g is a positive integer greater than 1.
It can be understood that under some special conditions, for example when the inertial measurement module fails, fatigue driving may be missed by the method of Fig. 4. Therefore, in this example, the accuracy of determining the fatigue driving state is further improved by presetting at least one of the second limit time threshold, the third limit frequency threshold, and the fourth limit frequency threshold. It should be noted that the second limit time threshold, the third limit frequency threshold, and the fourth limit frequency threshold all need to be determined from experience and preliminary experiments, comprehensively weighing the risk of accidents against the accuracy of fatigue-driving determination.
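The S1054 fallback flags fatigue from the driver metrics alone when they exceed the stricter "limit" thresholds (multiples m, k, g of the base thresholds). A minimal illustrative Python sketch follows; the base threshold values and the multipliers are assumptions, not values from the patent.

```python
# Base thresholds and multipliers (assumed values, not prescribed by the patent).
SECOND_TIME_THRESHOLD = 1.5  # seconds, longest single eye closure
THIRD_FREQ_THRESHOLD = 4     # eye-closure events per window
FOURTH_FREQ_THRESHOLD = 3    # yawns per window
M, K, G = 2, 2, 2            # each a positive integer greater than 1


def is_fatigued_fallback(single_closure_s, closure_freq, yawn_freq):
    """Mirror of S1054: fatigue is flagged without any vehicle-deflection
    input, so the rule still applies if the inertial measurement module
    has failed, at the cost of requiring much stronger driver evidence."""
    return (single_closure_s > M * SECOND_TIME_THRESHOLD
            or closure_freq > K * THIRD_FREQ_THRESHOLD
            or yawn_freq > G * FOURTH_FREQ_THRESHOLD)
```

Because each limit threshold is at least twice its base threshold, borderline readings that would need corroborating vehicle deflection under S1044 do not trigger this fallback.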
As can be seen from the above analysis, the driving state determination method provided in the embodiments of the present application includes: acquiring a target image frame and vehicle driving state information within a preset duration, wherein the target image frame includes facial feature information of a driver; and determining a driving state of the driver based on the facial feature information and the vehicle driving state information. Compared with the prior art, determining the driving state of the driver based on both the facial feature information and the vehicle driving state information reduces the probability of misjudgment and improves the accuracy of driving state determination.
Fig. 6 is a schematic structural diagram of a driving state determination device according to an embodiment of the present application. As can be seen from Fig. 6, the driving state determination device 6 of this embodiment includes an acquisition module 601 and a determination module 602. The acquisition module 601 is configured to acquire a target image frame and vehicle driving state information within a preset duration, wherein the target image frame includes facial feature information of a driver;
a determining module 602, configured to determine a driving state of the driver based on the facial feature information and the vehicle driving state information.
In an optional implementation manner, the determining module 602 includes:
a first determination unit configured to determine visual information of the driver based on the facial feature information;
a second determination unit configured to determine, based on the visual information, the single duration for which the driver's line of sight deviates from the vehicle driving direction and/or the frequency at which the line of sight deviates from the vehicle driving direction within the preset duration;
a third determination unit configured to determine a left-right yaw frequency of the vehicle during running based on the vehicle running state information;
a fourth determining unit configured to determine that the driver is in a distracted driving state if the left-right deflection frequency of the vehicle during driving is greater than a preset first frequency threshold and the single duration for which the driver's line of sight deviates from the vehicle driving direction within the preset duration is greater than a preset first time threshold;
or, to determine that the driver is in the distracted driving state if the left-right deflection frequency of the vehicle during driving is greater than the preset first frequency threshold and the frequency at which the driver's line of sight deviates from the vehicle driving direction within the preset duration is greater than a preset second frequency threshold.
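The distraction rule implemented by the fourth determining unit can likewise be sketched in a few lines. This Python sketch is illustrative only; the threshold values are placeholder assumptions, not values prescribed by the patent.

```python
# Placeholder thresholds (assumed values, not prescribed by the patent).
FIRST_FREQ_THRESHOLD = 4    # vehicle left-right deflections per window
FIRST_TIME_THRESHOLD = 2.0  # seconds, longest single gaze deviation
SECOND_FREQ_THRESHOLD = 5   # gaze-deviation events per window


def is_distracted(yaw_freq, single_deviation_s, deviation_freq):
    """Distraction requires abnormal vehicle deflection plus an abnormal
    gaze metric: either one long deviation or many deviations."""
    if yaw_freq <= FIRST_FREQ_THRESHOLD:
        return False  # no abnormal vehicle deflection: rule does not fire
    return (single_deviation_s > FIRST_TIME_THRESHOLD
            or deviation_freq > SECOND_FREQ_THRESHOLD)
```

The structure is deliberately parallel to the fatigue rule: a vehicle-behavior gate followed by driver-metric conditions.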
In an optional implementation manner, the determining module 602 includes:
a fifth determining unit configured to determine that the driver is in a distracted driving state if the single duration for which the driver's line of sight deviates from the vehicle driving direction within the preset duration is greater than a preset first limit time threshold, wherein the first limit time threshold is equal to n times the first time threshold, and n is a positive integer greater than 1.
In an alternative implementation, the visual information includes eye state information;
the first determination unit is specifically configured to:
analyze the facial feature information by using a pre-trained visual analysis model to obtain the eye state information;
correspondingly, the second determining unit is configured to:
and determining the single duration of the deviation of the sight line from the vehicle driving direction and/or the frequency of the deviation of the sight line from the vehicle driving direction of the driver in the preset duration based on the eye state information.
In an optional implementation manner, the determining module 602 includes:
a sixth determination unit configured to determine visual information of the driver based on the facial feature information;
a seventh determining unit configured to determine at least one of the single eye-closure duration, the eye-closure frequency, and the yawning frequency of the driver within the preset duration based on the visual information;
an eighth determining unit configured to determine a left-right yaw frequency of the vehicle during running based on the vehicle running state information;
a ninth determining unit configured to determine that the driver is in a fatigue driving state if the left-right deflection frequency of the vehicle during driving is greater than a preset third frequency threshold and the single eye-closure duration of the driver within the preset duration is greater than a preset second time threshold;
or, to determine that the driver is in the fatigue driving state if the left-right deflection frequency of the vehicle during driving is greater than the third frequency threshold and the eye-closure frequency of the driver within the preset duration is greater than the preset third frequency threshold;
or, to determine that the driver is in the fatigue driving state if the left-right deflection frequency of the vehicle during driving is greater than the third frequency threshold and the yawning frequency of the driver within the preset duration is greater than a preset fourth frequency threshold.
In an optional implementation manner, the determining module 602 includes:
a tenth determining unit configured to determine that the driver is in the fatigue driving state if the single eye-closure duration of the driver within the preset duration is greater than a preset second limit time threshold, wherein the second limit time threshold is equal to m times the second time threshold, and m is a positive integer greater than 1;
or, to determine that the driver is in the fatigue driving state if the eye-closure frequency of the driver within the preset duration is greater than a preset third limit frequency threshold, wherein the third limit frequency threshold is equal to k times the third frequency threshold, and k is a positive integer greater than 1;
or, to determine that the driver is in the fatigue driving state if the yawning frequency of the driver within the preset duration is greater than a preset fourth limit frequency threshold, wherein the fourth limit frequency threshold is equal to g times the fourth frequency threshold, and g is a positive integer greater than 1.
In an alternative implementation, the vehicle driving state information includes a direction angle in which the vehicle is driven;
the third determining unit or the eighth determining unit is specifically configured to:
determining the left-right deflection frequency of the vehicle during driving based on the direction angle in which the vehicle is driven.
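As an illustration of how the deflection frequency might be derived from a sampled heading angle, the following Python sketch counts left/right turning reversals as sign alternations of the per-sample heading change above a small noise band. This is not the patent's method; the noise band, sampling scheme, and function name are assumptions.

```python
def yaw_deflection_count(headings_deg, noise_deg=0.5):
    """Count left/right direction reversals over a window of sampled
    heading angles (degrees). Heading changes smaller than `noise_deg`
    are treated as sensor jitter and ignored."""
    count = 0
    last_sign = 0  # +1 turning one way, -1 the other, 0 = no turn seen yet
    for prev, curr in zip(headings_deg, headings_deg[1:]):
        delta = curr - prev
        if abs(delta) < noise_deg:
            continue  # below the noise band
        sign = 1 if delta > 0 else -1
        if last_sign and sign != last_sign:
            count += 1  # the vehicle reversed its turning direction
        last_sign = sign
    return count
```

Dividing the count by the window length would give the left-right deflection frequency compared against the first or third frequency threshold.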
FIG. 7 is a schematic view of a vehicle provided in an embodiment of the present application. As shown in Fig. 7, the vehicle 7 of this embodiment includes: a processor 70, a memory 71, and a computer program 72, such as a driving state determination program, stored in the memory 71 and executable on the processor 70. When executing the computer program 72, the processor 70 implements the steps in the driving state determination method embodiments described above, such as steps S101 to S102 shown in Fig. 1.
Illustratively, the computer program 72 may be divided into one or more modules/units, which are stored in the memory 71 and executed by the processor 70 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution of the computer program 72 in the vehicle 7. For example, the computer program 72 may be divided into an acquisition module and a determination module (modules in a virtual device), and the specific functions of each module are as follows:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a target image frame and vehicle running state information within a preset time length, and the target image frame comprises facial feature information of a driver;
a determination module to determine a driving state of the driver based on the facial feature information and the vehicle driving state information.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of communication units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by the legislation and patent practice of a given jurisdiction; for example, in some jurisdictions, under legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A driving state determination method characterized by comprising:
acquiring a target image frame and vehicle running state information within a preset time length, wherein the target image frame comprises facial feature information of a driver;
determining a driving state of the driver based on the facial feature information and the vehicle driving state information.
2. The driving state determination method according to claim 1, wherein the determining the driving state of the driver based on the facial feature information and the vehicle travel state information includes:
determining visual information of the driver based on the facial feature information;
determining a single duration of time that the line of sight of the driver deviates from the vehicle driving direction and/or a frequency that the line of sight deviates from the vehicle driving direction within the preset duration based on the visual information;
determining a left-right deflection frequency of the vehicle during running based on the vehicle running state information;
if the left and right deflection frequency of the vehicle in the running process is greater than a preset first frequency threshold value, and the single duration of the sight line of the driver deviating from the running direction of the vehicle in the preset time length is greater than a preset first time threshold value, determining that the driver is in a distracted driving state;
or if the left-right deflection frequency of the vehicle in the running process is greater than a preset first frequency threshold value, and the frequency of the line of sight of the driver deviating from the running direction of the vehicle in the preset time length is greater than a preset second frequency threshold value, determining that the driver is in the distracted driving state.
3. The driving state determination method according to claim 2, further comprising, after the determining, based on the visual information, a single duration that the driver deviates from the vehicle travel direction in line of sight and/or a frequency that the line of sight deviates from the vehicle travel direction within the preset time period:
if the single duration for which the line of sight of the driver deviates from the vehicle driving direction within the preset duration is greater than a preset first limit time threshold, determining that the driver is in a distracted driving state; the first limit time threshold is equal to n times the first time threshold, and n is a positive integer greater than 1.
4. The driving state determination method according to claim 2, wherein the facial feature information includes eye feature information, and the visual information includes eye state information;
the determining visual information of the driver based on the facial feature information includes:
analyzing the eye feature information by using a pre-trained visual analysis model to obtain the eye state information;
correspondingly, the determining, based on the visual information, the single duration of the deviation of the sight line from the vehicle driving direction and/or the frequency of the deviation of the sight line from the vehicle driving direction by the driver within the preset time period includes:
and determining the single duration of the deviation of the sight line from the vehicle driving direction and/or the frequency of the deviation of the sight line from the vehicle driving direction of the driver in the preset duration based on the eye state information.
5. The driving state determination method according to claim 1, wherein the determining the driving state of the driver based on the facial feature information and the vehicle travel state information includes:
determining visual information of the driver based on the facial feature information;
determining at least one of a single eye closure duration, a frequency of eye closure, and a frequency of yawning for the driver within the preset duration based on the visual information;
determining a left-right deflection frequency of the vehicle during running based on the vehicle running state information;
if the left-right deflection frequency of the vehicle in the driving process is larger than a preset third frequency threshold value, and the single eye closing duration of the driver in the preset duration is larger than a preset second time threshold value, determining that the driver is in a fatigue driving state;
or if the left-right deflection frequency of the vehicle in the running process is greater than the third frequency threshold value, and the eye closing frequency of the driver in the preset time length is greater than the preset third frequency threshold value, determining that the driver is in the fatigue driving state;
or if the left and right deflection frequency of the vehicle in the driving process is greater than the third frequency threshold value, and the yawning frequency of the driver in the preset duration is greater than a preset fourth frequency threshold value, determining that the driver is in the fatigue driving state.
6. The driving state determination method of claim 5, further comprising, after the determining, based on the visual information, the single duration of eye closure, the frequency of eye closure, or the frequency of yawning for the driver for the preset duration:
if the single eye-closure duration of the driver within the preset time length is greater than a preset second limit time threshold, determining that the driver is in the fatigue driving state, wherein the second limit time threshold is equal to m times the second time threshold, and m is a positive integer greater than 1;
or if the eye-closure frequency of the driver within the preset time length is greater than a preset third limit frequency threshold, determining that the driver is in the fatigue driving state, wherein the third limit frequency threshold is equal to k times the third frequency threshold, and k is a positive integer greater than 1;
or if the yawning frequency of the driver within the preset time length is greater than a preset fourth limit frequency threshold, determining that the driver is in the fatigue driving state, wherein the fourth limit frequency threshold is equal to g times the fourth frequency threshold, and g is a positive integer greater than 1.
7. The driving state determination method according to any one of claims 2 to 6, wherein the vehicle travel state information includes a direction angle at which the vehicle travels;
the determining the left and right deflection frequency of the vehicle during running based on the vehicle running state information comprises:
and determining the left and right deflection frequency of the vehicle during running based on the direction angle of the vehicle running.
8. A driving state determination device characterized by comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a target image frame and vehicle running state information within a preset time length, and the target image frame comprises facial feature information of a driver;
a determination module to determine a driving state of the driver based on the facial feature information and the vehicle driving state information.
9. A vehicle comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the driving state determination method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the driving state determination method according to any one of claims 1 to 7.
CN201911267783.9A 2019-12-11 2019-12-11 Driving state determining method, driving state determining device, vehicle and storage medium Active CN111160126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911267783.9A CN111160126B (en) 2019-12-11 2019-12-11 Driving state determining method, driving state determining device, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911267783.9A CN111160126B (en) 2019-12-11 2019-12-11 Driving state determining method, driving state determining device, vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN111160126A true CN111160126A (en) 2020-05-15
CN111160126B CN111160126B (en) 2023-12-19

Family

ID=70556721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911267783.9A Active CN111160126B (en) 2019-12-11 2019-12-11 Driving state determining method, driving state determining device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN111160126B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112883834A (en) * 2021-01-29 2021-06-01 重庆长安汽车股份有限公司 DMS system distraction detection method, DMS system distraction detection system, DMS vehicle, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002331849A (en) * 2001-05-07 2002-11-19 Nissan Motor Co Ltd Driving behavior intention detector
CN101984478A (en) * 2010-08-03 2011-03-09 浙江大学 Abnormal S-type driving warning method based on binocular vision lane marking detection
CN102881116A (en) * 2011-07-13 2013-01-16 上海库源电气科技有限公司 System and method for pre-warning of fatigue driving
CN108021875A (en) * 2017-11-27 2018-05-11 上海灵至科技有限公司 A kind of vehicle driver's personalization fatigue monitoring and method for early warning
WO2019028798A1 (en) * 2017-08-10 2019-02-14 北京市商汤科技开发有限公司 Method and device for monitoring driving condition, and electronic device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002331849A (en) * 2001-05-07 2002-11-19 Nissan Motor Co Ltd Driving behavior intention detector
CN101984478A (en) * 2010-08-03 2011-03-09 浙江大学 Abnormal S-type driving warning method based on binocular vision lane marking detection
CN102881116A (en) * 2011-07-13 2013-01-16 上海库源电气科技有限公司 System and method for pre-warning of fatigue driving
WO2019028798A1 (en) * 2017-08-10 2019-02-14 北京市商汤科技开发有限公司 Method and device for monitoring driving condition, and electronic device
CN109803583A (en) * 2017-08-10 2019-05-24 北京市商汤科技开发有限公司 Driver monitoring method, apparatus and electronic equipment
CN108021875A (en) * 2017-11-27 2018-05-11 上海灵至科技有限公司 A kind of vehicle driver's personalization fatigue monitoring and method for early warning


Also Published As

Publication number Publication date
CN111160126B (en) 2023-12-19

Similar Documents

Publication Publication Date Title
CN109937152B (en) Driving state monitoring method and device, driver monitoring system and vehicle
CN111079476B (en) Driving state analysis method and device, driver monitoring system and vehicle
US10915769B2 (en) Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium
JP7146959B2 (en) DRIVING STATE DETECTION METHOD AND DEVICE, DRIVER MONITORING SYSTEM AND VEHICLE
KR102305914B1 (en) Driving management methods and systems, in-vehicle intelligent systems, electronic devices, media
EP3872689B1 (en) Liveness detection method and device, electronic apparatus, storage medium and related system using the liveness detection method
US20210197849A1 (en) Warning device and driving tendency analysis method
CN110889351B (en) Video detection method, device, terminal equipment and readable storage medium
CA3025964C (en) Eyeball tracking method and apparatus, and device
US20210133468A1 (en) Action Recognition Method, Electronic Device, and Storage Medium
KR102543161B1 (en) Distracted Driving Monitoring Methods, Systems and Electronics
EP3113073A1 (en) Determination device, determination method, and non-transitory storage medium
US20140320320A1 (en) Vehicle detecting system and method
US10002300B2 (en) Apparatus and method for monitoring driver's concentrativeness using eye tracing
WO2016043515A1 (en) Head-mounted display device controlled by tapping, control method therefor, and computer program for controlling display device
CN107405121B (en) Method and device for detecting a fatigue state and/or a sleep state of a driver of a vehicle
EP3951645A1 (en) Method and apparatus for detecting state of holding steering wheel by hands
US20180295290A1 (en) Head-mounted display, display control method, and program
CN110909718B (en) Driving state identification method and device and vehicle
WO2021105985A1 (en) Methods for detecting phantom projection attacks against computer vision algorithms
CN110969116A (en) Method for determining gazing point position and related device
CN111160126A (en) Driving state determination method and device, vehicle and storage medium
US20180022357A1 (en) Driving recorder system
CN111062300A (en) Driving state detection method, device, equipment and computer readable storage medium
CN112052770A (en) Method, apparatus, medium, and electronic device for fatigue detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant