CN114523979A - Fatigue driving detection method and system - Google Patents

Fatigue driving detection method and system Download PDF

Info

Publication number
CN114523979A
Authority
CN
China
Prior art keywords
fatigue
driving
vehicle
face image
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011196284.8A
Other languages
Chinese (zh)
Inventor
谭敏谊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Tuweihui Information Technology Co ltd
Original Assignee
Guangzhou Tuweihui Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Tuweihui Information Technology Co ltd
Priority to CN202011196284.8A
Publication of CN114523979A
Legal status: Withdrawn

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00Registering or indicating the working of vehicles
    • G07C5/008Registering or indicating the working of vehicles communicating information to a remotely located station
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02Alarms for ensuring the safety of persons
    • G08B21/06Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B7/00Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00
    • G08B7/06Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00 using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W2040/0818Inactivity or incapacity of driver
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W2040/0872Driver physiology
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001Details of the control system
    • B60W2050/0019Control system elements or transfer functions

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to the field of automobile driving safety, in particular to a fatigue driving detection method and a system, wherein the method comprises the following steps: step S1: collecting driving information to establish a data set; the data set comprises an image data set and an operational data set; the image data set is used for storing a face image of the driver; the operation data set is used for storing the running information of the vehicle; step S2: inputting the image data set into a first detection model, and calculating a first fatigue value; step S3: inputting the operation data set into a second detection model, and calculating a second fatigue value; step S4: judging whether the driver is in a fatigue state or not according to the first fatigue value and the second fatigue value; step S5: and taking corresponding measures according to the judgment result. Compared with the prior art, the method has stronger applicability.

Description

Fatigue driving detection method and system
Technical Field
The invention relates to the field of automobile driving safety, in particular to a fatigue driving detection method and system.
Background
Fatigue driving refers to the phenomenon in which, after driving a vehicle continuously for a long time, a driver's physiological and psychological functions become disordered and driving skill objectively declines, which easily leads to traffic accidents. A driver with poor or insufficient sleep is especially prone to fatigue when driving for long periods. Driving fatigue affects the driver's attention, sensation, perception, thinking, judgment, will, decision-making and movement. A driver who continues to drive after becoming fatigued may feel drowsy and weak-limbed, lose concentration, suffer reduced judgment, or even become absent-minded or experience momentary memory loss; actions may be delayed or premature, and operations may be paused or corrected at the wrong moment. These unsafe factors make road traffic accidents likely. With the rapid development and popularization of automobiles in China, fatigue driving seriously threatens traffic safety, and the formation mechanism of driving fatigue, the characteristics of fatigued driving behavior, and fatigue early-warning and control technologies are gradually becoming major research directions in traffic safety.
In terms of how fatigue driving is detected, existing solutions fall mainly into three categories: methods based on the driver's physiological characteristics, methods based on vehicle motion characteristics, and methods based on the driver's facial characteristics. Physiological characteristics generally include electro-oculogram, electrocardiogram, electromyogram and electroencephalogram signals, and these biological signals tend to be positively correlated with the driver's state. Such methods directly acquire signal values from the driver through worn instruments and analyze them to obtain the fatigue state. Because the information is acquired through sensors in contact with the human body, it reflects the driver's fatigue state directly and with high accuracy. However, such methods are generally intrusive: the driver must be fitted with signal acquisition and processing equipment, which interferes with driving, and the equipment is expensive and difficult to popularize. Fatigue detection based on vehicle motion characteristics needs only vehicle information and no sensor in contact with the driver, so it causes no interference; however, it is strongly affected by road conditions and individual driving habits, so its accuracy is limited and in most cases it can serve only as a secondary reference factor in fusion detection. Detection based on the driver's facial characteristics is simple to install and low in cost, but when the driver moves it is difficult to judge the open or closed state of the eyes, and under weak light at night existing face detection struggles to identify the face accurately, so its accuracy is low in night-time detection. Therefore, a fatigue driving detection method and system with high applicability are needed.
Disclosure of Invention
In order to solve the above problems, the present invention provides a method and a system for detecting fatigue driving, which have higher applicability than the prior art.
The technical scheme adopted by the invention is as follows:
a fatigue driving detection method comprises the following steps:
step S1: collecting driving information to establish a data set;
the data set comprises an image data set and an operational data set;
the image data set is used for storing a face image of the driver; the operation data set is used for storing the running information of the vehicle;
step S2: inputting the image data set into a first detection model, and calculating a first fatigue value;
step S3: inputting the operation data set into a second detection model, and calculating a second fatigue value;
step S4: judging whether the driver is in a fatigue state or not according to the first fatigue value and the second fatigue value;
step S5: and taking corresponding measures according to the judgment result.
In particular, detection methods based on the driver's physiological characteristics require the driver to wear intrusive instruments, which interferes with operation of the vehicle. This scheme therefore adopts a fatigue driving detection method that combines the driver's facial features with the vehicle's motion features. This avoids the situation in which the fatigue state cannot be judged accurately because external interference prevents a clear face image of the driver from being acquired, and likewise avoids the situation in which complex road conditions and the driver's own driving habits prevent an accurate judgment.
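As an informal illustration of how step S4 might combine the two values, the following Python sketch assumes that each fatigue value already carries its weight (as computed in steps S2.7 and S3.4) and that the fusion is a simple sum compared against a fixed threshold; the threshold value and the function name are assumptions made for illustration only and are not specified by this disclosure.

```python
# Minimal sketch of step S4, under the assumptions stated above.
FATIGUE_THRESHOLD = 0.6  # assumed decision threshold; not given in the disclosure

def is_driver_fatigued(first_fatigue_value: float, second_fatigue_value: float) -> bool:
    """Judge the fatigue state from the two (already weighted) fatigue values."""
    return first_fatigue_value + second_fatigue_value >= FATIGUE_THRESHOLD
```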
Further, the step S2 includes:
step S2.1: judging the definition of the face image in the image data set, and if the definition is lower than a preset threshold value, performing step S2.2, otherwise, performing step S2.3;
step S2.2: performing image noise reduction and low-light enhancement on the face image;
step S2.3: acquiring a first weight according to the definition of the face image;
step S2.4: performing face detection on a face image in the image data set;
the face detection is carried out by adopting a stacked classifier of an Adaboost algorithm;
step S2.5: carrying out human eye positioning on the face image subjected to the human face detection;
the human eye positioning adopts a gray projection method;
step S2.6: extracting a first fatigue characteristic from the face image positioned by human eyes;
the first fatigue characteristic is at least one of: PERCLOS value, pupil diameter, blink frequency and mouth opening and closing frequency;
step S2.7: a first fatigue value is calculated based on the first weight and the first fatigue characteristic.
Specifically, the acquired face image is first preprocessed to make it clearer, and a first weight is obtained according to the definition (sharpness) of the face image; the clearer the face image, the higher the first weight, and vice versa. Face detection is then carried out with a stacked classifier based on the Adaboost algorithm. Adaboost is an iterative method whose core idea is to train the same weak classifier on different training sets and then combine the weak classifiers obtained on those sets into a final strong classifier; such a classifier detects faces quickly and with good robustness. Next, human eye positioning is performed on the face image by the gray projection method, whose small amount of calculation allows the eyes to be located quickly. Finally, the first fatigue characteristic is extracted, and the first fatigue value, one of the bases for judging the fatigue state, is obtained from the first weight and the first fatigue characteristic.
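For illustration only, the sketch below shows one plausible implementation of steps S2.1 to S2.5 in Python with OpenCV. The Haar cascade shipped with OpenCV is itself a cascade of AdaBoost-trained stages and is used here merely as a stand-in for the stacked Adaboost classifier; the sharpness measure (variance of the Laplacian), the denoising and low-light enhancement methods (non-local means and CLAHE), the weight mapping and all numeric thresholds are assumptions, not requirements of this disclosure.

```python
import cv2
import numpy as np

SHARPNESS_THRESHOLD = 100.0  # assumed stand-in for the "preset threshold" of step S2.1

def sharpness(gray: np.ndarray) -> float:
    """Estimate the definition of the image as the variance of the Laplacian (assumed metric)."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def preprocess(gray: np.ndarray) -> np.ndarray:
    """Step S2.2: image noise reduction and low-light enhancement (assumed: non-local means + CLAHE)."""
    denoised = cv2.fastNlMeansDenoising(gray, None, 10)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)

def first_weight(gray: np.ndarray) -> float:
    """Step S2.3: the clearer the face image, the higher the first weight (assumed linear mapping)."""
    return min(1.0, sharpness(gray) / (2.0 * SHARPNESS_THRESHOLD))

def detect_face(gray: np.ndarray):
    """Step S2.4: face detection with an AdaBoost-based cascade classifier."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None  # (x, y, w, h) of the first detected face

def locate_eye_band(face_gray: np.ndarray) -> int:
    """Step S2.5: rough vertical eye position by gray projection; the eye band is assumed
    to be the darkest row region in the upper half of the face."""
    upper = face_gray[: face_gray.shape[0] // 2, :]
    row_projection = upper.sum(axis=1)        # horizontal integral projection
    return int(np.argmin(row_projection))     # row index with the lowest total intensity

def first_model_frontend(gray: np.ndarray):
    """Steps S2.1-S2.5 chained together on one grayscale frame."""
    if sharpness(gray) < SHARPNESS_THRESHOLD:      # step S2.1
        gray = preprocess(gray)                    # step S2.2
    w1 = first_weight(gray)                        # step S2.3
    box = detect_face(gray)                        # step S2.4
    if box is None:
        return w1, None
    x, y, w, h = box
    eye_row = locate_eye_band(gray[y:y + h, x:x + w])  # step S2.5
    return w1, (x, y, w, h, eye_row)
```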
Further, said step S2.6 comprises:
step S2.61: adopting a Canny algorithm to carry out edge detection on the human eye region of the face image to obtain the human eye edge;
step S2.62: judging the opening and closing state of the human eyes according to whether the upper edge and the lower edge of the human eyes coincide;
step S2.63: comparing the number of the pixel points in the edge of the human eye with a preset threshold value so as to judge the opening and closing state of the human eye again;
the pixel points are the pixel points of skin color pixels; the preset threshold is a dynamic function and changes along with the change of the face image;
step S2.64: and acquiring a PERCLOS value and a blinking frequency according to the opening and closing state.
Specifically, existing eye-positioning techniques generally have shortcomings that make it difficult to judge the subsequent open or closed state of the eyes, so an accurate PERCLOS value and blinking frequency cannot be obtained. The open or closed state is usually hard to judge when the driver moves with a large amplitude or turns. Although the gray projection method positions the eyes quickly, the result is only a rough localization and not accurate enough. This scheme therefore also uses the Canny algorithm to derive the open or closed state of the eyes more accurately. First, the Canny algorithm performs edge detection on the eye region of the face image to obtain the eye edges. The open or closed state is then judged by whether the upper and lower eyelid edges coincide: if they coincide, the eyes are closed; if they do not, the eyes are open. In addition, when the eyes are half open, in order to obtain the open or closed state more accurately for calculating the fatigue value, this scheme further judges the state from the pixels inside the eye edges. A threshold is preset; when the number of skin-color pixels exceeds the threshold the eyes are judged to be closed, and otherwise open. When the driver moves with a large amplitude or turns, the size of the eye image changes and the number of skin-color pixels changes with it; because the preset threshold is designed as a dynamic function, it changes along with the eye image, so comparing the threshold with the pixel count still judges the open or closed state of the eyes accurately.
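The sketch below illustrates one way steps S2.61 to S2.64 could be realized. The skin-color test (YCrCb bounds), the proportionality constant of the dynamic threshold and the sliding-window statistics are assumptions for illustration; the disclosure only states that the threshold is a dynamic function that follows the face image.

```python
import cv2
import numpy as np

def eye_is_closed(eye_bgr: np.ndarray) -> bool:
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                      # step S2.61: eye edges
    rows_with_edges = np.flatnonzero(edges.sum(axis=1))
    if rows_with_edges.size == 0:
        return True                                       # no edges found: treat as closed
    # Step S2.62: if the upper and lower eyelid edges coincide (edge band is very thin),
    # the eye is judged closed.
    if rows_with_edges[-1] - rows_with_edges[0] <= 2:
        return True
    # Step S2.63: re-judge by counting skin-color pixels inside the eye region.
    ycrcb = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # assumed skin bounds
    skin_pixels = cv2.countNonZero(skin_mask)
    dynamic_threshold = 0.6 * eye_bgr.shape[0] * eye_bgr.shape[1]  # follows eye-image size
    return skin_pixels > dynamic_threshold                # mostly skin => eyelid closed

def perclos_and_blinks(closed_flags: list, fps: float):
    """Step S2.64: PERCLOS = fraction of closed frames in the window;
    blink frequency = open-to-closed transitions per minute (assumed definitions)."""
    perclos = sum(closed_flags) / len(closed_flags)
    blinks = sum(1 for prev, cur in zip(closed_flags, closed_flags[1:]) if not prev and cur)
    blink_freq = blinks * 60.0 * fps / len(closed_flags)
    return perclos, blink_freq
```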
Further, the step S3 includes:
step S3.1: acquiring a second weight according to the definition of the face image;
step S3.2: acquiring corresponding driving operation and road driving specifications according to the driving information of the vehicle;
the running information of the vehicle includes: the road type, the road direction, the vehicle running route and the vehicle running speed are obtained through a GPS; the driving operation is throttle control and steering wheel operation; the road driving standard is the driving speed requirement and the driving route requirement of the current road;
step S3.3: extracting a second fatigue feature from the driving operation and the road driving norm;
the second fatigue characteristic is a duration for which the driving operation meets the current road driving specification;
step S3.4: and calculating a second fatigue value according to the second weight and the second fatigue characteristic.
Specifically, a second weight is obtained according to the definition of the face image; the clearer the face image, the lower the second weight, and vice versa. The running information of the vehicle, comprising the road type, road direction, vehicle running route and vehicle running speed, is then acquired through the GPS. The driving speed requirement and driving route requirement of the current road are obtained from the road type and road direction, and the throttle control and steering wheel operation are obtained from the vehicle running route and running speed. Finally, the second fatigue value, another basis for judging the fatigue state, is obtained from the second weight and the duration for which the throttle control and steering wheel operation meet the current driving speed and driving route requirements.
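A minimal sketch of steps S3.1 to S3.4 follows. The record layout, the compliance test and the mapping from "duration of compliant operation" to a fatigue value are assumptions; the disclosure only states that the second weight falls as the face image gets clearer and that the second fatigue feature is the duration for which the driving operation meets the current road driving specification.

```python
from dataclasses import dataclass

@dataclass
class DrivingSample:
    timestamp: float        # seconds
    speed_kmh: float        # vehicle running speed from GPS
    route_offset_m: float   # deviation from the required driving route (assumed signal)

@dataclass
class RoadSpec:
    speed_limit_kmh: float  # driving speed requirement of the current road
    max_offset_m: float     # driving route requirement (assumed tolerance)

def second_weight(first_weight: float) -> float:
    """Step S3.1: the clearer the face image, the lower the second weight (assumed mapping)."""
    return 1.0 - first_weight

def compliant_duration(samples: list, spec: RoadSpec) -> float:
    """Step S3.3: total time (s) for which the driving operation meets the road specification."""
    duration = 0.0
    for prev, cur in zip(samples, samples[1:]):
        if cur.speed_kmh <= spec.speed_limit_kmh and abs(cur.route_offset_m) <= spec.max_offset_m:
            duration += cur.timestamp - prev.timestamp
    return duration

def second_fatigue_value(weight: float, compliant_s: float, window_s: float = 300.0) -> float:
    """Step S3.4: the less time the operation stays compliant within the window,
    the higher the fatigue value (assumed mapping)."""
    return weight * (1.0 - min(compliant_s, window_s) / window_s)
```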
Further, the measure in step S5 is at least one of the following: a sound alarm, a display alarm, an in-vehicle warning lamp alarm, an out-of-vehicle warning lamp alarm, controlling the vehicle to decelerate, and notifying the control center.
Specifically, when the driver is detected and judged to be driving while fatigued, a speaker in the vehicle can issue a warning voice prompting the driver to stop driving while fatigued; the in-vehicle display can show warning text prompting the driver to stop; and the warning lamp outside the vehicle is turned on to alert nearby vehicles that the driver in this vehicle is fatigued. If the driver ignores these warnings, the in-vehicle system judges the road type and forces the vehicle to decelerate where this does not violate the traffic rules; if the traffic rules do not permit deceleration, information is sent to the control center, which then notifies nearby vehicles to exercise caution.
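The escalation described above might be organized as in the sketch below. All interface names (the vehicle and control-center objects and their methods) are placeholders assumed for illustration; only the escalation order follows the description.

```python
def respond_to_fatigue(vehicle, control_center, driver_still_fatigued: bool) -> None:
    """Step S5: graded response once the driver is judged fatigued (assumed interfaces)."""
    vehicle.play_voice_warning("Please stop driving while fatigued")  # sound alarm
    vehicle.show_display_warning("Fatigue detected - please rest")    # display alarm
    vehicle.set_interior_warning_lamp(on=True)                        # in-vehicle warning lamp
    vehicle.set_exterior_warning_lamp(on=True)                        # warn nearby vehicles

    if not driver_still_fatigued:
        return
    # If the driver ignores the warnings, decelerate only where the road type and
    # traffic rules permit; otherwise report to the control center instead.
    if vehicle.deceleration_permitted_on_current_road():
        vehicle.decelerate()
    else:
        control_center.notify_nearby_vehicles(vehicle.vehicle_id, "fatigued driver")
```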
A fatigue driving detection system, comprising:
the acquisition module is used for acquiring driving information;
the driving information includes a face image of a driver and driving information of a vehicle;
the analysis module is used for analyzing the information acquired by the acquisition module, calculating a first fatigue value and a second fatigue value, and judging whether the driver is in a fatigue state according to the first fatigue value and the second fatigue value;
and the response module is used for taking corresponding measures according to the judgment result of the analysis module.
Further, the parsing module comprises:
the image preprocessing unit is used for calculating a first weight according to the definition of the face image and carrying out image noise reduction and low-light enhancement on the face image;
the face detection unit is used for carrying out face detection on the face image;
the face detection is carried out by adopting a stacked classifier of an Adaboost algorithm;
the human eye positioning unit is used for carrying out human eye positioning on the face image after the face detection;
the human eye positioning adopts a gray projection method;
the first analysis unit is used for extracting a first fatigue characteristic from the face image positioned by human eyes and calculating a first fatigue value according to the first weight and the first fatigue characteristic;
the first fatigue characteristic is at least one of: PERCLOS value, pupil diameter, blink frequency, and mouth opening and closing frequency.
Further, the first parsing unit includes:
the human eye edge subunit is used for positioning the human eye edge;
detecting and acquiring the human eye edge by adopting a Canny algorithm;
the first distinguishing subunit is used for judging the opening and closing state of the human eyes according to whether the upper edge and the lower edge of the human eyes coincide;
the second distinguishing subunit is used for comparing the number of the pixel points in the edge of the human eye with a preset threshold value so as to judge the opening and closing state of the human eye again;
the pixel points are the pixel points of skin color pixels; the preset threshold is a dynamic function and changes along with the change of the face image;
the PERCLOS value acquiring subunit is used for acquiring a PERCLOS value according to the opening and closing state;
and the blink frequency acquisition subunit is used for acquiring the blink frequency according to the number of times of the change of the opening and closing state.
Further, the parsing module further comprises:
the driving information preprocessing unit is used for calculating a second weight according to the definition of the face image and acquiring corresponding driving operation and road driving specifications according to the driving information of the vehicle;
the running information of the vehicle includes: the road type, the road direction, the vehicle running line and the vehicle running speed are acquired by a GPS in the acquisition module; the driving operation is throttle control and steering wheel operation; the road driving standard is a driving speed requirement and a driving route requirement of a current road;
a second analysis unit for extracting a second fatigue feature from the driving operation and the road driving norm, and calculating a second fatigue value from the second weight and the second fatigue feature;
the second fatigue characteristic is a length of time that the driving operation meets a current road driving specification.
Further, the response module includes:
the sound alarm unit is used for prompting the driver to be in a fatigue state by voice;
the display warning unit is used for prompting the driver to be in a fatigue state at present through characters;
the in-vehicle warning lamp alarm unit is used for indicating by light that the driver is currently in a fatigue state;
the out-of-vehicle warning lamp alarm unit is used for indicating by light to nearby vehicles that the driver in the vehicle is currently in a fatigue state;
the control vehicle deceleration unit is used for controlling the vehicle to decelerate;
and the communication warning unit is used for sending information to the control center, which in turn sends information to other nearby vehicles so as to prompt them that the driver in this vehicle is currently in a fatigue state.
Compared with the prior art, the invention has the beneficial effects that:
(1) Facial features and vehicle motion features are used jointly to detect fatigue driving, so the method is more widely applicable and less influenced by the external environment.
(2) The design of the first weight and the second weight makes the detection of fatigue driving more accurate.
(3) Designing the preset threshold as a dynamic function makes the obtained open or closed state of the eyes more accurate, providing favorable conditions for detecting fatigue driving.
Drawings
FIG. 1 is a block diagram of a fatigue driving detection system of the present invention;
FIG. 2 is a diagram of a parsing module according to the present invention;
FIG. 3 is a diagram of a first parsing unit structure according to the present invention;
fig. 4 is a diagram of a response module structure of the present invention.
Detailed Description
The drawings are only for purposes of illustration and are not to be construed as limiting the invention. For the purpose of better illustrating the following embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
Examples
The embodiment provides a fatigue driving detection method, which comprises the following steps:
step S1: collecting driving information to establish a data set;
the data set comprises an image data set and an operational data set;
the image data set is used for storing a face image of the driver; the operation data set is used for storing the running information of the vehicle;
step S2: inputting the image data set into a first detection model, and calculating a first fatigue value;
step S3: inputting the operation data set into a second detection model, and calculating a second fatigue value;
step S4: judging whether the driver is in a fatigue state or not according to the first fatigue value and the second fatigue value;
step S5: and taking corresponding measures according to the judgment result.
In particular, detection methods based on the driver's physiological characteristics require the driver to wear intrusive instruments, which interferes with operation of the vehicle. This scheme therefore adopts a fatigue driving detection method that combines the driver's facial features with the vehicle's motion features. This avoids the situation in which the fatigue state cannot be judged accurately because external interference prevents a clear face image of the driver from being acquired, and likewise avoids the situation in which complex road conditions and the driver's own driving habits prevent an accurate judgment.
Further, the step S2 includes:
step S2.1: judging the definition of the face image in the image data set, and if the definition is lower than a preset threshold value, performing step S2.2, otherwise, performing step S2.3;
step S2.2: performing image noise reduction and low-light enhancement on the face image;
step S2.3: acquiring a first weight according to the definition of the face image;
step S2.4: performing face detection on a face image in the image data set;
the face detection is carried out by adopting a stacked classifier of an Adaboost algorithm;
step S2.5: carrying out human eye positioning on the face image subjected to the human face detection;
the human eye positioning adopts a gray projection method;
step S2.6: extracting a first fatigue characteristic from the face image positioned by human eyes;
the first fatigue characteristic is at least one of: PERCLOS value, pupil diameter, blink frequency and mouth opening and closing frequency;
step S2.7: a first fatigue value is calculated based on the first weight and the first fatigue characteristic.
Specifically, the acquired face image is first preprocessed to make it clearer, and a first weight is obtained according to the definition (sharpness) of the face image; the clearer the face image, the higher the first weight, and vice versa. Face detection is then carried out with a stacked classifier based on the Adaboost algorithm. Adaboost is an iterative method whose core idea is to train the same weak classifier on different training sets and then combine the weak classifiers obtained on those sets into a final strong classifier; such a classifier detects faces quickly and with good robustness. Next, human eye positioning is performed on the face image by the gray projection method, whose small amount of calculation allows the eyes to be located quickly. Finally, the first fatigue characteristic is extracted, and the first fatigue value, one of the bases for judging the fatigue state, is obtained from the first weight and the first fatigue characteristic.
Further, said step S2.6 comprises:
step S2.61: adopting a Canny algorithm to carry out edge detection on the human eye region of the face image to obtain the human eye edge;
step S2.62: judging the opening and closing state of the human eyes according to whether the upper edge and the lower edge of the human eyes coincide;
step S2.63: comparing the number of pixel points in the edge of the human eye with a preset threshold value, and judging the opening and closing state of the human eye again;
the pixel points are the pixel points of skin color pixels; the preset threshold is a dynamic function and changes along with the change of the face image;
step S2.64: and acquiring a PERCLOS value and a blinking frequency according to the opening and closing state.
Specifically, existing eye-positioning techniques generally have shortcomings that make it difficult to judge the subsequent open or closed state of the eyes, so an accurate PERCLOS value and blinking frequency cannot be obtained. The open or closed state is usually hard to judge when the driver moves with a large amplitude or turns. Although the gray projection method positions the eyes quickly, the result is only a rough localization and not accurate enough. This scheme therefore also uses the Canny algorithm to derive the open or closed state of the eyes more accurately. First, the Canny algorithm performs edge detection on the eye region of the face image to obtain the eye edges. The open or closed state is then judged by whether the upper and lower eyelid edges coincide: if they coincide, the eyes are closed; if they do not, the eyes are open. In addition, when the eyes are half open, in order to obtain the open or closed state more accurately for calculating the fatigue value, this scheme further judges the state from the pixels inside the eye edges. A threshold is preset; when the number of skin-color pixels exceeds the threshold the eyes are judged to be closed, and otherwise open. When the driver moves with a large amplitude or turns, the size of the eye image changes and the number of skin-color pixels changes with it; because the preset threshold is designed as a dynamic function, it changes along with the eye image, so comparing the threshold with the pixel count still judges the open or closed state of the eyes accurately.
Further, the step S3 includes:
step S3.1: acquiring a second weight according to the definition of the face image;
step S3.2: acquiring corresponding driving operation and road driving specifications according to the driving information of the vehicle;
the running information of the vehicle includes: the road type, the road direction, the vehicle running line and the vehicle running speed are obtained through a GPS; the driving operation is throttle control and steering wheel operation; the road driving standard is the driving speed requirement and the driving route requirement of the current road;
step S3.3: extracting a second fatigue feature from the driving operation and the road driving norm;
the second fatigue characteristic is a duration for which the driving operation meets the current road driving specification;
step S3.4: and calculating a second fatigue value according to the second weight and the second fatigue characteristic.
Specifically, a second weight is obtained according to the definition of the face image; the clearer the face image, the lower the second weight, and vice versa. The running information of the vehicle, comprising the road type, road direction, vehicle running route and vehicle running speed, is then acquired through the GPS. The driving speed requirement and driving route requirement of the current road are obtained from the road type and road direction, and the throttle control and steering wheel operation are obtained from the vehicle running route and running speed. Finally, the second fatigue value, another basis for judging the fatigue state, is obtained from the second weight and the duration for which the throttle control and steering wheel operation meet the current driving speed and driving route requirements.
Further, the measure in step S5 is at least one of the following: a sound alarm, a display alarm, an in-vehicle warning lamp alarm, an out-of-vehicle warning lamp alarm, controlling the vehicle to decelerate, and notifying the control center.
Specifically, when the driver is detected and judged to be driving while fatigued, a speaker in the vehicle can issue a warning voice prompting the driver to stop driving while fatigued; the in-vehicle display can show warning text prompting the driver to stop; and the warning lamp outside the vehicle is turned on to alert nearby vehicles that the driver in this vehicle is fatigued. If the driver ignores these warnings, the in-vehicle system judges the road type and forces the vehicle to decelerate where this does not violate the traffic rules; if the traffic rules do not permit deceleration, information is sent to the control center, which then notifies nearby vehicles to exercise caution.
Fig. 1 is a structural diagram of a fatigue driving detection system according to the present invention, and as shown in the figure, the system includes:
the acquisition module is used for acquiring driving information;
the driving information includes a face image of a driver and driving information of a vehicle;
the analysis module is used for analyzing the information acquired by the acquisition module, calculating a first fatigue value and a second fatigue value, and judging whether the driver is in a fatigue state according to the first fatigue value and the second fatigue value;
and the response module is used for taking corresponding measures according to the judgment result of the analysis module.
Fig. 2 is a structure diagram of an analysis module according to the present invention, and as shown in the figure, the analysis module includes:
the image preprocessing unit is used for calculating a first weight according to the definition of the face image and carrying out image noise reduction and low-light enhancement on the face image;
the face detection unit is used for carrying out face detection on the face image;
the face detection is carried out by adopting a stacked classifier of an Adaboost algorithm;
the human eye positioning unit is used for carrying out human eye positioning on the face image after the face detection;
the human eye positioning adopts a gray projection method;
the first analysis unit is used for extracting a first fatigue characteristic from the face image positioned by human eyes and calculating a first fatigue value according to the first weight and the first fatigue characteristic;
the first fatigue characteristic is at least one of: PERCLOS value, pupil diameter, blink frequency, and mouth opening and closing frequency.
Fig. 3 is a structural diagram of a first parsing unit of the present invention, and as shown in the figure, the first parsing unit includes:
the human eye edge subunit is used for positioning the human eye edge;
detecting and acquiring the human eye edge by adopting a Canny algorithm;
the first distinguishing subunit is used for judging the opening and closing state of the human eyes according to whether the upper edge and the lower edge of the human eyes coincide;
the second distinguishing subunit is used for comparing the number of the pixel points in the edge of the human eye with a preset threshold value so as to judge the opening and closing state of the human eye again;
the pixel points are the pixel points of skin color pixels; the preset threshold is a dynamic function and changes along with the change of the face image;
the PERCLOS value acquiring subunit is used for acquiring a PERCLOS value according to the opening and closing state;
and the blink frequency acquisition subunit is used for acquiring the blink frequency according to the number of times of the change of the opening and closing state.
Further, the parsing module further comprises:
the driving information preprocessing unit is used for calculating a second weight according to the definition of the face image and acquiring corresponding driving operation and road driving specifications according to the driving information of the vehicle;
the running information of the vehicle includes: the road type, the road direction, the vehicle running line and the vehicle running speed are acquired by a GPS in the acquisition module; the driving operation is throttle control and steering wheel operation; the road driving standard is the driving speed requirement and the driving route requirement of the current road;
a second analysis unit for extracting a second fatigue feature from the driving operation and the road driving norm, and calculating a second fatigue value from the second weight and the second fatigue feature;
the second fatigue characteristic is a length of time that the driving operation meets a current road driving specification.
Fig. 4 is a structural diagram of a response module of the present invention, as shown, the response module includes:
the sound alarm unit is used for prompting the driver to be in a fatigue state by voice;
the display warning unit is used for prompting the driver to be in a fatigue state at present through characters;
the in-vehicle warning lamp alarm unit is used for indicating by light that the driver is currently in a fatigue state;
the out-of-vehicle warning lamp alarm unit is used for indicating by light to nearby vehicles that the driver in the vehicle is currently in a fatigue state;
the control vehicle deceleration unit is used for controlling the vehicle to decelerate;
and the communication warning unit is used for sending information to the control center, which in turn sends information to other nearby vehicles so as to prompt them that the driver in this vehicle is currently in a fatigue state.
It should be understood that the above-mentioned embodiments of the present invention are only examples for clearly illustrating the technical solutions of the present invention, and are not intended to limit the specific embodiments of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention claims should be included in the protection scope of the present invention claims.

Claims (10)

1. A fatigue driving detection method is characterized by comprising the following steps:
step S1: collecting driving information to establish a data set;
the data set comprises an image data set and an operational data set;
the image data set is used for storing a face image of the driver; the operation data set is used for storing the running information of the vehicle;
step S2: inputting the image data set into a first detection model, and calculating a first fatigue value;
step S3: inputting the operation data set into a second detection model, and calculating a second fatigue value;
step S4: judging whether the driver is in a fatigue state or not according to the first fatigue value and the second fatigue value;
step S5: and taking corresponding measures according to the judgment result.
2. The fatigue driving detecting method according to claim 1, wherein the step S2 includes:
step S2.1: judging the definition of the face image in the image data set, and if the definition is lower than a preset threshold value, performing step S2.2, otherwise, performing step S2.3;
step S2.2: performing image noise reduction and low-light enhancement on the face image;
step S2.3: acquiring a first weight according to the definition of the face image;
step S2.4: performing face detection on a face image in the image data set;
the face detection is carried out by adopting a stacked classifier of an Adaboost algorithm;
step S2.5: carrying out human eye positioning on the face image subjected to the human face detection;
the human eye positioning adopts a gray level projection method;
step S2.6: extracting a first fatigue characteristic from the face image positioned by human eyes;
the first fatigue characteristic is at least one of: PERCLOS value, pupil diameter, blink frequency and mouth opening and closing frequency;
step S2.7: a first fatigue value is calculated based on the first weight and the first fatigue characteristic.
3. A method of detecting fatigue driving according to claim 2, wherein said step S2.6 comprises:
step S2.61: adopting a Canny algorithm to carry out edge detection on the human eye region of the face image to obtain the human eye edge;
step S2.62: judging the opening and closing state of the human eyes according to whether the upper edge and the lower edge of the human eyes coincide;
step S2.63: comparing the number of pixel points in the edge of the human eye with a preset threshold value, and judging the opening and closing state of the human eye again;
the pixel points are the pixel points of skin color pixels; the preset threshold is a dynamic function and changes along with the change of the face image;
step S2.64: and acquiring a PERCLOS value and a blinking frequency according to the opening and closing state.
4. The fatigue driving detecting method according to claim 1, wherein the step S3 includes:
step S3.1: acquiring a second weight according to the definition of the face image;
step S3.2: acquiring corresponding driving operation and road driving specifications according to the driving information of the vehicle;
the running information of the vehicle includes: the road type, the road direction, the vehicle running line and the vehicle running speed are obtained through a GPS; the driving operation is throttle control and steering wheel operation; the road driving standard is the driving speed requirement and the driving route requirement of the current road;
step S3.3: extracting a second fatigue feature from the driving operation and the road driving norm;
the second fatigue characteristic is a duration for which the driving operation meets the current road driving specification;
step S3.4: and calculating a second fatigue value according to the second weight and the second fatigue characteristic.
5. The fatigue driving detecting method according to claim 1, wherein the measure in step S5 is at least one of the following: a sound alarm, a display alarm, an in-vehicle warning lamp alarm, an out-of-vehicle warning lamp alarm, controlling the vehicle to decelerate, and notifying the control center.
6. A fatigue driving detection system, comprising:
the acquisition module is used for acquiring driving information;
the driving information includes a face image of a driver and driving information of a vehicle;
the analysis module is used for analyzing the information acquired by the acquisition module, calculating a first fatigue value and a second fatigue value, and judging whether the driver is in a fatigue state according to the first fatigue value and the second fatigue value;
and the response module is used for taking corresponding measures according to the judgment result of the analysis module.
7. The fatigue driving detection system of claim 6, wherein the parsing module comprises:
the image preprocessing unit is used for calculating a first weight according to the definition of the face image and carrying out image noise reduction and low-light enhancement on the face image;
the face detection unit is used for carrying out face detection on the face image;
the face detection is carried out by adopting a stacked classifier of an Adaboost algorithm;
the human eye positioning unit is used for carrying out human eye positioning on the face image after the face detection;
the human eye positioning adopts a gray projection method;
the first analysis unit is used for extracting a first fatigue characteristic from the face image positioned by human eyes and calculating a first fatigue value according to the first weight and the first fatigue characteristic;
the first fatigue characteristic is at least one of: PERCLOS value, pupil diameter, blink frequency, and mouth opening and closing frequency.
8. The fatigue driving detection system according to claim 7, wherein the first analysis unit includes:
the human eye edge subunit is used for positioning the human eye edge;
detecting and acquiring the human eye edge by adopting a Canny algorithm;
the first distinguishing subunit is used for judging the opening and closing state of the human eyes according to whether the upper edge and the lower edge of the human eyes coincide;
the second distinguishing subunit is used for comparing the number of the pixel points in the edge of the human eye with a preset threshold value so as to judge the opening and closing state of the human eye again;
the pixel points are the pixel points of skin color pixels; the preset threshold is a dynamic function and changes along with the change of the face image;
the PERCLOS value acquiring subunit is used for acquiring a PERCLOS value according to the opening and closing state;
and the blink frequency acquisition subunit is used for acquiring the blink frequency according to the number of times of the change of the opening and closing state.
9. The fatigue driving detection system of claim 6, wherein the parsing module further comprises:
the driving information preprocessing unit is used for calculating a second weight according to the definition of the face image and acquiring corresponding driving operation and road driving specifications according to the driving information of the vehicle;
the running information of the vehicle includes: the road type, the road direction, the vehicle running line and the vehicle running speed are acquired by a GPS in the acquisition module; the driving operation is throttle control and steering wheel operation; the road driving standard is the driving speed requirement and the driving route requirement of the current road;
a second analysis unit for extracting a second fatigue feature from the driving operation and the road driving norm, and calculating a second fatigue value from the second weight and the second fatigue feature;
the second fatigue characteristic is a length of time that the driving operation meets a current road driving specification.
10. The fatigue driving detection system of claim 6, wherein the response module comprises:
the sound alarm unit is used for prompting the driver to be in a fatigue state by voice;
the display warning unit is used for prompting the driver to be in a fatigue state at present through characters;
the in-vehicle warning lamp alarm unit is used for indicating by light that the driver is currently in a fatigue state;
the out-of-vehicle warning lamp alarm unit is used for indicating by light to nearby vehicles that the driver in the vehicle is currently in a fatigue state;
the control vehicle deceleration unit is used for controlling the vehicle to decelerate;
and the communication warning unit is used for sending information to the control center, which in turn sends information to other nearby vehicles so as to prompt them that the driver in this vehicle is currently in a fatigue state.
CN202011196284.8A 2020-10-30 2020-10-30 Fatigue driving detection method and system Withdrawn CN114523979A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011196284.8A CN114523979A (en) 2020-10-30 2020-10-30 Fatigue driving detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011196284.8A CN114523979A (en) 2020-10-30 2020-10-30 Fatigue driving detection method and system

Publications (1)

Publication Number Publication Date
CN114523979A true CN114523979A (en) 2022-05-24

Family

ID=81619622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011196284.8A Withdrawn CN114523979A (en) 2020-10-30 2020-10-30 Fatigue driving detection method and system

Country Status (1)

Country Link
CN (1) CN114523979A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20220524)