CN114529887A - Driving behavior analysis method and device - Google Patents

Driving behavior analysis method and device

Info

Publication number
CN114529887A
CN114529887A
Authority
CN
China
Prior art keywords
fatigue
driver
face image
driving
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011196247.7A
Other languages
Chinese (zh)
Inventor
谭敏谊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Tuweihui Information Technology Co ltd
Original Assignee
Guangzhou Tuweihui Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Tuweihui Information Technology Co ltd
Priority to CN202011196247.7A
Publication of CN114529887A
Legal status: Withdrawn (current)

Classifications

    • A61B 3/112: Apparatus for examining the eyes; objective measurement of the diameter of the pupils
    • A61B 5/0205: Simultaneously evaluating both cardiovascular and other body conditions, e.g. heart and respiratory condition
    • A61B 5/024: Detecting, measuring or recording pulse rate or heart rate
    • A61B 5/1103: Measuring movement of the body or parts thereof; detecting eye twinkling
    • A61B 5/1118: Determining activity level
    • A61B 5/1128: Measuring movement of the body using an image-analysis sensing technique
    • A61B 5/14542: Measuring characteristics of blood in vivo, e.g. blood gases
    • A61B 5/18: Devices for evaluating the psychological state of vehicle drivers or machine operators
    • A61B 5/6893: Sensors mounted on external non-worn devices, e.g. cars
    • A61B 5/7405: Notification to the user using sound
    • A61B 5/742: Notification to the user using visual displays
    • A61B 5/7445: Display arrangements, e.g. multiple display units
    • A61B 5/746: Alarms related to a physiological condition, e.g. setting of alarm thresholds
    • B60W 40/08: Estimation of driving parameters related to drivers or passengers
    • B60W 2040/0827: Inactivity or incapacity of driver due to sleepiness
    • G06F 18/2148: Generating training patterns; bootstrap methods, e.g. boosting cascade
    • G06F 18/24: Classification techniques
    • G07C 5/008: Registering or indicating the working of vehicles, communicating information to a remotely located station
    • G08B 21/06: Alarms indicating a condition of sleep, e.g. anti-dozing alarms
    • G08B 7/06: Combined audible and visible signalling using electric transmission

Abstract

The invention relates to the field of automobile driving safety, and in particular to a driving behavior analysis method and device. The method comprises the following steps. Step S1: collecting driving information to establish a data set, where the data set comprises an image data set storing face images of the driver and a physiological data set storing physiological information of the driver. Step S2: inputting the image data set into a first detection model and calculating a first fatigue value. Step S3: inputting the physiological data set into a second detection model and calculating a second fatigue value. Step S4: judging whether the driver is in a fatigue state according to the first fatigue value and the second fatigue value. Step S5: taking corresponding measures according to the judgment result. Compared with the prior art, the method has stronger applicability.

Description

Driving behavior analysis method and device
Technical Field
The invention relates to the field of automobile driving safety, in particular to a driving behavior analysis method and device.
Background
Fatigue driving, which readily causes traffic accidents, refers to the phenomenon in which a driver's physiological and psychological functions become disordered after continuously driving a vehicle for a long time, so that driving skill objectively declines. A driver with poor or insufficient sleep is prone to fatigue when driving for long periods. Driving fatigue affects the driver's attention, feeling, perception, thinking, judgment, consciousness, decision-making and movement. A driver who continues to drive after becoming fatigued may feel sleepy, experience weak limbs, lapses of concentration and reduced judgment, and may even become absent-minded or suffer momentary memory loss; actions come too late or too early, and operations pause or are corrected at the wrong time. These unsafe factors make road traffic accidents likely. With the rapid development and popularization of automobiles in China, fatigue driving seriously threatens traffic safety, and the formation mechanism of fatigue driving, the characteristics of fatigue driving behavior, and fatigue early-warning and control technologies are gradually becoming major research directions in traffic safety.
In terms of how fatigue driving is detected, existing solutions fall mainly into three categories: those based on the driver's physiological characteristics, those based on vehicle motion characteristics, and those based on the driver's facial features. Physiological characteristics generally include electrooculogram, electrocardiogram, electromyogram and electroencephalogram signals, and these biological signals tend to be positively correlated with the driver's state. Such methods acquire signal values directly from the driver through worn instruments and analyze them to obtain the fatigue state. Because the information is acquired by sensors in contact with the human body, it reflects the driver's fatigue state directly and with high accuracy. However, such methods are generally invasive: the driver must wear signal acquisition equipment, which interferes with driving, and the equipment is expensive and difficult to popularize. Fatigue detection based on vehicle motion characteristics needs only vehicle information and requires no sensor contact with the driver, so it causes no interference; however, vehicle motion reflects the driver's state only indirectly and with limited accuracy, so in most cases it can serve only as a secondary reference factor in fusion detection. Detection based on the driver's facial features is simple to install and inexpensive, but when the driver moves it is difficult to judge whether the eyes are open or closed, and under weak illumination at night existing face detection struggles to identify facial features accurately, so the night-time accuracy is low. A driving behavior analysis method and device with high applicability are therefore needed.
Disclosure of Invention
In order to solve the above problems, the present invention provides a driving behavior analysis method and apparatus, which have higher applicability than the prior art.
The technical scheme adopted by the invention is as follows:
a driving behavior analysis method, comprising the steps of:
step S1: collecting driving information to establish a data set;
the dataset comprises an image dataset and a physiological dataset;
the image data set is used for storing a face image of the driver; the physiological data set is used for storing physiological information of a driver;
step S2: inputting the image data set into a first detection model, and calculating a first fatigue value;
step S3: inputting the physiological data set into a second detection model, and calculating a second fatigue value;
step S4: judging whether the driver is in a fatigue state or not according to the first fatigue value and the second fatigue value;
step S5: and taking corresponding measures according to the judgment result.
In particular, fatigue detection methods based on the driver's facial features are susceptible to interference from external factors: when the driver moves, the open or closed state of the eyes may be impossible to judge, and under insufficient light the facial features may not be identified accurately. This scheme therefore detects fatigue driving by combining features of the driver's face with the driver's physiological characteristics, which avoids the situation in which the fatigue state cannot be judged accurately because external interference prevents a clear face image of the driver from being acquired.
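To illustrate how steps S2 to S4 fit together, the following is a minimal Python sketch, assuming both fatigue values have already been scaled by their sharpness-based weights into [0, 1]; the 0.6 decision threshold is an illustrative assumption, not a value given in the patent.

    def is_fatigued(first_fatigue: float, second_fatigue: float,
                    threshold: float = 0.6) -> bool:
        # first_fatigue comes from the image-based model (step S2) and
        # second_fatigue from the physiology-based model (step S3); both
        # are assumed to be pre-weighted, so step S4 reduces to a sum
        # compared against a threshold.
        return first_fatigue + second_fatigue >= threshold

For example, is_fatigued(0.4, 0.3) returns True and would trigger the measures of step S5.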
Further, the step S2 includes:
step S2.1: judging the definition of the face image in the image data set, and if the definition is lower than a preset threshold value, performing step S2.2, otherwise, performing step S2.3;
step S2.2: performing image noise reduction and low-light enhancement on the face image;
step S2.3: acquiring a first weight according to the definition of the face image;
step S2.4: performing face detection on a face image in the image data set;
the face detection is carried out by adopting a stacked classifier of an Adaboost algorithm;
step S2.5: carrying out human eye positioning on the face image subjected to the human face detection;
the human eye positioning adopts a gray projection method;
step S2.6: extracting a first fatigue characteristic from the face image positioned by human eyes;
the first fatigue characteristic is at least one of: PERCLOS value, pupil diameter, blink frequency and mouth opening and closing frequency;
step S2.7: a first fatigue value is calculated based on the first weight and the first fatigue characteristic.
Specifically, the acquired face image is first preprocessed to make it clearer, and a first weight is obtained according to the sharpness of the face image: the sharper the image, the higher the first weight, and vice versa. A stacked classifier based on the Adaboost algorithm is then used for face detection. Adaboost is an iterative method whose core idea is to train the same weak classifier on different training sets and then assemble the weak classifiers obtained on those training sets into a final strong classifier; such a classifier detects faces quickly and is robust. Next, the eyes are located in the face image by the gray projection method; its small computation load allows the eyes to be located quickly. Finally, the first fatigue characteristic is extracted, and a first fatigue value, one of the bases for judging the fatigue state, is calculated from the first weight and the first fatigue characteristic.
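A sketch of how steps S2.4 and S2.5 could look in Python is given below, assuming OpenCV's Haar cascade (a boosted cascade of weak classifiers in the Viola-Jones/Adaboost family) as a stand-in for the patent's stacked Adaboost classifier, and a gray-level row projection for coarse eye localization; the function name and parameter values are illustrative.

    import cv2
    import numpy as np

    # Haar cascade: a cascade of Adaboost-trained weak classifiers.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def locate_eye_band(frame_bgr: np.ndarray):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        upper = gray[y:y + h // 2, x:x + w]        # eyes lie in the upper half
        row_proj = upper.sum(axis=1).astype(np.float64)
        eye_row = int(np.argmin(row_proj))         # darkest row ~ the eye band
        return x, y + eye_row, w                   # left x, eye-band y, width

The row projection exploits the fact that the eye region is darker than the surrounding skin, so the minimum of the row-wise gray sums gives a fast, if rough, vertical eye position.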
Further, said step S2.6 comprises:
step S2.61: carrying out edge detection on the human eye region of the face image by adopting a Canny algorithm to obtain the human eye edge;
step S2.62: judging the opening and closing state of human eyes according to the coincidence of the upper edge and the lower edge of the human eyes;
step S2.63: comparing the number of pixel points in the edge of the human eye with a preset threshold value, and judging the opening and closing state of the human eye again;
the pixel points are the pixel points of skin color pixels; the preset threshold is a dynamic function and changes along with the change of the face image;
step S2.64: and acquiring a PERCLOS value and a blinking frequency according to the opening and closing state.
Specifically, existing eye-location techniques generally have many shortcomings, which makes it difficult to judge the eyes' open/closed state afterwards and prevents accurate PERCLOS values and blink frequencies from being obtained. The open/closed state is usually hard to judge when the driver's movements are large or the driver turns. Although the gray projection method locates the eyes quickly, the resulting position is not accurate enough and amounts only to rough positioning. This scheme therefore also applies the Canny algorithm to derive the open/closed state of the eyes more accurately. First, Canny edge detection is performed on the eye region of the face image to obtain the eye edges. The open/closed state is then judged by whether the upper and lower eye edges coincide: if they coincide, the eyes are closed; if not, the eyes are open. In addition, when the eyes are half open, the scheme judges the open/closed state from the pixels inside the eye edges so that the fatigue value can be calculated more accurately. A threshold is preset; when the number of skin-color pixels exceeds the threshold, the eyes are judged closed, and otherwise open. When the driver's movements are large or the driver turns, the size of the eye image changes and the number of skin-color pixels changes with it; because the preset threshold is designed as a dynamic function, it changes along with the eye image, so comparing the threshold with the pixel count still judges the open/closed state of the eyes accurately.
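The sketch below illustrates this Canny-based judgment and the PERCLOS calculation under explicit assumptions: the eye region is already cropped, skin-color pixels are approximated by a commonly used YCrCb range, and the dynamic threshold is modelled as a fixed fraction of the eye-region area so that it scales with the image, since the patent states only that the threshold varies with the face image.

    import cv2
    import numpy as np

    def eye_is_closed(eye_roi_bgr: np.ndarray, skin_fraction: float = 0.55) -> bool:
        gray = cv2.cvtColor(eye_roi_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        # Coarse check: if the upper and lower lids coincide, the edge map
        # collapses to a thin band, i.e. very few rows contain edge pixels.
        edge_rows = int((edges.sum(axis=1) > 0).sum())
        if edge_rows <= 2:
            return True
        # Finer check for half-open eyes: count skin-color pixels inside the
        # region and compare against a threshold that scales with ROI size.
        ycrcb = cv2.cvtColor(eye_roi_bgr, cv2.COLOR_BGR2YCrCb)
        skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        dynamic_threshold = skin_fraction * eye_roi_bgr.shape[0] * eye_roi_bgr.shape[1]
        return int(np.count_nonzero(skin)) > dynamic_threshold

    def perclos(closed_flags: list) -> float:
        # PERCLOS: fraction of frames in the window with the eyes closed.
        return sum(closed_flags) / max(len(closed_flags), 1)

More skin-color pixels inside the eye contour mean the eyelid covers more of the eye, which is why a count above the size-scaled threshold is read as closed.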
Further, the step S3 includes:
step S3.1: acquiring a second weight according to the definition of the face image;
step S3.2: extracting a second fatigue feature from the physiological data set;
the second fatigue characteristic includes: heart rate, blood oxygen saturation and driving duration;
the heart rate and the blood oxygen saturation are acquired by a physiological information sensor arranged on a steering wheel; the physiological information sensor also collects fingerprints; the fingerprint is used for identifying whether the driving vehicles are the same driver or not; the driving duration is obtained according to the fingerprint and the continuous driving time;
step S3.3: and calculating a second fatigue value according to the second weight and the second fatigue characteristic.
Specifically, a second weight is obtained according to the sharpness of the face image: the sharper the image, the lower the second weight, and vice versa. Acquiring the driver's physiological characteristics generally requires wearing equipment, and invasive equipment often hinders the driver's operation of the vehicle. In view of this, the scheme collects only the heart rate, blood oxygen saturation and fingerprint, whose acquisition equipment is simple and does not affect driving. When the driver grips the steering wheel in the area where the physiological information sensor is installed, the sensor acquires the heart rate, blood oxygen saturation and fingerprint. The fingerprint identifies whether the same person has been driving the vehicle within a given period; if so, the driving duration is obtained from the vehicle's continuous driving time. Changes in heart rate and blood oxygen saturation indicate the approximate degree of fatigue of the driver. A second fatigue value, another basis for judging the fatigue state, is then calculated from the second weight and the second fatigue characteristic.
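As a sketch of step S3.3, the following function combines the second weight with the three physiological features; the specific mapping from heart rate, blood oxygen saturation and driving duration to a score is an illustrative assumption, since the patent does not give a formula.

    def second_fatigue_value(weight2: float, heart_rate_bpm: float,
                             spo2_percent: float, driving_hours: float) -> float:
        def clip01(x: float) -> float:
            return min(max(x, 0.0), 1.0)
        # Assumed indicators: lowered heart rate, lowered blood oxygen
        # saturation and long continuous driving all push the score up.
        hr_term = clip01((70.0 - heart_rate_bpm) / 20.0)
        spo2_term = clip01((97.0 - spo2_percent) / 5.0)
        time_term = clip01(driving_hours / 4.0)   # 4 h of driving saturates
        return weight2 * (hr_term + spo2_term + time_term) / 3.0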
Further, the measure in step S5 is at least one of the following: a sound alarm, a display alarm, an in-vehicle warning lamp alarm, an out-of-vehicle warning lamp alarm, controlled vehicle deceleration, and notification of a control center.
Specifically, when the driver is detected and judged to be driving while fatigued, a speaker in the vehicle emits a warning voice prompting the driver to stop fatigue driving; a display in the vehicle shows warning text prompting the driver to stop fatigue driving; and the warning lamp outside the vehicle is turned on to alert nearby vehicles that the driver is fatigued. If the driver ignores these warnings, the in-vehicle system judges the road type and forces the vehicle to decelerate where this does not violate the traffic rules; if the traffic rules do not allow deceleration, the information is sent to the control center, which then forwards it to nearby vehicles so that they can take care.
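A minimal sketch of this escalation policy follows; the action names are hypothetical placeholders, since the patent names the measures but not any programming interface.

    def respond_to_fatigue(ignored_warnings: bool, deceleration_allowed: bool):
        # First stage: warn the driver and nearby traffic.
        actions = ["sound_alarm", "display_alarm", "exterior_warning_lamp"]
        # Second stage: slow the vehicle if the road rules permit,
        # otherwise escalate to the control center.
        if ignored_warnings:
            actions.append("decelerate_vehicle" if deceleration_allowed
                           else "notify_control_center")
        return actions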
A driving behavior analysis device comprising:
the acquisition module is used for acquiring driving information;
the driving information comprises a face image of the driver and physiological information of the driver;
the analysis module is used for analyzing the information acquired by the acquisition module, calculating a first fatigue value and a second fatigue value, and judging whether the driver is in a fatigue state according to the first fatigue value and the second fatigue value;
and the response module is used for taking corresponding measures according to the judgment result of the analysis module.
Further, the analysis module comprises:
the image preprocessing unit is used for calculating a first weight according to the definition of the face image and carrying out image noise reduction and low-light enhancement on the face image;
the face detection unit is used for carrying out face detection on the face image;
the face detection is carried out by adopting a stacked classifier of an Adaboost algorithm;
the human eye positioning unit is used for carrying out human eye positioning on the face image after the face detection;
the human eye positioning adopts a gray projection method;
the first analysis unit is used for extracting a first fatigue characteristic from the face image positioned by human eyes and calculating a first fatigue value according to the first weight and the first fatigue characteristic;
the first fatigue characteristic is at least one of: PERCLOS value, pupil diameter, blink frequency, and mouth opening and closing frequency.
Further, the first analysis unit includes:
the human eye edge subunit is used for positioning the human eye edge;
detecting and acquiring the human eye edge by adopting a Canny algorithm;
the first distinguishing subunit is used for judging the opening and closing state of the human eyes according to the coincidence of the upper edge and the lower edge of the human eyes;
the second distinguishing subunit is used for comparing the number of the pixel points in the edge of the human eye with a preset threshold value so as to judge the opening and closing state of the human eye again;
the pixel points are the pixel points of skin color pixels; the preset threshold is a dynamic function and changes along with the change of the face image;
a PERCLOS value acquiring subunit, configured to acquire a PERCLOS value according to an open-close state;
and the blink frequency acquisition subunit is used for acquiring the blink frequency according to the number of times of the change of the opening and closing state.
Further, the analysis module further comprises:
the physiological information preprocessing unit is used for calculating a second weight according to the definition of the face image and extracting a second fatigue characteristic from the physiological data set;
the second fatigue characteristic includes: heart rate, blood oxygen saturation and driving duration;
the driving duration is calculated by the fingerprint and the time of continuous driving; the fingerprint is used for identifying whether the driving vehicles are the same driver or not; the heart rate, the blood oxygen saturation and the fingerprint are acquired by a physiological information sensor in an acquisition module; the physiological information sensor is arranged on a steering wheel;
and a second analyzing unit for calculating a second fatigue value according to the second weight and the second fatigue characteristic.
Further, the response module includes:
the sound alarm unit, used to prompt by voice that the driver is currently in a fatigue state;
the display warning unit, used to prompt by text that the driver is currently in a fatigue state;
the in-vehicle warning lamp unit, used to indicate by light that the driver is currently in a fatigue state;
the out-of-vehicle warning lamp unit, used to indicate by light to nearby vehicles that the driver in the vehicle is currently in a fatigue state;
the vehicle deceleration control unit, used to control the vehicle to decelerate;
and the communication warning unit, used to send information to the control center, which then sends information to other nearby vehicles to alert them that the driver in the vehicle is currently in a fatigue state.
Compared with the prior art, the invention has the beneficial effects that:
(1) Detecting fatigue driving with facial features and physiological features at the same time makes the detection method more widely applicable and less affected by the external environment.
(2) The design of the first weight and the second weight makes the detection of fatigue driving more accurate.
(3) Designing the preset threshold as a dynamic function makes the obtained open/closed state of the eyes more accurate, providing favorable conditions for detecting fatigue driving.
Drawings
FIG. 1 is a view showing a structure of a driving behavior analyzing apparatus according to the present invention;
FIG. 2 is a structural diagram of the analysis module of the present invention;
FIG. 3 is a structural diagram of the first analysis unit of the present invention;
fig. 4 is a diagram of a response module structure of the present invention.
Detailed Description
The drawings are only for purposes of illustration and are not to be construed as limiting the invention. For a better understanding of the following embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
Examples
The driving behavior analysis method of the embodiment comprises the following steps:
step S1: collecting driving information to establish a data set;
the dataset comprises an image dataset and a physiological dataset;
the image data set is used for storing a face image of the driver; the physiological data set is used for storing physiological information of the driver;
step S2: inputting the image data set into a first detection model, and calculating a first fatigue value;
step S3: inputting the physiological data set into a second detection model, and calculating a second fatigue value;
step S4: judging whether the driver is in a fatigue state or not according to the first fatigue value and the second fatigue value;
step S5: and taking corresponding measures according to the judgment result.
In particular, fatigue detection methods based on the driver's facial features are susceptible to interference from external factors: when the driver moves, the open or closed state of the eyes may be impossible to judge, and under insufficient light the facial features may not be identified accurately. This scheme therefore detects fatigue driving by combining features of the driver's face with the driver's physiological characteristics, which avoids the situation in which the fatigue state cannot be judged accurately because external interference prevents a clear face image of the driver from being acquired.
Further, the step S2 includes:
step S2.1: judging the definition of the face image in the image data set, and if the definition is lower than a preset threshold value, performing step S2.2, otherwise, performing step S2.3;
step S2.2: performing image noise reduction and low-light enhancement on the face image;
step S2.3: acquiring a first weight according to the definition of the face image;
step S2.4: performing face detection on a face image in the image data set;
the face detection is carried out by adopting a stacked classifier of an Adaboost algorithm;
step S2.5: carrying out human eye positioning on the face image subjected to the human face detection;
the human eye positioning adopts a gray projection method;
step S2.6: extracting a first fatigue characteristic from the face image positioned by human eyes;
the first fatigue characteristic is at least one of: PERCLOS value, pupil diameter, blink frequency and mouth opening and closing frequency;
step S2.7: and calculating a first fatigue value according to the first weight and the first fatigue characteristic.
Specifically, the acquired face image is first preprocessed to make it clearer, and a first weight is obtained according to the sharpness of the face image: the sharper the image, the higher the first weight, and vice versa. A stacked classifier based on the Adaboost algorithm is then used for face detection. Adaboost is an iterative method whose core idea is to train the same weak classifier on different training sets and then assemble the weak classifiers obtained on those training sets into a final strong classifier; such a classifier detects faces quickly and is robust. Next, the eyes are located in the face image by the gray projection method; its small computation load allows the eyes to be located quickly. Finally, the first fatigue characteristic is extracted, and a first fatigue value, one of the bases for judging the fatigue state, is calculated from the first weight and the first fatigue characteristic.
Further, said step S2.6 comprises:
step S2.61: adopting a Canny algorithm to carry out edge detection on the human eye region of the face image to obtain the human eye edge;
step S2.62: judging the opening and closing state of human eyes according to the coincidence of the upper edge and the lower edge of the human eyes;
step S2.63: comparing the number of pixel points in the edge of the human eye with a preset threshold value, and judging the opening and closing state of the human eye again;
the pixel points are the pixel points of skin color pixels; the preset threshold is a dynamic function and changes along with the change of the face image;
step S2.64: and acquiring a PERCLOS value and a blinking frequency according to the opening and closing state.
Specifically, existing eye-location techniques generally have many shortcomings, which makes it difficult to judge the eyes' open/closed state afterwards and prevents accurate PERCLOS values and blink frequencies from being obtained. The open/closed state is usually hard to judge when the driver's movements are large or the driver turns. Although the gray projection method locates the eyes quickly, the resulting position is not accurate enough and amounts only to rough positioning. This scheme therefore also applies the Canny algorithm to derive the open/closed state of the eyes more accurately. First, Canny edge detection is performed on the eye region of the face image to obtain the eye edges. The open/closed state is then judged by whether the upper and lower eye edges coincide: if they coincide, the eyes are closed; if not, the eyes are open. In addition, when the eyes are half open, the scheme judges the open/closed state from the pixels inside the eye edges so that the fatigue value can be calculated more accurately. A threshold is preset; when the number of skin-color pixels exceeds the threshold, the eyes are judged closed, and otherwise open. When the driver's movements are large or the driver turns, the size of the eye image changes and the number of skin-color pixels changes with it; because the preset threshold is designed as a dynamic function, it changes along with the eye image, so comparing the threshold with the pixel count still judges the open/closed state of the eyes accurately.
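The blink-frequency subunit described elsewhere counts changes of the open/closed state; a sketch of that counting rule, assuming a fixed camera frame rate and counting open-to-closed transitions as blinks, is:

    def blink_frequency(closed_flags: list, fps: float) -> float:
        # Blinks per minute over the analysed window of per-frame states.
        blinks = sum(1 for prev, cur in zip(closed_flags, closed_flags[1:])
                     if cur and not prev)          # open -> closed transition
        window_minutes = len(closed_flags) / fps / 60.0
        return blinks / window_minutes if window_minutes > 0 else 0.0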
Further, the step S3 includes:
step S3.1: acquiring a second weight according to the definition of the face image;
step S3.2: extracting a second fatigue feature from the physiological data set;
the second fatigue characteristic includes: heart rate, blood oxygen saturation and driving duration;
the heart rate and the blood oxygen saturation are acquired by a physiological information sensor arranged on a steering wheel; the physiological information sensor also collects fingerprints; the fingerprint is used for identifying whether the driving vehicles are the same driver or not; the driving duration is obtained according to the fingerprint and the continuous driving time;
step S3.3: and calculating a second fatigue value according to the second weight and the second fatigue characteristic.
Specifically, a second weight is obtained according to the sharpness of the face image: the sharper the image, the lower the second weight, and vice versa. Acquiring the driver's physiological characteristics generally requires wearing equipment, and invasive equipment often hinders the driver's operation of the vehicle. In view of this, the scheme collects only the heart rate, blood oxygen saturation and fingerprint, whose acquisition equipment is simple and does not affect driving. When the driver grips the steering wheel in the area where the physiological information sensor is installed, the sensor acquires the heart rate, blood oxygen saturation and fingerprint. The fingerprint identifies whether the same person has been driving the vehicle within a given period; if so, the driving duration is obtained from the vehicle's continuous driving time. Changes in heart rate and blood oxygen saturation indicate the approximate degree of fatigue of the driver. A second fatigue value, another basis for judging the fatigue state, is then calculated from the second weight and the second fatigue characteristic.
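A sketch of the fingerprint-based driving-duration bookkeeping described above follows, assuming fingerprint matching is abstracted to comparing template identifiers (actual fingerprint-matching hardware and software are outside the patent text).

    import time

    class DrivingClock:
        def __init__(self):
            self.driver_id = None
            self.start_time = None

        def on_fingerprint(self, fingerprint_id: str) -> float:
            # Returns the continuous driving duration, in seconds, for the
            # driver currently holding the wheel; a new fingerprint means a
            # driver change, so the clock restarts.
            now = time.monotonic()
            if fingerprint_id != self.driver_id:
                self.driver_id = fingerprint_id
                self.start_time = now
            return now - self.start_time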
Further, the measure in step S5 is at least one of the following: a sound alarm, a display alarm, an in-vehicle warning lamp alarm, an out-of-vehicle warning lamp alarm, controlled vehicle deceleration, and notification of a control center.
Specifically, when the driver is detected and judged to be driving while fatigued, a speaker in the vehicle emits a warning voice prompting the driver to stop fatigue driving; a display in the vehicle shows warning text prompting the driver to stop fatigue driving; and the warning lamp outside the vehicle is turned on to alert nearby vehicles that the driver is fatigued. If the driver ignores these warnings, the in-vehicle system judges the road type and forces the vehicle to decelerate where this does not violate the traffic rules; if the traffic rules do not allow deceleration, the information is sent to the control center, which then forwards it to nearby vehicles so that they can take care.
Fig. 1 is a structural diagram of the driving behavior analysis device of the present invention, which comprises:
the acquisition module is used for acquiring driving information;
the driving information comprises a face image of the driver and physiological information of the driver;
the analysis module is used for analyzing the information acquired by the acquisition module, calculating a first fatigue value and a second fatigue value, and judging whether the driver is in a fatigue state according to the first fatigue value and the second fatigue value;
and the response module is used for taking corresponding measures according to the judgment result of the analysis module.
Fig. 2 is a structure diagram of an analysis module according to the present invention, and as shown in the figure, the analysis module includes:
the image preprocessing unit is used for calculating a first weight according to the definition of the face image and carrying out image noise reduction and low-light enhancement on the face image;
the face detection unit is used for carrying out face detection on the face image;
the face detection is carried out by adopting a stacked classifier of an Adaboost algorithm;
the human eye positioning unit is used for carrying out human eye positioning on the face image after the face detection;
the human eye positioning adopts a gray projection method;
the first analysis unit is used for extracting a first fatigue characteristic from the face image positioned by human eyes and calculating a first fatigue value according to the first weight and the first fatigue characteristic;
the first fatigue characteristic is at least one of: PERCLOS value, pupil diameter, blink frequency, and mouth opening and closing frequency.
Fig. 3 is a structural diagram of the first analysis unit of the present invention, and as shown in the figure, the first analysis unit includes:
the human eye edge subunit is used for positioning the human eye edge;
detecting and acquiring the human eye edge by adopting a Canny algorithm;
the first distinguishing subunit is used for judging the opening and closing state of the human eyes according to the coincidence of the upper edge and the lower edge of the human eyes;
the second distinguishing subunit is used for comparing the number of the pixel points in the edge of the human eye with a preset threshold value so as to judge the opening and closing state of the human eye again;
the pixel points are the pixel points of skin color pixels; the preset threshold is a dynamic function and changes along with the change of the face image;
the PERCLOS value acquiring subunit is used for acquiring a PERCLOS value according to the opening and closing state;
and the blink frequency acquisition subunit is used for acquiring the blink frequency according to the number of times of the change of the opening and closing state.
Further, the analysis module further comprises:
the physiological information preprocessing unit is used for calculating a second weight according to the definition of the face image and extracting a second fatigue characteristic from the physiological data set;
the second fatigue characteristic includes: heart rate, blood oxygen saturation and driving duration;
the driving duration is calculated by the fingerprint and the time of continuous driving; the fingerprint is used for identifying whether the driving vehicles are the same driver or not; the heart rate, the blood oxygen saturation and the fingerprint are acquired by a physiological information sensor in an acquisition module; the physiological information sensor is arranged on a steering wheel;
and a second analyzing unit for calculating a second fatigue value according to the second weight and the second fatigue characteristic.
Fig. 4 is a structural diagram of a response module of the present invention, as shown, the response module includes:
the sound alarm unit, used to prompt by voice that the driver is currently in a fatigue state;
the display warning unit, used to prompt by text that the driver is currently in a fatigue state;
the in-vehicle warning lamp unit, used to indicate by light that the driver is currently in a fatigue state;
the out-of-vehicle warning lamp unit, used to indicate by light to nearby vehicles that the driver in the vehicle is currently in a fatigue state;
the vehicle deceleration control unit, used to control the vehicle to decelerate;
and the communication warning unit, used to send information to the control center, which then sends information to other nearby vehicles to alert them that the driver in the vehicle is currently in a fatigue state.
It should be understood that the above embodiments of the present invention are only examples given to clearly illustrate its technical solutions and are not intended to limit its specific implementation. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. A driving behavior analysis method, characterized by comprising the steps of:
step S1: collecting driving information to establish a data set;
the dataset comprises an image dataset and a physiological dataset;
the image data set is used for storing a face image of the driver; the physiological data set is used for storing physiological information of the driver;
step S2: inputting the image data set into a first detection model, and calculating a first fatigue value;
step S3: inputting the physiological data set into a second detection model, and calculating a second fatigue value;
step S4: judging whether the driver is in a fatigue state or not according to the first fatigue value and the second fatigue value;
step S5: and taking corresponding measures according to the judgment result.
2. The driving behavior analysis method according to claim 1, wherein the step S2 includes:
step S2.1: judging the definition of the face image in the image data set, and if the definition is lower than a preset threshold value, performing step S2.2, otherwise, performing step S2.3;
step S2.2: performing image noise reduction and low-light enhancement on the face image;
step S2.3: acquiring a first weight according to the definition of the face image;
step S2.4: performing face detection on a face image in the image data set;
the face detection is carried out by adopting a stacked classifier of an Adaboost algorithm;
step S2.5: carrying out human eye positioning on the face image subjected to the human face detection;
the human eye positioning adopts a gray projection method;
step S2.6: extracting a first fatigue characteristic from the face image positioned by human eyes;
the first fatigue characteristic is at least one of: PERCLOS value, pupil diameter, blink frequency and mouth opening and closing frequency;
step S2.7: a first fatigue value is calculated based on the first weight and the first fatigue characteristic.
3. A driving behaviour analysis method according to claim 2, characterised in that said step S2.6 comprises:
step S2.61: adopting a Canny algorithm to carry out edge detection on the human eye region of the face image to obtain the human eye edge;
step S2.62: judging the opening and closing state of human eyes according to the coincidence of the upper edge and the lower edge of the human eyes;
step S2.63: comparing the number of pixel points in the edge of the human eye with a preset threshold value, and judging the opening and closing state of the human eye again;
the pixel points are the pixel points of skin color pixels; the preset threshold is a dynamic function and changes along with the change of the face image;
step S2.64: and acquiring a PERCLOS value and a blinking frequency according to the opening and closing state.
4. The driving behavior analysis method according to claim 1, wherein the step S3 includes:
step S3.1: acquiring a second weight according to the definition of the face image;
step S3.2: extracting a second fatigue feature from the physiological data set;
the second fatigue characteristic includes: heart rate, blood oxygen saturation and driving duration;
the heart rate and the blood oxygen saturation are acquired by a physiological information sensor arranged on a steering wheel; the physiological information sensor also collects fingerprints; the fingerprint is used for identifying whether the driving vehicles are the same driver or not; the driving duration is obtained according to the fingerprint and the continuous driving time;
step S3.3: and calculating a second fatigue value according to the second weight and the second fatigue characteristic.
5. The driving behavior analysis method according to claim 1, wherein the measure in step S5 is at least one of: the system comprises a sound alarm, a display alarm, an in-vehicle warning lamp alarm, an out-vehicle warning lamp alarm, a vehicle deceleration control and a control center.
6. A driving behavior analysis device characterized by comprising:
the acquisition module is used for acquiring driving information;
the driving information comprises a face image of the driver and physiological information of the driver;
the analysis module is used for analyzing the information acquired by the acquisition module, calculating a first fatigue value and a second fatigue value, and judging whether the driver is in a fatigue state according to the first fatigue value and the second fatigue value;
and the response module is used for taking corresponding measures according to the judgment result of the analysis module.
7. The driving behavior analysis device according to claim 6, wherein the analysis module includes:
the image preprocessing unit is used for calculating a first weight according to the definition of the face image and carrying out image noise reduction and low-light enhancement on the face image;
the face detection unit is used for carrying out face detection on the face image;
the face detection is carried out by adopting a stacked classifier of an Adaboost algorithm;
the human eye positioning unit is used for carrying out human eye positioning on the face image after the face detection;
the human eye positioning adopts a gray projection method;
the first analysis unit is used for extracting a first fatigue characteristic from the face image positioned by human eyes and calculating a first fatigue value according to the first weight and the first fatigue characteristic;
the first fatigue characteristic is at least one of: PERCLOS value, pupil diameter, blink frequency, and mouth opening and closing frequency.
8. The driving behavior analysis device according to claim 7, wherein the first analysis unit includes:
the human eye edge subunit is used for positioning the human eye edge;
detecting and acquiring the human eye edge by adopting a Canny algorithm;
the first distinguishing subunit is used for judging the opening and closing state of the human eyes according to the coincidence of the upper edge and the lower edge of the human eyes;
the second distinguishing subunit is used for comparing the number of the pixel points in the edge of the human eye with a preset threshold value so as to judge the opening and closing state of the human eye again;
the pixel points are the pixel points of skin color pixels; the preset threshold is a dynamic function and changes along with the change of the face image;
the PERCLOS value acquiring subunit is used for acquiring a PERCLOS value according to the opening and closing state;
and the blink frequency acquisition subunit is used for acquiring the blink frequency according to the number of times of the change of the opening and closing state.
9. The driving behavior analysis device according to claim 6, wherein the analysis module further comprises:
the physiological information preprocessing unit is used for calculating a second weight according to the definition of the face image and extracting a second fatigue characteristic from the physiological data set;
the second fatigue characteristic includes: heart rate, blood oxygen saturation and driving duration;
the driving duration is calculated by the fingerprint and the time of continuous driving; the fingerprint is used for identifying whether the driving vehicles are the same driver or not; the heart rate, the blood oxygen saturation and the fingerprint are acquired by a physiological information sensor in an acquisition module; the physiological information sensor is arranged on a steering wheel;
and a second analyzing unit for calculating a second fatigue value according to the second weight and the second fatigue characteristic.
10. A driving behavior analysis apparatus according to claim 6, wherein the response module comprises:
the sound alarm unit, used to prompt by voice that the driver is currently in a fatigue state;
the display warning unit, used to prompt by text that the driver is currently in a fatigue state;
the in-vehicle warning lamp unit, used to indicate by light that the driver is currently in a fatigue state;
the out-of-vehicle warning lamp unit, used to indicate by light to nearby vehicles that the driver in the vehicle is currently in a fatigue state;
the vehicle deceleration control unit, used to control the vehicle to decelerate;
and the communication warning unit, used to send information to the control center, which then sends information to other nearby vehicles to alert them that the driver in the vehicle is currently in a fatigue state.
CN202011196247.7A 2020-10-30 2020-10-30 Driving behavior analysis method and device Withdrawn CN114529887A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011196247.7A CN114529887A (en) 2020-10-30 2020-10-30 Driving behavior analysis method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011196247.7A CN114529887A (en) 2020-10-30 2020-10-30 Driving behavior analysis method and device

Publications (1)

Publication Number Publication Date
CN114529887A 2022-05-24

Family

ID=81618861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011196247.7A Withdrawn CN114529887A (en) 2020-10-30 2020-10-30 Driving behavior analysis method and device

Country Status (1)

Country Link
CN (1) CN114529887A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220524