CN103714659B - Fatigue driving identification system based on double-spectrum fusion - Google Patents


Publication number
CN103714659B
CN103714659B (application CN201310731288.5A)
Authority
CN
China
Prior art keywords: image, fatigue, eye, black, driver
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310731288.5A
Other languages
Chinese (zh)
Other versions
CN103714659A (en)
Inventor
张伟
成波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Tsingtech Microvision Electronic Science & Technology Co Ltd
Suzhou Automotive Research Institute of Tsinghua University
Original Assignee
Suzhou Tsingtech Microvision Electronic Science & Technology Co Ltd
Suzhou Automotive Research Institute of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Tsingtech Microvision Electronic Science & Technology Co Ltd, Suzhou Automotive Research Institute of Tsinghua University filed Critical Suzhou Tsingtech Microvision Electronic Science & Technology Co Ltd
Priority to CN201310731288.5A
Publication of CN103714659A
Application granted
Publication of CN103714659B
Status: Active

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fatigue driving identification system based on double-spectrum fusion, comprising a double-spectrum image collection device, an image fusion device, and a fatigue identification device. The double-spectrum image collection device comprises a light source, a semi-reflecting and semi-transmitting mirror, a black-and-white image sensor module, and a color image sensor module: the light source, the mirror, and the color image sensor module form a color imaging system, while the light source, the mirror, and the black-and-white image sensor module form a black-and-white imaging system. The image fusion device is connected to both sensor modules; it obtains the black-and-white images collected by the black-and-white sensor module and the color images collected by the color sensor module, and fuses them into driver images for identification. The fatigue identification device performs image analysis on the fused driver images, determines the driver's eye region through feature-point positioning, judges the fatigue state, and issues a prompt or early warning according to the result. Because the eyes are located in high-quality face images, the effectiveness and accuracy of fatigue driving judgment are greatly improved.

Description

Fatigue driving recognition system based on double-spectrum fusion
Technical Field
The invention belongs to the technical field of intelligent traffic, and particularly relates to a fatigue driving identification system based on double-spectrum fusion.
Background
Among the human factors that cause road traffic accidents, fatigue driving is one of the main causes. In China, the economic loss from traffic accidents caused by fatigue driving reaches hundreds of millions of yuan each year, and about 90,000 people die or are seriously injured annually in such accidents, destroying the happiness of hundreds of thousands of families and bringing great harm to social harmony and stability.
To reduce such traffic accidents, driver fatigue early-warning methods have been vigorously researched at home and abroad since the 1990s. In terms of the technology currently employed, these methods fall into three main categories.
(1) Detection methods based on physiological indexes adopt a contact measurement mode and infer the driver's fatigue state from physiological signals. The subject must wear corresponding equipment, and fatigue is then recognized through electroencephalogram analysis, electrocardiogram analysis, pulse measurement, steering-wheel grip measurement, and the like. This approach interferes greatly with driving behavior and is not suitable for actual driving environments.
(2) Fatigue detection methods based on driver behavior characteristics estimate the fatigue state by analyzing the driver's steering-wheel operation, pedal operation, or the vehicle's running trajectory. Chinese patent No. ZL200820177083.1, published on September 16, 2009, discloses a fatigue-driving prevention alarm system comprising a monitoring unit, an alarm control unit, and a vehicle control unit. The monitoring unit detects how the driver controls the vehicle and outputs this detection information to the alarm control unit; the alarm control unit, connected to the monitoring unit, receives the detection information and outputs an alarm control instruction to the vehicle control unit accordingly; the vehicle control unit, connected to the alarm control unit, receives the instruction and controls the vehicle to issue an alarm. In effect, that patent judges whether the driver is fatigued by comparing driving habits within a preset time, such as the number of steering-wheel movements, the number of accelerator presses, and the driving speed. Fatigue detection based on operating characteristics can achieve a certain identification precision, and the measurement process does not interfere with the driver. However, the driver's operation is affected not only by the fatigue state but also by the road environment, driving speed, personal habits, operating skill, and so on, so its accuracy still needs improvement.
(3) The fatigue detection method based on the facial expressions takes machine vision as a means, utilizes an image sensor to collect facial images of a driver, and judges the fatigue state through analyzing facial expression characteristics of the driver. Both the eye characteristics and the mouth movement characteristics of the driver can be used directly for detecting fatigue, wherein information about the eye state is currently most widely used.
Rapid and accurate positioning of the driver's face and eyes is a prerequisite for facial-expression-based fatigue detection. Under sufficient daytime light, the acquired images generally meet the requirements of a fatigue detection system; but at night, or under low illumination such as rain and snow, an infrared emission tube is usually installed in the collection device to supplement light and assist the camera. The Chinese patent documents with publication numbers CN201765668 and CN102436715 both adopt infrared supplementary lighting. However, an infrared light source is sensed only by a black-and-white CCD, so the acquired image is black-and-white and its quality is reduced; moreover, drivers are more prone to fatigue when driving at night, which lowers the effectiveness of facial-expression-based fatigue detection. The present invention was made accordingly.
Disclosure of Invention
The invention provides a fatigue driving identification system based on double-spectrum fusion, which solves the problem that insufficient image quality in prior-art facial-expression-based fatigue detection limits, to a certain extent, the precision and accuracy of the method.
In order to solve the problems in the prior art, the technical scheme provided by the invention is as follows:
a fatigue driving recognition system based on double-spectrum fusion comprises a double-spectrum image acquisition device, an image fusion device and a fatigue recognition device, and is characterized in that the double-spectrum image acquisition device comprises a light source, a semi-reflecting and semi-transmitting mirror, a black-and-white image sensor module and a color image sensor module, wherein the light source, the semi-reflecting and semi-transmitting mirror and the color image sensor module form a color image imaging system, and the light source, the semi-reflecting and semi-transmitting mirror and the black-and-white image sensor module form a black-and-white image imaging system; the image fusion device is respectively connected with the black-and-white image sensor module and the color image sensor module, acquires a black-and-white image acquired by the black-and-white image sensor module and a color image acquired by the color image sensor module, and fuses the color image and the black-and-white image into a driver image for identification; the fatigue identification device is used for carrying out image analysis on the fused driver image, determining the eye area of the driver through feature point positioning, judging the fatigue state and carrying out prompt or early warning according to the judged result.
The preferred technical scheme is as follows: the light source is an infrared emission tube and is used for generating an infrared light source.
The preferred technical scheme is as follows: a 940nm optical filter is arranged between the black-and-white image sensor module and the semi-reflecting and semi-transmitting mirror; the light source, the semi-reflecting and semi-transmitting mirror, the optical filter and the black-and-white image sensor form a black-and-white image imaging system.
The preferred technical scheme is as follows: fatigue recognition device contains image processing module, face detection module, characteristic point location module, fatigue identification module, tired early warning module, wherein:
the image processing module is used for carrying out domain transformation on the fused driver face image to obtain the amplitude characteristic and the phase characteristic of the image and then carrying out dimension reduction processing;
the face detection module is used for judging whether a face exists in the image or not, and if so, outputting the position, size and pose information of the face;
the characteristic point positioning module is used for positioning the face characteristics in the image, finding out key points of characteristic description and determining the eye area of the driver;
the fatigue identification module is used for carrying out fatigue judgment according to the PERCLOS index according to the positioned eye area of the driver and judging whether the driver is in a fatigue state;
and the fatigue early warning module is used for reminding the driver or sending early warning information to a related management department in a voice mode according to the fatigue judgment result.
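The PERCLOS index named above measures the proportion of time the eyes are closed beyond a set degree over a recent window of frames. The patent does not specify the closure criterion, fatigue threshold, or window length; the values below (20 % openness criterion, 0.15 fatigue threshold, a 30 s window at 25 fps) are common choices in the literature and are assumptions here. A minimal sketch:

```python
from collections import deque

class PerclosMonitor:
    """Sliding-window PERCLOS: fraction of recent frames with eyes closed.

    A frame counts as 'closed' when eye openness drops below 20 % of the
    calibrated fully-open value (the P80 criterion). All thresholds are
    illustrative assumptions, not values taken from the patent.
    """

    def __init__(self, window_frames=750, closure_ratio=0.2,
                 fatigue_threshold=0.15):
        self.window = deque(maxlen=window_frames)  # e.g. 30 s at 25 fps
        self.closure_ratio = closure_ratio
        self.fatigue_threshold = fatigue_threshold

    def update(self, openness, max_openness):
        """Record one frame; openness is e.g. the eyelid gap in pixels."""
        self.window.append(openness < self.closure_ratio * max_openness)

    def perclos(self):
        return sum(self.window) / len(self.window) if self.window else 0.0

    def is_fatigued(self):
        return self.perclos() > self.fatigue_threshold
```

In use, `update` would be fed one openness measurement per fused video frame from the eye region located by the feature-point module.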
The preferred technical scheme is as follows: the image fusion device can perform wavelet transformation and multi-resolution fusion on the black-and-white image and the color image obtained by the image acquisition device, reconstruct the image to obtain a higher-quality face image, and prepare for fatigue identification in the later period, so that the fatigue identification system can normally work under all weather conditions, and the system work is not limited by weather conditions, light changes and daytime.
The image fusion device comprises wavelet transformation, multi-resolution fusion and image reconstruction, wherein:
Wavelet transformation: multi-scale wavelet transform yields the image's low-frequency information and its high-frequency information in the horizontal, vertical, and 45° directions at each scale.
And performing multi-resolution fusion, namely performing weighted fusion on the feature maps of the two images in different levels and different feature layers to obtain a wavelet multi-resolution structure of the fused image.
And (4) image reconstruction, namely performing inverse wavelet transform according to the wavelet sequence of the fused image to reconstruct the fused image.
Another object of the present invention is to provide a fatigue driving identification method based on dual spectrum fusion, which is characterized in that the method comprises the following steps:
(1) collecting black-white images and color images of the face of a driver;
(2) carrying out image fusion on the color image and the black-and-white image to form an identified driver image;
(3) and carrying out image analysis on the fused driver image, determining the eye area of the driver through feature point positioning, judging the fatigue state, and prompting or early warning according to the judgment result.
The preferred technical scheme is as follows: in the step (2), the image fusion of the color image and the black-and-white image is carried out according to the following steps:
1) respectively perform wavelet transformation on the color image and the black-and-white image; wavelet transforms at different scales yield the image's low-frequency information and its horizontal, vertical, and 45° directional information at each scale;
2) multi-resolution fusion of color and black-and-white images: weighting and fusing the feature images of different levels and different feature layers of the color image and the black-white image at the same moment to obtain a wavelet multi-resolution structure of a fused image;
3) image reconstruction: and performing inverse wavelet transform according to the wavelet sequence of the fused image to reconstruct the fused image.
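The three fusion steps above can be sketched end-to-end. The patent does not specify the wavelet basis or the fusion weights; this illustration assumes a single-level Haar transform, averages the low-frequency band, and keeps the larger-magnitude coefficient in each detail band (a common fusion rule), operating on grayscale arrays with even dimensions:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-frequency approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal (45°) detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (exact reconstruction)."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((2 * h, 2 * w))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def fuse(mono, color_luma):
    """Fuse an infrared black-and-white image with the luma of a color image."""
    c1, c2 = haar2d(mono), haar2d(color_luma)
    ll = 0.5 * (c1[0] + c2[0])                          # average low band
    details = [np.where(np.abs(x) >= np.abs(y), x, y)   # max-abs rule
               for x, y in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)
```

A multi-scale version would recurse `haar2d` on the LL band; a production system would more likely use a wavelet library such as PyWavelets.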
In view of the above situation, the present invention aims to obtain a high-quality face image of a driver at night or under low illumination, and quickly and accurately position the face and eyes of the driver, so as to ensure that a fatigue driving recognition system works in all weather conditions and obtain good performance.
In order to solve the defects of the prior art, the invention aims to provide a fatigue driving recognition system based on dual-spectrum fusion, which obtains a high-quality face image of a driver by fusing two acquired images under white light and infrared light, thereby solving the problem that the fatigue driving recognition system acquires the high-quality face image at night or under the condition of low illumination. The method comprises the steps of carrying out face detection and facial feature point positioning through processing of high-quality face images, and judging driver fatigue through a PERCLOS index according to the eye state of a driver obtained through analysis.
To achieve the above purpose, the fatigue driving identification system based on dual-spectrum fusion comprises a dual-spectrum image acquisition device, an image fusion device, and a fatigue identification device. Through a single semi-reflecting and semi-transmitting mirror, the dual-spectrum image acquisition device can acquire images separately under white light and under infrared supplementary light. Besides ambient white light under normal conditions, the light source also includes a light-emitting source, namely an infrared emitting tube installed in the image acquisition device of the fatigue driving recognition system.
The reflection function of the semi-reflecting and semi-transmitting mirror provides the white-light path under normal conditions, and its transmission function provides the non-white-light path. Under sufficient daytime light, the white-light path yields a color image of very good quality through the color CCD. The transmitted infrared light passes through a 940nm optical filter to obtain 940nm infrared light, from which a black-and-white CCD obtains a black-and-white image.
The image fusion device applies image processing to the two images and finally fuses them into a single image. According to the image quality obtained under different illumination conditions, the color image and the black-and-white image are fused into an image of relatively clear quality.
The fatigue identification device analyzes the collected face image of the driver, determines the driver's eye area through feature-point positioning, and performs fatigue judgment.
According to the fatigue driving identification system based on the double-spectrum fusion, the wavelength of an infrared light source adopted in the light source system is 940 nm.
The fatigue recognition device comprises an image processing module, a face detection module, a feature point positioning module, a fatigue identification module, and a fatigue early-warning module, wherein:
and the image processing module is used for carrying out domain transformation on the fused image, respectively obtaining the amplitude characteristic and the phase characteristic of the image, respectively carrying out dimension reduction processing and preparing for subsequent face detection.
And the face detection module is used for modeling the processed image, judging whether a face exists in the image or not, and outputting information such as the position, size, pose and the like of the face if the face exists in the image.
And the characteristic point positioning module is used for positioning the face characteristics in the image and finding out the key points of the characteristic description. Facial features as used in this patent include eyebrows, eyes, nose and mouth. The driver eye area is finally determined.
And the fatigue identification module is used for carrying out fatigue judgment according to the PERCLOS index according to the positioned eye area and judging whether the driver is in a fatigue state.
And the fatigue early warning module is used for reminding the driver in a voice mode according to the fatigue judgment result, and the early warning information can be sent to a relevant management department.
Compared with the scheme in the prior art, the invention has the advantages that:
the double-spectrum system used in the invention can simultaneously obtain two images of the same scene under two light sources of white light and infrared light, thereby being capable of shooting the scene image in all weather. The image fusion module can fuse the black and white images and the color images of the same scene into an image with higher quality according to the image characteristics. According to the invention, the effectiveness and accuracy of fatigue driving judgment are greatly improved by positioning the eyes in the high-quality face image.
Drawings
The invention is further described with reference to the following figures and examples:
FIG. 1 is a block diagram of a fatigue driving identification system based on dual spectrum fusion according to the present invention;
fig. 2 is a block diagram of a dual spectrum image acquisition device of a fatigue driving recognition system based on dual spectrum fusion according to the present invention;
fig. 3 is a block diagram illustrating the structure of an image fusion device of a fatigue driving recognition system based on dual spectrum fusion according to the present invention.
Fig. 4 is a block diagram of a fatigue identification device of a fatigue driving identification system based on dual spectrum fusion according to the present invention.
Detailed Description
The above-described scheme is further illustrated below with reference to specific examples. It should be understood that these examples are for illustrative purposes and are not intended to limit the scope of the present invention. The conditions used in the examples may be further adjusted according to the conditions of the particular manufacturer, and the conditions not specified are generally the conditions in routine experiments.
Examples
Fig. 1 is a block diagram of a fatigue driving identification system based on dual-spectrum fusion according to the present invention, which includes a dual-spectrum image acquisition device 101, an image fusion device 102, and a fatigue identification device 103.
The dual-spectrum image acquisition device 101 comprises a light source, a half-reflecting and half-transmitting mirror, a light filter, a black-white image sensor (CCD) module and a color image sensor (CCD) module, and forms a two-path light path system. The light source, the half-reflecting and half-transmitting mirror, the optical filter and the black-and-white image sensor (CCD) module form a 1 st light path system (black-and-white image imaging system), can perform black-and-white image imaging and is used for collecting black-and-white images of a driver. The light source, the semi-reflecting and semi-transmitting mirror and the color image sensor (CCD) module form a 2 nd light path system (a color image imaging system), can perform color image imaging and is used for collecting color images of drivers.
Specifically, the light source is a light emitting source installed on an automobile, the light emitting source is an infrared emitting tube, and the infrared wavelength is 940 nm. The light source may provide both white light and infrared light for the dual-spectrum image capture device 101. Normally white light is obtained by normal lighting (daylight, sunlight, etc.).
The dual-spectrum image acquisition device 101 can reflect white light and transmit infrared light, so that the white light enters the color CCD to perform color imaging on the same scene; on the other hand, infrared light enters the black-and-white CCD to perform black-and-white imaging on the same scene, so that the dual-spectrum image acquisition device 101 realizes color and black-and-white imaging on the same scene. Color imaging and black and white imaging can be simultaneous or asynchronous.
Fig. 2 is a block diagram of a dual-spectrum image acquisition device 101 of a fatigue driving identification system based on dual-spectrum fusion according to the invention, which comprises a half-reflecting and half-transmitting mirror 201, a color CCD202, a 940nm filter 203, a black-and-white CCD204, and an image fusion module 102.
The half-reflecting and half-transmitting mirror 201 reflects white light and transmits infrared light, thereby separating the two light paths. The color CCD 202 performs color imaging of the scene using the white light reflected by the mirror 201. The 940nm filter 203 filters the light transmitted through the mirror 201 to obtain infrared light with a wavelength of 940nm. The black-and-white CCD 204 performs black-and-white imaging of the scene using the 940nm infrared light passing through the filter.
Fig. 3 is a block diagram of the image fusion device 102 of the fatigue driving identification system based on the dual spectrum fusion according to the present invention, and the fusion device 102 fuses new images with better quality according to the quality of two images obtained under different exposure conditions, so as to prepare for the fatigue identification device 103 to perform fatigue identification. The image capturing device 101 obtains a color image from the color CCD202 and a black-and-white image from the black-and-white CCD204 for the same scene. Under the condition of sufficient light in the daytime, the quality of the color image is better, but at night and under the condition of low illumination, the quality of the color image is sharply reduced, or no effective information can be acquired, but the black-and-white image obtained by the black-and-white CCD204 under the condition of infrared light supplement can meet the requirement. Therefore, two images need to be fused, so that a high-quality face image of the driver can be obtained in all weather conditions.
The image fusion apparatus 102 has three modules, a wavelet transform module 301, a multi-resolution fusion module 302, and an image reconstruction module 303. The wavelet transform module 301 is configured to perform multi-scale wavelet transform on the two images respectively to obtain high-frequency information of the two images at different scales. And a multiresolution fusion module 302, configured to perform weighted fusion on the feature maps of the two images at different levels and different feature layers to obtain a wavelet sequence of a fused image. And an image reconstruction module 303, configured to reconstruct the fused image according to the wavelet sequence of the fused image.
The fatigue recognition device 103 processes the image sent from the dual spectrum image fusion device 102, thereby performing face detection and feature point positioning, and determining driver fatigue. And if the driver drives in a fatigue way, a fatigue early warning signal is sent out.
Fig. 4 is a block diagram of a fatigue recognition device of a fatigue driving recognition system based on dual spectrum fusion according to the present invention, which includes an image processing module 401, a face detection module 402, a feature point positioning module 403, a fatigue recognition module 404, and a fatigue warning module 405.
And the image processing module 401 is configured to perform dimension reduction processing on the image obtained by fusing the dual-spectrum image fusion device 102.
The face detection module 402 is configured to model the image obtained by the image processing module 401, determine whether a face exists in the image, and if so, output information such as a position, a size, and a pose of the face.
A feature point positioning module 403, configured to position features of a face in an image, and find feature points of eyebrows, eyes, nose, and mouth, which describe key features of the face.
And the fatigue identification module 404 is configured to segment eyes in the image according to the feature point positioning result, and determine a fatigue mode according to the determination of the eye state to determine whether the driver is in a fatigue driving state.
And the fatigue early warning module 405 is configured to remind the driver in an early warning manner if the driver is in a fatigue state according to the fatigue identification result, and the related early warning information can be uploaded to a management department.
Correspondingly, the fatigue driving recognition system based on the double-spectrum fusion carries out recognition according to the following working procedures:
(1) capturing black and white and color images of a driver's face
The light source works all weather, and provides white light and an infrared light source with the wavelength of 940 nm. The dual-spectrum image acquisition device 101 acquires the face image of the driver for the same cab scene under two light sources respectively. Under the condition of sufficient light in the daytime, the color CCD202 can obtain a color image with better quality; in the case of low illumination such as at night or in the case of rain or snow, the quality of a color image is degraded and even difficult to recognize, and the quality of a black-and-white image obtained by the black-and-white CCD204 is relatively good.
(2) And fusing the images of the color image and the black and white image to form an image of the identified driver. The image fusion device 102 performs fusion and processing on the two images, so as to obtain a face image with better quality.
The image fusion of the color image and the black-and-white image is carried out according to the following steps:
1) respectively perform wavelet transformation on the color image and the black-and-white image; wavelet transforms at different scales yield the image's low-frequency information and its horizontal, vertical, and 45° directional information at each scale;
2) multi-resolution fusion of color and black-and-white images: weighting and fusing the feature images of different levels and different feature layers of the color image and the black-white image at the same moment to obtain a wavelet multi-resolution structure of a fused image;
3) image reconstruction: and performing inverse wavelet transform according to the wavelet sequence of the fused image to reconstruct the fused image.
(3) And carrying out image analysis on the fused driver image, determining the eye area of the driver through feature point positioning, judging the fatigue state, and prompting or early warning according to the judgment result.
The fatigue identification device 103 locates the face in the image by machine vision, confirms the driver's eye area through the corresponding feature-point descriptions, and performs fatigue judgment according to the PERCLOS index. If fatigue driving is detected, the fatigue warning module 405 issues a warning to alert the driver. Here, the eyes, nose, and mouth in the face image may be positioned using an ASM (Active Shape Model); this embodiment mainly performs eye positioning. The active shape model includes two parts, training and searching:
the ASM training comprises the following steps:
(1) collecting n sample pictures containing face regions;
(2) for each sample picture, manually calibrate k key feature points to form a shape vector $a_i$; the n training sample pictures thus yield n shape vectors, where $a_i$ is represented as follows:

$a_i = (x_1^i, y_1^i, x_2^i, y_2^i, \ldots, x_k^i, y_k^i), \quad i = 1, 2, \ldots, n$

where $(x_j^i, y_j^i)$ represents the coordinates of the jth feature point on the ith training sample;
(3) in order to eliminate non-shape interference caused by external factors such as differing angles, distances, and posture changes of the face in the pictures, and to make the point distribution model more effective, a Procrustes method is adopted for normalization (alignment);
(4) and performing PCA processing on the aligned shape vectors:
calculate the average shape vector: $\bar{a} = \frac{1}{n}\sum_{i=1}^{n} a_i$

calculate the covariance matrix $\Phi$: $\Phi = \frac{1}{n}\sum_{i=1}^{n}(a_i - \bar{a})^T (a_i - \bar{a})$

then solve for the eigenvalues of the covariance matrix $\Phi$ and sort them from largest to smallest;
(5) for the ith feature point, calculating its n local textures $g_{i1}, g_{i2}, \ldots, g_{in}$ (one per training image, $g_{ij}$ taken at the ith feature point of the jth training image), then calculating their mean $\bar{g}_i$ and variance $S_i$ to construct the local feature of that point:

$\bar{g}_i = \frac{1}{n}\sum_{j=1}^{n} g_{ij}$

$S_i = \frac{1}{n}\sum_{j=1}^{n}(g_{ij} - \bar{g}_i)^T \cdot (g_{ij} - \bar{g}_i)$

During each iteration, the similarity between a new feature g of a feature point and the trained local feature is measured by the Mahalanobis distance:

$f_{sim} = (g - \bar{g}_i) S_i^{-1} \cdot (g - \bar{g}_i)^T$
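As a rough numerical sketch of steps (4)–(5), the fragment below computes the mean shape, the covariance eigen-decomposition sorted in descending order, and the Mahalanobis similarity for one feature point's local texture model; the random training shapes and the profile dimensionality are placeholders for real annotated samples, not data from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 6                            # n training samples, k feature points
shapes = rng.normal(size=(n, 2 * k))    # each row: (x1, y1, ..., xk, yk), assumed aligned

# (4) PCA on the aligned shape vectors
a_bar = shapes.mean(axis=0)             # average shape vector
diff = shapes - a_bar
phi = diff.T @ diff / n                 # covariance matrix Phi
eigvals, eigvecs = np.linalg.eigh(phi)  # eigh returns ascending order
order = np.argsort(eigvals)[::-1]       # re-sort in descending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# (5) local texture model of one feature point: mean and variance over n images
textures = rng.normal(size=(n, 7))      # g_i1 .. g_in, 7-D profiles (placeholder)
g_bar = textures.mean(axis=0)
S = (textures - g_bar).T @ (textures - g_bar) / n

def f_sim(g):
    """Mahalanobis similarity between a new profile g and the trained local feature."""
    d = g - g_bar
    return float(d @ np.linalg.inv(S) @ d)
```

During search, a smaller `f_sim` means the candidate profile matches the trained local feature more closely; the mean profile itself scores exactly zero.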
Training on the sample set yields the ASM model, after which ASM search is performed: the average shape is rotated counterclockwise by θ about its center, scaled by s, and translated by $X_c$ to obtain the initial model $X = M(s, \theta)[a_i] + X_c$. Through affine transformation and parameter adjustment, the initial model is fitted to the target shape shown in the new image, and the new position of each feature point is calculated so that the feature points in the searched final shape are closest to the corresponding real feature points.
After the eyes are located, the edge feature points of the eyes are found; an appropriate height and width are determined from the pixel positions of those edge points and the size of the face image; the eye-region image is cropped out; and fatigue judgment is carried out according to the PERCLOS index.
After the positions of the two eyes in the face image are obtained, the eye features are analyzed, an eye open/closed state discrimination model is established, and whether the eyes are closed is determined through the Hough transform.
The Hough transform uses the global characteristics of an image to connect edge pixels into a closed region boundary. It can directly detect targets of known shape, and its main advantage is that it is little affected by noise and curve discontinuity. The specific steps are as follows:
1) the eyeball region of the eye is a fairly standard circle, so the circular Hough transform can effectively detect the eyeball's center position and radius: first perform edge detection, then build the accumulator array of the fast Hough transform from the edge-detection result and accumulate it. After traversing all boundary points, take the maximum of the accumulator array; its coordinates give the eyeball center position and radius, denoted (a, b, r);
2) the orientation and position of the upper eyelid differ markedly between eye states, so the upper-eyelid parameters (a0, x, y) are obtained with a parabolic fast Hough transform (a0 replaces a to avoid confusion with the eyeball parameter), which assists in characterizing the degree of eye openness;
3) based on the parameters of the eyeball and the upper eyelid acquired above, the following evaluation criteria for the degree of opening and closing of the eye state are given:
a) if a0 < 0, the eye is in the closed state;
b) if a0 > 0, the difference between the ordinates of the upper-eyelid vertex and the eyeball center is taken as the criterion of eye openness, and the degree of closure is measured by the following formula:

$ClosureRate = \frac{b - y}{r} + 0.5$
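A toy version of steps 1)–3): a brute-force circular Hough accumulator (standing in for the fast Hough transform in the text) recovers (a, b, r) from synthetic edge points, and the closure criterion is then evaluated from the upper-eyelid vertex ordinate y. The parameter grids, the synthetic circle, and the convention of returning 1.0 for a closed eye are assumptions of this sketch:

```python
import numpy as np

def hough_circle(edge_points, a_range, b_range, r_range):
    """Vote for circle parameters (a, b, r); return the accumulator maximum."""
    acc = np.zeros((len(a_range), len(b_range), len(r_range)), dtype=int)
    for (x, y) in edge_points:
        for i, a in enumerate(a_range):
            for j, b in enumerate(b_range):
                d = np.hypot(x - a, y - b)      # distance from candidate center
                for k, rc in enumerate(r_range):
                    if abs(d - rc) < 0.5:       # point lies on this candidate circle
                        acc[i, j, k] += 1
    i, j, k = np.unravel_index(acc.argmax(), acc.shape)
    return a_range[i], b_range[j], r_range[k]

def closure_rate(b, r, y, a0):
    """Eye-openness criterion: a0 < 0 -> closed; else ClosureRate = (b - y)/r + 0.5."""
    if a0 < 0:
        return 1.0                              # sketch convention: fully closed
    return (b - y) / r + 0.5

# synthetic eyeball edge: circle centered at (10, 10), radius 5
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = [(10 + 5 * np.cos(t), 10 + 5 * np.sin(t)) for t in theta]
a, b, r = hough_circle(pts, range(8, 13), range(8, 13), range(3, 8))
```

Every edge point on the synthetic circle votes for (10, 10, 5), so that cell dominates the accumulator; a real implementation would, of course, restrict the parameter grids using the detected eye region.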
For images and data over a period of time, the process of the eye going from fully open to closed can be described in terms of the degree of eye openness.
PERCLOS is an abbreviation of Percent Eye Closure: the proportion of time within a given period that the eyes are closed.
Let t1 be the time for the eye to go from fully open to 20% closed; t2 the time from fully open to 80% closed; t3 the time from fully open to 20% of the next opening; t4 the time from fully open to 80% of the next opening. The PERCLOS value f can then be calculated from the measured t1 through t4: f = (t3 − t2)/(t4 − t1);
where f is the percentage of eye-closure time within the particular period. When the PERCLOS value f > 0.15, the driver may be considered to be in a fatigued state.
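The PERCLOS computation reduces to a one-line formula over the four measured times; the sample blink timestamps below are hypothetical:

```python
def perclos(t1, t2, t3, t4):
    """f = (t3 - t2) / (t4 - t1): fraction of the interval spent >= 80% closed."""
    return (t3 - t2) / (t4 - t1)

def is_fatigued(f, threshold=0.15):
    """Per the text, f > 0.15 suggests a fatigued state."""
    return f > threshold

# hypothetical blink, times measured from the fully-open instant:
# 20% closed at 0.0 s, 80% closed at 0.1 s,
# back to 20% open at 0.5 s, back to 80% open at 0.6 s
f = perclos(0.0, 0.1, 0.5, 0.6)
```

With these numbers f = 0.4/0.6 ≈ 0.667, well above the 0.15 threshold, so this single slow blink would count toward a fatigue judgment.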
The above examples are only for illustrating the technical idea and features of the present invention, and the purpose thereof is to enable those skilled in the art to understand the content of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.

Claims (7)

1. A fatigue driving recognition system based on double-spectrum fusion comprises a double-spectrum image acquisition device, an image fusion device and a fatigue recognition device, and is characterized in that the double-spectrum image acquisition device comprises a light source, a semi-reflecting and semi-transmitting mirror, a black-and-white image sensor module and a color image sensor module, wherein the light source, the semi-reflecting and semi-transmitting mirror and the color image sensor module form a color image imaging system, and the light source, the semi-reflecting and semi-transmitting mirror and the black-and-white image sensor module form a black-and-white image imaging system; the image fusion device is respectively connected with the black-and-white image sensor module and the color image sensor module, acquires a black-and-white image acquired by the black-and-white image sensor module and a color image acquired by the color image sensor module, and fuses the color image and the black-and-white image into a driver image for identification; the fatigue identification device is used for carrying out image analysis on the fused driver image, determining the eye area of the driver through feature point positioning, judging the fatigue state and carrying out prompt or early warning according to the judgment result;
determining an eye region using an active shape model, the active shape model comprising two parts of training and searching:
the ASM training comprises the following steps:
(1) collecting n sample pictures containing face regions;
(2) for each sample picture, manually calibrating k key feature points in each training sample to form a shape vector $a_i$; thus, the n training sample pictures constitute n shape vectors, where $a_i$ is represented as follows:

$a_i = (x_1^i, y_1^i, x_2^i, y_2^i, \ldots, x_k^i, y_k^i), \quad i = 1, 2, \ldots, n$

where $(x_j^i, y_j^i)$ represents the coordinates of the jth feature point on the ith training sample;
(3) carrying out normalization or alignment operation by adopting a Procrustes method;
(4) performing PCA on the aligned shape vectors:

calculating the average shape vector: $\bar{a} = \frac{1}{n}\sum_{i=1}^{n} a_i$

calculating the covariance matrix: $\Phi = \frac{1}{n}\sum_{i=1}^{n}(a_i - \bar{a})^T \cdot (a_i - \bar{a})$

then solving for the eigenvalues of the covariance matrix $\Phi$ and sorting them in descending order;
(5) for the ith feature point, calculating its n local textures $g_{i1}, g_{i2}, \ldots, g_{in}$ (one per training image, $g_{ij}$ taken at the ith feature point of the jth training image), then calculating their mean $\bar{g}_i$ and variance $S_i$ to construct the local feature of that point:

$\bar{g}_i = \frac{1}{n}\sum_{j=1}^{n} g_{ij}$

$S_i = \frac{1}{n}\sum_{j=1}^{n}(g_{ij} - \bar{g}_i)^T \cdot (g_{ij} - \bar{g}_i)$

during each iteration, the similarity between a new feature g of a feature point and the trained local feature is measured by the Mahalanobis distance:

$f_{sim} = (g - \bar{g}_i) S_i^{-1} \cdot (g - \bar{g}_i)^T$
training on the sample set to obtain the ASM model, then performing ASM search: rotating the average shape counterclockwise by θ about its center, scaling it by s, and translating it by $X_c$ to obtain an initial model $X = M(s, \theta)[a_i] + X_c$; through affine transformation and parameter adjustment, displaying the target shape in a new image with the initial model and calculating the new position of each feature point so that the feature points in the searched final shape are closest to the corresponding real feature points;
after the positions of the two eyes of the face image are obtained, analyzing the eye characteristics and establishing an eye opening and closing state discrimination model, and determining whether the eyes are closed through Hough transformation, wherein the method specifically comprises the following steps:
1) detecting the eyeball's center position and radius using the circular Hough transform: first performing edge detection, building the accumulator array of the fast Hough transform from the edge-detection result, and accumulating it; after traversing all boundary points, taking the maximum of the accumulator array, whose coordinates give the eyeball center position (a, b) and radius r, denoted (a, b, r);
2) acquiring the upper-eyelid parameters (a0, x, y) with a parabolic fast Hough transform to assist in characterizing the degree of eye openness;
3) based on the parameters of the eyeball and the upper eyelid acquired above, the evaluation criterion of the degree of opening of the eye state is according to:
a) if a0 < 0, the eye is in the closed state;
b) if a0 > 0, the difference between the ordinates of the upper-eyelid vertex and the eyeball center is taken as the criterion of eye openness, and the degree of closure is measured by the following formula:

$ClosureRate = \frac{b - y}{r} + 0.5$;
describing the process from full opening to closing of the eyes according to the evaluation standard of the eye opening degree for the images and data in a certain time period;
measuring t1 through t4 and calculating the PERCLOS value f: f = (t3 − t2)/(t4 − t1), where t1 is the time for the eye to go from fully open to 20% closed, t2 from fully open to 80% closed, t3 from fully open to 20% of the next opening, and t4 from fully open to 80% of the next opening;
wherein f is the percentage of the eye closing time in a specific time, and when the PERCLOS value f is greater than 0.15, the driver is judged to be in a fatigue state.
2. The dual spectrum fusion based fatigue driving identification system of claim 1, wherein said light source is an infrared emitting tube for generating an infrared light source.
3. The fatigue driving identification system based on double spectrum fusion of claim 2, wherein a 940nm filter is arranged between the black-and-white image sensor module and the half-reflecting and half-transmitting mirror; the light source, the semi-reflecting and semi-transmitting mirror, the optical filter and the black-and-white image sensor form a black-and-white image imaging system.
4. The dual spectrum fusion based fatigue driving recognition system of claim 1, wherein the fatigue recognition device comprises an image processing module, a face detection module, a feature point positioning module, a fatigue recognition module, a fatigue pre-warning module, wherein:
the image processing module is used for carrying out domain transformation on the fused driver face image to obtain the amplitude characteristic and the phase characteristic of the image and then carrying out dimension reduction processing;
the face detection module is used for judging whether a face exists in the image or not, and if so, outputting the position, size and pose information of the face;
the characteristic point positioning module is used for positioning the face characteristics in the image, finding out key points of characteristic description and determining the eye area of the driver;
the fatigue identification module is used for carrying out fatigue judgment according to the PERCLOS index according to the positioned eye area of the driver and judging whether the driver is in a fatigue state;
and the fatigue early warning module is used for reminding the driver or sending early warning information to a related management department in a voice mode according to the fatigue judgment result.
5. The fatigue driving identification system based on dual-spectrum fusion of claim 1, wherein the image fusion device is capable of reconstructing the black-and-white image and the color image obtained by the image acquisition device into the reconstructed face image through wavelet transformation and multi-resolution fusion.
6. A fatigue driving identification method based on double-spectrum fusion is characterized by comprising the following steps:
(1) collecting black-and-white images and color images of the driver's face;
(2) carrying out image fusion on the color image and the black-and-white image to form an identified driver image;
(3) carrying out image analysis on the fused driver image, determining the eye area of the driver through feature point positioning, judging the fatigue state, and prompting or early warning according to the judgment result;
determining an eye region using an active shape model, the active shape model comprising two parts of training and searching:
the ASM training comprises the following steps:
1. collecting n sample pictures containing face regions;
2. for each sample picture, manually calibrating k key feature points in each training sample to form a shape vector $a_i$; thus, the n training sample pictures constitute n shape vectors, where $a_i$ is represented as follows:

$a_i = (x_1^i, y_1^i, x_2^i, y_2^i, \ldots, x_k^i, y_k^i), \quad i = 1, 2, \ldots, n$

where $(x_j^i, y_j^i)$ represents the coordinates of the jth feature point on the ith training sample;
3. carrying out normalization or alignment operation by adopting a Procrustes method;
4. performing PCA on the aligned shape vectors:

calculating the average shape vector: $\bar{a} = \frac{1}{n}\sum_{i=1}^{n} a_i$

calculating the covariance matrix: $\Phi = \frac{1}{n}\sum_{i=1}^{n}(a_i - \bar{a})^T \cdot (a_i - \bar{a})$

then solving for the eigenvalues of the covariance matrix $\Phi$ and sorting them in descending order;
5. for the ith feature point, calculating its n local textures $g_{i1}, g_{i2}, \ldots, g_{in}$ (one per training image, $g_{ij}$ taken at the ith feature point of the jth training image), then calculating their mean $\bar{g}_i$ and variance $S_i$ to construct the local feature of that point:

$\bar{g}_i = \frac{1}{n}\sum_{j=1}^{n} g_{ij}$

$S_i = \frac{1}{n}\sum_{j=1}^{n}(g_{ij} - \bar{g}_i)^T \cdot (g_{ij} - \bar{g}_i)$

during each iteration, the similarity between a new feature g of a feature point and the trained local feature is measured by the Mahalanobis distance:

$f_{sim} = (g - \bar{g}_i) S_i^{-1} \cdot (g - \bar{g}_i)^T$
training on the sample set to obtain the ASM model, then performing ASM search: rotating the average shape counterclockwise by θ about its center, scaling it by s, and translating it by $X_c$ to obtain an initial model $X = M(s, \theta)[a_i] + X_c$; through affine transformation and parameter adjustment, displaying the target shape in a new image with the initial model and calculating the new position of each feature point so that the feature points in the searched final shape are closest to the corresponding real feature points;
after the positions of the two eyes of the face image are obtained, analyzing the eye characteristics and establishing an eye opening and closing state discrimination model, and determining whether the eyes are closed through Hough transformation, wherein the method specifically comprises the following steps:
1) detecting the eyeball's center position and radius using the circular Hough transform: first performing edge detection, building the accumulator array of the fast Hough transform from the edge-detection result, and accumulating it; after traversing all boundary points, taking the maximum of the accumulator array, whose coordinates give the eyeball center position (a, b) and radius r, denoted (a, b, r);
2) acquiring the upper-eyelid parameters (a0, x, y) with a parabolic fast Hough transform to assist in characterizing the degree of eye openness;
3) based on the parameters of the eyeball and the upper eyelid acquired above, the evaluation criterion of the degree of opening of the eye state is according to:
a) if a0 < 0, the eye is in the closed state;
b) if a0 > 0, the difference between the ordinates of the upper-eyelid vertex and the eyeball center is taken as the criterion of eye openness, and the degree of closure is measured by the following formula:

$ClosureRate = \frac{b - y}{r} + 0.5$;
describing the process from full opening to closing of the eyes according to the evaluation standard of the eye opening degree for the images and data in a certain time period;
measuring t1 through t4 and calculating the PERCLOS value f: f = (t3 − t2)/(t4 − t1), where t1 is the time for the eye to go from fully open to 20% closed, t2 from fully open to 80% closed, t3 from fully open to 20% of the next opening, and t4 from fully open to 80% of the next opening;
wherein f is the percentage of the eye closing time in a specific time, and when the PERCLOS value f is greater than 0.15, the driver is judged to be in a fatigue state.
7. The fatigue driving recognition method according to claim 6, wherein the image fusion of the color image and the black-and-white image in the method step (2) is performed as follows:
1) respectively performing wavelet transformation processing on the color image and the black-and-white image, and obtaining image information in the low-frequency direction, the horizontal direction, the vertical direction and the 45-degree angle direction of the image under different scales through wavelet transformation of different scales;
2) multi-resolution fusion of color and black-and-white images: weighting and fusing the feature images of different levels and different feature layers of the color image and the black-white image at the same moment to obtain a wavelet multi-resolution structure of a fused image;
3) image reconstruction: performing the inverse wavelet transform on the wavelet sequence of the fused image to reconstruct the fused image.
CN201310731288.5A 2013-12-26 2013-12-26 Fatigue driving identification system based on double-spectrum fusion Active CN103714659B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310731288.5A CN103714659B (en) 2013-12-26 2013-12-26 Fatigue driving identification system based on double-spectrum fusion


Publications (2)

Publication Number Publication Date
CN103714659A CN103714659A (en) 2014-04-09
CN103714659B true CN103714659B (en) 2017-02-01

Family

ID=50407588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310731288.5A Active CN103714659B (en) 2013-12-26 2013-12-26 Fatigue driving identification system based on double-spectrum fusion

Country Status (1)

Country Link
CN (1) CN103714659B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104183091B (en) * 2014-08-14 2017-02-08 苏州清研微视电子科技有限公司 System for adjusting sensitivity of fatigue driving early warning system in self-adaptive mode
CN106161925B (en) * 2015-05-14 2019-03-01 聚晶半导体股份有限公司 The image processing method of image acquiring device and its combined type
WO2018040751A1 (en) * 2016-08-29 2018-03-08 努比亚技术有限公司 Image generation apparatus and method therefor, and image processing device and storage medium
WO2018082165A1 (en) * 2016-11-03 2018-05-11 华为技术有限公司 Optical imaging method and apparatus
CN106919898A (en) * 2017-01-16 2017-07-04 北京龙杯信息技术有限公司 Feature modeling method in recognition of face
CN109243144A (en) * 2018-10-16 2019-01-18 南京伊斯特机械设备有限公司 A kind of recognition of face warning system and its method for fatigue driving
CN110435665B (en) * 2019-08-14 2024-07-02 东风汽车有限公司 Driver detection device and car
CN110516435B (en) * 2019-09-02 2021-01-22 国网电子商务有限公司 Private key management method and device based on biological characteristics
CN110781793A (en) * 2019-10-21 2020-02-11 合肥成方信息技术有限公司 Artificial intelligence real-time image recognition method based on quadtree algorithm
CN111783563A (en) * 2020-06-15 2020-10-16 深圳英飞拓科技股份有限公司 Double-spectrum-based face snapshot and monitoring method, system and equipment
CN114697630A (en) * 2020-12-28 2022-07-01 比亚迪半导体股份有限公司 Driver state monitoring method and device, electronic equipment and readable storage medium
CN112866531A (en) * 2021-01-13 2021-05-28 辽宁省视讯技术研究有限公司 Multi-camera image synthesis micro-gloss image photographing system and implementation method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510007A (en) * 2009-03-20 2009-08-19 北京科技大学 Real time shooting and self-adapting fusing device for infrared light image and visible light image
CN101814137A (en) * 2010-03-25 2010-08-25 浙江工业大学 Driver fatigue monitor system based on infrared eye state identification
CN102306381A (en) * 2011-06-02 2012-01-04 西安电子科技大学 Method for fusing images based on beamlet and wavelet transform
CN103273882A (en) * 2013-06-08 2013-09-04 无锡北斗星通信息科技有限公司 Predetermining system for fatigue state of automobile driver

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007295263A (en) * 2006-04-25 2007-11-08 Fujitsu Ltd Driving control circuit and infrared imaging device


Also Published As

Publication number Publication date
CN103714659A (en) 2014-04-09


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
PP01 Preservation of patent right

Effective date of registration: 20240705

Granted publication date: 20170201

PP01 Preservation of patent right