WO2020034541A1 - Driver drowsiness detection method, computer readable storage medium, terminal device, and apparatus - Google Patents

Driver drowsiness detection method, computer readable storage medium, terminal device, and apparatus Download PDF

Info

Publication number
WO2020034541A1
WO2020034541A1 PCT/CN2018/123790 CN2018123790W
Authority
WO
WIPO (PCT)
Prior art keywords
skin color
image
color pixel
sub
vector
Prior art date
Application number
PCT/CN2018/123790
Other languages
French (fr)
Chinese (zh)
Inventor
姜军
Original Assignee
深圳壹账通智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司
Publication of WO2020034541A1 publication Critical patent/WO2020034541A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Definitions

  • the present application belongs to the field of computer technology, and particularly relates to a fatigue driving detection method, a computer-readable storage medium, a terminal device, and a device.
  • embodiments of the present application provide a fatigue driving detection method, a computer-readable storage medium, a terminal device, and a device, to solve the problem that current fatigue driving detection methods are extremely unreliable and can easily cause traffic accidents.
  • a first aspect of the embodiments of the present application provides a fatigue driving detection method, which may include:
  • if the vector similarity between the feature vector of the first sub-image and the reference vector is greater than a preset similarity threshold, it is determined that the driver is in a fatigue driving state.
  • a second aspect of the embodiments of the present application provides a computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the steps of the fatigue driving detection method described above.
  • a third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the steps of the fatigue driving detection method described above are implemented when the processor executes the computer-readable instructions.
  • a fourth aspect of the embodiments of the present application provides a fatigue driving detection device, which may include a module for implementing the steps of the foregoing fatigue driving detection method.
  • the embodiment of the present application implements an automatic detection of a driver's fatigue driving state by means of image analysis processing, provides a reliable detection standard for the driver's fatigue driving detection, and can greatly reduce the occurrence of traffic accidents.
  • FIG. 1 is a flowchart of an embodiment of a fatigue driving detection method according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of collecting a driver's face image
  • FIG. 3 is a schematic flowchart of extracting a first sub-image from a driver's face image
  • FIG. 4 is a structural diagram of an embodiment of a fatigue driving detection device in an embodiment of the present application.
  • FIG. 5 is a schematic block diagram of a terminal device according to an embodiment of the present application.
  • an embodiment of a fatigue driving detection method in an embodiment of the present application may include:
  • Step S101 Collect a driver's face image.
  • To ensure that a face image is collected, rather than a background image with no driver present, this embodiment discriminates the collected image using a method based on skin color judgment.
  • Skin color is one of the salient surface features of the human body. Although skin color differs between races and appears in different shades, after excluding the effects of brightness and viewing environment, skin tone is basically consistent, so it can be used as a basis for identification.
  • step S101 may include the steps shown in FIG. 2:
  • Step S1011 Collect an image of the driving area through a camera disposed in front of the driving area.
  • Step S1012 Convert the image of the driving area from RGB space to YCbCr space to obtain a converted driving area image.
  • In YCbCr space, Y represents luminance, and Cb and Cr represent the blue and red chrominance components, respectively; the two are collectively called the color components.
  • YCbCr space separates chrominance from luminance. In YCbCr space, skin color clusters well and follows a two-dimensional independent distribution, which bounds the skin color distribution region well and is largely unaffected by race.
  • the conversion from RGB space to YCbCr space can be achieved by the following formulas to obtain the converted driving area image: Y = 0.257×R + 0.564×G + 0.098×B + 16; Cb = -0.148×R - 0.291×G + 0.439×B + 128; Cr = 0.439×R - 0.368×G - 0.071×B + 128.
  • Step S1013 Determine, in the converted driving area image, pixels that meet preset skin color determination conditions as skin color pixels, and construct a skin color pixel set composed of each skin color pixel.
  • In the CbCr plane, skin color clusters well, so skin color pixels can be determined using preset skin color determination conditions.
  • In this embodiment, the preferred skin color determination conditions are 77 < Cb < 127 and 133 < Cr < 173; pixels satisfying these conditions are skin color pixels.
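As an illustrative sketch of steps S1012 and S1013, the color space conversion (using the RGB-to-YCbCr coefficients stated in this document) and the skin color condition can be applied per pixel; a real implementation would vectorize this over the whole image:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one RGB pixel to YCbCr with the coefficients stated in this document."""
    y = 0.257 * r + 0.564 * g + 0.098 * b + 16
    cb = -0.148 * r - 0.291 * g + 0.439 * b + 128
    cr = 0.439 * r - 0.368 * g - 0.071 * b + 128
    return y, cb, cr

def is_skin_pixel(r, g, b):
    """Preferred skin color determination condition: 77 < Cb < 127 and 133 < Cr < 173."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 < cb < 127 and 133 < cr < 173
```

Only the Cb and Cr components are tested, consistent with the scheme's use of the chrominance plane alone.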
  • Step S1014 Count the number of skin color pixels in the skin color pixel set, and calculate a dispersion degree of the skin color pixel set.
  • the dispersion degree of the skin color pixel set can be calculated according to the following formula (given as an image in the original):
  • n is the index of a skin color pixel in the skin color pixel set, 1 ≤ n ≤ N, where N is the number of skin color pixels in the set; SkinPixX_n and SkinPixY_n are the horizontal and vertical coordinates of the nth skin color pixel; and DisperDeg is the dispersion degree of the set. The larger the value, the more scattered these pixels are; the smaller the value, the more concentrated they are.
  • Step S1015 Determine whether a preset face determination condition is established.
  • For a face image, the region should be a concentrated area formed by many connected skin color pixels. Therefore, the face determination condition is that the number of skin color pixels in the skin color pixel set is greater than a preset number threshold and the dispersion degree of the skin color pixel set is less than a preset dispersion threshold, that is: N > NumThresh and DisperDeg < DisperThresh.
  • NumThresh is the number threshold, which can be set to 1000, 2000, 5000, or other values according to the actual situation.
  • DisperThresh is the dispersion threshold, which can be set to 30, 50, 100, or other values according to the actual situation.
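Steps S1014 and S1015 can be sketched as follows. The patent gives its dispersion formula only as an image, so the mean distance of the pixels from their centroid is used here as a stand-in that matches the stated behavior (larger means more scattered, smaller means more concentrated):

```python
import math

def dispersion(skin_pixels):
    """Dispersion degree of a skin color pixel set (step S1014).

    Stand-in formula (the patent's exact formula is not reproduced here):
    mean Euclidean distance of the pixels from their centroid.
    """
    n = len(skin_pixels)
    cx = sum(x for x, _ in skin_pixels) / n
    cy = sum(y for _, y in skin_pixels) / n
    return sum(math.hypot(x - cx, y - cy) for x, y in skin_pixels) / n

def is_face(skin_pixels, num_thresh=1000, disper_thresh=50):
    """Face determination condition (step S1015):
    N > NumThresh and DisperDeg < DisperThresh."""
    return len(skin_pixels) > num_thresh and dispersion(skin_pixels) < disper_thresh
```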
  • If the face determination condition does not hold, the process returns to step S1011; if it holds, step S1016 and subsequent steps are performed.
  • Step S1016 Determine a region covered by the skin color pixel set as a face image region.
  • Step S1017 Extract an image in the face image area, and determine an image in the face image area as a face image of the driver.
  • Step S102 Extract a first sub-image from a face image of the driver.
  • the first sub-image is an image of a region where the eyes of the driver are located.
  • step S102 may include the steps shown in FIG. 3:
  • Step S1021 Divide the skin color pixel set into a left skin color pixel subset and a right skin color pixel subset.
  • the pixels in the left skin color pixel subset all satisfy:
  • the pixels in the right skin color pixel subset all satisfy:
  • ln is the index of a pixel in the left skin color pixel subset, 1 ≤ ln ≤ LN, where LN is the total number of pixels in that subset, and LSkinPixX_ln is the abscissa of the lnth pixel in it;
  • rn is the index of a pixel in the right skin color pixel subset, 1 ≤ rn ≤ RN, where RN is the total number of pixels in that subset, and RSkinPixX_rn is the abscissa of the rnth pixel in it.
  • Step S1022 Calculate the abscissa of the center position of the left eye and the abscissa of the center position of the right eye, respectively.
  • the abscissa of the center position of the left eye and the abscissa of the center position of the right eye can be calculated according to the following formulas:
  • LeftEyeX is the abscissa of the center position of the left eye
  • RightEyeX is the abscissa of the center position of the right eye
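Steps S1021 and S1022 can be sketched as below. The patent's subset-splitting conditions are given only as image formulas, so two assumptions are made here: the face pixels are split at the mean abscissa, and each eye center abscissa is taken as the mean abscissa of its subset:

```python
def eye_center_abscissas(skin_pixels):
    """Sketch of steps S1021-S1022 under the assumptions stated in the lead-in."""
    mid = sum(x for x, _ in skin_pixels) / len(skin_pixels)  # assumed split line
    left = [x for x, _ in skin_pixels if x < mid]
    right = [x for x, _ in skin_pixels if x >= mid]
    left_eye_x = sum(left) / len(left)     # LeftEyeX
    right_eye_x = sum(right) / len(right)  # RightEyeX
    return left_eye_x, right_eye_x
```

The ordinates (steps S1023 and S1024) would be computed analogously from the upper skin color pixel subset.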
  • Step S1023 Divide the upper skin color pixel subset from the skin color pixel set.
  • the pixels in the upper skin color pixel subset all satisfy:
  • tn is the index of a pixel in the upper skin color pixel subset, 1 ≤ tn ≤ TN, where TN is the total number of pixels in that subset, and TopSkinPixY_tn is the ordinate of the tnth pixel in it.
  • Step S1024 Calculate the ordinate of the center position of the left eye and the ordinate of the center position of the right eye, respectively.
  • the ordinate of the center position of the left eye and the ordinate of the center position of the right eye can be calculated according to the following formulas:
  • LeftEyeY is the ordinate of the center position of the left eye
  • RightEyeY is the ordinate of the center position of the right eye.
  • Step S1025 Determine the area where the eyes are located according to the center position of the left eye, the center position of the right eye, the preset eye area height, and the preset eye area width, and use the image extracted from that area as the first sub-image.
  • If [X1, X2, Y1, Y2] is used to represent a rectangular region with abscissa from X1 to X2 and ordinate from Y1 to Y2, then the area where the eyes are located can be determined as:
  • Width is the preset eye area width, which can be set to 10, 15, 20, or other values according to the actual situation.
  • Height is the preset eye area height, which can be set to 5, 8, 10, or other values according to the actual situation.
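Step S1025 can be sketched as follows; centering each [X1, X2, Y1, Y2] rectangle on the eye center is an assumption, since the patent's rectangle formula is given only as an image:

```python
def eye_regions(left_eye, right_eye, width=15, height=8):
    """Build [X1, X2, Y1, Y2] rectangles of the preset eye area width and height,
    centered (by assumption) on each eye center position (x, y)."""
    def region(center):
        cx, cy = center
        return [cx - width // 2, cx + width // 2,
                cy - height // 2, cy + height // 2]
    return region(left_eye), region(right_eye)
```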
  • Step S103 Calculate a feature vector of the first sub-image.
  • a local binary pattern (Local Binary Patterns, LBP) algorithm can be used to calculate the feature vector of the first sub-image. Specifically, a relationship is constructed between each pixel and its surrounding pixels. For each pixel in the first sub-image, its gray value is converted into an eight-bit binary sequence by comparing each pixel in the neighborhood centered on it with the center pixel: the pixel value of the center point serves as the threshold, and a neighborhood point is binarized to 0 if its value is smaller than the center point's, otherwise to 1. The resulting sequence of 0s and 1s is read as an 8-bit binary number, which is converted to decimal to obtain the LBP value at the center point. After the LBP value of each pixel of the first sub-image is calculated, the statistical histogram of the LBP feature spectrum is taken as the feature vector of the first sub-image.
  • Because the quantization uses only the relationship between a point and its surrounding points, the effect of lighting on the image can be largely eliminated: as long as a change in illumination is not sufficient to change the ordering of two pixel values, the LBP value will not change, which ensures the accuracy of the feature information extracted from the first sub-image.
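The 3×3 LBP computation described above can be sketched as follows (the clockwise neighbor ordering is an assumption; any fixed ordering yields a valid 8-bit code):

```python
def lbp_value(img, x, y):
    """LBP code of pixel (x, y): each of the 8 neighbors contributes a 1 bit if
    its value is >= the center pixel, else 0; the bits form an 8-bit number."""
    center = img[y][x]
    neighbors = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
                 img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
                 img[y + 1][x - 1], img[y][x - 1]]
    code = 0
    for bit, v in enumerate(neighbors):
        if v >= center:
            code |= 1 << (7 - bit)
    return code

def lbp_histogram(img):
    """Feature vector of the sub-image: the histogram of LBP values over the
    interior pixels (border pixels are skipped in this sketch)."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_value(img, x, y)] += 1
    return hist
```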
  • Step S104 Calculate a vector similarity between a feature vector of the first sub-image and a preset reference vector.
  • the reference vector is a feature vector of the eye image in a fatigue state, and a specific calculation process thereof is similar to step S103. For details, refer to the detailed description in step S103, and details are not described herein again.
  • the vector similarity between the feature vector of the first sub-image and the reference vector may be specifically calculated according to the following formula:
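The similarity formula itself is given only as an image in the original, so the sketch below uses cosine similarity as a common choice of vector similarity between histogram feature vectors, not necessarily the patent's exact measure:

```python
import math

def vector_similarity(a, b):
    """Cosine similarity between the sub-image feature vector and the reference
    vector (a stand-in; the patent's exact formula is not reproduced here)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```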
  • Step S105 Determine whether the vector similarity between the feature vector of the first sub-image and the reference vector is greater than a preset similarity threshold.
  • If the vector similarity between the feature vector of the first sub-image and the reference vector is greater than the similarity threshold, step S106 is performed; if it is less than or equal to the similarity threshold, step S107 is performed.
  • the similarity threshold may be set according to actual conditions, for example, it may be set to 60%, 70%, 80%, or other values.
  • Step S106 Determine that the driver is in a fatigue driving state.
  • Step S107 Determine that the driver is in a normal state.
  • Further, an image can be acquired at a set cycle (1 second, 2 seconds, 10 seconds, or other values) over a set period of time (5 minutes, 10 minutes, 20 minutes, or other values), repeating the above process for each image. The proportion of face images in the fatigue driving state among all collected face images is then calculated. If the proportion is greater than a preset proportion threshold (which can be set to 60%, 70%, 80%, or other values according to the actual situation), the driver is determined to be driving fatigued; if the proportion is less than or equal to the threshold, the driver is determined to be in a normal state.
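The windowed decision rule over a set period can be sketched as:

```python
def fatigued_over_window(frame_is_fatigued, proportion_thresh=0.7):
    """Decide fatigue over a window of per-frame results: the driver is judged
    to be driving fatigued when the proportion of fatigued frames exceeds the
    preset proportion threshold (e.g. 60%, 70%, 80%)."""
    count = sum(1 for f in frame_is_fatigued if f)
    return count / len(frame_is_fatigued) > proportion_thresh
```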
  • The embodiment of the present application realizes automatic detection of the driver's fatigue driving state by means of image analysis. Considering that the driver's eye area exhibits significant, easily identified features during fatigued driving, the embodiment uses the feature vector of the eye image as the basis for detection: first collect the driver's face image, extract the eye-area image from it, and calculate its feature vector; then compare it against the feature vector of an eye image in the fatigued state, used as the reference vector. If the similarity between the two is greater than a certain threshold, the driver's eye area already exhibits the physiological characteristics of fatigue, and it can be determined that the driver is in a fatigued driving state.
  • a reliable detection standard is thus provided for driver fatigue driving detection, which can greatly reduce the occurrence of traffic accidents.
  • FIG. 4 shows a structural diagram of an embodiment of a fatigue driving detection device provided by an embodiment of the present application.
  • a fatigue driving detection device may include:
  • a face image acquisition module 401 configured to collect a driver's face image
  • a sub-image extraction module 402 configured to extract a first sub-image from a face image of the driver, where the first sub-image is an image of an area where the driver's eyes are located;
  • a feature vector calculation module 403, configured to calculate a feature vector of the first sub-image
  • a vector similarity calculation module 404 configured to calculate a vector similarity between a feature vector of the first sub-image and a preset reference vector, where the reference vector is a feature vector of an eye image in a fatigue state;
  • the fatigue driving state determination module 405 is configured to determine that the driver is in a fatigue driving state if the vector similarity between the feature vector of the first sub-image and the reference vector is greater than a preset similarity threshold.
  • the face image acquisition module includes:
  • An image acquisition unit configured to acquire an image of the driving area through a camera disposed in front of the driving area
  • An image space conversion unit configured to convert an image of the driving area from an RGB space to a YCbCr space to obtain a converted driving area image
  • a skin color pixel determination unit configured to determine, in the converted driving area image, pixels that meet preset skin color determination conditions as skin color pixels, and construct a skin color pixel set composed of each skin color pixel;
  • Skin color pixel counting unit configured to count the number of skin color pixels in the skin color pixel set
  • a dispersion degree calculating unit configured to calculate a dispersion degree of the skin color pixel set
  • a face image region determining unit is configured to: if the number of skin color pixels in the skin color pixel set is greater than a preset number threshold, and the dispersion degree of the skin color pixel set is less than a preset dispersion degree threshold, The area covered by the skin color pixel set is determined as a face image area;
  • a face image determination unit is configured to extract an image in the face image area, and determine an image in the face image area as a face image of the driver.
  • the dispersion calculation unit may include:
  • a dispersion degree calculation subunit is configured to calculate a dispersion degree of the skin color pixel set.
  • sub-image extraction module may include:
  • a first dividing unit configured to divide the skin color pixel set into a left skin color pixel subset and a right skin color pixel subset
  • a first calculation unit configured to calculate the abscissa of the center position of the left eye and the abscissa of the center position of the right eye, respectively;
  • a second dividing unit configured to divide an upper skin color pixel subset from the skin color pixel set
  • a second calculation unit configured to calculate the vertical coordinate of the center position of the left eye and the vertical coordinate of the center position of the right eye, respectively;
  • a sub-image extraction unit configured to determine an area where an eye is located according to the center position of the left eye, the center position of the right eye, a preset height of the eye area, and a preset width of the eye area, and extract an image from the area where the eye is As the first sub-image.
  • the vector similarity calculation module may include:
  • a vector similarity calculation unit is configured to calculate a vector similarity between a feature vector of the first sub-image and the reference vector.
  • FIG. 5 shows a schematic block diagram of a terminal device according to an embodiment of the present application. For ease of description, only parts related to the embodiment of the present application are shown.
  • the terminal device 5 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device 5 may include a processor 50, a memory 51, and computer-readable instructions 52 stored in the memory 51 and executable on the processor 50, such as computer-readable instructions for performing the fatigue driving detection method described above.
  • When the processor 50 executes the computer-readable instructions 52, the steps in the foregoing embodiments of the fatigue driving detection method are implemented.
  • When each functional unit in each embodiment of the present application is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • The technical solution of the present application, in essence the part that contributes beyond the existing technology, or all or part of the technical solution, can be embodied in the form of a software product stored in a storage medium,
  • including computer-readable instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present application.
  • The foregoing storage media include media that can store computer-readable instructions, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present application belongs to the technical field of computers, and particularly relates to a driver drowsiness detection method, a computer readable storage medium, a terminal device, and an apparatus. In the method, an image of the face of a driver is collected; a first sub-image is extracted from the image of the face of the driver, the first sub-image being an image of a region where the eyes of the driver are located; a feature vector of the first sub-image is calculated; a vector similarity degree between the feature vector of the first sub-image and a preset reference vector is calculated, the reference vector being a feature vector of an image of eyes in a fatigued state; if the vector similarity degree between the feature vector of the first sub-image and the reference vector is larger than a preset similarity degree threshold, the driver is determined to be in a drowsy state. By means of the embodiments of the present application, a reliable detection standard is provided for driver drowsiness detection which can greatly reduce the occurrence of traffic accidents.

Description

Fatigue driving detection method, readable storage medium, terminal device, and apparatus
This application claims priority to Chinese patent application No. 201810921792.4, filed with the Chinese Patent Office on August 14, 2018 and entitled "A fatigue driving detection method, computer-readable storage medium, and terminal device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application belongs to the field of computer technology, and particularly relates to a fatigue driving detection method, a computer-readable storage medium, a terminal device, and a device.
Background
At present, as cars become ever more widespread, the safety hazards associated with driving also increase. Traffic accidents caused by driver fatigue occur from time to time, and determining whether a driver is in a fatigued driving state has become an urgent problem. Currently, however, this judgment generally relies only on the driver's own perception or on observation by nearby passengers, which is extremely unreliable and can easily lead to traffic accidents.
Technical Problem
In view of this, embodiments of the present application provide a fatigue driving detection method, a computer-readable storage medium, a terminal device, and a device, to solve the problem that current fatigue driving detection methods are extremely unreliable and can easily lead to traffic accidents.
Technical Solution
A first aspect of the embodiments of the present application provides a fatigue driving detection method, which may include:
collecting a driver's face image;
extracting a first sub-image from the driver's face image, where the first sub-image is an image of the region where the driver's eyes are located;
calculating a feature vector of the first sub-image;
calculating a vector similarity between the feature vector of the first sub-image and a preset reference vector, where the reference vector is a feature vector of an eye image in a fatigued state;
if the vector similarity between the feature vector of the first sub-image and the reference vector is greater than a preset similarity threshold, determining that the driver is in a fatigued driving state.
A second aspect of the embodiments of the present application provides a computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the steps of the fatigue driving detection method described above.
A third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the steps of the fatigue driving detection method described above are implemented when the processor executes the computer-readable instructions.
A fourth aspect of the embodiments of the present application provides a fatigue driving detection device, which may include a module for implementing the steps of the foregoing fatigue driving detection method.
Beneficial Effects
The embodiments of the present application achieve automatic detection of a driver's fatigued driving state by means of image analysis, provide a reliable detection standard for driver fatigue detection, and can greatly reduce the occurrence of traffic accidents.
Brief Description of the Drawings
FIG. 1 is a flowchart of an embodiment of a fatigue driving detection method according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of collecting a driver's face image;
FIG. 3 is a schematic flowchart of extracting a first sub-image from a driver's face image;
FIG. 4 is a structural diagram of an embodiment of a fatigue driving detection device according to an embodiment of the present application;
FIG. 5 is a schematic block diagram of a terminal device according to an embodiment of the present application.
Embodiments of the Invention
Referring to FIG. 1, an embodiment of a fatigue driving detection method in an embodiment of the present application may include:
Step S101: Collect a driver's face image.
To ensure that a face image is collected, rather than a background image with no driver present, this embodiment discriminates the collected image using a method based on skin color judgment. Skin color is one of the salient surface features of the human body. Although skin color differs between races and appears in different shades, after excluding the effects of brightness and viewing environment, skin tone is basically consistent, so it can be used as a basis for identification.
Specifically, step S101 may include the steps shown in FIG. 2:
Step S1011: Collect an image of the driving area through a camera disposed in front of the driving area.
Step S1012: Convert the image of the driving area from RGB space to YCbCr space to obtain a converted driving area image.
In YCbCr space, Y represents luminance, and Cb and Cr represent the blue and red chrominance components, respectively; the two are collectively called the color components. YCbCr space separates chrominance from luminance; in YCbCr space, skin color clusters well and follows a two-dimensional independent distribution, which bounds the skin color distribution region well and is largely unaffected by race. Comparing RGB space with YCbCr space: when light intensity changes, the three color components R (red), G (green), and B (blue) in RGB space all change simultaneously, whereas in YCbCr space the influence of light intensity is relatively isolated and the color components are little affected by it, so YCbCr space is more suitable for skin color recognition.
Specifically, the conversion from RGB space to YCbCr space can be achieved by the following formulas to obtain the converted driving area image:
Y = 0.257×R + 0.564×G + 0.098×B + 16;
Cb = -0.148×R - 0.291×G + 0.439×B + 128;
Cr = 0.439×R - 0.368×G - 0.071×B + 128.
Step S1013: In the converted driving area image, determine pixels that meet preset skin color determination conditions as skin color pixels, and construct a skin color pixel set composed of these skin color pixels.
Since the two chrominance components of skin color in YCbCr space are little affected by luminance information, this solution directly considers the CbCr components of YCbCr space, mapped into a two-dimensional, independently distributed CbCr space. In the CbCr space, skin color clusters well, and the skin color pixels can be determined using preset skin color determination conditions. In this embodiment, the preferred conditions are 77 < Cb < 127 and 133 < Cr < 173; pixels satisfying these conditions are skin color pixels.
Step S1014: Count the number of skin color pixels in the skin color pixel set, and calculate the dispersion degree of the skin color pixel set.
具体地,可以根据下式计算所述肤色像素点集合的分散度:Specifically, the dispersion degree of the skin color pixel set can be calculated according to the following formula:
Figure PCTCN2018123790-appb-000001
Figure PCTCN2018123790-appb-000001
其中,n为所述肤色像素点集合中肤色像素点的序号,1≤n≤N,N为所述肤色像素点集合中肤色像素点的数目,SkinPixX_n为所述肤色像素点集合中第n个肤色像素点的横坐标,SkinPixY_n为所述肤色像素点集合中第n个肤色像素点的纵坐标,DisperDeg为所述肤色像素点集合的分散度,其取值越大,则说明这些像素点越分散,其取值越小,则说明这些像素点越集中。Here, n is the index of a skin color pixel in the skin color pixel set, 1 ≤ n ≤ N; N is the number of skin color pixels in the set; SkinPixX_n is the abscissa of the n-th skin color pixel in the set; SkinPixY_n is the ordinate of the n-th skin color pixel in the set; and DisperDeg is the dispersion of the set. A larger value indicates that the pixels are more scattered, while a smaller value indicates that they are more concentrated.
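原文的分散度公式以图像形式给出、此处未能提取;下面以各肤色像素点到质心的平均欧氏距离作为分散度的一种假设实现,仅供示意:The dispersion formula in the original is an image placeholder that was not extracted; the sketch below assumes dispersion to be the mean Euclidean distance of the skin color pixels from their centroid, for illustration only:

```python
import math

def dispersion(points):
    """假设实现:以各点到质心的平均欧氏距离作为分散度,取值越大越分散。
    Hypothetical implementation: mean distance from the centroid;
    a larger value means the points are more scattered."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return sum(math.hypot(x - cx, y - cy) for x, y in points) / n
```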
步骤S1015、判断预设的人脸判定条件是否成立。Step S1015: Determine whether a preset face determination condition is established.
对于人脸图像而言,应该是由众多的肤色像素点连接而成的一个集中区域,因此,所述人脸判定条件为所述肤色像素点集合中肤色像素点的数目大于预设的数目阈值,且所述肤色像素点集合的分散度小于预设的分散度阈值,即:A face image should be a concentrated region formed by many connected skin color pixels. Therefore, the face determination condition is that the number of skin color pixels in the skin color pixel set is greater than a preset number threshold and the dispersion of the set is less than a preset dispersion threshold, that is:
N>NumThresh 且 DisperDeg<DisperThresh
N > NumThresh and DisperDeg < DisperThresh
其中,NumThresh为所述数目阈值,可以根据实际情况将其设置为1000、2000、5000或者其它的取值,DisperThresh为所述分散度阈值,可以根据实际情况将其设置为30、50、100或者其它的取值。Here, NumThresh is the number threshold, which may be set to 1000, 2000, 5000, or another value according to the actual situation; DisperThresh is the dispersion threshold, which may be set to 30, 50, 100, or another value according to the actual situation.
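人脸判定条件本身可以写成如下判断(阈值取值仅为文中示例):The face determination condition itself can be written as the following check (the threshold values are only the examples given in the text):

```python
def is_face_region(num_skin, disper_deg, num_thresh=1000, disper_thresh=50):
    """肤色像素点数目大于数目阈值,且分散度小于分散度阈值,才判定为人脸区域。
    The region counts as a face only if the skin pixel count exceeds the
    number threshold and the dispersion is below the dispersion threshold."""
    return num_skin > num_thresh and disper_deg < disper_thresh
```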
若所述人脸判定条件不成立,则返回执行步骤S1011,若所述人脸判定条件成立,则执行步骤S1016及其后续步骤。If the face determination condition is not satisfied, the process returns to step S1011, and if the face determination condition is established, step S1016 and subsequent steps are performed.
步骤S1016、将所述肤色像素点集合所覆盖的区域确定为人脸图像区域。Step S1016: Determine a region covered by the skin color pixel set as a face image region.
步骤S1017、提取所述人脸图像区域中的图像,并将所述人脸图像区域中的图像确定为所述驾驶员的人脸图像。Step S1017: Extract an image in the face image area, and determine an image in the face image area as a face image of the driver.
步骤S102、从所述驾驶员的人脸图像中提取出第一子图像。Step S102: Extract a first sub-image from a face image of the driver.
所述第一子图像为所述驾驶员的眼睛所在区域的图像。The first sub-image is an image of a region where the eyes of the driver are located.
具体地,步骤S102可以包括如图3所示的步骤:Specifically, step S102 may include the steps shown in FIG. 3:
步骤S1021、将所述肤色像素点集合划分为左侧肤色像素点子集和右侧肤色像素点子集。Step S1021, dividing the skin color pixel set into a left skin color pixel subset and a right skin color pixel subset.
其中,所述左侧肤色像素点子集中的像素点均满足:Wherein, the pixels in the left skin color pixel subset are all satisfied:
Figure PCTCN2018123790-appb-000003
Figure PCTCN2018123790-appb-000003
所述右侧肤色像素点子集中的像素点均满足:The pixels in the right skin color pixel subset all satisfy:
Figure PCTCN2018123790-appb-000004
Figure PCTCN2018123790-appb-000004
ln为所述左侧肤色像素点子集中的像素点的序号,1≤ln≤LN,LN为所述左侧肤色像素点子集中的像素点的总数目,LSkinPixX_ln为所述左侧肤色像素点子集中的第ln个像素点的横坐标,rn为所述右侧肤色像素点子集中的像素点的序号,1≤rn≤RN,RN为所述右侧肤色像素点子集中的像素点的总数目,RSkinPixX_rn为所述右侧肤色像素点子集中的第rn个像素点的横坐标。Here, ln is the index of a pixel in the left skin color pixel subset, 1 ≤ ln ≤ LN; LN is the total number of pixels in the left subset; LSkinPixX_ln is the abscissa of the ln-th pixel in the left subset; rn is the index of a pixel in the right skin color pixel subset, 1 ≤ rn ≤ RN; RN is the total number of pixels in the right subset; and RSkinPixX_rn is the abscissa of the rn-th pixel in the right subset.
步骤S1022、分别计算左眼的中心位置的横坐标以及右眼的中心位置的横坐标。In step S1022, the abscissa of the center position of the left eye and the abscissa of the center position of the right eye are calculated respectively.
具体地,可以根据下式分别计算左眼的中心位置的横坐标以及右眼的中心位置的横坐标:Specifically, the abscissa of the center position of the left eye and the abscissa of the center position of the right eye can be calculated according to the following formulas:
Figure PCTCN2018123790-appb-000005
Figure PCTCN2018123790-appb-000005
其中,LeftEyeX为左眼的中心位置的横坐标,RightEyeX为右眼的中心位置的横坐标;Among them, LeftEyeX is the abscissa of the center position of the left eye, and RightEyeX is the abscissa of the center position of the right eye;
步骤S1023、从所述肤色像素点集合划分出上侧肤色像素点子集。Step S1023: Divide the upper skin color pixel subset from the skin color pixel set.
其中,所述上侧肤色像素点子集中的像素点均满足:The pixels in the upper skin color pixel subset all satisfy:
Figure PCTCN2018123790-appb-000006
Figure PCTCN2018123790-appb-000006
tn为所述上侧肤色像素点子集中的像素点的序号,1≤tn≤TN,TN为所述上侧肤色像素点子集中的像素点的总数目,TopSkinPixY_tn为所述上侧肤色像素点子集中的第tn个像素点的纵坐标。Here, tn is the index of a pixel in the upper skin color pixel subset, 1 ≤ tn ≤ TN; TN is the total number of pixels in the upper subset; and TopSkinPixY_tn is the ordinate of the tn-th pixel in the upper subset.
步骤S1024、分别计算左眼的中心位置的纵坐标以及右眼的中心位置的纵坐标。Step S1024: Calculate the ordinate of the center position of the left eye and the ordinate of the center position of the right eye, respectively.
具体地,可以根据下式分别计算左眼的中心位置的纵坐标以及右眼的中心位置的纵坐标:Specifically, the ordinate of the center position of the left eye and the ordinate of the center position of the right eye can be calculated according to the following formulas:
Figure PCTCN2018123790-appb-000007
Figure PCTCN2018123790-appb-000007
其中,LeftEyeY为左眼的中心位置的纵坐标,RightEyeY为右眼的中心位置的纵坐标。Among them, LeftEyeY is the ordinate of the center position of the left eye, and RightEyeY is the ordinate of the center position of the right eye.
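步骤S1021至S1024的划分与求中心公式在原文中为图像、未能提取;下面按"以横坐标均值划分左右子集、以纵坐标均值划分上侧子集、以子集坐标均值作为眼睛中心"的假设给出一个示意实现:The partition and center formulas of steps S1021 to S1024 are unextracted image placeholders in the original; the sketch below assumes the set is split at the mean abscissa, that the upper subset holds points above the mean ordinate, and that eye centers are coordinate means of the corresponding subsets:

```python
def eye_centers(skin_points):
    """假设实现:返回 ((LeftEyeX, LeftEyeY), (RightEyeX, RightEyeY))。
    Hypothetical implementation; the split/averaging scheme is an assumption."""
    xs = [x for x, _ in skin_points]
    ys = [y for _, y in skin_points]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    left_xs = [x for x in xs if x <= mean_x]   # 左侧肤色像素点子集的横坐标
    right_xs = [x for x in xs if x > mean_x]   # 右侧肤色像素点子集的横坐标
    top_ys = [y for y in ys if y <= mean_y]    # 上侧子集:图像坐标系中y越小越靠上
    left_eye_x = sum(left_xs) / len(left_xs)
    right_eye_x = sum(right_xs) / len(right_xs)
    eye_y = sum(top_ys) / len(top_ys)          # 两眼中心纵坐标取同一值
    return (left_eye_x, eye_y), (right_eye_x, eye_y)
```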
步骤S1025、根据左眼的中心位置、右眼的中心位置、预设的眼睛区域高度和预设的眼睛区域宽度确定眼睛所在区域,并将从所述眼睛所在区域提取出的图像作为所述第一子图像。Step S1025: Determine the area where the eyes are located according to the center position of the left eye, the center position of the right eye, the preset eye region height, and the preset eye region width, and use the image extracted from that area as the first sub-image.
在本实施例中,可以使用[X1,X2,Y1,Y2]来表示横坐标从X1到X2,纵坐标从Y1到Y2的矩形区域,则可确定出眼睛所在区域为:In this embodiment, [X1, X2, Y1, Y2] can be used to represent a rectangular region with the abscissa from X1 to X2 and the ordinate from Y1 to Y2, then it can be determined that the area where the eyes are located is:
LeftEyeArea=[LeftEyeX1,LeftEyeX2,LeftEyeY1,LeftEyeY2]LeftEyeArea = [LeftEyeX1, LeftEyeX2, LeftEyeY1, LeftEyeY2]
RightEyeArea=[RightEyeX1,RightEyeX2,RightEyeY1,RightEyeY2]RightEyeArea = [RightEyeX1, RightEyeX2, RightEyeY1, RightEyeY2]
其中:among them:
LeftEyeX1=LeftEyeX-Width/2;LeftEyeX2=LeftEyeX+Width/2
RightEyeX1=RightEyeX-Width/2;RightEyeX2=RightEyeX+Width/2
LeftEyeY1=LeftEyeY-Height/2;LeftEyeY2=LeftEyeY+Height/2
RightEyeY1=RightEyeY-Height/2;RightEyeY2=RightEyeY+Height/2
Width为预设的眼睛区域宽度,可以根据实际情况将其设置为10、15、20或者其它取值,Height为预设的眼睛区域高度,可以根据实际情况将其设置为5、8、10或者其它取值。Width is the preset eye region width, which may be set to 10, 15, 20, or another value according to the actual situation; Height is the preset eye region height, which may be set to 5, 8, 10, or another value according to the actual situation.
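据此可以按眼睛中心加减半宽/半高来构造眼睛区域(此取法为本示例的假设):Accordingly, an eye region can be built by offsetting the eye center by half the preset width and height (this construction is an assumption of the example):

```python
def eye_area(center_x, center_y, width=15, height=8):
    """返回矩形区域 [X1, X2, Y1, Y2];宽高默认取文中示例值15和8。
    Returns the rectangle [X1, X2, Y1, Y2] around an eye center."""
    return [center_x - width / 2, center_x + width / 2,
            center_y - height / 2, center_y + height / 2]
```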
步骤S103、计算所述第一子图像的特征向量。Step S103: Calculate a feature vector of the first sub-image.
在本实施例中,可以通过局部二值模式(Local Binary Patterns,LBP)算法来计算所述第一子图像的特征向量。具体地,构造一种衡量一个像素点与其周围像素点关系的指标:对所述第一子图像中的每个像素,通过计算以其为中心的邻域内各像素和中心像素的大小关系,把像素的灰度值转化为一个八位二进制序列。以中心点的像素值为阈值,如果邻域点的像素值小于中心点,则邻域点被二值化为0,否则为1;将二值化得到的0、1序列看成一个8位二进制数,将该二进制数转化为十进制就可得到中心点处的LBP值。计算出所述第一子图像中每个像素点的LBP值后,将LBP特征谱的统计直方图确定为所述第一子图像的特征向量。In this embodiment, the feature vector of the first sub-image may be calculated using the Local Binary Patterns (LBP) algorithm. Specifically, a measure of the relationship between a pixel and its surrounding pixels is constructed: for each pixel in the first sub-image, the magnitude relationship between the center pixel and each pixel in its neighborhood is computed, converting the pixel's gray value into an eight-bit binary sequence. Taking the pixel value of the center point as the threshold, a neighborhood point is binarized to 0 if its pixel value is smaller than that of the center point, and to 1 otherwise; the resulting 0/1 sequence is treated as an 8-bit binary number, and converting it to decimal gives the LBP value at the center point. After the LBP value of every pixel in the first sub-image has been calculated, the statistical histogram of the LBP feature spectrum is taken as the feature vector of the first sub-image.
由于利用了周围点与该点的关系对该点进行量化,量化后可以更有效地消除光照对图像的影响。只要光照的变化不足以改变两个点像素值之间的大小关系,那么LBP值不会发生变化,即保证了所述第一子图像特征信息提取的准确性。Because each point is quantized using its relationship to the surrounding points, the influence of illumination on the image can be eliminated more effectively after quantization. As long as a change in illumination is not large enough to reverse the magnitude relationship between two pixel values, the LBP value does not change, which ensures the accuracy of the feature information extracted from the first sub-image.
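上述LBP计算过程可以概括为如下示意代码(8邻域,边界像素不参与;仅为示意):The LBP computation described above can be sketched as follows (8-neighborhood; border pixels are skipped; a sketch only):

```python
def lbp_histogram(img):
    """按文中描述计算8邻域LBP值并统计256维直方图作为特征向量。
    img 为二维灰度列表;边界像素不参与计算。
    Compute per-pixel 8-neighbor LBP codes and return the 256-bin histogram."""
    h, w = len(img), len(img[0])
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            center = img[i][j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                # 邻域点像素值不小于中心点记1,否则记0
                if img[i + di][j + dj] >= center:
                    code |= 1 << bit
            hist[code] += 1
    return hist
```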
步骤S104、计算所述第一子图像的特征向量与预设的基准向量之间的向量相似度。Step S104: Calculate a vector similarity between a feature vector of the first sub-image and a preset reference vector.
所述基准向量为处于疲劳状态的眼睛图像的特征向量,其具体计算过程与步骤S103类似,具体可参照步骤S103中的详细叙述,此处不再赘述。The reference vector is a feature vector of the eye image in a fatigue state, and a specific calculation process thereof is similar to step S103. For details, refer to the detailed description in step S103, and details are not described herein again.
在本实施例中,具体可以根据下式计算所述第一子图像的特征向量与所述基准向量之间的向量相似度:In this embodiment, the vector similarity between the feature vector of the first sub-image and the reference vector may be specifically calculated according to the following formula:
Figure PCTCN2018123790-appb-000016
Figure PCTCN2018123790-appb-000016
其中,所述第一子图像的特征向量为X=(x_1,x_2,...,x_d,...,x_Dim),所述基准向量为Y=(y_1,y_2,...,y_d,...,y_Dim),d为向量的维度序号,1≤d≤Dim,Dim为所述第一子图像的特征向量或所述基准向量的维度数目,x_d为所述第一子图像的特征向量在第d个维度上的分量,y_d为所述基准向量在第d个维度上的分量,SimDeg为所述第一子图像的特征向量与所述基准向量之间的向量相似度。Here, the feature vector of the first sub-image is X = (x_1, x_2, ..., x_d, ..., x_Dim) and the reference vector is Y = (y_1, y_2, ..., y_d, ..., y_Dim); d is the dimension index of the vectors, 1 ≤ d ≤ Dim; Dim is the number of dimensions of the feature vector of the first sub-image and of the reference vector; x_d is the component of the feature vector of the first sub-image in the d-th dimension; y_d is the component of the reference vector in the d-th dimension; and SimDeg is the vector similarity between the feature vector of the first sub-image and the reference vector.
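原文的相似度公式为图像、未能提取;结合上述变量说明,下面以余弦相似度作为一种假设实现,仅供示意:The similarity formula in the original is an unextracted image placeholder; given the variable definitions above, the sketch below assumes cosine similarity, for illustration only:

```python
import math

def vector_similarity(x, y):
    """假设实现:以余弦相似度作为SimDeg。
    Hypothetical implementation: cosine similarity of the two feature vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)
```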
步骤S105、判断所述第一子图像的特征向量与所述基准向量之间的向量相似度是否大于预设的相似度阈值。Step S105: Determine whether the vector similarity between the feature vector of the first sub-image and the reference vector is greater than a preset similarity threshold.
若所述第一子图像的特征向量与所述基准向量之间的向量相似度大于所述相似度阈值,则执行步骤S106,若所述第一子图像的特征向量与所述基准向量之间的向量相似度小于或等于所述相似度阈值,则执行步骤S107。If the vector similarity between the feature vector of the first sub-image and the reference vector is greater than the similarity threshold, step S106 is performed; if it is less than or equal to the similarity threshold, step S107 is performed.
所述相似度阈值可以根据实际情况进行设置,例如,可以将其设置为60%、70%、80%或者其它取值。The similarity threshold may be set according to actual conditions, for example, it may be set to 60%, 70%, 80%, or other values.
步骤S106、确定所述驾驶员处于疲劳驾驶状态。Step S106: Determine that the driver is in a fatigue driving state.
步骤S107、确定所述驾驶员处于正常状态。Step S107: Determine that the driver is in a normal state.
由于驾驶员的某些正常眨眼动作也可能被误判为疲劳状态,因此,为了增加准确度,可以在设定的时间段内(5分钟、10分钟、20分钟或者其它取值)每隔一定的周期(1秒、2秒、10秒或者其它取值)即采集一次图像并重复上述过程。然后统计疲劳驾驶状态的人脸图像在总的人脸图像中的占比,若占比大于预设的比例阈值(该比例阈值可以根据实际情况设置为60%、70%、80%或者其它取值),则可判定驾驶员处于疲劳驾驶状态,若占比小于或等于该比例阈值,则可判定驾驶员为正常状态。Since some normal blinks by the driver may also be misjudged as a fatigue state, to improve accuracy an image may be captured, and the above process repeated, at a fixed interval (1 second, 2 seconds, 10 seconds, or another value) within a set period (5 minutes, 10 minutes, 20 minutes, or another value). The proportion of face images judged to be in the fatigue driving state among all captured face images is then computed; if this proportion is greater than a preset ratio threshold (which may be set to 60%, 70%, 80%, or another value according to the actual situation), the driver is determined to be in a fatigue driving state, and if it is less than or equal to the ratio threshold, the driver is determined to be in a normal state.
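上述按时间窗口统计占比的判定逻辑可以示意如下(0.7仅为文中示例阈值之一):The windowed ratio-based decision described above can be sketched as follows (0.7 is just one of the example thresholds given in the text):

```python
def fatigued_over_window(frame_states, ratio_thresh=0.7):
    """frame_states: 时间窗口内逐帧的判定结果列表,True表示该帧判为疲劳。
    当疲劳帧占比大于比例阈值时判定为疲劳驾驶状态。
    Returns True when the fatigue-frame ratio exceeds the threshold."""
    if not frame_states:
        return False
    ratio = sum(frame_states) / len(frame_states)
    return ratio > ratio_thresh
```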
综上所述,本申请实施例通过图像分析处理的手段实现了对驾驶员疲劳驾驶状态的自动化检测,考虑到驾驶员在疲劳驾驶时眼睛区域会出现较易识别的显著特征,本申请实施例将眼睛图像的特征向量作为检测的依据,首先采集驾驶员的人脸图像,从中提取出眼睛区域的图像,并计算其特征向量,然后将处于疲劳状态的眼睛图像的特征向量作为对比的基准向量,若两者的相似度大于一定的阈值,这说明驾驶员的眼睛区域已经呈现出了疲劳的生理特征,可确定该驾驶员正处于疲劳驾驶状态。通过本申请实施例,为对驾驶员进行疲劳驾驶检测提供了可靠的检测标准,可以大大减少交通意外事故的发生。In summary, the embodiments of the present application realize automated detection of a driver's fatigue driving state by means of image analysis. Considering that, when a driver drives while fatigued, the eye region exhibits salient features that are relatively easy to identify, the embodiments of the present application use the feature vector of the eye image as the basis for detection: first, the driver's face image is captured, the image of the eye region is extracted from it, and its feature vector is calculated; then the feature vector of an eye image in a fatigue state is used as the reference vector for comparison. If the similarity between the two exceeds a certain threshold, the driver's eye region already exhibits the physiological characteristics of fatigue, and it can be determined that the driver is in a fatigue driving state. The embodiments of the present application thus provide a reliable detection standard for fatigue driving detection, which can greatly reduce the occurrence of traffic accidents.
对应于上文实施例所述的一种疲劳驾驶检测方法,图4示出了本申请实施例提供的一种疲劳驾驶检测装置的一个实施例结构图。Corresponding to the fatigue driving detection method described in the above embodiment, FIG. 4 shows a structural diagram of an embodiment of a fatigue driving detection device provided by an embodiment of the present application.
本实施例中,一种疲劳驾驶检测装置可以包括:In this embodiment, a fatigue driving detection device may include:
人脸图像采集模块401,用于采集驾驶员的人脸图像;A face image acquisition module 401, configured to collect a driver's face image;
子图像提取模块402,用于从所述驾驶员的人脸图像中提取出第一子图像,所述第一子图像为所述驾驶员的眼睛所在区域的图像;A sub-image extraction module 402, configured to extract a first sub-image from a face image of the driver, where the first sub-image is an image of an area where the driver's eyes are located;
特征向量计算模块403,用于计算所述第一子图像的特征向量;A feature vector calculation module 403, configured to calculate a feature vector of the first sub-image;
向量相似度计算模块404,用于计算所述第一子图像的特征向量与预设的基准向量之间的向量相似度,所述基准向量为处于疲劳状态的眼睛图像的特征向量;A vector similarity calculation module 404, configured to calculate a vector similarity between a feature vector of the first sub-image and a preset reference vector, where the reference vector is a feature vector of an eye image in a fatigue state;
疲劳驾驶状态确定模块405,用于若所述第一子图像的特征向量与所述基准向量之间的向量相似度大于预设的相似度阈值,则确定所述驾驶员处于疲劳驾驶状态。The fatigue driving state determination module 405 is configured to determine that the driver is in a fatigue driving state if the vector similarity between the feature vector of the first sub-image and the reference vector is greater than a preset similarity threshold.
进一步地,所述人脸图像采集模块包括:Further, the face image acquisition module includes:
图像采集单元,用于通过设置在驾驶区域前方的摄像头采集所述驾驶区域的图像;An image acquisition unit, configured to acquire an image of the driving area through a camera disposed in front of the driving area;
图像空间转换单元,用于将所述驾驶区域的图像由RGB空间转换到YCbCr空间,得到转换后的驾驶区域图像;An image space conversion unit, configured to convert an image of the driving area from an RGB space to a YCbCr space to obtain a converted driving area image;
肤色像素点确定单元,用于在所述转换后的驾驶区域图像中将满足预设的肤色判定条件的像素点确定为肤色像素点,并构造由各个肤色像素点组成的肤色像素点集合;A skin color pixel determination unit, configured to determine, in the converted driving area image, pixels that meet preset skin color determination conditions as skin color pixels, and construct a skin color pixel set composed of each skin color pixel;
肤色像素点统计单元,用于统计所述肤色像素点集合中肤色像素点的数目;Skin color pixel counting unit, configured to count the number of skin color pixels in the skin color pixel set;
分散度计算单元,用于计算所述肤色像素点集合的分散度;A dispersion degree calculating unit, configured to calculate a dispersion degree of the skin color pixel set;
人脸图像区域确定单元,用于若所述肤色像素点集合中肤色像素点的数目大于预设的数目阈值,且所述肤色像素点集合的分散度小于预设的分散度阈值,则将所述肤色像素点集合所覆盖的区域确定为人脸图像区域;A face image region determining unit, configured to determine the area covered by the skin color pixel set as a face image region if the number of skin color pixels in the skin color pixel set is greater than a preset number threshold and the dispersion of the skin color pixel set is less than a preset dispersion threshold;
人脸图像确定单元,用于提取所述人脸图像区域中的图像,并将所述人脸图像区域中的图像确定为所述驾驶员的人脸图像。A face image determination unit is configured to extract an image in the face image area, and determine an image in the face image area as a face image of the driver.
进一步地,所述分散度计算单元可以包括:Further, the dispersion calculation unit may include:
分散度计算子单元,用于计算所述肤色像素点集合的分散度。A dispersion degree calculation subunit is configured to calculate a dispersion degree of the skin color pixel set.
进一步地,所述子图像提取模块可以包括:Further, the sub-image extraction module may include:
第一划分单元,用于将所述肤色像素点集合划分为左侧肤色像素点子集和右侧肤色像素点子集;A first dividing unit, configured to divide the skin color pixel set into a left skin color pixel subset and a right skin color pixel subset;
第一计算单元,用于分别计算左眼的中心位置的横坐标以及右眼的中心位置的横坐标;A first calculation unit, configured to calculate the abscissa of the center position of the left eye and the abscissa of the center position of the right eye, respectively;
第二划分单元,用于从所述肤色像素点集合划分出上侧肤色像素点子集;A second dividing unit, configured to divide an upper skin color pixel subset from the skin color pixel set;
第二计算单元,用于分别计算左眼的中心位置的纵坐标以及右眼的中心位置的纵坐标;A second calculation unit, configured to calculate the vertical coordinate of the center position of the left eye and the vertical coordinate of the center position of the right eye, respectively;
子图像提取单元,用于根据左眼的中心位置、右眼的中心位置、预设的眼睛区域高度和预设的眼睛区域宽度确定眼睛所在区域,并将从所述眼睛所在区域提取出的图像作为所述第一子图像。A sub-image extraction unit, configured to determine an area where an eye is located according to the center position of the left eye, the center position of the right eye, a preset height of the eye area, and a preset width of the eye area, and extract an image from the area where the eye is As the first sub-image.
进一步地,所述向量相似度计算模块可以包括:Further, the vector similarity calculation module may include:
向量相似度计算单元,用于计算所述第一子图像的特征向量与所述基准向量之间的向量相似度。A vector similarity calculation unit is configured to calculate a vector similarity between a feature vector of the first sub-image and the reference vector.
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的装置,模块和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and brevity of description, the specific working processes of the devices, modules, and units described above can refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
图5示出了本申请实施例提供的一种终端设备的示意框图,为了便于说明,仅示出了与本申请实施例相关的部分。FIG. 5 shows a schematic block diagram of a terminal device according to an embodiment of the present application. For ease of description, only parts related to the embodiment of the present application are shown.
在本实施例中,所述终端设备5可以是桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。该终端设备5可包括:处理器50、存储器51以及存储在所述存储器51中并可在所述处理器50上运行的计算机可读指令52,例如执行上述的疲劳驾驶检测方法的计算机可读指令。所述处理器50执行所述计算机可读指令52时实现上述各个疲劳驾驶检测方法实施例中的步骤。In this embodiment, the terminal device 5 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device 5 may include a processor 50, a memory 51, and computer-readable instructions 52 stored in the memory 51 and executable on the processor 50, such as computer-readable instructions for performing the fatigue driving detection method described above. When the processor 50 executes the computer-readable instructions 52, the steps in the foregoing embodiments of the fatigue driving detection method are implemented.
在本申请各个实施例中的各功能单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干计算机可读指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储计算机可读指令的介质。If the functional units in the embodiments of the present application are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present application, the part that contributes to the prior art, or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several computer-readable instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include various media that can store computer-readable instructions, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Claims (20)

  1. 一种疲劳驾驶检测方法,其特征在于,包括:A fatigue driving detection method, comprising:
    采集驾驶员的人脸图像;Collect the driver's face image;
    从所述驾驶员的人脸图像中提取出第一子图像,所述第一子图像为所述驾驶员的眼睛所在区域的图像;Extracting a first sub-image from the driver's face image, where the first sub-image is an image of a region where the driver's eyes are located;
    计算所述第一子图像的特征向量;Calculating a feature vector of the first sub-image;
    计算所述第一子图像的特征向量与预设的基准向量之间的向量相似度,所述基准向量为处于疲劳状态的眼睛图像的特征向量;Calculating a vector similarity between a feature vector of the first sub-image and a preset reference vector, where the reference vector is a feature vector of an eye image in a fatigue state;
    若所述第一子图像的特征向量与所述基准向量之间的向量相似度大于预设的相似度阈值,则确定所述驾驶员处于疲劳驾驶状态。If the vector similarity between the feature vector of the first sub-image and the reference vector is greater than a preset similarity threshold, it is determined that the driver is in a fatigue driving state.
  2. 根据权利要求1所述的疲劳驾驶检测方法,其特征在于,所述采集驾驶员的人脸图像包括:The fatigue driving detection method according to claim 1, wherein the collecting a driver's face image comprises:
    通过设置在驾驶区域前方的摄像头采集所述驾驶区域的图像;Collecting an image of the driving area through a camera disposed in front of the driving area;
    将所述驾驶区域的图像由RGB空间转换到YCbCr空间,得到转换后的驾驶区域图像;Converting the image of the driving area from RGB space to YCbCr space to obtain a converted image of the driving area;
    在所述转换后的驾驶区域图像中将满足预设的肤色判定条件的像素点确定为肤色像素点,并构造由各个肤色像素点组成的肤色像素点集合;Determining, in the converted driving area image, pixels that meet preset skin color determination conditions as skin color pixels, and constructing a skin color pixel set composed of each skin color pixel;
    统计所述肤色像素点集合中肤色像素点的数目,并计算所述肤色像素点集合的分散度;Counting the number of skin color pixels in the skin color pixel set, and calculating a dispersion degree of the skin color pixel set;
    若所述肤色像素点集合中肤色像素点的数目大于预设的数目阈值,且所述肤色像素点集合的分散度小于预设的分散度阈值,则将所述肤色像素点集合所覆盖的区域确定为人脸图像区域;If the number of skin color pixels in the skin color pixel set is greater than a preset number threshold, and the dispersion of the skin color pixel set is less than a preset dispersion threshold, the area covered by the skin color pixel set is determined as a face image area;
    提取所述人脸图像区域中的图像,并将所述人脸图像区域中的图像确定为所述驾驶员的人脸图像。An image in the face image area is extracted, and an image in the face image area is determined as a face image of the driver.
  3. 根据权利要求2所述的疲劳驾驶检测方法,其特征在于,所述计算所述肤色像素点集合的分散度包括:The fatigue driving detection method according to claim 2, wherein the calculating the dispersion degree of the skin color pixel set comprises:
    根据下式计算所述肤色像素点集合的分散度:Calculate the dispersion of the skin color pixel set according to the following formula:
    Figure PCTCN2018123790-appb-100001
    Figure PCTCN2018123790-appb-100001
    其中,n为所述肤色像素点集合中肤色像素点的序号,1≤n≤N,N为所述肤色像素点集合中肤色像素点的数目,SkinPixX_n为所述肤色像素点集合中第n个肤色像素点的横坐标,SkinPixY_n为所述肤色像素点集合中第n个肤色像素点的纵坐标,DisperDeg为所述肤色像素点集合的分散度。Here, n is the index of a skin color pixel in the skin color pixel set, 1 ≤ n ≤ N; N is the number of skin color pixels in the set; SkinPixX_n is the abscissa of the n-th skin color pixel in the set; SkinPixY_n is the ordinate of the n-th skin color pixel in the set; and DisperDeg is the dispersion of the skin color pixel set.
  4. 根据权利要求3所述的疲劳驾驶检测方法,其特征在于,所述从所述驾驶员的人脸图像中提取出第一子图像包括:The fatigue driving detection method according to claim 3, wherein the extracting a first sub-image from a face image of the driver comprises:
    将所述肤色像素点集合划分为左侧肤色像素点子集和右侧肤色像素点子集,其中,所述左侧肤色像素点子集中的像素点均满足:Dividing the skin color pixel set into a left skin color pixel subset and a right skin color pixel subset, wherein the pixels in the left skin color pixel subset all satisfy:
    Figure PCTCN2018123790-appb-100002
    Figure PCTCN2018123790-appb-100002
    所述右侧肤色像素点子集中的像素点均满足:The pixels in the right skin color pixel subset all satisfy:
    Figure PCTCN2018123790-appb-100003
    Figure PCTCN2018123790-appb-100003
    ln为所述左侧肤色像素点子集中的像素点的序号,1≤ln≤LN,LN为所述左侧肤色像素点子集中的像素点的总数目,LSkinPixX_ln为所述左侧肤色像素点子集中的第ln个像素点的横坐标,rn为所述右侧肤色像素点子集中的像素点的序号,1≤rn≤RN,RN为所述右侧肤色像素点子集中的像素点的总数目,RSkinPixX_rn为所述右侧肤色像素点子集中的第rn个像素点的横坐标;Here, ln is the index of a pixel in the left skin color pixel subset, 1 ≤ ln ≤ LN; LN is the total number of pixels in the left subset; LSkinPixX_ln is the abscissa of the ln-th pixel in the left subset; rn is the index of a pixel in the right skin color pixel subset, 1 ≤ rn ≤ RN; RN is the total number of pixels in the right subset; and RSkinPixX_rn is the abscissa of the rn-th pixel in the right subset;
    根据下式分别计算左眼的中心位置的横坐标以及右眼的中心位置的横坐标:Calculate the abscissa of the center position of the left eye and the abscissa of the center position of the right eye according to the following formulas:
    Figure PCTCN2018123790-appb-100004
    Figure PCTCN2018123790-appb-100004
    其中,LeftEyeX为左眼的中心位置的横坐标,RightEyeX为右眼的中心位置的横坐标;Among them, LeftEyeX is the abscissa of the center position of the left eye, and RightEyeX is the abscissa of the center position of the right eye;
    从所述肤色像素点集合划分出上侧肤色像素点子集,其中,所述上侧肤色像素点子集中的像素点均满足:An upper skin color pixel subset is divided from the skin color pixel set, and the pixels in the upper skin color pixel subset all satisfy:
    Figure PCTCN2018123790-appb-100005
    Figure PCTCN2018123790-appb-100005
    tn为所述上侧肤色像素点子集中的像素点的序号,1≤tn≤TN,TN为所述上侧肤色像素点子集中的像素点的总数目,TopSkinPixY_tn为所述上侧肤色像素点子集中的第tn个像素点的纵坐标;Here, tn is the index of a pixel in the upper skin color pixel subset, 1 ≤ tn ≤ TN; TN is the total number of pixels in the upper subset; and TopSkinPixY_tn is the ordinate of the tn-th pixel in the upper subset;
    根据下式分别计算左眼的中心位置的纵坐标以及右眼的中心位置的纵坐标:Calculate the ordinate of the center position of the left eye and the ordinate of the center position of the right eye according to the following formulas:
    Figure PCTCN2018123790-appb-100006
    Figure PCTCN2018123790-appb-100006
    其中,LeftEyeY为左眼的中心位置的纵坐标,RightEyeY为右眼的中心位置的纵坐标;Among them, LeftEyeY is the ordinate of the center position of the left eye, and RightEyeY is the ordinate of the center position of the right eye;
    根据左眼的中心位置、右眼的中心位置、预设的眼睛区域高度和预设的眼睛区域宽度确定眼睛所在区域,并将从所述眼睛所在区域提取出的图像作为所述第一子图像。Determine the area of the eye according to the center position of the left eye, the center position of the right eye, a preset height of the eye area, and a preset width of the eye area, and use the image extracted from the area where the eye is located as the first sub-image .
  5. 根据权利要求1至4中任一项所述的疲劳驾驶检测方法,其特征在于,所述计算所述第一子图像的特征向量与预设的基准向量之间的向量相似度包括:The fatigue driving detection method according to any one of claims 1 to 4, wherein calculating a vector similarity between a feature vector of the first sub-image and a preset reference vector includes:
    根据下式计算所述第一子图像的特征向量与所述基准向量之间的向量相似度:The vector similarity between the feature vector of the first sub-image and the reference vector is calculated according to the following formula:
    Figure PCTCN2018123790-appb-100007
    Figure PCTCN2018123790-appb-100007
    其中,所述第一子图像的特征向量为X=(x_1,x_2,...,x_d,...,x_Dim),所述基准向量为Y=(y_1,y_2,...,y_d,...,y_Dim),d为向量的维度序号,1≤d≤Dim,Dim为所述第一子图像的特征向量或所述基准向量的维度数目,x_d为所述第一子图像的特征向量在第d个维度上的分量,y_d为所述基准向量在第d个维度上的分量,SimDeg为所述第一子图像的特征向量与所述基准向量之间的向量相似度。Here, the feature vector of the first sub-image is X = (x_1, x_2, ..., x_d, ..., x_Dim) and the reference vector is Y = (y_1, y_2, ..., y_d, ..., y_Dim); d is the dimension index of the vectors, 1 ≤ d ≤ Dim; Dim is the number of dimensions of the feature vector of the first sub-image and of the reference vector; x_d is the component of the feature vector of the first sub-image in the d-th dimension; y_d is the component of the reference vector in the d-th dimension; and SimDeg is the vector similarity between the feature vector of the first sub-image and the reference vector.
  6. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机可读指令,其特征在于,所述计算机可读指令被处理器执行时实现如下步骤:A computer-readable storage medium storing computer-readable instructions, wherein the computer-readable instructions implement the following steps when executed by a processor:
    采集驾驶员的人脸图像;Collect the driver's face image;
    从所述驾驶员的人脸图像中提取出第一子图像,所述第一子图像为所述驾驶员的眼睛所在区域的图像;Extracting a first sub-image from the driver's face image, where the first sub-image is an image of a region where the driver's eyes are located;
    计算所述第一子图像的特征向量;Calculating a feature vector of the first sub-image;
    计算所述第一子图像的特征向量与预设的基准向量之间的向量相似度,所述基准向量为处于疲劳状态的眼睛图像的特征向量;Calculating a vector similarity between a feature vector of the first sub-image and a preset reference vector, where the reference vector is a feature vector of an eye image in a fatigue state;
    若所述第一子图像的特征向量与所述基准向量之间的向量相似度大于预设的相似度阈值,则确定所述驾驶员处于疲劳驾驶状态。If the vector similarity between the feature vector of the first sub-image and the reference vector is greater than a preset similarity threshold, it is determined that the driver is in a fatigue driving state.
  7. The computer-readable storage medium according to claim 6, wherein capturing the face image of the driver comprises:
    capturing an image of a driving area through a camera disposed in front of the driving area;
    converting the image of the driving area from RGB space to YCbCr space to obtain a converted driving-area image;
    determining, in the converted driving-area image, the pixels that satisfy a preset skin color criterion as skin color pixels, and constructing a skin color pixel set composed of the skin color pixels;
    counting the number of skin color pixels in the skin color pixel set, and calculating a dispersion degree of the skin color pixel set;
    if the number of skin color pixels in the skin color pixel set is greater than a preset number threshold and the dispersion degree of the skin color pixel set is less than a preset dispersion threshold, determining the area covered by the skin color pixel set as a face image area; and
    extracting the image in the face image area, and determining the image in the face image area as the face image of the driver.
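The RGB-to-YCbCr conversion and skin test recited above can be sketched as follows. The conversion uses the standard ITU-R BT.601 coefficients; the Cb/Cr acceptance ranges are commonly cited defaults for skin detection, not values taken from the patent:

```python
def rgb_to_ycbcr(r, g, b):
    # ITU-R BT.601 full-range RGB -> YCbCr conversion.
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    # Skin criterion in the CbCr plane; the ranges are illustrative defaults,
    # as the patent's preset criterion is not spelled out in this excerpt.
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]

def skin_pixel_set(image):
    # image: 2-D list of (r, g, b) tuples; returns the (x, y) coordinates
    # of all pixels that satisfy the skin color criterion.
    return [(x, y)
            for y, row in enumerate(image)
            for x, (r, g, b) in enumerate(row)
            if is_skin(r, g, b)]
```

Working in YCbCr rather than RGB decouples luminance from chrominance, which is what makes a fixed Cb/Cr box a workable skin test under varying lighting.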
  8. The computer-readable storage medium according to claim 7, wherein calculating the dispersion degree of the skin color pixel set comprises:
    calculating the dispersion degree of the skin color pixel set according to the following formula:
    Figure PCTCN2018123790-appb-100008
    Here, n is the index of a skin color pixel in the skin color pixel set, 1 ≤ n ≤ N, N is the number of skin color pixels in the skin color pixel set, SkinPixX_n is the horizontal coordinate of the n-th skin color pixel in the skin color pixel set, SkinPixY_n is the vertical coordinate of the n-th skin color pixel in the skin color pixel set, and DisperDeg is the dispersion degree of the skin color pixel set.
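The exact DisperDeg formula exists only as an image in the original document; given the variables defined above, one plausible reading is the root-mean-square distance of the skin pixels from their centroid, sketched here under that assumption:

```python
import math

def dispersion(skin_pixels):
    # skin_pixels: list of (SkinPixX_n, SkinPixY_n) coordinates, n = 1..N.
    # Interpreted as RMS distance from the centroid -- an assumption, since
    # the patent's DisperDeg formula is only available as a figure.
    n = len(skin_pixels)
    cx = sum(x for x, _ in skin_pixels) / n
    cy = sum(y for _, y in skin_pixels) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                         for x, y in skin_pixels) / n)
```

Any such measure serves the same purpose in claim 7: a tightly clustered skin pixel set (low dispersion) is accepted as a face region, while scattered false positives are rejected.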
  9. The computer-readable storage medium according to claim 8, wherein extracting the first sub-image from the face image of the driver comprises:
    dividing the skin color pixel set into a left-side skin color pixel subset and a right-side skin color pixel subset;
    calculating the horizontal coordinate of the center position of the left eye and the horizontal coordinate of the center position of the right eye, respectively;
    dividing an upper-side skin color pixel subset from the skin color pixel set;
    calculating the vertical coordinate of the center position of the left eye and the vertical coordinate of the center position of the right eye, respectively;
    determining the region where the eyes are located according to the center position of the left eye, the center position of the right eye, a preset eye-region height, and a preset eye-region width, and taking the image extracted from that region as the first sub-image.
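The claim names the subsets but not how they map to the eye centers, so the following is only a rough sketch of the recited steps; the median split, the upper-third cut, and the default region size are all assumptions:

```python
def eye_region(skin_pixels, eye_h=20, eye_w=80):
    # skin_pixels: list of (x, y) skin coordinates from the face area.
    xs = sorted(x for x, _ in skin_pixels)
    ys = sorted(y for _, y in skin_pixels)
    mid_x = xs[len(xs) // 2]
    # Left-side / right-side skin color pixel subsets (median split assumed).
    left  = [(x, y) for x, y in skin_pixels if x < mid_x]
    right = [(x, y) for x, y in skin_pixels if x >= mid_x]
    left_cx  = sum(x for x, _ in left) / len(left)
    right_cx = sum(x for x, _ in right) / len(right)
    # Upper-side subset (top third assumed) gives the vertical eye level.
    upper = [(x, y) for x, y in skin_pixels if y <= ys[len(ys) // 3]]
    cy = sum(y for _, y in upper) / len(upper)
    # Region spanning both eye centers, padded by the preset width/height.
    x0, x1 = left_cx - eye_w / 2, right_cx + eye_w / 2
    y0, y1 = cy - eye_h / 2, cy + eye_h / 2
    return (x0, y0, x1, y1)
```

The returned box would then be cropped from the face image to form the first sub-image.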
  10. The computer-readable storage medium according to any one of claims 6 to 9, wherein calculating the vector similarity between the feature vector of the first sub-image and the preset reference vector comprises:
    calculating the vector similarity between the feature vector of the first sub-image and the reference vector according to the following formula:
    Figure PCTCN2018123790-appb-100009
    Here, the feature vector of the first sub-image is X = (x_1, x_2, ..., x_d, ..., x_Dim), the reference vector is Y = (y_1, y_2, ..., y_d, ..., y_Dim), d is the dimension index of the vectors, 1 ≤ d ≤ Dim, Dim is the number of dimensions of the feature vector of the first sub-image and of the reference vector, x_d is the component of the feature vector of the first sub-image in the d-th dimension, y_d is the component of the reference vector in the d-th dimension, and SimDeg is the vector similarity between the feature vector of the first sub-image and the reference vector.
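The SimDeg formula itself survives only as a figure, but the variables defined above (x_d, y_d, Dim) match the standard cosine similarity, sketched here under that assumption:

```python
import math

def sim_deg(x, y):
    # Cosine similarity between feature vector X and reference vector Y.
    # Assumed form: the patent's SimDeg formula is only available as an
    # image, but its variable definitions fit this standard expression.
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

def fatigued(sim, threshold=0.9):
    # Claimed decision rule: similarity greater than the preset threshold.
    return sim > threshold
```

Cosine similarity is a natural fit here because it compares the shape of the eye-region feature vector against the fatigued-eye template independently of overall image brightness.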
  11. A terminal device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the following steps:
    capturing a face image of a driver;
    extracting a first sub-image from the face image of the driver, the first sub-image being an image of the region where the driver's eyes are located;
    calculating a feature vector of the first sub-image;
    calculating a vector similarity between the feature vector of the first sub-image and a preset reference vector, the reference vector being a feature vector of an eye image in a fatigued state; and
    if the vector similarity between the feature vector of the first sub-image and the reference vector is greater than a preset similarity threshold, determining that the driver is in a fatigued driving state.
  12. The terminal device according to claim 11, wherein capturing the face image of the driver comprises:
    capturing an image of a driving area through a camera disposed in front of the driving area;
    converting the image of the driving area from RGB space to YCbCr space to obtain a converted driving-area image;
    determining, in the converted driving-area image, the pixels that satisfy a preset skin color criterion as skin color pixels, and constructing a skin color pixel set composed of the skin color pixels;
    counting the number of skin color pixels in the skin color pixel set, and calculating a dispersion degree of the skin color pixel set;
    if the number of skin color pixels in the skin color pixel set is greater than a preset number threshold and the dispersion degree of the skin color pixel set is less than a preset dispersion threshold, determining the area covered by the skin color pixel set as a face image area; and
    extracting the image in the face image area, and determining the image in the face image area as the face image of the driver.
  13. The terminal device according to claim 12, wherein calculating the dispersion degree of the skin color pixel set comprises:
    calculating the dispersion degree of the skin color pixel set according to the following formula:
    Figure PCTCN2018123790-appb-100010
    Here, n is the index of a skin color pixel in the skin color pixel set, 1 ≤ n ≤ N, N is the number of skin color pixels in the skin color pixel set, SkinPixX_n is the horizontal coordinate of the n-th skin color pixel in the skin color pixel set, SkinPixY_n is the vertical coordinate of the n-th skin color pixel in the skin color pixel set, and DisperDeg is the dispersion degree of the skin color pixel set.
  14. The terminal device according to claim 13, wherein extracting the first sub-image from the face image of the driver comprises:
    dividing the skin color pixel set into a left-side skin color pixel subset and a right-side skin color pixel subset;
    calculating the horizontal coordinate of the center position of the left eye and the horizontal coordinate of the center position of the right eye, respectively;
    dividing an upper-side skin color pixel subset from the skin color pixel set;
    calculating the vertical coordinate of the center position of the left eye and the vertical coordinate of the center position of the right eye, respectively;
    determining the region where the eyes are located according to the center position of the left eye, the center position of the right eye, a preset eye-region height, and a preset eye-region width, and taking the image extracted from that region as the first sub-image.
  15. The terminal device according to any one of claims 11 to 14, wherein calculating the vector similarity between the feature vector of the first sub-image and the preset reference vector comprises:
    calculating the vector similarity between the feature vector of the first sub-image and the reference vector according to the following formula:
    Figure PCTCN2018123790-appb-100011
    Here, the feature vector of the first sub-image is X = (x_1, x_2, ..., x_d, ..., x_Dim), the reference vector is Y = (y_1, y_2, ..., y_d, ..., y_Dim), d is the dimension index of the vectors, 1 ≤ d ≤ Dim, Dim is the number of dimensions of the feature vector of the first sub-image and of the reference vector, x_d is the component of the feature vector of the first sub-image in the d-th dimension, y_d is the component of the reference vector in the d-th dimension, and SimDeg is the vector similarity between the feature vector of the first sub-image and the reference vector.
  16. A fatigue driving detection apparatus, comprising:
    a face image acquisition module, configured to capture a face image of a driver;
    a sub-image extraction module, configured to extract a first sub-image from the face image of the driver, the first sub-image being an image of the region where the driver's eyes are located;
    a feature vector calculation module, configured to calculate a feature vector of the first sub-image;
    a vector similarity calculation module, configured to calculate a vector similarity between the feature vector of the first sub-image and a preset reference vector, the reference vector being a feature vector of an eye image in a fatigued state; and
    a fatigue driving state determination module, configured to determine that the driver is in a fatigued driving state if the vector similarity between the feature vector of the first sub-image and the reference vector is greater than a preset similarity threshold.
  17. The fatigue driving detection apparatus according to claim 16, wherein the face image acquisition module comprises:
    an image acquisition unit, configured to capture an image of a driving area through a camera disposed in front of the driving area;
    an image space conversion unit, configured to convert the image of the driving area from RGB space to YCbCr space to obtain a converted driving-area image;
    a skin color pixel determination unit, configured to determine, in the converted driving-area image, the pixels that satisfy a preset skin color criterion as skin color pixels, and to construct a skin color pixel set composed of the skin color pixels;
    a skin color pixel counting unit, configured to count the number of skin color pixels in the skin color pixel set;
    a dispersion calculation unit, configured to calculate a dispersion degree of the skin color pixel set;
    a face image area determination unit, configured to determine the area covered by the skin color pixel set as a face image area if the number of skin color pixels in the skin color pixel set is greater than a preset number threshold and the dispersion degree of the skin color pixel set is less than a preset dispersion threshold; and
    a face image determination unit, configured to extract the image in the face image area and to determine the image in the face image area as the face image of the driver.
  18. The fatigue driving detection apparatus according to claim 17, wherein the dispersion calculation unit comprises:
    a dispersion calculation subunit, configured to calculate the dispersion degree of the skin color pixel set according to the following formula:
    Figure PCTCN2018123790-appb-100012
    Here, n is the index of a skin color pixel in the skin color pixel set, 1 ≤ n ≤ N, N is the number of skin color pixels in the skin color pixel set, SkinPixX_n is the horizontal coordinate of the n-th skin color pixel in the skin color pixel set, SkinPixY_n is the vertical coordinate of the n-th skin color pixel in the skin color pixel set, and DisperDeg is the dispersion degree of the skin color pixel set.
  19. The fatigue driving detection apparatus according to claim 18, wherein the sub-image extraction module comprises:
    a first dividing unit, configured to divide the skin color pixel set into a left-side skin color pixel subset and a right-side skin color pixel subset;
    a first calculation unit, configured to calculate the horizontal coordinate of the center position of the left eye and the horizontal coordinate of the center position of the right eye, respectively;
    a second dividing unit, configured to divide an upper-side skin color pixel subset from the skin color pixel set;
    a second calculation unit, configured to calculate the vertical coordinate of the center position of the left eye and the vertical coordinate of the center position of the right eye, respectively; and
    a sub-image extraction unit, configured to determine the region where the eyes are located according to the center position of the left eye, the center position of the right eye, a preset eye-region height, and a preset eye-region width, and to take the image extracted from that region as the first sub-image.
  20. The fatigue driving detection apparatus according to any one of claims 16 to 19, wherein the vector similarity calculation module comprises:
    a vector similarity calculation unit, configured to calculate the vector similarity between the feature vector of the first sub-image and the reference vector according to the following formula:
    Figure PCTCN2018123790-appb-100013
    Here, the feature vector of the first sub-image is X = (x_1, x_2, ..., x_d, ..., x_Dim), the reference vector is Y = (y_1, y_2, ..., y_d, ..., y_Dim), d is the dimension index of the vectors, 1 ≤ d ≤ Dim, Dim is the number of dimensions of the feature vector of the first sub-image and of the reference vector, x_d is the component of the feature vector of the first sub-image in the d-th dimension, y_d is the component of the reference vector in the d-th dimension, and SimDeg is the vector similarity between the feature vector of the first sub-image and the reference vector.
PCT/CN2018/123790 2018-08-14 2018-12-26 Driver drowsiness detection method, computer readable storage medium, terminal device, and apparatus WO2020034541A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810921792.4A CN109190515A (en) 2018-08-14 2018-08-14 A kind of method for detecting fatigue driving, computer readable storage medium and terminal device
CN201810921792.4 2018-08-14

Publications (1)

Publication Number Publication Date
WO2020034541A1

Family

ID=64921459

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/123790 WO2020034541A1 (en) 2018-08-14 2018-12-26 Driver drowsiness detection method, computer readable storage medium, terminal device, and apparatus

Country Status (2)

Country Link
CN (1) CN109190515A (en)
WO (1) WO2020034541A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI741892B (en) * 2020-12-01 2021-10-01 咸瑞科技股份有限公司 In-car driving monitoring system

Citations (6)

Publication number Priority date Publication date Assignee Title
CN103226690A (en) * 2012-01-30 2013-07-31 展讯通信(上海)有限公司 Red eye detection method and device and red eye removing method and device
CN104013414A (en) * 2014-04-30 2014-09-03 南京车锐信息科技有限公司 Driver fatigue detecting system based on smart mobile phone
CN104809445A (en) * 2015-05-07 2015-07-29 吉林大学 Fatigue driving detection method based on eye and mouth states
CN105354985A (en) * 2015-11-04 2016-02-24 中国科学院上海高等研究院 Fatigue driving monitoring device and method
CN106485191A (en) * 2015-09-02 2017-03-08 腾讯科技(深圳)有限公司 A kind of method for detecting fatigue state of driver and system
CN107578008A (en) * 2017-09-02 2018-01-12 吉林大学 Fatigue state detection method based on blocking characteristic matrix algorithm and SVM

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
WO2006051607A1 (en) * 2004-11-12 2006-05-18 Omron Corporation Face feature point detector and feature point detector
CN104346621A (en) * 2013-07-30 2015-02-11 展讯通信(天津)有限公司 Method and device for creating eye template as well as method and device for detecting eye state
CN106096575A (en) * 2016-06-24 2016-11-09 苏州大学 A kind of driving states monitoring method and system


Non-Patent Citations (1)

Title
QIU, YE: "The Research of Text Feature Selection Applied in Information Filtering System", Science - Electronic Technology & Information Science, China Master's Theses Full-Text Database, 15 March 2011 (2011-03-15), pages 15-16, ISSN: 1674-0246 *


Also Published As

Publication number Publication date
CN109190515A (en) 2019-01-11

Similar Documents

Publication Publication Date Title
WO2020151489A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN110084135B (en) Face recognition method, device, computer equipment and storage medium
CN108182409B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
WO2020151307A1 (en) Automatic lesion recognition method and device, and computer-readable storage medium
Khan et al. Saliency-based framework for facial expression recognition
KR20220050977A (en) Medical image processing method, image processing method and apparatus
CN111160194B (en) Static gesture image recognition method based on multi-feature fusion
WO2019114145A1 (en) Head count detection method and device in surveillance video
CN111476849B (en) Object color recognition method, device, electronic equipment and storage medium
CN108564034A (en) The detection method of operating handset behavior in a kind of driver drives vehicle
CN110598574A (en) Intelligent face monitoring and identifying method and system
CN109117723A (en) Blind way detection method based on color mode analysis and semantic segmentation
CN104036291A (en) Race classification based multi-feature gender judgment method
WO2020034541A1 (en) Driver drowsiness detection method, computer readable storage medium, terminal device, and apparatus
CN117392733B (en) Acne grading detection method and device, electronic equipment and storage medium
JP2009169518A (en) Area identification apparatus and content identification apparatus
KR101350882B1 (en) Server for analysing video
CN113868457A (en) Image processing method based on image gathering and related device
Manaf et al. Color recognition system with augmented reality concept and finger interaction: Case study for color blind aid system
CN102542304B (en) Region segmentation skin-color algorithm for identifying WAP (Wireless Application Protocol) mobile porn image
CN111582278B (en) Portrait segmentation method and device and electronic equipment
CN112464765A (en) Safety helmet detection algorithm based on single-pixel characteristic amplification and application thereof
CN110866470A (en) Face anti-counterfeiting detection method based on random image characteristics
CN110598521A (en) Behavior and physiological state identification method based on intelligent analysis of face image
CN112507903B (en) False face detection method, false face detection device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18929883

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/06/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18929883

Country of ref document: EP

Kind code of ref document: A1