CN113255478A - Composite fatigue detection method, terminal equipment and storage medium - Google Patents

Composite fatigue detection method, terminal equipment and storage medium

Info

Publication number
CN113255478A
CN113255478A
Authority
CN
China
Prior art keywords
frame
fatigue detection
video
rate
respiratory
Prior art date
Legal status
Pending
Application number
CN202110504110.1A
Other languages
Chinese (zh)
Inventor
苏鹭梅
陈兴
陈鑫强
Current Assignee
Xiamen University of Technology
Original Assignee
Xiamen University of Technology
Priority date
Filing date
Publication date
Application filed by Xiamen University of Technology filed Critical Xiamen University of Technology
Priority to CN202110504110.1A priority Critical patent/CN113255478A/en
Publication of CN113255478A publication Critical patent/CN113255478A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/14Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • G06F17/148Wavelet transforms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computational Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a composite fatigue detection method, a terminal device and a storage medium, wherein the method comprises the following steps: S1: collecting a video of a driver during driving, and decomposing the video into images frame by frame; S2: performing facial fatigue detection on each decomposed frame image; S3: carrying out optical flow coding on each frame image by a dense optical flow method; carrying out gray-scale processing on the region of interest of each frame image according to a preset region of interest; taking the average gray value of all pixels in the region of interest as the respiratory signal of that frame image; forming the respiratory signals of all the frame images into a respiratory signal curve; and calculating the respiratory rate of the video according to the number of peaks in the respiratory signal curve, the frame rate of the video and the total frame number; S4: judging whether the driver is in a fatigue state according to the facial fatigue detection result and the breathing rate. By combining facial features with the respiratory rate, the invention improves the accuracy of fatigue detection.

Description

Composite fatigue detection method, terminal equipment and storage medium
Technical Field
The present invention relates to the field of fatigue detection, and in particular, to a composite fatigue detection method, a terminal device, and a storage medium.
Background
Studies at home and abroad show that a driver's perception, hazard-judgment and vehicle-control abilities all degrade to varying degrees in a fatigued state compared with normal conditions, which leads to traffic accidents.
In recent years, the proportion of traffic accidents caused by fatigue driving has reached about sixty percent, and fatigue driving has become a social problem of wide concern. Developing a fatigue-driving detection and reminding system is therefore one way to prevent such accidents: it can reduce the occurrence of traffic accidents to a certain extent and improve driving safety.
There are many existing fatigue detection methods, most of which are contact-based. Their accuracy is assured, but they are unfriendly to the person being tested; in particular, the driver must wear cumbersome detection instruments. Most existing non-contact methods judge fatigue from facial or head features alone and suffer from poor real-time performance and insufficient detection accuracy, both caused by an insufficient set of fatigue features. Nevertheless, the convenience of non-contact detection makes it the trend of fatigue detection, and a comprehensive, relatively safe and accurate non-contact fatigue detection system is urgently needed.
Disclosure of Invention
In order to solve the above problems, the present invention provides a composite fatigue detection method, a terminal device, and a storage medium.
The specific scheme is as follows:
a composite fatigue detection method comprises the following steps:
s1: collecting a video in the driving process of a driver, and decomposing the video into images frame by frame;
s2: performing facial fatigue detection according to each decomposed frame image;
s3: after carrying out optical flow coding on each frame image by a dense optical flow method, carrying out gray level processing on an interested area corresponding to each frame image according to a set interested area, taking the average value of gray levels of all pixels in the interested area as a respiratory signal corresponding to each frame image, forming the respiratory signals of all the frame images into a respiratory signal curve, and calculating the respiratory rate corresponding to the video according to the number of peaks in the respiratory signal curve, the frame rate of the video and the total frame number;
s4: and judging whether the driver is in a fatigue state or not according to the facial fatigue detection result and the breathing rate.
Further, the facial fatigue detection in step S2 includes face detection and feature point labeling, where the feature point labeling adopts an integrated regression tree algorithm training model.
Further, in step S3, the thoracoabdominal region is set as the region of interest.
Further, the calculation formula of the respiration rate in step S3 is:
P = (R × T) / F
wherein, P represents the respiration rate, R represents the number of peaks in the respiration signal curve, T represents the frame rate of the video, and F represents the total frame number of the video.
Further, in step S3, filtering processing needs to be performed on the formed respiration signal curve before calculating the respiration rate, and the respiration rate is calculated according to the number of peaks of the respiration signal curve after filtering processing.
Further, the filtering process adopts a wavelet transform algorithm.
A composite fatigue detection terminal device comprises a processor, a memory and a computer program stored in the memory and operable on the processor, wherein the processor implements the steps of the method of the embodiment of the invention when executing the computer program.
A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the method as described above for an embodiment of the invention.
By adopting the technical scheme, the comprehensive detection method which integrates fatigue detection based on facial features and fatigue detection based on respiratory rate can improve the accuracy of fatigue detection.
Drawings
Fig. 1 is a flowchart illustrating a first embodiment of the present invention.
Detailed Description
To further illustrate the various embodiments, the invention provides the accompanying drawings. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the embodiments. Those skilled in the art will appreciate still other possible embodiments and advantages of the present invention with reference to these figures.
The invention will now be further described with reference to the accompanying drawings and detailed description.
The first embodiment is as follows:
an embodiment of the present invention provides a composite fatigue detection method, as shown in fig. 1, which is a flowchart of the composite fatigue detection method according to the embodiment of the present invention, and the method includes the following steps:
s1: the method comprises the steps of collecting videos of a driver in the driving process, and decomposing the videos into images frame by frame.
S2: and performing facial fatigue detection according to each decomposed frame image.
The facial fatigue detection comprises face detection and feature point labeling. The feature point labeling uses the 68 facial feature points, which makes the facial fatigue detection more accurate. Each feature point has a corresponding coordinate; as the coordinates change, they must be converted into a computable scalar, and by fixing a reference value for each state in advance, the fatigue state can be judged accurately. The feature point labeling model is trained with an ensemble-of-regression-trees algorithm, and the specific process is as follows: after the feature points of the face images in the training set are labeled, a regression tree model is trained. Before training, the average face is computed and used as the initial shape of the model at test time, denoted shape. During training, pixel intensities are used as features, the distances between point pairs near the calibrated training points form the feature pool, and each distance is divided by the inter-ocular distance for normalization. In this embodiment, an exponential distance prior is introduced and an ensemble regression tree model is applied: the model cascades 10 regression trees, each stage contains 500 weak regressors, and each tree has a depth of 5. Gradient boosting is used, regression is performed repeatedly on the residuals, and the final regression tree model is obtained by fitting the errors.
During testing, the face detection result is input into the model. The average face is first fitted onto the new test face to obtain an initial shape. The feature points are then predicted from the face shape while the face shape is in turn refined from the feature points; regression is performed with the same error function as in training, repeatedly regressing to reduce the error with respect to the ground truth, and the final facial feature point localization result is obtained through the 10-level cascaded regression trees.
With the feature points, the fatigue characteristics of the face can be converted into point-to-point relations; a threshold is defined numerically, and whether fatigue occurs is judged against that threshold. The facial fatigue features in this embodiment are the eyes and the mouth.
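One common way to reduce the eye and mouth landmarks to a computable scalar, as described above, is an aspect ratio over six points compared against a fixed value. The patent does not name a specific formula, so this sketch assumes the usual eye/mouth aspect ratio over dlib-style six-point groups, with illustrative thresholds that are assumptions, not values from the embodiment:

```python
import numpy as np

def aspect_ratio(pts):
    """Eye/mouth aspect ratio: mean vertical opening over horizontal width.

    pts: six landmarks ordered as in dlib's 68-point model for one eye
    (p0/p3 are the horizontal corners; p1-p5 and p2-p4 the vertical pairs)."""
    pts = np.asarray(pts, dtype=float)
    v1 = np.linalg.norm(pts[1] - pts[5])
    v2 = np.linalg.norm(pts[2] - pts[4])
    h = np.linalg.norm(pts[0] - pts[3])
    return (v1 + v2) / (2.0 * h)

# Illustrative thresholds (assumed, not taken from the patent):
EYE_CLOSED_THRESH = 0.2    # Y = 1 when the eye aspect ratio falls below this
MOUTH_OPEN_THRESH = 0.6    # M = 1 when the mouth aspect ratio rises above this

def eye_state(eye_pts):
    """Y: 0 for open eyes, 1 for closed eyes."""
    return 1 if aspect_ratio(eye_pts) < EYE_CLOSED_THRESH else 0

def mouth_state(mouth_pts):
    """M: 0 for closed mouth, 1 for open mouth."""
    return 1 if aspect_ratio(mouth_pts) > MOUTH_OPEN_THRESH else 0
```

In practice the six eye points would be landmarks 36–41 or 42–47 of the 68-point model, and the mouth points a six-point subset of 48–67.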
S3: after optical flow coding is carried out on each frame image through a dense optical flow method, gray-scale processing is performed on the region of interest of each frame image according to a preset region of interest; the average gray value of all pixels in the region of interest is taken as the respiratory signal of that frame image; the respiratory signals of all the frame images form a respiratory signal curve; and the respiratory rate of the video is calculated from the number of peaks in the respiratory signal curve, the frame rate of the video and the total frame number.
With the improvement of computer computing power, the dense optical flow method captures dynamic features more accurately and can therefore be used to detect respiratory signals. The dense optical flow method captures dynamic objects in a video from the pixel displacement between images; its main idea is to approximate the neighborhood information of each pixel with a quadratic polynomial:
f(x) ≈ x^T A x + b^T x + c
where A is a symmetric matrix, b is a vector and c is a scalar, together forming the approximation of the pixel neighborhood information; f(x) represents the neighborhood information of the pixel.
The image of the previous frame can be represented according to the above polynomial as:
f₁(x) = x^T A₁ x + b₁^T x + c₁
the distance between the image of the next frame and the previous frame is represented by a unique d, and the image of the next frame can be represented as:
f₂(x) = f₁(x − d) = x^T A₁ x + (b₁ − 2A₁d)^T x + d^T A₁ d − b₁^T d + c₁
the method is simplified and can be obtained:
b₂ = b₁ − 2A₁d
since the appearance information of pixels in an image scene does not change between frames, it can be obtained that the corresponding coefficients are the same if A is the same1In the case of a non-singular matrix, the calculation formula of d is:
d = −(1/2) A₁⁻¹ (b₂ − b₁)
The feature points in the image are then tracked by combining the obtained displacement values with the image pyramid.
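The displacement formula above can be checked numerically. This is a minimal sketch with synthetic polynomial coefficients; the per-pixel neighborhood fitting that actually produces A₁, b₁ and b₂ from image data is omitted:

```python
import numpy as np

def displacement_from_polys(A1, b1, b2):
    """Farnebäck-style displacement: with A2 = A1 and b2 = b1 - 2*A1*d,
    solving for d gives d = -1/2 * A1^{-1} (b2 - b1)."""
    return -0.5 * np.linalg.solve(A1, b2 - b1)

# Synthetic check: choose A1 and a true displacement, derive b2, recover d.
A1 = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric, non-singular
b1 = np.array([1.0, -1.0])
d_true = np.array([0.3, -0.7])
b2 = b1 - 2.0 * A1 @ d_true               # coefficient of the shifted polynomial
d_est = displacement_from_polys(A1, b1, b2)
```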
After the image is optical-flow coded by the above method, the dense optical flow of the image is obtained as a two-channel vector field (u, v), which is visualized by coding the magnitude and direction of the vectors with different colors.
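A sketch of such an encoding: direction is mapped to a hue-like channel and magnitude to a value channel. An OpenCV pipeline would normally use cv2.calcOpticalFlowFarneback and cv2.cartToPolar to build an HSV image; this pure-NumPy version is an illustrative assumption:

```python
import numpy as np

def encode_flow(u, v):
    """Map a dense flow field (u, v) to an HSV-style array:
    channel 0 = direction in [0, 1), channel 1 = full saturation,
    channel 2 = magnitude normalized to [0, 1]."""
    mag = np.sqrt(u ** 2 + v ** 2)
    ang = (np.arctan2(v, u) + np.pi) / (2 * np.pi)
    val = mag / mag.max() if mag.max() > 0 else mag
    return np.stack([ang, np.ones_like(mag), val], axis=-1)

# Toy field: uniform upward motion of one pixel per frame
u = np.zeros((4, 4))
v = np.ones((4, 4))
hsv = encode_flow(u, v)
```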
To obtain the breathing signal, the image also needs to be converted to gray scale. The chest and abdomen area is set as the region of interest; the optical-flow-coded image within the region of interest is converted into a grayscale image, and the average gray value of the region of interest is taken as the respiratory signal.
In this embodiment, the average gray value is the mean of the gray values of all pixels in the region of interest, computed as follows. Let the pixel coordinates of the grayscale image be (x, y), and select a pixel region of width w and height h starting at (x, y) as the region of interest. The gray values of all pixels in the region of interest of each image are summed, and the sum is divided by the total number of pixels w × h to obtain the average gray value I_D, namely:

I_D = (1 / (w × h)) Σ_{i=0}^{w−1} Σ_{j=0}^{h−1} I(x + i, y + j)
The respiratory signals of consecutive frames form a respiratory signal curve.
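The per-frame extraction reduces to the I_D computation above applied to every gray-coded frame; a minimal sketch, with frame acquisition and optical-flow coding assumed to have been done elsewhere:

```python
import numpy as np

def roi_mean_gray(gray, x, y, w, h):
    """I_D: average gray value of the (w x h) region of interest at (x, y)."""
    roi = gray[y:y + h, x:x + w]
    return roi.sum() / float(w * h)

def respiration_signal(gray_frames, x, y, w, h):
    """One respiratory-signal sample per gray-coded frame."""
    return np.array([roi_mean_gray(f, x, y, w, h) for f in gray_frames])

# Toy usage: two constant 4x4 "frames" yield a two-sample curve
frames = [np.full((4, 4), 10.0), np.full((4, 4), 20.0)]
curve = respiration_signal(frames, 0, 0, 2, 2)
```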
Since the acquired respiratory signal contains some disturbance, filtering is required. The filtering algorithm chosen in this embodiment is the wavelet transform, which greatly increases the reliability of the respiration signal. The wavelet transform analyzes the signal x(t) by taking inner products with a basic wavelet ψ(t) shifted by τ and scaled by different scales α, as shown below:

WT_x(α, τ) = (1/√α) ∫ x(t) ψ*((t − τ)/α) dt

where α > 0 is called the scale factor and acts on the basic wavelet ψ(t) by stretching or compressing it, and τ reflects the displacement, whose value may be positive or negative. Since α and τ are continuous variables, this is called the continuous wavelet transform. At different scales, the duration of the wavelet widens as the scale increases, the amplitude decreases in inverse proportion to √α, and the shape of the waveform remains unchanged.
The expression of the frequency domain is:
WT_x(α, τ) = (√α / 2π) ∫ X(ω) ψ*(αω) e^{jωτ} dω
where X (ω) and ψ (α ω) are Fourier transforms of X (t) and ψ (t), respectively.
The continuous wavelet transform can therefore also be written as:

CWT_x(a, b) = ⟨x(t), ψ_{a,b}(t)⟩ = ∫ x(t) ψ_{a,b}(t) dt

where a is the scale factor and b is the translation factor.
After the disturbance is filtered out to obtain a relatively smooth respiration signal curve, the respiration rate P is obtained by counting the number of peaks R of the curve (each peak represents one breath) and combining it with the frame rate T and the total frame number F of the video:
P = (R × T) / F
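A runnable sketch of the filtering-then-counting step. The embodiment specifies wavelet filtering; here a one-level Haar approximation with the detail coefficients discarded stands in for it, and peaks are counted as strict interior local maxima — simplifying assumptions, not the exact implementation of the patent:

```python
import numpy as np

def haar_approx(sig):
    """One-level Haar wavelet approximation: pairwise averages, with the
    detail (high-frequency) coefficients discarded as noise."""
    sig = np.asarray(sig, dtype=float)
    if len(sig) % 2:                  # pad to even length
        sig = np.append(sig, sig[-1])
    return (sig[0::2] + sig[1::2]) / 2.0

def count_peaks(sig):
    """Number R of strict interior local maxima of the curve."""
    return int(np.sum((sig[1:-1] > sig[:-2]) & (sig[1:-1] > sig[2:])))

def respiration_rate(sig, frame_rate):
    """P = R * T / F in Hz: R peaks of the filtered curve over F frames
    captured at T frames per second."""
    R = count_peaks(haar_approx(sig))
    return R * frame_rate / float(len(sig))
```

For example, a 10 s clip at 30 fps (F = 300) whose filtered curve shows 3 peaks gives P = 3 × 30 / 300 = 0.3 Hz, i.e. 18 breaths per minute.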
s4: and determining whether the driver is in a fatigue driving state or not according to the face fatigue detection result and the breathing rate.
The determination method of the facial fatigue detection in this embodiment is as follows. Let Y denote the eye state and M the mouth state. When the driver is in the normal state, the eyes are open (Y = 0) and the mouth is closed (M = 0). In level-1 fatigue, the eyes are open (Y = 0) while the mouth is open (M = 1) with an opening time T2 greater than 3 s. In level-2 fatigue, the eyes are closed (Y = 1) with an eye-closing time T1 between 3 s and 5 s, and the mouth is open (M = 1) with an opening time T2 greater than 3 s. In level-3 fatigue, the eyes are closed (Y = 1) with an eye-closing time T1 above 5 s, and the mouth is closed (M = 0).
The breathing-rate criterion is as follows. According to diagnostics, a normal adult breathes 12–22 times per minute; therefore, when the breathing rate P is below 0.25 Hz, level-3 fatigue is determined directly, and when the breathing rate is too fast, i.e. P is above 0.37 Hz, the state is determined to be abnormal.
The respiratory rate in this embodiment has a higher priority than the facial fatigue detection, and the criteria for both are shown in table 1.
TABLE 1

State    | Eye state Y          | Mouth state M        | Breathing rate P
Normal   | 0 (open)             | 0 (closed)           | 0.25–0.37 Hz
Level 1  | 0 (open)             | 1 (open, T2 > 3 s)   | 0.25–0.37 Hz
Level 2  | 1 (3 s < T1 < 5 s)   | 1 (open, T2 > 3 s)   | 0.25–0.37 Hz
Level 3  | 1 (T1 > 5 s)         | 0 (closed)           | or P < 0.25 Hz
Abnormal | —                    | —                    | P > 0.37 Hz
Through this multi-level, refined discrimination, the discrimination result is made more accurate.
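Under the priority rule above, the fusion step S4 can be sketched as a single decision function. The thresholds (0.25 Hz, 0.37 Hz, 3 s, 5 s) come from this description; the exact precedence among the facial rules is my reading of it, not a verbatim transcription of the patent:

```python
def fatigue_level(Y, M, t_eye, t_mouth, P):
    """Combine the facial state and the breathing rate.

    Y: 1 if eyes closed, else 0; M: 1 if mouth open, else 0.
    t_eye / t_mouth: duration of the current eye/mouth state in seconds.
    P: breathing rate in Hz. Breathing rate takes priority over the
    facial result, as in the embodiment."""
    if P > 0.37:                          # abnormally fast breathing
        return 'abnormal'
    if P < 0.25:                          # abnormally slow: level 3 directly
        return 'level-3'
    if Y == 1 and t_eye > 5:              # long eye closure
        return 'level-3'
    if Y == 1 and 3 < t_eye < 5 and M == 1 and t_mouth > 3:
        return 'level-2'
    if Y == 0 and M == 1 and t_mouth > 3:  # yawning with eyes open
        return 'level-1'
    return 'normal'
```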
Example two:
the invention further provides a composite fatigue detection terminal device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the computer program to realize the steps of the method embodiment of the first embodiment of the invention.
Further, as an executable scheme, the composite fatigue detection terminal device may be a desktop computer, a notebook computer, a palmtop computer, a cloud server or another computing device, and may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that the above composition is only an example of the composite fatigue detection terminal device and does not constitute a limitation on it; the device may include more or fewer components than those described above, combine certain components, or use different components. For example, it may further include an input-output device, a network access device, a bus and the like, which is not limited by the embodiment of the present invention.
Further, as an executable solution, the Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, and the like. The general purpose processor may be a microprocessor or the processor may be any conventional processor or the like, and the processor is a control center of the composite fatigue detection terminal device, and various interfaces and lines are used to connect various parts of the entire composite fatigue detection terminal device.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the composite fatigue detection terminal device by running or executing the computer programs and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly comprise a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to use of the device. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The invention also provides a computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned method of an embodiment of the invention.
The integrated module/unit of the composite fatigue detection terminal device may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), software distribution medium, and the like.
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A composite fatigue detection method is characterized by comprising the following steps:
s1: collecting a video in the driving process of a driver, and decomposing the video into images frame by frame;
s2: performing facial fatigue detection according to each decomposed frame image;
s3: after carrying out optical flow coding on each frame image by a dense optical flow method, carrying out gray level processing on an interested area corresponding to each frame image according to a set interested area, taking the average value of gray levels of all pixels in the interested area as a respiratory signal corresponding to each frame image, forming the respiratory signals of all the frame images into a respiratory signal curve, and calculating the respiratory rate corresponding to the video according to the number of peaks in the respiratory signal curve, the frame rate of the video and the total frame number;
s4: and judging whether the driver is in a fatigue state or not according to the facial fatigue detection result and the breathing rate.
2. The composite fatigue detection method according to claim 1, characterized in that: the facial fatigue detection in the step S2 includes face detection and feature point labeling, where the feature point labeling adopts an integrated regression tree algorithm training model.
3. The composite fatigue detection method according to claim 1, characterized in that: in step S3, the thoracoabdominal region is set as the region of interest.
4. The composite fatigue detection method according to claim 1, characterized in that: the calculation formula of the respiration rate in step S3 is:
P = (R × T) / F
wherein, P represents the respiration rate, R represents the number of peaks in the respiration signal curve, T represents the frame rate of the video, and F represents the total frame number of the video.
5. The composite fatigue detection method according to claim 1, characterized in that: in step S3, filtering processing needs to be performed on the formed respiration signal curve before calculating the respiration rate, and the respiration rate is calculated according to the number of peaks of the filtered respiration signal curve.
6. The composite fatigue detection method according to claim 5, characterized in that: the filtering process adopts a wavelet transform algorithm.
7. A compound fatigue detection terminal equipment which characterized in that: comprising a processor, a memory and a computer program stored in the memory and running on the processor, the processor implementing the steps of the method according to any one of claims 1 to 6 when executing the computer program.
8. A computer-readable storage medium storing a computer program, characterized in that: the computer program when executed by a processor implementing the steps of the method as claimed in any one of claims 1 to 6.
CN202110504110.1A 2021-05-10 2021-05-10 Composite fatigue detection method, terminal equipment and storage medium Pending CN113255478A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110504110.1A CN113255478A (en) 2021-05-10 2021-05-10 Composite fatigue detection method, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110504110.1A CN113255478A (en) 2021-05-10 2021-05-10 Composite fatigue detection method, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113255478A true CN113255478A (en) 2021-08-13

Family

ID=77222369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110504110.1A Pending CN113255478A (en) 2021-05-10 2021-05-10 Composite fatigue detection method, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113255478A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106548132A (en) * 2016-10-16 2017-03-29 北海益生源农贸有限责任公司 The method for detecting fatigue driving of fusion eye state and heart rate detection
CN109460703A (en) * 2018-09-14 2019-03-12 华南理工大学 A kind of non-intrusion type fatigue driving recognition methods based on heart rate and facial characteristics
KR20190060243A (en) * 2017-11-24 2019-06-03 연세대학교 산학협력단 Respiratory measurement system using thermovision camera
CN110276273A (en) * 2019-05-30 2019-09-24 福建工程学院 Merge the Driver Fatigue Detection of facial characteristics and the estimation of image pulse heart rate
CN111657973A (en) * 2020-07-09 2020-09-15 海南科技职业大学 Fatigue degree detecting system based on artificial intelligence


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Jinyue et al.: "Vision-based non-contact automatic detection method of respiratory rate", Chinese Journal of Scientific Instrument (仪器仪表学报) *

Similar Documents

Publication Publication Date Title
CN108460345A (en) A kind of facial fatigue detection method based on face key point location
EP3168810A1 (en) Image generating method and apparatus
CN112084856A (en) Face posture detection method and device, terminal equipment and storage medium
WO2019014813A1 (en) Method and apparatus for quantitatively detecting skin type parameter of human face, and intelligent terminal
CN112733823B (en) Method and device for extracting key frame for gesture recognition and readable storage medium
Al-Ameen et al. Enhancing the contrast of CT medical images by employing a novel image size dependent normalization technique
CN109344801A (en) A kind of object detecting method and device
CN110728692A (en) Image edge detection method based on Scharr operator improvement
CN117935177B (en) Road vehicle dangerous behavior identification method and system based on attention neural network
KR101791604B1 (en) Method and apparatus for estimating position of head, computer readable storage medium thereof
CN111488779A (en) Video image super-resolution reconstruction method, device, server and storage medium
CN111723688B (en) Human body action recognition result evaluation method and device and electronic equipment
JP2012048326A (en) Image processor and program
Maity et al. Background modeling and foreground extraction in video data using spatio-temporal region persistence features
CN117056786A (en) Non-contact stress state identification method and system
CN113255478A (en) Composite fatigue detection method, terminal equipment and storage medium
CN112101139B (en) Human shape detection method, device, equipment and storage medium
CN111612712B (en) Face correction degree determination method, device, equipment and medium
Chang et al. Multi-level smile intensity measuring based on mouth-corner features for happiness detection
CN109389489B (en) Method for identifying fraudulent behavior, computer readable storage medium and terminal equipment
CN113128505A (en) Method, device, equipment and storage medium for detecting local visual confrontation sample
CN111523373A (en) Vehicle identification method and device based on edge detection and storage medium
CN113610071A (en) Face living body detection method and device, electronic equipment and storage medium
CN113705660A (en) Target identification method and related equipment
CN113763313A (en) Text image quality detection method, device, medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210813