CN111310673B - Sleepiness prediction method, device and storage medium

Info

Publication number
CN111310673B
Authority
CN
China
Prior art keywords
sleepiness
calculating
clustering
value
mean
Prior art date
Legal status
Active
Application number
CN202010104790.3A
Other languages
Chinese (zh)
Other versions
CN111310673A (en)
Inventor
王宇峰
Current Assignee
WUXI HONGYU AUTO PARTS MANUFACTURING CO LTD
Original Assignee
WUXI HONGYU AUTO PARTS MANUFACTURING CO LTD
Priority date
Filing date
Publication date
Application filed by WUXI HONGYU AUTO PARTS MANUFACTURING CO LTD
Priority to CN202010104790.3A
Publication of CN111310673A
Application granted
Publication of CN111310673B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/18 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state for vehicle drivers or machine operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 Classification; Matching


Abstract

The invention provides a sleepiness prediction method comprising the following steps. Step S100: acquire an image of the detection target and perform face detection and skin identification, namely detect the target face region, extract and cluster the relevant skin regions, and at the same time measure the eye aspect ratio. Step S200: process the clustering result of each skin region separately; extract and compute an rPPG signal by comparing each clustering result with the RGB mean of the corresponding pixels in each frame, and finally estimate the target heart rate by fast Fourier transform. Step S300: select the best signals detected among the different clustering results to estimate the target sleepiness level. The invention objectively and truly reflects the sleepiness of the driver.

Description

Sleepiness prediction method, device and storage medium
Technical Field
The invention relates to the technical field of biological detection, and in particular to a sleepiness prediction method and device for drivers.
Background
In the prior art, sleepiness is judged and an early warning is given by segmenting monitoring images and comparing facial features, i.e. how the positions of key points such as the mouth and eyes change over time; alternatively, Doppler radar and complex signal processing are used to obtain fatigue data of the subject, such as restless emotional activity, blinking frequency and blink duration, in order to judge whether the subject is drowsy or asleep. Judging sleepiness only from the change of key-point positions such as the mouth and eyes over time relies on a single reference element and cannot objectively and truly reflect the subject's sleepiness; such methods are technically crude, inaccurate and slow to respond.
The present invention relies on rPPG (remote photoplethysmography), the camera-based remote measurement of the blood-volume pulse.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing a sleepiness prediction method and device that are applied to early warning of driver fatigue and that objectively and truly reflect the driver's sleepiness by monitoring the driver's physiological responses.
The embodiment of the invention adopts the technical scheme that:
a sleepiness prediction method comprises the following steps:
Step S100: acquiring an image of the detection target and performing face detection and skin identification, which comprises: detecting the target face region and extracting and clustering the relevant skin regions, while simultaneously measuring the eye aspect ratio;
Step S200: processing the clustering result of each skin region separately; extracting and computing an rPPG signal by comparing each clustering result with the RGB mean of the corresponding pixels in each frame, and finally estimating the target heart rate by fast Fourier transform;
Step S300: selecting the best signals detected among the different clustering results to estimate the target sleepiness level.
Further, the step S100 includes:
Step S101: embedding the image as a long vector, using the long vector as the input of a neural network, and training the neural network to recognize faces by a feedforward algorithm;
Step S102: inputting the image into the trained neural network, which outputs the face coordinates and the eye coordinates, and then calculating the eye aspect ratio;
Step S103: classifying the skin-image pixels of the captured face region.
Further, the step S200 includes:
Step S201: calculating the RGB mean of each class of the clustered image pixels;
Step S202: calculating the rPPG signal and estimating the heart rate, as follows:
In the first step, note that for any cluster k at any time t an RGB mean μ_k can be obtained; μ_k is then compared across the RGB means of the corresponding pixels in each frame, yielding a signal function PV(t, k) of t, i.e. the rPPG signal.
In the second step, PV(t, k) is zero-meaned over time: PV_mean(t, k) = PV(t, k) - DC, where DC is the average amplitude of PV.
In the third step, a fast Fourier transform is applied to PV_mean(t, k), giving the frequency-domain signal FPV(t).
In the fourth step, the number of peaks of FPV(t) within 1 minute is taken as the heart-rate value HR(t), together with the standard deviation SDNN(t) of the heartbeat intervals within several minutes.
Further, in step S300, the estimated indicators include the heart rate hr (t), the standard deviation of the heart beat interval within several minutes sdnn (t), and the eye aspect ratio; the step S300 specifically includes:
Step S301: sort the amplitudes of FPV(t) of each cluster in descending order as Peak_1, Peak_2, …, Peak_n, compute the signal-to-noise ratio SNR_i, and select the N signals FPV(t) with the highest signal-to-noise ratio within 1 minute; from these, compute the optimal HR(t)_best and SDNN(t)_best as sleepiness-estimation indices;
Step S302: judge the sleepiness of the target from HR(t)_best, SDNN(t)_best and the Eye aspect ratio Eye Index(t) over the past several minutes.
For each index Ind, let mean(Ind) denote its average over the observation window and slope(Ind) its slope over time; Mean() represents the mean value.
mean(Ind) and slope(Ind) are compared against the corresponding recognition thresholds threshold_mean and threshold_slope.
When mean(Ind) < threshold_mean and abs(slope(Ind)) > threshold_slope, the target is judged to be sleepy.
Further, step S101 specifically comprises:
(a1) Embedding the image as a long vector:
Let X = {x_1, x_2, …, x_i, …, x_n} be a random vector whose observations represent the face image data;
compute the mean μ and the covariance matrix S:
μ = (1/n) Σ_{i=1..n} x_i    (1)
S = (1/n) Σ_{i=1..n} (x_i - μ)(x_i - μ)^T    (2)
Compute the eigenvalues λ_i and eigenvectors v_i of the covariance matrix S: S v_i = λ_i v_i, i = 1, 2, …, n.
Sort the eigenvectors and take those corresponding to the largest eigenvalues:
y = W^T (x - μ), where W = (v_1, v_2, …, v_k)    (3)
x = W y + μ, where W = (v_1, v_2, …, v_k)    (4)
and process the vectors orthogonally:
X^T X v_i = λ_i v_i    (5)
X X^T (X v_i) = λ_i (X v_i)    (6)
(a2) The long vector is used as the input of the neural network, and the network is trained to recognize faces by a feedforward algorithm with the training objective function
J(W, b) = (1/m) Σ_{i=1..m} (1/2) ||h_{W,b}(x^(i)) - y^(i)||²  +  (λ/2) Σ_{l=1..n_l-1} Σ_{i=1..s_l} Σ_{j=1..s_{l+1}} (W_ji^(l))²    (7)
where m is the number of samples, W is the weight matrix of the neural network, b is the offset of each layer, h is the activation function, x is the vector obtained in step (a1), y is the label value (0, 1), n_l is the number of layers of the network, s_l is the number of neurons in layer l, and λ is the regularization coefficient.
Further, step S103 specifically comprises:
Let the RGB value of each captured pixel be x_ab, x_ab ∈ R³, and determine a value N, i.e. the data set {x_ab, x_ab ∈ R³} is to be clustered into N sets. Among the N clusters, let m_i be the initial RGB mean of cluster i and let b_ab^(t) denote the class assigned to pixel (a, b) at iteration t, where t represents time. m_i is updated as follows until the class of each pixel is obtained: each pixel is assigned to the cluster with the nearest mean,
b_ab^(t) = argmin_i ||x_ab - m_i^(t)||²,
and each mean is recomputed as the average of the pixels assigned to it,
m_i^(t+1) = Σ_ab [b_ab^(t) = i] · x_ab / Σ_ab [b_ab^(t) = i],
where [·] is 1 when the condition holds and 0 otherwise. When ||m_i^(t+1) - m_i^(t)|| < threshold is satisfied, clustering stops and the current assignment b_ab^(t) is taken as the final class of each pixel, where threshold is the criterion for stopping clustering.
The embodiment of the present invention further provides a sleepiness prediction apparatus, comprising:
a memory storing a computer program;
a processor for executing the computer program to implement the steps of the sleepiness prediction method described above.
The embodiment of the present invention further provides a computer storage medium storing a computer program which, when executed by a processor, implements the steps of the sleepiness prediction method described above.
The advantages of the invention are:
1) By monitoring the driver's physiological responses and using data such as facial features, eye signals and head movement, the driver's heart-rate changes are analysed, so that the driver's sleepiness is objectively and truly reflected by human physiological indices; the prediction method is objective and accurate, with a very small error rate.
2) In application, the whole monitoring process is contactless and does not interfere with the person being monitored.
Drawings
FIG. 1 is a flow chart of a sleepiness prediction method of the present invention.
Detailed Description
The invention is further illustrated by the following specific figures and examples.
The embodiment of the invention provides a sleepiness prediction method that is mainly applied to early warning of driver fatigue and that objectively and truly reflects the driver's sleepiness by monitoring the driver's physiological responses. The method comprises the following steps:
Step S100: acquiring an image of the detection target and performing face detection and skin identification, which comprises: detecting the target face region and extracting and clustering the relevant skin regions, while simultaneously measuring the eye aspect ratio. The specific steps are as follows:
Step S101: embed the image as a long vector using an OpenCV facial-image embedding technique, use the long vector as the input of a neural network, and train the network to recognize faces by a feedforward algorithm.
(a1) Embedding the image as a long vector:
Let X = {x_1, x_2, …, x_i, …, x_n} be a random vector whose observations represent the face image data;
compute the mean μ and the covariance matrix S:
μ = (1/n) Σ_{i=1..n} x_i    (1)
S = (1/n) Σ_{i=1..n} (x_i - μ)(x_i - μ)^T    (2)
Compute the eigenvalues λ_i and eigenvectors v_i of the covariance matrix S: S v_i = λ_i v_i, i = 1, 2, …, n.
Sort the eigenvectors and take those corresponding to the largest eigenvalues:
y = W^T (x - μ), where W = (v_1, v_2, …, v_k)    (3)
x = W y + μ, where W = (v_1, v_2, …, v_k)    (4)
and process the vectors orthogonally:
X^T X v_i = λ_i v_i    (5)
X X^T (X v_i) = λ_i (X v_i)    (6)
(a2) The long vector is used as the input of the neural network, and the network is trained to recognize faces by a feedforward algorithm with the training objective function
J(W, b) = (1/m) Σ_{i=1..m} (1/2) ||h_{W,b}(x^(i)) - y^(i)||²  +  (λ/2) Σ_{l=1..n_l-1} Σ_{i=1..s_l} Σ_{j=1..s_{l+1}} (W_ji^(l))²    (7)
where m is the number of samples, W is the weight matrix of the neural network, b is the offset of each layer, h is the activation function, x is the vector obtained in step (a1), y is the label value (0, 1), n_l is the number of layers of the network, s_l is the number of neurons in layer l, and λ is the regularization coefficient.
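For illustration, a minimal Python sketch of the embedding in (a1) follows; the function and variable names are illustrative rather than the patent's, and the covariance matrix is diagonalized directly instead of via the Gram-matrix route of equations (5)-(6):

```python
import numpy as np

def pca_embed(images, k):
    """Project flattened face images onto the top-k eigenvectors of
    their covariance matrix (equations (1)-(4) above).

    images: array of shape (n, d), one flattened face image per row.
    Returns (embeddings, W, mu) with embeddings[i] = W^T (x_i - mu).
    """
    X = images.astype(np.float64)
    mu = X.mean(axis=0)                  # mean vector, eq. (1)
    Xc = X - mu
    S = Xc.T @ Xc / X.shape[0]           # covariance matrix, eq. (2)
    lam, V = np.linalg.eigh(S)           # solves S v_i = lambda_i v_i
    order = np.argsort(lam)[::-1][:k]    # indices of the k largest eigenvalues
    W = V[:, order]                      # W = (v_1, ..., v_k)
    return Xc @ W, W, mu                 # y = W^T (x - mu), eq. (3)
```

For high-resolution images the d x d covariance matrix is large; equations (5)-(6) correspond to the usual trick of diagonalizing the much smaller matrix X X^T first and mapping its eigenvectors back, which the sketch omits for brevity.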
Step S102: input the image into the trained neural network, which outputs the face coordinates and the eye coordinates (taking the left eye as an example; the coordinate notation appears only as equation images in the original), and then calculate the Eye aspect ratio Eye Index, i.e. the ratio of the eye's vertical landmark distances to its horizontal landmark distance.
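The patent gives the Eye Index formula only as an image, so the sketch below assumes the common six-landmark eye-aspect-ratio convention (landmarks p1-p6 around the eye); the name and role match, but the exact original formula may differ:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six eye landmarks (assumed convention).

    eye: array of shape (6, 2); eye[0] and eye[3] are the horizontal
    eye corners, the remaining points lie on the upper and lower lids.
    """
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance p2-p6
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance p3-p5
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)          # falls toward 0 as the eye closes
```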
Step S103: classify the skin-image pixels of the captured face region.
Let the RGB value of each captured pixel be x_ab, x_ab ∈ R³, and determine a value N, i.e. the data set {x_ab, x_ab ∈ R³} is to be clustered into N sets. Among the N clusters, let m_i be the initial RGB mean of cluster i and let b_ab^(t) denote the class assigned to pixel (a, b) at iteration t, where t represents time. m_i is updated as follows until the class of each pixel is obtained: each pixel is assigned to the cluster with the nearest mean,
b_ab^(t) = argmin_i ||x_ab - m_i^(t)||²,
and each mean is recomputed as the average of the pixels assigned to it,
m_i^(t+1) = Σ_ab [b_ab^(t) = i] · x_ab / Σ_ab [b_ab^(t) = i],
where [·] is 1 when the condition holds and 0 otherwise. When ||m_i^(t+1) - m_i^(t)|| < threshold is satisfied, clustering stops and the current assignment b_ab^(t) is taken as the final class of each pixel, where threshold is the criterion for stopping clustering.
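A minimal sketch of this clustering step follows; the random choice of initial means is an assumption, since the patent does not specify an initialisation:

```python
import numpy as np

def cluster_skin_pixels(pixels, n_clusters, threshold=1.0, max_iter=100):
    """Cluster RGB pixels into N sets by iterative mean updates (step S103).

    pixels: array of shape (num_pixels, 3), one RGB triple x_ab per row.
    Returns (labels, means); labels[p] is the final class of pixel p.
    """
    rng = np.random.default_rng(0)
    # initial RGB means m_i: N distinct pixels chosen at random (assumption)
    idx = rng.choice(len(pixels), size=n_clusters, replace=False)
    means = pixels[idx].astype(float)
    for _ in range(max_iter):
        # assign each pixel to the cluster with the nearest mean
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each mean over the pixels assigned to it
        new_means = np.array([pixels[labels == i].mean(axis=0)
                              if np.any(labels == i) else means[i]
                              for i in range(n_clusters)])
        # stop once the means move less than the threshold
        if np.linalg.norm(new_means - means) < threshold:
            return labels, new_means
        means = new_means
    return labels, means
```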
Step S200: process the clustering result of each skin region separately; extract and compute an rPPG signal by comparing each clustering result with the RGB mean of the corresponding pixels in each frame, and finally estimate the target heart rate by fast Fourier transform. Specifically:
Step S201: compute the RGB mean of each class of the clustered image pixels; for cluster k at time t this is
μ_k(t) = Σ_{(a,b): b_ab = k} x_ab(t) / |{(a, b): b_ab = k}|.
Step S202: compute the rPPG signal and estimate the heart rate, as follows.
In the first step, note that for any cluster k at any time t an RGB mean μ_k can be obtained; μ_k is then compared across the RGB means of the corresponding pixels in each frame, yielding a signal function PV(t, k) of t, i.e. the rPPG signal.
In the second step, PV(t, k) is zero-meaned over time: PV_mean(t, k) = PV(t, k) - DC, where DC is the average amplitude of PV.
In the third step, a fast Fourier transform (FFT) is applied to PV_mean(t, k), giving the frequency-domain signal FPV(t).
In the fourth step, the number of peaks of FPV(t) within 1 minute is taken as the heart-rate value HR(t), together with the standard deviation SDNN(t) of the heartbeat intervals within several minutes (such as five minutes).
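As an illustration of step S202, here is a Python sketch under stated assumptions: each cluster's per-frame RGB means have already been reduced to a scalar trace PV(t, k), the dominant in-band FFT frequency stands in for the patent's peak counting on FPV(t), and SDNN is computed from time-domain peak intervals:

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_hr_sdnn(pv, fps):
    """Estimate HR and SDNN from one cluster's rPPG trace (step S202).

    pv: per-frame scalar signal PV(t, k) for a single cluster k.
    fps: camera frame rate in frames per second.
    """
    pv = np.asarray(pv, dtype=float)
    pv_mean = pv - pv.mean()                 # remove the DC component
    fpv = np.abs(np.fft.rfft(pv_mean))       # frequency-domain signal FPV
    freqs = np.fft.rfftfreq(len(pv_mean), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)   # ~42-240 bpm heart-rate band
    hr_bpm = 60.0 * freqs[band][np.argmax(fpv[band])]
    # beat-to-beat intervals from peaks of the time-domain pulse signal
    peaks, _ = find_peaks(pv_mean, distance=max(1, int(fps * 0.25)))
    ibi = np.diff(peaks) / fps               # inter-beat intervals, seconds
    sdnn = float(np.std(ibi)) if ibi.size > 1 else 0.0
    return hr_bpm, sdnn
```

With a 30 fps camera and a 60-second window, pv holds 1800 samples and the frequency resolution of FPV is 1/60 Hz, i.e. one beat per minute.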
Step S300: select the best signals detected among the different clustering results to estimate the target sleepiness level; the indices used for estimation include the heart rate HR(t), the standard deviation SDNN(t) of the heartbeat intervals within several minutes, and the eye aspect ratio. The method specifically comprises the following steps:
Step S301: sort the amplitudes of FPV(t) of each cluster in descending order as Peak_1, Peak_2, …, Peak_n, compute the signal-to-noise ratio SNR_i, and select the N signals FPV(t) with the highest signal-to-noise ratio within 1 minute; from these, compute the optimal HR(t)_best and SDNN(t)_best as sleepiness-estimation indices.
Step S302: judge the sleepiness of the target from HR(t)_best, SDNN(t)_best and the Eye aspect ratio Eye Index(t) over the past several minutes (e.g. three minutes).
For convenience of representation, for each index Ind let mean(Ind) denote its average over the observation window and slope(Ind) its slope over time; Mean() represents the mean value.
mean(Ind) and slope(Ind) are compared against the corresponding recognition thresholds threshold_mean and threshold_slope, which were obtained from experimental data.
When mean(Ind) < threshold_mean and abs(slope(Ind)) > threshold_slope, it is judged that the target, i.e. the driver, is sleepy.
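The decision rule of step S302 can be sketched as follows; fitting slope(Ind) by least squares is an assumption, and the experimental threshold values are not reproduced from the patent:

```python
import numpy as np

def is_sleepy(ind, threshold_mean, threshold_slope):
    """Apply the step S302 rule to one index series Ind(t), e.g.
    HR(t)_best, SDNN(t)_best or Eye Index(t), sampled over the past
    few minutes; the thresholds come from experimental data.
    """
    ind = np.asarray(ind, dtype=float)
    t = np.arange(len(ind))
    mean_ind = ind.mean()                 # mean(Ind)
    slope_ind = np.polyfit(t, ind, 1)[0]  # slope(Ind) via a least-squares fit
    return mean_ind < threshold_mean and abs(slope_ind) > threshold_slope
```

A driver is flagged only when an index is both low on average and still changing rapidly, matching the combined mean/slope condition above.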
The embodiment of the present invention further provides a sleepiness prediction apparatus, comprising:
a memory storing a computer program;
a processor for loading and executing the computer program to implement the steps of the sleepiness prediction method described above.
An embodiment of the present invention further provides a computer storage medium storing a computer program whose instructions, when executed by a processor, implement the steps of the sleepiness prediction method described above.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the invention has been described in detail with reference to examples, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the invention without departing from their spirit and scope, and all such modifications are intended to be covered by the claims of the present invention.

Claims (5)

1. A sleepiness prediction method, characterized by comprising the following steps:
Step S100: acquiring an image of the detection target and performing face detection and skin identification, which comprises: detecting the target face region and extracting and clustering the relevant skin regions, while simultaneously measuring the eye aspect ratio;
Step S200: processing the clustering result of each skin region separately; extracting and computing an rPPG signal by comparing each clustering result with the RGB mean of the corresponding pixels in each frame, and finally estimating the target heart rate by fast Fourier transform;
Step S300: selecting the best signals detected among the different clustering results to estimate the target sleepiness level;
wherein step S100 comprises:
Step S101: embedding the image as a long vector, using the long vector as the input of a neural network, and training the neural network to recognize faces by a feedforward algorithm;
Step S102: inputting the image into the trained neural network, which outputs the face coordinates and the eye coordinates, and then calculating the eye aspect ratio;
Step S103: classifying the skin-image pixels of the captured face region;
wherein step S200 comprises:
Step S201: computing the RGB mean of each class of the clustered image pixels;
Step S202: computing the rPPG signal and estimating the heart rate, as follows:
in the first step, note that for any cluster k at any time t an RGB mean μ_k can be obtained; μ_k is then compared across the RGB means of the corresponding pixels in each frame, yielding a signal function PV(t, k) of t, i.e. the rPPG signal;
in the second step, PV(t, k) is zero-meaned over time: PV_mean(t, k) = PV(t, k) - DC, where DC is the average amplitude of PV;
in the third step, a fast Fourier transform is applied to PV_mean(t, k), giving the frequency-domain signal FPV(t);
in the fourth step, the number of peaks of FPV(t) within 1 minute is taken as the heart-rate value HR(t), together with the standard deviation SDNN(t) of the heartbeat intervals within several minutes;
wherein in step S300 the indices used for estimation include the heart rate HR(t), the standard deviation SDNN(t) of the heartbeat intervals within several minutes, and the eye aspect ratio, and step S300 specifically comprises:
Step S301: sorting the amplitudes of FPV(t) of each cluster in descending order as Peak_1, Peak_2, …, Peak_n, computing the signal-to-noise ratio SNR_i, and selecting the N signals FPV(t) with the highest signal-to-noise ratio within 1 minute; from these, computing the optimal HR(t)_best and SDNN(t)_best as sleepiness-estimation indices;
Step S302: judging the sleepiness of the target from HR(t)_best, SDNN(t)_best and the Eye aspect ratio Eye Index(t) over the past several minutes;
for each index Ind, letting mean(Ind) denote its average over the observation window and slope(Ind) its slope over time, Mean() representing the mean value, mean(Ind) and slope(Ind) are compared against the corresponding recognition thresholds threshold_mean and threshold_slope;
when mean(Ind) < threshold_mean and abs(slope(Ind)) > threshold_slope, the target is judged to be sleepy.
2. The sleepiness prediction method of claim 1, wherein step S101 specifically comprises:
(a1) embedding the image as a long vector:
let X = {x_1, x_2, …, x_i, …, x_n} be a random vector whose observations represent the face image data;
compute the mean μ and the covariance matrix S:
μ = (1/n) Σ_{i=1..n} x_i    (1)
S = (1/n) Σ_{i=1..n} (x_i - μ)(x_i - μ)^T    (2)
compute the eigenvalues λ_i and eigenvectors v_i of the covariance matrix S: S v_i = λ_i v_i, i = 1, 2, …, n;
sort the eigenvectors and take those corresponding to the largest eigenvalues:
y = W^T (x - μ), where W = (v_1, v_2, …, v_k)    (3)
x = W y + μ, where W = (v_1, v_2, …, v_k)    (4)
and process the vectors orthogonally:
X^T X v_i = λ_i v_i    (5)
X X^T (X v_i) = λ_i (X v_i)    (6)
(a2) using the long vector as the input of the neural network, train the network to recognize faces by a feedforward algorithm with the training objective function
J(W, b) = (1/m) Σ_{i=1..m} (1/2) ||h_{W,b}(x^(i)) - y^(i)||²  +  (λ/2) Σ_{l=1..n_l-1} Σ_{i=1..s_l} Σ_{j=1..s_{l+1}} (W_ji^(l))²    (7)
where m is the number of samples, W is the weight matrix of the neural network, b is the offset of each layer, h is the activation function, x is the vector obtained in step (a1), y is the label value (0, 1), n_l is the number of layers of the network, s_l is the number of neurons in layer l, and λ is the regularization coefficient.
3. The sleepiness prediction method of claim 1, wherein step S103 specifically comprises:
letting the RGB value of each captured pixel be x_ab, x_ab ∈ R³, and determining a value N, i.e. the data set {x_ab, x_ab ∈ R³} is to be clustered into N sets; among the N clusters, letting m_i be the initial RGB mean of cluster i and b_ab^(t) the class assigned to pixel (a, b) at iteration t, where t represents time, m_i is updated as follows until the class of each pixel is obtained: each pixel is assigned to the cluster with the nearest mean,
b_ab^(t) = argmin_i ||x_ab - m_i^(t)||²,
and each mean is recomputed as the average of the pixels assigned to it,
m_i^(t+1) = Σ_ab [b_ab^(t) = i] · x_ab / Σ_ab [b_ab^(t) = i];
when ||m_i^(t+1) - m_i^(t)|| < threshold is satisfied, clustering stops and the current assignment b_ab^(t) is taken as the final class of each pixel, where threshold is the criterion for stopping clustering.
4. A sleepiness prediction apparatus, comprising:
a memory storing a computer program;
a processor for executing the computer program to implement the steps of the sleepiness prediction method as claimed in any one of claims 1 to 3.
5. A computer storage medium, characterized in that the computer storage medium stores a computer program which, when executed by a processor, implements the steps of the sleepiness prediction method according to any one of claims 1-3.
CN202010104790.3A 2020-02-20 2020-02-20 Sleepiness prediction method, device and storage medium Active CN111310673B (en)

Priority Applications (1)

Application Number: CN202010104790.3A (CN111310673B); Priority Date: 2020-02-20; Filing Date: 2020-02-20; Title: Sleepiness prediction method, device and storage medium

Applications Claiming Priority (1)

Application Number: CN202010104790.3A (CN111310673B); Priority Date: 2020-02-20; Filing Date: 2020-02-20; Title: Sleepiness prediction method, device and storage medium

Publications (2)

Publication Number Publication Date
CN111310673A CN111310673A (en) 2020-06-19
CN111310673B true CN111310673B (en) 2022-02-08

Family

ID=71148981

Family Applications (1)

Application Number: CN202010104790.3A; Status: Active (CN111310673B); Title: Sleepiness prediction method, device and storage medium

Country Status (1)

Country: CN; Publication: CN111310673B


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11234658B2 (en) * 2018-03-28 2022-02-01 Livmor, Inc. Photoplethysmogram data analysis and presentation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2960862A1 (en) * 2014-06-24 2015-12-30 Vicarious Perception Technologies B.V. A method for stabilizing vital sign measurements using parametric facial appearance models via remote sensors
CN110084085A (en) * 2018-11-06 2019-08-02 天津工业大学 RPPG high-precision heart rate detection method based on shaped signal
CN110384491A (en) * 2019-08-21 2019-10-29 河南科技大学 A kind of heart rate detection method based on common camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xun Chen et al., "Video-Based Heart Rate Measurement: Recent Advances and Future Prospects," IEEE Transactions on Instrumentation and Measurement, vol. 68, no. 10, pp. 3600-3615, October 2019. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant