CN110119672A - An embedded fatigue state detection system and method - Google Patents

An embedded fatigue state detection system and method

Info

Publication number
CN110119672A
CN110119672A (application CN201910232993.8A)
Authority
CN
China
Prior art keywords
eye
mouth
frame number
image
unit time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910232993.8A
Other languages
Chinese (zh)
Inventor
李璋
任雄
石鑫
吴志伟
李华涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei University
Original Assignee
Hubei University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2019-03-26
Filing date: 2019-03-26
Publication date: 2019-08-13
Application filed by Hubei University filed Critical Hubei University
Priority to CN201910232993.8A
Publication of CN110119672A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507Summing image-intensity values; Histogram projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes an embedded fatigue state detection system and method. The system comprises: an image acquisition module, for dynamically capturing video of the learner through a camera, converting the video into frame images at a preset frame interval, and normalizing them; a data processing module, for locating the face in each frame image, then locating and cropping the eye and mouth images, identifying the open and closed states of the eyes and mouth respectively with a trained convolutional neural network, and, using the PERCLOS and FOM rules, computing the frame frequencies of eye closure and yawning from the network output and comparing them against preset joint decision thresholds to judge whether the target is fatigued; and an output decision module, for controlling a speaker and a display screen according to the judgment of the data processing module. The system and method of the invention are highly robust, practical, and easy to turn into a product.

Description

An embedded fatigue state detection system and method
Technical field
The invention belongs to the field of fatigue state detection, and in particular relates to an embedded learning-fatigue state detection system and method based on convolutional neural networks.
Background art
With the rapid development of information technology, more and more learners study online using Internet resources and learn autonomously with information tools. In such settings, the lack of supervision of the learning state may reduce learning efficiency and make fatigue more likely. The fatigue state effectively reflects the learner's interest in, and depth of understanding of, the current content, so detecting the learning-fatigue state has great value and significance for reminding learners to focus and adjust their state.
Current fatigue detection methods mainly fall into two groups. One uses biosensors to detect physiological indicators such as EEG, ECG, heart rate, and respiration. The other captures images of the learning state, analyzes the subject's facial features, and compares manually defined characteristic information (eye contour, mouth-opening distance, and the like) to decide whether the subject is fatigued. Physiological detection requires installing physiological sensors, most of which must be attached to the learner's body, so its applicability to learning scenarios is poor. In facial feature detection, manually defined parameters make the algorithm complicated and the system structure cumbersome; moreover, the images to be detected are disturbed by external factors such as camera angle, glasses, hair, and lighting, so the approach lacks stability, which makes feature extraction and comparison difficult and significantly degrades judgment accuracy. Improving and upgrading the discrimination method and technology therefore has practical significance.
Summary of the invention
To address the problems of existing fatigue state detection methods, namely that manually defined state features are difficult to design, cumbersome algorithms make the system complex, recognition accuracy is low, and applicability across scenarios is poor, the present invention provides an embedded learning-fatigue state detection system and method based on deep learning. The recognition method introduces a convolutional neural network to learn and classify the facial features of the test subject, and combines the PERCLOS (percentage of eyelid closure) and FOM (frequency of open mouth) decision rules to detect the fatigue state.
A first aspect of the present invention proposes an embedded fatigue state detection system, the system comprising:
an image acquisition module, for dynamically capturing video of the learner through a camera, converting the video into frame images at a preset frame interval, and normalizing them;
a data processing module, for locating the face in each frame image, then locating and cropping the eye and mouth images; identifying the open and closed states of the eyes and mouth respectively with a trained convolutional neural network; and, using the PERCLOS and FOM rules, computing the frame frequencies of eye closure and yawning from the network output and comparing them against preset joint decision thresholds to judge whether the target is fatigued;
an output decision module, for controlling a speaker and a display screen according to the judgment of the data processing module.
Optionally, the data processing module specifically includes:
a face detection unit, extracting the face image from the frame image using an algorithm based on the histogram of oriented gradients (HOG);
a feature extraction unit, detecting the facial landmarks with a facial landmark estimation algorithm, in which the ERT algorithm detects the key points in the face image; aligning the face; locating the eye and mouth regions according to the key-point indices; and segmenting out the eye and mouth images;
a model training unit, designing the convolutional neural network structure and training the network on a dataset composed of the public CelebA dataset and the eye and mouth images obtained by the feature extraction unit;
a classification and recognition unit, recognizing the acquired eye and mouth images with the trained convolutional neural network, obtaining the two states (open and closed) of the eyes and the mouth, and saving the results in a fixed-length queue;
a joint judgment unit, presetting joint decision thresholds, detecting in real time the distribution of values in the queue, and using the PERCLOS and FOM rules to compute the frame frequencies of eye closure and mouth opening within a unit time; fatigue is determined if the frame frequency of eye closure or mouth opening exceeds the joint decision threshold, and the judgment result is sent to the output decision module.
Optionally, in the classification and recognition unit, the fully connected layer of the convolutional neural network uses a softmax classifier, defined as:
p_j = exp(y'_j) / Σ_k exp(y'_k)
where j = 0, 1, and p_j is the probability that the output belongs to class j; y'_j = Σ_i h_i·w_ij + b_j is the output of the last fully connected layer; h_i is the output of the previous layer; and w_ij and b_j are, respectively, the weights and bias of the last layer.
Optionally, in the joint judgment unit, the frame frequencies of eye closure and mouth opening within a unit time are computed with the PERCLOS and FOM rules as follows:
let f_per be the eye-closure frame frequency, the ratio of closed-eye frames to total frames within a unit time:
f_per = n / N
where n is the number of closed-eye frames within the unit time and N is the total number of video frames captured by the camera within the unit time;
let f_fom be the mouth-open frame frequency, the ratio of mouth-open frames within the unit time to the total frames within the unit time:
f_fom = n' / N
where n' is the number of mouth-open frames within the unit time and N is the total number of video frames captured by the camera within the unit time.
Optionally, in the joint judgment unit, the joint decision thresholds and decision rules are set as follows: fatigue is determined when f_per > 0.4 or f_fom > 0.4; fatigue is likewise determined when f_per > 0.25 and f_fom > 0.25; otherwise the subject is not fatigued.
Optionally, when the target is judged fatigued, the output decision module controls the speaker to sound a chime reminding the learner to pay attention; the display screen records the number of fatigue events and the study duration, and if the study duration reaches the set study duration, the speaker chimes to remind the learner to take a rest.
A second aspect of the present invention proposes an embedded fatigue state detection method, the method comprising:
S1, the camera dynamically captures video of the learner's activity during study, and the video is converted into frame images at a preset frame interval;
S2, the face image is extracted from the frame image using an algorithm based on the histogram of oriented gradients (HOG);
S3, the key points in the face image are detected with the Ensemble of Regression Trees (ERT) algorithm, the face is aligned, and the eye and mouth images are located and cropped according to the key-point indices;
S4, the convolutional neural network structure is designed, and the network is trained on a dataset composed of the public CelebA dataset and the cropped eye and mouth images;
S5, the acquired eye and mouth images are recognized with the trained convolutional neural network, yielding the two states (open and closed) of the eyes and the mouth, which are saved in a fixed-length queue;
S6, the distribution of values in the queue is detected in real time, joint decision thresholds are preset, and the PERCLOS and FOM rules are used to compute the frame frequencies of eye closure and mouth opening within a unit time; if the frame frequency of eye closure or mouth opening exceeds the joint decision threshold, the learner is judged fatigued.
Optionally, after step S6 the method further comprises:
S7, if the learner is judged fatigued, a chime is sounded through the speaker as a reminder, and the number of fatigue events and the study duration are recorded on the display screen; if the study duration exceeds the set study duration, the same chime reminder is given.
Optionally, in step S6, the frame frequencies of eye closure and mouth opening within a unit time are computed with the PERCLOS and FOM rules as follows:
let f_per be the eye-closure frame frequency, the ratio of closed-eye frames to total frames within a unit time:
f_per = n / N
where n is the number of closed-eye frames within the unit time and N is the total number of video frames captured by the camera within the unit time;
let f_fom be the mouth-open frame frequency, the ratio of mouth-open frames within the unit time to the total frames within the unit time:
f_fom = n' / N
where n' is the number of mouth-open frames within the unit time and N is the total number of video frames captured by the camera within the unit time.
Optionally, in step S6, the preset joint decision thresholds and decision rules are: fatigue is determined when f_per > 0.4 or f_fom > 0.4; fatigue is likewise determined when f_per > 0.25 and f_fom > 0.25; otherwise the learner is not fatigued.
The present invention uses a convolutional neural network to recognize features: the network learns each class of feature autonomously during training without manual involvement, achieves generalization performance usable in practice, and is more robust. Joint thresholds are set on the two features, eyes and mouth, in combination with the PERCLOS and FOM rules; compared with existing single-feature recognition methods, this fusion of multiple cues characterizes fatigue better and more accurately. The whole detection system and method are easy to port to an embedded platform; the structure is simple, the applicability is broad, and productization is facilitated.
Brief description of the drawings
To illustrate the technical solution of the present invention more clearly, the drawings needed for its description are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of the fatigue detection system provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of the fatigue detection device used in the system provided by an embodiment of the present invention;
Fig. 3 is a flow diagram of the fatigue detection method provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the convolutional neural network structure provided by an embodiment of the present invention.
Specific embodiment
To make the purpose, features, and advantages of the present invention more obvious and easier to understand, the technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the embodiments disclosed below are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, the embedded fatigue state detection system proposed by the present invention comprises:
an image acquisition module 110, including a video acquisition unit 1101 for dynamically capturing video of the learner through a camera, and an image conversion unit 1102 for converting the video into frame images at a preset frame interval and normalizing them;
a data processing module 120, for locating the face in each frame image, then locating and cropping the eye and mouth images; identifying the open and closed states of the eyes and mouth respectively with a trained convolutional neural network; and, using the PERCLOS and FOM rules, computing the frame frequencies of eye closure and yawning from the network output and comparing them against preset joint decision thresholds to judge whether the target is fatigued;
an output decision module 130, including a speaker control unit 1301 and a display unit 1302, for controlling the speaker and the display screen according to the judgment of the data processing module.
Preferably, the data processing module 120 may specifically include:
a face detection unit 1201, extracting the face image from the frame image using an algorithm based on the histogram of oriented gradients (HOG);
a feature extraction unit 1202, detecting the facial landmarks with a facial landmark estimation algorithm, in which the ERT algorithm detects the key points in the face image; aligning the face; locating the eye and mouth regions according to the key-point indices; and segmenting out the eye and mouth images;
a model training unit 1203, designing the convolutional neural network structure and training the network on a dataset composed of the public CelebA dataset and the eye and mouth images obtained by the feature extraction unit;
a classification and recognition unit 1204, recognizing the acquired eye and mouth images with the trained convolutional neural network, obtaining the two states (open and closed) of the eyes and the mouth, and saving the results in a fixed-length queue; the fully connected layer of the convolutional neural network uses a softmax classifier, defined as:
p_j = exp(y'_j) / Σ_k exp(y'_k)
where j = 0, 1, and p_j is the probability that the output belongs to class j; y'_j = Σ_i h_i·w_ij + b_j is the output of the last fully connected layer; h_i is the output of the previous layer; and w_ij and b_j are, respectively, the weights and bias of the last layer;
a joint judgment unit 1205, presetting joint decision thresholds, detecting in real time the distribution of values in the queue, and using the PERCLOS and FOM rules to compute the frame frequencies of eye closure and mouth opening within a unit time; fatigue is determined if the frame frequency of eye closure or mouth opening exceeds the joint decision threshold, and the judgment result is sent to the output decision module.
In the joint judgment unit 1205, let f_per be the eye-closure frame frequency, the ratio of closed-eye frames to total frames within a unit time, i.e. f_per = n/N, where n is the number of closed-eye frames within the unit time and N is the total number of video frames captured by the camera within the unit time; let f_fom be the mouth-open frame frequency, the ratio of mouth-open frames within the unit time to the total frames, i.e. f_fom = n'/N, where n' is the number of mouth-open frames within the unit time and N is as above.
In the joint judgment unit 1205, the joint decision thresholds and decision rules are set as follows: fatigue is determined when f_per > 0.4 or f_fom > 0.4; fatigue is likewise determined when f_per > 0.25 and f_fom > 0.25; otherwise the subject is not fatigued.
Fig. 2 is a schematic structural diagram of the fatigue detection device used in the fatigue state detection system; the device is only one embodiment of the system. The fatigue detection device includes an ARM processor 2 and, communicatively connected to it, a camera 1, an embedded GPU 3, a display screen 4, and a speaker 5. The camera 1 acquires video of the subject's learning activity; the embedded ARM processor 2 receives and processes system information; the embedded GPU 3 performs image processing and classification; the display screen 4 records and displays the study duration and the number of fatigue events; and the speaker 5 sounds the chime. Implanting the different functional modules of the system into the fatigue detection device realizes learner fatigue state detection, specifically:
The image acquisition module 110 includes the camera 1 shown in Fig. 2, which captures the learner's video.
The data processing module covers a series of video and image processing steps and the deployment and operation of the convolutional neural network; its settings and processing algorithms are embedded in the ARM processor 2 and the GPU 3, which perform video conversion, image processing and recognition, and control of program operation. The convolutional neural network runs on the embedded GPU 3, which recognizes the open and closed states of the eye and mouth regions; the ARM processor 2 then judges the fatigue state comprehensively from these states.
The output decision module includes the display screen 4 and the speaker 5, which receive signals from the ARM processor 2 and respond accordingly.
The present invention also proposes an embedded fatigue state detection method; refer to Fig. 3, a flow diagram of the fatigue state detection method. The method comprises:
S1, the camera dynamically captures video of the learner's activity during study, and the video is converted into frame images at a preset frame interval.
Specifically, after the fatigue detection system described above is started, video of the learner's activity during study is first captured by the camera and stored in the ARM processor; a designed program converts the video into frame images as required and stores them in a specified directory as detection input data. Since the real-time requirement of learning-fatigue detection is not high, the frame interval can be chosen according to the actual situation.
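As an illustration of this step, the following is a minimal sketch assuming OpenCV; the frame interval, frame count, and output directory are illustrative choices rather than values from the patent.

```python
# Sketch of step S1: grab every k-th camera frame, normalize, and store it.
import cv2
import os

FRAME_INTERVAL = 10          # keep every 10th frame (assumed value)
OUT_DIR = "frames"           # hypothetical storage directory
os.makedirs(OUT_DIR, exist_ok=True)

cap = cv2.VideoCapture(0)    # camera device 0
idx, saved = 0, 0
while saved < 100:           # stop after 100 frames for the sketch
    ok, frame = cap.read()
    if not ok:
        break
    if idx % FRAME_INTERVAL == 0:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{saved:05d}.png"), norm)
        saved += 1
    idx += 1
cap.release()
```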
When the system is started, it records the running duration and shows it on the display screen of Fig. 2; a study duration is set, and once the recorded learning time exceeds the set study duration, the system controls the speaker to sound a chime reminding the learner to take a rest.
S2, the face image is extracted from the frame image using an algorithm based on the histogram of oriented gradients (HOG).
Specifically, after the frame image is obtained, face detection is needed to quickly extract the face image from the original picture, since the face occupies only part of the full image. The face image is extracted with a HOG-based algorithm, whose basic steps are: (1) color space normalization; (2) gradient computation; (3) gradient orientation histogram computation; (4) histogram normalization.
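The patent does not name a particular implementation; a common off-the-shelf realization of HOG-based face detection is dlib's frontal face detector (HOG features plus a linear SVM), sketched below on one of the stored frames. The file paths are hypothetical.

```python
# Sketch of step S2 using dlib's HOG-based frontal face detector
# (an assumption; the patent only specifies a HOG-based algorithm).
import cv2
import dlib

detector = dlib.get_frontal_face_detector()   # HOG + linear SVM detector

gray = cv2.imread("frames/frame_00000.png", cv2.IMREAD_GRAYSCALE)
rects = detector(gray, 1)                     # 1 = upsample once for smaller faces
for r in rects:
    face = gray[max(r.top(), 0):r.bottom(), max(r.left(), 0):r.right()]
    cv2.imwrite("face.png", face)             # hypothetical output path
```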
S3, the key points in the face image are detected with the Ensemble of Regression Trees (ERT) algorithm, the face is aligned, and the eye and mouth images are located and cropped according to the key-point indices.
Specifically, after the face image is obtained, the eye and mouth pictures that serve as the basis for judging fatigue must still be extracted. The present invention detects facial landmarks with a facial landmark estimation algorithm and aligns the face. First, a model of 68 distinctive facial landmarks is chosen, spanning from the outer eyebrows to the bottom of the jaw and including the eye contours and the mouth contour. Then, using the gradient boosting framework proposed for this algorithm, an Ensemble of Regression Trees (ERT) is learned by optimizing the summed error of a loss function; the 68 key points in the face image are detected, the face is aligned, the required eye and mouth regions are located according to the key-point indices, and the eye and mouth feature pictures are segmented out of the located regions.
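A sketch of this step using dlib's ERT-based 68-point shape predictor follows. The landmark index ranges (36 to 41 and 42 to 47 for the eyes, 48 to 67 for the mouth) are the standard 68-point convention; the pretrained model file is dlib's published one, and the crop margin is an illustrative choice.

```python
# Sketch of step S3: detect the 68 landmarks with an ERT shape predictor
# and crop the eye and mouth regions by landmark index.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def crop_region(gray, pts, margin=5):
    """Crop a bounding box around a set of landmark points."""
    xs, ys = pts[:, 0], pts[:, 1]
    return gray[max(ys.min() - margin, 0):ys.max() + margin,
                max(xs.min() - margin, 0):xs.max() + margin]

gray = cv2.imread("frames/frame_00000.png", cv2.IMREAD_GRAYSCALE)
for rect in detector(gray, 1):
    shape = predictor(gray, rect)
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)])
    eye_a = crop_region(gray, pts[36:42])    # landmarks 36-41: one eye
    eye_b = crop_region(gray, pts[42:48])    # landmarks 42-47: other eye
    mouth = crop_region(gray, pts[48:68])    # landmarks 48-67: mouth
```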
S4, the convolutional neural network structure is designed, and the network is trained on a dataset composed of the public CelebA dataset and the cropped eye and mouth images.
Refer to Fig. 4 for the feature recognition convolutional neural network designed by the present invention. The network includes 2 convolutional layers, 2 pooling layers, and 1 fully connected layer. The convolutional layers extract features from the input data; the pooling layers compress the input feature maps, shrinking them to reduce the computational complexity of the network while compressing the features and extracting the main ones; the fully connected layer connects all the features and produces the output. A 36 × 28 picture is fed into the network; after successive convolution and pooling operations, the fully connected layer recognizes the eye or mouth image through the softmax classifier and judges the eyes-open, eyes-closed, mouth-open, or mouth-closed state.
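A minimal PyTorch sketch of the Fig. 4 topology follows: 2 convolutional layers, 2 pooling layers, 1 fully connected layer, and a 36 × 28 input. The patent fixes only these counts and the input size; the channel widths (16, 32) and 3 × 3 kernels are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EyeMouthNet(nn.Module):
    """2 conv + 2 pool + 1 fully connected layer, binary open/closed output."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(32 * 9 * 7, num_classes)  # 36x28 -> 18x14 -> 9x7

    def forward(self, x):                    # x: (batch, 1, 36, 28)
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        return self.fc(x.flatten(1))         # raw scores y'_j

# The class probabilities p_j of the softmax definition given below are
# obtained at inference time with: probs = F.softmax(model(batch), dim=1)
```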
With reference to Fig. 3, the training process of the convolutional neural network is as follows: the frame pictures obtained in step S1 serve as subject samples, and the eye and mouth feature pictures obtained after the face detection and feature segmentation of steps S2 and S3 are used as part of the training set; they are combined with the public CelebA dataset to jointly form the training set of the convolutional neural network. The designed network structure is then loaded and the model is trained, ultimately generating the recognition model. Trained continuously on this set, the model becomes the eye and mouth state recognition model.
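A training sketch under the assumptions of the EyeMouthNet above: the eye/mouth crops and CelebA-derived crops are resized to 36 × 28 grayscale and laid out in an ImageFolder-style directory, one subfolder per class. The directory name, epoch count, batch size, and learning rate are illustrative.

```python
import torch
from torch import nn, optim
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((36, 28)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("train_crops", transform=tfm)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

model = EyeMouthNet()
criterion = nn.CrossEntropyLoss()   # applies softmax internally, so train on raw scores
opt = optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    for imgs, labels in loader:
        opt.zero_grad()
        loss = criterion(model(imgs), labels)
        loss.backward()
        opt.step()
```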
S5, the acquired eye and mouth images are recognized with the trained convolutional neural network, yielding the two states (open and closed) of the eyes and the mouth, which are saved in a fixed-length queue.
A softmax classifier is used in the fully connected layer of the convolutional neural network, defined as:
p_j = exp(y'_j) / Σ_k exp(y'_k)
where j = 0, 1, and p_j is the probability that the output belongs to class j; y'_j = Σ_i h_i·w_ij + b_j is the output of the last fully connected layer; h_i is the output of the previous layer; and w_ij and b_j are, respectively, the weights and bias of the last layer.
S6, the distribution of values in the queue is detected in real time, joint decision thresholds are preset, and the PERCLOS and FOM rules are used to compute the frame frequencies of eye closure and mouth opening within a unit time; if the frame frequency of eye closure or mouth opening exceeds the joint decision threshold, the learner is judged fatigued.
The frame frequencies of eye closure and mouth opening within a unit time are computed with the PERCLOS and FOM rules as follows. Let f_per be the eye-closure frame frequency, the ratio of closed-eye frames to total frames within a unit time, i.e. f_per = n/N, where n is the number of closed-eye frames within the unit time and N is the total number of video frames captured by the camera within the unit time. Let f_fom be the mouth-open frame frequency, the ratio of mouth-open frames within the unit time to the total frames, i.e. f_fom = n'/N, where n' is the number of mouth-open frames within the unit time. If T is the set unit time and f_0 is the frame rate of the camera's video capture, then N = T × f_0. f_per and f_fom quantify the degree of eye and mouth closure well; for both indices, a larger value indicates greater fatigue.
In step S6, the preset joint decision thresholds and decision rules are: fatigue is determined when f_per > 0.4 or f_fom > 0.4; fatigue is likewise determined when f_per > 0.25 and f_fom > 0.25; otherwise the learner is not fatigued.
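This step can be sketched as a fixed-length queue of per-frame states plus the joint threshold test above. The unit time T and frame rate f0 are assumed values; everything else follows the formulas and thresholds just given.

```python
from collections import deque

T, F0 = 60, 10            # unit time in seconds and capture frame rate (assumed)
N = T * F0                # total frames per unit time: N = T x f0

eye_q = deque(maxlen=N)   # 1 if the eyes are closed in a frame, else 0
mouth_q = deque(maxlen=N) # 1 if the mouth is open in a frame, else 0

def update_and_judge(eye_closed: bool, mouth_open: bool) -> bool:
    """Push the latest CNN states and apply the joint PERCLOS/FOM rule."""
    eye_q.append(int(eye_closed))
    mouth_q.append(int(mouth_open))
    if len(eye_q) < N:    # wait until a full unit time has been observed
        return False
    f_per = sum(eye_q) / N    # eye-closure frame frequency
    f_fom = sum(mouth_q) / N  # mouth-open frame frequency
    return (f_per > 0.4 or f_fom > 0.4
            or (f_per > 0.25 and f_fom > 0.25))
```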
S7, if the learner is judged fatigued, a chime is sounded through the speaker as a reminder, and the number of fatigue events and the study duration are recorded on the display screen; if the study duration exceeds the set study duration, the same chime reminder is given.
The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An embedded fatigue state detection system, characterized in that the system comprises:
an image acquisition module, for dynamically capturing video of the learner through a camera, converting the video into frame images at a preset frame interval, and normalizing them;
a data processing module, for locating the face in each frame image, then locating and cropping the eye and mouth images; identifying the open and closed states of the eyes and mouth respectively with a trained convolutional neural network; and, using the PERCLOS and FOM rules, computing the frame frequencies of eye closure and yawning from the network output and comparing them against preset joint decision thresholds to judge whether the target is fatigued;
an output decision module, for controlling a speaker and a display screen according to the judgment of the data processing module.
2. The embedded fatigue state detection system according to claim 1, characterized in that the data processing module specifically comprises:
a face detection unit, extracting the face image from the frame image using an algorithm based on the histogram of oriented gradients (HOG);
a feature extraction unit, detecting the facial landmarks with a facial landmark estimation algorithm, in which the ERT algorithm detects the key points in the face image; aligning the face; locating the eye and mouth regions according to the key-point indices; and segmenting out the eye and mouth images;
a model training unit, designing the convolutional neural network structure and training the network on a dataset composed of the public CelebA dataset and the eye and mouth images obtained by the feature extraction unit;
a classification and recognition unit, recognizing the acquired eye and mouth images with the trained convolutional neural network, obtaining the two states (open and closed) of the eyes and the mouth, and saving the results in a fixed-length queue;
a joint judgment unit, presetting joint decision thresholds, detecting in real time the distribution of values in the queue, and using the PERCLOS and FOM rules to compute the frame frequencies of eye closure and mouth opening within a unit time, wherein fatigue is determined if the frame frequency of eye closure or mouth opening exceeds the joint decision threshold, and the judgment result is sent to the output decision module.
3. The embedded fatigue state detection system according to claim 2, characterized in that, in the classification and recognition unit, the fully connected layer of the convolutional neural network uses a softmax classifier, defined as:
p_j = exp(y'_j) / Σ_k exp(y'_k)
where j = 0, 1, and p_j is the probability that the output belongs to class j; y'_j = Σ_i h_i·w_ij + b_j is the output of the last fully connected layer; h_i is the output of the previous layer; and w_ij and b_j are, respectively, the weights and bias of the last layer.
4. The embedded fatigue state detection system according to claim 2, characterized in that, in the joint judgment unit, the frame frequencies of eye closure and mouth opening within a unit time are computed with the PERCLOS and FOM rules as follows:
let f_per be the eye-closure frame frequency, the ratio of closed-eye frames to total frames within a unit time:
f_per = n / N
where n is the number of closed-eye frames within the unit time and N is the total number of video frames captured by the camera within the unit time;
let f_fom be the mouth-open frame frequency, the ratio of mouth-open frames within the unit time to the total frames within the unit time:
f_fom = n' / N
where n' is the number of mouth-open frames within the unit time and N is the total number of video frames captured by the camera within the unit time.
5. The embedded fatigue state detection system according to claim 4, characterized in that, in the joint judgment unit, the joint decision thresholds and decision rules are set as follows: fatigue is determined when f_per > 0.4 or f_fom > 0.4; fatigue is likewise determined when f_per > 0.25 and f_fom > 0.25; otherwise the subject is not fatigued.
6. The embedded fatigue state detection system according to claim 1, characterized in that, when the target is judged fatigued, the output decision module controls the speaker to sound a chime reminding the learner to pay attention, and the display screen records the number of fatigue events and the study duration; if the study duration exceeds the set study duration, the speaker chimes to remind the learner to take a rest.
7. An embedded fatigue state detection method, characterized in that the method comprises:
S1, the camera dynamically captures video of the learner's activity during study, and the video is converted into frame images at a preset frame interval;
S2, the face image is extracted from the frame image using an algorithm based on the histogram of oriented gradients (HOG);
S3, the key points in the face image are detected with the Ensemble of Regression Trees (ERT) algorithm, the face is aligned, and the eye and mouth images are located and cropped according to the key-point indices;
S4, the convolutional neural network structure is designed, and the network is trained on a dataset composed of the public CelebA dataset and the cropped eye and mouth images;
S5, the acquired eye and mouth images are recognized with the trained convolutional neural network, yielding the two states (open and closed) of the eyes and the mouth, which are saved in a fixed-length queue;
S6, the distribution of values in the queue is detected in real time, joint decision thresholds are preset, and the PERCLOS and FOM rules are used to compute the frame frequencies of eye closure and mouth opening within a unit time; if the frame frequency of eye closure or mouth opening exceeds the joint decision threshold, the learner is judged fatigued.
8. The embedded fatigue state detection method according to claim 7, characterized in that, after step S6, the method further comprises:
S7, if the learner is judged fatigued, a chime is sounded through the speaker as a reminder, and the number of fatigue events and the study duration are recorded on the display screen; if the study duration exceeds the set study duration, the same chime reminder is given.
9. The embedded fatigue state detection method according to claim 7, characterized in that, in step S6, the frame frequencies of eye closure and mouth opening within a unit time are computed with the PERCLOS and FOM rules as follows:
let f_per be the eye-closure frame frequency, the ratio of closed-eye frames to total frames within a unit time:
f_per = n / N
where n is the number of closed-eye frames within the unit time and N is the total number of video frames captured by the camera within the unit time;
let f_fom be the mouth-open frame frequency, the ratio of mouth-open frames within the unit time to the total frames within the unit time:
f_fom = n' / N
where n' is the number of mouth-open frames within the unit time and N is the total number of video frames captured by the camera within the unit time.
10. The embedded fatigue state detection method according to claim 9, characterized in that, in step S6, the preset joint decision thresholds and decision rules are: fatigue is determined when f_per > 0.4 or f_fom > 0.4; fatigue is likewise determined when f_per > 0.25 and f_fom > 0.25; otherwise the learner is not fatigued.
CN201910232993.8A 2019-03-26 2019-03-26 An embedded fatigue state detection system and method Pending CN110119672A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910232993.8A CN110119672A (en) An embedded fatigue state detection system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910232993.8A CN110119672A (en) An embedded fatigue state detection system and method

Publications (1)

Publication Number Publication Date
CN110119672A true CN110119672A (en) 2019-08-13

Family

ID=67520643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910232993.8A Pending CN110119672A (en) An embedded fatigue state detection system and method

Country Status (1)

Country Link
CN (1) CN110119672A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103617421A (en) * 2013-12-17 2014-03-05 上海电机学院 Fatigue detecting method and system based on comprehensive video feature analysis
CN108545080A (en) * 2018-03-20 2018-09-18 北京理工大学 Driver Fatigue Detection and system
CN108309311A (en) * 2018-03-27 2018-07-24 北京华纵科技有限公司 A kind of real-time doze of train driver sleeps detection device and detection algorithm

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532976A (en) * 2019-09-03 2019-12-03 湘潭大学 Method for detecting fatigue driving and system based on machine learning and multiple features fusion
CN110705453A (en) * 2019-09-29 2020-01-17 中国科学技术大学 Real-time fatigue driving detection method
CN111179552A (en) * 2019-12-31 2020-05-19 苏州清研微视电子科技有限公司 Driver state monitoring method and system based on multi-sensor fusion
CN111445496A (en) * 2020-02-26 2020-07-24 沈阳大学 Underwater image recognition tracking system and method
CN111445496B (en) * 2020-02-26 2023-06-30 沈阳大学 Underwater image recognition tracking system and method
CN112052775A (en) * 2020-08-31 2020-12-08 同济大学 Fatigue driving detection method based on gradient histogram video recognition technology
CN113066264A (en) * 2021-02-22 2021-07-02 广州铁路职业技术学院(广州铁路机械学校) Fatigue state identification method and desk lamp
CN113239841A (en) * 2021-05-24 2021-08-10 桂林理工大学博文管理学院 Classroom concentration state detection method based on face recognition and related instrument
WO2022252673A1 (en) * 2021-05-31 2022-12-08 青岛海尔空调器有限总公司 Control method and apparatus for household appliance for adjusting fatigue degree, and household appliance
CN113524182B (en) * 2021-07-13 2023-05-16 东北石油大学 Device and method for intelligently adjusting distance between person and screen
CN113524182A (en) * 2021-07-13 2021-10-22 东北石油大学 Device and method for intelligently adjusting distance between person and screen
CN114049676A (en) * 2021-11-29 2022-02-15 中国平安财产保险股份有限公司 Fatigue state detection method, device, equipment and storage medium
CN114241719A (en) * 2021-12-03 2022-03-25 广州宏途教育网络科技有限公司 Visual fatigue state monitoring method and device in student learning and storage medium
CN114241719B (en) * 2021-12-03 2023-10-31 广州宏途数字科技有限公司 Visual fatigue state monitoring method, device and storage medium in student learning
CN114298189A (en) * 2021-12-20 2022-04-08 深圳市海清视讯科技有限公司 Fatigue driving detection method, device, equipment and storage medium
CN116645917A (en) * 2023-06-09 2023-08-25 浙江技加智能科技有限公司 LED display screen brightness adjusting system and method thereof

Similar Documents

Publication Publication Date Title
CN110119672A (en) An embedded fatigue state detection system and method
Liao et al. Deep facial spatiotemporal network for engagement prediction in online learning
CN105516280B (en) A kind of Multimodal Learning process state information packed record method
Dewan et al. A deep learning approach to detecting engagement of online learners
CN109635727A (en) A kind of facial expression recognizing method and device
KR102174595B1 (en) System and method for identifying faces in unconstrained media
CN108549854B (en) A kind of human face in-vivo detection method
CN106951867A (en) Face identification method, device, system and equipment based on convolutional neural networks
CN107085715A (en) A kind of television set intelligently detects the dormant system and method for user
Liu et al. Region based parallel hierarchy convolutional neural network for automatic facial nerve paralysis evaluation
CN108182409A (en) Biopsy method, device, equipment and storage medium
Hu et al. Research on abnormal behavior detection of online examination based on image information
WO2020140723A1 (en) Method, apparatus and device for detecting dynamic facial expression, and storage medium
CN109902558A (en) A kind of human health deep learning prediction technique based on CNN-LSTM
CN106599800A (en) Face micro-expression recognition method based on deep learning
CN104143079A (en) Method and system for face attribute recognition
Huang et al. RF-DCM: multi-granularity deep convolutional model based on feature recalibration and fusion for driver fatigue detection
KR102263840B1 (en) AI (Artificial Intelligence) based fitness solution display device and method
CN103479367A (en) Driver fatigue detection method based on facial action unit recognition
CN109431523A (en) Autism primary screening apparatus based on asocial's sonic stimulation behavior normal form
Li et al. Research on learner's emotion recognition for intelligent education system
CN106874867A (en) A kind of face self-adapting detecting and tracking for merging the colour of skin and profile screening
CN106485232A (en) A kind of personal identification method based on nose image feature in respiratory
Cowie et al. Recognition of emotional states in natural human-computer interaction
CN110473176A (en) Image processing method and device, method for processing fundus images, electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190813)