CN110063736A - Fatigue detection and wake-promotion system based on eye-movement parameter monitoring with a MOD-Net network - Google Patents
Fatigue detection and wake-promotion system based on eye-movement parameter monitoring with a MOD-Net network
- Publication number
- CN110063736A (application CN201910372045.4A)
- Authority
- CN
- China
- Prior art keywords
- mod
- layer
- module
- fatigue
- eye movement
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/163—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/18—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state for vehicle drivers or machine operators
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6801—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
- A61B5/6802—Sensor mounted on worn items
- A61B5/6803—Head-worn items, e.g. helmets, masks, headphones or goggles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
- A61M2021/0005—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
- A61M2021/0022—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the tactile sense, e.g. vibrations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
- A61M2021/0005—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
- A61M2021/0027—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the hearing sense
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
- A61M2021/0005—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
- A61M2021/0044—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the sight sense
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
- A61M2021/0005—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
- A61M2021/0083—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus especially for waking up
Abstract
The invention discloses a fatigue detection and wake-promotion system based on eye-movement parameter monitoring with a MOD-Net network, comprising: a lens body; an image capture module for acquiring single-eye image data of the wearer; a wake-promotion module; a power module; and a background processing module comprising a MOD-Net network eye-movement data analysis module, a fatigue-degree analysis module and a main control module. By analyzing eye-movement data, the invention can automatically judge the wearer's fatigue level in real time, issue early warnings according to the fatigue grade, and apply combined sound, light and vibration wake-up stimulation, so that fatigue is reduced and alertness and working capacity are improved. Because the eye-movement data are analyzed with the MOD-Net network eye-movement data analysis module, the system has good robustness to noise, rotation and scale change, can capture blurred eye regions, and thus improves the accuracy of fatigue detection.
Description
Technical field
The present invention relates to the field of fatigue detection and awakening-stimulation technology, and in particular to a fatigue detection and wake-promotion system based on eye-movement parameter monitoring with a MOD-Net network.
Background technique
With the development of society, the level of automation at work keeps rising, and many posts require staff such as pilots to maintain high dexterity, ample energy and sustained attention, so as to ensure fault-free operation of high-end equipment and the smooth execution of related tasks. However, prolonged, high-intensity work on irregular schedules impairs the judgment, decision-making and operational performance of staff. It is therefore desirable to accurately evaluate, under continuous-operation conditions, the influence of circadian rhythm and functional state on working capacity, and to apply scientific intervention so as to improve operational efficiency.
At present, fatigue-detection methods at home and abroad fall broadly into subjective and objective detection. Objective detection mainly covers behavioral features (blinking, head movement, mouth movement, etc.) and physiological features (EEG, EOG, EMG, etc.); subjective detection mainly covers evaluative questionnaires and physiological-reaction tests. Wake-promotion methods mainly include physical, chemical and biological regulation. Related studies show that PERCLOS (the percentage of time the eyes are at least 80% closed) correlates strongly with human fatigue, and that light of certain wavelengths, and sound and vibration stimulation of specific frequency and loudness, are effective in raising the degree of wakefulness.
By detecting the eyes and analyzing eye-movement parameters with an algorithm, the degree of fatigue can be judged, and the quality of the algorithm used has a significant impact on the detection result. Target-detection algorithms based on region selection can be used for eye-movement parameter detection. They form the most mature object detection and recognition framework at this stage: they reduce the detection process to a classification task and use deep learning to improve detection accuracy. Representative algorithms are the R-CNN series: R-CNN, Fast R-CNN and Faster R-CNN. R-CNN extracts object candidate regions with Selective Search, extracts features from each candidate region with a CNN, and trains an SVM classifier on those features. Its improved model Fast R-CNN introduces a Region of Interest (ROI) pooling layer and classifies the estimated regions with Softmax, improving both the detection accuracy and the efficiency of the model; to further raise detection speed, the Faster R-CNN algorithm was proposed on the basis of Fast R-CNN. The R-CNN family performs target detection in a two-stage manner, which gives high accuracy for multi-target and fine-grained detection, but its computational load is large and it cannot run in real time, among other defects. Moreover, commercially available devices currently concentrate on driver-fatigue detection; research on projects in related fields such as improving individual working capacity is scarce, and portable wearable devices that integrate fatigue detection, early warning and wake-up stimulation are scarcer still.
Summary of the invention
In view of the above deficiencies in the prior art, the technical problem to be solved by the present invention is to provide a fatigue detection and wake-promotion system based on eye-movement parameter monitoring with a MOD-Net network.
To solve the above technical problem, the technical solution adopted by the present invention is a fatigue detection and wake-promotion system based on eye-movement parameter monitoring with a MOD-Net network, comprising:
a lens body, to be worn at the eyes;
an image capture module, arranged on the lens body, for acquiring single-eye image data of the wearer;
a wake-promotion module, arranged on the lens body, including a light stimulation unit, a sound stimulation unit and a vibration stimulation unit;
a power module, arranged on the lens body, for powering the above modules;
a background processing module, comprising a MOD-Net network eye-movement data analysis module, a fatigue-degree analysis module and a main control module;
wherein the MOD-Net network eye-movement data analysis module analyzes eye-movement data as follows: the MOD-Net network first extracts features from the acquired single-eye image, then classifies the image to judge whether it contains an eye region; it next performs bounding-box regression on images containing an eye region to locate the eye region and mark it with a rectangle; finally, it computes the height-to-width ratio of the rectangle to judge the open or closed state of the eye;
wherein the background processing module is communicatively connected with the image capture module and the wake-promotion module; the fatigue-degree analysis module judges the degree of fatigue from the result of the MOD-Net network eye-movement data analysis module, and the main control module, according to that judgment, sends the wake-promotion module a signal for light and/or sound and/or vibration stimulation, by which the wake-promotion module wakes the wearer.
Preferably, the MOD-Net network eye-movement data analysis module analyzes eye-movement data by the following steps:
1) acquire a single-eye image and input it to the MOD-Net network;
2) the MOD-Net network extracts features from the image, then classifies it to judge whether the current image contains an eye region; it then performs bounding-box regression on images containing an eye region to locate the eye region and mark it with a rectangle;
3) compute the height-to-width ratio of the rectangle obtained in step 2) and compare it with the set open/closed-eye threshold to judge the open or closed state of the eye.
Preferably, the fatigue-degree analysis module counts the open and closed states of the eye images over a period of time and computes the percentage of closed-eye frames out of the total number of frames; when this percentage exceeds a set fatigue threshold, the wearer is determined to be in a fatigue state.
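This PERCLOS-style rule can be sketched in a few lines. The function names and the concrete threshold below are illustrative assumptions; the patent specifies only "a set fatigue threshold" (elsewhere suggested in the 0.33-0.42 range).

```python
# Illustrative sketch: count closed-eye frames in a window of per-frame
# labels and compare their share against a fatigue threshold.
FATIGUE_THRESHOLD = 0.4  # assumed value, within the range the text mentions

def perclos(eye_states):
    """Fraction of frames in the window whose eye state is 'closed'."""
    if not eye_states:
        return 0.0
    return sum(1 for s in eye_states if s == "closed") / len(eye_states)

def is_fatigued(eye_states, threshold=FATIGUE_THRESHOLD):
    """Fatigue state when the closed-eye percentage exceeds the threshold."""
    return perclos(eye_states) > threshold
```

A window of, say, half open and half closed frames would exceed the assumed threshold and be flagged as fatigue.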
Preferably, the lens body includes a lens support and left and right supporting legs connected to its two sides. The image capture module includes a camera and an infrared illumination source, acquires single-eye images of the wearer in real time, and transmits them to the MOD-Net network eye-movement data analysis module.
The background processing module is an Android tablet computer or smartphone in which the MOD-Net network eye-movement data analysis module, the fatigue-degree analysis module and the main control module are embedded; the tablet computer or smartphone establishes a wired or wireless communication connection with the image capture module and the wake-promotion module.
Preferably, the light stimulation unit includes two groups of blue light sources arranged separately on the left and right supporting legs of the glasses, for emitting blue light to stimulate the eyes;
the sound stimulation unit includes a buzzer arranged on a supporting leg of the glasses, for providing sound stimulation of different frequencies and loudness;
the vibration stimulation unit includes a vibration motor arranged on a supporting leg of the glasses, for providing vibration in particular sequences.
Preferably, the MOD-Net network includes an input layer, a feature-map extraction network layer, a target-detection network layer and an output layer.
The feature-map extraction network layer extracts features from the input single-eye image to obtain an eye feature map; the target-detection network layer classifies the feature map to judge whether the current image contains an eye region and, for images containing an eye region, locates the eye region with a bounding-box regression network and marks it with a rectangle.
Preferably, the feature-map extraction network layer includes an input layer; a common convolutional layer, a depth convolutional layer and a cross convolutional layer connected in parallel to the input; and a splicing fusion layer that fuses the outputs of the three convolutional layers. The common convolutional layer uses a 3*3 convolution kernel, and the depth convolutional layer uses a 1*1 convolution kernel.
The cross convolutional layer includes a first convolution pair and a second convolution pair connected in parallel to the input of the cross convolutional layer; the first convolution pair consists of a 1*10 convolution kernel followed by a 10*1 convolution kernel, and the second convolution pair consists of a 10*1 convolution kernel followed by a 1*10 convolution kernel.
The processing steps of the cross convolutional layer are:
a. the input image enters the first convolution pair and the second convolution pair separately;
b. in the first convolution pair, the 1*10 convolution kernel and then the 10*1 convolution kernel successively convolve the input image; in the second convolution pair, the 10*1 convolution kernel and then the 1*10 convolution kernel successively convolve the input image;
c. the convolution outputs of the first and second convolution pairs are spliced (concatenated) and output as the convolution result of the cross convolutional layer.
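Steps a-c can be sketched in pure Python with 'valid' cross-correlation. The uniform averaging kernel weights are placeholders for illustration; in the actual network the kernel weights would be learned.

```python
def conv2d_valid(img, kernel):
    """'Valid' 2-D cross-correlation of a list-of-lists image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[r + i][c + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(ow)]
            for r in range(oh)]

def cross_conv(img):
    """Step a: both pairs see the input; step b: 1*10 then 10*1 in the first
    pair, 10*1 then 1*10 in the second; step c: concatenate the outputs."""
    k_1x10 = [[0.1] * 10]                # one row, ten columns (placeholder)
    k_10x1 = [[0.1] for _ in range(10)]  # ten rows, one column (placeholder)
    branch1 = conv2d_valid(conv2d_valid(img, k_1x10), k_10x1)
    branch2 = conv2d_valid(conv2d_valid(img, k_10x1), k_1x10)
    return [branch1, branch2]            # channel-wise splice
```

On a 20*20 input, both branches produce an 11*11 map, so they can be stacked channel-wise; the asymmetric 1*10 / 10*1 pairs capture long horizontal and vertical context at far lower cost than a full 10*10 kernel.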
Preferably, the target-detection network layer includes an attention layer, and a classification network layer and a bounding-box regression network layer connected in parallel to the output of the attention layer.
The classification network layer includes a fully connected layer and an activation function; the bounding-box regression network layer includes a convolutional layer, several fully connected layers and an activation function.
The attention layer applies attention weighting to the input feature map to emphasize target information and suppress irrelevant detail; the classification network layer judges whether the current image contains an eye region; the bounding-box regression network layer locates the eye region and marks it with a rectangle.
Preferably, the attention layer includes several convolutional layers connected in series, a Sigmoid function connected to the output of the convolutional layers, and a Multiply layer; the Multiply layer multiplies the input of the target-detection network layer by the output of the Sigmoid function and takes the result as the output of the attention layer.
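The Multiply gating can be sketched as follows: attention logits pass through a Sigmoid to give weights in (0, 1), which scale the feature map element-wise. The toy 2-D shapes are for illustration; in the network the logits come from the serial convolutional layers of the attention branch.

```python
import math

def sigmoid(x):
    """Logistic function mapping any real logit into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def attention_gate(feature_map, attention_logits):
    """Element-wise product feature * sigmoid(logit), as in the Multiply layer."""
    return [[f * sigmoid(a) for f, a in zip(frow, arow)]
            for frow, arow in zip(feature_map, attention_logits)]
```

A zero logit halves the corresponding feature value, while a large positive logit passes it through nearly unchanged, which is how target information is emphasized and irrelevant detail suppressed.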
Preferably, the classification network layer includes a Dense 1 layer and a Sigmoid function. Dense 1 compresses the input of the classification network layer into a single feature value; after passing through the Sigmoid activation function, this value is used for binary classification to judge whether the current image contains a human-eye region.
The bounding-box regression network layer includes a 3*3 convolution kernel, Dense 100, Dense 4 and a Sigmoid function connected in sequence. Dense 4 compresses its input into 4 feature values; after passing through the Sigmoid activation function, these 4 values represent the two diagonal corner coordinates of the eye-detection rectangle and can be used to locate the eye region, i.e. to find the eye region in the image and draw it with a rectangle.
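A sketch of decoding the two heads follows. Interpreting the four Sigmoid outputs as normalized corner coordinates (x1, y1, x2, y2) scaled by the image size is an assumption for illustration; the text says only that they represent the rectangle's two diagonal corners.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_heads(cls_logit, box_logits, img_w, img_h, cls_threshold=0.5):
    """Return (eye_present, rectangle): Dense 1 + Sigmoid gives the binary
    class; Dense 4 + Sigmoid gives two corners, scaled back to pixels."""
    if sigmoid(cls_logit) <= cls_threshold:
        return False, None               # no eye region: skip localization
    x1, y1, x2, y2 = (sigmoid(v) for v in box_logits)
    return True, (round(x1 * img_w), round(y1 * img_h),
                  round(x2 * img_w), round(y2 * img_h))
```

Gating localization on the classification head matches the stated design intent: when the wearer removes the glasses, no eye is detected and no rectangle is regressed.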
The beneficial effects of the present invention are: by analyzing eye-movement data, the fatigue detection and wake-promotion system based on MOD-Net eye-movement parameter monitoring can automatically judge the wearer's fatigue level in real time, issue early warnings according to the fatigue grade, and apply combined sound, light and vibration wake-up stimulation, thereby reducing fatigue and improving alertness and working capacity. Because the eye-movement data are analyzed with the MOD-Net network eye-movement data analysis module, the system has good robustness to noise, rotation and scale change, can capture blurred eye regions, and improves the accuracy of fatigue detection.
Detailed description of the invention
Fig. 1 is a functional block diagram of the fatigue detection and wake-promotion system based on MOD-Net eye-movement parameter monitoring;
Fig. 2 is a structural schematic diagram of the glasses in one embodiment of the invention;
Fig. 3 is a flow chart of the fatigue detection method based on MOD-Net deep target detection;
Fig. 4 is a functional block diagram of the MOD-Net network;
Fig. 5 is a functional block diagram of the feature-map extraction network layer;
Fig. 6 is a functional block diagram of the target-detection network layer.
Description of symbols:
1 - lens support; 2 - left supporting leg; 3 - right supporting leg; 4 - power module; 5 - blue light source; 6 - integrated blue-light and infrared illumination source module; 7 - buzzer; 8 - vibration motor; 9 - camera; 10 - USB interface.
Specific embodiment
The present invention is described in further detail below with reference to the embodiments, so that those skilled in the art can implement it with reference to the specification.
It should be appreciated that terms used herein such as "having", "comprising" and "including" do not preclude the presence or addition of one or more other elements or combinations thereof.
As shown in Fig. 1, the fatigue detection and wake-promotion system of this embodiment, based on eye-movement parameter monitoring with a MOD-Net network, comprises: a lens body, an image capture module, a wake-promotion module, a power module 4 and a background processing module.
The lens body is worn at the eyes; the image capture module, arranged on the lens body, acquires the wearer's eye-movement data; the wake-promotion module, arranged on the lens body, includes a light stimulation unit, a sound stimulation unit and a vibration stimulation unit; the power module 4, arranged on the lens body, powers the above modules.
The background processing module includes the MOD-Net network eye-movement data analysis module, the fatigue-degree analysis module and the main control module, and is communicatively connected with the image capture module and the wake-promotion module. The fatigue-degree analysis module judges the degree of fatigue from the result of the MOD-Net network eye-movement data analysis module, and the main control module, according to the judgment result, sends the wake-promotion module signals for light and/or sound and/or vibration stimulation, by which the wake-promotion module wakes the wearer. Specifically, a human awakening grade is obtained from the fatigue level; when the awakening grade is lower than a set value, the main control module issues an awakening warning and controls the wake-promotion module to deliver light and/or sound and/or vibration stimulation.
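The control rule above can be sketched as follows. The patent states only that a warning is issued and stimulation applied once the awakening grade drops below a set value; the graded escalation from light to sound to vibration below is purely an illustrative assumption, as is the numeric grading scheme.

```python
def choose_stimuli(awakening_grade, setting=0.5):
    """Return the stimuli to apply for a given awakening grade.
    An empty list means no intervention; lower grades (assumed scale in
    [0, 1]) add more of the combined light/sound/vibration stimuli."""
    if awakening_grade >= setting:
        return []                         # alert enough: no warning
    stimuli = ["light"]                   # below the set value: warn + light
    if awakening_grade < 0.75 * setting:
        stimuli.append("sound")           # deeper fatigue: add the buzzer
    if awakening_grade < 0.5 * setting:
        stimuli.append("vibration")       # deepest: add the vibration motor
    return stimuli
```

In the device, the returned list would map to the blue light sources, the buzzer and the vibration motor of the wake-promotion module.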
The MOD-Net network eye-movement data analysis module analyzes eye-movement data as follows: the MOD-Net network first extracts features from the acquired single-eye image, then classifies it to judge whether the current image contains an eye region; it then performs bounding-box regression on images containing an eye region to locate the eye region and mark it with a rectangle; finally, it computes the height-to-width ratio of the rectangle to judge the open or closed state of the eye.
Referring to Fig. 2, the lens body includes a lens support 1 and left and right supporting legs 2, 3 connected to its two sides; the system further includes a power module 4 arranged on a supporting leg of the glasses. The light stimulation unit includes two groups of blue light sources 5 arranged separately on the left and right supporting legs, for emitting blue light to stimulate the eyes; one group of blue light sources 5 is integrated with the infrared illumination source into a combined blue-light and infrared illumination source module 6 arranged on the left supporting leg 2. The sound stimulation unit includes a buzzer 7 arranged on a supporting leg, for providing sound stimulation of different frequencies and loudness. The vibration stimulation unit includes a vibration motor 8 arranged on a supporting leg, for providing vibration in particular sequences; further preferably, the vibration stimulation unit also includes a linear motor controller, and the vibration motor 8 is a miniature polarized motor. A USB interface 10 is also provided on a supporting leg for a wired connection with the background processing module.
The image capture module includes a camera 9 and an infrared illumination source, acquires single-eye images of the wearer in real time, and transmits them to the MOD-Net network eye-movement data analysis module; the infrared illumination source provides illumination for the camera 9. The image capture module may comprise one group, arranged at the side of the left eye or of either eye, to acquire images of one eye; or it may comprise two groups, arranged at the two sides of the left and right eyes, to acquire images of both eyes.
In a further preferred embodiment, referring to Fig. 2, one group of blue light sources 5, integrated with the infrared illumination source, is arranged on the left supporting leg 2; the camera 9 (the image capture module comprising one group), the vibration motor 8, the USB interface 10 and the buzzer 7 are all arranged on the left supporting leg 2, with the angle and position of the camera 9 adjustable; the power module 4 and the other group of blue light sources 5 are arranged on the right supporting leg 3, with the blue light source 5 placed at the junction of the supporting leg and the lens support 1. The lens body is manufactured by 3D printing from a lightweight resin, so the whole is light and comfortable to wear; the supporting legs have a flat, curved structure suited to being worn on and supported by the human ear.
The background processing module is an external Android tablet computer or smartphone in which the MOD-Net network eye-movement data analysis module, the fatigue-degree analysis module and the main control module are embedded; the tablet computer or smartphone establishes a wired or wireless communication connection with the image capture module and the wake-promotion module. Alternatively, the background processing module may be a processing chip, embedded in the glasses, carrying the MOD-Net network eye-movement data analysis module, the fatigue-degree analysis module and the main control module.
In one embodiment, the MOD-Net network eye-movement data analysis module analyzes eye-movement data by the following steps:
1) acquire a single-eye image and input it to the MOD-Net network;
2) the MOD-Net network extracts features from the image, then classifies it to judge whether the current image contains an eye region; it then performs bounding-box regression on images containing an eye region to locate the eye region and mark it with a rectangle;
3) compute the height-to-width ratio of the rectangle obtained in step 2) and compare it with the set open/closed-eye threshold to judge the open or closed state. In a preferred embodiment, step 3) specifically comprises: first define the maximum distance between the upper and lower eyelids as the eye height H and the width of the eye as W; the ratio of eye height to eye width is the eye height-to-width ratio β, i.e. β = H/W, and the open/closed-eye threshold is set to βt. Then, from the result of step 2), compute the height-to-width ratio βx of the rectangle in the current image: if βx ≥ βt, the eye is currently open; otherwise it is closed. In a more preferred embodiment, the threshold βt = 0.2.
The present invention characterizes the eye state by the height-to-width ratio of the eye. Even if, during real-time monitoring, violent head movement displaces the lens body or camera 9 and the relative position of the eyes changes, the height-to-width ratio still keeps a relatively stable value; this is determined by the structure of the human eye. With the maximum distance between the upper and lower eyelids defined as the eye height H, the eye width as W, and the height-to-width ratio β = H/W: when the eye is closed, the upper and lower eyelids coincide and β is smallest; when the eye is fully open, β is largest; under normal conditions β takes values in [0, 2].
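The open/closed judgment from the regressed rectangle can be sketched as follows, using β = H/W against βt = 0.2 as stated above. The (x1, y1, x2, y2) rectangle convention is an assumed format for illustration.

```python
def eye_aspect_ratio(box):
    """beta = H / W for an eye bounding rectangle (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    w, h = abs(x2 - x1), abs(y2 - y1)
    return h / w if w else 0.0

def eye_state(box, beta_t=0.2):
    """Open when beta >= beta_t (beta_t = 0.2 per the text), else closed."""
    return "open" if eye_aspect_ratio(box) >= beta_t else "closed"
```

Because β is a ratio, it is unchanged when the whole rectangle shifts or scales with head movement, which is exactly the stability property argued above.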
In one embodiment, the fatigue-degree analysis module counts the open and closed states of the eye images over a period of time and computes the percentage of closed-eye frames out of the total number of frames; when this percentage exceeds the set fatigue threshold, a fatigue state is determined. In a preferred embodiment, the fatigue threshold is set to 0.33-0.42.
The MOD-Net network includes an input layer (Input), a feature-map extraction network layer (Feature Generator Networks, FGN), a target-detection network layer (Detection Networks, DN) and an output layer (Output).
Referring to Fig. 4, which shows the overall MOD-Net network structure, in this embodiment the feature-map extraction network layer comprises two such layers connected in series. The feature-map extraction network layer extracts features from the input single-eye image to obtain an eye feature map; the target-detection network layer classifies the feature map to judge whether the current image contains an eye region and, for images containing an eye region, locates the eye region with a bounding-box regression network and marks it with a rectangle. MOD-Net is trained end to end; the loss function joins the classification loss and the bounding-box regression loss through a balance factor, which better promotes the learning ability of the network and accomplishes both the classification of the image and the localization of the eye region. The purpose of classification is to judge whether the current image contains an eye region; the purpose of localization is, if it does, to find the location of the eye. In traditional image processing, the eye region is often extracted from the structural features of the image, so when the wearer takes the glasses off, traditional algorithms can still find feature regions that merely resemble an eye, leading to false fatigue judgments. The present algorithm instead extracts the semantic information of the image and, on the basis of semantic features, judges whether the currently acquired image contains an eye at all, and decides accordingly whether to proceed to eye localization and fatigue analysis. The overall network structure is shown in Fig. 4.
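The joint loss described above can be sketched as follows. The patent says only that the classification loss and the bounding-box regression loss are joined by a balance factor; the concrete choice of binary cross-entropy and smooth-L1 here is an assumption for illustration.

```python
import math

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for a probability p against a label y in {0, 1}."""
    p = min(max(p, eps), 1.0 - eps)      # clamp to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1.0 - p))

def smooth_l1(pred, target):
    """Smooth-L1 regression loss summed over the 4 corner coordinates."""
    total = 0.0
    for p, t in zip(pred, target):
        d = abs(p - t)
        total += 0.5 * d * d if d < 1.0 else d - 0.5
    return total

def joint_loss(cls_prob, cls_label, box_pred, box_target, balance=1.0):
    """L = L_cls + balance * L_box; the box term counts only on positives,
    since no ground-truth rectangle exists when no eye is present."""
    loss = bce(cls_prob, cls_label)
    if cls_label == 1:
        loss += balance * smooth_l1(box_pred, box_target)
    return loss
```

Tuning the balance factor trades off how strongly training favors correct eye/no-eye classification versus tight rectangle localization.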
In a preferred embodiment, referring to Fig. 5, the feature map extraction network layer includes an input layer; an ordinary convolutional layer, a depthwise convolutional layer and a cross convolutional layer connected in parallel to the input layer; and a concatenation fusion layer that fuses the outputs of the three convolutional layers.
In a further preferred embodiment, the ordinary convolutional layer is a 3*3 convolution kernel (Conv3*3) and the depthwise convolutional layer is a 1*1 convolution kernel (Conv1*1);
the cross convolutional layer (CConv) includes a first convolution pair and a second convolution pair connected in parallel to the input of the cross convolutional layer; the first convolution pair includes a sequentially connected 1*10 convolution kernel (Conv1*10) and 10*1 convolution kernel (Conv10*1), and the second convolution pair includes a sequentially connected 10*1 convolution kernel (Conv10*1) and 1*10 convolution kernel (Conv1*10).
The processing steps of the cross convolutional layer include:
a. the input image enters the first convolution pair and the second convolution pair respectively;
b. the 1*10 convolution kernel and then the 10*1 convolution kernel of the first convolution pair successively convolve the input image; the 10*1 convolution kernel and then the 1*10 convolution kernel of the second convolution pair successively convolve the input image;
c. the convolution outputs of the first convolution pair and the second convolution pair are concatenated (Concatenate) and output as the convolution result of the cross convolutional layer.
The feature map extraction network layer adopts a parallel design of ordinary convolution, depthwise convolution and cross convolution to fully improve the feature capture ability. The purpose of the 3*3 ordinary convolution kernel is to promote the capture of local features and strengthen local context relations; the purpose of the 1*1 depthwise convolution kernel is to enhance the capture of diverse features without adding many parameters; the purpose of the cross convolution CConv (Cross Convolution) is to expand the convolution receptive field and enhance global context dependency without multi-layer pooling or scaling of the image and without adding an excessive number of parameters.
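The parameter-economy claim for the cross convolution can be checked with simple arithmetic (an illustrative calculation, not taken from the patent text): two stacked 1*10 and 10*1 kernels cover a 10*10 receptive field with far fewer weights than a dense 10*10 kernel:

```python
# Weights per input/output channel pair, biases ignored.
full_10x10 = 10 * 10           # one dense 10x10 kernel
one_pair   = 1 * 10 + 10 * 1   # 1x10 followed by 10x1
two_pairs  = 2 * one_pair      # both branches of the CConv

print(full_10x10, one_pair, two_pairs)  # 100 20 40
```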
In a preferred embodiment, referring to Fig. 6, the target detection network layer includes an attention layer (Attention) and, connected in parallel to the output of the attention layer, a classification network layer and a bounding-box regression network layer. The attention layer applies attention weighting to the input feature map to emphasize target information and suppress irrelevant detail information; the classification network layer judges whether the current image contains an eye region; the bounding-box regression network layer locates the eye region and marks it with a rectangular box.
In a further preferred embodiment, referring to Fig. 4, the attention layer includes several serially connected convolutional layers, a Sigmoid function connected to the output of the convolutional layers, and a Multiply layer; the Multiply layer multiplies the input of the target detection network layer with the output of the Sigmoid function and takes the result as the output of the attention layer.
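A minimal sketch of this attention gating (convolution -> Sigmoid -> Multiply); for illustration the serial convolution stack is collapsed to a single affine map with hand-picked weights `w` and `b`, which are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(feature_map, w, b):
    """Attention layer sketch: an affine map (standing in for the
    convolution stack) followed by a Sigmoid yields per-position
    weights in (0, 1); the Multiply step scales the original feature
    map by those weights, emphasizing strong target responses and
    suppressing irrelevant detail."""
    weights = sigmoid(feature_map * w + b)
    return feature_map * weights  # the Multiply layer

fm = np.array([[0.1, 2.0], [3.0, 0.05]])
out = attention_gate(fm, w=2.0, b=-1.0)
print(out.round(3))  # strong responses pass, weak ones shrink
```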
In a further preferred embodiment, the classification network layer includes a fully connected layer and an activation function, and the bounding-box regression network layer includes a convolutional layer, several fully connected layers and activation functions.
In a still more preferred embodiment, the classification network layer includes Dense 1 and a Sigmoid function. Dense 1 compresses the input of the classification network layer into a single feature value; after passing through the Sigmoid activation function, this feature value is used for binary classification, i.e. judging whether the current image contains a human-eye region. Dense denotes a fully connected layer.
The bounding-box regression network layer includes a sequentially connected 3*3 convolution kernel, Dense 100, Dense 4 and a Sigmoid function. The purpose of Dense 100 is to compress the feature vector to 100 dimensions, which then pass through the Sigmoid activation function to enhance the nonlinear fitting ability of the network. Dense 4 compresses its input into 4 feature values; after the Sigmoid activation function, these 4 values represent the two diagonal corner coordinates of the eye-detection rectangle (e.g. the top-left and bottom-right coordinates) and can be used to locate the eye region, i.e. to find the eye region in the image and draw the rectangular box.
The activation function used in the MOD-Net network can be Sigmoid or ReLU; in one embodiment, the activation function used by Conv (convolutional) layers is ReLU and that used by Dense (fully connected) layers is Sigmoid.
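The two heads described above can be sketched with random stand-in weights; `dense` is a hypothetical helper, and the 3*3 convolution in front of the regression head is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dense(x, out_dim):
    """Fully connected (Dense) layer with random stand-in weights."""
    w = rng.standard_normal((x.size, out_dim)) * 0.1
    return x.reshape(-1) @ w

features = rng.standard_normal(256)   # flattened attention output

# Classification head: Dense 1 -> Sigmoid -> eye / no-eye probability.
p_eye = sigmoid(dense(features, 1))[0]

# Box regression head: Dense 100 -> Sigmoid -> Dense 4 -> Sigmoid;
# the 4 outputs stand for the two diagonal corners (x1, y1, x2, y2),
# squashed into [0, 1] by the final Sigmoid.
h = sigmoid(dense(features, 100))
box = sigmoid(dense(h, 4))

print(0.0 <= p_eye <= 1.0, box.shape)  # True (4,)
```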
The region obtained through the above steps is the eye region. The algorithm is invariant to translation, rotation and scale, highly robust, and relatively strong against noise and uneven illumination; that is, even if the fatigue-monitoring device is worn in a non-standard way, the algorithm can still accurately locate the human-eye region.
A workflow of the above fatigue detection and wake-promoting system based on eye-movement and head-movement parameter monitoring is as follows:
1. Camera 9 captures the single-eye image of the wearer and transmits it to the MOD-Net network eye-movement data analysis module;
2. The MOD-Net network eye-movement data analysis module analyzes the eye-movement data and judges the open-eye or closed-eye state;
3. The fatigue-degree analysis module counts the open-eye and closed-eye states of the eye images over a period of time, calculates the percentage of closed-eye frames among the total frames, determines the fatigue degree of the human body, and obtains a wakefulness grade. When the wakefulness grade falls below a set value, the main control module issues a wake-up warning and controls the wake-promoting module to deliver light stimulation and/or sound stimulation and/or vibration stimulation; specifically, blue light source 5 emits blue light to stimulate the eyes, buzzer 7 emits a sound stimulus, and vibrating motor 8 delivers a vibration stimulus. Different wake-promoting signal levels can be configured according to the wakefulness grade, i.e. blue-light stimulation of different intensities, sound stimulation of different loudness and frequency, and vibration stimulation of different frequencies and sequences.
4. When the wearer responds to the wake-up warning (the wakefulness grade rises above the set value), the wake-promoting module stops the stimulation and the system continues monitoring.
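The four workflow steps above can be sketched as a control loop; the helper names, grade scale and stimulus tuple are illustrative stand-ins, not the patent's actual firmware interface:

```python
# Hypothetical control loop: per-frame analysis feeds a wakefulness
# grade, and stimulation fires while the grade is below threshold.
def monitoring_loop(frames, analyze, fatigue_grade, wake_threshold=2):
    alerts = []
    window = []
    for frame in frames:
        window.append(analyze(frame))   # open (0) / closed (1) per frame
        grade = fatigue_grade(window)   # wakefulness grade so far
        if grade < wake_threshold:
            alerts.append(("light+sound+vibration", grade))
    return alerts

# Toy stand-ins: 'analyze' flags dark frames as closed eyes, and the
# grade drops as the closed-eye ratio rises.
analyze = lambda f: 1 if f < 0.5 else 0
fatigue_grade = lambda w: 3 - int(3 * sum(w) / len(w))

frames = [0.9, 0.8, 0.2, 0.1, 0.1, 0.1]   # wearer's eyes closing
print(monitoring_loop(frames, analyze, fatigue_grade))
```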
Although the embodiments of the present invention have been disclosed above, they are not limited to the applications listed in the description and the embodiments, and the invention can be fully applied in any field suitable for it. For those skilled in the art, additional modifications can easily be realized; therefore, without departing from the general concept defined by the claims and their equivalent scope, the invention is not limited to the specific details.
Claims (10)
- 1. A fatigue detection and wake-promoting system based on eye-movement parameter monitoring with a MOD-Net network, characterized by comprising: a glasses body to be worn at the eyes; an image acquisition module arranged on the glasses body for acquiring single-eye image data of the wearer; a wake-promoting module arranged on the glasses body, including a light stimulation unit, a sound stimulation unit and a vibration stimulation unit; a power module arranged on the glasses body for supplying power to the above modules; and a background processing module comprising a MOD-Net network eye-movement data analysis module, a fatigue-degree analysis module and a main control module; wherein the analysis method of the MOD-Net network eye-movement data analysis module for the eye-movement data is: the MOD-Net network first performs feature extraction on the acquired single-eye image, then classifies to judge whether the current image contains an eye region; thereafter, bounding-box regression is performed on images containing an eye region to determine the position of the eye region, which is marked with a rectangular box; finally, the height-to-width ratio of the rectangular box is calculated to judge the open-eye or closed-eye state; wherein the background processing module is in communication connection with the image acquisition module and the wake-promoting module; the fatigue-degree analysis module performs a fatigue-degree judgment according to the result of the MOD-Net network eye-movement data analysis module, and the main control module, according to the judgment result of the fatigue-degree analysis module, sends the wake-promoting module a signal for light stimulation and/or sound stimulation and/or vibration stimulation, so that the wake-promoting module wakes the human body.
- 2. The fatigue detection and wake-promoting system based on eye-movement parameter monitoring with a MOD-Net network according to claim 1, characterized in that the analysis method of the MOD-Net network eye-movement data analysis module for the eye-movement data specifically includes the following steps: 1) acquiring a single-eye image and inputting it to the MOD-Net network; 2) the MOD-Net network performing feature extraction on the image, then classifying to judge whether the current image contains an eye region, and thereafter performing bounding-box regression on images containing an eye region to determine the position of the eye region and mark it with a rectangular box; 3) calculating the height-to-width ratio of the rectangular box obtained in step 2), and comparing the obtained height-to-width ratio with a set open/closed-eye threshold to judge the open-eye or closed-eye state.
- 3. The fatigue detection and wake-promoting system based on eye-movement parameter monitoring with a MOD-Net network according to claim 2, characterized in that the fatigue-degree analysis module counts the open-eye and closed-eye states of the eye images over a period of time and calculates the percentage of closed-eye frames among the total frames; when the percentage is greater than the set fatigue threshold, a fatigue state is determined.
- 4. The fatigue detection and wake-promoting system based on eye-movement parameter monitoring with a MOD-Net network according to claim 1, characterized in that the glasses body includes a lens support and left and right temple legs connected to the two sides of the lens support; the image acquisition module includes a camera and an infrared illumination source for acquiring the single-eye image of the wearer in real time and transmitting it to the MOD-Net network eye-movement data analysis module; the background processing module is an Android-based tablet computer or smartphone in which the MOD-Net network eye-movement data analysis module, the fatigue-degree analysis module and the main control module are embedded, and the tablet computer or smartphone establishes communication connections with the image acquisition module and the wake-promoting module in a wired or wireless manner.
- 5. The fatigue detection and wake-promoting system based on eye-movement parameter monitoring with a MOD-Net network according to claim 4, characterized in that the light stimulation unit includes two groups of blue light sources respectively arranged on the left and right temple legs of the glasses for emitting blue light to stimulate the eyes; the sound stimulation unit includes a buzzer arranged on a temple leg of the glasses for providing sound stimulation of different frequencies and loudness; the vibration stimulation unit includes a vibrating motor arranged on a temple leg of the glasses for providing vibration of a particular sequence.
- 6. The fatigue detection and wake-promoting system based on eye-movement parameter monitoring with a MOD-Net network according to claim 2, characterized in that the MOD-Net network includes an input layer, a feature map extraction network layer, a target detection network layer and an output layer; the feature map extraction network layer performs feature extraction on the input single-eye image to obtain a feature map of the eye; the target detection network layer classifies the feature map, judges whether the current image contains an eye region, and, for images containing an eye region, determines the position of the eye region using a bounding-box regression network and marks it with a rectangular box.
- 7. The fatigue detection and wake-promoting system based on eye-movement parameter monitoring with a MOD-Net network according to claim 6, characterized in that the feature map extraction network layer includes an input layer; an ordinary convolutional layer, a depthwise convolutional layer and a cross convolutional layer connected in parallel with the input layer; and a concatenation fusion layer for fusing the outputs of the three convolutional layers; the ordinary convolutional layer is a 3*3 convolution kernel and the depthwise convolutional layer is a 1*1 convolution kernel; the cross convolutional layer includes a first convolution pair and a second convolution pair connected in parallel to the input of the cross convolutional layer, the first convolution pair including a sequentially connected 1*10 convolution kernel and 10*1 convolution kernel, and the second convolution pair including a sequentially connected 10*1 convolution kernel and 1*10 convolution kernel; the processing steps of the cross convolutional layer include: a. the input image entering the first convolution pair and the second convolution pair respectively; b. the 1*10 convolution kernel and the 10*1 convolution kernel of the first convolution pair successively convolving the input image, and the 10*1 convolution kernel and the 1*10 convolution kernel of the second convolution pair successively convolving the input image; c. the convolution outputs of the first convolution pair and the second convolution pair being concatenated and output as the convolution result of the cross convolutional layer.
- 8. The fatigue detection and wake-promoting system based on eye-movement parameter monitoring with a MOD-Net network according to claim 7, characterized in that the target detection network layer includes an attention layer and, connected in parallel with the output of the attention layer, a classification network layer and a bounding-box regression network layer; the classification network layer includes a fully connected layer and an activation function, and the bounding-box regression network layer includes a convolutional layer, several fully connected layers and activation functions; the attention layer applies attention weighting to the input feature map to emphasize target information and suppress irrelevant detail information; the classification network layer judges whether the current image contains an eye region; the bounding-box regression network layer locates the eye region and marks it with a rectangular box.
- 9. The system according to claim 8, characterized in that the attention layer includes several serially connected convolutional layers, a Sigmoid function connected to the output of the convolutional layers, and a Multiply layer, the Multiply layer being used to multiply the input of the target detection network layer with the output of the Sigmoid function and to take the result as the output of the attention layer.
- 10. The system according to claim 9, characterized in that the classification network layer includes Dense1 and a Sigmoid function, Dense1 being used to compress the input of the classification network layer into a single feature value which, after passing through the Sigmoid activation function, is used for binary classification, i.e. judging whether the current image contains a human-eye region; the bounding-box regression network layer includes a sequentially connected 3*3 convolution kernel, Dense100, Dense4 and a Sigmoid function, Dense4 being used to compress its input into 4 feature values which, after the Sigmoid activation function, respectively represent the two diagonal corner coordinates of the eye-detection rectangle and can be used to locate the eye region, i.e. to find the eye region in the image and draw the rectangular box.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910372045.4A CN110063736B (en) | 2019-05-06 | 2019-05-06 | Eye movement parameter monitoring fatigue detection and wake-up promotion system based on MOD-Net network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110063736A true CN110063736A (en) | 2019-07-30 |
CN110063736B CN110063736B (en) | 2022-03-08 |
Family
ID=67370035
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910372045.4A Active CN110063736B (en) | 2019-05-06 | 2019-05-06 | Eye movement parameter monitoring fatigue detection and wake-up promotion system based on MOD-Net network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110063736B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111429316A (en) * | 2020-03-23 | 2020-07-17 | 宁波视科物电科技有限公司 | Online learning special attention detection system and method based on augmented reality glasses |
CN111700585A (en) * | 2020-07-24 | 2020-09-25 | 安徽猫头鹰科技有限公司 | Human eye fatigue degree monitoring system |
CN113317792A (en) * | 2021-06-02 | 2021-08-31 | 樊天放 | Attention detection system and method based on binocular eye vector analysis |
CN115294639A (en) * | 2022-07-11 | 2022-11-04 | 惠州市慧昊光电有限公司 | Color temperature adjustable lamp strip and control method thereof |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106295474A (en) * | 2015-05-28 | 2017-01-04 | 交通运输部水运科学研究院 | The fatigue detection method of deck officer, system and server |
CN106529496A (en) * | 2016-11-24 | 2017-03-22 | 广西大学 | Locomotive driver real-time video fatigue detection method |
CN106557579A (en) * | 2016-11-28 | 2017-04-05 | 中通服公众信息产业股份有限公司 | A kind of vehicle model searching system and method based on convolutional neural networks |
US20170119298A1 (en) * | 2014-09-02 | 2017-05-04 | Hong Kong Baptist University | Method and Apparatus for Eye Gaze Tracking and Detection of Fatigue |
CN108309311A (en) * | 2018-03-27 | 2018-07-24 | 北京华纵科技有限公司 | A kind of real-time doze of train driver sleeps detection device and detection algorithm |
Also Published As
Publication number | Publication date |
---|---|
CN110063736B (en) | 2022-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110063736A (en) | The awake system of fatigue detecting and rush of eye movement parameter monitoring based on MOD-Net network | |
CN105050247B (en) | Light intelligent regulating system and its method based on expression Model Identification | |
CN107007257B (en) | The automatic measure grading method and apparatus of the unnatural degree of face | |
CN107423730A (en) | A kind of body gait behavior active detecting identifying system and method folded based on semanteme | |
CN106096662B (en) | Human motion state identification based on acceleration transducer | |
CN109717830A (en) | The fatigue detecting of parameter monitoring is moved based on eye movement and head and promotees system of waking up | |
CN109558865A (en) | A kind of abnormal state detection method to the special caregiver of need based on human body key point | |
CN103211599A (en) | Method and device for monitoring tumble | |
CN103211605B (en) | Psychological testing system and method | |
CN106388771B (en) | A kind of method and motion bracelet of automatic detection human body physiological state | |
CN108683724A (en) | A kind of intelligence children's safety and gait health monitoring system | |
CN109543679A (en) | A kind of dead fish recognition methods and early warning system based on depth convolutional neural networks | |
CN110119672A (en) | A kind of embedded fatigue state detection system and method | |
CN104007822A (en) | Large database based motion recognition method and device | |
CN105843065A (en) | Context awareness system based on wearable equipment, and control method | |
CN109847168A (en) | Wearable fatigue detecting and interfering system | |
CN206314044U (en) | A kind of Intelligent worn device feedback lighting device | |
CN108958482B (en) | Similarity action recognition device and method based on convolutional neural network | |
CN110013231B (en) | Sleep environment illumination condition identification method | |
CN112691292A (en) | Parkinson closed-loop deep brain stimulation system based on wearable intelligent equipment | |
CN109394203A (en) | The monitoring of phrenoblabia convalescence mood and interference method | |
CN113951837A (en) | Reading surface light measuring method in sleeping environment | |
CN107506781A (en) | A kind of Human bodys' response method based on BP neural network | |
CN109567832A (en) | A kind of method and system of the angry driving condition of detection based on Intelligent bracelet | |
CN114255508A (en) | OpenPose-based student posture detection analysis and efficiency evaluation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||