CN106295474B - Fatigue detection method, system and server for a deck officer - Google Patents
- Publication number
- CN106295474B (grant) · CN201510279711.1A / CN201510279711A (application)
- Authority
- CN
- China
- Prior art keywords
- human eye
- fatigue
- eye area
- multiple images
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The present invention proposes a fatigue detection method, system and server for a deck officer. The fatigue detection method includes: receiving a video stream collected by a wearable device; converting multiple video frames in the video stream into multiple pieces of image information; obtaining the human eye region in the multiple pieces of image information; and performing fatigue analysis on the human eye region in the multiple pieces of image information to judge whether the deck officer is in a fatigued state, and sending the analysis result to the wearable device. The fatigue detection method of the embodiment of the present invention obtains video images of the deck officer through the wearable device, so that the influence of various objective factors can be avoided and the quality of the collected video images can be ensured, thereby improving the reliability of the fatigue detection performed by the server. In addition, the deck officer is reminded through the wearable device when in a fatigued state, which greatly improves the safety of the deck officer when navigating the ship, avoids accidents and protects the life and property of the deck officer.
Description
Technical field
The present invention relates to the technical field of shipping, and more particularly to a fatigue detection method, system and server for a deck officer.
Background art
In recent years, with the rapid development of China's shipping industry, its overall strength has improved significantly and the conditions for safe, scientific operation have been established. However, during this rapid development many problems remain, and safety is particularly prominent among them. With major safety risks not yet under effective control, accidents occur from time to time and the lives and property of deck officers are seriously threatened.
Compared with the relatively well-developed fatigue recognition technology for vehicle drivers, fatigue recognition technology for deck officers is still in its infancy. Existing fatigue recognition technology for vehicle drivers includes, for example, the Driver Alert system launched by Volvo, which helps drivers improve traffic safety by warning them in time before they fall asleep; the PERCLOS system developed by Carnegie Mellon University, which determines the driver's fatigue state by analyzing the position and opening degree of the driver's eyes; the FaceLAB system, which monitors the driver's fatigue state in real time through characteristic parameters such as head posture, eye open/closed state, gaze direction and pupil diameter; and the AWAKE system of the European Union, which comprehensively monitors driver behavior by using multiple sensors, such as image and pressure sensors, to track the driver's eye state, gaze direction, steering-wheel grip and other information in real time.
However, compared with driver fatigue recognition technology, the development of fatigue recognition technology for deck officers is mainly constrained by the following aspects:
(1) The wheelhouse of a ship is large, and during navigation the deck officer usually has to lean out and keep lookout to observe the surrounding waters. The deck officer's range of movement while navigating is therefore large, and existing fatigue recognition technology designed for vehicle drivers has difficulty acquiring the deck officer's state information comprehensively and accurately.
(2) Operations during navigation are simple and monotonous, and the speed of the ship is relatively low, so the tolerance for operating errors is high and deck officers tend to be less conscious of the need for standardized operation during navigation.
(3) The ship-handling environment is affected by both the natural environment and the shipboard environment. Waterborne conditions are influenced by many factors such as fog and water-level fluctuation and are usually more complex and changeable than road conditions; moreover, equipment noise and vibration on board are more complicated. These environmental influences increase the labor intensity and psychological pressure on the deck officer and easily induce fatigue.
Fatigue detection for deck officers is therefore more complex than fatigue detection for vehicle drivers, and more factors need to be considered. On the other hand, because the speed of a ship is relatively low and the tolerance for fatigued operation is higher, the real-time requirement on ship fatigue detection is not as strict.
Summary of the invention
The present invention aims to solve at least one of the technical problems in the related art.
To this end, a first object of the present invention is to propose a fatigue detection method for a deck officer. The fatigue detection method obtains video images of the deck officer through a wearable device, so that the influence of various objective factors can be avoided and the quality of the collected video images can be ensured, thereby improving the reliability of the fatigue detection performed by the server. In addition, the deck officer is reminded through the wearable device when in a fatigued state, which greatly improves the safety of the deck officer when navigating the ship, avoids accidents and protects the life and property of the deck officer.
A second object of the present invention is to propose a fatigue detection system for a deck officer.
A third object of the present invention is to propose a server.
To achieve the above objects, an embodiment of the first aspect of the present invention proposes a fatigue detection method for a deck officer, including the following steps: receiving a video stream collected by a wearable device; converting multiple video frames in the video stream into multiple pieces of image information; obtaining the human eye region in the multiple pieces of image information; and performing fatigue analysis on the human eye region in the multiple pieces of image information to judge whether the deck officer is in a fatigued state, and sending the analysis result to the wearable device.
The fatigue detection method for a deck officer of the embodiment of the present invention obtains video images of the deck officer through the wearable device, which avoids the influence of various objective factors, including the lighting environment, water-level fluctuation, the operating environment and the water-surface view in front of the deck officer. The front-end acquisition system based on the wearable device can collect clear video images, and the quality of the collected video images can be ensured even under adverse conditions such as ship vibration and insufficient illumination, thereby improving the reliability of the fatigue detection performed by the server. In addition, machine vision technology is integrated into the fatigue detection method: the video images are sent to the server through the wearable device, and the server processes the video images and performs eye localization and fatigue detection. When the deck officer is in a fatigued state, a reminder is issued through the wearable device to warn the deck officer, which greatly improves the safety of the deck officer when navigating the ship, avoids accidents and protects the life and property of the deck officer.
To achieve the above objects, an embodiment of the second aspect of the present invention proposes a fatigue detection system for a deck officer, including a server and a wearable device. The wearable device is used to collect a video stream, send the video stream to the server, and receive the analysis result sent by the server. The server is used to receive the video stream collected by the wearable device, convert multiple video frames in the video stream into multiple pieces of image information, obtain the human eye region in the multiple pieces of image information, perform fatigue analysis on the human eye region in the multiple pieces of image information to judge whether the deck officer is in a fatigued state, and send the analysis result to the wearable device.
The fatigue detection system for a deck officer of the embodiment of the present invention obtains video images of the deck officer through the wearable device, which avoids the influence of various objective factors, including the lighting environment, water-level fluctuation, the operating environment and the water-surface view in front of the deck officer. The front-end acquisition system based on the wearable device can collect clear video images, and the quality of the collected video images can be ensured even under adverse conditions such as ship vibration and insufficient illumination, thereby improving the reliability of the fatigue detection performed by the server. In addition, machine vision technology is integrated into the fatigue detection method: the video images are sent to the server through the wearable device, and the server processes the video images and performs eye localization and fatigue detection. When the deck officer is in a fatigued state, a reminder is issued through the wearable device to warn the deck officer, which greatly improves the safety of the deck officer when navigating the ship, avoids accidents and protects the life and property of the deck officer.
To achieve the above objects, an embodiment of the third aspect of the present invention proposes a server, including: a receiving module for receiving a video stream collected by a wearable device; a conversion module for converting multiple video frames in the video stream into multiple pieces of image information; an obtaining module for obtaining the human eye region in the multiple pieces of image information; and an analysis module for performing fatigue analysis on the human eye region in the multiple pieces of image information to judge whether the deck officer is in a fatigued state, and for sending the analysis result to the wearable device.
The server of the embodiment of the present invention processes the video images and performs eye localization and fatigue detection, and when the deck officer is in a fatigued state a reminder is issued through the wearable device to warn the deck officer, which greatly improves the safety of the deck officer when navigating the ship, avoids accidents and protects the life and property of the deck officer.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and will in part become apparent from the description or be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is the flow chart of the fatigue detection method of the deck officer of one embodiment of the invention;
Fig. 2 is the flow chart of the fatigue detection method of the deck officer of a specific embodiment of the invention;
Fig. 3 is a schematic diagram of Haar features in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the integral image in an embodiment of the present invention;
Fig. 5 is the structural schematic diagram of the fatigue detecting system of the deck officer of one embodiment of the invention;And
Fig. 6 is the structural schematic diagram of the server of one embodiment of the invention.
Specific embodiment
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present invention, and are not to be construed as limiting the present invention.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "plurality" means two or more, unless specifically defined otherwise.
Any process or method description in a flow chart or otherwise described herein may be understood as representing a module, segment or portion of executable instruction code including one or more steps for realizing a specific logical function or process, and the scope of the preferred embodiments of the present invention includes other implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
Fig. 1 is a flow chart of a fatigue detection method for a deck officer according to one embodiment of the present invention, and Fig. 2 is a flow chart of a fatigue detection method for a deck officer according to a specific embodiment of the present invention.
As shown in Fig. 1 and Fig. 2, the fatigue detection method for a deck officer includes:
S101: receive a video stream collected by a wearable device.
In one embodiment of the present invention, the wearable device may be a pair of glasses. Specifically, the fatigue detection method for a deck officer based on a wearable device of the present invention is developed on a machine vision platform using a Raspberry Pi B+ hardware device; the front-end device that collects the video stream may take the form of glasses, and an RPi Camera infrared camera is provided on the glasses. Using glasses effectively avoids the influence of factors such as the crew member's range of movement, driving habits and the shipboard environment. That is, eye images of the deck officer can be acquired directly through the wearable glasses, which not only avoids the influence of various objective factors but also improves the quality of the collected eye images, providing good image information for subsequent eye localization and fatigue detection and reducing image noise. In addition, using an infrared camera makes it possible to collect clear eye images under insufficient illumination at night.
Furthermore, after collecting the video images, the wearable device may first preprocess them, for example by compressing the video images or setting their frame rate, so that the transfer rate of the video images can be improved while their quality requirements are still met. The wearable device then transmits the video images as a video stream to the image processing server at the back end, where the server and the wearable device may communicate over a wireless network; the communication mode may include, but is not limited to, one of Wi-Fi, infrared, Bluetooth and a 3G network. After receiving the video stream collected by the wearable device, the server backs up the video stream on the server.
S102: convert multiple video frames in the video stream into multiple pieces of image information.
Specifically, the server obtains multiple video frames from the received video stream and converts them into image information according to a threshold preset in the server. For example, the wearable device collects 10 minutes of continuous video. Since a normal person's blink lasts about 0.2-0.4 seconds, while blinking in a fatigued state is generally slower and is a process of gradually closing the eyes, the eyes generally need at least about 1 second to go from open to closed. The server can therefore set the video frame rate to 10 (i.e. FPS = 10), which is sufficient to capture the eye state in real time and yields 6000 sample images under these conditions.
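As a minimal sketch of this sampling step (not part of the patented implementation), assuming OpenCV is available on the server and the backed-up stream can be read as an ordinary video file; the file name and the 10-fps target are illustrative:

```python
import cv2

def extract_frames(video_path, target_fps=10):
    """Sample frames from a backed-up video stream at roughly target_fps
    and return them as a list of images for later eye localization."""
    capture = cv2.VideoCapture(video_path)
    native_fps = capture.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(int(round(native_fps / target_fps)), 1)

    frames = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:   # keep every step-th frame, i.e. about 10 per second
            frames.append(frame)
        index += 1
    capture.release()
    return frames

# e.g. a 10-minute recording sampled at 10 fps yields about 6000 images:
# images = extract_frames("wearable_stream_backup.mp4")
```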
It should be understood that the server needs to learn the eye features of the deck officer in advance in order to improve the accuracy of eye detection and fatigue detection.
S103: obtain the human eye region in the multiple pieces of image information.
In one embodiment of the present invention, before the server obtains the human eye region in the multiple pieces of image information, it may also preprocess the multiple pieces of image information so that the server obtains image information of good quality. The preprocessing includes one or more of image denoising, histogram equalization and contrast adjustment.
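A possible sketch of such preprocessing, assuming grayscale frames and OpenCV; the particular filters chosen here (Gaussian denoising, histogram equalization, a simple linear contrast stretch) are illustrative and not mandated by the patent:

```python
import cv2

def preprocess(gray_frame):
    """Denoise, equalize and adjust contrast of a grayscale frame
    before eye localization."""
    denoised = cv2.GaussianBlur(gray_frame, (5, 5), 0)               # image denoising
    equalized = cv2.equalizeHist(denoised)                           # histogram equalization
    contrasted = cv2.convertScaleAbs(equalized, alpha=1.2, beta=0)   # contrast adjustment
    return contrasted
```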
Specifically, the server localizes the human eye region in the multiple pieces of image information, i.e. localizes the eye position, so as to obtain from the image information exactly the part of the image that contains the eye region and to remove useless information from the image information.
In one embodiment of the present invention, the server obtaining the human eye region in the multiple pieces of image information specifically includes:
S1031: the server performs eye localization on the multiple pieces of image information according to the Adaboost algorithm based on Haar features, and obtains a first eye region.
S1032: the server binarizes the multiple pieces of image information and performs eye localization on the binarized image information according to the Adaboost algorithm based on Haar features, so as to obtain a second eye region.
S1033: the server judges whether the first eye region and the second eye region match, and when they match takes the first eye region and/or the second eye region as the human eye region in the multiple pieces of image information.
Specifically, the server combines the image information with the learned eye features of the deck officer and performs an initial localization of the eye using the Adaboost algorithm based on Haar features, obtaining the first eye region. The server then analyzes and processes the image information using image processing techniques to obtain a binary image of the image information, and localizes the eye again on the generated binary image using the Adaboost algorithm based on Haar features, obtaining the second eye region. The server then matches the first eye region against the second eye region: if the image set of the first eye region contains the image set of the second eye region, eye detection is judged successful; otherwise the piece of image information is deleted.
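The two-pass localization and matching described above might be sketched as follows. This is an illustration only: the use of OpenCV's pre-trained Haar eye cascade and Otsu binarization are assumptions, and containment of the second region inside the first stands in for the "image set" matching test.

```python
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")   # Haar-feature Adaboost cascade

def detect_eye(gray):
    boxes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return boxes[0] if len(boxes) else None           # (x, y, w, h) or None

def locate_eye_region(gray_frame):
    # First localization on the preprocessed grayscale image
    first = detect_eye(gray_frame)
    # Second localization on the binarized image
    _, binary = cv2.threshold(gray_frame, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    second = detect_eye(binary)
    if first is None or second is None:
        return None                                   # detection failed, drop this frame
    fx, fy, fw, fh = first
    sx, sy, sw, sh = second
    # Accept only if the second region lies inside the first one
    if fx <= sx and fy <= sy and sx + sw <= fx + fw and sy + sh <= fy + fh:
        return first
    return None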
Specifically, the server first extracts Haar-based eye features from the image information. In the image information, eye features can be expressed as information such as coordinates, distance, color, brightness and shape. Haar features are rectangle features and can therefore be abstracted as simple graphics composed of basic geometric elements such as points, lines and rectangles. As shown in Fig. 3, Haar features can be divided into three classes: edge features, line features and center-surround (ring) features. The basic idea of Haar features is to first divide a rectangular frame into blocks and then combine the gray-level pixels of the blocks with edge features for analysis, giving a feature analysis method. A rectangular image region at a specific position in the target image can be abstracted as a Haar feature, so that the image features of the target region can be quantified. The sum of the gray pixel values in the white area of the image minus the sum of the gray pixel values in the black area gives the feature value of the covered region.
Computing the features by means of the integral image improves the feature-calculation speed. The integral image is a matrix representation that can describe global information and is defined as

f(x, y) = Σ_{x'≤x, y'≤y} g(x', y'),

where f(x, y) is the value of the integral image at (x, y) and g(x', y') is the gray value of the original image at (x', y'). Therefore, as shown in Fig. 4, the integral image at the point (x, y) is equal to the sum of all pixel values in the gray area above and to the left of that point.
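A short sketch of computing the integral image and using it to evaluate a rectangle sum (the building block of a Haar feature); numpy is assumed, and the boundary handling shown is one simple convention:

```python
import numpy as np

def integral_image(gray):
    """f(x, y) = sum of all pixels above and to the left of (x, y), inclusive."""
    return gray.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y), width w and height h,
    obtained from four lookups in the integral image ii."""
    a = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    b = ii[y - 1, x + w - 1] if y > 0 else 0
    c = ii[y + h - 1, x - 1] if x > 0 else 0
    d = ii[y + h - 1, x + w - 1]
    return d - b - c + a

# A two-rectangle (edge) Haar feature value is then, for example:
# rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)
```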
The server then identifies the position of the eye in the image information according to the Adaboost algorithm. For a captured image of 24*24 pixels, the number of Haar features that can be matched runs into the tens of thousands, of which only a small number are useful. The present invention realizes fast eye detection using the Adaboost algorithm; its basic idea is to train weak classifiers on a large training set and finally combine them into a strong classifier by algorithmic superposition.
Assume the eye-region image has k features, expressed as f_j(x_i), where 1 ≤ j ≤ k and x_i denotes the i-th sample image. The feature set of each image can then be represented as {f_1(x_i), f_2(x_i), f_3(x_i), ..., f_j(x_i), ..., f_k(x_i)}, with one weak classifier corresponding to each feature.
The server composes a weak classifier h_j(x) from three parts: the feature f_j(x), a threshold θ_j and a parity p_j. One feature corresponds to one weak classifier; the classification threshold is the feature value used to classify all the rectangles, and the parity is a sign indicating the direction of the inequality. The server expresses the weak classifier for the j-th feature as:

h_j(x) = 1 if p_j f_j(x) < p_j θ_j, and h_j(x) = 0 otherwise,

where h_j(x) is the value of the weak classifier, θ_j is the threshold, p_j controls the direction of the inequality and takes the value +1 or -1, and f_j(x) is the feature value.
Based on the Adaboost algorithm, the following steps are performed on n known training samples (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), where y_i ∈ {0, 1} indicates whether the corresponding sample is a true sample.
(1) Take n training samples, of which m are eye samples and l are non-eye samples, expressed as (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), where y_i = 0 and y_i = 1 correspond to eye samples and non-eye samples respectively.
(2) Initialize the error weights: w_{1,i} = 1/(2m) for samples with y_i = 0 and w_{1,i} = 1/(2l) for samples with y_i = 1.
(3) Initialize t = 1, where t ≤ T and T is the number of weak classifiers to be trained.
(4) Normalize the weights: q_i = w_{t,i} / Σ_{j=1}^{n} w_{t,j}.
(5) Train one weak classifier h(x, f, p, θ) for each feature f, compute its weighted error rate ε_f = Σ_i q_i |h(x_i, f, p, θ) − y_i|, select the classifier h_t with the smallest error ε_t, and update the weights as w_{t+1,i} = w_{t,i} β_t^{1−e_i}, where e_i = 0 if sample x_i is classified correctly, e_i = 1 if it is misclassified, and β_t = ε_t / (1 − ε_t).
(6) Let t = t + 1 and repeat step (4), until t > T.
(7) The strong classifier finally obtained is:

H(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t, and H(x) = 0 otherwise, where α_t = log(1/β_t).
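For illustration only, a compact sketch of the training loop in steps (1)-(7), under the assumption that every feature has been pre-evaluated into a matrix F of shape (n_samples, k) and that a simple threshold search yields each weak classifier; the usual 1 = eye / 0 = non-eye label convention is used here, and this is a didactic restatement of standard Adaboost rather than the patented implementation:

```python
import numpy as np

def train_weak(F, labels, weights, j):
    """Pick the threshold and parity for feature j that minimize the weighted error."""
    best = (np.inf, 0.0, 1)
    for theta in np.unique(F[:, j]):
        for p in (1, -1):
            pred = (p * F[:, j] < p * theta).astype(int)
            err = np.sum(weights * np.abs(pred - labels))
            if err < best[0]:
                best = (err, theta, p)
    return best  # (error, threshold, parity)

def adaboost(F, labels, T):
    n, k = F.shape
    m = np.sum(labels == 1)                               # positive (eye) samples
    l = np.sum(labels == 0)                               # negative samples
    w = np.where(labels == 1, 1.0 / (2 * m), 1.0 / (2 * l))
    strong = []
    for _ in range(T):
        q = w / w.sum()                                   # step (4): normalize weights
        errs = [train_weak(F, labels, q, j) for j in range(k)]
        j_best = int(np.argmin([e[0] for e in errs]))     # step (5): best weak classifier
        eps, theta, p = errs[j_best]
        beta = eps / (1.0 - eps + 1e-12)
        pred = (p * F[:, j_best] < p * theta).astype(int)
        e = np.abs(pred - labels)                         # 0 if correct, 1 if wrong
        w = q * beta ** (1 - e)                           # weight update
        strong.append((j_best, theta, p, np.log(1.0 / (beta + 1e-12))))
    return strong  # list of (feature index, threshold, parity, alpha)

def classify(strong, feature_vector):
    score = sum(a for j, th, p, a in strong if p * feature_vector[j] < p * th)
    total = sum(a for *_, a in strong)
    return 1 if score >= 0.5 * total else 0               # step (7): strong classifier
```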
S104: perform fatigue analysis on the human eye region in the multiple pieces of image information to judge whether the deck officer is in a fatigued state, and send the analysis result to the wearable device.
In one embodiment of the present invention, the server performing fatigue analysis on the eye region to judge whether the deck officer is fatigued specifically includes: the server calculates the PERCLOS value of the eye region in the multiple pieces of image information according to the PERCLOS algorithm, compares the PERCLOS value with a fatigue-judgement threshold, and judges that the deck officer is in a fatigued state when the PERCLOS value is greater than or equal to the fatigue-judgement threshold. The server calculates the PERCLOS value according to the following formula:

P(i) = (Σ_{i=1}^{N} f(i) / N) × 100%,

where N is the total number of eye-region samples in the continuous time window and f(i) = 1 if the eye is judged closed in the i-th sample and f(i) = 0 otherwise.
Specifically, since the state of the eyes has a very high correlation with the degree of fatigue of the deck officer, the PERCLOS algorithm (Percentage of Eyelid Closure over the Pupil over Time) is a method of detecting fatigue by analyzing how the eyes open and close. Among its variants, the P80 standard has the highest correlation with the degree of fatigue and is the generally acknowledged "gold standard" of judgement.
After the server localizes the eye region in the image information, it determines the degree of opening and closing of the eye in the eye region by image processing techniques. That is, after calculating P(i), the server compares P(i) with the fatigue-judgement threshold T, where T is an empirically determined parameter obtained after a comprehensive evaluation of the ship-handling environment. If P(i) ≥ T, the eye is judged to be closed, i.e. the deck officer is judged to be in a fatigued state. If P(i) < T, the eye is judged to be open, i.e. the deck officer is judged not to be in a fatigued state. The server then sends the analysis result on whether the deck officer is fatigued to the wearable device.
In one embodiment of the present invention, after the server sends the analysis result to the wearable device, the wearable device issues a warning when the server judges that the deck officer is in a fatigued state. The warning includes one or more of a light prompt, a voice prompt and a vibration prompt.
The fatigue detection method for a deck officer of the embodiment of the present invention obtains video images of the deck officer through the wearable device, which avoids the influence of various objective factors, including the lighting environment, water-level fluctuation, the operating environment and the water-surface view in front of the deck officer. The front-end acquisition system based on the wearable device can collect clear video images, and the quality of the collected video images can be ensured even under adverse conditions such as ship vibration and insufficient illumination, thereby improving the reliability of the fatigue detection performed by the server.
In addition, machine vision technology is integrated into the fatigue detection method: the video images are sent to the server through the wearable device, and the server processes the video images and performs eye localization and fatigue detection. When the deck officer is in a fatigued state, a reminder is issued through the wearable device to warn the deck officer, which greatly improves the safety of the deck officer when navigating the ship, avoids accidents and protects the life and property of the deck officer.
In order to realize the above embodiments, the present invention also proposes a fatigue detection system for a deck officer.
Fig. 5 is a structural schematic diagram of a fatigue detection system for a deck officer according to one embodiment of the present invention. As shown in Fig. 5, the fatigue detection system for a deck officer includes a server 10 and a wearable device 20.
Specifically, the wearable device 20 is used to collect a video stream, send the video stream to the server 10, and receive the analysis result sent by the server. The wearable device 20 may be a pair of glasses. After collecting the video images, the wearable device 20 may first preprocess them, for example by compressing the video images or setting their frame rate, so that the transfer rate of the video images can be improved while their quality requirements are still met. The wearable device 20 then sends the video images as a video stream to the image processing server 10 at the back end, where the server 10 may communicate with the wearable device 20 over a wireless network; the communication mode may include, but is not limited to, one of Wi-Fi, infrared, Bluetooth and a 3G network.
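On the wearable-device side, a rough sketch of capture, frame-rate limiting and transmission over the wireless link might look as follows; the JPEG re-encoding and the plain TCP socket are stand-ins for whatever compression and Wi-Fi/Bluetooth/3G transport an actual device would use, and the camera index is an assumption:

```python
import cv2
import socket
import struct
import time

def stream_to_server(server_ip, server_port, fps=10, jpeg_quality=70):
    camera = cv2.VideoCapture(0)                      # infrared camera on the glasses
    sock = socket.create_connection((server_ip, server_port))
    interval = 1.0 / fps
    try:
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            # compress before transmission to raise the transfer rate
            ok, buf = cv2.imencode(".jpg", frame,
                                   [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
            if ok:
                data = buf.tobytes()
                sock.sendall(struct.pack(">I", len(data)) + data)  # length-prefixed frame
            time.sleep(interval)
    finally:
        camera.release()
        sock.close()
```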
The server 10 is used to receive the video stream collected by the wearable device 20, convert multiple video frames in the video stream into multiple pieces of image information, obtain the human eye region in the multiple pieces of image information, perform fatigue analysis on the human eye region in the multiple pieces of image information to judge whether the deck officer is in a fatigued state, and send the analysis result to the wearable device 20. Specifically, after receiving the video stream collected by the wearable device 20, the server 10 backs up the video stream on the server 10. The server 10 then obtains multiple video frames from the received video stream and converts them into image information according to a threshold preset in the server 10. For example, the wearable device 20 collects 10 minutes of continuous video. Since a normal person's blink lasts about 0.2-0.4 seconds, while blinking in a fatigued state is generally slower and is a process of gradually closing the eyes, the eyes generally need at least about 1 second to go from open to closed. The server 10 can therefore set the video frame rate to 10 (i.e. FPS = 10), which is sufficient to capture the eye state in real time and yields 6000 sample images under these conditions.
The server 10 is further used, before obtaining the human eye region in the multiple pieces of image information, to preprocess the multiple pieces of image information so that the server 10 obtains image information of good quality. The preprocessing includes one or more of image denoising, histogram equalization and contrast adjustment.
The server 10 then localizes the human eye region in the multiple pieces of image information, i.e. localizes the eye position, so as to obtain from the image information exactly the part of the image that contains the eye region and to remove useless information from the image information.
In one embodiment of the present invention, the server 10 is specifically used to perform eye localization on the multiple pieces of image information according to the Adaboost algorithm based on Haar features and obtain a first eye region; to binarize the multiple pieces of image information and perform eye localization on the binarized image information according to the Adaboost algorithm based on Haar features so as to obtain a second eye region; and to judge whether the first eye region and the second eye region match and, when they match, take the first eye region or the second eye region as the human eye region in the multiple pieces of image information. Specifically, the server 10 combines the image information with the learned eye features of the deck officer and performs an initial localization of the eye using the Adaboost algorithm based on Haar features, obtaining the first eye region. The server 10 then analyzes and processes the image information using image processing techniques to obtain a binary image of the image information, and localizes the eye again on the generated binary image using the Adaboost algorithm based on Haar features, obtaining the second eye region. The server 10 then matches the first eye region against the second eye region: if the image set of the first eye region contains the image set of the second eye region, eye detection is judged successful; otherwise the piece of image information is deleted.
Furthermore, the server 10 first extracts Haar-based eye features from the image information. In the image information, eye features can be expressed as information such as coordinates, distance, color, brightness and shape. Haar features are rectangle features and can therefore be abstracted as simple graphics composed of basic geometric elements such as points, lines and rectangles. As shown in Fig. 3, Haar features can be divided into three classes: edge features, line features and center-surround (ring) features. The basic idea of Haar features is to first divide a rectangular frame into blocks and then combine the gray-level pixels of the blocks with edge features for analysis, giving a feature analysis method. A rectangular image region at a specific position in the target image can be abstracted as a Haar feature, so that the image features of the target region can be quantified. The sum of the gray pixel values in the white area of the image minus the sum of the gray pixel values in the black area gives the feature value of the covered region.
The server 10 computes the features by means of the integral image, which improves the feature-calculation speed. The integral image is a matrix representation that can describe global information and is defined as

f(x, y) = Σ_{x'≤x, y'≤y} g(x', y'),

where f(x, y) is the value of the integral image at (x, y) and g(x', y') is the gray value of the original image at (x', y'). Therefore, as shown in Fig. 4, the integral image at the point (x, y) is equal to the sum of all pixel values in the gray area above and to the left of that point.
The server 10 then identifies the position of the eye in the image information according to the Adaboost algorithm. For a captured image of 24*24 pixels, the number of Haar features that can be matched runs into the tens of thousands, of which only a small number are useful. The present invention realizes fast eye detection using the Adaboost algorithm; its basic idea is to train weak classifiers on a large training set and finally combine them into a strong classifier by algorithmic superposition.
Assume the eye-region image has k features, expressed as f_j(x_i), where 1 ≤ j ≤ k and x_i denotes the i-th sample image. The feature set of each image can then be represented as {f_1(x_i), f_2(x_i), f_3(x_i), ..., f_j(x_i), ..., f_k(x_i)}, with one weak classifier corresponding to each feature.
The server 10 composes a weak classifier h_j(x) from three parts: the feature f_j(x), a threshold θ_j and a parity p_j. One feature corresponds to one weak classifier; the classification threshold is the feature value used to classify all the rectangles, and the parity is a sign indicating the direction of the inequality. The server 10 expresses the weak classifier for the j-th feature as:

h_j(x) = 1 if p_j f_j(x) < p_j θ_j, and h_j(x) = 0 otherwise,

where h_j(x) is the value of the weak classifier, θ_j is the threshold, p_j controls the direction of the inequality and takes the value +1 or -1, and f_j(x) is the feature value.
Based on the Adaboost algorithm, the following steps are performed on n known training samples (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), where y_i ∈ {0, 1} indicates whether the corresponding sample is a true sample.
(1) Take n training samples, of which m are eye samples and l are non-eye samples, expressed as (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), where y_i = 0 and y_i = 1 correspond to eye samples and non-eye samples respectively.
(2) Initialize the error weights: w_{1,i} = 1/(2m) for samples with y_i = 0 and w_{1,i} = 1/(2l) for samples with y_i = 1.
(3) Initialize t = 1, where t ≤ T and T is the number of weak classifiers to be trained.
(4) Normalize the weights: q_i = w_{t,i} / Σ_{j=1}^{n} w_{t,j}.
(5) Train one weak classifier h(x, f, p, θ) for each feature f, compute its weighted error rate ε_f = Σ_i q_i |h(x_i, f, p, θ) − y_i|, select the classifier h_t with the smallest error ε_t, and update the weights as w_{t+1,i} = w_{t,i} β_t^{1−e_i}, where e_i = 0 if sample x_i is classified correctly, e_i = 1 if it is misclassified, and β_t = ε_t / (1 − ε_t).
(6) Let t = t + 1 and repeat step (4), until t > T.
(7) The strong classifier finally obtained is:

H(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t, and H(x) = 0 otherwise, where α_t = log(1/β_t).
In one embodiment of the present invention, the server 10 is specifically used to calculate the PERCLOS value of the eye region in the multiple pieces of image information according to the PERCLOS algorithm, compare the PERCLOS value with a fatigue-judgement threshold, and judge that the deck officer is in a fatigued state when the PERCLOS value is greater than or equal to the fatigue-judgement threshold. The server 10 calculates the PERCLOS value according to the following formula:

P(i) = (Σ_{i=1}^{N} f(i) / N) × 100%,

where N is the total number of eye-region samples in the continuous time window and f(i) = 1 if the eye is judged closed in the i-th sample and f(i) = 0 otherwise. Since the state of the eyes has a very high correlation with the degree of fatigue of the deck officer, the PERCLOS algorithm is a method of detecting fatigue by analyzing how the eyes open and close. Among its variants, the P80 standard has the highest correlation with the degree of fatigue and is the generally acknowledged "gold standard" of judgement.
After the server 10 localizes the eye region in the image information, it determines the degree of opening and closing of the eye in the eye region by image processing techniques. That is, after calculating P(i), the server 10 compares P(i) with the fatigue-judgement threshold T, where T is an empirically determined parameter obtained after a comprehensive evaluation of the ship-handling environment. If P(i) ≥ T, the eye is judged to be closed, i.e. the deck officer is judged to be in a fatigued state. If P(i) < T, the eye is judged to be open, i.e. the deck officer is judged not to be in a fatigued state. The server 10 then sends the analysis result on whether the deck officer is fatigued to the wearable device 20.
In one embodiment of the present invention, the wearable device 20 is further used to issue a warning when the server 10 judges that the deck officer is in a fatigued state. The warning includes one or more of a light prompt, a voice prompt and a vibration prompt.
The fatigue detection system for a deck officer of the embodiment of the present invention obtains video images of the deck officer through the wearable device, which avoids the influence of various objective factors, including the lighting environment, water-level fluctuation, the operating environment and the water-surface view in front of the deck officer. The front-end acquisition system based on the wearable device can collect clear video images, and the quality of the collected video images can be ensured even under adverse conditions such as ship vibration and insufficient illumination, thereby improving the reliability of the fatigue detection performed by the server.
In addition, machine vision technology is integrated into the fatigue detection method: the video images are sent to the server through the wearable device, and the server processes the video images and performs eye localization and fatigue detection. When the deck officer is in a fatigued state, a reminder is issued through the wearable device to warn the deck officer, which greatly improves the safety of the deck officer when navigating the ship, avoids accidents and protects the life and property of the deck officer.
In order to realize the above embodiments, the present invention also proposes a server.
Fig. 6 is a structural schematic diagram of a server according to one embodiment of the present invention. As shown in Fig. 6, the server includes a receiving module 110, a conversion module 120, an obtaining module 130, an analysis module 140 and a preprocessing module 150, where the obtaining module 130 includes a first acquisition unit 131, a second acquisition unit 132 and a judging unit 133, and the analysis module 140 includes a computing unit 141, a comparing unit 142 and a judging unit 143.
Specifically, the receiving module 110 is used to receive a video stream collected by a wearable device.
The conversion module 120 is used to convert multiple video frames in the video stream into multiple pieces of image information. Specifically, the conversion module 120 obtains multiple video frames from the video stream received by the receiving module 110 and converts them into image information according to a preset threshold. For example, the wearable device collects 10 minutes of continuous video. Since a normal person's blink lasts about 0.2-0.4 seconds, while blinking in a fatigued state is generally slower and is a process of gradually closing the eyes, the eyes generally need at least about 1 second to go from open to closed. The conversion module 120 can therefore set the video frame rate to 10 (i.e. FPS = 10), which is sufficient to capture the eye state in real time and yields 6000 sample images under these conditions.
The obtaining module 130 is used to obtain the human eye region in the multiple pieces of image information.
In one embodiment of the present invention, the server further includes a preprocessing module 150, which is used to preprocess the multiple pieces of image information, where the preprocessing includes one or more of image denoising, histogram equalization and contrast adjustment.
In one embodiment of the present invention, the obtaining module 130 includes a first acquisition unit 131, a second acquisition unit 132 and a judging unit 133. The first acquisition unit 131 is used to perform eye localization on the multiple pieces of image information according to the Adaboost algorithm based on Haar features and to obtain a first eye region. The second acquisition unit 132 is used to binarize the multiple pieces of image information and to perform eye localization on the binarized image information according to the Adaboost algorithm based on Haar features, so as to obtain a second eye region. The judging unit 133 is used to judge whether the first eye region and the second eye region match and, when they match, to take the first eye region and/or the second eye region as the human eye region in the multiple pieces of image information. Specifically, the first acquisition unit 131 combines the image information with the learned eye features of the deck officer and performs an initial localization of the eye using the Adaboost algorithm based on Haar features, obtaining the first eye region. The second acquisition unit 132 then analyzes and processes the image information using image processing techniques to obtain a binary image of the image information, and localizes the eye again on the generated binary image using the Adaboost algorithm based on Haar features, obtaining the second eye region. The judging unit 133 then matches the first eye region against the second eye region: if the image set of the first eye region contains the image set of the second eye region, eye detection is judged successful; otherwise the piece of image information is deleted.
Specifically, the first acquisition unit 131 and the second acquisition unit 132 first extract Haar-based eye features from the image information. In the image information, eye features can be expressed as information such as coordinates, distance, color, brightness and shape. Haar features are rectangle features and can therefore be abstracted as simple graphics composed of basic geometric elements such as points, lines and rectangles. As shown in Fig. 3, Haar features can be divided into three classes: edge features, line features and center-surround (ring) features. The basic idea of Haar features is to first divide a rectangular frame into blocks and then combine the gray-level pixels of the blocks with edge features for analysis, giving a feature analysis method. A rectangular image region at a specific position in the target image can be abstracted as a Haar feature, so that the image features of the target region can be quantified. The sum of the gray pixel values in the white area of the image minus the sum of the gray pixel values in the black area gives the feature value of the covered region.
Computing the features by means of the integral image improves the feature-calculation speed. The integral image is a matrix representation that can describe global information and is defined as

f(x, y) = Σ_{x'≤x, y'≤y} g(x', y'),

where f(x, y) is the value of the integral image at (x, y) and g(x', y') is the gray value of the original image at (x', y'). Therefore, as shown in Fig. 4, the integral image at the point (x, y) is equal to the sum of all pixel values in the gray area above and to the left of that point.
The first acquisition unit 131 and the second acquisition unit 132 then identify the position of the eye in the image information according to the Adaboost algorithm. For a captured image of 24*24 pixels, the number of Haar features that can be matched runs into the tens of thousands, of which only a small number are useful. The present invention realizes fast eye detection using the Adaboost algorithm; its basic idea is to train weak classifiers on a large training set and finally combine them into a strong classifier by algorithmic superposition.
Assume the eye-region image has k features, expressed as f_j(x_i), where 1 ≤ j ≤ k and x_i denotes the i-th sample image. The feature set of each image can then be represented as {f_1(x_i), f_2(x_i), f_3(x_i), ..., f_j(x_i), ..., f_k(x_i)}, with one weak classifier corresponding to each feature.
The first acquisition unit 131 and the second acquisition unit 132 compose a weak classifier h_j(x) from three parts: the feature f_j(x), a threshold θ_j and a parity p_j. One feature corresponds to one weak classifier; the classification threshold is the feature value used to classify all the rectangles, and the parity is a sign indicating the direction of the inequality. The server expresses the weak classifier for the j-th feature as:

h_j(x) = 1 if p_j f_j(x) < p_j θ_j, and h_j(x) = 0 otherwise,

where h_j(x) is the value of the weak classifier, θ_j is the threshold, p_j controls the direction of the inequality and takes the value +1 or -1, and f_j(x) is the feature value.
Based on the Adaboost algorithm, the following steps are performed on n known training samples (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), where y_i ∈ {0, 1} indicates whether the corresponding sample is a true sample.
(1) Take n training samples, of which m are eye samples and l are non-eye samples, expressed as (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), where y_i = 0 and y_i = 1 correspond to eye samples and non-eye samples respectively.
(2) Initialize the error weights: w_{1,i} = 1/(2m) for samples with y_i = 0 and w_{1,i} = 1/(2l) for samples with y_i = 1.
(3) Initialize t = 1, where t ≤ T and T is the number of weak classifiers to be trained.
(4) Normalize the weights: q_i = w_{t,i} / Σ_{j=1}^{n} w_{t,j}.
(5) Train one weak classifier h(x, f, p, θ) for each feature f, compute its weighted error rate ε_f = Σ_i q_i |h(x_i, f, p, θ) − y_i|, select the classifier h_t with the smallest error ε_t, and update the weights as w_{t+1,i} = w_{t,i} β_t^{1−e_i}, where e_i = 0 if sample x_i is classified correctly, e_i = 1 if it is misclassified, and β_t = ε_t / (1 − ε_t).
(6) Let t = t + 1 and repeat step (4), until t > T.
(7) The strong classifier finally obtained is:

H(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t, and H(x) = 0 otherwise, where α_t = log(1/β_t).
The analysis module 140 is used to perform fatigue analysis on the human eye region in the multiple pieces of image information to judge whether the deck officer is in a fatigued state, and to send the analysis result to the wearable device.
In one embodiment of the present invention, the analysis module 140 includes a computing unit 141, a comparing unit 142 and a judging unit 143. The computing unit 141 is used to calculate the PERCLOS value of the eye region in the multiple pieces of image information according to the PERCLOS algorithm; it calculates the PERCLOS value according to the following formula:

P(i) = (Σ_{i=1}^{N} f(i) / N) × 100%,

where N is the total number of eye-region samples in the continuous time window and f(i) = 1 if the eye is judged closed in the i-th sample and f(i) = 0 otherwise. The comparing unit 142 is used to compare the PERCLOS value with the fatigue-judgement threshold. The judging unit 143 is used to judge that the deck officer is in a fatigued state when the PERCLOS value is greater than or equal to the fatigue-judgement threshold.
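A skeletal sketch of how the modules named above could be wired together; the class and method names simply mirror Fig. 6 and are assumptions for illustration, with each injected module object expected to provide the method used on it:

```python
class Server:
    """Wires the receiving, conversion, preprocessing, obtaining and analysis modules."""

    def __init__(self, receiving, conversion, preprocessing, obtaining, analysis):
        self.receiving = receiving          # receives the wearable device's video stream
        self.conversion = conversion        # video frames -> image information
        self.preprocessing = preprocessing  # denoising / equalization / contrast
        self.obtaining = obtaining          # first + second eye localization and matching
        self.analysis = analysis            # PERCLOS computation and threshold decision

    def handle_stream(self, stream):
        frames = self.conversion.to_images(self.receiving.receive(stream))
        eye_regions = []
        for frame in frames:
            region = self.obtaining.locate(self.preprocessing.run(frame))
            if region is not None:
                eye_regions.append(region)
        fatigued = self.analysis.judge(eye_regions)
        return {"fatigued": fatigued}       # result sent back to the wearable device
```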
The server of the embodiment of the present invention processes the video images and performs eye localization and fatigue detection, and when the deck officer is in a fatigued state a reminder is issued through the wearable device to warn the deck officer, which greatly improves the safety of the deck officer when navigating the ship, avoids accidents and protects the life and property of the deck officer.
It should be understood that each part of the present invention may be realized by hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be realized by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if realized by hardware, as in another embodiment, they may be realized by any one of the following techniques known in the art or a combination thereof: a discrete logic circuit having logic gate circuits for realizing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
In the present invention, unless otherwise specifically defined or limited, terms such as "mounted", "connected" and "coupled" should be understood broadly; for example, a connection may be a fixed connection, a detachable connection or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediary, or an internal communication between two elements or an interaction between two elements, unless otherwise expressly limited. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" mean that a specific feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine and group the features of the different embodiments or examples described in this specification provided they do not contradict each other.
Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention, and those of ordinary skill in the art may make changes, modifications, replacements and variations to the above embodiments within the scope of the present invention.
Claims (18)
1. A fatigue detection method for a deck officer, characterized by comprising the following steps:
receiving a video stream collected by a wearable device;
converting multiple video frames in the video stream into multiple pieces of image information;
obtaining the human eye region in the multiple pieces of image information, wherein eye localization is performed on the multiple pieces of image information according to the Adaboost algorithm based on Haar features and a first eye region is obtained; the multiple pieces of image information are binarized, and eye localization is performed on the binarized image information according to the Adaboost algorithm based on Haar features so as to obtain a second eye region; and it is judged whether the first eye region and the second eye region match, and when they match the first eye region and/or the second eye region is taken as the human eye region in the multiple pieces of image information; and
performing fatigue analysis on the human eye region in the multiple pieces of image information to judge whether the deck officer is in a fatigued state, and sending the analysis result to the wearable device.
2. The fatigue detection method for a deck officer according to claim 1, characterized in that performing fatigue analysis on the human eye region to judge whether the deck officer is in a fatigued state specifically comprises:
calculating the PERCLOS value of the human eye region in the multiple pieces of image information according to the PERCLOS algorithm, comparing the PERCLOS value with a fatigue-judgement threshold, and judging that the deck officer is in a fatigued state when the PERCLOS value is greater than or equal to the fatigue-judgement threshold.
3. The fatigue detection method for a deck officer according to claim 2, characterized in that the PERCLOS value is calculated according to the following formula:
P(i) = (Σ_{i=1}^{N} f(i) / N) × 100%,
wherein N is the total number of eye-region samples in the continuous time window and f(i) = 1 if the eye is judged closed in the i-th sample and f(i) = 0 otherwise.
4. The fatigue detection method for a deck officer according to claim 1, characterized in that, after the analysis result is sent to the wearable device, the method further comprises:
when it is judged that the deck officer is in a fatigued state, the wearable device issues a warning.
5. The fatigue detection method for a deck officer according to claim 4, characterized in that the warning comprises one or more of a light prompt, a voice prompt and a vibration prompt.
6. The fatigue detection method for a deck officer according to claim 1, characterized in that, before obtaining the human eye region in the multiple pieces of image information, the method further comprises:
preprocessing the multiple pieces of image information, wherein the preprocessing comprises one or more of image denoising, histogram equalization and contrast adjustment.
7. The fatigue detection method for a deck officer according to any one of claims 1 to 6, characterized in that the wearable device is a pair of glasses.
8. A fatigue detection system for a deck officer, characterized by comprising a server and a wearable device, wherein
the wearable device is used to collect a video stream, send the video stream to the server, and receive the analysis result sent by the server; and
the server is used to receive the video stream collected by the wearable device, convert multiple video frames in the video stream into multiple pieces of image information, obtain the human eye region in the multiple pieces of image information, perform fatigue analysis on the human eye region in the multiple pieces of image information to judge whether the deck officer is in a fatigued state, and send the analysis result to the wearable device;
wherein the server is specifically used to perform eye localization on the multiple pieces of image information according to the Adaboost algorithm based on Haar features and obtain a first eye region, to binarize the multiple pieces of image information and perform eye localization on the binarized image information according to the Adaboost algorithm based on Haar features so as to obtain a second eye region, and to judge whether the first eye region and the second eye region match and, when they match, take the first eye region or the second eye region as the human eye region in the multiple pieces of image information.
9. The fatigue detecting system of a deck officer as claimed in claim 8, characterized in that the server is specifically configured to:
calculate the PERCLOS value of the human eye area in the multiple images information according to the PERCLOS algorithm, compare the PERCLOS value with a fatigue degree discrimination threshold, and judge that the deck officer is in a state of fatigue when the PERCLOS value is greater than or equal to the fatigue degree discrimination threshold.
10. The fatigue detecting system of a deck officer as claimed in claim 9, characterized in that the server calculates the PERCLOS value according to the following formula:
(the formula is reproduced only as an image in the original publication)
where N is the total number of human eye area samples within a continuous time period.
11. The fatigue detecting system of a deck officer as claimed in claim 8, characterized in that the wearable device is further configured to:
issue a warning prompt when the server judges that the deck officer is in a state of fatigue.
12. The fatigue detecting system of a deck officer as claimed in claim 11, characterized in that the warning prompt comprises one or more of a light prompt, a voice prompt and a vibration prompt.
13. The fatigue detecting system of a deck officer as claimed in claim 8, characterized in that the server is further configured to:
preprocess the multiple images information, wherein the preprocessing comprises one or more of image denoising, equalization and contrast processing.
14. The fatigue detecting system of a deck officer as claimed in any one of claims 8 to 13, characterized in that the wearable device is a pair of glasses.
15. A server, characterized by comprising:
a receiving module, configured to receive a video stream acquired by a wearable device;
a conversion module, configured to convert multiple video frames in the video stream into multiple images information;
an acquisition module, configured to obtain the human eye area in the multiple images information, wherein the acquisition module comprises a first acquisition unit, a second acquisition unit and a judging unit; the first acquisition unit is configured to carry out human eye positioning on the multiple images information according to an Adaboost algorithm based on Haar features to obtain a first human eye area; the second acquisition unit is configured to carry out binarization processing on the multiple images information and carry out human eye positioning on the binarized image information according to the Adaboost algorithm based on Haar features to obtain a second human eye area; the judging unit is configured to judge whether the first human eye area and the second human eye area match and, when they match, take the first human eye area and/or the second human eye area as the human eye area in the multiple images information; and
an analysis module, configured to carry out fatigue analysis on the human eye area in the multiple images information to judge whether the deck officer is in a state of fatigue, and to send the analysis result to the wearable device.
16. The server as claimed in claim 15, characterized in that the analysis module comprises:
a computing unit, configured to calculate the PERCLOS value of the human eye area in the multiple images information according to the PERCLOS algorithm;
a comparing unit, configured to compare the PERCLOS value with a fatigue degree discrimination threshold; and
a judging unit, configured to judge that the deck officer is in a state of fatigue when the PERCLOS value is greater than or equal to the fatigue degree discrimination threshold.
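As a minimal sketch of the compute/compare/judge units above (not the patented implementation), the fragment below estimates PERCLOS over a window of per-frame eye-closure flags and compares it with a fixed discrimination threshold; the 0.4 threshold and the boolean-flag representation are assumptions made only for illustration:

```python
def perclos(eye_closed_flags):
    """PERCLOS over a sampling window: the fraction of samples with the eye closed.
    `eye_closed_flags` is a sequence of booleans, one per sampled frame."""
    n = len(eye_closed_flags)
    return sum(eye_closed_flags) / n if n else 0.0

def is_fatigued(eye_closed_flags, threshold=0.4):
    """Judge fatigue by comparing the PERCLOS value with a discrimination threshold."""
    return perclos(eye_closed_flags) >= threshold

# Usage: flags collected from consecutive frames of the video stream.
window = [False, False, True, True, True, False, True, True, True, True]
print(is_fatigued(window))  # True for this example window (PERCLOS = 0.7)
```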
17. The server as claimed in claim 16, characterized in that the computing unit calculates the PERCLOS value according to the following formula:
(the formula is reproduced only as an image in the original publication)
where N is the total number of human eye area samples within a continuous time period.
18. The server as claimed in claim 15, characterized by further comprising:
a preprocessing module, configured to preprocess the multiple images information, wherein the preprocessing comprises one or more of image denoising, equalization and contrast processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510279711.1A CN106295474B (en) | 2015-05-28 | 2015-05-28 | Fatigue detection method, system and the server of deck officer |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106295474A CN106295474A (en) | 2017-01-04 |
CN106295474B true CN106295474B (en) | 2019-03-22 |
Family
ID=57634266
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---|
CN201510279711.1A Active CN106295474B (en) | 2015-05-28 | 2015-05-28 | Fatigue detection method, system and the server of deck officer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106295474B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108304764B (en) * | 2017-04-24 | 2021-12-24 | 中国民用航空局民用航空医学中心 | Fatigue state detection device and detection method in simulated flight driving process |
CN109407609A (en) * | 2018-12-05 | 2019-03-01 | 江苏永钢集团有限公司 | A kind of facility information point detection system |
CN110063736B (en) * | 2019-05-06 | 2022-03-08 | 苏州国科视清医疗科技有限公司 | Eye movement parameter monitoring fatigue detection and wake-up promotion system based on MOD-Net network |
CN111353636A (en) * | 2020-02-24 | 2020-06-30 | 交通运输部水运科学研究所 | Multi-mode data based ship driving behavior prediction method and system |
CN113947869B (en) * | 2021-10-18 | 2023-09-01 | 广州海事科技有限公司 | Alarm method, system, computer equipment and medium based on ship driving state |
CN114537612A (en) * | 2021-12-31 | 2022-05-27 | 武汉理工大学 | Fatigue detection device and method for crew on duty at ship bridge |
CN114663964A (en) * | 2022-05-24 | 2022-06-24 | 武汉理工大学 | Ship remote driving behavior state monitoring and early warning method and system and storage medium |
CN116824555A (en) * | 2023-06-14 | 2023-09-29 | 交通运输部水运科学研究所 | Monitoring method and system for fatigue degree of crewman during sailing |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102324166B (en) * | 2011-09-19 | 2013-06-12 | 深圳市汉华安道科技有限责任公司 | Fatigue driving detection method and device |
CN103093215B (en) * | 2013-02-01 | 2016-12-28 | 北京天诚盛业科技有限公司 | Human-eye positioning method and device |
CN104269028B (en) * | 2014-10-23 | 2017-02-01 | 深圳大学 | Fatigue driving detection method and system |
Also Published As
Publication number | Publication date |
---|---|
CN106295474A (en) | 2017-01-04 |
Similar Documents
Publication | Title | Publication Date
---|---|---|
CN106295474B (en) | Fatigue detection method, system and the server of deck officer | |
CN101593425B (en) | Machine vision based fatigue driving monitoring method and system | |
CN104637246B (en) | Driver multi-behavior early warning system and danger evaluation method | |
CN103440475B (en) | A kind of ATM user face visibility judge system and method | |
CN202257856U (en) | Driver fatigue-driving monitoring device | |
CN109389806A (en) | Fatigue driving detection method for early warning, system and medium based on multi-information fusion | |
CN108647582A (en) | Goal behavior identification and prediction technique under a kind of complex dynamic environment | |
CN202130312U (en) | Driver fatigue driving monitoring device | |
CN109460699A (en) | A kind of pilot harness's wearing recognition methods based on deep learning | |
CN106156688A (en) | A kind of dynamic human face recognition methods and system | |
CN104013414A (en) | Driver fatigue detecting system based on smart mobile phone | |
CN103942850A (en) | Medical staff on-duty monitoring method based on video analysis and RFID (radio frequency identification) technology | |
Du et al. | A multimodal fusion fatigue driving detection method based on heart rate and PERCLOS | |
CN103366506A (en) | Device and method for automatically monitoring telephone call behavior of driver when driving | |
CN101950355A (en) | Method for detecting fatigue state of driver based on digital video | |
CN102752458A (en) | Driver fatigue detection mobile phone and unit | |
CN102085099A (en) | Method and device for detecting fatigue driving | |
CN105844245A (en) | Fake face detecting method and system for realizing same | |
CN108108651B (en) | Method and system for detecting driver non-attentive driving based on video face analysis | |
CN109190475A (en) | A kind of recognition of face network and pedestrian identify network cooperating training method again | |
CN105117681A (en) | Multi-characteristic fatigue real-time detection method based on Android | |
CN116883946B (en) | Method, device, equipment and storage medium for detecting abnormal behaviors of old people in real time | |
CN109002774A (en) | A kind of fatigue monitoring device and method based on convolutional neural networks | |
CN108960216A (en) | A kind of detection of dynamic human face and recognition methods | |
CN113140093A (en) | Fatigue driving detection method based on AdaBoost algorithm |
Legal Events
Code | Title | Date | Description
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||