Summary of the invention
The object of the present invention is to provide a health protection device that comprehensively safeguards a person's health while the person is facing an electronic device such as a display.
According to a first aspect, the invention provides a health protection device, comprising:
an image capture module for capturing video images;
an image analysis module, which obtains human-eye information from the images by automated image analysis;
a judgment driving module, which receives the human-eye information from the image analysis module, judges the person's health status according to the human-eye information, and determines from that health status whether a drive signal needs to be produced; and
a prompting module, which produces prompt information when driven by said drive signal.
Preferably, the image capture module is a camera, more preferably a low-resolution CMOS camera.
In a preferred embodiment, the image analysis module comprises a face detection and tracking unit, which detects and tracks the faces in the images using a face detection and tracking method, and an eye locating and tracking unit, which locates and tracks the eyes in images in which a face has been detected, using an eye locating and tracking method.
Preferably, the face detection and tracking unit performs face detection with a classifier built from Haar-like microstructure features selected by training with a cascaded adaptive boosting (AdaBoost) algorithm.
In one embodiment, the eye locating and tracking unit locates and tracks the eyes in an image as follows:
on the basis of the obtained face position information, determining a single-eye search region by statistical methods, and determining candidate single-eye positions within that region;
discriminating the possible single-eye positions with a single-eye local feature detector, and assigning each single-eye candidate position a single-eye similarity value according to the discrimination result;
pairing the single-eye candidate positions into eye-pair candidates on the basis of their similarity values;
discriminating each eye-pair region with an eye-pair region detector serving as a global constraint, and assigning each eye-pair candidate an eye-pair similarity value; and
obtaining the final left-eye and right-eye feature point positions according to the eye-pair similarity values.
Preferably, the human-eye information obtained by the image analysis module comprises the eye presence time and/or the eye position.
In one embodiment, the judgment driving module comprises a viewing-time judgment unit, which judges from the eye presence time whether a drive signal needs to be produced.
In another embodiment, the judgment driving module comprises a forward-tilt-angle judgment unit, which judges from the eye position whether a drive signal needs to be produced.
In yet another embodiment, the judgment driving module comprises a side-tilt-angle judgment unit, which judges from the eye position whether a drive signal needs to be produced.
The image analysis module and the judgment driving module of the present invention can be implemented on a PC, a single-chip microcomputer, or an industrial computer.
In a preferred embodiment, the judgment driving module produces different drive signals for different health conditions, and the prompting module preferably produces different prompt information for different drive signals.
According to a second aspect, the present invention also provides a method of health protection, the method comprising:
capturing images;
obtaining human-eye information from the images by automated image analysis;
judging a person's health status according to the human-eye information, and determining from that health status whether a drive signal needs to be produced; and
producing prompt information when driven by the drive signal.
The human-eye information preferably includes the eye presence time and/or the eye position. Correspondingly, the step of judging the health status comprises judging whether the eye presence time is within a preset range, and/or judging from the eye position whether the forward-tilt angle and/or the side-tilt angle is within a preset range, and thereby determining whether a drive signal needs to be produced.
Preferably, different drive signals drive the production of different prompt information.
The health protection device and method provided by the invention can judge a person's health status comprehensively, and thus safeguard the user's health more thoroughly.
Embodiment
Fig. 1 is a structural diagram of the health protection device of the present invention. As shown in Fig. 1, the device comprises an image capture module, an image analysis module, a judgment driving module, and a prompting module.
The image capture module captures images of the scene in front of the display and sends them to the image analysis module. Any device capable of obtaining images may be used; considering cost, a camera can be adopted. Preferably, in one embodiment, the image capture module is a low-resolution CMOS camera, and more preferably the camera lens is made of plastic or another inexpensive transparent material to reduce cost further. After capturing an image, the camera passes it to the image analysis module.
The image analysis module obtains the human-eye information in the image using face detection and tracking and eye locating and tracking methods, and passes it to the judgment driving module.
The judgment driving module measures the person's viewing time and body angles from the human-eye information supplied by the analysis module, and judges whether they conform to healthy habits. When an unhealthy condition is found, it produces a drive signal that drives the prompting module to produce prompt information.
The image analysis module and judgment driving module can be implemented on a personal computer (PC), or on any embedded platform such as a single-chip microcomputer or an industrial computer.
In one embodiment, the image analysis module comprises a face detection and tracking unit and an eye locating and tracking unit, which implement the face detection/tracking and eye locating/tracking functions respectively.
Many mature face detection and tracking methods exist in the prior art, and those skilled in the art can select among the popular methods as required for use in the face detection and tracking unit of the present invention.
In one embodiment, the face detection and tracking unit is implemented using the method for real-time detection and continuous tracking of faces in a video sequence provided in Chinese patent application 200510135668.8 (publication No. CN1794264), the disclosure of which is incorporated into this specification by reference. In this scheme, the face detection and tracking unit first searches the image to detect whether any face appears, and then tracks each face whose appearance has been confirmed.
Following the method provided in the above patent application, the face detection and tracking unit of the present embodiment performs face detection with a classifier built from Haar-like microstructure features selected by cascaded AdaBoost training. Specifically, a statistical face detection model is trained under the AdaBoost framework: microstructure features similar to Haar wavelets express the face pattern and, combined with the AdaBoost method, form a feature selection approach in which several single-feature weak classifiers are combined into a strong classifier, and several strong classifiers are in turn combined into a complete cascaded face detection classifier. This cascaded multi-stage classifier makes a preliminary judgment on whether a face appears in each frame. For the detailed face detection method, see P. Viola and M. Jones, "Robust Real-Time Object Detection", IEEE ICCV Workshop on Statistical and Computational Theories of Vision, Vancouver, Canada, July 13, 2001.
After a face has been preliminarily detected, it is pre-tracked over the following n frames, and face detection verification is performed on the tracked face in those frames to judge whether the preliminary detection is a genuine face. Here n may equal 1 or be greater than 1.
Fig. 2 shows the process by which the above face detection and tracking unit processes an image. After an image is obtained by the camera in step 100, it is transferred to the image analysis module, where the face detection and tracking unit searches it. First, in step 200, the cascaded multi-stage classifier performs preliminary detection on the image, and the detection result is judged in step 202. If no face is found in the image, the process returns to step 200 to detect the next frame. If one or more faces are detected, the process enters step 204, where the detected faces are pre-tracked over the following n frames. The pre-tracking result is judged in step 206: if within the following n frames no face reappears at the original position, it is concluded that no genuine face is present, and the process returns to step 200 to perform preliminary detection on subsequent images again; if a face appears continuously at a given position for all n frames, the face is confirmed, and the process enters step 300 to begin tracking it.
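The confirm-over-n-frames flow of Fig. 2 can be sketched as a small state machine. This is a minimal illustration only; the function name and the per-frame boolean interface are assumptions, not part of the patented method:

```python
def confirm_face(detections, n):
    """Illustrative sketch of the Fig. 2 flow: a face is confirmed only
    after it is first detected (steps 200/202) and then reappears at the
    same position in each of the following n frames (steps 204/206).

    `detections` is a per-frame sequence of booleans meaning "a face was
    found at the candidate position in this frame" (an assumed interface).
    Returns the frame index at which tracking (step 300) would begin,
    or None if no face is ever confirmed.
    """
    i = 0
    while i < len(detections):
        if not detections[i]:              # step 202: no face -> next frame
            i += 1
            continue
        # steps 204/206: pre-track over the following n frames
        window = detections[i + 1:i + 1 + n]
        if len(window) == n and all(window):
            return i + n                   # confirmed: begin tracking (step 300)
        i += 1                             # pre-tracking failed: resume search
    return None
```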
During tracking, the mean shift algorithm is used to obtain the match in the next frame and its similarity to the face image in the previous frame. If the similarity is below a certain threshold, the face is considered lost; if it is above the threshold, the face is considered tracked. To further avoid drifting onto the background, detection verification is performed on the tracked face every p frames; if the presence of a face cannot be verified q consecutive times, the tracker is considered to have drifted onto the background, the tracking process ends, and the process returns to step 200 to restart full-image detection. Here p and q are integers greater than zero; preferably p is 2-10 and q is 3-8.
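The threshold-and-verify bookkeeping of this tracking stage can be sketched as follows. Mean shift matching is abstracted here as a per-frame similarity score and detection verification as a boolean callback; the class name and string return values are illustrative assumptions:

```python
class TrackLoop:
    """Sketch of the tracking stage: stop when similarity drops below a
    threshold, and restart full detection after q consecutive failed
    verifications, verification being run every p frames."""

    def __init__(self, sim_thresh, p, q):
        self.sim_thresh = sim_thresh
        self.p = p              # verify every p frames (preferably 2-10)
        self.q = q              # give up after q consecutive failures (3-8)
        self.frame = 0
        self.failures = 0

    def step(self, similarity, verify_face):
        """Process one frame; return 'tracking', 'lost', or 'background'.
        `verify_face()` is called only on verification frames and returns
        True when face detection validation succeeds."""
        self.frame += 1
        if similarity < self.sim_thresh:
            return "lost"                   # below threshold: face not tracked
        if self.frame % self.p == 0:        # periodic detection verification
            if verify_face():
                self.failures = 0
            else:
                self.failures += 1
                if self.failures >= self.q:
                    return "background"     # drifted: restart full-image detection
        return "tracking"
```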
One feasible detection verification embodiment is as follows. Suppose the currently tracked face region is R(x, y, W, H), where x is the abscissa of the face center, y is the ordinate of the face center, W is the face region width, and H is the face region height. A search region SR(x, y, SW, SH) is set, where SW is the search region width, SW = W*SSR; SH is the search region height, SH = H*SSR; and SSR is a preset constant, generally between 0.5 and 2.0. The face width search range is [W*U1, W*U2], where U1 and U2 are constants, U1 between 0 and 1.0 and U2 between 1.0 and 2.0. The face detection model is then applied within region SR to detect faces whose width lies in [W*U1, W*U2]. If such a face is detected, the tracking result passes face detection verification; otherwise it fails.
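The geometry of this verification window can be written out directly. The default constants below are illustrative values within the stated ranges, not values prescribed by the invention:

```python
def verification_search_window(x, y, W, H, SSR=1.5, U1=0.8, U2=1.2):
    """Sketch of the detection-verification geometry described above:
    from the tracked face region R(x, y, W, H), build the search region
    SR(x, y, SW, SH) and the admissible face-width range [W*U1, W*U2].
    SSR (0.5-2.0), U1 (0-1.0) and U2 (1.0-2.0) default to assumed
    example values."""
    SW = W * SSR                      # search region width
    SH = H * SSR                      # search region height
    width_range = (W * U1, W * U2)    # admissible detected-face widths
    return (x, y, SW, SH), width_range
```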
If several faces were detected in the detection step, the face detection and tracking unit assigns each face a unique ID according to its region, and tracks each face. If a face is tracked in the next frame, the end time recorded for that face is updated to the current time; if it is not tracked, its end time is not updated. In this way, the face detection and tracking unit can derive, for each ID, the duration for which the corresponding person's face has been present.
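The per-ID bookkeeping just described amounts to a small ledger of start and end times. The class and method names below are assumptions made for illustration:

```python
class FaceLedger:
    """Sketch of the per-ID presence bookkeeping: each tracked face gets
    a unique ID with a start time; its end time is updated only on frames
    where it is detected or tracked, so presence duration = end - start."""

    def __init__(self):
        self._next_id = 0
        self.faces = {}                 # id -> [start_time, end_time]

    def new_face(self, t):
        """Register a newly detected face at time t and return its ID."""
        fid = self._next_id
        self._next_id += 1
        self.faces[fid] = [t, t]
        return fid

    def seen(self, fid, t):
        """Face fid was tracked in this frame: update its end time."""
        self.faces[fid][1] = t

    def presence(self, fid):
        """Presence duration = end time minus start time."""
        start, end = self.faces[fid]
        return end - start
```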
On the basis of the face position detected by the face detection and tracking unit, the eye locating and tracking unit locates and tracks the eyes in the image containing the detected face. Specifically, after step 206 of Fig. 2 concludes that a face has indeed appeared, while the face is being tracked, the eye locating and tracking method is used to locate and track the eyes and analyze the eye information.
Many facial feature locating and recognition methods exist in the prior art, such as feature extraction from frontal face images based on geometric features, facial feature extraction based on prior templates, and facial feature point tracking based on the KLT (Kanade-Lucas-Tomasi) method. Those of ordinary skill in the art can select a suitable method from the existing technology for the eye locating and tracking unit, to locate, recognize, and track the eyes among the facial features.
In a preferred embodiment, the eye locating and tracking unit uses the eye locating method provided in Chinese patent application 200610011673.2 (publication No. CN1822024), the disclosure of which is incorporated into this specification by reference. Fig. 3 is a flowchart of the eye locating process performed on the image by the eye locating and tracking unit according to this method.
As shown in Fig. 3, first, in step 301, on the basis of the face position information obtained from detection in the face image, statistical methods are used to determine a left-eye search region and a right-eye search region, and candidate left-eye and right-eye positions are determined within those regions respectively.
In step 302, a single-eye local feature detector discriminates all possible single-eye positions and assigns each single-eye candidate position a single-eye similarity value according to the discrimination result. That is, within the left-eye and right-eye search regions, a left-eye local feature detector and a right-eye local feature detector respectively discriminate the determined left-eye and right-eye regions, and a left-eye similarity value and a right-eye similarity value are determined for each candidate left-eye and right-eye position.
In step 303, the single-eye candidate positions are paired into eye-pair candidates on the basis of their similarity values. Specifically, from all candidate left-eye and right-eye positions, the N_1 positions with the largest similarity values are selected as the left-eye candidates and right-eye candidates respectively; all left-eye and right-eye candidates are then paired into eye-pair candidates, and an eye-pair region is determined with each candidate pair as reference. The value of N_1 can be preset by the system of the eye locating and tracking unit.
In step 304, an eye-pair region detector serving as a global constraint discriminates each eye-pair region and assigns each eye-pair candidate an eye-pair similarity value.
Finally, in step 305, the final left-eye and right-eye feature point positions are obtained from the eye-pair similarity values. The usual approach is to select the M_1 eye-pair candidates with the largest eye-pair similarity values, and to average all the left-eye candidate positions and all the right-eye candidate positions among them separately, giving the left-eye and right-eye feature point positions. The value of M_1 can likewise be preset by the system.
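Steps 303-305 can be sketched as a top-N pairing-and-averaging routine. The eye-pair region detector is abstracted here as a caller-supplied scoring function, and all names and data shapes are illustrative assumptions rather than the patented detectors:

```python
def locate_eyes(left_cands, right_cands, pair_score, N1=3, M1=2):
    """Sketch of steps 303-305: keep the N1 best single-eye candidates
    per side, pair them, score each pair with a global eye-pair scorer
    (`pair_score`, an assumed stand-in for the eye-pair region detector),
    keep the M1 best pairs, and average their left/right positions to
    get the final feature points.

    Candidates are ((x, y), similarity) tuples; returns (left, right)
    feature point coordinates.
    """
    def top(cands):
        ranked = sorted(cands, key=lambda c: c[1], reverse=True)[:N1]
        return [pos for pos, _ in ranked]

    lefts, rights = top(left_cands), top(right_cands)
    pairs = [(l, r) for l in lefts for r in rights]      # all candidate pairs
    best = sorted(pairs, key=pair_score, reverse=True)[:M1]

    def avg(pts):
        return (sum(x for x, _ in pts) / len(pts),
                sum(y for _, y in pts) / len(pts))

    return avg([l for l, _ in best]), avg([r for _, r in best])
```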
Once the eye feature points have been located, the eyes can be tracked in subsequent images.
Using the above face detection and tracking and eye locating and tracking methods, the image analysis module can analyze the face and eye information in the images and derive the required human-eye information. In one embodiment, this information comprises the eye presence time and the eye position. Because the eye presence time equals the face presence time, the face presence time t can be obtained, during the face tracking performed by the face detection and tracking unit described above, by subtracting the face start time from the face end time, and this value can be taken as the eye presence time. Alternatively, the eye presence time can be obtained by tracking after the eye locating and tracking unit has located the eyes in the image. One embodiment of computing a face's presence time is: when a face is detected for the first time, the current time is set as its start time; whenever the face is detected or tracked again, its end time is updated to the current time; the presence time of the face is then its end time minus its start time. An eye's presence time can be computed in the same way: when an eye is detected for the first time, the current time is set as its start time; whenever it is detected or tracked again, its end time is updated to the current time; its presence time is then its end time minus its start time.
The eye position information comprises the distance d between the eyes and the camera or display, and the coordinates of the left and right eyes in the captured image. The distance d between the eyes and the display can be derived by analyzing the captured image using prior art, for example the method of computing d disclosed in Chinese patent publication No. CN101033955. Fig. 4 illustrates the coordinates of the eyes located in the obtained image. In the coordinate system established as shown in the figure, the coordinates of the left and right eyes are (x_l, y_l) and (x_r, y_r) respectively (the eye on the left of the figure corresponds to the person's real right eye; the left/right labels follow the person's real eyes).
After deriving the human-eye information, the image analysis module sends it to the judgment driving module, which judges the person's health status accordingly. In one embodiment, the judgment driving module comprises a viewing-time judgment unit, a forward-tilt-angle judgment unit, and a side-tilt-angle judgment unit. In other embodiments, the judgment driving module can comprise one or more of these units as required.
The viewing-time judgment unit compares the eye presence time t with a threshold T. If t ≥ T, the viewing time is considered excessive, and a viewing-time warning drive signal is sent to the prompting module; if t < T, no signal is driven. The threshold T can be preset by the system, or set or changed by the user as required. If the image analysis module has detected and tracked several faces and passes several eye presence times to the judgment driving module, the viewing-time judgment unit can compare each eye presence time with the threshold T separately, and send a drive signal as long as any one of them exceeds T.
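The viewing-time rule, including the multi-face case, reduces to a one-line comparison. Returning a string for the drive signal is an illustrative convention only:

```python
def viewing_time_signal(presence_times, T):
    """Sketch of the viewing-time judgment unit: compare each tracked
    person's eye presence time t with threshold T and emit a warning
    drive signal if any t >= T; otherwise emit nothing."""
    if any(t >= T for t in presence_times):
        return "VIEWING_TIME_WARNING"
    return None
```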
The forward-tilt-angle judgment unit judges, from the eye position coordinates derived by the image analysis module, whether the person's forward-tilt angle in front of the display screen is within a healthy range. Fig. 5 is a schematic diagram of the forward-tilt angle. As shown in the figure, point E is the projection of the eye position onto the display plane, d is the distance between the eyes and the display, and y_E is the vertical distance of the eyes on the display plane. Assuming, as is normally the case, that the imaging plane of the image capture module's imaging device is parallel to the display and that their relative position stays fixed, y_E can be taken as approximately proportional to the ordinate of the eyes in the captured image; referring to Fig. 4, y_E is proportional to (y_l + y_r)/2. When the imaging plane and the display are not parallel, an approximation can still be obtained. Since the distance d between the eyes and the screen is known from the analysis result of the image analysis module, the forward-tilt angle q_EM of the eyes viewing the display satisfies:

tan(q_EM) = (y_l + y_r)/(2d)
q_EM = arctan((y_l + y_r)/(2d))    Formula (1)
In one embodiment, the forward-tilt-angle judgment unit computes the forward-tilt angle q_EM according to Formula (1) and judges whether it lies within the healthy forward-tilt range [q_min, q_max]. If q_EM > q_max, the forward-tilt angle is too large, meaning the user's eye position is too high and needs to be lowered; if q_EM < q_min, the user's eye position is too low and needs to be raised. The healthy minimum q_min and maximum q_max can be set according to research conclusions on health, or set by the user according to personal habits.
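The forward-tilt judgment based on Formula (1) can be sketched directly. Treating the angles in radians and returning strings are illustrative assumptions:

```python
import math

def forward_tilt_check(y_l, y_r, d, q_min, q_max):
    """Sketch of the forward-tilt judgment using Formula (1):
    q_EM = arctan((y_l + y_r)/(2d)), compared with the healthy range
    [q_min, q_max] (radians here, by assumption)."""
    q_EM = math.atan((y_l + y_r) / (2 * d))
    if q_EM > q_max:
        return "too high"   # eye position too high: lower it
    if q_EM < q_min:
        return "too low"    # eye position too low: raise it
    return "ok"
```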
In another embodiment, Formula (1) is simplified. For small angles, it is approximately:

q_EM ≈ tan(q_EM) = (y_l + y_r)/(2d)    Formula (2)

In this embodiment, therefore, the forward-tilt angle is computed according to Formula (2) and then compared with the preset forward-tilt range.
In yet another embodiment, Formula (2) is simplified further. Since people usually sit within a narrow range of distances from the display, normally around 50 cm, the distance d can be approximated as a constant. In that case, the forward-tilt conclusion can be drawn from the eye ordinates y_l and y_r in the image alone. In this simplified method, the mean ordinate y of the two eyes (or, even more simply, the ordinate of a single eye) is compared with the healthy forward-tilt ordinate range [y_min, y_max]: when y > y_max, the user's eye position is too high and needs to be lowered; when y < y_min, it is too low and needs to be raised.
The side-tilt-angle judgment unit judges, from the eye position coordinates in the obtained image, whether the person's side-tilt angle in front of the display screen is within a healthy range. To derive this angle, the angle between the line through the eyes and the horizontal is approximated as the side-tilt angle of the person's spine. Referring to the eye location diagram of Fig. 4, the angle q_BL between the eyes and the horizontal satisfies:

tan(q_BL) = (y_l - y_r)/(x_l - x_r)
q_BL = arctan((y_l - y_r)/(x_l - x_r))    Formula (3)
In one embodiment, the side-tilt-angle judgment unit computes the side-tilt angle q_BL according to Formula (3) and judges whether it lies within the healthy side-tilt range [Q_min, Q_max]. If q_BL > Q_max, the real left eye is too high and the user leans too far to the right, requiring adjustment to the left; if q_BL < Q_min, the user leans too far to the left, requiring adjustment to the right. The healthy minimum Q_min and maximum Q_max can be set according to research conclusions on health, or set by the user according to personal habits.
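The side-tilt judgment based on Formula (3) can be sketched the same way. Radians and string return values are again illustrative assumptions:

```python
import math

def side_tilt_check(x_l, y_l, x_r, y_r, Q_min, Q_max):
    """Sketch of the side-tilt judgment using Formula (3):
    q_BL = arctan((y_l - y_r)/(x_l - x_r)), compared with the healthy
    range [Q_min, Q_max] (radians here, by assumption)."""
    q_BL = math.atan((y_l - y_r) / (x_l - x_r))
    if q_BL > Q_max:
        return "leaning right"   # real left eye too high: adjust left
    if q_BL < Q_min:
        return "leaning left"    # adjust to the right
    return "ok"
```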
For small angles, Formula (3) can likewise be simplified to:

q_BL ≈ tan(q_BL) = (y_l - y_r)/(x_l - x_r)    Formula (4)

In another embodiment, therefore, the side-tilt angle is computed according to Formula (4) and then compared with the healthy angle range. In a further simplification, the distance between the person's eyes is taken as approximately constant, so whether the side-tilt angle is within the healthy range can be judged from the difference of the two ordinates alone.
In a preferred embodiment, when the forward-tilt-angle judgment unit and/or the side-tilt-angle judgment unit finds the forward-tilt and/or side-tilt angle outside the healthy range, it does not drive the prompting module immediately. Instead, both the start time and the end time of the unhealthy state are set to the current time, and if the eye positions provided by subsequent images remain unhealthy, the end time is updated, yielding the duration of the unhealthy state. Only when this duration exceeds a certain threshold is a drive signal produced to drive the prompting module. The threshold can be preset by the system, or set or modified by the user.
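This debounced prompting behavior can be sketched as a small stateful helper; the class name, interface, and caller-supplied timestamps are illustrative assumptions:

```python
class UnhealthyDebounce:
    """Sketch of the preferred debounced prompting: the unhealthy state
    must persist for longer than `hold` before a drive signal is
    produced; any healthy frame resets the timer."""

    def __init__(self, hold):
        self.hold = hold      # required unhealthy duration before prompting
        self.start = None     # start time of the current unhealthy state

    def update(self, unhealthy, now):
        """Feed one judgment result at time `now`; return True when a
        drive signal should be produced."""
        if not unhealthy:
            self.start = None          # state recovered: reset
            return False
        if self.start is None:
            self.start = now           # start == end == current time
        return (now - self.start) > self.hold
```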
The prompting module can be any device that produces prompt information for the person, such as sound or images. It can comprise an audio output interface for driving speakers or earphones, and also various displays for showing warning information. The prompt content can be a sound alert, or an image or tactfully worded message shown on the display. Since the judgment driving module produces different drive signals for different health judgments, the prompting module can preferably emit different prompt content for different drive signals. For example, when the viewing time is too long, the prompting module can temporarily switch the display to a screen saver and play music to urge the user to rest the eyes; when a tilt angle is excessive, it can show a prompt message on the display. Preferably, the prompt mode and content can be set by the user.
The above specific description of the present invention is intended to illustrate the implementation of specific embodiments and should not be interpreted as limiting the invention. Those of ordinary skill in the art can, under the teaching of the present invention, make various variants on the basis of the embodiments described in detail, and all such variants fall within the conception of the present invention. The scope of protection claimed by the present invention is limited only by the appended claims.