CN104571487A - Monitoring method, device and system - Google Patents


Info

Publication number
CN104571487A
Authority
CN
China
Prior art keywords
mentioned
user
image sequence
eye
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410054265.XA
Other languages
Chinese (zh)
Other versions
CN104571487B (en)
Inventor
邹嘉骏
方志恒
林伯聪
高嘉文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Utechzone Co Ltd
Original Assignee
Utechzone Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Utechzone Co Ltd filed Critical Utechzone Co Ltd
Publication of CN104571487A publication Critical patent/CN104571487A/en
Application granted granted Critical
Publication of CN104571487B publication Critical patent/CN104571487B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a monitoring method, a monitoring device, and a monitoring system. A user interface is displayed on a display unit, the user interface having a plurality of seat blocks corresponding to a plurality of positions. Image sequences are obtained from image capturing units disposed at the respective positions in a space. A face recognition algorithm is executed on each image sequence to determine whether it contains the portrait of a user. If a portrait is present, an eye-tracking algorithm is executed on the image sequence. Then, based on whether the user is gazing in a designated direction, a corresponding marking operation is performed on each seat block.

Description

Monitoring method, apparatus and system
Technical field
The present invention relates to a monitoring mechanism, and more particularly to a monitoring method, apparatus, and system based on eye tracking.
Background
Current eye-tracking techniques can be broadly divided into invasive and non-invasive approaches. Invasive eye tracking places a search coil in the eye or uses electrooculography (electrooculogram). Non-invasive eye tracking can be further divided into free-head and head-mounted techniques.
With the development of technology, eye tracking has been applied in a wide range of fields, such as neuroscience, psychology, industrial engineering, human factors engineering, advertising and marketing, and computer science. For example, U.S. Patent Publication No. US2010/0092929 proposes a cognition and language assessment system that uses eye-tracking technology to obtain the gaze position and dwell time of a subject, and thereby assess language comprehension, working memory, attention allocation, and semantic association abilities.
Summary of the invention
The invention provides a monitoring method, apparatus, and system that use an eye-tracking algorithm to assist in monitoring a user's degree of attentiveness.
The monitoring method of the invention is for a monitoring device. A user interface is displayed on a display unit, wherein the user interface includes a plurality of seat blocks, the seat blocks respectively corresponding to a plurality of positions in the same space as the monitoring device, each position being provided with an image capturing unit. An image sequence is obtained from the image capturing unit at each position, and a face recognition algorithm is executed on the image sequence to determine whether it contains the portrait of a user. If the image sequence contains a portrait, an eye-tracking algorithm is executed on it. Then, based on whether the user is gazing in a designated direction, a corresponding marking operation is performed on each seat block.
The monitoring system of the invention includes a plurality of image capturing units and a monitoring device arranged in the same space. The image capturing units are respectively disposed at a plurality of positions in the space. The monitoring device includes a display unit, a communication unit, a storage unit, and a processing unit. The display unit displays a user interface, wherein the user interface includes a plurality of seat blocks respectively corresponding to the positions. The communication unit obtains an image sequence from each image capturing unit. The storage unit stores the image sequences. The processing unit is coupled to the display unit and the communication unit, and drives a monitoring module. The monitoring module executes a face recognition algorithm on each image sequence to determine whether it contains the portrait of a user; if the image sequence contains a portrait, the monitoring module executes an eye-tracking algorithm on it; and, based on whether the user is gazing in a designated direction, the monitoring module performs a corresponding marking operation on each seat block.
The monitoring device of the invention includes a display unit, a communication unit, a storage unit, and a processing unit. The display unit displays a user interface, wherein the user interface includes a plurality of seat blocks, the seat blocks respectively corresponding to a plurality of positions in the same space as the monitoring device, each position being provided with an image capturing unit. The communication unit obtains an image sequence from each image capturing unit. The storage unit stores the image sequences. The processing unit is coupled to the display unit and the communication unit, and drives a monitoring module. The monitoring module executes a face recognition algorithm on each image sequence to determine whether it contains the portrait of a user; if the image sequence contains a portrait, the monitoring module executes an eye-tracking algorithm on it; and, based on whether the user is gazing in a designated direction, the monitoring module performs a corresponding marking operation on each seat block.
In an embodiment of the invention, the monitoring module further includes: a face recognition module that executes the face recognition algorithm on the image sequence; an eye-tracking module that executes the eye-tracking algorithm on the image sequence; and a marking module that performs the corresponding marking operation on each seat block.
In an embodiment of the invention, the monitoring module further includes: a state recognition module that, according to the result of the eye-tracking module, determines whether the user is currently in a focused state, a dozing state, or an unfocused state; and an eye-closure detection module that executes an eye-closure detection algorithm on the image sequence. When the eye-tracking module determines that the user is gazing in the designated direction, the state recognition module determines that the user is currently in the focused state. When the eye-tracking module determines that the user is not gazing in the designated direction, the state recognition module determines that the user is currently in the dozing state if the eye-closure detection module detects a closed-eye state, and determines that the user is currently in the unfocused state if a non-closed-eye state is detected.
In an embodiment of the invention, when the face recognition module determines that the image sequence contains no portrait, the marking module displays a fourth mark in the corresponding seat block.
In an embodiment of the invention, the monitoring module further includes a head-turn detection module that determines whether the user turns his or her head based on nostril position information, thereby obtaining face rotation information. When the face recognition module obtains a face region, it detects the nostril area of the face region and obtains the nostril position information. When the head-turn detection module determines, based on the face rotation information, that the user faces the designated direction, the eye-tracking module executes the eye-tracking algorithm on the image sequence.
Based on the above, the user's degree of attentiveness is determined with an eye-tracking algorithm, and a corresponding mark is presented in the corresponding seat block of the user interface, so that the monitoring device (for example, a device on the teacher's side) can intuitively show the attendance and attentiveness of users (for example, audience members).
To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a block diagram of a monitoring system according to an embodiment of the invention;
Fig. 2 is a block diagram of a monitoring module according to an embodiment of the invention;
Fig. 3 is a flowchart of a monitoring method according to an embodiment of the invention;
Fig. 4 is a schematic diagram of a user interface according to an embodiment of the invention;
Fig. 5 is a flowchart of a monitoring method according to another embodiment of the invention.
Description of reference numerals:
100: monitoring system;
110: monitoring device;
120: display unit;
130: communication unit;
140: storage unit;
150: processing unit;
160: monitoring module;
170: image capturing unit;
201: face recognition module;
203: eye-tracking module;
205: marking module;
207: state recognition module;
209: eye-closure detection module;
211: head-turn detection module;
400: user interface;
401: first mark;
402: second mark;
403: third mark;
404: fourth mark;
S: space;
S305 ~ S325: steps of the monitoring method;
S505 ~ S550: steps of another monitoring method.
Embodiment
Fig. 1 is a block diagram of a monitoring system according to an embodiment of the invention. Referring to Fig. 1, the monitoring system 100 includes a monitoring device 110 and a plurality of image capturing units 170. The monitoring system 100 is arranged in a space S such as a classroom, lecture hall, or auditorium; that is, the monitoring device 110 and the image capturing units 170 are located in the same space S.
Each image capturing unit 170 is, for example, a video camera or still camera having a charge-coupled device (CCD) lens, a complementary metal-oxide-semiconductor (CMOS) lens, or an infrared lens. The image capturing units 170 are disposed at a plurality of positions in the space S to obtain image sequences of the users at those positions. Taking a classroom as the space S, the classroom includes a plurality of seats, one image capturing unit 170 is arranged at each seat, and the lens of the image capturing unit 170 faces the user sitting on that seat.
The monitoring device 110 includes a display unit 120, a communication unit 130, a storage unit 140, a processing unit 150, and a monitoring module 160. The processing unit 150 is coupled to the display unit 120, the communication unit 130, the storage unit 140, and the monitoring module 160.
The display unit 120 is, for example, a liquid crystal display (LCD), a plasma display, a vacuum fluorescent display, a light-emitting diode (LED) display, a field emission display (FED), and/or a display of another suitable kind; its type is not limited here.
The communication unit 130 receives the respective image sequences from the image capturing units 170. For example, the communication unit 130 is a physical Ethernet card or wireless network card, or a third-generation (3G) mobile communication module, a general packet radio service (GPRS) module, or a Wi-Fi module.
The storage unit 140 is, for example, a fixed or removable random access memory (RAM) of any type, a read-only memory (ROM), a flash memory, a hard disk, another similar device, or a combination of these devices. The storage unit 140 stores a number of electronic files and temporarily stores the image sequences obtained from the image capturing units 170.
The processing unit 150 is, for example, a central processing unit (CPU), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), programmable logic device (PLD), another similar device, or a combination of these devices.
The processing unit 150 is coupled to the display unit 120, the communication unit 130, and the storage unit 140, and drives the monitoring module 160. The monitoring module 160 is, for example, a driver, firmware, or software written in a computer programming language and stored in the storage unit 140. The monitoring module 160 is essentially composed of a number of program code fragments, and after these fragments are loaded into the monitoring device 110 and executed, the monitoring function is realized. Alternatively, the monitoring module 160 may be a chipset composed of various logic gates. The processing unit 150 drives the monitoring module 160 to execute the monitoring method.
For example, the monitoring module 160 executes a face recognition algorithm on the image sequences to determine whether each image sequence contains the portrait of a user, and executes an eye-tracking algorithm on the image sequences to obtain the direction in which each user is gazing. In addition, based on whether the user is gazing in a designated direction (for example, gazing straight ahead), the monitoring module 160 performs a corresponding marking operation on each seat block, for example, marking each seat block with a corresponding color, or displaying a corresponding mark (such as an image) on each seat block.
Fig. 2 is a block diagram of the monitoring module according to an embodiment of the invention. Referring to Fig. 2, the monitoring module 160 mainly includes a face recognition module 201, an eye-tracking module 203, and a marking module 205. The face recognition module 201 executes a face recognition algorithm on the image sequence, for example an AdaBoost algorithm based on Haar-like features, to detect whether a face is present in the image sequence. Whether a user is present at the corresponding position is judged by whether a face is detected in the image sequence. The eye-tracking module 203 executes an eye-tracking algorithm on the image sequence to track the motion trajectory of the user's eyeball. The marking module 205 performs the corresponding marking operation on the seat blocks.
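The presence judgment performed by the face recognition module 201 over an image sequence can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: it presumes a hypothetical per-frame detector (for example, a Haar-cascade classifier) has already produced a boolean for each image of the sequence, and judges the seat occupied by a simple majority vote; the function name and the 50% threshold are assumptions.

```python
def seat_occupied(face_detected_per_frame, min_ratio=0.5):
    """Judge whether a user is present at a seat, given per-frame
    face-detection results for one image sequence (list of booleans).

    A single frame can misfire, so the decision is taken over the whole
    sequence: the seat counts as occupied when a face was detected in at
    least `min_ratio` of the frames.
    """
    if not face_detected_per_frame:
        return False  # no frames yet: treat the seat as empty
    hits = sum(1 for detected in face_detected_per_frame if detected)
    return hits / len(face_detected_per_frame) >= min_ratio
```

Downstream, an occupied seat would enable the eye-tracking module 203, while an empty one would trigger the fourth mark in the corresponding seat block.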
In addition, the monitoring module 160 may further include a state recognition module 207, an eye-closure detection module 209, and a head-turn detection module 211. The state recognition module 207 determines, according to the result of the eye-tracking module 203, whether the user is currently in a focused state, a dozing state, or an unfocused state. The eye-closure detection module 209 executes an eye-closure detection algorithm on the image sequence. The head-turn detection module 211 determines whether the user turns his or her head based on nostril position information, thereby obtaining face rotation information. For example, after a face is detected, the face recognition module 201 can further locate the nostril area, that is, the positions of the two nostrils; the nostril position information is, for example, the positions of the two nostrils.
In addition, in other embodiments, a display may also be arranged at each position of the space S, together with each image capturing unit 170, to show information such as course content. Accordingly, before the eye-tracking computation is performed, a calibration procedure may first be executed for the image capturing units 170. If the hardware specifications of the image capturing units 170 are identical, only one of them needs to be calibrated; if they differ, each unit is calibrated individually. For example, before the eyeball position is detected, a plurality of calibration images is received in sequence from the image capturing unit 170. Each calibration image is obtained while the user gazes at one of a plurality of calibration points on the display unit 120, for example the four points at the upper left, upper right, lower left, and lower right of the display unit 120. During the calibration procedure, the user is prompted on the display unit 120 to gaze at the four calibration points, and four calibration images are thereby obtained. A reference calibration parameter is then obtained from the positions of the two glints in the eye region of each calibration image; these two glints are reflections on the eyeball caused by the light-emitting module of the image capturing unit 170. The calibration parameter is, for example, a vector based on the glint positions G1 and G2. Furthermore, a coordinate transformation matrix is generated from the calibration images by a perspective transformation method; this matrix converts coordinate positions in the eye region into coordinate positions on the display.
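The perspective-transformation step can be made concrete with a small sketch. The code below is an illustrative assumption, not the patent's code: it derives the 3 × 3 coordinate transformation matrix from the four calibration-point correspondences (eye-region coordinates to display coordinates) by solving the standard eight-unknown linear system with Gaussian elimination; all function names are hypothetical.

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def homography(src, dst):
    """3x3 perspective matrix mapping four src points onto four dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]  # h33 is fixed to 1
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, x, y):
    """Map an eye-region coordinate through H to a display coordinate."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

In the embodiment above, the four source points would be the eye-region coordinates recorded while the user gazes at the four calibration points of the display unit 120, and the four destination points would be those points' screen coordinates.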
The eye-tracking module 203 detects the eye region of the current image in the image sequence to obtain the pupil position and the two glint positions (hereinafter glint positions G1' and G2') in the current image. The eye-tracking module 203 then obtains a comparison calibration parameter from the glint positions G1' and G2' of the current image, and obtains a dynamic calibration parameter (C3) from the reference calibration parameter (C1) and the comparison calibration parameter (C2); for example, the dynamic calibration parameter is the ratio of the two, that is, C3 = C2/C1. Afterwards, the eye-tracking module 203 calculates the eyeball movement coordinates, denoted for example (X', Y'), from the glint position G1' (or G2') in the current image, the pupil position (for example, the coordinates of the pupil center), and the dynamic calibration parameter. Using the coordinate transformation matrix, the eye-tracking module 203 then converts the eyeball movement coordinates (X', Y') into gaze point coordinates on the display unit (for example, (Xs, Ys)) and records them. Accordingly, the motion trajectory of the eyeball can be obtained from the recorded gaze point coordinates, and the direction in which the user is currently gazing can be obtained from them.
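The dynamic calibration described above (C3 = C2/C1) can be sketched as follows. This is a hedged illustration with hypothetical names: it treats each calibration parameter as the distance between the two glints, so that the pupil offset is rescaled when the user moves closer to or farther from the camera; the patent only specifies that the parameter is a vector based on the glint positions.

```python
import math

def glint_distance(g1, g2):
    """Calibration parameter: distance between the two glint positions."""
    return math.hypot(g2[0] - g1[0], g2[1] - g1[1])

def eye_move_coord(pupil, glint, c_ref, c_cur):
    """Eyeball movement coordinates (X', Y'): the pupil offset from one
    glint, scaled by the dynamic calibration parameter C3 = C2 / C1."""
    c3 = c_cur / c_ref
    return ((pupil[0] - glint[0]) * c3, (pupil[1] - glint[1]) * c3)
```

The resulting (X', Y') would then be mapped through the coordinate transformation matrix to a gaze point (Xs, Ys) on the display.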
The steps of the monitoring method are illustrated below. Fig. 3 is a flowchart of a monitoring method according to an embodiment of the invention. Referring to Figs. 1-3, in step S305, the monitoring device 110 displays a user interface on the display unit 120. The user interface includes a plurality of seat blocks respectively corresponding to a plurality of positions in the space S, each position being provided with an image capturing unit 170. Taking a classroom as the space S, the seating chart of the classroom is shown in the user interface.
Then, in step S310, the monitoring device 110 obtains image sequences from the image capturing units 170 through the communication unit 130. Afterwards, the monitoring module 160 analyzes the image sequences.
In step S315, the monitoring module 160 executes the face recognition algorithm through the face recognition module 201 to judge whether the image sequence contains the portrait of a user, that is, to detect whether a user is present at the corresponding position, for example, whether a student is in attendance.
Then, in step S320, the monitoring module 160 executes the eye-tracking algorithm through the eye-tracking module 203. Specifically, the monitoring module 160 enables the eye-tracking module 203 only after detecting that the image sequence contains the portrait of a user.
Then, in step S325, based on whether the user is gazing in the designated direction, the monitoring module 160 performs the corresponding marking operation on each seat block. For example, it detects whether the user is watching the display at his or her position (used together with the image capturing unit 170), or whether the user is watching a display or whiteboard (or blackboard) arranged at the front.
In the seating chart, colors or patterns indicate whether a student is absent, attentive, or dozing. In addition, the state recognition module 207 determines, according to the result of the eye-tracking module 203, whether the user is currently in the focused state, the dozing state, or the unfocused state.
In addition, if the student using each position has been assigned in advance and a database has been established in the storage unit 140 of the monitoring device 110, a roll-call operation can further be performed for absences.
Fig. 4 is a schematic diagram of a user interface according to an embodiment of the invention. Referring to Fig. 4, the user interface 400 is a seating chart including 5 × 4 seat blocks B1-B20. The 5 × 4 layout is only an example; in other embodiments it can be adjusted according to the number of positions provided with image capturing units 170 in the real space S.
Take seat blocks B5, B8, B16, and B9 as examples; they respectively display a first mark 401, a second mark 402, a third mark 403, and a fourth mark 404. The first mark 401 in seat block B5 indicates that a user is present at the corresponding physical position and is currently in the focused state. The second mark 402 in seat block B8 indicates that a user is present and currently in the dozing state. The third mark 403 in seat block B16 indicates that a user is present and currently in the unfocused state. The fourth mark 404 in seat block B9 indicates that no user is present at the corresponding physical position. Applied to teaching, the user interface 400 thus allows a teacher to quickly learn the students' attendance and whether they are attentive in class.
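As one way to picture the marking operation on the 5 × 4 seating chart, the following hypothetical helper maps each seat's state to a text symbol and renders the blocks row by row. The symbols and the numeric state encoding are illustrative assumptions; the patent's interface uses graphical marks 401-404 (or background colors).

```python
# 1: focused (first mark), 2: dozing (second mark),
# 3: not focused (third mark), 4: absent (fourth mark)
MARKS = {1: "F", 2: "D", 3: "N", 4: "-"}

def render_seating_chart(states, cols=5):
    """Render the seat blocks of the user interface as text, one row of
    `cols` seats per line, in seat order B1, B2, ..."""
    rows = []
    for i in range(0, len(states), cols):
        rows.append(" ".join(MARKS[s] for s in states[i:i + cols]))
    return "\n".join(rows)
```

For a chart where B8 is dozing and B9 absent, the second row (seats B6-B10) would read `F F D - F`.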
Fig. 5 is a flowchart of a monitoring method according to another embodiment of the invention. Referring to Figs. 1-2 and Figs. 4-5, in step S505 the monitoring module 160 executes the face recognition algorithm through the face recognition module 201. In step S510, it is judged whether the image sequence contains the portrait of a user, thereby judging whether a user is present at the physical position. If the image sequence contains no portrait, it is judged that there is no user at the position of the image capturing unit that transmitted the image sequence, and then in step S515 the marking module 205 displays the fourth mark 404 in the corresponding seat block.
Then, in step S520, the monitoring module 160 executes the eye-tracking algorithm through the eye-tracking module 203. Afterwards, in step S525, the eye-tracking module 203 judges whether the user is gazing in the designated direction. When the eye-tracking module 203 judges that the user is gazing in the designated direction, the state recognition module 207 judges that the user is currently in the focused state, and then in step S530 the marking module 205 displays the first mark 401 in the corresponding seat block.
In addition, when it is judged that the image sequence contains a portrait, the head-turn detection module 211 may first judge, based on the face rotation information, whether the user is facing the designated direction, and the eye-tracking algorithm is executed on the image sequence only when the user faces the designated direction. Whether to use the head-turn detection module 211 can be decided as required.
If the user is not gazing in the designated direction, step S535 is performed: the eye-closure detection module 209 executes the eye-closure detection algorithm on the image sequence. Then, in step S540, the eye-closure detection module 209 judges whether the user is in a closed-eye state, for example by the size of the detected eye object. For example, when the height of the eye object is less than a height threshold (for example, in the range of 5-7 pixels) and the width of the eye object is greater than a width threshold (for example, in the range of 60-80 pixels), a closed-eye state is determined; if these conditions are not met, a non-closed-eye state is determined.
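The threshold test just described can be written out directly. A minimal sketch, assuming the eye object's bounding box has already been extracted from the image; the default thresholds follow the quoted ranges (height 5-7 pixels, width 60-80 pixels) but are assumptions, since the patent leaves the exact values open.

```python
def eyes_closed(eye_height, eye_width, height_threshold=6, width_threshold=70):
    """Closed-eye test from the detected eye object's bounding box:
    a closed eye appears as a short, wide object (the eyelid slit)."""
    return eye_height < height_threshold and eye_width > width_threshold
```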
When a closed-eye state is detected, the state recognition module 207 judges that the user is currently in the dozing state, and then in step S545 the marking module 205 displays the second mark 402 in the corresponding seat block. When a non-closed-eye state is detected, the state recognition module 207 judges that the user is currently in the unfocused state, and then in step S550 the marking module 205 displays the third mark 403 in the corresponding seat block.
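The decision flow of steps S505-S550 can be condensed into one classification function. A sketch under stated assumptions: the three inputs stand for the outcomes of the face recognition, eye-tracking, and eye-closure modules, and the returned labels (hypothetical names) correspond to the first through fourth marks.

```python
FOCUSED, DOZING, NOT_FOCUSED, ABSENT = "focused", "dozing", "not_focused", "absent"

def classify_seat(has_portrait, gazing_at_target, eyes_are_closed):
    """Map the module outputs to the per-seat state shown in the seat block."""
    if not has_portrait:
        return ABSENT        # S515: fourth mark 404
    if gazing_at_target:
        return FOCUSED       # S530: first mark 401
    if eyes_are_closed:
        return DOZING        # S545: second mark 402
    return NOT_FOCUSED       # S550: third mark 403
```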
Alternatively, the first through fourth marks may be background colors of different colors: the user's degree of attentiveness and absences are represented by displaying different background colors in the seat blocks.
In summary, the eye-tracking algorithm is used to judge the user's degree of attentiveness, and a corresponding mark is presented in the corresponding seat block of the user interface. Accordingly, when applied to teaching, the teacher can intuitively learn the students' attendance and attentiveness. The invention can also be applied to general lectures, so that the lecturer can learn the attentiveness of the audience from the marks on the user interface and adjust the manner of the lecture accordingly to attract the audience's attention.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be accomplished by hardware under program instructions. The program can be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, without these modifications or replacements departing from the scope of the technical solutions of the embodiments of the invention.

Claims (13)

1. a method for supervising, for supervising device, is characterized in that, above-mentioned supervising device is arranged in space, and said method comprises:
In display unit, show user interface, wherein above-mentioned user interface comprises multiple seats block, and above-mentioned seat block corresponds to multiple positions in above-mentioned space respectively, and each above-mentioned position is provided with taking unit;
From above-mentioned position, respective above-mentioned taking unit obtains image sequence;
Face identification algorithm is performed, to judge whether above-mentioned image sequence has the portrait of user to above-mentioned image sequence;
If above-mentioned image sequence exists above-mentioned portrait, eye is performed to above-mentioned image sequence and moves tracing algorithm; And
Whether watch attentively in assigned direction based on above-mentioned user, and the mark action corresponding to each above-mentioned seat onblock executing.
2. method according to claim 1, is characterized in that, performs after above-mentioned eye moves the step of tracing algorithm, also comprise above-mentioned image sequence:
If above-mentioned user watches attentively in above-mentioned assigned direction, judge that above-mentioned user is in absorbed state at present; And
If above-mentioned user does not watch attentively in above-mentioned assigned direction, eye closing detection algorithm is performed to above-mentioned image sequence, to judge that above-mentioned user is in doze state at present or is not absorbed in state.
3. method according to claim 2, is characterized in that, after above-mentioned image sequence being performed to the step of above-mentioned eye closing detection algorithm, also comprises:
If detect, above-mentioned user is for closed-eye state, judges that above-mentioned user is in above-mentioned doze state at present; And
If do not detect, above-mentioned user is for above-mentioned closed-eye state, judges that above-mentioned user is in above-mentioned not absorbed state at present.
4. whether method according to claim 2, is characterized in that, watch attentively in above-mentioned assigned direction based on above-mentioned user, and comprises the step of mark action corresponding to each above-mentioned seat onblock executing:
When above-mentioned user is in above-mentioned absorbed state at present, in the above-mentioned seat block of correspondence, show the first mark;
When above-mentioned user is in above-mentioned doze state at present, in the above-mentioned seat block of correspondence, show the second mark;
When above-mentioned user is in above-mentioned not absorbed state at present, in the above-mentioned seat block of correspondence, show the 3rd mark.
5. method according to claim 1, is characterized in that, after the step performing above-mentioned human face recognition algorithm, also comprises:
If above-mentioned image sequence does not exist above-mentioned portrait, in the above-mentioned seat block of correspondence, show the 4th mark.
6. The method according to claim 1, wherein after the step of performing the face recognition algorithm on the image sequence, the method further comprises:
when a face region is obtained, detecting a nostril region of the face region and obtaining nostril position information;
determining whether the user turns his or her head based on the nostril position information, thereby obtaining face rotation information; and
determining whether the user faces the designated direction based on the face rotation information, so as to perform the eye-tracking algorithm on the image sequence when it is determined that the user faces the designated direction.
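Claim 6 gates the eye-tracking step on a nostril-based head-pose estimate. One rough way to realize such a check, assuming the nostrils have already been located as two image points inside a detected face region, is to test whether the nostril midpoint stays near the horizontal center of the face region. The geometry and the `tolerance` threshold below are illustrative assumptions, not the patent's actual criterion.

```python
def facing_designated_direction(left_nostril, right_nostril,
                                face_left, face_right,
                                tolerance=0.15):
    """Rough head-turn test from nostril placement in the face region.

    A nostril midpoint near the horizontal center of the face region
    suggests a frontal face; a large normalized offset suggests the
    head is turned away.  Points are (x, y) pixel coordinates.
    """
    face_width = face_right - face_left
    face_centre = (face_left + face_right) / 2.0
    nostril_mid = (left_nostril[0] + right_nostril[0]) / 2.0
    offset = abs(nostril_mid - face_centre) / face_width
    return offset <= tolerance
```

Per the claim, the (more expensive) eye-tracking algorithm would then run only on frames where this test reports that the user faces the designated direction.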
7. A monitoring device, comprising:
a display unit, displaying a user interface, wherein the user interface comprises a plurality of seat blocks, the seat blocks respectively correspond to a plurality of positions located in the same space as the monitoring device, and each of the positions is provided with an image capturing unit;
a communication unit, obtaining an image sequence from the image capturing unit at each of the positions;
a storage unit, storing the image sequence; and
a processing unit, coupled to the display unit, the communication unit, and the storage unit, and driving a monitoring module;
wherein the monitoring module performs a face recognition algorithm on the image sequence to determine whether a portrait of a user exists in the image sequence; if the portrait exists in the image sequence, the monitoring module performs an eye-tracking algorithm on the image sequence; and, based on whether the user gazes in a designated direction, the monitoring module performs a corresponding marking action for each seat block.
8. The monitoring device according to claim 7, wherein the monitoring module further comprises:
a face recognition module, performing the face recognition algorithm on the image sequence;
an eye-tracking module, performing the eye-tracking algorithm on the image sequence; and
a marking module, performing the corresponding marking action for each seat block.
9. The monitoring device according to claim 8, wherein the monitoring module further comprises:
a state recognition module, determining whether the user is currently in a focused state, a dozing state, or an unfocused state according to a result of the eye-tracking module; and
a closed-eye detection module, performing a closed-eye detection algorithm on the image sequence;
wherein when the eye-tracking module determines that the user gazes in the designated direction, the state recognition module determines that the user is currently in the focused state;
wherein when the eye-tracking module determines that the user does not gaze in the designated direction, the state recognition module determines that the user is currently in the dozing state if the closed-eye detection module detects that the user is in a closed-eye state, and determines that the user is currently in the unfocused state if the closed-eye detection module detects that the user is not in the closed-eye state.
10. The monitoring device according to claim 9, wherein the marking module displays a first mark in the corresponding seat block when the user is currently in the focused state; displays a second mark in the corresponding seat block when the user is currently in the dozing state; and displays a third mark in the corresponding seat block when the user is currently in the unfocused state.
11. The monitoring device according to claim 8, wherein the marking module displays a fourth mark in the corresponding seat block when the face recognition module determines that the portrait does not exist in the image sequence.
12. The monitoring device according to claim 8, wherein the monitoring module further comprises:
a head-turn detection module, determining whether the user turns his or her head based on nostril position information, thereby obtaining face rotation information, wherein when the face recognition module obtains a face region, the face recognition module detects a nostril region of the face region and obtains the nostril position information;
wherein when the head-turn detection module determines, based on the face rotation information, that the user faces the designated direction, the eye-tracking module performs the eye-tracking algorithm on the image sequence.
13. A monitoring system, comprising:
a plurality of image capturing units, respectively disposed at a plurality of positions in a space; and
a monitoring device, disposed in the space, wherein the monitoring device comprises:
a display unit, displaying a user interface, wherein the user interface comprises a plurality of seat blocks, and the seat blocks respectively correspond to the positions;
a communication unit, obtaining an image sequence from each of the image capturing units;
a storage unit, storing the image sequence; and
a processing unit, coupled to the display unit and the communication unit, and driving a monitoring module;
wherein the monitoring module performs a face recognition algorithm on the image sequence to determine whether a portrait of a user exists in the image sequence; if the portrait exists in the image sequence, the monitoring module performs an eye-tracking algorithm on the image sequence; and, based on whether the user gazes in a designated direction, the monitoring module performs a corresponding marking action for each seat block.
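Taken together, claims 1, 5, and 13 describe a per-frame loop over seat blocks: for each position's image sequence, run face recognition; if no portrait is found, show the fourth mark; otherwise classify the attention state and show the corresponding mark. A minimal sketch of that loop follows; `detect_face` and `classify_state` are stand-in names for the face recognition and eye-tracking/closed-eye algorithms, not the patent's implementation.

```python
# Marks recited in claims 4-5: first-third for the three attention
# states, fourth when no portrait exists in the image sequence.
MARKS = {
    "focused": "first mark",
    "dozing": "second mark",
    "unfocused": "third mark",
    "absent": "fourth mark",
}

def update_seat_blocks(image_sequences, detect_face, classify_state):
    """Return the mark to display in each seat block.

    image_sequences: dict mapping seat id -> image sequence obtained
    from the image capturing unit at that position (claim 13).
    detect_face(seq) -> bool and classify_state(seq) -> str are
    hypothetical callables standing in for the claimed algorithms.
    """
    marks = {}
    for seat, sequence in image_sequences.items():
        if not detect_face(sequence):          # claim 5: no portrait
            marks[seat] = MARKS["absent"]
        else:                                  # claims 2-4: state -> mark
            marks[seat] = MARKS[classify_state(sequence)]
    return marks
```

The display unit would then redraw each seat block of the user interface with its returned mark.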
CN201410054265.XA 2013-10-16 2014-02-18 Monitoring method, device and system Active CN104571487B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW102137367A TW201516892A (en) 2013-10-16 2013-10-16 Method, apparatus and system for monitoring
TW102137367 2013-10-16

Publications (2)

Publication Number Publication Date
CN104571487A true CN104571487A (en) 2015-04-29
CN104571487B CN104571487B (en) 2018-04-24

Family

ID=53087773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410054265.XA Active CN104571487B (en) 2013-10-16 2014-02-18 Monitoring method, device and system

Country Status (2)

Country Link
CN (1) CN104571487B (en)
TW (1) TW201516892A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104935884A (en) * 2015-06-05 2015-09-23 重庆智韬信息技术中心 Method for intelligently monitoring class attendance order of students
CN105844253A (en) * 2016-04-01 2016-08-10 乐视控股(北京)有限公司 Mobile terminal image identification data comparison method and device
CN106297213A (en) * 2016-08-15 2017-01-04 欧普照明股份有限公司 Detection method, detection device and lighting
CN106372614A (en) * 2016-09-13 2017-02-01 南宁市远才教育咨询有限公司 Class discipline monitoring prompt auxiliary apparatus
CN106933367A (en) * 2017-03-28 2017-07-07 安徽味唯网络科技有限公司 Method for improving students' attention in class
CN108460700A (en) * 2017-12-28 2018-08-28 合肥壹佰教育科技有限公司 Intelligent student education management and regulation system
CN109448337A (en) * 2018-11-21 2019-03-08 重庆工业职业技术学院 Multimedia teaching class reminding method and system
CN109740498A (en) * 2018-12-28 2019-05-10 广东新源信息技术有限公司 Smart classroom based on face recognition technology
CN109934085A (en) * 2017-12-15 2019-06-25 埃森哲环球解决方案有限公司 Capturing a sequence of events in a monitoring system
CN110476195A (en) * 2017-03-30 2019-11-19 国际商业机器公司 Classroom note generator based on gaze
CN117137427A (en) * 2023-08-31 2023-12-01 深圳市华弘智谷科技有限公司 Vision detection method and device based on VR and intelligent glasses

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220395206A1 (en) * 2021-06-11 2022-12-15 Ganzin Technology, Inc. Cognitive assessment system based on eye movement

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101556717A (en) * 2009-05-19 2009-10-14 上海海隆软件股份有限公司 ATM intelligent security system and monitoring method
CN102018519A (en) * 2009-09-15 2011-04-20 由田新技股份有限公司 Staff concentration degree monitoring system
CN103208212A (en) * 2013-03-26 2013-07-17 陈秀成 Anti-cheating remote online examination method and system

Also Published As

Publication number Publication date
TW201516892A (en) 2015-05-01
CN104571487B (en) 2018-04-24

Similar Documents

Publication Publication Date Title
CN104571487A (en) Monitoring method, device and system
CN106652972B (en) Processing circuit of display screen, display method and display device
CA3069173C (en) Language element vision augmentation methods and devices
CN104571488B (en) Electronic file marking method and device
CN104834446B (en) Display screen multi-screen control method and system based on eyeball tracking technology
JP6165846B2 (en) Selective enhancement of parts of the display based on eye tracking
US11693475B2 (en) User recognition and gaze tracking in a video system
US20160054794A1 (en) Eye-control reminding method, eye-control image display method and display system
CN103412643B (en) Terminal and its method for remote control
TW201633215A (en) System and method for protecting eyes
CN109685007B (en) Eye habit early warning method, user equipment, storage medium and device
CN105319718A (en) Wearable glasses and method of displaying image via wearable glasses
CN102542739A (en) Vision protection method and system
CN104657648A (en) Head-mounted display device and login method thereof
CN109241917A (en) Classroom behavior detection system based on computer vision
CN105095885B (en) Detection method and detection device of human eye state
CN104580693A (en) Method and device for detecting user head-lowering posture
CN102610184A (en) Method and device for adjusting display state
CN104267816A (en) Method for adjusting content of display screen and display screen adjusting device
CN110148092A (en) Analysis method of teenagers' sitting posture and emotional state based on machine vision
WO2021197369A1 (en) Liveness detection method and apparatus, electronic device, and computer readable storage medium
CN110490173B (en) Intelligent action scoring system based on 3D somatosensory model
US20170212587A1 (en) Electronic device
CN105326471A (en) Children visual acuity testing device and testing method
CN105260697A (en) Method for preventing eye fatigue and applicable mobile device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant