CN105488462A - Eye positioning identification device and method - Google Patents

Eye positioning identification device and method

Info

Publication number
CN105488462A
Authority
CN
China
Prior art keywords
image
eyes
iris
blinking
eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510831395.4A
Other languages
Chinese (zh)
Inventor
李小平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201510831395.4A priority Critical patent/CN105488462A/en
Publication of CN105488462A publication Critical patent/CN105488462A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses an eye positioning and identification device, which comprises a capture module, a positioning module and an identification module. The capture module is used for collecting an image and capturing a blinking action from the image; the positioning module is used for locating an eye region in the image according to the blinking action and obtaining an eye image if the blinking action is successfully captured; and the identification module is used for carrying out eye recognition according to the eye image. The invention also discloses an eye positioning and identification method. The face image of a user is collected, and the eye region is quickly located according to the user's blinking action so as to obtain an eye image for eye recognition; the speed and accuracy of eye positioning can thereby be improved, and eye recognition can be carried out more efficiently.

Description

Eye positioning and recognition device and method
Technical field
The present invention relates to the technical field of biometric identification, and in particular to an eye positioning and recognition device and method.
Background art
With the development of the mobile Internet and terminal devices, users pay increasing attention to the security of smart terminals, for example screen locking, application passwords and data storage. Biometric identification, owing to its uniqueness and non-reproducibility, can effectively guarantee the data security of a smart terminal and is therefore favored by consumers. Among biometric techniques, eye recognition can only identify the reflection of a living body; photos and videos cannot be identified, so identification cannot be spoofed with stolen photo or video information. Eye recognition is therefore highly secure and is an important development direction of current biometric identification.
At present, eye recognition first needs to locate the eye position and then performs biometric identification on the sclera or iris of the eyes. It can be seen that locating the eyes accurately and rapidly is an important foundation of eye recognition. Current eye positioning mainly uses the gray level and geometric features of a facial image.
However, because the amount of facial information is huge, the computation is complex and it is difficult to locate the eye position quickly, so eye recognition is slow. How to reduce the amount of information to be processed and quickly locate the eyes for eye recognition has become an important research topic.
Summary of the invention
The main purpose of the present invention is to provide an eye positioning and recognition device and method, aiming to solve the technical problem that eye positioning and recognition is slow.
To achieve the above object, the invention provides an eye positioning and recognition device, the device comprising:
a capture module, configured to collect an image and capture a blinking action from the image;
a positioning module, configured to, if the blinking action is successfully captured, locate an eye region in the image according to the blinking action and obtain an eye image;
an identification module, configured to carry out eye recognition according to the eye image.
In one embodiment, the capture module comprises:
a collection unit, configured to collect a preset number of images within a preset time and capture feature data from the preset number of images;
a matching unit, configured to match the feature data with preset blinking feature parameters; if the feature data and the preset blinking feature parameters match successfully, the blinking action is successfully captured.
In one embodiment, the collection unit is further configured to:
if the feature data fails to match the preset blinking feature parameters, determine that the blinking action has not been captured, collect images again, and capture feature data from the newly collected images.
In one embodiment, the identification module comprises:
an iris recognition unit, configured to carry out iris recognition according to the eye image;
a sclera recognition unit, configured to carry out sclera recognition according to the eye image.
In one embodiment, the iris recognition unit is further configured to:
obtain an iris image according to the eye image;
preprocess the iris image;
extract iris feature points from the preprocessed iris image, and carry out iris recognition according to the iris feature points.
In addition, to achieve the above object, the present invention also provides an eye positioning and recognition method, the method comprising the following steps:
collecting an image and capturing a blinking action from the image;
if the blinking action is successfully captured, locating an eye region in the image according to the blinking action and obtaining an eye image;
carrying out eye recognition according to the eye image.
In one embodiment, the step of collecting an image and capturing a blinking action from the image comprises:
collecting a preset number of images within a preset time, and capturing feature data from the preset number of images;
matching the feature data with preset blinking feature parameters;
if the feature data and the preset blinking feature parameters match successfully, determining that the blinking action is successfully captured.
In one embodiment, after the step of matching the feature data with the preset blinking feature parameters, the method further comprises:
if the feature data fails to match the preset blinking feature parameters, determining that the blinking action has not been captured, and returning to the step of collecting an image and capturing feature data from the image.
In one embodiment, the step of carrying out eye recognition according to the eye image comprises:
carrying out iris recognition according to the eye image; or,
carrying out sclera recognition according to the eye image.
In one embodiment, the step of carrying out iris recognition according to the eye image comprises:
obtaining an iris image according to the eye image;
preprocessing the iris image;
extracting iris feature points from the preprocessed iris image, and carrying out iris recognition according to the iris feature points.
In the eye positioning and recognition device and method proposed by the embodiments of the present invention, the device comprises a capture module, a positioning module and an identification module; the capture module collects an image and performs action recognition to capture a blinking action; if the blinking action is successfully captured, the positioning module locates the eye region in the image according to the blinking action and obtains an eye image; the identification module then carries out eye recognition according to the eye image. The invention collects the face image of a user and quickly locates the eye region according to the user's blinking action, so that an eye image is obtained for eye recognition. This reduces the amount of facial information to be processed, improves the speed and accuracy of eye positioning, and allows eye recognition to be carried out more efficiently.
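For illustration only, the following is a minimal Python sketch of how the three-module pipeline summarized above could be organized; the class and method names are hypothetical assumptions made for the example and are not part of the patent disclosure.

```python
# Hypothetical sketch of the capture -> locate -> recognize pipeline.
# All names are illustrative; the patent does not prescribe an API.
from dataclasses import dataclass
from typing import Optional, List
import numpy as np


@dataclass
class BlinkEvent:
    open_frame: np.ndarray    # frame judged to show open eyes
    closed_frame: np.ndarray  # frame judged to show closed eyes


class CaptureModule:
    def capture_blink(self, frames: List[np.ndarray]) -> Optional[BlinkEvent]:
        """Return a BlinkEvent if a blink is detected in the frames, else None."""
        raise NotImplementedError


class PositioningModule:
    def locate_eyes(self, blink: BlinkEvent) -> np.ndarray:
        """Return the cropped eye image located from the blink frames."""
        raise NotImplementedError


class IdentificationModule:
    def recognize(self, eye_image: np.ndarray) -> bool:
        """Return True if the iris/sclera features match the enrolled user."""
        raise NotImplementedError


def authenticate(frames, capture: CaptureModule,
                 locate: PositioningModule, identify: IdentificationModule) -> bool:
    blink = capture.capture_blink(frames)
    if blink is None:                 # no blink captured: retry or deny access
        return False
    eye_image = locate.locate_eyes(blink)
    return identify.recognize(eye_image)
```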
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for realizing embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a functional block diagram of a first embodiment of the eye positioning and recognition device of the present invention;
Fig. 4 is a functional block diagram of second and third embodiments of the eye positioning and recognition device of the present invention;
Fig. 5 is a functional block diagram of fourth and fifth embodiments of the eye positioning and recognition device of the present invention;
Fig. 6 is a schematic flowchart of a first embodiment of the eye positioning and recognition method of the present invention;
Fig. 7 is a schematic flowchart of a second embodiment of the eye positioning and recognition method of the present invention;
Fig. 8 is a schematic flowchart of a third embodiment of the eye positioning and recognition method of the present invention;
Fig. 9 is a schematic flowchart of a fourth embodiment of the eye positioning and recognition method of the present invention;
Fig. 10 is a schematic flowchart of a fifth embodiment of the eye positioning and recognition method of the present invention.
The realization of the objects, functional characteristics and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
The main solution of the embodiments of the present invention is as follows: an eye positioning and recognition device is provided, which comprises a capture module, a positioning module and an identification module; the capture module collects an image and captures a blinking action from the image; if the blinking action is successfully captured, the positioning module locates an eye region in the image according to the blinking action and obtains an eye image; and the identification module carries out eye recognition according to the eye image.
In the prior art, when eye recognition is carried out, the amount of facial information of the human body is huge, the computation is complex, and it is difficult to locate the eye position quickly, so eye recognition is slow.
The present invention provides a solution: by recognizing the blinking action of the human body, the eye region is quickly located and eye recognition is carried out, thereby improving the speed and efficiency of eye recognition.
A mobile terminal for realizing the embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component" or "unit" used to denote elements are only intended to facilitate the description of the present invention and have no specific meaning in themselves. Therefore, "module" and "component" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal. However, those skilled in the art will appreciate that, apart from elements used specifically for mobile purposes, the structure according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for realizing embodiments of the present invention.
The mobile terminal 100 may comprise a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190 and the like. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented. The elements of the mobile terminal are described in more detail below.
The wireless communication unit 110 typically includes one or more components that allow wireless communication between the mobile terminal 100 and a wireless communication system or network.
The A/V input unit 120 is used to receive audio or video signals. The A/V input unit 120 may comprise a camera 121, which processes image data of still pictures or video obtained by an image capture apparatus in a video capture mode or an image capture mode. The processed image frames may be displayed on a display unit 151.
The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (for example, a touch-sensitive component detecting changes in resistance, pressure, capacitance and the like caused by being touched), a scroll wheel, a joystick and the like. In particular, when the touch pad is superimposed on the display unit 151 as a layer, a touch screen may be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (for example, the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (that is, touch input), the orientation of the mobile terminal 100, the acceleration or deceleration and movement direction of the mobile terminal 100, and so on.
The interface unit 170 serves as an interface through which at least one external device can be connected to the mobile terminal 100, for example a video I/O port. The interface unit 170 may be used to receive input from an external device and transfer the received input to one or more elements within the mobile terminal 100, or may be used to transmit data between the mobile terminal and an external device.
The output unit 150 is configured to provide output signals (for example, audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153 and the like.
The display unit 151 may display information processed in the mobile terminal 100. When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display captured and/or received images, or a user interface (UI) or graphical user interface (GUI) showing the video or images and related functions.
Meanwhile, when the display unit 151 and the touch pad are superimposed on each other as layers to form a touch screen, the display unit 151 may serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display and the like. Some of these displays may be constructed to be transparent to allow the user to view from the outside; these may be called transparent displays, a typical transparent display being, for example, a TOLED (transparent organic light-emitting diode) display. According to the particular embodiment desired, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown).
The audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a loudspeaker, a buzzer and the like.
The alarm unit 153 may provide output to notify the occurrence of an event of the mobile terminal 100. Typical events may include call reception, message reception, key signal input, touch input and so on. In addition to audio or video output, the alarm unit 153 may provide output in different ways to notify the occurrence of an event. For example, the alarm unit 153 may provide output in the form of vibration; when a call, a message or some other incoming communication is received, the alarm unit 153 may provide a tactile output (that is, vibration) to notify the user. By providing such tactile output, the user can be made aware of the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 may also provide output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 may store software programs for the processing and control operations performed by the controller 180, or may temporarily store data that has been output or is to be output (for example, a phone book, messages, still images, video, etc.).
The memory 160 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc and the like. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 over a network.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing related to voice calls, data communication, video calls and so on.
The power supply unit 190 receives external power or internal power and, under the control of the controller 180, provides the appropriate electric power required to operate each element and component.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such an embodiment may be implemented in the controller 180. For a software implementation, an embodiment such as a process or function may be implemented with a separate software module that allows at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and the software code may be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal will be described as an example among various types of mobile terminals such as folder-type, bar-type, swing-type and slide-type mobile terminals. Therefore, the present invention can be applied to any type of mobile terminal and is not limited to a slide-type mobile terminal.
The mobile terminal 100 as shown in Fig. 1 may be constructed to operate with wired and wireless communication systems that transmit data via frames or packets, as well as with satellite-based communication systems.
A communication system in which a mobile terminal according to the present invention can operate will now be described with reference to Fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by the communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA) and universal mobile telecommunications system (UMTS) (in particular, Long Term Evolution (LTE)), global system for mobile communications (GSM), and the like. As a non-limiting example, the following description relates to a CDMA communication system, but such teaching applies equally to other types of systems.
Referring to Fig. 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275 and a mobile switching center (MSC) 280. The MSC 280 is constructed to form an interface with a public switched telephone network (PSTN) 290. The MSC 280 is also constructed to form an interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul links. The backhaul links may be constructed according to any of several known interfaces, including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL or xDSL. It will be appreciated that the system as shown in Fig. 2 may include a plurality of BSCs 275.
Each BS 270 may serve one or more sectors (or regions), each sector being covered by an omnidirectional antenna or an antenna pointing in a specific direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be constructed to support a plurality of frequency assignments, each frequency assignment having a specific spectrum (for example, 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be called a CDMA channel. The BS 270 may also be called a base transceiver subsystem (BTS) or other equivalent term. In such a case, the term "base station" may be used to broadly denote a single BSC 275 and at least one BS 270. A base station may also be called a "cell site". Alternatively, individual sectors of a particular BS 270 may each be called a cell site.
As shown in Fig. 2, a broadcast transmitter (BT) 295 sends broadcast signals to the mobile terminals 100 operating within the system. In Fig. 2, several global positioning system (GPS) satellites 300 are shown. The satellites 300 help to locate at least one of the plurality of mobile terminals 100.
In Fig. 2, a plurality of satellites 300 are depicted, but it will be understood that useful positioning information may be obtained with any number of satellites. Instead of or in addition to GPS tracking technology, other technologies capable of tracking the position of the mobile terminal may be used. In addition, at least one GPS satellite 300 may optionally or additionally process satellite DMB transmissions.
As a typical operation of the wireless communication system, the BS 270 receives reverse link signals from various mobile terminals 100. The mobile terminals 100 typically participate in calls, messaging and other types of communication. Each reverse link signal received by a particular BS 270 is processed by that BS 270. The resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions, including the coordination of soft handover procedures between BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides the additional routing service for forming an interface with the PSTN 290. Similarly, the PSTN 290 forms an interface with the MSC 280, the MSC forms an interface with the BSCs 275, and the BSCs 275 correspondingly control the BSs 270 to send forward link signals to the mobile terminals 100.
Based on the above mobile terminal hardware structure and communication system, the embodiments of the method of the present invention are proposed.
Referring to Fig. 3, a first embodiment of the eye positioning and recognition device of the present invention provides an eye positioning and recognition device, the device comprising:
a capture module 10, configured to collect an image and capture a blinking action from the image.
The eye positioning and recognition method proposed by the embodiments of the present invention, by quickly locating and recognizing the human eye, can be widely applied in fields such as identity authentication and security, for example the security protection of smart terminals, automatic attendance, access control systems and so on.
The present embodiment is illustrated with the encryption of a smart terminal. The eye positioning and recognition device is located on the smart terminal and is used to control eye positioning and recognition.
After the eye positioning and recognition device is started, the capture module 10 first controls the camera to collect an image of the user's face.
For example, if the access rights of the current smart terminal are controlled by eye recognition, then when the user accesses the smart terminal through the user input unit 130 and the sensing unit 140 senses that the current smart terminal is in a closed state, the capture module 10 controls the front camera of the smart terminal to collect an image; at the same time the screen of the smart terminal is lit and a prompt page is displayed by the display unit 151 in the output unit 150, or the audio output module 152 outputs a voice prompt, or the alarm unit 153 prompts the user in the form of vibration to perform a blinking action and to adjust the position relative to the camera so that a clear face image is located within the image acquisition region.
It should be noted that image collection may be realized by the camera 121 in the A/V input unit 120 on the smart terminal, or by an external input device such as a camera accessed through the interface unit 170 on the smart terminal, and can be arranged flexibly according to actual needs.
Then, according to the collected face image of the user, the capture module 10 captures the blinking action by capturing blink feature data. For example, a preset number of images are collected within a preset time, the eye-open feature data or eye-closed feature data of the images is captured and matched with the blinking feature parameters preset in the memory 160; if the eye-open feature data or eye-closed feature data matches the preset blinking feature parameters, an eye-open action or eye-closed action is obtained; and if the obtained eye-open action and eye-closed action satisfy the preset blink time difference, it is determined that a blinking action has been captured.
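As an illustration of this capture step, the following sketch shows how a preset number of frames could be collected within a preset time using OpenCV; the function name, frame count and timing values are assumptions made for the example only and are not prescribed by the patent.

```python
# Hypothetical sketch: collect a preset number of frames within a preset time,
# as the capture module 10 does before blink matching. Values are illustrative.
import time
import cv2

def collect_frames(num_frames: int = 10, max_seconds: float = 3.0):
    """Grab up to num_frames (timestamp, frame) pairs from the camera within max_seconds."""
    cap = cv2.VideoCapture(0)            # 0: default (front) camera
    frames, start = [], time.time()
    try:
        while len(frames) < num_frames and time.time() - start < max_seconds:
            ok, frame = cap.read()
            if ok:
                frames.append((time.time(), frame))
    finally:
        cap.release()
    return frames  # handed to the blink-matching step described below
```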
A positioning module 20, configured to, if the blinking action is successfully captured, locate the eye region in the image according to the blinking action and obtain an eye image.
After the blinking action is successfully captured, the positioning module 20 locates the eye region according to the blinking action and obtains an eye image.
Specifically, as one embodiment, the positioning module 20 first obtains an eye-open image and an eye-closed image according to the captured blinking action, and obtains the image regions in which the eye-open image and the eye-closed image differ, that is, the regions showing motion.
Then, according to the features of the eye contour, the positioning module 20 removes interference regions from these image regions. For example, the mouth may show the same motion characteristics as a blink, but an interference region whose size or location does not match the preset size of the eye contour and the expected blink region can be removed; or, according to the symmetry of the eye contours, an interference region that shows a blink-like motion on only one side of the horizontal direction can be removed.
Thus, the positioning module 20 obtains the image region remaining after the interference regions are removed, locates this region in all the face images as the eye region, and obtains the image of this region as the eye image for eye recognition.
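A minimal sketch of this localization idea follows, assuming the eye-open and eye-closed frames are already aligned images; the thresholds and the symmetry test are illustrative choices, not values given in the patent.

```python
# Hypothetical sketch: locate the eye region by differencing the eye-open and
# eye-closed frames and filtering out interference regions (e.g. the mouth).
import cv2
import numpy as np

def locate_eye_region(open_img, closed_img, min_area=50, max_area=5000):
    g_open = cv2.cvtColor(open_img, cv2.COLOR_BGR2GRAY)
    g_closed = cv2.cvtColor(closed_img, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g_open, g_closed)                      # motion between frames
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if min_area < cv2.contourArea(c) < max_area]     # size filter
    # Symmetry filter: keep regions that have a roughly mirrored partner.
    w = g_open.shape[1]
    eyes = [b for b in boxes
            if any(abs((w - (o[0] + o[2])) - b[0]) < 0.1 * w
                   for o in boxes if o is not b)]
    if not eyes:
        return None
    x0 = min(b[0] for b in eyes); y0 = min(b[1] for b in eyes)
    x1 = max(b[0] + b[2] for b in eyes); y1 = max(b[1] + b[3] for b in eyes)
    return open_img[y0:y1, x0:x1]          # cropped eye image for recognition
```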
An identification module 30, configured to carry out eye recognition according to the eye image.
After the eye image is obtained, the identification module 30 carries out eye recognition according to the eye image.
In the present embodiment, eye recognition can be divided into iris recognition and sclera recognition. The iris is the commonly named colored annulus surrounding the black pupil in the middle of the eye; the sclera is the commonly named white of the eye. Both the iris and the sclera contain biological feature information that can be used for eye recognition.
Specifically, as one embodiment, the identification module 30 obtains the eye image from the eye-open image.
Then, according to the eye image in the eye-open image, the identification module 30 extracts an iris image, extracts the iris feature points of the iris image, and obtains a preset number of iris feature points. The identification module 30 then encodes the obtained iris feature points to obtain a preset number of codes.
Then, the identification module 30 matches the obtained iris feature point codes one by one with the preset codes to carry out eye recognition. If an iris feature point code is completely consistent with a preset code, it is determined that the iris feature point code matches the preset code successfully, that is, the iris of the current user is successfully identified.
Thus, the identification module 30 obtains the eye positioning and recognition result and can manage identity rights according to the recognition result.
It should be noted that the preset codes are preset iris feature point codes used for identity authentication. For example, when controlling the access rights of a smart terminal, the preset codes are the iris feature point codes of users allowed to access the smart terminal; in an access control system, the preset codes are the iris feature point codes of users allowed to open the door.
As another embodiment, after the eye image is obtained, the identification module 30 carries out sclera recognition according to the eye image.
First, the identification module 30 obtains the eye image of the eye-open image and obtains the image information of the sclera according to the eye image. Then, the identification module 30 converts the obtained sclera image information into sclera feature information.
Then, the identification module 30 matches the obtained sclera feature information one by one with the preset sclera feature information to carry out sclera recognition; if the obtained sclera feature information is completely consistent with the preset sclera feature information, it is determined that the obtained sclera feature information matches the preset sclera feature information, that is, the sclera of the current user is successfully identified.
Thus, the identification module 30 obtains the sclera recognition result, which is also the eye recognition result.
In the present embodiment, the capture module 10 collects an image and performs action recognition to capture a blinking action; if the blinking action is successfully captured, the positioning module 20 locates the eye region in the image according to the blinking action and obtains an eye image; the identification module 30 then carries out eye recognition according to the eye image. By collecting the face image of the user, the present embodiment quickly locates the eye region according to the user's blinking action so that an eye image is obtained for eye recognition, which reduces the amount of facial information to be processed, improves the speed and accuracy of eye positioning, and allows eye recognition to be carried out more efficiently.
Further, referring to Fig. 4, a second embodiment of the eye positioning and recognition device of the present invention provides an eye positioning and recognition device. Based on the embodiment shown in Fig. 3 above, the capture module 10 comprises:
a collection unit 11, configured to collect a preset number of images within a preset time and capture feature data from the preset number of images.
When eye positioning and recognition is carried out, the collection unit 11 first controls the camera to collect images of the user's face for action recognition and captures feature data.
Specifically, as one embodiment, the collection unit 11 first prompts the user to blink for action recognition and controls the camera to collect a preset number of images within a preset time. It should be noted that the preset time is a pre-set image collection period and can be set flexibly according to actual needs; the preset number of images to be collected can also be set flexibly according to actual needs, for example according to the performance of the camera and the image analysis performance.
The collection unit 11 may display a prompt page telling the user to blink at least once within the preset time.
Then, according to facial recognition features such as the brightness and contour features of a face image, all the face images are obtained from the collected images.
Then, the collection unit 11 captures the feature data of the face images from all the obtained face images. The captured feature data is used to determine whether the user performs a blinking action and can be obtained from changes in the motion characteristics, color or brightness of the face images.
For example, because the human eye consists mainly of the dark part of the eye (the pupil and iris) and the white of the eye, whose colors differ, the color of the eye image differs noticeably between the open-eye and closed-eye states, while the color of other parts of the face changes little. Therefore, the collection unit 11 can extract the color features of all the face images and use the obtained color feature data of all the face images as the captured feature data.
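For illustration, a simple way to turn each face frame into color feature data of this kind is to take per-frame color statistics; the statistic chosen here (mean channel values over the face crop) is an assumption made for the example only.

```python
# Hypothetical sketch: extract a simple color feature vector from each face frame.
# The mean-color statistic is an illustrative choice; the patent only requires
# feature data that changes between open-eye and closed-eye frames.
import numpy as np

def color_features(face_frames):
    """Return one feature vector (mean B, G, R) per face frame."""
    feats = []
    for frame in face_frames:
        # frame is an HxWx3 BGR array of the detected face region
        feats.append(frame.reshape(-1, 3).mean(axis=0))
    return np.array(feats)           # shape: (num_frames, 3)
```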
A matching unit 12, configured to match the feature data with preset blinking feature parameters; if the feature data matches the preset blinking feature parameters successfully, the blinking action is successfully captured.
After the feature data of all the images is obtained, the matching unit 12 matches the feature data with the preset blinking feature parameters.
Specifically, as one embodiment, the case where the captured feature data is color feature data is used as an illustration.
It should be noted that the preset blinking feature parameters are the color parameters of the face image when the user blinks and a blink time difference. Because a blinking action consists of an eye-open action and an eye-closed action, the color parameters of the face image include the color parameter when the user's eyes are open and the color parameter when the user's eyes are closed. The blink time difference is a preset blink duration that can be set flexibly according to actual needs; for example, if the blink time difference is set to 10 seconds, the user should blink at least once within 10 seconds, that is, the maximum time difference between the collected eye-open image and eye-closed image is 10 seconds. The preset blinking feature parameters can be stored in the memory 160.
First, the matching unit 12 matches the obtained color feature data one by one with the eye-open color parameter; if the similarity between a piece of color feature data and the eye-open color parameter reaches a preset threshold, the matching unit 12 determines that this color feature data is eye-open color feature data, captures an eye-open action, and takes the image from which this eye-open color feature data was obtained as the eye-open image.
The matching unit 12 also matches the obtained color feature data one by one with the eye-closed color parameter; if the similarity between a piece of color feature data and the eye-closed color parameter reaches the preset threshold, the matching unit 12 determines that this color feature data is eye-closed color feature data, captures an eye-closed action, and takes the image from which this eye-closed color feature data was obtained as the eye-closed image.
If both eye-open color feature data and eye-closed color feature data are obtained, the matching unit 12 determines whether the difference between the acquisition times of the eye-open image and the eye-closed image satisfies the preset blink time difference.
Thus, the matching unit 12 completes the matching of the color feature data with the preset blinking feature parameters and obtains a matching result. The matching result includes: whether the color feature data includes eye-open color feature data and eye-closed color feature data; and, if it does, whether the acquisition times of the eye-open image and the eye-closed image satisfy the preset blink time difference.
If the color feature data includes eye-open color feature data and eye-closed color feature data, and the difference between the acquisition times of the corresponding eye-open image and eye-closed image satisfies the preset blink time difference, the matching unit 12 determines that the color feature data matches the preset blinking feature parameters successfully, that is, the image feature data matches the preset blinking feature parameters successfully, and the blinking action is successfully captured.
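The matching logic described above could look roughly as follows; the similarity measure (Euclidean distance), threshold and time window are assumptions made for the sketch, not values specified by the patent.

```python
# Hypothetical sketch: match per-frame color features against preset eye-open /
# eye-closed color parameters and check the blink time difference.
import numpy as np

def detect_blink(feats, timestamps, open_param, closed_param,
                 sim_thresh=15.0, max_blink_gap=10.0):
    """feats: (N, 3) color features; timestamps: N capture times in seconds."""
    open_idx = closed_idx = None
    for i, f in enumerate(feats):
        if np.linalg.norm(f - open_param) < sim_thresh and open_idx is None:
            open_idx = i                     # eye-open action captured
        if np.linalg.norm(f - closed_param) < sim_thresh and closed_idx is None:
            closed_idx = i                   # eye-closed action captured
    if open_idx is None or closed_idx is None:
        return None                          # match failed: recapture images
    if abs(timestamps[open_idx] - timestamps[closed_idx]) > max_blink_gap:
        return None                          # blink time difference not satisfied
    return open_idx, closed_idx              # blink successfully captured
```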
In the present embodiment, the collection unit 11 collects user images for action recognition and captures the feature data of the images; the matching unit 12 then matches the feature data with the preset blinking feature parameters; if the feature data matches the preset blinking feature parameters successfully, the blinking action is successfully captured. By collecting user images for action recognition, the present embodiment realizes the capture of the blinking action, so that the eye position can be quickly located from the region where the blinking action occurs and eye recognition can be carried out. The user operation is simple and convenient, which improves the user experience.
Further, referring to Fig. 4, a third embodiment of the eye positioning and recognition device of the present invention provides an eye positioning and recognition device. Based on the second embodiment of the eye positioning and recognition device of the present invention shown in Fig. 4, the collection unit 11 is further configured to:
if the feature data fails to match the preset blinking feature parameters, determine that the blinking action has not been captured, collect images again, and capture the feature data of the newly collected images.
After the matching of all the color feature data with the preset blinking feature parameters is completed and the matching result is obtained, if the color feature data only contains eye-open color feature data and no eye-closed color feature data, the matching unit 12 determines that the color feature data fails to match the preset blinking feature parameters, that is, the image feature data fails to match the preset blinking feature parameters, and the blinking action has not been captured.
If the color feature data only contains eye-closed color feature data and no eye-open color feature data, the matching unit 12 likewise determines that the color feature data fails to match the preset blinking feature parameters and the blinking action has not been captured.
If the color feature data contains neither eye-open color feature data nor eye-closed color feature data, the matching unit 12 likewise determines that the color feature data fails to match the preset blinking feature parameters and the blinking action has not been captured.
If the color feature data contains eye-open color feature data and eye-closed color feature data, but the difference between the acquisition times of the corresponding eye-open image and eye-closed image does not satisfy the preset blink time difference, the matching unit 12 likewise determines that the color feature data fails to match the preset blinking feature parameters and the blinking action has not been captured.
Then, the collection unit 11 collects user images again for action recognition. For example, the eye positioning and recognition device may prompt the user, in the form of text or voice, that recognition was unsuccessful and that images will be collected again.
After collecting the user images again, the collection unit 11 captures the feature data again.
In the present embodiment, if the blink feature data of the collected images fails to match the preset blinking feature parameters, the blinking action is not captured this time and the collection unit 11 performs image collection and image feature data capture again. The present embodiment realizes that, when blink capture fails because the user did not blink properly or because of other abnormal conditions, images are collected again and the blinking action is captured again, as shown in the sketch below, which improves the capture efficiency of the blinking action.
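A minimal sketch of this retry behavior, reusing the hypothetical collect_frames, color_features and detect_blink helpers sketched above; the retry limit is an assumption added so the example terminates.

```python
# Hypothetical sketch: keep recapturing images until a blink is detected.
# max_attempts is illustrative; the patent does not specify a retry limit.
def capture_blink_with_retry(open_param, closed_param, max_attempts=3):
    for attempt in range(max_attempts):
        captured = collect_frames()                 # see earlier sketch
        stamps = [t for t, _ in captured]
        frames = [f for _, f in captured]           # face cropping omitted for brevity
        feats = color_features(frames)
        result = detect_blink(feats, stamps, open_param, closed_param)
        if result is not None:
            open_idx, closed_idx = result
            return frames[open_idx], frames[closed_idx]
        print("Blink not detected, please blink and try again")  # text prompt
    return None                                     # capture failed
```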
Further, referring to Fig. 5, a fourth embodiment of the eye positioning and recognition device of the present invention provides an eye positioning and recognition device. Based on the first embodiment shown in Fig. 3 above or the second embodiment shown in Fig. 4 (the present embodiment takes Fig. 3 as an example), the identification module 30 comprises:
an iris recognition unit 31, configured to carry out iris recognition according to the eye image.
After the eye image is obtained, the iris recognition unit 31 carries out iris recognition according to the eye image.
Specifically, as one embodiment, the iris recognition unit 31 first obtains the eye image from the eye-open image.
Then, according to preset parameter conditions such as sharpness, the iris recognition unit 31 selects, from the obtained eye images, an eye image that satisfies the preset parameter conditions for extracting the iris image, and obtains the iris image from the selected eye image.
Then, the iris recognition unit 31 extracts the iris feature points of the iris image and obtains a preset number of iris feature points.
Then, the iris recognition unit 31 encodes the obtained iris feature points to obtain a preset number of codes.
Then, the iris recognition unit 31 matches the obtained iris feature point codes one by one with the preset codes. If an iris feature point code is completely consistent with a preset code, it is determined that the iris feature point code matches the preset code successfully.
After all the iris feature point codes of the iris image have been matched, the iris recognition unit 31 obtains the number of iris feature point codes in the iris image that match the preset codes successfully.
Then, the iris recognition unit 31 calculates the proportion of successfully matched feature codes among all the iris feature point codes of the iris image; if the obtained proportion is greater than or equal to a preset threshold, it is determined that the iris of the current user is successfully identified; if the obtained proportion is less than the preset threshold, it is determined that the iris of the current user is not successfully identified. The preset threshold is used to avoid recognition failure caused by a few code mismatches due to iris feature point coding errors arising from image quality and other reasons, and its size can be set flexibly according to actual needs.
Thus, the iris recognition unit 31 obtains the eye positioning and recognition result and can manage identity rights according to the recognition result.
It should be noted that the preset codes are preset iris feature point codes used for identity authentication. For example, when controlling the access rights of a smart terminal, the preset codes are the iris feature point codes of users allowed to access the smart terminal; in an access control system, the preset codes are the iris feature point codes of users allowed to open the door.
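For the sharpness-based selection step mentioned above, one common measure is the variance of the Laplacian; using it here is an assumption of the sketch, since the patent only states that an eye image satisfying a preset sharpness condition is chosen.

```python
# Hypothetical sketch: pick the eye image that satisfies a sharpness condition,
# using variance of the Laplacian as the sharpness measure (illustrative choice).
import cv2

def select_sharpest(eye_images, min_sharpness=100.0):
    best, best_score = None, 0.0
    for img in eye_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()   # higher = sharper
        if score > best_score:
            best, best_score = img, score
    return best if best_score >= min_sharpness else None
```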
A sclera recognition unit 32, configured to carry out sclera recognition according to the eye image.
After the eye image is obtained, the sclera recognition unit 32 carries out sclera recognition according to the eye image.
Specifically, as one embodiment, the sclera recognition unit 32 first obtains the eye image of the eye-open image.
Then, according to preset parameter conditions such as sharpness, the sclera recognition unit 32 selects, from the obtained eye images, an eye image that satisfies the preset parameter conditions for extracting the sclera image, and obtains the image information of the sclera. For example, because the sclera is mainly white, the sclera image information can be obtained according to the color features of the image.
Then, the sclera recognition unit 32 converts the obtained sclera image information into sclera feature information.
Then, the sclera recognition unit 32 matches the obtained sclera feature information one by one with the preset sclera feature information; if the obtained sclera feature information is completely consistent with the preset sclera feature information, it is determined that the obtained sclera feature information matches the preset sclera feature information.
Because of factors such as the pixel resolution and lighting of the collected image, it is difficult for every piece of collected sclera feature information to be completely consistent with the preset information. Therefore, the sclera recognition unit 32 sets a proportion threshold in advance to allow a certain proportion of information error.
If the matching proportion between the obtained sclera feature information and the preset sclera feature information reaches the preset proportion threshold, the sclera recognition unit 32 determines that the sclera recognition of the current user is successful; if it does not reach the preset proportion threshold, the sclera recognition unit 32 determines that the sclera recognition of the current user is unsuccessful.
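A minimal sketch of this proportion-threshold matching follows, assuming the sclera feature information has already been converted to fixed-length feature vectors; the vector form and the 0.85 threshold are assumptions made for the example.

```python
# Hypothetical sketch: match sclera feature vectors against an enrolled template,
# allowing a proportion of mismatches as described above.
import numpy as np

def sclera_match(features: np.ndarray, template: np.ndarray,
                 proportion_threshold: float = 0.85) -> bool:
    """features, template: equal-length feature vectors of the same type."""
    agreement = np.count_nonzero(features == template) / features.size
    return agreement >= proportion_threshold    # True: sclera recognized
```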
Thus, the sclera recognition unit 32 obtains the sclera recognition result, which is also the eye recognition result.
In the present embodiment, the iris recognition unit 31 carries out iris recognition according to the obtained eye image, and the sclera recognition unit 32 carries out sclera recognition according to the obtained eye image. The present embodiment realizes that, after the eye position is quickly located according to the blinking action, the eyes can be identified by either sclera recognition or iris recognition; the multiple eye recognition modes improve the efficiency of eye recognition.
Further, referring to Fig. 5, a fifth embodiment of the eye positioning and recognition device of the present invention provides an eye positioning and recognition device. Based on the fourth embodiment of the eye positioning and recognition device of the present invention shown in Fig. 5, the iris recognition unit 31 is further configured to:
obtain an iris image according to the eye image;
preprocess the iris image;
extract iris feature points from the preprocessed iris image, and carry out iris recognition according to the iris feature points.
After the eye image is obtained, the iris recognition unit 31 can carry out iris recognition of the eyes according to the eye image.
Specifically, as one embodiment, the iris recognition unit 31 first selects, from the obtained eye images and according to preset parameter conditions such as sharpness, an eye image that satisfies the preset parameter conditions for extracting the iris image, and obtains the iris image from the selected eye image.
The iris is an annular structure between the black pupil and the white sclera, containing many interlaced fine features such as spots, filaments, coronae, stripes and crypts. Moreover, once the iris is formed during prenatal development, it remains unchanged throughout life. The diameter of the iris is generally about 11 millimeters and, lying between the sclera and the pupil, it contains abundant texture information.
Thus, the iris recognition unit 31 can capture the iris image according to the color features of the iris and the like.
After the iris image is obtained, the iris recognition unit 31 preprocesses the iris image to improve the distinguishability of the information in the iris image.
Specifically, as one embodiment, the iris recognition unit 31 first performs iris localization according to the iris image, determining the positions of the inner circle, the outer circle and the quadratic curves of the iris in the image. The inner circle is the boundary between the iris and the pupil, the outer circle is the boundary between the iris and the sclera, and the quadratic curves are the boundaries between the iris and the upper and lower eyelids.
Then, the iris recognition unit 31 normalizes the iris image, that is, adjusts the iris in the iris image to a preset fixed size to facilitate recognition.
Then, the iris recognition unit 31 performs iris image enhancement on the normalized iris image, carrying out processing such as brightness, contrast and smoothness adjustment to improve the distinguishability of the iris features in the iris image.
Thus, the iris recognition unit 31 completes the preprocessing of the iris image and improves the distinguishability of the iris features.
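As an illustration of these preprocessing steps (localization, normalization, enhancement), the sketch below uses Hough circle detection, a fixed-size resize and histogram equalization; these particular operations are assumptions of the example, since the patent describes the steps only in general terms.

```python
# Hypothetical sketch: localize the iris with Hough circles, normalize it to a
# fixed size, and enhance contrast. Parameter values are illustrative only.
import cv2
import numpy as np

def preprocess_iris(eye_image, out_size=(128, 128)):
    gray = cv2.cvtColor(eye_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    # Iris localization: detect the outer circle (iris/sclera boundary)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                               param1=100, param2=30,
                               minRadius=20, maxRadius=80)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    iris = gray[max(y - r, 0):y + r, max(x - r, 0):x + r]
    # Normalization: rescale the iris region to a preset fixed size
    iris = cv2.resize(iris, out_size)
    # Enhancement: histogram equalization improves contrast of the iris texture
    return cv2.equalizeHist(iris)
```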
After the iris preprocessing, the iris recognition unit 31 carries out iris recognition according to the preprocessed iris image.
Specifically, as one embodiment, the iris recognition unit 31 first extracts iris feature points from the processed iris image. In the present embodiment, the iris feature information of each square millimeter is represented by three to four bytes of data; in this way, one iris has about 266 quantized iris feature points, and about 173 binary degrees of freedom of independent iris feature points can be obtained.
Then, the iris recognition unit 31 encodes the extracted iris feature points to obtain iris feature point codes.
Then, the iris recognition unit 31 matches the obtained iris feature point codes one by one with the preset codes. If an extracted iris feature point code is completely consistent with a preset code, it is determined that the iris feature point code matches the preset code successfully.
After all the iris feature point codes of the iris image have been matched, the iris recognition unit 31 obtains the number of iris feature point codes in the iris image that match the preset codes successfully.
Then, the iris recognition unit 31 calculates the proportion of successfully matched feature codes among all the iris feature point codes of the iris image; if the obtained proportion is greater than or equal to a preset threshold, iris recognition is determined to be successful; if it is less than the preset threshold, iris recognition is determined to have failed. The preset threshold is used to avoid recognition failure caused by a few code mismatches due to iris feature point coding errors arising from image quality and other reasons, and its size can be set flexibly according to actual needs.
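The code-by-code matching with a proportion threshold could be sketched as follows; the element-wise comparison and the 0.75 threshold are assumptions made for the example, not values given in the patent.

```python
# Hypothetical sketch: compare the iris feature point codes of a probe iris with
# the preset (enrolled) codes and decide recognition by the matched proportion.
import numpy as np

def iris_match(probe_codes: np.ndarray, preset_codes: np.ndarray,
               ratio_threshold: float = 0.75) -> bool:
    """probe_codes, preset_codes: arrays of equal length, one code per feature point."""
    matched = np.count_nonzero(probe_codes == preset_codes)  # exact code matches
    ratio = matched / probe_codes.size
    return ratio >= ratio_threshold      # True: iris successfully recognized
```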
Thus, the iris recognition unit 31 obtains the iris recognition result, which is also the eye recognition result.
It should be noted that the preset codes are preset iris feature point codes used for identity authentication. For example, when controlling the access rights of a smart terminal, the preset codes are the iris feature point codes of users allowed to access the smart terminal; in an access control system, the preset codes are the iris feature point codes of users allowed to open the door.
In the present embodiment, the iris recognition unit 31 obtains an iris image according to the eye image, preprocesses the iris image to improve the distinguishability of the iris feature points, then extracts iris feature points from the preprocessed iris image and carries out iris recognition according to the iris feature points. The present embodiment improves the efficiency and accuracy of iris recognition when iris recognition is carried out according to the eye image.
Referring to Fig. 6, a first embodiment of the eye positioning and recognition method of the present invention provides an eye positioning and recognition method, the method comprising:
step S10, collecting an image and capturing a blinking action from the image.
The eye positioning and recognition method proposed by the embodiments of the present invention, by quickly locating and recognizing the human eye, can be widely applied in fields such as identity authentication and security, for example the security protection of smart terminals, automatic attendance, access control systems and so on.
The present embodiment is illustrated with the encryption of a smart terminal. The eye positioning and recognition device is located on the smart terminal and is used to control eye positioning and recognition.
After the eye positioning and recognition device is started, the device first controls the camera to collect an image of the user's face.
For example, if the access rights of the current smart terminal are controlled by eye recognition, then when the user accesses the smart terminal through the user input unit 130 and the sensing unit 140 senses that the current smart terminal is in a closed state, the eye positioning and recognition device controls the front camera of the smart terminal to collect an image; at the same time the screen of the smart terminal is lit and a prompt page is displayed by the display unit 151 in the output unit 150, or the audio output module 152 outputs a voice prompt, or the alarm unit 153 prompts the user in the form of vibration to perform a blinking action and to adjust the position relative to the camera so that a clear face image is located within the image acquisition region.
It should be noted that image collection may be realized by the camera 121 in the A/V input unit 120 on the smart terminal, or by an external input device such as a camera accessed through the interface unit 170 on the smart terminal, and can be arranged flexibly according to actual needs.
Then, according to the collected face image of the user, the eye positioning and recognition device captures the blinking action by capturing blink feature data. For example, a preset number of images are collected within a preset time, the eye-open feature data or eye-closed feature data of the images is captured and matched with the blinking feature parameters preset in the memory 160; if the eye-open feature data or eye-closed feature data matches the preset blinking feature parameters, an eye-open action or eye-closed action is obtained; and if the obtained eye-open action and eye-closed action satisfy the preset blink time difference, it is determined that a blinking action has been captured.
Step S20: if the blinking action is successfully captured, positioning the eye region in the image according to the blinking action and obtaining an eye image.
After the blinking action is successfully captured, the eye positioning and recognition device positions the eye region according to the blinking action and obtains an eye image.
Specifically, as one embodiment, the eye positioning and recognition device first obtains an eye-open image and an eye-closed image according to the captured blinking action, and obtains the image region in which the eye-open image and the eye-closed image differ in motion characteristics.
Then, interference regions are removed from the image region according to the contour features of the eyes. For example, the mouth may show motion characteristics similar to blinking, but regions whose size does not meet the preset eye contour size, or that lie outside the area where the blinking action occurred, can be removed; alternatively, according to the symmetry of the eye contours, interference regions in which a blink-like motion appears on only one side of the horizontal direction can be removed.
Thus, the image region remaining after the interference regions are removed is obtained; this region, located in all the face images, is the eye region, and the image of this region is obtained as the eye image for eye recognition.
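By way of illustration only, and not as a limitation of the claimed implementation, the frame-difference localization just described can be sketched in Python with OpenCV. The binarization threshold and the contour-area limits below are assumptions chosen for this sketch.

import cv2

def locate_eye_regions(open_img_bgr, closed_img_bgr, min_area=100, max_area=5000):
    # Difference the eye-open and eye-closed frames, keep the regions that
    # changed, and filter out interference (such as the mouth) whose contour
    # size does not fall within a preset range.
    open_gray = cv2.cvtColor(open_img_bgr, cv2.COLOR_BGR2GRAY)
    closed_gray = cv2.cvtColor(closed_img_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(open_gray, closed_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    boxes = [cv2.boundingRect(c) for c in contours
             if min_area <= cv2.contourArea(c) <= max_area]
    # A fuller version would also use the left/right symmetry of the two eye
    # boxes to discard regions that have no horizontally mirrored partner.
    return boxes

The returned bounding boxes can then be cropped from the eye-open image to obtain the eye image used for recognition.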
Step S30: performing eye recognition according to the eye image.
After the eye image is obtained, the eye positioning and recognition device performs eye recognition according to the eye image.
In the present embodiment, eye recognition can be divided into iris recognition and sclera recognition. The iris is the commonly named annulus surrounding the central black pupil; the sclera is the commonly named white of the eye. Both the iris and the sclera contain biological characteristic information that can be used for eye recognition.
Specifically, as one embodiment, the eye positioning and recognition device obtains the eye image in the eye-open image.
Then, according to the eye image in the eye-open image, the device extracts an iris image, extracts the iris feature points of the iris image, and obtains a predetermined number of iris feature points. The device then encodes the obtained iris feature points to obtain a predetermined number of codes.
Then, the device matches the obtained iris feature point codes one by one against the preset codes to perform eye recognition. If an iris feature point code is completely consistent with a preset code, the iris feature point code is judged to match the preset code successfully, that is, the iris of the current user is successfully identified.
Thus, the eye positioning and recognition device obtains the positioning and recognition result of the eyes, and identity authority can be managed according to the recognition result.
It should be noted that the preset codes are preset iris feature point codes used for identity authentication. For example, when controlling the access rights of an intelligent terminal, the preset codes are the iris feature point codes of users allowed to access the terminal; in an access control system, the preset codes are the iris feature point codes of users allowed to open the door.
As another embodiment, after the eye image is obtained, the eye positioning and recognition device performs sclera recognition according to the eye image.
First, the device obtains the eye image from the eye-open image and obtains the image information of the sclera according to the eye image. Then, the device converts the obtained sclera image information into sclera characteristic information.
Then, the device matches the obtained sclera characteristic information one by one against the preset sclera characteristic information to perform sclera recognition; if the obtained sclera characteristic information is completely consistent with the preset sclera characteristic information, the obtained sclera characteristic information is judged to match the preset information, that is, the sclera of the current user is successfully identified.
Thus, the device obtains the sclera recognition result, which is also the eye recognition result.
In the present embodiment, the eye positioning and recognition device performs action recognition by collecting images and captures the blinking action; if the blinking action is successfully captured, the eye region is positioned in the image according to the blinking action and an eye image is obtained; then, eye recognition is performed according to the eye image. By collecting face images of the user, the present embodiment achieves fast positioning of the eye region according to the user's blinking action and thereby obtains the eye image for eye recognition, reducing the amount of data processing on facial information, improving the speed and accuracy of eye positioning, and making eye recognition more efficient.
Further, with reference to Fig. 7, a second embodiment of the eye positioning and recognition method of the present invention provides an eye positioning and recognition method. Based on the embodiment shown in Fig. 6, step S10 comprises:
Step S11: collecting a predetermined number of images within a preset time, and capturing characteristic data from the predetermined number of images.
When performing eye positioning and recognition, the eye positioning and recognition device first controls the camera to collect images of the user's face, performs action recognition, and captures characteristic data.
Specifically, as one embodiment, the eye positioning and recognition device first prompts the user to perform a blinking action for recognition, and controls the camera to collect a predetermined number of images within a preset time. It should be noted that the preset time is a pre-set image collection duration and can be set flexibly according to actual needs; the predetermined number of collected images can likewise be set flexibly according to factors such as camera performance and image analysis performance.
Within the preset time, the eye positioning and recognition device may pop up a prompt page reminding the user to blink at least once.
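As a minimal sketch of the timed collection just described, and not the claimed implementation itself, the following Python code gathers a predetermined number of frames within a preset time and timestamps each frame so the blink action time difference can be checked later. OpenCV, the default camera index, the frame count, and the time limit are assumptions of this sketch.

import time
import cv2

def collect_frames(camera_index=0, max_frames=30, time_limit_s=10.0):
    # Collect up to max_frames images within time_limit_s seconds and record a
    # timestamp for each frame, so that the blink action time difference can be
    # checked against the preset value later.
    cap = cv2.VideoCapture(camera_index)
    frames = []
    start = time.time()
    while len(frames) < max_frames and (time.time() - start) < time_limit_s:
        ok, frame = cap.read()
        if ok:
            frames.append((time.time() - start, frame))
    cap.release()
    return frames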
Then, according to face recognition features of the face image such as brightness and contour features, all the user face images are obtained from the collected images.
Then, characteristic data of the face images are captured from all the obtained face images. The captured characteristic data are used to judge whether the user performs a blinking action and can be obtained from changes of motion, color, or brightness in the face images.
For example, because the human eye is mainly composed of the pupil and the white of the eye, whose colors differ, the eye image colors of the open and closed eyes differ obviously, while the colors of other parts of the face change little; therefore, the color features of all face images can be extracted, and the color feature data of all the obtained face images are used as the captured characteristic data.
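One simple way to realize the color feature data described above, shown here only as an illustrative sketch, is a coarse colour histogram per face image; the 8x8x8 bin layout is an assumption of this sketch rather than a requirement of the method.

import cv2

def color_feature(face_image_bgr):
    # Describe each collected face image by a coarse 8x8x8 colour histogram:
    # eye-open and eye-closed frames differ mainly around the eyes, so their
    # histograms differ while the rest of the face stays comparatively stable.
    hist = cv2.calcHist([face_image_bgr], [0, 1, 2], None,
                        [8, 8, 8], [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()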
Step S12: matching the characteristic data against the preset blink characteristic parameters.
After the characteristic data in all images are obtained, the characteristic data are matched against the preset blink characteristic parameters.
Specifically, as one embodiment, the case in which the captured characteristic data are color feature data is used for illustration.
It should be noted that the preset blink characteristic parameters are the color parameters of the face image when the user blinks and the blink action time difference. Because a blinking action consists of an eye-open action and an eye-closed action, the color parameters of the face image include the color parameter when the user's eyes are open and the color parameter when the user's eyes are closed. The blink action time difference is a preset time difference and can be set flexibly according to actual needs; for example, if the blink action time difference is set to 10 seconds, the user should blink at least once within 10 seconds, that is, the maximum time difference between the collected eye-open image and eye-closed image is 10 seconds. The preset blink characteristic parameters can be stored in memory 160.
First, the obtained color feature data are matched one by one against the eye-open color parameter. If the similarity between a piece of color feature data and the eye-open color parameter reaches a preset threshold, that color feature data is judged to be eye-open color feature data, an eye-open action is captured, and the image from which the color feature data was obtained is taken as the eye-open image.
The obtained color feature data are also matched one by one against the eye-closed color parameter. If the similarity between a piece of color feature data and the eye-closed color parameter reaches the preset threshold, that color feature data is judged to be eye-closed color feature data, an eye-closed action is captured, and the image from which the color feature data was obtained is taken as the eye-closed image.
If both eye-open color feature data and eye-closed color feature data are obtained, it is judged whether the acquisition time difference between the eye-open image and the eye-closed image satisfies the preset blink action time difference.
Thus, the matching of the color feature data against the preset blink characteristic parameters is completed and a matching result is obtained. The matching result comprises: whether the color feature data include eye-open color feature data and eye-closed color feature data; and, if so, whether the acquisition time difference between the eye-open image and the eye-closed image satisfies the preset blink action time difference.
Step S13: if the characteristic data match the preset blink characteristic parameters successfully, the blinking action is successfully captured.
After the matching of all color feature data against the preset blink characteristic parameters is completed and the matching result is obtained, if the color feature data include both eye-open color feature data and eye-closed color feature data, and the acquisition time difference between the corresponding eye-open image and eye-closed image satisfies the preset blink action time difference, the color feature data are judged to match the preset blink characteristic parameters successfully, that is, the image characteristic data match the preset blink characteristic parameters successfully and the blinking action is successfully captured.
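An illustrative sketch of the matching and judgment in steps S12 and S13 follows; the cosine similarity measure, the similarity threshold of 0.9, and the 10-second time difference are assumptions introduced for the sketch, not values fixed by the invention.

import numpy as np

def similarity(a, b):
    # Cosine similarity between two feature vectors (the similarity measure is
    # an assumption of this sketch; the method only requires some measure).
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def capture_blink(features, open_param, closed_param,
                  sim_threshold=0.9, max_time_diff_s=10.0):
    # features: list of (timestamp, feature_vector) for the collected frames.
    # A blink is captured when one frame matches the eye-open parameter, another
    # matches the eye-closed parameter, and their acquisition times differ by no
    # more than the preset blink action time difference.
    open_hits = [(t, i) for i, (t, f) in enumerate(features)
                 if similarity(f, open_param) >= sim_threshold]
    closed_hits = [(t, i) for i, (t, f) in enumerate(features)
                   if similarity(f, closed_param) >= sim_threshold]
    for t_open, i_open in open_hits:
        for t_closed, i_closed in closed_hits:
            if abs(t_open - t_closed) <= max_time_diff_s:
                return True, i_open, i_closed
    return False, None, None

The two returned indices identify the eye-open image and the eye-closed image used in step S20.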
In the present embodiment, action recognition is performed by collecting user images and capturing characteristic data of the images; the characteristic data are then matched against the preset blink characteristic parameters; if the characteristic data match the preset blink characteristic parameters successfully, the blinking action is successfully captured. By performing action recognition on collected user images, the present embodiment realizes the capture of the blinking action, so that the eye position can be quickly positioned according to the region where the blinking action occurs and eye recognition can be performed; the user operation is simple and convenient, improving user experience.
Further, with reference to Fig. 8, a third embodiment of the eye positioning and recognition method of the present invention provides an eye positioning and recognition method. Based on the embodiment shown in Fig. 7, after step S12 the method further comprises:
Step S14: if the characteristic data fail to match the preset blink characteristic parameters, the blinking action is not captured, and the method returns to step S11.
After the matching of all color feature data against the preset blink characteristic parameters is completed and the matching result is obtained, if the color feature data include only eye-open color feature data and no eye-closed color feature data, the color feature data are judged to fail to match the preset blink characteristic parameters, that is, the image characteristic data fail to match the preset blink characteristic parameters and the blinking action is not captured.
If the color feature data include only eye-closed color feature data and no eye-open color feature data, the color feature data are likewise judged to fail to match the preset blink characteristic parameters, and the blinking action is not captured.
If the color feature data include neither eye-open color feature data nor eye-closed color feature data, the color feature data are judged to fail to match the preset blink characteristic parameters, and the blinking action is not captured.
If the color feature data include both eye-open color feature data and eye-closed color feature data, but the acquisition time difference between the corresponding eye-open image and eye-closed image does not satisfy the preset blink action time difference, the color feature data are judged to fail to match the preset blink characteristic parameters, and the blinking action is not captured. The failure cases and the single success case can be summarized as the small decision table sketched below.
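The sketch below is illustrative only; the outcome labels are assumptions introduced for readability.

def match_result(has_open, has_closed, time_diff_ok):
    # Success requires eye-open data, eye-closed data, and an acquisition time
    # difference within the preset blink action time difference; every other
    # combination is one of the failure cases listed above.
    if has_open and has_closed and time_diff_ok:
        return "blink captured"
    if has_open and not has_closed:
        return "failed: no eye-closed data"
    if has_closed and not has_open:
        return "failed: no eye-open data"
    if not has_open and not has_closed:
        return "failed: neither eye-open nor eye-closed data"
    return "failed: time difference not satisfied"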
Then, the eye positioning and recognition device re-collects user images and performs action recognition again. For example, the device may prompt the user in text or voice that recognition was unsuccessful and that images will be collected again.
After the user images are collected again, characteristic data are captured again and matched against the preset blink characteristic parameters.
In the present embodiment, if the blink characteristic data of the collected images fail to match the preset blink characteristic parameters, the blinking action is not captured this time, and image collection and characteristic data capture are performed again. The present embodiment thereby handles capture failures caused by the user not blinking properly or by other abnormal conditions: images are collected again and the blinking action is captured again, which improves the capture efficiency of the blinking action.
Further, with reference to Fig. 9, a fourth embodiment of the eye positioning and recognition method of the present invention provides an eye positioning and recognition method. Based on the embodiment shown in Fig. 6 or Fig. 7 (the present embodiment takes Fig. 6 as an example), step S30 comprises:
Step S31: performing iris recognition according to the eye image.
After the eye image is obtained, the eye positioning and recognition device performs iris recognition according to the eye image.
Specifically, as one embodiment, first the eye image in the eye-open image is obtained.
Then, according to a preset parameter condition such as sharpness, one eye image satisfying the preset parameter condition is selected from the obtained eye images for extracting the iris image, and the iris image is obtained from the selected eye image.
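By way of illustration, the "preset parameter condition such as sharpness" can be realized by scoring each candidate eye image with the variance of its Laplacian and keeping the sharpest one; this particular sharpness measure is an assumption of the sketch, not the only possible condition.

import cv2

def pick_sharpest(eye_images_bgr):
    # Score each candidate eye image by the variance of its Laplacian (a common
    # sharpness proxy) and keep the sharpest one for iris extraction.
    def sharpness(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()
    return max(eye_images_bgr, key=sharpness)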
Then, the iris feature points of the iris image are extracted, and a predetermined number of iris feature points are obtained.
Then, the obtained iris feature points are encoded, and a predetermined number of codes are obtained.
Then, the obtained iris feature point codes are matched one by one against the preset codes. If an iris feature point code is completely consistent with a preset code, the iris feature point code is judged to match the preset code successfully.
After the matching of all iris feature point codes of the iris image is completed, the number of iris feature point codes in the iris image that successfully match the preset codes is obtained.
Then, the eye positioning and recognition device calculates the ratio of successfully matched feature codes to all iris feature point codes in the iris image; if the obtained ratio is greater than or equal to a preset threshold, the iris of the current user is judged to be successfully identified; if the ratio is less than the preset threshold, the iris of the current user is judged not to be successfully identified. The preset threshold is used to tolerate coding errors of individual iris feature points caused by factors such as image quality, which would otherwise cause some codes to fail to match; the threshold can be set flexibly according to actual needs.
Thus, the eye positioning and recognition device obtains the positioning and recognition result of the eyes, and identity authority can be managed according to the recognition result.
It should be noted that the preset codes are preset iris feature point codes used for identity authentication. For example, when controlling the access rights of an intelligent terminal, the preset codes are the iris feature point codes of users allowed to access the terminal; in an access control system, the preset codes are the iris feature point codes of users allowed to open the door.
Step S32: performing sclera recognition according to the eye image.
After the eye image is obtained, the eye positioning and recognition device performs sclera recognition according to the eye image.
Specifically, as one embodiment, the eye positioning and recognition device first obtains the eye image from the eye-open image.
Then, according to a preset parameter condition such as sharpness, one eye image satisfying the preset parameter condition is selected from the obtained eye images for extracting the sclera image, and the image information of the sclera is obtained. For example, because the sclera is mainly white, the sclera image information can be obtained according to the color features of the image.
Then, the obtained sclera image information is converted into sclera characteristic information.
Then, the obtained sclera characteristic information is matched one by one against the preset sclera characteristic information; if the obtained sclera characteristic information is completely consistent with the preset sclera characteristic information, the obtained sclera characteristic information is judged to match the preset sclera characteristic information.
Owing to factors such as the resolution and lighting of the collected images, it is difficult for every piece of collected sclera characteristic information to be completely consistent with the preset information; therefore, the eye positioning and recognition device presets a proportion threshold that allows a certain proportion of information error.
If the matching proportion between the obtained sclera characteristic information and the preset sclera characteristic information reaches the preset proportion threshold, the device judges the sclera recognition of the current user to be successful; if the matching proportion does not reach the preset proportion threshold, the device judges the sclera recognition of the current user to be unsuccessful.
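A minimal sketch of the proportion-threshold comparison described above follows; representing the sclera characteristic information as a numeric vector, the 0.8 proportion threshold, and the 5% per-element tolerance are all assumptions of this sketch.

import numpy as np

def sclera_match(features, preset_features, proportion_threshold=0.8):
    # Compare the sclera characteristic information element by element; because
    # lighting and pixel noise make an exact match unlikely, the decision uses a
    # preset proportion threshold rather than requiring full consistency.
    features = np.asarray(features, dtype=float)
    preset_features = np.asarray(preset_features, dtype=float)
    matched = np.isclose(features, preset_features, rtol=0.05)
    proportion = float(np.mean(matched))
    return proportion >= proportion_threshold, proportion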
Thus, the eye positioning and recognition device obtains the sclera recognition result, which is also the eye recognition result.
In the present embodiment, the eye positioning and recognition device performs iris recognition according to the obtained eye image, or performs sclera recognition according to the obtained eye image. After the eye position is quickly positioned according to the blinking action, recognition of the eyes can be completed either by sclera recognition or by iris recognition; the multiple eye recognition modes improve the efficiency of eye recognition.
Further, with reference to Figure 10, a fifth embodiment of the eye positioning and recognition method of the present invention provides an eye positioning and recognition method. Based on the embodiment shown in Fig. 9, step S31 comprises:
Step S311: obtaining an iris image according to the eye image.
After the eye image is obtained, the eye positioning and recognition device can perform iris recognition of the eyes according to the eye image.
Specifically, as one embodiment, first, according to a preset parameter condition such as sharpness, one eye image satisfying the preset parameter condition is selected from the obtained eye images for extracting the iris image, and the iris image is obtained from the selected eye image.
The iris is the annular structure between the black pupil and the white sclera. It contains many interlaced minutiae such as spots, filaments, coronae, stripes, and crypts. Once formed during the prenatal development stage, the iris remains unchanged throughout life. The diameter of the iris is generally about 11 millimeters, and the region between the sclera and the pupil contains abundant texture information.
Thus, the eye positioning and recognition device can capture the iris image according to features such as the color of the iris.
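As an illustrative sketch only, the iris image can be located by finding the roughly circular pupil/iris with a Hough circle transform and cropping the surrounding region; all numeric parameters below are assumptions that would need tuning for a real camera and are not prescribed by the invention.

import cv2
import numpy as np

def extract_iris_image(eye_image_bgr):
    # Locate the roughly circular pupil/iris with a Hough circle transform and
    # crop a square region covering the iris annulus around it.
    gray = cv2.cvtColor(eye_image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=30, minRadius=10, maxRadius=80)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    pad = 2 * r
    h, w = gray.shape
    y0, y1 = max(0, y - pad), min(h, y + pad)
    x0, x1 = max(0, x - pad), min(w, x + pad)
    return eye_image_bgr[y0:y1, x0:x1]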
Step S312: pre-processing the iris image to improve the distinguishability of iris feature points.
After the iris image is obtained, it is pre-processed to improve the distinguishability of the information in the iris image.
Specifically, as one embodiment, iris localization is first performed on the iris image to determine the positions of the inner circle, the outer circle, and the quadratic curves in the image, where the inner circle is the boundary between the iris and the pupil, the outer circle is the boundary between the iris and the sclera, and the quadratic curves are the boundaries between the iris and the upper and lower eyelids.
Then, the iris image is normalized, that is, the iris in the iris image is adjusted to a preset fixed size to facilitate recognition.
Then, iris image enhancement is performed: processing such as brightness, contrast, and smoothness adjustment is applied to the normalized iris image to improve the distinguishability of the iris features in the image.
Thus, the pre-processing of the iris image is completed and the distinguishability of the iris features is improved.
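A minimal stand-in for the normalization and enhancement steps is sketched below; resizing to a fixed rectangle and histogram equalization are simplifications assumed for this sketch (a full implementation would first unwrap the annulus between the located inner and outer boundaries into a rectangle).

import cv2

def preprocess_iris(iris_image_bgr, size=(256, 64)):
    # Resize the located iris region to a preset fixed size (normalization) and
    # apply histogram equalization (enhancement) to improve the contrast of the
    # iris texture before feature extraction.
    gray = cv2.cvtColor(iris_image_bgr, cv2.COLOR_BGR2GRAY)
    normalized = cv2.resize(gray, size)
    return cv2.equalizeHist(normalized)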
Step S313: extracting iris feature points from the pre-processed iris image, and performing iris recognition according to the iris feature points.
After the iris is pre-processed, iris recognition is performed according to the pre-processed iris image.
Specifically, as one embodiment, iris feature points are first extracted from the processed iris image. In the present embodiment, the iris feature information of each square millimeter is represented by 3 to 4 bytes of data; in this way, an iris has about 266 quantized iris feature points, from which about 173 independent binary degrees of freedom can be obtained.
Then, the extracted iris feature points are encoded to obtain iris feature point codes.
Then, the obtained iris feature point codes are matched one by one against the preset codes. If an extracted iris feature point code is completely consistent with a preset code, the iris feature point code is judged to match the preset code successfully.
After the matching of all iris feature point codes of the iris image is completed, the number of iris feature point codes in the iris image that successfully match the preset codes is obtained.
Then, the ratio of successfully matched feature codes to all iris feature point codes in the iris image is calculated; if the obtained ratio is greater than or equal to a preset threshold, iris recognition is judged successful; if the ratio is less than the preset threshold, iris recognition is judged to have failed. The preset threshold is used to tolerate coding errors of individual iris feature points caused by factors such as image quality, which would otherwise cause some codes to fail to match; the threshold can be set flexibly according to actual needs.
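By way of illustration only, the feature coding and the ratio check described above can be sketched as follows. The grid-based binarized code is a simplified stand-in for real iris feature point coding (for example Gabor phase codes), and the 0.75 ratio threshold is an assumption of the sketch.

import numpy as np

def encode_iris(enhanced_iris, grid=(8, 32)):
    # Simplified stand-in for iris feature point coding: divide the normalized
    # iris into a grid of cells and binarize each cell's mean intensity against
    # the global mean intensity.
    h, w = enhanced_iris.shape
    gh, gw = grid
    h2, w2 = h - h % gh, w - w % gw
    cells = enhanced_iris[:h2, :w2].reshape(gh, h2 // gh, gw, w2 // gw).mean(axis=(1, 3))
    return (cells > enhanced_iris.mean()).astype(np.uint8).flatten()

def iris_match(code, preset_code, ratio_threshold=0.75):
    # The decision described above: the fraction of feature codes that agree
    # with the preset code must reach the preset threshold, which tolerates a
    # few codes corrupted by image quality.
    code = np.asarray(code)
    preset_code = np.asarray(preset_code)
    ratio = float(np.mean(code == preset_code))
    return ratio >= ratio_threshold, ratio

For example, ok, ratio = iris_match(encode_iris(preprocessed_image), preset_code) would yield the recognition decision together with the matching ratio.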
Thus, the eye positioning and recognition device obtains the iris recognition result, and identity authority can be managed according to the recognition result.
It should be noted that the preset codes are preset iris feature point codes used for identity authentication. For example, when controlling the access rights of an intelligent terminal, the preset codes are the iris feature point codes of users allowed to access the terminal; in an access control system, the preset codes are the iris feature point codes of users allowed to open the door.
In the present embodiment, the eye positioning and recognition device obtains an iris image according to the eye image; the iris image is then pre-processed to improve the distinguishability of iris feature points; then, iris feature points are extracted from the pre-processed iris image, and iris recognition is performed according to the iris feature points. The present embodiment thereby improves the efficiency and accuracy of iris recognition performed on the eye image.
It should be noted that, as used herein, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that comprises the element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly also by hardware, although in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only optional embodiments of the present invention and are not intended to limit the scope of the claims of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. An eye positioning and recognition device, characterized in that the eye positioning and recognition device comprises:
a capture module, for collecting an image and capturing a blinking action according to the image;
a positioning module, for, if the blinking action is successfully captured, positioning an eye region in the image according to the blinking action and obtaining an eye image;
an identification module, for performing eye recognition according to the eye image.
2. The eye positioning and recognition device as claimed in claim 1, characterized in that the capture module comprises:
a collecting unit, for collecting a predetermined number of images within a preset time and capturing characteristic data from the predetermined number of images;
a matching unit, for matching the characteristic data against preset blink characteristic parameters, and, if the characteristic data match the preset blink characteristic parameters successfully, successfully capturing the blinking action.
3. The eye positioning and recognition device as claimed in claim 2, characterized in that the collecting unit is further for:
if the characteristic data fail to match the preset blink characteristic parameters, not capturing the blinking action, collecting images again, and capturing characteristic data of the re-collected images.
4. The eye positioning and recognition device as claimed in claim 1 or 2, characterized in that the identification module comprises:
an iris recognition unit, for performing iris recognition according to the eye image;
a sclera recognition unit, for performing sclera recognition according to the eye image.
5. The eye positioning and recognition device as claimed in claim 4, characterized in that the iris recognition unit is further for:
obtaining an iris image according to the eye image;
pre-processing the iris image;
extracting iris feature points from the pre-processed iris image, and performing iris recognition according to the iris feature points.
6. An eye positioning and recognition method, characterized in that the eye positioning and recognition method comprises the following steps:
collecting an image, and capturing a blinking action according to the image;
if the blinking action is successfully captured, positioning an eye region in the image according to the blinking action and obtaining an eye image;
performing eye recognition according to the eye image.
7. The eye positioning and recognition method as claimed in claim 6, characterized in that the step of collecting an image and capturing a blinking action according to the image comprises:
collecting a predetermined number of images within a preset time, and capturing characteristic data from the predetermined number of images;
matching the characteristic data against preset blink characteristic parameters;
if the characteristic data match the preset blink characteristic parameters successfully, successfully capturing the blinking action.
8. The eye positioning and recognition method as claimed in claim 7, characterized in that, after the step of matching the characteristic data against the preset blink characteristic parameters, the method further comprises:
if the characteristic data fail to match the preset blink characteristic parameters, not capturing the blinking action, and returning to the steps of collecting an image and capturing characteristic data of the image.
9. The eye positioning and recognition method as claimed in claim 6 or 7, characterized in that the step of performing eye recognition according to the eye image comprises:
performing iris recognition according to the eye image; or,
performing sclera recognition according to the eye image.
10. The eye positioning and recognition method as claimed in claim 9, characterized in that the step of performing iris recognition according to the eye image comprises:
obtaining an iris image according to the eye image;
pre-processing the iris image;
extracting iris feature points from the pre-processed iris image, and performing iris recognition according to the iris feature points.
CN201510831395.4A 2015-11-25 2015-11-25 Eye positioning identification device and method Pending CN105488462A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510831395.4A CN105488462A (en) 2015-11-25 2015-11-25 Eye positioning identification device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510831395.4A CN105488462A (en) 2015-11-25 2015-11-25 Eye positioning identification device and method

Publications (1)

Publication Number Publication Date
CN105488462A true CN105488462A (en) 2016-04-13

Family

ID=55675434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510831395.4A Pending CN105488462A (en) 2015-11-25 2015-11-25 Eye positioning identification device and method

Country Status (1)

Country Link
CN (1) CN105488462A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1599913A (en) * 2001-12-03 2005-03-23 株式会社斯耐克斯技术 Iris identification system and method, and storage media having program thereof
FR2875322B1 (en) * 2004-09-14 2007-03-02 Atmel Grenoble Soc Par Actions METHOD FOR AIDING FACE RECOGNITION
CN202257711U (en) * 2011-10-24 2012-05-30 苏州市职业大学 Automobile keyless access control system based on iris verification mode
CN104933344A (en) * 2015-07-06 2015-09-23 北京中科虹霸科技有限公司 Mobile terminal user identity authentication device and method based on multiple biological feature modals

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
舒梅 等: "基于累积帧差的人眼定位及模板提取", 《西华大学学报(自然科学版)》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106774841A (en) * 2016-11-23 2017-05-31 上海擎感智能科技有限公司 Intelligent glasses and its awakening method, Rouser
CN106774841B (en) * 2016-11-23 2020-12-18 上海擎感智能科技有限公司 Intelligent glasses and awakening method and awakening device thereof
WO2018228027A1 (en) * 2017-06-14 2018-12-20 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Iris recognition method, electronic device and computer-readable storage medium
US10839210B2 (en) 2017-06-14 2020-11-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Iris recognition method, electronic device and computer-readable storage medium
CN107547797A (en) * 2017-07-27 2018-01-05 努比亚技术有限公司 A kind of image pickup method, terminal and computer-readable recording medium
CN107742103A (en) * 2017-10-14 2018-02-27 浙江鑫飞智能工程有限公司 A kind of video frequency monitoring method and system
CN108551699A (en) * 2018-04-20 2018-09-18 哈尔滨理工大学 Eye control intelligent lamp and control method thereof
CN108551699B (en) * 2018-04-20 2019-10-01 哈尔滨理工大学 Eye control intelligent lamp and control method thereof
CN110008812A (en) * 2019-01-22 2019-07-12 苏州迈荣祥信息科技有限公司 Website log system based on iris recognition
CN110008812B (en) * 2019-01-22 2021-03-16 西安网算数据科技有限公司 Website login system based on iris recognition

Similar Documents

Publication Publication Date Title
CN105488462A (en) Eye positioning identification device and method
US11100208B2 (en) Electronic device and method for controlling the same
AU2020201662B2 (en) Face liveness detection method and apparatus, and electronic device
KR101710478B1 (en) Mobile electric document system of multiple biometric
US10496804B2 (en) Fingerprint authentication method and system, and terminal supporting fingerprint authentication
CN104899490A (en) Terminal positioning method and user terminal
WO2019019836A1 (en) Unlocking control method and related product
CN105022981A (en) Method and device for detecting health state of human eyes and mobile terminal
CN104636734A (en) Terminal face recognition method and device
CN105718043A (en) Method And Apparatus For Controlling An Electronic Device
US11328044B2 (en) Dynamic recognition method and terminal device
CN105577886A (en) Mobile terminal unlocking device and method
CN105243362A (en) Camera control apparatus and method
CN111103922B (en) Camera, electronic equipment and identity verification method
CN105320871A (en) Screen unlocking method and screen unlocking apparatus
CN105913019A (en) Iris identification method and terminal
US20240005695A1 (en) Fingerprint Recognition Method and Electronic Device
CN107622246A (en) Face identification method and Related product
CN104036170A (en) Smart glasses and a control method and device of same
CN106570365A (en) Application management device, mobile terminal and method
CN113888159A (en) Opening method of function page of application and electronic equipment
CN108206892A (en) Guard method, device, mobile terminal and the storage medium of contact person's privacy
US20230222843A1 (en) Method and device for registering biometric feature
CN105117627A (en) Method and device for hiding information
CN104715262B (en) A kind of utilization, which is taken pictures, realizes the method, device and mobile terminal of intelligent label function

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160413

RJ01 Rejection of invention patent application after publication