CN104166835A - Method and device for identifying living user - Google Patents


Publication number
CN104166835A
CN104166835A (application CN201310193848.6A)
Authority
CN
China
Prior art keywords
described
object
random site
display screen
image
Prior art date
Application number
CN201310193848.6A
Other languages
Chinese (zh)
Inventor
张博 (Zhang Bo)
王文东 (Wang Wendong)
汪孔桥 (Wang Kongqiao)
Original Assignee
Nokia Corporation (诺基亚公司)
Priority date
Filing date
Publication date
Application filed by Nokia Corporation
Priority to CN201310193848.6A
Publication of CN104166835A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00221 Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
    • G06K 9/00228 Detection; Localisation; Normalisation
    • G06K 9/00255 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00597 Acquiring or recognising eyes, e.g. iris verification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00597 Acquiring or recognising eyes, e.g. iris verification
    • G06K 9/00604 Acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00885 Biometric patterns not provided for under G06K 9/00006, G06K 9/00154, G06K 9/00335, G06K 9/00362, G06K 9/00597; Biometric specific functions not specific to the kind of biometric
    • G06K 9/00899 Spoof detection
    • G06K 9/00906 Detection of body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2221/00 Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 2221/21 Indexing scheme relating to G06F 21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 2221/2133 Verifying human interaction, e.g., Captcha

Abstract

The embodiments of the invention relate to a method and a device for identifying a living user. The method comprises the following steps: an image containing a face is acquired; while the face is being recognized from the image, it is detected whether the viewpoint of the face correspondingly moves into the neighborhood of a random position on a display screen after an object is displayed at that random position; and whether the image was acquired from a living user is determined based on the detection. A corresponding device is also disclosed.

Description

Method and apparatus for identifying a live user

Technical field

Embodiments of the invention relate to computing technology and, more specifically, to a method and apparatus for identifying a live user.

Background technology

With the development of image/video processing and pattern recognition technology, face recognition has become a stable, accurate and efficient biometric identification technique. Facial recognition takes images and/or video containing a face as input and determines the user's identity by identifying and analyzing facial features. Compared with iris recognition and other biometric techniques, face recognition can complete authentication efficiently without requiring the user's attention or awareness, so the annoyance it causes the user is low. Facial recognition is therefore widely used for authentication in finance, justice, public safety, the military and people's daily lives. Moreover, face recognition can be implemented on common user terminals such as personal computers (PCs), mobile phones and personal digital assistants (PDAs), without precise, expensive instrumentation.

However, authentication based on face recognition also has drawbacks. For example, an illegitimate user may obtain images/video containing a legitimate user's face by various means, such as public online photo albums, résumés or pinhole cameras. The illegitimate user may then place such an image/video (for example, a frontal photograph of the legitimate user) in front of the image capture device of a facial recognition system, feed it to the system and thereby break into the legitimate user's account. Traditional facial recognition systems cannot counter this attack, because they have no ability to detect whether the input face image was acquired from a live user.

To alleviate the above problems, it has been proposed to pre-process the image containing the face with three-dimensional depth analysis, blink detection and/or spectral analysis, in order to determine whether the recognized face image was acquired from a live user or from a two-dimensional image such as a photograph of the user. However, this approach places relatively high demands on the operating environment. Moreover, it cannot distinguish a live user from a video containing a face, because a face in a video can likewise exhibit three-dimensional depth information and actions such as blinking. Another known approach requires the user to make a predetermined motion with a specific body part (for example, a hand or the eyes) during face recognition, such as following a predefined path. But because these predetermined motions are relatively fixed, an illegitimate user can record the motion a legitimate user performs during authentication and use the recorded video clip to impersonate a live user. Such methods also require the user to remember the predetermined motions, increasing the interaction burden. Schemes that identify a live user by measuring body temperature, for example with infrared detection, are also known; however, they usually must be implemented with special equipment, which increases the complexity and/or cost of the facial recognition system.

Based on the above discussion, what is needed in the art is a technical solution that can identify a live user more effectively, accurately and conveniently.

Summary of the invention

To overcome the above problems in the prior art, the present invention proposes a method and apparatus for identifying a live user.

In one aspect of the invention, a method for identifying a live user is provided. The method comprises: acquiring an image containing a face; while the face is being recognized from the image, detecting whether the viewpoint of the face correspondingly moves into the neighborhood of a random position on a display screen each time an object is displayed at such a random position; and determining, based on the detection, whether the image was acquired from the live user.

In another aspect of the invention, an apparatus for identifying a live user is provided. The apparatus comprises: an image acquisition unit configured to acquire an image containing a face; a viewpoint detection unit configured to detect, while the face is being recognized from the image, whether the viewpoint of the face correspondingly moves into the neighborhood of a random position on a display screen each time an object is displayed at such a random position; and a liveness identification unit configured to determine, based on the detection, whether the image was acquired from the live user.

As will be understood from the description below, according to embodiments of the invention, whether a face image was acquired from a live user can be identified quickly and efficiently while the user undergoes face recognition, by tracking the movement of the viewpoint to random positions on the screen. Moreover, according to embodiments of the invention, an illegitimate user will find it difficult to impersonate a legitimate user with a previously obtained face image and/or video. In addition, because the working principle of this scheme rests on an ordinary physiological property of the human body (for example, the stress response), the burden on the user can be kept at an acceptably low level. The method and apparatus according to embodiments of the invention can be implemented conveniently with common computing equipment, without specialized devices or instruments, which helps reduce cost.

Brief description of the drawings

The above and other objects, features and advantages of embodiments of the invention will become easier to understand by reading the following detailed description with reference to the accompanying drawings. In the drawings, some embodiments of the invention are shown in an exemplary and non-restrictive manner, wherein:

Fig. 1 shows a schematic block diagram of the hardware configuration of an environment in which embodiments of the invention may be implemented;

Fig. 2 shows a schematic flowchart of a method for identifying a live user according to an exemplary embodiment of the invention;

Fig. 3 shows a schematic diagram of identifying a live user by implementing the method shown in Fig. 2;

Fig. 4 shows a schematic flowchart of a method for identifying a live user according to an exemplary embodiment of the invention;

Fig. 5 shows a schematic diagram of the time relationship between object display and viewpoint detection according to an exemplary embodiment of the invention;

Figs. 6A-6D show schematic diagrams of identifying a live user by implementing the method shown in Fig. 4;

Fig. 7 shows a schematic block diagram of an apparatus for identifying a live user according to an exemplary embodiment of the invention; and

Fig. 8 shows a schematic block diagram of a device that can be used to implement exemplary embodiments of the invention.

Throughout the figures, identical or corresponding reference numerals denote identical or corresponding parts.

Detailed description of embodiments

The principles and spirit of the present invention are described below with reference to several exemplary embodiments and the accompanying drawings. It should be appreciated that these embodiments are described only to enable those skilled in the art to better understand and thereby implement the invention, and not to limit the scope of the invention in any way.

Referring first to Fig. 1, which shows a schematic block diagram of the hardware configuration of a system 100 in which exemplary embodiments of the invention may be implemented. As shown in the figure, system 100 comprises an image capture device 101 for acquiring an image containing the user's face. According to embodiments of the invention, the image capture device 101 may include, but is not limited to, a camera, a video camera, or any suitable device capable of capturing static and/or dynamic images.

System 100 also comprises a display screen (hereinafter also referred to as the "screen") 102 for presenting information to the user. According to embodiments of the invention, the screen 102 may be any device capable of displaying visual information to the user, including but not limited to one or more of the following: a cathode-ray tube (CRT) display, a liquid crystal display (LCD), a light-emitting diode (LED) display, a plasma display panel (PDP), a three-dimensional (3D) display, a touch-screen display, etc.

It should be noted that although the image capture device 101 is illustrated in Fig. 1 as a device separate from the display screen 102, the scope of the invention is not limited in this regard. In certain embodiments, the image capture device 101 and the display screen 102 may be located in the same physical device. For example, where a mobile device is used to authenticate the user, the image capture device 101 may be the camera of the mobile device and the display screen 102 its screen.

Optionally, system 100 may also comprise one or more sensors 103 for capturing one or more parameters indicating the state of the user's environment. In certain embodiments, the sensors 103 may include, for example, one or more of the following: a light sensor, a temperature sensor, an infrared sensor, a spectral sensor, etc. Note that the parameters captured by the sensors 103 are only used to support optional functions in some embodiments; live user identification itself does not depend on these parameters. The specific operation and function of the sensors 103 will be explained below. Similarly to the above, the sensors 103 may also be located in the same physical device as the image capture device 101 and/or the display screen 102. For example, in certain embodiments, the image capture device 101, the display screen 102 and the sensors 103 may all be parts of the same user equipment (for example, a mobile phone), and they may be jointly coupled to the central processing unit (CPU) of the user equipment.

Referring now to Fig. 2, which shows a schematic flowchart of a method 200 for identifying a live user according to an exemplary embodiment of the invention. After method 200 starts, at step S201 an image containing a face is acquired. As described above, a face image in any appropriate format can be acquired by means of the image capture device 101 in system 100. In particular, the face image may also be one or more frames of a captured video. In addition, according to some embodiments of the invention, the original image may, after being acquired, undergo various pre-processing and/or format conversion for subsequent live user detection and/or face recognition. In this regard, any image/video recognition technique currently known or developed in the future can be used in combination with embodiments of the invention, and the scope of the invention is not restricted in this regard.

Next, method 200 proceeds to step S202. At step S202, while the face is being recognized from the image acquired at step S201, it is detected whether the viewpoint of the face correspondingly moves into the neighborhood of a random position on the screen after an object is displayed at that position.

In operation, after the image is acquired at step S201, the image can be processed to recognize the facial features and information it contains. Any face recognition and/or analysis method currently known or developed in the future can be used in combination with embodiments of the invention, and the scope of the invention is not restricted in this regard. During face recognition, one or more objects can be shown to the user via the display screen 102 to detect whether the image currently being processed was acquired from a live user. Note that, according to embodiments of the invention, live user detection is carried out simultaneously with face recognition. This is because, if the two were not carried out simultaneously, an illegitimate user could use a frontal photograph/video for face recognition and pass live user identification by means of another (illegitimate) live user's face. Embodiments of the invention can effectively discover and stop this situation.

Continuing with Fig. 2, at step S202 each object is displayed at a corresponding randomly determined position on the screen. When more than one object is shown, the objects can be presented on the screen 102 one after another in temporal order, each displayed at its own random position. In particular, before the next object is shown, the display of the current object can be removed from the screen, as will also be explained below. It will be appreciated that displaying objects at random positions on the screen allows a live user to be identified effectively: because each object is displayed at a random position, a non-live "user" (for example, a photograph or video containing a face) cannot move its viewpoint to the corresponding position in response to the display of the object.

According to some embodiments of the invention, the displayed object may be a bright spot. Alternatively, it may be a word, an icon, a pattern or any suitable content capable of attracting the user's attention. To ensure that it attracts enough attention, the object can be displayed conspicuously against the background presented on the display screen 102. For example, the displayed object may differ from the screen background in at least one or more of the following aspects: color, brightness, shape, motion (for example, the object may rotate, shake, zoom, etc.), and so on.

According to embodiments of the invention, while face detection is carried out, the image capture device 101 is configured to continuously capture images containing the user's face. Thus, after an object is displayed at a random position on the display screen, a viewpoint tracking process can be applied to the captured series of images, to detect whether the viewpoint of the face correspondingly moves to the random position on the screen where the object is shown. Numerous viewpoint tracking techniques are known, including but not limited to: shape-based tracking, feature-based tracking, appearance-based tracking, tracking based on hybrid geometric and optical features, etc. As one example, schemes have been proposed to recognize the human eye and track the viewpoint using an active shape model (ASM) or an active appearance model (AAM). In fact, any viewpoint detection and tracking method currently known or developed in the future can be used in combination with embodiments of the invention. The scope of the invention is not restricted in this regard.

In particular, considering the errors that may exist in the viewpoint detection process, an implementation need not require the user's viewpoint to match the screen position of the object strictly and exactly. Instead, a predetermined neighborhood (proximity) can be set, for example a circular area with a predetermined radius or a polygonal area with predetermined side lengths. In viewpoint detection, as long as the viewpoint falls within the predetermined neighborhood of the object's position, it can be determined that the viewpoint has moved to the screen position of the object.
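As an illustration only (the patent prescribes no code), the circular-neighborhood variant of this tolerance check can be sketched in a few lines; the function name and the pixel-coordinate convention are assumptions:

```python
import math

def in_neighborhood(gaze, target, radius):
    # True if the detected gaze point falls within a circular
    # neighborhood of the given radius around the object's position.
    # gaze and target are (x, y) screen coordinates in pixels.
    dx = gaze[0] - target[0]
    dy = gaze[1] - target[1]
    return math.hypot(dx, dy) <= radius
```

A polygonal neighborhood would replace the distance test with a point-in-polygon test; the radius trades tolerance to tracker error against the risk of accepting stray gaze points.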

Returning to Fig. 2, method 200 then proceeds to step S203. At step S203, it is determined from the detection carried out at step S202 whether the image acquired at step S201 was acquired from a live user. The operation here is based on a physiological property of living organisms. Specifically, when an object that differs from the background (for example, a bright spot) appears on the screen, a live user's viewpoint will be consciously or subconsciously attracted to the position of that object. Thus, if it is detected at step S202 that, after an object is displayed at a random position on the screen, the viewpoint of the face correspondingly moves into the neighborhood of that random position, it can be determined at step S203 that the image containing the face was acquired from a live user.

Otherwise, if it is detected at step S202 that the viewpoint does not move correspondingly with the objects shown on the screen, it can be determined at step S203 that the image containing the face may well not have been acquired from a live user. At this point, any suitable follow-up processing can be applied, for example further estimating the risk that the image was acquired from a non-live user, or directly failing the authentication procedure, etc.

Method 200 finishes after step S203.

A concrete example is considered below with reference to Fig. 3. In the example shown in Fig. 3, the image capture device 101 and the display screen 102 are parts of the same physical device 301. In operation, the image capture device 101 is configured to capture a face image 303 of the user 302, and this face image 303 is presented on the screen 102. While face recognition is performed on the image, an object 304 is displayed at a random position on the display screen 102. Thereafter, if it is detected that the viewpoint of the user's 302 eyes has correspondingly moved to the random position of the object 304, it can be determined that the face image being processed was acquired from a live user. Conversely, if no corresponding movement of the viewpoint is detected after the object 304 is shown on the display screen 102, it can be determined that there is a risk that the captured face image came from a non-live user.

It will be appreciated that the viewpoint in a still image such as a photograph cannot change, and that the probability that the viewpoint in a video moves to the random position where the object is shown, just after the object is shown, is very low. Therefore, according to embodiments of the invention, an illegitimate user can effectively be prevented from using a facial photograph and/or video to pass authentication based on face recognition.

As described above, during face recognition a single object may be shown on the screen 102, or multiple objects may be shown one after another. A live user identification method 400 according to an embodiment of the invention, in which multiple objects are shown on the screen, is described below with reference to Fig. 4. It will be appreciated that method 400 can be regarded as a specific implementation of the method 200 described above with reference to Fig. 2.

As shown in Fig. 4, after method 400 starts, at step S401 an image containing a face is acquired. Step S401 corresponds to step S201 of method 200 described above with reference to Fig. 2, and each feature described above is equally applicable here, so it is not repeated.

Next, at step S402, while face recognition is performed based on the acquired image, an object is shown on the display screen. As mentioned above, the displayed object may for example be a bright spot, and may differ from the background of the display screen 102 in color, brightness, shape, motion and other aspects. In particular, the display position of the object on the screen is determined randomly.

Method 400 then proceeds to step S403, where it is detected whether the viewpoint of the recognized face moves into the neighborhood of the random position, in response to the object being displayed there, within a predetermined period of time. It will be appreciated that, according to the embodiment described here, it is detected not only whether the viewpoint moves into the neighborhood of the object's position, but also whether this movement is completed within the predetermined period of time. In other words, a time window for viewpoint detection can be set, and only a movement of the viewpoint to the object's position detected within this window is considered valid. Otherwise, once the window is exceeded, even if the viewpoint moves into the neighborhood of the random position where the object is shown, it is considered that there is a risk that the image was acquired from a non-live user.

According to the physiological stress response of living organisms, when a conspicuous object appears on the screen, a live user will usually fixate on the object at once. This physiological property is difficult to simulate by manually manipulating a face image or video. Therefore, by detecting whether the viewpoint moves to the object's position within a short enough period of time, the accuracy of live user identification can be further improved.

To further reduce the risk that a non-live user is mistaken for a live user, optionally, the duration for which the object is displayed on the screen can also be recorded. When the duration for which the object has been displayed reaches a threshold time, at step S404 the display of the object on the screen is removed. To describe clearly the time relationship between object display and viewpoint detection, a concrete example is now described with reference to Fig. 5.

As shown in Fig. 5, suppose an object (referred to, for example, as the "first object") is displayed at a random position on the screen at moment t_11. Correspondingly, as can be seen from the two time axes (T) shown in Fig. 5, from moment t_11 onwards it is detected whether the viewpoint has moved to the corresponding random position on the screen. The display of this object is removed from the screen at moment t_12; that is, the display duration of the object on the screen is the period [t_11, t_12]. Viewpoint detection ends at a moment t_13 after moment t_12. In other words, the time window for viewpoint detection is [t_11, t_13]. It can be seen that, in this embodiment, there is a time increment Δt_1 between the moment t_12 at which the object's display is removed and the moment t_13 at which viewpoint detection stops. This time difference compensates for the user's psychological delay: there is usually a certain delay from the object being displayed on the screen to the user perceiving it and starting to move the viewpoint. By compensating for this delay with the time increment Δt_1, the probability of mistaking a live user for a non-live user can be reduced. Alternatively, this psychological delay can also be compensated as follows: after the object is shown at moment t_11, the viewpoint detection process is started only after a specific delay.
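The window arithmetic described above can be sketched as a small helper. This is a hypothetical illustration, with timestamps as plain numbers in consistent units, since the patent specifies no concrete durations:

```python
def detection_window(t_show, display_duration, delay_compensation):
    # The window opens when the object appears (t_11) and closes one
    # time increment after the object's display is removed
    # (t_13 = t_12 + delta), absorbing the user's reaction latency.
    t_removed = t_show + display_duration            # t_12
    return (t_show, t_removed + delay_compensation)  # (t_11, t_13)
```

The alternative compensation mentioned above, starting detection only after a fixed perception delay, would instead shift the window's opening moment to t_show plus that delay.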

Returning to Fig. 4, it should be understood that steps S403 and S404 are optional. Specifically, in some alternatives, viewpoint detection may not be constrained by a time window; in other words, the time window for viewpoint detection can be set to be unbounded. Alternatively or additionally, the object may remain on the screen after being shown, instead of being removed after a threshold time. The scope of the invention is not restricted in these respects.

Next, at optional step S405, the dwell time of the viewpoint in the neighborhood of the random position where the object is shown is detected. The starting point of the dwell time is the moment the viewpoint moves into the neighborhood, and its end point is the moment the viewpoint moves out of the neighborhood. The detected viewpoint dwell time can be recorded for subsequent live user identification, as will be explained below.
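A minimal sketch of this dwell-time measurement over a time-ordered stream of gaze samples (the names and data layout are assumptions, not taken from the patent):

```python
import math

def dwell_time(samples, target, radius):
    # samples: time-ordered list of (timestamp, (x, y)) gaze points.
    # Returns the time from the first sample inside the neighborhood of
    # target to the first later sample outside it (0 if never inside).
    t_enter = None
    for t, (x, y) in samples:
        inside = math.hypot(x - target[0], y - target[1]) <= radius
        if inside and t_enter is None:
            t_enter = t          # viewpoint moved into the neighborhood
        elif not inside and t_enter is not None:
            return t - t_enter   # viewpoint moved out again
    if t_enter is None:
        return 0
    return samples[-1][0] - t_enter  # still inside when sampling ended
```

A real tracker would also have to smooth the gaze signal, since a single noisy sample outside the neighborhood would otherwise cut the dwell short.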

Method 400 then proceeds to step S406, where it is determined whether the number of objects shown has reached a predetermined threshold. According to embodiments of the invention, this threshold can be a predefined fixed number. Alternatively, it can be generated at random each time live user identification is performed. If it is determined at step S406 that the predetermined display count has not yet been reached (branch "No"), method 400 proceeds to step S407.

At step S407, at least one parameter indicating the state of the environment (an "environmental parameter" for short) is obtained, and the appearance of the object to be shown next is adjusted based on it. The environmental parameter can, for example, be obtained by means of the one or more sensors 103 shown in Fig. 1. According to embodiments of the invention, examples of environmental parameters include but are not limited to: temperature parameters, luminance parameters, spectral parameters, color parameters, audio parameters, etc. Based on these environmental parameters, the appearance of the object can be adjusted dynamically. For example, where the object is a bright spot, its brightness and/or size can be adjusted dynamically according to the brightness of the user's environment, or its color according to color information of the user's environment, etc. In particular, as described above, the environmental parameters collected by means of the sensors 103 are only used to support some optional functions, such as adjusting the object's appearance. Live user identification itself only needs the image capture device and the screen, without depending on any other sensor parameters.
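As one hypothetical example of such an adjustment (the mapping, the lux range and every parameter value are assumptions, not taken from the patent), the brightness of a bright-spot object could be scaled linearly with the ambient illuminance reported by a light sensor:

```python
def spot_brightness(ambient_lux, lux_lo=50.0, lux_hi=1000.0,
                    b_min=0.4, b_max=1.0):
    # Clamp the ambient illuminance into [lux_lo, lux_hi] and map it
    # linearly onto [b_min, b_max]: the brighter the surroundings,
    # the brighter the spot must be to stay conspicuous.
    frac = (ambient_lux - lux_lo) / (lux_hi - lux_lo)
    frac = min(max(frac, 0.0), 1.0)
    return b_min + frac * (b_max - b_min)
```

An analogous mapping could drive the spot's size, or pick a spot color complementary to the dominant color reported by a camera-based color sensor.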

After step S407, method 400 returns to step S402, where another object (referred to, for example, as the "second object") is shown with the appearance adjusted at step S407. In particular, according to some embodiments of the invention, the display position of the second object can be set to be far enough from the display position of the previously shown first object. Specifically, suppose that at a first moment the first object is displayed at a first random position on the screen, and at a subsequent second moment the second object is displayed at a second random position. The distance between the second random position and the first random position can be made greater than a predetermined threshold distance. In an implementation, after a candidate display position for the second object is generated at random, the distance between this candidate position and the first random position can be calculated. If this distance is greater than the predetermined threshold distance, the candidate position is set as the second random position for showing the second object. Otherwise, if the distance is less than the predetermined threshold distance, a new candidate display position for the second object is generated and the comparison above is repeated, until the distance between the candidate position and the first random position is greater than the predetermined threshold distance. By ensuring that the positions of two consecutive objects are far enough apart, the recognizability of the viewpoint movement can be advantageously enhanced, which in turn improves the accuracy of live user identification.
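The generate-and-test procedure described above amounts to rejection sampling, which can be sketched as follows (the names and the bounded retry count are assumptions; the patent does not bound the number of attempts):

```python
import math
import random

def next_position(prev, width, height, min_dist, max_tries=100):
    # Draw random candidate positions on a width x height screen until
    # one lies farther than min_dist from the previous object's position.
    for _ in range(max_tries):
        cand = (random.uniform(0, width), random.uniform(0, height))
        if math.dist(cand, prev) > min_dist:
            return cand
    raise RuntimeError("no candidate far enough from the previous position")
```

min_dist must of course be chosen smaller than the screen diagonal, otherwise no candidate can ever succeed and the retry bound is hit.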

Next, at steps S403-S405, the second object is processed similarly to the processing described above for the first object. In particular, still referring to Fig. 5, according to some embodiments of the invention, after the display of the first object is removed at moment t_12, the second object begins to be shown at a subsequent second moment (moment t_21 in Fig. 5). Thereafter, in response to the duration for which the second object has been shown on the screen (the period [t_21, t_22] in Fig. 5) reaching the predetermined threshold time, the display of the second object is removed at moment t_22. In particular, it will be appreciated that the time interval between the two object displays (the period [t_12, t_21] in Fig. 5) can be fixed, or can vary (for example, be determined randomly).

At step S406, if it is determined that the predetermined number of displays has been reached (branch "Yes"), method 400 proceeds to step S408, where it is determined, based on the detection at step S403 and/or step S405, whether the image obtained at step S401 was captured from a live user. Specifically, for any object displayed on the screen, if it is detected at step S403 that the viewpoint did not move into the neighborhood of that object's random position within the predetermined period of time, it is determined that the image was likely captured from a non-live user.
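The per-object check underlying steps S403 and S408 amounts to scanning the gaze samples recorded after the object appears. The sketch below assumes timestamped (t, x, y) gaze samples and a circular neighborhood; the function name, data layout, and numbers are all illustrative assumptions, not the patent's implementation:

```python
def viewpoint_entered_neighborhood(gaze_samples, target, radius, t_shown, window):
    """Return True if any gaze sample recorded within `window` seconds of the
    object appearing at time `t_shown` falls inside the circular neighborhood
    of radius `radius` around `target`.  gaze_samples: list of (t, x, y)."""
    for t, x, y in gaze_samples:
        if t_shown <= t <= t_shown + window:
            if (x - target[0]) ** 2 + (y - target[1]) ** 2 <= radius ** 2:
                return True
    return False

# The 0.4 s sample lands sqrt(29) px from the target, inside a 30 px radius.
samples = [(0.1, 50, 50), (0.4, 198, 305), (0.9, 60, 40)]
hit = viewpoint_entered_neighborhood(samples, target=(200, 300), radius=30,
                                     t_shown=0.0, window=1.0)  # True
```

A failure of this check for a displayed object is what the text treats as evidence of a non-live user.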

Alternatively or additionally, at step S408, the actual dwell time of the viewpoint in the neighborhood of the random position, obtained at step S405, may be compared with a predetermined threshold dwell time. If the actual dwell time is greater than the threshold dwell time, the stay of the viewpoint in the neighborhood is considered valid. Otherwise, if the actual dwell time is less than the threshold dwell time, it is determined that there is a risk that the image was captured from a non-live user. For convenience of discussion, the non-live-user risk (probability) value determined from the detection for the i-th object is denoted P_i (i = 1, 2, ..., N, where N is the number of displayed objects). Step S408 thus yields a sequence of risk values {P_1, P_2, ..., P_N}. Then, according to some embodiments, a cumulative risk value Σ_i P_i that the image was captured from a non-live user may be computed. If this cumulative risk is greater than a threshold cumulative risk value, it may be determined that the image currently being processed was not captured from a live user. Alternatively, in other embodiments, each individual risk value P_i may be compared with an individual risk threshold. In that case, as an example, if the number of risk values P_i exceeding the individual risk threshold exceeds a predetermined threshold, it may be concluded that the image currently being processed was not captured from a live user. Various other processing schemes are also feasible, and the scope of the present invention is not limited in this regard.
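The two decision strategies just described (comparing the cumulative risk against one threshold, or counting how many individual risks exceed a per-object threshold) can be sketched as follows. The function and all threshold values are invented for illustration:

```python
def image_from_live_user(risks, cumulative_threshold=None,
                         per_object_threshold=None, max_exceed_count=None):
    """Decide liveness from the per-object risk values P_i.
    Strategy (a): compare sum(P_i) against a cumulative threshold.
    Strategy (b): count how many P_i exceed an individual threshold."""
    if cumulative_threshold is not None:
        return sum(risks) <= cumulative_threshold
    exceed = sum(1 for p in risks if p > per_object_threshold)
    return exceed <= max_exceed_count

# (a) Total risk 0.9 over four objects, threshold 1.5: judged live.
live_a = image_from_live_user([0.1, 0.2, 0.3, 0.3], cumulative_threshold=1.5)
# (b) Three of four risks exceed 0.5, at most one allowed: judged not live.
live_b = image_from_live_user([0.9, 0.8, 0.7, 0.1],
                              per_object_threshold=0.5, max_exceed_count=1)
```

Either strategy can be tightened or relaxed simply by moving its thresholds, which matches the text's remark that other processing schemes are feasible.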

If it is determined at step S408 that the image being processed was captured from a non-live user, various suitable subsequent processing may be carried out. For example, in some embodiments, the user's authentication may be rejected outright. Alternatively, further live user recognition may be performed. In that case, for example, the criteria for live user recognition may be tightened accordingly, such as by displaying more objects, shortening the display interval between objects, and so on. Otherwise, if it is determined at step S408 that the image currently being processed did indeed come from a live user, authentication is allowed to continue based on the result of the face recognition. The scope of the present invention is not limited by the subsequent operations triggered by the result of live user recognition.

Method 400 ends after step S408.

By displaying multiple objects in succession at multiple random positions on the screen, the accuracy and reliability of live user recognition can be further improved. Referring now to Figs. 6A-6D, consider a concrete example. In the example shown in Fig. 6, during face recognition a series of objects (four in this example) 601-604 are successively displayed at different random positions on screen 102. If the viewpoint in the face image being processed moves to these random positions as the objects appear, it can be determined that the face image was captured from a live user. Conversely, if, after one or more of objects 601-604 are displayed on screen 102, no corresponding movement of the viewpoint to the object's display position is detected, it can be determined that there is a risk of a non-live user. It will be appreciated that even if the viewpoint in a video happens to fall into the position neighborhood of the corresponding object within the appropriate detection window (in itself a low-probability event), this cannot occur several times in a row. Therefore, displaying multiple objects at random positions on the screen better prevents an unauthorized user from passing face recognition using a facial video.
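The "low-probability event" argument can be made concrete. If a gaze point that wanders independently of the displayed objects lands inside any one neighborhood with probability roughly equal to the neighborhood's share of the screen area, the chance of this happening for all N objects falls off as that probability raised to the N-th power. The numbers below are purely illustrative:

```python
import math

# A circular neighborhood of radius 30 px on a 1080 x 1920 px screen.
p_single = math.pi * 30 ** 2 / (1080 * 1920)   # share of screen area per object
p_all_four = p_single ** 4                      # chance of passing all N = 4 checks
```

Under these assumptions a replayed video would defeat a single object only about once in seven hundred trials, and all four objects with probability on the order of 10^-12, which is why repeating the random display sharply improves reliability.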

Referring now to Fig. 7, a schematic block diagram of a device 700 for recognizing a live user according to an exemplary embodiment of the present invention is shown. As shown in Fig. 7, device 700 comprises: an image acquisition unit 701 configured to obtain an image containing a face; a viewpoint detection unit 702 configured to detect, during recognition of the face based on the image, whether, each time an object is displayed at a random position on a display screen, the viewpoint of the face correspondingly moves into the neighborhood of the random position; and a liveness recognition unit 703 configured to determine, based on the detection, whether the image was captured from the live user.

According to some embodiments, viewpoint detection unit 702 may comprise: a unit configured to detect whether the viewpoint of the face moves into the neighborhood of the random position within a predetermined period of time after the object is displayed.

According to some embodiments, at a first moment a first object is displayed at a first random position on the display screen, and at a subsequent second moment a second object is displayed at a second random position on the display screen, the distance between the first random position and the second random position being greater than a predetermined threshold distance. Further, according to some embodiments, the first object is removed from the display screen before the second moment.

According to some embodiments, the duration for which an object is displayed on the display screen is less than a predetermined threshold time. Alternatively or additionally, according to some embodiments, device 700 may further comprise: a dwell time detection unit (not shown), configured to detect the dwell time of the viewpoint in the neighborhood of the random position, for use in determining whether the image was captured from the live user.

According to some embodiments, device 700 may further comprise: an environmental parameter acquisition unit (not shown), configured to obtain at least one parameter indicating an environmental state; and an object appearance adjustment unit (not shown), configured to dynamically adjust the appearance of the object based on the at least one parameter. Alternatively or additionally, the object differs from the background of the display screen in at least one of the following respects: color, brightness, shape, motion.
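One way such an appearance-adjustment unit might consume an environmental parameter (here an ambient-light reading) is sketched below. The lux breakpoint and the colour rule are invented for illustration and are not prescribed by the specification:

```python
def adjust_object_appearance(ambient_lux, background_rgb):
    """Pick a display colour and brightness for the object that contrast with
    the screen background under the measured ambient light level."""
    # Boost brightness in bright surroundings so the object stays visible.
    brightness = 1.0 if ambient_lux > 500 else 0.6
    # Invert the background colour so the object differs from it in colour.
    colour = tuple(255 - c for c in background_rgb)
    return {"colour": colour, "brightness": brightness}

# Bright room, dark background: near-white object at full brightness.
appearance = adjust_object_appearance(ambient_lux=800, background_rgb=(20, 20, 20))
```

The same pattern extends to the other appearance attributes mentioned in the text, such as shape or motion.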

It should be appreciated that, for clarity, the optional units and sub-units of device 700 are not shown in Fig. 7. However, each of the features described above with reference to Fig. 2 and Fig. 4 applies equally to device 700. Moreover, the term "unit" as used here may be either a hardware module or a software unit module. Accordingly, device 700 may be implemented in various ways. For example, in some embodiments, device 700 may be implemented partly or wholly in software and/or firmware, for example as a computer program product embodied on a computer-readable medium. Alternatively or additionally, device 700 may be implemented partly or wholly in hardware, for example as an integrated circuit (IC), an application-specific integrated circuit (ASIC), a system on chip (SOC), a field-programmable gate array (FPGA), and so on. The scope of the present invention is not limited in this regard.

Referring now to Fig. 8, a schematic block diagram of a device 800 that may be used to implement embodiments of the present invention is shown. According to embodiments of the present invention, device 800 may be any type of fixed or mobile device for performing face recognition and/or live user recognition. As shown in Fig. 8, device 800 comprises a central processing unit (CPU) 801, which can perform various appropriate actions and processing according to a program stored in read-only memory (ROM) 802 or loaded from a storage unit 808 into random access memory (RAM) 803. RAM 803 also stores the various programs and data required for the operation of device 800. CPU 801, ROM 802, and RAM 803 are connected to one another via a bus 804. An input/output (I/O) unit 805 is also connected to bus 804.

One or more of the following units may also be connected to bus 804: an input unit 806, comprising a keyboard, a mouse, a trackball, and the like; an output unit 807, comprising a display screen, a loudspeaker, and the like; a storage unit 808, comprising a hard disk and the like; and a communication unit 809, comprising a network adapter such as a local area network (LAN) card or a modem. The communication unit 809 performs communication processing via a network such as the Internet. Alternatively or additionally, the communication unit 809 may comprise one or more antennas for wireless data and/or voice communication. Optionally, a drive 810 may be connected to the I/O unit 805, and a removable storage unit 811, such as an optical disc, a magneto-optical disc, or a semiconductor storage medium, may be mounted on it.

In particular, when the methods and processes according to embodiments of the present invention are implemented in software, the computer program constituting the software may be downloaded and installed over a network via the communication unit 809, and/or installed from the removable storage unit 811.

The foregoing describes some exemplary embodiments of the present invention for purposes of explanation only. Embodiments of the present invention may be implemented in hardware, in software, or in a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portion may be stored in memory and executed by a suitable instruction execution system, for example a microprocessor or specially designed hardware. Those of ordinary skill in the art will appreciate that the above systems and methods may be implemented using computer-executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a magnetic disk, CD, or DVD-ROM, on a programmable memory such as read-only memory (firmware), or on a data carrier such as an optical or electronic signal carrier. The system of the present invention may be implemented by a hardware circuit such as a very-large-scale integrated circuit or gate array, by a semiconductor such as a logic chip or transistor, by a programmable hardware device such as a field-programmable gate array or programmable logic device, by software executed by various types of processors, or by a combination of the above hardware circuits and software, for example firmware.

It should be noted that although several devices or sub-devices of the system have been mentioned in the detailed description above, this division is merely exemplary and not mandatory. Indeed, according to embodiments of the present invention, the features and functions of two or more devices described above may be embodied in a single device; conversely, the features and functions of one device described above may be further divided and embodied in multiple devices. Similarly, although the operations of the method of the present invention are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, in order to achieve the desired result. On the contrary, the steps described in the flowcharts may be executed in a different order. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.

Although the present invention has been described with reference to certain specific embodiments, it should be understood that the present invention is not limited to the specific embodiments disclosed. The present invention is intended to cover the various modifications and equivalent arrangements included within the spirit and scope of the appended claims, the scope of which is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (17)

1. A method for recognizing a live user, the method comprising:
obtaining an image containing a face;
during recognition of the face based on the image, detecting whether, each time an object is displayed at a random position on a display screen, the viewpoint of the face correspondingly moves into the neighborhood of the random position; and
determining, based on the detection, whether the image was captured from the live user.
2. The method according to claim 1, wherein detecting whether, each time an object is displayed at a random position on the display screen, the viewpoint of the face correspondingly moves into the neighborhood of the random position comprises:
detecting whether the viewpoint of the face moves into the neighborhood of the random position within a predetermined period of time after the object is displayed.
3. The method according to claim 1, wherein:
at a first moment, a first object is displayed at a first random position on the display screen, and
at a subsequent second moment, a second object is displayed at a second random position on the display screen, the distance between the first random position and the second random position being greater than a predetermined threshold distance.
4. The method according to claim 3, wherein:
before the second moment, the first object is removed from the display screen.
5. The method according to claim 1, wherein the duration for which the object is displayed on the display screen is less than a predetermined threshold time.
6. The method according to claim 1, further comprising:
detecting the dwell time of the viewpoint in the neighborhood of the random position, for use in determining whether the image was captured from the live user.
7. The method according to claim 1, further comprising:
obtaining at least one parameter indicating an environmental state; and
dynamically adjusting the appearance of the object based on the at least one parameter.
8. The method according to any one of claims 1 to 7, wherein the object differs from the background of the display screen in at least one of the following respects: color, brightness, shape, motion.
9. A device for recognizing a live user, the device comprising:
an image acquisition unit, configured to obtain an image containing a face;
a viewpoint detection unit, configured to detect, during recognition of the face based on the image, whether, each time an object is displayed at a random position on a display screen, the viewpoint of the face correspondingly moves into the neighborhood of the random position; and
a liveness recognition unit, configured to determine, based on the detection, whether the image was captured from the live user.
10. The device according to claim 9, wherein the viewpoint detection unit comprises:
a unit configured to detect whether the viewpoint of the face moves into the neighborhood of the random position within a predetermined period of time after the object is displayed.
11. The device according to claim 9, wherein:
at a first moment, a first object is displayed at a first random position on the display screen, and
at a subsequent second moment, a second object is displayed at a second random position on the display screen, the distance between the first random position and the second random position being greater than a predetermined threshold distance.
12. The device according to claim 11, wherein:
before the second moment, the first object is removed from the display screen.
13. The device according to claim 9, wherein the duration for which the object is displayed on the display screen is less than a predetermined threshold time.
14. The device according to claim 9, further comprising:
a dwell time detection unit, configured to detect the dwell time of the viewpoint in the neighborhood of the random position, for use in determining whether the image was captured from the live user.
15. The device according to claim 9, further comprising:
an environmental parameter acquisition unit, configured to obtain at least one parameter indicating an environmental state; and
an object appearance adjustment unit, configured to dynamically adjust the appearance of the object based on the at least one parameter.
16. The device according to any one of claims 9 to 15, wherein the object differs from the background of the display screen in at least one of the following respects: color, brightness, shape, motion.
17. A user equipment, comprising:
a central processing unit (CPU);
an image capture device coupled to the CPU;
a display screen; and
a device according to any one of claims 9 to 16.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310193848.6A CN104166835A (en) 2013-05-17 2013-05-17 Method and device for identifying living user

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201310193848.6A CN104166835A (en) 2013-05-17 2013-05-17 Method and device for identifying living user
PCT/FI2014/050352 WO2014184436A1 (en) 2013-05-17 2014-05-13 Method and apparatus for live user recognition
US14/784,230 US20160062456A1 (en) 2013-05-17 2014-05-13 Method and apparatus for live user recognition

Publications (1)

Publication Number Publication Date
CN104166835A true CN104166835A (en) 2014-11-26

Family

ID=51897813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310193848.6A CN104166835A (en) 2013-05-17 2013-05-17 Method and device for identifying living user

Country Status (3)

Country Link
US (1) US20160062456A1 (en)
CN (1) CN104166835A (en)
WO (1) WO2014184436A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005779A (en) * 2015-08-25 2015-10-28 湖北文理学院 Face verification anti-counterfeit recognition method and system thereof based on interactive action
CN105184246A (en) * 2015-08-28 2015-12-23 北京旷视科技有限公司 Living body detection method and living body detection system
CN105260726A (en) * 2015-11-11 2016-01-20 杭州海量信息技术有限公司 Interactive video in vivo detection method based on face attitude control and system thereof
CN105518715A (en) * 2015-06-30 2016-04-20 北京旷视科技有限公司 Living body detection method, equipment and computer program product
CN105518714A (en) * 2015-06-30 2016-04-20 北京旷视科技有限公司 Vivo detection method and equipment, and computer program product
CN105518582A (en) * 2015-06-30 2016-04-20 北京旷视科技有限公司 Vivo detection method and device, computer program product
WO2016197389A1 (en) * 2015-06-12 2016-12-15 北京释码大华科技有限公司 Method and device for detecting living object, and mobile terminal
CN106295288A (en) * 2015-06-10 2017-01-04 阿里巴巴集团控股有限公司 A kind of information calibration method and device
CN106295287A (en) * 2015-06-10 2017-01-04 阿里巴巴集团控股有限公司 Biopsy method and device and identity identifying method and device
CN106803829A (en) * 2017-03-30 2017-06-06 北京七鑫易维信息技术有限公司 A kind of authentication method, apparatus and system
TWI625679B (en) * 2017-10-16 2018-06-01 緯創資通股份有限公司 Live facial recognition method and system
WO2019011099A1 (en) * 2017-07-14 2019-01-17 Oppo广东移动通信有限公司 Iris living-body detection method and related product
US10528849B2 (en) 2015-08-28 2020-01-07 Beijing Kuangshi Technology Co., Ltd. Liveness detection method, liveness detection system, and liveness detection device

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9990772B2 (en) 2014-01-31 2018-06-05 Empire Technology Development Llc Augmented reality skin evaluation
WO2015116183A2 (en) * 2014-01-31 2015-08-06 Empire Technology Development, Llc Subject selected augmented reality skin
WO2015116179A1 (en) 2014-01-31 2015-08-06 Empire Technology Development, Llc Augmented reality skin manager
US9584510B2 (en) * 2014-09-30 2017-02-28 Airwatch Llc Image capture challenge access
WO2016127437A1 (en) * 2015-02-15 2016-08-18 北京旷视科技有限公司 Live body face verification method and system, and computer program product
US10275672B2 (en) 2015-04-29 2019-04-30 Beijing Kuangshi Technology Co., Ltd. Method and apparatus for authenticating liveness face, and computer program product thereof
CN105518710B (en) * 2015-04-30 2018-02-02 北京旷视科技有限公司 Video detecting method, video detection system and computer program product
WO2016201016A1 (en) * 2015-06-10 2016-12-15 Alibaba Group Holding Limited Liveness detection method and device, and identity authentication method and device
KR101688168B1 (en) * 2015-08-17 2016-12-20 엘지전자 주식회사 Mobile terminal and method for controlling the same
CN107016270A (en) * 2015-12-01 2017-08-04 由田新技股份有限公司 With reference to the dynamic Verification System of motion graphics eye, the method for face's certification or hand certification
CN105867621A (en) * 2016-03-30 2016-08-17 上海斐讯数据通信技术有限公司 Method and device for operating intelligent equipment by crossing air
US10289822B2 (en) * 2016-07-22 2019-05-14 Nec Corporation Liveness detection for antispoof face recognition
GB2560340A (en) * 2017-03-07 2018-09-12 Eyn Ltd Verification method and system
CN106920256A (en) * 2017-03-14 2017-07-04 上海琛岫自控科技有限公司 A kind of effective missing child searching system
CN108363947A (en) * 2017-12-29 2018-08-03 武汉烽火众智数字技术有限责任公司 Delay demographic method for early warning based on big data and device
WO2019151368A1 (en) * 2018-02-01 2019-08-08 日本電気株式会社 Biometric authentication device, system, method and recording medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120140993A1 (en) * 2010-12-05 2012-06-07 Unisys Corp. Secure biometric authentication from an insecure device
US20120243729A1 (en) * 2011-03-21 2012-09-27 Research In Motion Limited Login method based on direction of gaze

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6351273B1 (en) * 1997-04-30 2002-02-26 Jerome H. Lemelson System and methods for controlling automatic scrolling of information on a display or screen
US6603491B2 (en) * 2000-05-26 2003-08-05 Jerome H. Lemelson System and methods for controlling automatic scrolling of information on a display or screen
US6292228B1 (en) * 1998-06-29 2001-09-18 Lg Electronics Inc. Device and method for auto-adjustment of image condition in display using data representing both brightness or contrast and color temperature
CN101686306A (en) * 2003-09-11 2010-03-31 松下电器产业株式会社 Visual processing device, visual processing method, visual processing program, integrated circuit, display device, imaging device, and mobile information terminal
US7965859B2 (en) * 2006-05-04 2011-06-21 Sony Computer Entertainment Inc. Lighting control of a user environment via a display device
US7529042B2 (en) * 2007-01-26 2009-05-05 Losee Paul D Magnifying viewer and projector for portable electronic devices
KR20080093875A (en) * 2007-04-17 2008-10-22 세이코 엡슨 가부시키가이샤 Display device, method for driving display device, and electronic apparatus
JP5121367B2 (en) * 2007-09-25 2013-01-16 株式会社東芝 Apparatus, method and system for outputting video
KR101571334B1 (en) * 2009-02-12 2015-11-24 삼성전자주식회사 Apparatus for processing digital image and method for controlling thereof
WO2010150973A1 (en) * 2009-06-23 2010-12-29 Lg Electronics Inc. Shutter glasses, method for adjusting optical characteristics thereof, and 3d display system adapted for the same
JP2011017910A (en) * 2009-07-09 2011-01-27 Panasonic Corp Liquid crystal display device
WO2011149558A2 (en) * 2010-05-28 2011-12-01 Abelow Daniel H Reality alternate
US20140168277A1 (en) * 2011-05-10 2014-06-19 Cisco Technology Inc. Adaptive Presentation of Content
US8605199B2 (en) * 2011-06-28 2013-12-10 Canon Kabushiki Kaisha Adjustment of imaging properties for an imaging assembly having light-field optics
KR101180119B1 (en) * 2012-02-23 2012-09-05 (주)올라웍스 Method, apparatusand computer-readable recording medium for controlling display by head trackting using camera module
US9400551B2 (en) * 2012-09-28 2016-07-26 Nokia Technologies Oy Presentation of a notification based on a user's susceptibility and desired intrusiveness
US8856541B1 (en) * 2013-01-10 2014-10-07 Google Inc. Liveness detection
US9596508B2 (en) * 2013-03-15 2017-03-14 Sony Corporation Device for acquisition of viewer interest when viewing content
US9734797B2 (en) * 2013-08-06 2017-08-15 Crackle, Inc. Selectively adjusting display parameter of areas within user interface
CN105280158A (en) * 2014-07-24 2016-01-27 扬升照明股份有限公司 Display device and control method of backlight module thereof

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120140993A1 (en) * 2010-12-05 2012-06-07 Unisys Corp. Secure biometric authentication from an insecure device
US20120243729A1 (en) * 2011-03-21 2012-09-27 Research In Motion Limited Login method based on direction of gaze

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ASAD ALI ET AL.: ""Liveness Detection using Gaze Collinearity"", 《2012 THIRD INTERNATIONAL CONFERENCE ON EMERGING SECURITY TECHNOLOGIES》 *
ROBERT W.FRISCHHOLZ ET AL.: ""Avoiding Replay-Attacks in a Face Recognition System using Head-Pose Estimation"", 《IEEE INT.WORKSHOP ON ANALYSIS AND MODELING OF FACES AND GESTURES》 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295288B (en) * 2015-06-10 2019-04-16 阿里巴巴集团控股有限公司 A kind of information calibration method and device
CN106295287B (en) * 2015-06-10 2019-04-09 阿里巴巴集团控股有限公司 Biopsy method and device and identity identifying method and device
CN106295288A (en) * 2015-06-10 2017-01-04 阿里巴巴集团控股有限公司 A kind of information calibration method and device
CN106295287A (en) * 2015-06-10 2017-01-04 阿里巴巴集团控股有限公司 Biopsy method and device and identity identifying method and device
WO2016197389A1 (en) * 2015-06-12 2016-12-15 北京释码大华科技有限公司 Method and device for detecting living object, and mobile terminal
CN105518715A (en) * 2015-06-30 2016-04-20 北京旷视科技有限公司 Living body detection method, equipment and computer program product
CN105518714A (en) * 2015-06-30 2016-04-20 北京旷视科技有限公司 Vivo detection method and equipment, and computer program product
CN105518582A (en) * 2015-06-30 2016-04-20 北京旷视科技有限公司 Vivo detection method and device, computer program product
CN105518582B (en) * 2015-06-30 2018-02-02 北京旷视科技有限公司 Biopsy method and equipment
WO2017000217A1 (en) * 2015-06-30 2017-01-05 北京旷视科技有限公司 Living-body detection method and device and computer program product
WO2017000218A1 (en) * 2015-06-30 2017-01-05 北京旷视科技有限公司 Living-body detection method and device and computer program product
CN105005779A (en) * 2015-08-25 2015-10-28 湖北文理学院 Face verification anti-counterfeit recognition method and system thereof based on interactive action
CN105184246A (en) * 2015-08-28 2015-12-23 北京旷视科技有限公司 Living body detection method and living body detection system
US10528849B2 (en) 2015-08-28 2020-01-07 Beijing Kuangshi Technology Co., Ltd. Liveness detection method, liveness detection system, and liveness detection device
CN105260726B (en) * 2015-11-11 2018-09-21 杭州海量信息技术有限公司 Interactive video biopsy method and its system based on human face posture control
CN105260726A (en) * 2015-11-11 2016-01-20 杭州海量信息技术有限公司 Interactive video in vivo detection method based on face attitude control and system thereof
WO2018177312A1 (en) * 2017-03-30 2018-10-04 北京七鑫易维信息技术有限公司 Authentication method, apparatus and system
CN106803829A (en) * 2017-03-30 2017-06-06 北京七鑫易维信息技术有限公司 A kind of authentication method, apparatus and system
WO2019011099A1 (en) * 2017-07-14 2019-01-17 Oppo广东移动通信有限公司 Iris living-body detection method and related product
TWI625679B (en) * 2017-10-16 2018-06-01 緯創資通股份有限公司 Live facial recognition method and system

Also Published As

Publication number Publication date
US20160062456A1 (en) 2016-03-03
WO2014184436A1 (en) 2014-11-20

Similar Documents

Publication Publication Date Title
US8819812B1 (en) Gesture recognition for device input
US8942419B1 (en) Position estimation using predetermined patterns of light sources
JP6039072B2 (en) Method, storage medium and apparatus for mobile device state adjustment based on user intent and / or identification information
CN105378595B (en) The method for calibrating eyes tracking system by touch input
US20140157209A1 (en) System and method for detecting gestures
US10108961B2 (en) Image analysis for user authentication
Orchard et al. Converting static image datasets to spiking neuromorphic datasets using saccades
EP2546782A1 (en) Liveness detection
KR20150122123A (en) Systems and methods for authenticating a user based on a biometric model associated with the user
US8984622B1 (en) User authentication through video analysis
US8549418B2 (en) Projected display to enhance computer device use
Sugano et al. Appearance-based gaze estimation using visual saliency
CN103620620A (en) Using spatial information in device interaction
KR101165537B1 (en) User Equipment and method for cogniting user state thereof
US9390340B2 (en) Image-based character recognition
JP6342458B2 (en) Improved facial recognition in video
EP2879095B1 (en) Method, apparatus and terminal device for image processing
KR20160099432A (en) Electronic device and method for registration finger print
US9955349B1 (en) Triggering a request for an authentication
CN105339868A (en) Visual enhancements based on eye tracking
Porzi et al. A smart watch-based gesture recognition system for assisting people with visual impairments
US20160062456A1 (en) Method and apparatus for live user recognition
JP5965404B2 (en) Customizing user-specific attributes
KR101637107B1 (en) Orientation aware authentication on mobile platforms
US20150302252A1 (en) Authentication method using multi-factor eye gaze

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20160112

Address after: Espoo, Finland

Applicant after: Nokia Technologies Oy

Address before: Espoo, Finland

Applicant before: Nokia Oyj

C41 Transfer of patent application or patent right or utility model
RJ01 Rejection of invention patent application after publication

Application publication date: 20141126

RJ01 Rejection of invention patent application after publication