WO2021014417A1 - Method for recognizing a living body - Google Patents

Method for recognizing a living body

Info

Publication number
WO2021014417A1
Authority
WO
WIPO (PCT)
Prior art keywords
data group
image
data
iris
representing
Application number
PCT/IB2020/057012
Other languages
French (fr)
Inventor
Massimo PANELLA
Emanuele Pucci
Antonello ROSATO
Original Assignee
Machine Learning Solutions S.R.L.
Application filed by Machine Learning Solutions S.R.L. filed Critical Machine Learning Solutions S.R.L.
Publication of WO2021014417A1 publication Critical patent/WO2021014417A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Definitions

  • each image captured by the camera 10 has one or more of the characteristics recommended for iris recognition (for further information, consult Richard P. Wildes, "Iris recognition: an emerging biometric technology", Proceedings of the IEEE, 1997).
  • the numeral 100 denotes a device for recognizing a living body.
  • the device 100 comprises a camera 10.
  • the device 100 comprises a light emitter 20.
  • the device 100 comprises a control unit.
  • the control unit is connected to the camera 10 and/or to the light emitter 20 for sending to one and/or the other respective control signals to instruct them to capture an image and to emit a light signal, respectively.
  • the device 100 also comprises a display 30.
  • the device 100 is a tablet, a smartphone or a computer (laptop or desktop).
  • the camera 10 has a resolution of between 100 and 300 dpi in the eye area, preferably 150 dpi.


Abstract

A method for recognizing a living body comprises the following steps: (F10) receiving first image data (101), representing a first image, at a first time instant; (F11) identifying, amongst the first image data (101), a first data group (101A) representing a portion of the first image; (F12) storing the first data group (101A); (F20) receiving second image data (102), representing a second image, at a second time instant, following the first time instant; (F21) identifying, amongst the second image data (102), a second data group (102A) representing a portion of the second image; (F40) comparing the first data group (101A) with the second data group (102A); (F30) sending a control signal (201) to a light emitter (20) to drive it to emit a light signal in a time interval between the first time instant and the second time instant.

Description

DESCRIPTION
METHOD FOR RECOGNIZING A LIVING BODY
Technical field
This invention relates to a method for recognizing a living body and an apparatus for recognizing a living body.
Background art
In the sector of digital equipment and web service platforms, there is increasingly widespread use of web services for carrying out bank transactions, money transfers and user authentication. Such activities usually require the entry of passwords, which allow the specific activity to be carried out.
In the cybersecurity sector, there are many methods for making a password secure, for example the common techniques of "hashing" passwords to render them less vulnerable. However, even in the presence of such techniques, password security could still be vulnerable to cyber attacks using the "brute-force" method (also known as an exhaustive key search), in which a computer tries all theoretically possible passwords until the correct one is found.
Such attacks can be avoided only by verifying that the password is entered by a living being and not by a computer taught for that purpose.
Therefore, it is necessary to verify that the device is actually used by a human being and not by a computer.
Moreover, in other situations, it is necessary to verify the presence of a human being continuously in front of the electronic device. This may be necessary, for example, when a check has to be carried out to ensure that a worker is actually at his/her work station. In this case too, the positioning of a photograph of a human in front of the camera could distort the assessment. In order to solve such problems, there are prior art solutions in which a camera sends a device computer an image of the zone in front of the device in which the user is generally positioned when using the device. Using image processing techniques, for example by assessing the boundaries, the presence or absence of a living being is verified. However, such solutions are vulnerable to the positioning, in front of the camera, of a photograph depicting an image of a person.
There are other solutions, for example described in document US2018336397A1, in which the camera captures various images of the zone in front of the device, each of which has different illumination. By processing the image (light and shade zones, skin tones), differences revealing the three-dimensional nature of the subject are discovered. However, even these solutions may be circumvented by positioning a three-dimensional dummy, which simulates the behaviour of a human body.
Moreover, there are other prior art technical solutions described in the following scientific documents:
- XINYU HUANG ET AL.: "An experimental study of pupil constriction for liveness detection", Applications of Computer Vision (WACV), 2013 IEEE Workshop on, IEEE, 15 January 2013, pages 252-258;
- KANG RYOUNG PARK: "Robust Fake Iris Detection", in Francisco J. Perales et al. (eds.), Articulated Motion and Deformable Objects, Lecture Notes in Computer Science (LNCS), Springer, Berlin, DE, 1 January 2006, pages 10-18.
Disclosure of the invention
The aim of this invention is to make available a method and an apparatus which overcome the above-mentioned technical disadvantages.
Said aim is fulfilled by the method and by the apparatus which form the subject matter of this invention, which is characterized as described in the appended claims.
According to one aspect of this description, this invention makes available a method for recognizing a living body.
The method comprises a step of receiving first image data, representing a first image, at a first time instant.
The method comprises a step of identifying (filtering, selecting), amongst the first image data, a first data group. The first data group represents a portion of the first image. That portion of the first image is preferably at least one eye of a human face.
The method comprises a step of storing the first data group.
The method comprises a step of receiving second image data, representing a second image, at a second time instant which, preferably, follows the first time instant.
The method comprises a step of identifying (filtering, selecting), amongst the second image data, a second data group. The second data group represents a portion of the second image. That portion of the second image is preferably at least one eye of a human face. In fact, for a reliable comparison, the portion of the first image and the portion of the second image must have the same subject.
The method comprises a step of comparing the first data group with the second data group. Appropriate assessment parameter values are derived from the result of that comparison. Such assessment parameters are essentially quantities which vary as a function of the data groups compared and which may be used to establish whether the body is alive. For example, but without limiting the scope of the invention, the assessment parameter may be the difference in the size of the iris or of the pupil, the difference in the distance between the eyelids, the difference in the ratio between the dimensions of the iris and of the pupil, or differences between a combination of the above-mentioned dimensions. The assessment parameter may therefore be of diverse types, provided that a variation in it is indicative of the fact that the eye has undergone a change consistent with a human change.
Therefore, calculation of the assessment parameter may be a subtraction, integration or any mathematical operation which might lead to the definition of a value of the assessment parameter, which is indicative of the behaviour of the user in front of the camera.
In one possible variant, the method comprises a step of sending a control signal to a light emitter. That control signal instructs the light emitter to emit a light signal or trace. In one embodiment, that control signal instructs the light emitter to emit a light signal or trace in an interval of time between the first time instant and the second.
The second image data, in that embodiment, represent a second image, which is captured at an instant following emission of the light wave.
Capturing the second image after a light stimulus makes it possible to verify what effect that light has had on the eye. In particular, since the eye and its organs are light-sensitive, a reaction to the light only occurs in the presence of a human. In one embodiment, the first data group and the second data group represent the morphology and the position of the pupil and/or iris, relative to other reference points of the face, at a first time instant and at a second time instant, respectively.
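By way of illustration only, the capture-stimulus-capture protocol just described can be summarised in the following Python sketch; it is not part of the patent text, and the helpers capture_image, flash_light and measure_eye are hypothetical placeholders for the camera driver, the light emitter control and the segmentation pipeline described below.

```python
# Illustrative sketch only, not part of the patent text.
# capture_image(), flash_light() and measure_eye() are hypothetical
# placeholders for the camera driver, the light emitter and the
# iris/pupil segmentation pipeline described later in the description.

def is_living_body(capture_image, flash_light, measure_eye, threshold=0.05):
    """Return True if the eye reacts to a light stimulus between two shots."""
    first_image = capture_image()             # step F10: first time instant
    first_group = measure_eye(first_image)    # step F11: first data group

    flash_light()                             # step F30: light stimulus

    second_image = capture_image()            # step F20: second time instant
    second_group = measure_eye(second_image)  # step F21: second data group

    # Assessment parameter: variation of the iris-to-pupil radius ratio.
    # A living pupil constricts under light, so the ratio changes.
    variation = abs(second_group["iris_to_pupil_ratio"]
                    - first_group["iris_to_pupil_ratio"])
    return variation > threshold              # step F40: comparison
```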
According to one aspect of this description, alternatively to or in combination with what was just described, the step of comparing comprises one or more of the following steps:
- determining a first geometrical figure from the first data group according to a predetermined criterion;
- determining a first geometrical reference point, corresponding to the barycentre of the first geometrical figure;
- determining a second reference geometrical figure from the second data group according to the predetermined criterion;
- determining a second geometrical reference point, corresponding to the barycentre of the second geometrical figure;
- determining a first significant distance for the first data group, as a function of the distance between the first geometrical reference point and a first reference point of the eye, determined from the first data group according to a predetermined reference criterion; in one embodiment, according to the predetermined reference criterion, the first reference point of the eye is the centre of the iris (calculated as a function of the first data group). However, the reference point could also be another reference point, for example the centre of the pupil or another recognizable zone of the eye (again calculated as a function of the first data group).
- determining a second significant distance for the second data group, as a function of the distance between the second geometrical reference point and a second reference point of the eye, determined from the second data group according to said predetermined reference criterion.
- comparing the first significant distance and the second significant distance to determine differences between the first data group and the second data group, for example to determine a significant deviation between the first significant distance and the second significant distance.
The step of comparing comprises, for each of said first and second data groups (that is to say, for each pupil or for each iris captured) one or more of the following steps:
- determining a first iris circle, having a respective iris centre and a respective iris radius, and a first pupil circle, having a pupil centre and a respective pupil radius, as a function of the first data group;
- determining a second iris circle, having a respective iris centre and a respective iris radius, and a second pupil circle, having a pupil centre and a respective pupil radius, as a function of the second data group;
- determining reference points of the face, in particular the barycentre of a rectangular area identifying each eye (the so-called "eye bounding box").
In other words, for each data group, therefore for each image captured by the camera (or stills camera or other optical device suitable for image capture), there is determination of a circle which approximates the dimensions of the pupil, a circle which approximates the dimensions of the iris and a set of metrics for the distance between the elements of the eyes and the reference points of the face.
In one example embodiment of the method, the step of comparing comprises a step of determining a first ratio between the iris radius and the pupil radius of the first data group. In one example embodiment of the method, in the step of determining, there is also determination of the position of the iris centre relative to the centre of the eye bounding box of the first data group. The step of comparing comprises a step of determining a second ratio between the iris radius and the pupil radius of the second data group. The step of comparing comprises a step of determining a second set of the above-mentioned values based on the second data group.
The method comprises a step of calculating a value which, in an example embodiment, is obtained as a comparison relative to suitable thresholds of the ratio between iris and pupil radii and/or the position deviation between centre of the iris and barycentre of the eye bounding box. The method comprises a step of calculating a value which, in an example embodiment, is obtained as the absolute value of the difference between the second ratio and the first ratio.
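As a minimal numerical sketch of the two calculations just mentioned (the ratio between iris and pupil radii, and the deviation between iris centre and bounding-box barycentre), assuming the circles and the bounding box have already been fitted; the threshold is illustrative and, as stated above, must be calibrated for the application context:

```python
import math

def indicative_value(r1_iris, r1_pupil, r2_iris, r2_pupil):
    """Absolute difference between the iris/pupil radius ratios of the
    first and second data groups."""
    w1 = r1_iris / r1_pupil   # first ratio (first data group)
    w2 = r2_iris / r2_pupil   # second ratio (second data group)
    return abs(w2 - w1)

def centre_deviation(iris_centre, box_barycentre):
    """Position deviation between iris centre and bounding-box barycentre."""
    return math.dist(iris_centre, box_barycentre)

# Illustrative threshold: a constricting pupil makes the ratio grow.
RATIO_THRESHOLD = 0.05
print(indicative_value(5.0, 2.0, 5.0, 1.6) > RATIO_THRESHOLD)  # True
```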
In one embodiment, the method comprises a step of sending the control signal to the light emitter again, in order to instruct it to emit a further light wave at an instant after reception of the second image data.
The method comprises a step of receiving third image data, representing a third image captured at an instant after the further light wave has been emitted.
The method comprises a step of identifying, amongst the third image data, a third data group, representing a portion of the third image, including said at least one eye of a human face, preferably an iris and a pupil of the human face.
The method comprises a step of comparing the third data group with the second data group, to derive additional values of the assessment parameters. The step of comparing the third data group with the second data group comprises one or more of the steps described relative to the comparison between the first and second data groups. In one embodiment, the step of comparing the third data group with the second data group includes one or more of the following steps:
- determining a corresponding iris circle, having a respective iris centre and a respective iris radius as a function of the third data group;
- determining a corresponding pupil circle, having a respective pupil centre and a respective pupil radius as a function of the third data group;
- determining a third ratio between the iris radius and the pupil radius for the iris circle relating to the third data group;
- determining a position deviation between the centre of the iris and barycentre of the eye bounding box.
- calculating additional values, relative to appropriate thresholds, for all of the metrics calculated (that is to say, for all of the comparisons performed for each image or data group).
Determination of the additional values of the assessment parameter involves substantially the same steps carried out to determine the first value of the assessment parameter on successive images of the eye. Therefore, the additional values of the assessment parameter are values which also allow an understanding of the trend in the assessment parameter over time. Therefore, in a sequence of images, each image allows the determination of a value of the assessment parameter relative to the previously captured image and relative to the subsequently captured image. That large number of assessment parameter values allows averaging and reduction of false positives.
In one embodiment, the method comprises a step of processing additional values. In the processing step, an eye deviation is calculated, defined as the deviation between the values calculated relative to the real data groups. In the processing step, the presence of a living body is determined (diagnosed, assessed) as a function of any eye deviations.
It should be noticed that, in a preferred embodiment, the method comprises receiving a plurality of image data, which correspond to a plurality of images captured in sequence. The method comprises a plurality of steps of sending the control signal to instruct the light emitter to emit a light signal. Each of said steps of sending the control signal is performed in a period of time between two steps of receiving image data chronologically in sequence. In that way, a better assessment is possible of the variation generated on the eye by the light waves. Corresponding to said plurality of image data is a plurality of data groups, each identified within the image data relating to the respective image.
In that embodiment, the method comprises a plurality of steps of calculating the indicative value, obtained by comparing one data group, relating to one image, with the data group relating to the previously captured image. Therefore, at the end of that step, a plurality of indicative values have been calculated. Said plurality of indicative values is processed in order to derive information representing the presence or absence of a living body.
In one example embodiment, the method comprises the step of identifying the first and/or the second data group, processing one or more of the following steps:
- a step of recognizing boundary data, representing boundaries in the first and/or in the second image, preferably through a recognition algorithm;
- comparing the first image data or the second image data with the boundary data recognized respectively amongst the first image data and amongst the second image data, and using said comparison to identify said first data group and said second data group according to a predetermined comparison logic. In one embodiment, the predetermined comparison logic is that the first data group and/or the second data group comprises the data relating to zones of the respective first and/or second image which are inside the recognized boundary and/or the recognized position deviation.
The boundary data are image data which identify, within an image, portions of the image which are boundaries of objects detected in the image itself. Therefore, the boundary data could be pixels of the image which divide the pixels of one object from the pixels of another object. Specifically, in this invention, the boundary data could be the pixels which divide the iris from the pupil.
The step of recognizing boundary data may be performed using one or more of the known algorithms in the sector for determining boundaries in the images, including by way of example only, the Canny algorithm. In one embodiment, the step of identifying comprises a step of filtering, through a filter. In the filtering step, the filter removes noise data, reflections and occlusions, representing false boundaries, from the first image data and/or from the second image data.
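A possible realisation of this identifying step, sketched here with OpenCV (an assumption of this example, not mandated by the text), uses an averaging filter followed by the Canny algorithm; the kernel size and the hysteresis thresholds are illustrative:

```python
import cv2

def extract_boundary_data(image_path):
    """Filter the image, then extract candidate boundary pixels (Canny)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Averaging filter to remove noise, reflections and false boundaries
    # (the description later mentions a 10 x 10 pixel average filter).
    smoothed = cv2.blur(gray, (10, 10))
    # Canny edge detection; the thresholds are illustrative and need tuning.
    edges = cv2.Canny(smoothed, 50, 150)
    return edges  # non-zero pixels are the recognized boundary data
```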
In one embodiment, the method comprises a step of excluding. The step of excluding comprises one or more of the following steps:
- identifying occluding data within the first data group and/or the second data group; said occluding data representing portions of the first and/or second image in which the eye is covered by interfering bodies;
- excluding the occluding data from the first and/or from the second data group.
According to one example embodiment, in the step of receiving, the first and/or the second image data represent a first and a second image having a resolution greater than 150 dpi (dots per inch).
In one embodiment, the step of excluding the occluding data comprises a step of recognizing an object in the image (for example identifying its boundaries with the Canny algorithm or with other image detection algorithms). The object recognized is then compared with a plurality of reference data (reference images), in order to identify potential objects extraneous to the image which may therefore be occluding objects. The method comprises removing from the image the pixels which identify the occluding body identified.
The step of excluding brings very important advantages to the method, since it also allows verification in the presence of bodies which partly cover the eyes. In contrast, the prior art systems malfunction or are interrupted by the presence of bodies positioned in front of the eyes.
According to one aspect of this description, this invention makes available a system (that is to say, a device) for recognizing a living body.
The device comprises a camera (or, a stills camera or any other optical device suitable for image capture), configured to capture first image data and/or second image data. The first image data and/or second image data represent a first image, captured at a first time instant, and a second image, captured at a second time instant. The second time instant follows the first time instant. The device comprises a control unit, connected to the camera and including a computer.
It should be noticed that the terms device or system may be used according to the embodiment adopted as regards the positioning of the control unit. More precisely, in an embodiment of the system, the control unit (therefore the computer) is remote and connected to a plurality of “client” devices by a remote connection or by cable.
In that embodiment, each client device sends the first and the second image data to the remote control unit, which processes the respective data and sends a result of the check to the corresponding client device.
In other embodiments, this invention intends to protect a device for recognizing a living body, in which the control unit is local and therefore the steps of the recognizing method are performed directly on each individual client device.
The device comprises a memory, connected to the control unit in order to receive data to be stored.
The device comprises a light emitter, connected to the control unit and configured to emit light waves. That emitter may be the screen of the client device.
The computer is programmed to perform one or more of the steps of the method for recognizing a living body which are described according to this invention, whether they are described in the claims, in the description or in the drawings.
In an example embodiment, the light emitter is driven by the control unit to emit a light signal at a time instant between the first time instant and the second time instant.
The camera is configured to capture third image data, representing a third image, captured at a third time instant, following the second time instant. In one embodiment, the light emitter is instructed by the control unit to emit a further light signal at a time instant between the second time instant and the third time instant.
In a particularly advantageous variant of the device, the camera has a minimum resolution of 150 dpi. In an example embodiment, the minimum resolution of the camera is 300 dpi.
According to one aspect of this description, this invention describes a computer program comprising software for performing one or more of the steps of the method for recognizing a living body which are described according to this invention, when run on the device according to any one of the claims described in this document.
Brief description of the drawings
These and other features will be more apparent from the following description of a non-limiting preferred embodiment, illustrated by way of example only in the accompanying drawings, in which:
- Figure 1 is a schematic illustration of the steps of a method for recognizing a living body;
- Figure 2 is a schematic illustration of a detail of a step of comparing illustrated in Figure 1;
- Figure 3 is a schematic illustration of a detail of a step of identifying illustrated in Figure 1;
- Figure 4 shows a variant of the method illustrated in Figure 1;
- Figures 4A and 4B show a first and a second variant of the method of Figure 1, respectively;
- Figures 5A and 5B show a geometrical model of an iris and a pupil of a human eye and an image of a human eye, with a relative eye bounding box, respectively.
Detailed description of preferred embodiments of the invention
This invention illustrates a method for recognizing a living person.
The method comprises a step F10 of receiving first image data 101, representing a first image, at a first time instant. Before the first time instant, the method comprises the sending of a respective capture signal, for instructing a camera to capture the first image. In one embodiment, the image is captured with the orthogonal method, that is to say, a method used in the context of cooperative recognition, which is carried out when the user actively participates during the capturing step, by keeping the head in a fixed position, so that the centre of the iris is aligned with the optical centre of the camera.
Therefore, in the step F10, a computer receives a predetermined number of pixels, which represent the first image.
The method comprises a step F11 of identifying a first data group 101A, representing a portion of the first image. In an example embodiment, said portion of image includes a human eye, preferably an iris and a pupil of a human eye.
Essentially, in the step F11 of identifying, the computer filters the pixels which represent the portion of the first image including the pupil and the iris.
In one embodiment, the step F11 of identifying is obtained by means of segmentation of the first image.
The method comprises a step F12 of storing, in which the first data group 101A is stored in a memory.
The method comprises a step F20 of receiving second image data 102, representing a second image, at a second time instant. Before the second time instant, the method comprises the sending of a respective capture signal, for instructing a camera to capture the second image. Therefore, in the step F20, a computer receives a predetermined number of pixels, which represent the second image.
The method comprises a step F21 of identifying a second data group 102A, representing a portion of the second image. In an example embodiment, said portion of image includes the same human eye, preferably the same iris and the same pupil.
Essentially, in the step F21 of identifying, the computer filters the pixels which represent the portion of the second image including the pupil and the iris.
In one embodiment, the step F21 of identifying is obtained by means of segmentation of the second image. In the technical field of segmentation algorithms, various segmentation techniques are known, amongst which the most famous are the one based on the Hough transform (for more information about this known algorithm, consult for example the following scientific publications:
- Qi-Chuan Tian, Quan Pan, Yong-Mei Cheng, and Quan-Xue Gao. Fast algorithm and application of Hough transform in iris. Proceedings of the IEEE Conference on Machine Learning and Cybernetics, 2005;
- W. M. K Wan Mohd Khairosfaizal and A. J. Nor’aini. Eyes detection in facial images using circular hough transform. 2009 5th International Colloquium on Signal Processing and Its Applications (CSPA), 2009.) and the one proposed by Daugman (for more information about this known algorithm, consult for example the following publication:
- J. Daugman. How iris recognition works. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2004.). The present application is based on the Hough transform, an algorithm widely used in the context of processing images for determining the parameters of simple geometrical figures within an image. In the present context, in which it is necessary to assess the shape and the diameter of the pupil, it may therefore be used to obtain radii and centres of the boundaries corresponding to iris and pupil (CHT: Circular Hough Transform).
The Hough transform may be seen as a transformation of a point of the plane (x, y) of the image to the space of the parameters, which is defined based on the geometrical figure to be identified. For the circle the search takes place in a three-dimensional space, being the parametric representation of the circle:
x = xc + r·cos θ
y = yc + r·sin θ
The space of the parameters is therefore defined by the coordinates of the centres (xc, yc) and by the values of the radii r of the circles.
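In practice, the circular Hough transform described here is available off the shelf, for example as cv2.HoughCircles in OpenCV; the sketch below is an illustrative assumption (including the radius bands and the file name) and finds the pupil and the iris circles in separate radius ranges:

```python
import cv2

def fit_circle(gray, min_radius, max_radius):
    """Return (xc, yc, r) of the strongest circle in a radius band (CHT)."""
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT,
        dp=1,                     # accumulator at full image resolution
        minDist=gray.shape[0],    # expect one dominant circle per band
        param1=150,               # Canny high threshold used internally
        param2=30,                # minimum accumulator votes
        minRadius=min_radius, maxRadius=max_radius)
    if circles is None:
        return None
    xc, yc, r = circles[0][0]     # best-voted circle
    return float(xc), float(yc), float(r)

gray = cv2.blur(cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE), (10, 10))
pupil = fit_circle(gray, 10, 40)   # pupil: small radii (illustrative band)
iris = fit_circle(gray, 40, 120)   # iris: larger radii (illustrative band)
```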
In one embodiment, each of said step F11 of identifying the first data group 101A and said step F21 of identifying the second data group 102A comprises one or more of the following steps (a sketch of the accumulator procedure follows the list):
- F211, F111 filtering of the first image data 101 and/or of the second image data with a filter; in one example embodiment, the step of filtering is a filtering with a 10 x 10 pixel averaging filter for removing false boundaries;
- F212, F112 extracting (capturing, recognizing, identifying) boundary data amongst the first image data and/or the second image data, for example by means of the Canny method (or Canny algorithm) known in the image processing sector;
- for each boundary datum which identifies a point of a boundary in the first and/or in the second image, drawing a circle with centre at said point and radius r;
- incrementing, within an accumulation matrix, all of the coordinates of the points which define the perimeter of the circle drawn;
- identifying a maximum point in the accumulation matrix whose coordinates correspond to the parameters of the circle which best approximates the iris-sclera boundary in the first and/or in the second image;
- F213, F113 identifying (extracting) the first data group 101A and/or the second data group 102A which best approximates the portion of the circle CS including only the iris of the human eye in the first and/or in the second image.
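The accumulator procedure in the list above can be transcribed directly (and unoptimised) as follows; this didactic sketch assumes the boundary data come from the Canny step and votes for circle centres at a fixed set of candidate radii:

```python
import numpy as np

def circular_hough(edges, radii):
    """Vote in a (y, x, radius) accumulation matrix for each boundary pixel."""
    h, w = edges.shape
    acc = np.zeros((h, w, len(radii)), dtype=np.uint32)
    ys, xs = np.nonzero(edges)                  # boundary data (e.g. Canny)
    thetas = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    for k, r in enumerate(radii):
        for x, y in zip(xs, ys):
            # Centres of all circles of radius r passing through (x, y):
            # xc = x - r cos(theta), yc = y - r sin(theta)
            xc = np.round(x - r * np.cos(thetas)).astype(int)
            yc = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (xc >= 0) & (xc < w) & (yc >= 0) & (yc < h)
            np.add.at(acc, (yc[ok], xc[ok], k), 1)
    # The maximum of the accumulation matrix gives the best-fitting circle.
    yc, xc, k = np.unravel_index(np.argmax(acc), acc.shape)
    return xc, yc, radii[k]
```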
Moreover, in one example of application of the method, each of said step F11 of identifying the first data group 101A and said step F21 of identifying the second data group 102A comprises a respective step F214, F114 of excluding. In the step F214, F114 of excluding, the first data group 101A and the second data group 102A are filtered to exclude the occluding data representing portions of the first and second image in which the eye is covered by interfering bodies. In particular, in the steps F214, F114 of excluding, for identifying any occlusions caused by eyelashes and eyelids, the method proposed by Wildes et al. and by Kong and Zhang may be used (for more information about this known algorithm, consult the following publication:
- W. K. Kong and D. Zhang. Accurate iris segmentation based on novel reflection and eyelash detection model. Proceedings of 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, ISIMP 2001.)
The method uses the parabolic Hough transform to identify, within the first data group 101A and/or the second data group 102A (within the first and/or the second image), occluding data which represent the upper and lower eyelids, approximating them with parabolic arcs. In the steps F214, F114 of excluding, the occluding data identified in the first data group 101A and/or in the second data group 102A are excluded so that they do not introduce artefacts during the step of normalization.
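As a hedged sketch of the parabolic Hough idea (the parameter grids and the masking convention are assumptions of this example, not taken from the text), each boundary point votes for parabolas y = a(x − x0)² + y0, and the best-voted arc is then used to mask the eyelid:

```python
import numpy as np

def parabolic_hough(edges, a_values, x0_values):
    """Return (a, x0, y0) of the parabola y = a*(x - x0)**2 + y0 with the
    most votes from boundary pixels (candidate eyelid arc)."""
    h, _ = edges.shape
    ys, xs = np.nonzero(edges)
    votes = {}
    for a in a_values:                 # curvature candidates
        for x0 in x0_values:           # vertex abscissa candidates
            # Each boundary point (x, y) fixes y0 = y - a*(x - x0)**2.
            y0s = np.round(ys - a * (xs - x0) ** 2).astype(int)
            for y0 in y0s[(y0s >= 0) & (y0s < h)]:
                key = (a, x0, int(y0))
                votes[key] = votes.get(key, 0) + 1
    return max(votes, key=votes.get)

def eyelid_mask(shape, a, x0, y0, upper=True):
    """Boolean mask of the pixels occluded by the eyelid arc."""
    h, w = shape
    xs = np.arange(w)
    arc = a * (xs - x0) ** 2 + y0       # eyelid boundary, one y per column
    rows = np.arange(h)[:, None]
    # Upper eyelid occludes everything above the arc (smaller row indices).
    return rows < arc[None, :] if upper else rows > arc[None, :]
```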
In one embodiment, the method comprises a step F30 of sending a control signal 201 to a light emitter 20, to instruct it to emit a light signal in an interval of time between the first time instant and the second time instant.
In that way, by means of the light emission, variations in the conditions of the eye are caused, in particular in the ratio between the iris and the pupil in the first image and in the second image.
According to one embodiment of this invention, the method comprises a step F40 of comparing the first data group 101 A with the second data group 102A. The step F40 of comparing, in one example of application of the method, comprises one or more of the following steps:
- F410 determining, for each of said first data group 101A and second data group 102A, a corresponding first BB1 and second BB2 reference geometrical figure, commonly known as a "bounding box";
- F420 determining, for each of said first data group 101A and second data group 102A, a respective barycentre BC1, BC2 of the corresponding first BB1 reference geometrical figure and second BB2 reference geometrical figure;
- F41 determining an iris circle C1i and a pupil circle C1p for the first data group 101A; said iris circle C1i having an iris radius r1i and an iris centre O1i; said pupil circle C1p having a respective pupil radius r1p and a respective pupil centre O1p;
- F42 determining an iris circle C2i and a pupil circle C2p for the second data group 102A; said iris circle C2i having a respective iris radius r2i and a respective iris centre O2i; said pupil circle C2p having a respective pupil radius r2p and a respective pupil centre O2p;
- F430 determining a first significant distance D1 for the first data group 101A, calculated as the absolute value of the distance between the barycentre BC1 of the first reference geometrical figure BB1 and the iris centre O1i of the first data group 101A;
- F440 determining a second significant distance D2 for the second data group 102A, calculated as the absolute value of the distance between the barycentre BC2 of the second reference geometrical figure BB2 and the iris centre O2i of the second data group 102A;
- F43 determining a first ratio w1, defined as the ratio between the iris radius r1i and the respective pupil radius r1p relative to the first data group 101A;
- F44 determining a second ratio w2, defined as the ratio between the iris radius r2i and the respective pupil radius r2p relative to the second data group 102A;
- F45 calculating an indicative value, defined as the difference between the second ratio w2 and the first ratio w1, preferably as an absolute value;
- F450 calculating an indicative deviation S1, as a function of the value of the first significant distance D1 and of the second significant distance D2.
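Assuming that the iris and pupil circles have already been extracted by the segmentation steps described earlier, the sketch below computes the quantities just listed; the dictionary layout and key names are hypothetical conventions chosen for the example.

```python
import numpy as np

def bounding_box(points):
    """Axis-aligned bounding box of the eye-region pixel coordinates (F410)."""
    pts = np.asarray(points)
    return pts[:, 0].min(), pts[:, 1].min(), pts[:, 0].max(), pts[:, 1].max()

def barycentre(bb):
    """Barycentre of the bounding box (F420)."""
    x_min, y_min, x_max, y_max = bb
    return np.array([(x_min + x_max) / 2.0, (y_min + y_max) / 2.0])

def assessment_parameters(group1, group2):
    """group1/group2: dicts describing the first (101A) and second (102A)
    data groups, with keys 'points', 'iris_centre', 'iris_radius' and
    'pupil_radius'. Returns the indicative value (F45) and deviation S1 (F450)."""
    w1 = group1['iris_radius'] / group1['pupil_radius']    # first ratio w1 (F43)
    w2 = group2['iris_radius'] / group2['pupil_radius']    # second ratio w2 (F44)
    indicative = abs(w2 - w1)                              # indicative value (F45)

    bc1 = barycentre(bounding_box(group1['points']))
    bc2 = barycentre(bounding_box(group2['points']))
    d1 = np.linalg.norm(bc1 - np.asarray(group1['iris_centre']))   # D1 (F430)
    d2 = np.linalg.norm(bc2 - np.asarray(group2['iris_centre']))   # D2 (F440)
    s1 = abs(d2 - d1)                                      # indicative deviation S1
    return indicative, s1
```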
Having obtained the indicative value, the computer is capable of assessing whether or not a living body is present. Therefore, the method comprises a step of diagnosing, in which the computer assesses whether or not a living body is present as a function of the indicative value. For example, the assessment could be performed by comparing the value with thresholds, to be appropriately calibrated as a function of the application context, so as to determine whether there has been a variation due to the photographed subject actually being a living subject.
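A minimal diagnosing rule, for instance, might look like the sketch below; the threshold values and the way the two parameters are combined are placeholders to be calibrated for the application context, not values prescribed by the method.

```python
def is_living(indicative, s1, ratio_threshold=0.05, deviation_threshold=2.0):
    """Living body deemed present if the pupil/iris ratio changed enough,
    or if the eye geometry shifted within its bounding box."""
    return indicative > ratio_threshold or s1 > deviation_threshold
```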
In one embodiment, the iris circle and the pupil circle may not be concentric, in which case the distance between the edge of the pupil and the edge of the iris is not constant but varies as a function of the angle considered, as described in L. Masek, Recognition of human iris patterns for biometric identification, 2003.
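A minimal numerical sketch of this angle-dependent band width, assuming the pupil circle lies entirely within the iris circle, is the following; it intersects the ray leaving the pupil centre at angle theta with the iris circle.

```python
import numpy as np

def band_width(pupil_centre, pupil_radius, iris_centre, iris_radius, theta):
    """Distance from the pupil edge to the iris edge along the ray from the
    pupil centre at angle theta; constant only if the circles are concentric."""
    u = np.array([np.cos(theta), np.sin(theta)])
    d = np.asarray(pupil_centre, dtype=float) - np.asarray(iris_centre, dtype=float)
    # Solve |d + t*u| = iris_radius for t >= 0 (ray-circle intersection).
    b = np.dot(d, u)
    t = -b + np.sqrt(b * b - (np.dot(d, d) - iris_radius ** 2))
    return t - pupil_radius
```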
In a preferred form of application of the method, which increases the reliability of the measurement, more than two images are captured; for the purpose of this text, n denotes the number of images captured by the camera 10. In that embodiment, the method comprises one or more of the following steps:
- F1i receiving a plurality of image data 10i, representing a corresponding plurality of images of the subject;
- F11i identifying a plurality of data groups 10iA, each associated with a respective image of the subject and including the eye of a human face, preferably the iris and/or the pupil;
- F40i determining, for each data group of said plurality of data groups 10iA, a corresponding iris circle (characterized by the respective iris radius and iris centre) and a corresponding pupil circle (characterized by the respective pupil radius and a respective pupil centre);
- F410i determining, for each data group of said plurality of data groups 10iA, a corresponding reference geometrical figure BBi (bounding box);
- F420i determining, for each data group of said plurality of data groups 10iA, a barycentre BCi of the corresponding reference geometrical figure (bounding box);
- F4i determining, for each iris circle and pupil circle of each data group, a corresponding ratio wi between the radii, defined as the ratio between the iris radius and the pupil radius;
- F430i, F440i determining, for each data group of said plurality of data groups 10iA, a significant distance Di, for example calculated as the absolute value of the distance between the barycentre BCi of the corresponding reference geometrical figure BBi and the iris centre Oi, calculated for the corresponding data group;
- F45i calculating a plurality of indicative values, each indicative value being determined as the difference between the ratio of the radii of one image and the ratio of the radii of the preceding image;
- F450i calculating a plurality of significant deviations Si, each significant deviation being determined as a function of the significant distance Di of the corresponding data group and of the significant distance of the data group corresponding to the previously captured image;
- F50 assessing (processing) the plurality of indicative values and/or the plurality of significant deviations Si to determine an indication of whether or not a living body is present.
Therefore, in this embodiment, the method comprises assessing a plurality of single images, repeating n times the operations described for the basic two-image solution. This gives the assessment a much higher level of reliability.
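As an illustration, and restricting the sketch to the ratio track for brevity, the n-image assessment could be organized as below; the threshold and the majority-vote aggregation used for step F50 are assumptions chosen for the example.

```python
def assess_sequence(groups, ratio_threshold=0.05):
    """groups: per-image data groups (10iA), ordered by capture time.
    Returns the indicative values (F45i) and a liveness indication (F50)."""
    ratios = [g['iris_radius'] / g['pupil_radius'] for g in groups]     # F4i
    indicative = [abs(b - a) for a, b in zip(ratios, ratios[1:])]       # F45i
    # Simple aggregation: alive if a pupil response is seen in the
    # majority of consecutive image pairs.
    votes = sum(v > ratio_threshold for v in indicative)
    return indicative, votes > len(indicative) / 2
```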
It should be noted that, in a particularly advantageous embodiment of the method, each image captured by the camera 10 has one or more of the following characteristics (for further information, consult Richard P. Wildes, Iris recognition: an emerging biometric technology, Proceedings of the IEEE, 1997; a minimal screening sketch is given after the list):
- sufficient resolution in the eye area: at least 150 dpi;
- high contrast in the image region, obtained as far as possible without light sources that disturb the subject to be recognized;
- absence of specular reflections within the iris.
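The resolution requirement is checked against the capture metadata (optics and subject distance), but the other two characteristics can be screened directly on the pixels, as in the sketch below; the thresholds are illustrative assumptions.

```python
import numpy as np

def image_quality_ok(gray_eye_region, min_contrast=40.0, saturation=250):
    """gray_eye_region: 8-bit grayscale crop of the eye area.
    Screens for low contrast and for saturated pixels that would indicate
    specular reflections within the iris."""
    contrast = gray_eye_region.std()                 # crude contrast measure
    specular_fraction = np.mean(gray_eye_region >= saturation)
    return contrast >= min_contrast and specular_fraction < 1e-3
```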
With reference to the accompanying figures, the numeral 100 denotes a device for recognizing a living body. The device 100 comprises a camera 10. The device 100 comprises a light emitter 20. The device 100 comprises a control unit. The control unit is connected to the camera 10 and/or to the light emitter 20 for sending respective control signals to one and/or the other, instructing them to capture an image and to emit a light signal, respectively. The device 100 also comprises a display 30.
In one embodiment, the device 100 is a tablet, a smartphone or a computer (laptop or desktop).
In one embodiment, the camera 10 has a resolution of between 100 and 300 dpi in the eye area, preferably 150 dpi.

Claims

1. A method for recognizing a living body, comprising the following steps:
- (F10) receiving first image data (101), representing a first image of the face of the body to be recognized and captured by a camera at a first time instant;
- (F11) identifying, from the first image data (101), a first data group (101A) representing a portion of the first image, the portion of the first image including an eye;
- (F12) storing the first data group (101A);
- (F20) receiving second image data (102), representing a second image of the face and captured by a camera at a second time instant, following the first time instant;
- (F21) identifying, from the second image data (102), a second data group (102A) representing a portion of the second image, the portion of the second image including the eye;
- (F40) comparing the first data group (101A) with the second data group (102A) to calculate a value of an assessment parameter,
characterized in that it comprises a step (F30) of sending a control signal (201) to a light emitter (20) to drive it to emit, between the first time instant and the second, a light signal which is perceivable by the eye.
2. The method according to claim 1, wherein the first data group (101A) and the second data group (102A) represent a pupil and an iris of the human eye.
3. The method according to claim 2, wherein the step (F40) of comparing the first data group (101A) and the second data group (102A) comprises, for each of said first data group (101A) and second data group (102A), the following steps:
- (F41) determining a first iris circle (C1i) having a respective iris centre (O1i) and a respective iris radius (r1i), and a first pupil circle (C1p) having a pupil centre (O1p) and a respective pupil radius (r1p), as a function of the first data group (101A);
- (F42) determining a second iris circle (C2i) having a respective iris centre (O2i) and a respective iris radius (r2i), and a second pupil circle (C2p) having a pupil centre (O2p) and a respective pupil radius (r2p), as a function of the second data group (102A).
4. The method according to claim 3, wherein the step (F40) of comparing comprises the following steps:
- (F43) determining a first ratio between the iris radius (r1i) and the pupil radius (r1p) of the first data group (101A);
- (F44) determining a second ratio between the iris radius (r2i) and the pupil radius (r2p) of the second data group (102A);
- (F45) calculating the value of the assessment parameter using the absolute value of the difference between the second ratio and the first ratio.
5. The method according to any one of the preceding claims, wherein the method comprises the following steps:
- sending the control signal to the light emitter (20) again, in order to drive it to emit a further light signal after the second time instant;
- receiving third image data, representing a third image of the face and captured after the further light signal has been emitted;
- identifying, from the third image data, a third data group, representing a portion of the third image, the portion of the third image including the eye;
- comparing the third data group with the second data group (102A) to derive an additional value of the assessment parameter.
6. The method according to claim 5, comprising a step (F50) of processing the value and the additional value of the assessment parameter.
7. The method according to claim 6, wherein the step of comparing the third data group with the second data group includes the following steps:
- determining a corresponding iris circle, having a respective iris centre and a respective iris radius as a function of the third data group;
- determining a corresponding pupil circle, having a respective pupil centre and a respective pupil radius as a function of the third data group;
- determining a third ratio between the iris radius and the pupil radius for the iris circle relating to the third data group;
- calculating the additional value of the assessment parameter using the absolute value of the difference between the third ratio and the second ratio.
8. The method according to any one of the preceding claims, wherein the steps (F11, F21) of identifying the first data group (101A) and the second data group (102A) each comprise:
- a step (F112, F212) of recognizing boundary data, representing a boundary in the first or the second image, through a recognition algorithm;
- comparing the first image data (101) or the second image data (102) with the respective boundary data according to a predetermined comparison logic;
- (F113, F213) identifying the first data group (101A) and the second data group (102A) as a function of the step of comparing.
9. The method according to any one of the preceding claims, wherein the step (F11, F21) of identifying comprises a step (F111, F211) of filtering, through a filter that removes noise data, representing false boundaries, from the first image data (101) and from the second image data (102).
10. The method according to any one of the preceding claims, comprising, for both the first data group (101A) and the second data group (102A), a corresponding step (F114, F214) of excluding, including the following steps:
- identifying the occluding data within the respective first data group (101A) or second data group (102A), the occluding data representing portions of the first and second image in which the eye is covered by interfering bodies;
- excluding the occluding data from the respective first data group (101A) or second data group (102A).
11. The method according to any one of the preceding claims, comprising the following steps:
- determining a first geometrical figure (BB1) from the first data group (101A) according to a predetermined criterion;
- determining a first geometrical reference point, corresponding to the barycentre (BC1) of the first geometrical figure (BB1);
- determining a second geometrical figure (BB2) from the second data group (102A) according to the predetermined criterion;
- determining a second geometrical reference point, corresponding to the barycentre (BC2) of the second geometrical figure (BB2);
- determining a first significant distance (D1) for the first data group (101A), as a function of the distance between the first geometrical reference point and a first reference point of the eye, determined from the first data group (101A) according to a predetermined reference criterion;
- determining a second significant distance (D2) for the second data group (102A), as a function of the distance between the second geometrical reference point and a second reference point of the eye, determined from the second data group (102A) according to a predetermined reference criterion;
- comparing the first significant distance (D1) and the second significant distance (D2) to determine a significant deviation (S1) representing the differences between the first data group (101A) and the second data group (102A).
12. A system (100) for recognizing a living body, comprising:
- a camera (10), configured to capture first image data (101), representing a first image of the face of the body to be recognized, at a first time instant, and second image data (102), representing a second image of the face at a second time instant, following the first time instant;
- a control unit, connected to the camera (10) and including a computer, programmed to:
identify from the first image data (101) a first data group (101A) representing a portion of the first image, the portion of the first image including an eye;
identify from the second image data (102) a second data group (102A) representing a portion of the second image, the portion of the second image including the eye;
compare the first data group (101A) with the second data group (102A);
- a memory connected to the control unit;
- a light emitter (20), connected to the control unit and configured to emit light waves;
characterized in that the control unit is programmed to drive the light emitter (20) to emit, between the first time instant and the second, a light signal which is perceivable by the eye.
13. The system (100) according to claim 12, wherein the camera (10) is configured to capture third image data, representing a third image, at a third time instant, following the second time instant, and wherein the light emitter (20) is driven by the control unit to emit a further light signal at a time instant between the second time instant and the third time instant.
14. The system (100) according to claim 12 or 13, wherein the camera (10) has a minimum resolution of 150 dpi.
15. A computer program comprising instructions for performing the steps according to any one of claims 1 to 11, when run on the system according to any one of claims 12 to 14.
PCT/IB2020/057012 2019-07-25 2020-07-24 Method for recognizing a living body WO2021014417A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT102019000012852 2019-07-25
IT102019000012852A IT201900012852A1 (en) 2019-07-25 2019-07-25 METHOD TO RECOGNIZE A LIVING BODY

Publications (1)

Publication Number Publication Date
WO2021014417A1 true WO2021014417A1 (en) 2021-01-28

Family

ID=68733486

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/057012 WO2021014417A1 (en) 2019-07-25 2020-07-24 Method for recognizing a living body

Country Status (2)

Country Link
IT (1) IT201900012852A1 (en)
WO (1) WO2021014417A1 (en)


Also Published As

Publication number Publication date
IT201900012852A1 (en) 2021-01-25


Legal Events

121 (EP): the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 20756738; country of ref document: EP; kind code of ref document: A1.
NENP: non-entry into the national phase. Ref country code: DE.
32PN (EP): public notification in the EP bulletin as address of the addressee cannot be established. Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.06.2022).
122 (EP): PCT application non-entry in European phase. Ref document number: 20756738; country of ref document: EP; kind code of ref document: A1.