An authentication device for forming an image of at least a partial area of an eye retina
The present invention relates to an authentication device using data of at least a partial area of an eye retina, said device comprising illumination means for forming an illumination channel, said illumination means comprising a first light source, in particular an infrared light source, provided for generating a first light beam in order to illuminate said partial area of said retina.
Such a device is known from US-A-4109237. In the known device, the illumination means are provided for illuminating a partial area of the retina by successive illumination of individual spots on the retina. In such a manner, a circular scanning is realised for collecting data of the scanned part of the retina by using the light reflected by the retina.
A drawback of the known device is that it requires either mechanical movement of some components, numerous duplications of emitters and/or receivers, or expensive acousto-optical elements.
An object of the present invention is to realise a homogeneous radiation density on the partial area of the retina in order to produce an image thereof. A device according to the invention is therefore characterised in that said device further comprises imaging means arranged in an imaging channel and ending in an image sensor for forming said image with light reflected by said retina, said illumination means and said imaging means comprising an eyepiece applied into said illumination and said imaging channel, said eyepiece being provided for focusing light originating from said first light source at a first point in said illumination channel substantially corresponding to a position where a pupil of said eye has to be positioned, said eyepiece being further provided for focusing said reflected light on an image plane recordable by said image sensor. By focusing light, originating from the first light
source, at a position where the eye pupil has to be positioned, an image of the first light source is created at that position. In such a manner, the "imaged" first light source forms a light source at the pupil, which makes it possible to illuminate the retina area under consideration in a homogeneous manner. Scattered and/or reflected radiation from the retina is then collected by the eyepiece in order to form an image of the considered retina area on the image plane.
A first preferred embodiment of a device according to the invention is characterised in that a beam splitter is applied into said illumination and said imaging channel, said beam splitter being provided for orienting said first light beam towards said retina and for orienting said imaging channel towards said image sensor. The beam splitter makes it possible, on the one hand, to combine the illumination and the imaging channel along a common axis in line with the eye and, on the other hand, to separate the channels in the vicinity of the image sensor.
A second preferred embodiment of a device according to the present invention is characterised in that an optical member is applied into said imaging channel between said image plane and said image sensor, said optical member being provided for projecting an image of said retina area formed on said image plane on said image sensor. The optical member makes it possible to apply optical operations, such as for example magnification, to the image.
Preferably, an aperture stop is applied between said image plane and said image sensor at a position where an eye pupil is imaged. The aperture stop enhances the depth of field and can be used for making the image's luminosity independent of the eye's pupil size.
Preferably, polarising means are used when coupling said illumination channel into said imaging channel. The polarising means enable a better separation of the imaging channel and the illumination channel.
Preferably, said imaging means are designed to be substantially telecentric. This makes it possible to obtain a consistent image size.
A third preferred embodiment of a device according to the invention is characterised in that it further comprises eye positioning means, provided for enabling a positioning of said eye substantially at said position, said eye positioning means comprising a second light source, provided for emitting second light beams of visible light intersecting at said position. The positioning means help the user to position his eye correctly, in such a manner that his pupil position substantially corresponds to the position of the first point. Since the second light beams generated by the second light source intersect at the considered position, the user will only see all the light of the second light beams if his eye is correctly positioned.
Preferably, said eye positioning means comprise at least one picture located in said second light beam and illuminated therewith, said picture(s) being applied in such a manner that it is (they are) sharply displayed on said retina. The use of a picture provides a user friendly device.
A fourth preferred embodiment of a device according to the invention is characterised in that a further beam splitter is applied in said illumination and positioning channel, said further beam splitter being provided for orienting said positioning channel towards said retina. This provides more flexibility for the illumination and the positioning channel.
A fifth preferred embodiment of a device according to the invention is characterised in that eye fixation means are provided for selecting said partial retina area, said eye fixation means comprising a target which is imaged by means of a visible target light beam on said retina. This makes it possible to choose a particular part of the retina for imaging purposes.
Preferably, said illumination and said imaging channel have an optical axis, said second light beam having a light beam axis which is off-axis with respect to said optical axis, and said target light beam being on said light beam axis. In such a manner, central viewing, which can be used with either eye, is possible.
Preferably, said further beam splitter is a wavelength selective beam splitter provided for selectively orienting said imaging and said positioning channel in distinct directions. Light of different wavelengths can thus be used for imaging and positioning purposes, while using the same optics.
A sixth preferred embodiment of a device according to the invention is characterised in that said eyepiece, said optical member and said first and second light sources are rigidly fixed within said device. No adjustments of the optics are required once the device is built.
Preferably, said device comprises pattern projection means provided for projecting a predetermined pattern on said retina, said imaging means being provided for forming on said image plane a further image of said pattern with light reflected by said retina, said further image being recordable by said image sensor. Recognition of the selected retina part thereby becomes easier.
A seventh preferred embodiment of a device according to the invention is characterised in that said image sensor is connected to image processing means provided for applying an authentication operation to an image recorded by said image sensor. The processing of the recorded image can thus be performed.
The invention will now be described with reference to the drawings illustrating different embodiments of a device according to the invention. In the drawings:
figure 1 shows a set-up of the illumination and imaging channel within the device;
figure 2 shows a set-up of the device using an intermediate image plane;
figure 3 shows the incorporation of an eye detection target into the illumination channel;
figure 4 illustrates an example of the positioning means;
figure 5 shows an implementation of a circular set-up for positioning purposes;
figures 6 and 7 illustrate the use of an image target for positioning;
figure 8 shows an embodiment of the positioning means using a microlens array;
figure 9 shows an embodiment of the positioning means using wedges;
figure 10 illustrates an example of a retina structure;
figure 11 illustrates an implementation of an autonomous fixation target;
figure 12 illustrates an implementation of a fixation target using the eyepiece optics of the illumination and imaging channel;
figure 13 illustrates an implementation of a fixation target using the beam splitter of the illumination or positioning channel;
figure 14 illustrates a device with central viewing which can be used with either of both eyes of the user; and
figure 15 shows by means of a flow chart the image processing for the authentication device according to the invention.
In the drawings, a same reference sign has been assigned to a same or analogous element.
Figure 1 illustrates a first embodiment of a retinal authentication device according to the invention for forming an image of at least a partial area of an eye retina. The device comprises a first light source 1, provided for generating a first light beam 2, which is formed either by continuous or by pulsed light. The first light source is preferably formed by a LED emitting light in the near infrared range, such as for example between 720 nm and 1300 nm. It would of course also be possible to have a first light source emitting visible light. This latter option is however less preferred, as it is less convenient for the user and causes the eye's pupil to reduce in size when the light beam is switched on. The first light beam is preferably collimated by a collimator lens 3 in order to form a parallel beam.
The first light beam, which is part of an illumination channel 10, is incident on a beam splitter formed by a semi-transparent mirror 4, in order to orient the first light beam towards the user's eye 5 and to further form the illumination channel 10.
The first light beam 2, after being reflected by the semi-transparent mirror 4, crosses an eyepiece 11. The eyepiece is formed by a set of lenses with a combined positive effect, which focuses the incoming light beam in a first point 12 situated near the eye pupil 13. In such a manner, an image of the first light source 1 is formed in that first point 12, which image forms a light source illuminating at least a part 16 of the retina 15. A substantially homogeneous illumination of the retina is thus obtained. Of course, the eyepiece 11 could also be formed by a single lens with positive effect.
The light incident on the illuminated part of the retina is then scattered by the latter and the scattered light beam crossing the eye lens 17, the eye pupil 13 and a cornea 14 leaves the eye substantially collimated and reaches the eyepiece 11. That scattered light creates an imaging channel 9. The eyepiece will focus the scattered light on the
plane of the image sensor 8, in order to form, by means of that reflected light, an image of the illuminated retina part on the image sensor. The semi-transparent mirror 4 enables the light in the imaging channel to reach the image sensor. In such a manner, the beam splitter both couples and decouples the imaging and the illumination channel. Both the illumination and the imaging channel are centred on the optical axis 18 crossing the retina, the eye pupil, the eyepiece, the beam splitter and the image sensor. The image sensor is preferably formed by a 2D CCD (Charge-Coupled Device), a CMOS (Complementary Metal Oxide Semiconductor) sensor, a CID (Charge Injection Device), a PDA (Photodiode Array) or any other imaging sensor. When pulsed light is used for the first light source 1, the pulse mode has to be synchronised with the image grabbing by the image sensor.
Preferably, polarising means 6, 7 are used, which are placed near the beam splitter 4, as illustrated in the embodiment of figure 1. The polarising means comprise a first polariser 6 provided for eliminating light from back-reflections on the image sensor 8, as will be described hereinafter. A second polariser 7 is applied in the path of the first light beam and is provided for polarising light reaching the cornea 14. Other set-ups for the polarising means than the one illustrated in figure 1 are however possible. It is important, though, that the first polariser 6 is mounted between the beam splitter and the sensor and that the second polariser 7 is mounted between the first light source 1 and the beam splitter 4. According to an alternative embodiment, the polarising means could be formed by the second polariser 7 alone in combination with a polarising semi-transparent mirror 4, or by a combination of the polarising semi-transparent mirror and both polarisers 6 and 7.
As already mentioned, preference is given to the use of polarised light for the first light beam. The second polariser 7 is provided for selecting a linear polarisation state such as, for example, the s-polarisation state. The beam splitter 4 is then chosen in such a manner that the semi-transparent mirror has a better reflection coefficient for the s-polarisation than for the p-polarisation state, so that the efficiency of the light reflection towards the eye is maximised. In such a manner, the necessary light power of the first light source and the back reflection on the image sensor are limited. The light scattered by the retina has lost its initial polarisation, resulting in a random polarisation. This causes a sufficient part of this scattered light to cross the semi-transparent mirror and reach the image sensor, since a large part of the p-polarised light passes the beam splitter. The radiation within the imaging channel is further filtered by the first polariser 6, which only enables the passage of the p-polarised light.
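The polarisation scheme above can be summarised as a simple light budget. The following sketch is illustrative only: the reflection and transmission coefficients are assumptions, since the description gives no numeric values, and the 50 % depolarisation figure merely models the random polarisation of the scattered light.

```python
# Illustrative light-budget sketch for the polarising set-up of figure 1.
# RS_REFLECT and TP_TRANSMIT are assumed beam-splitter coefficients; the
# patent only states that s-reflection is favoured over p-reflection.

RS_REFLECT = 0.8   # assumed reflectivity of splitter 4 for s-polarised light
TP_TRANSMIT = 0.8  # assumed transmissivity of splitter 4 for p-polarised light

def sensor_fraction(source_power: float) -> float:
    """Fraction of the first light source's power reaching the sensor."""
    # Polariser 7 selects s-polarisation; splitter 4 reflects it to the eye.
    at_retina = source_power * RS_REFLECT
    # The retina scatters and depolarises: roughly half returns p-polarised.
    p_polarised = at_retina * 0.5
    # The p-polarised part crosses the semi-transparent mirror and then
    # polariser 6, which only passes p-polarised light.
    return p_polarised * TP_TRANSMIT

power = sensor_fraction(1.0)
```

Under these assumed coefficients roughly a third of the emitted power reaches the sensor, which illustrates why a high s-reflectivity limits the required power of the first light source.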
Figure 2 shows a further embodiment of a device according to the present invention. The embodiment shown in this figure differs from the one illustrated in figure 1 in that a retina intermediate image plane 19 is used. This signifies that an image of the illuminated retina part is first formed on the intermediate image plane 19 and then on the image sensor 8 plane, where the final image is recorded. In order to relay the retina image from the intermediate image plane to the sensor's image plane, a first 20 and a second 21 optical element are arranged within the imaging channel 9. In the device according to the invention, the eyepiece, the optical elements and the first and second light sources are all rigidly fixed within the device. In such a manner, no adjustments are required once they have been mounted in the device.
The first optical element 20 collimates the reflected light beam, starting from the intermediate image plane 19. The second optical element 21 focuses the collimated image light beam onto the image plane of the sensor 8. The use of this first and second optical element makes it possible to choose a magnification of the image formed on the image sensor plane. In such a manner, the image of the retina part can be adjusted to the image sensor's size.
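The magnification of such a relay follows from the ratio of the two focal lengths. The sketch below is a minimal illustration; the focal lengths and sensor size are hypothetical values, as the description fixes none of them.

```python
# Illustrative sketch of the relay magnification set by optical elements
# 20 and 21. All numeric values are assumptions for illustration.

def relay_magnification(f_element20_mm: float, f_element21_mm: float) -> float:
    """Magnification of a two-lens relay: element 20 collimates the light
    from the intermediate image plane, element 21 refocuses it."""
    return f_element21_mm / f_element20_mm

def fits_sensor(retina_image_mm: float, sensor_mm: float,
                f20_mm: float, f21_mm: float) -> bool:
    """True if the relayed retina image fits within the sensor size."""
    return retina_image_mm * relay_magnification(f20_mm, f21_mm) <= sensor_mm

# Example: a 6 mm intermediate retina image demagnified 0.5x onto a
# hypothetical 4.8 mm sensor.
m = relay_magnification(40.0, 20.0)
```

Choosing f21 shorter than f20 demagnifies the intermediate image, which is how the retina image can be matched to a small sensor.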
In the embodiment illustrated in figure 2 the beam splitter 4 is situated between the optical elements 20 and 21. In this configuration the first optical element 20, which is also applied in the illumination channel 10, replaces the lens 3 of figure 1 and forms a parallel first light beam in the illumination channel. Indeed as can be seen in figure 2, the first light beam incident on the beam splitter is not collimated by a lens.
Preferably, the imaging optics, composed of the optical elements 20 and 21 and the eyepiece 11, should be designed in such a manner as to be telecentric or close to telecentric in order to obtain a consistent image size, even if the focal distance of the eye varies somewhat. Telecentricity means that the chief rays 22 impinging on the sensor 8 make an angle α with the optical axis 18 which equals 0°.
The eyepiece 11, the lens 3, the first 20 and second 21 optical elements are all made of a combination of one or more optical components, such as refractive, diffractive and/or reflective components, which can be made of glass, vitroceramics, polymers or the like. The surface of those components is spherical or aspherical and is preferably coated with an appropriate layer. The optical components used within the imaging channel are tuned to make an image of an object at infinity or at a finite distance, as the retina plane is projected at this given distance by the eye lens 17. Therefore the device according to the present invention can also be used for face recognition or as a surveillance camera.
An aperture stop 23 is mounted between the first 20 and second 21 optical element, at the place where the eye's pupil 13 is imaged by the eyepiece and the first optical element 20. The aperture stop 23 makes it possible to limit the aperture of the imaging channel to an aperture corresponding to the size of the smallest expected eye pupil, i.e. 1 to 2 mm. This prevents scattered light from, inter alia, the iris from influencing the image formed on the sensor and yields a constant image brightness, independent of the eye's pupil size. The aperture stop also reduces the numerical aperture of the device, leading to an increased depth of field and limiting the sensitivity of the imaging channel to eye defects.
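The effect of the stop on the depth of field can be estimated with textbook relations. In the sketch below, only the 1 to 2 mm stop size comes from the description; the 850 nm wavelength (within the stated 720-1300 nm range) and the 25 mm effective focal length are assumptions for illustration.

```python
import math

# Assumed near-infrared wavelength, inside the 720-1300 nm range.
WAVELENGTH_MM = 850e-6

def numerical_aperture(stop_diameter_mm: float, focal_length_mm: float) -> float:
    """Approximate numerical aperture set by a stop of the given diameter
    placed where the eye pupil is imaged (paraxial estimate)."""
    return (stop_diameter_mm / 2.0) / focal_length_mm

def depth_of_focus_mm(na: float) -> float:
    """Classical diffraction-limited depth of focus, lambda / (2 * NA^2)."""
    return WAVELENGTH_MM / (2.0 * na ** 2)

na_2mm = numerical_aperture(2.0, 25.0)
na_1mm = numerical_aperture(1.0, 25.0)
```

Halving the stop diameter halves the NA and thus quadruples the diffraction-limited depth of focus, which is the trade-off the aperture stop 23 exploits.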
Preferably, a baffle 24 is applied between the eye 5 and the eyepiece 11. The baffle may limit the influence of stray light on the image and causes, due to the reduced amount of incident light, the eye pupil to open more, which in turn allows a less stringent positioning accuracy and consequently improves the image quality.
In order to enable a user wearing glasses to keep them on during operation of the device, a sufficiently large distance must be provided between the eyepiece 11 or baffle 24 and the cornea 14. A distance between 20 and 30 mm is appropriate.
The opening angle β of the illumination, between the optical axis 18 and the outermost ray of the first light beam, considered at the first point 12, is determined by the dimension of the part of the retina to be illuminated. An angle 5° < β < 15° enables the illumination of a retina part large enough to form an image providing sufficient information for biometric authentication purposes.
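The size of the illuminated retina part can be estimated from β with a reduced-eye model. In this sketch, the 5° to 15° range for β comes from the description; the 17 mm reduced-eye focal length is a standard textbook assumption, not a value given in the patent.

```python
import math

# Reduced-eye focal length (textbook assumption, not from the description).
EYE_FOCAL_MM = 17.0

def illuminated_diameter_mm(beta_deg: float) -> float:
    """Approximate diameter of the illuminated retina part for a half-opening
    angle beta measured at the first point 12 near the pupil."""
    return 2.0 * EYE_FOCAL_MM * math.tan(math.radians(beta_deg))

d = illuminated_diameter_mm(10.0)  # mid-range angle
```

With β = 10° this gives an illuminated disc of roughly 6 mm on the retina, large enough to contain a usable vein pattern.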
As long as the eye is not correctly positioned it is not meaningful to use an image grabbed by the image sensor for authentication purpose. In order to detect a correct eye positioning it is possible to project a pattern on the retina and recognise this pattern on the generated image. Preferably, the pattern is positioned in the outer regions of the image in order to avoid interference with the information carrying image parts. Since, according to a preferred embodiment, the image sensor only images near infrared light, the pattern on the retina
should also be created in the near infrared. One straightforward implementation is to block a part of the illumination channel (dark lines, dark square, ...) in a plane that is sharply imaged on the retina. In the embodiment of figure 1, a shadow image can be created by an aperture in the collimated first light beam between the beam splitter 4 and the collimator lens 3, in a plane 90 situated just after the lens 3 and matching the image plane of the sensor 8. A further possible optical set-up for the illumination optics using such a target projection on the retina is shown in figure 3. This embodiment is comparable with the one illustrated in figure 2. The plane 90 matching the imaging plane is created by means of a lens 91 in the illumination path. The collimating lens 3 is added to restore the regular light path of the illumination subsystem. Since the pattern is incorporated into the first light beam due to its presence in plane 90, it is projected on the retina and thus displayed on the image sensor, enabling a selection of usable pictures for authentication purposes. The pattern used for eye detection can also be generated by an independent light source. The pattern shape can be tuned to the algorithms used in the detection.
The focusing of the first light beam in the first point 12 requires of course that a user correctly positions his eye at this first point. In order to help the user in positioning his eye at the first point, positioning means are provided. Figure 4 illustrates a first embodiment of such positioning means. The latter comprise a series of second light sources 30-1, 30-2, provided for producing second light beams 31 of visible light. The second light beams are preferably formed by collimated beams which are obtained by using small sized field stops 33 and collimating optics 36. The second light sources may also comprise an aperture stop 32, provided to adjust the diameter of the collimated bundle. The field stop 33 is also suitable for controlling the size or shape
of a target, such as for example a cross, which is incorporated into the second light source and has to be observed by the user.
The second light sources 30 are preferably positioned in a circle having its centre on the optical axis 18. Due to this circular set-up, a cone-shaped second light beam is formed by the different second light sources. The second light beams 31 intersect in a position 35, which coincides with the first point 12 on which the first light beam is focused. At this position 35, all the light of the second beams is concentrated in a disc having a size of at most the minimal eye pupil opening. Only if the user positions his eye substantially at this position 35 will he see a ring comprising all the beams produced by all the second light sources 30 and their respective targets. The user thus has to move his head and his eye until he sees all second light beams simultaneously. Only then will his eye position substantially correspond with that of the first point 12, and his retina will be adequately illuminated by the first light source 1.
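The positioning principle reduces to a simple geometric condition: the intersection disc of the second beams must fit entirely inside the pupil. The sketch below illustrates this; the disc and pupil diameters are assumptions consistent with the "at most the minimal eye pupil opening" requirement.

```python
# Geometric sketch of the ring-positioning principle of figure 4. The
# numeric defaults are illustrative assumptions; the description only
# requires the disc to be no larger than the minimal pupil opening.

def sees_full_ring(lateral_offset_mm: float,
                   disc_diameter_mm: float = 1.5,
                   pupil_diameter_mm: float = 2.0) -> bool:
    """True if the whole intersection disc of the second light beams enters
    the pupil, i.e. the user sees all second light sources at once."""
    return abs(lateral_offset_mm) + disc_diameter_mm / 2.0 <= pupil_diameter_mm / 2.0
```

As soon as the eye drifts laterally by more than the difference of the two radii, beams on one side of the ring miss the pupil and disappear, which is exactly the feedback the user relies on.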
The second light beams need not necessarily be formed by discrete light sources. A continuous ring of light, as illustrated in figure 5, can also be used. A light emitting ring 40, preferably formed by a Light Emitting Polymer (LEP) or a light guide, is applied around the optical axis 18. The user sees the LEP through an aperture 42 and preferably one or more baffles 41, in order to limit the eye positions from which the ring can be seen. Collimating optics 43 can be added in front of the ring of light assembly to offer a sharper view of the ring to the user. Figure 6 shows an alternative embodiment of the eye positioning means. In this embodiment, a 2D image slide 50 is incorporated into the second light beam 31. A lens 36, placed before this slide 50, collimates the second light beam, which has a limited spatial extent due to the aperture stop 54 applied adjacent to the second light source 30. The light, collimated by lens 36, passes through the slide
carrying the image to be displayed, thereby picking up the image, and reaches the second beam splitter 49. The position of the lens 36 and the aperture stop 54 need to be carefully chosen in order to produce an image of the aperture stop 54, via the eyepiece 11, at the entrance pupil 13. For correctly positioning his eye, the user then needs to adjust his position so that he can see all beams 56 originating from image 50, which intersect at position 35, corresponding to the first point. At too small or too large distances, only the central part of the displayed image is visible. When the user's eye is displaced in a lateral direction, one image side will disappear.
Figure 7 shows a further embodiment where one or more 2D-images are presented to the user. The second light source 30 illuminates the 2D-image slide 50 via a diffuser 51. The second light source is for example formed by a LED or a LEP. The slide 50 need not be transparent and illuminated by a separate light source if it is itself luminescent. Slide collimating optics 36 are placed after the slide and are followed by an aperture stop 32, when considered in the direction of the outgoing second light beam. The aperture stop 32 enables the light from a given part of the slide to follow a predetermined path through the optics. The second light beam 31 is then incident on a second or further beam splitter 49 in order to be injected into the illumination and imaging channel. The second light beams intersect at the position 52 in order to substantially coincide with the first point. The full image can only be seen by the user when all the rays of the second light beams enter the eye pupil 13, thus when the latter is correctly positioned at the position 52.
The image plane 53 of the slide 50, as formed by the optics 36 and 20, should coincide with the intermediate image plane 19. In such a manner, when the user sees a sharp image, his retina is sharply projected on the image sensor 8. In order for the intersection point 52 to
be at the correct position, the aperture stop 32 of the image optics should match with the intended pupil position 13, and thus with the aperture stop 23 in the imaging channel.
The combination of the eye positioning means with the imaging optics can be done in several ways, without coupling, with partial coupling, or with a more intimate coupling.
When using multiple targets or a "ring of light", the positioning and imaging systems can be totally independent from each other: full-featured positioning optics can be mounted around the imaging optics. This is shown in figure 4. Figure 5 also shows that there is space for the imaging and illumination subsystems in the centre of the positioning subsystem, if the optional collimating optics are not included, or are hollow in the centre.
For multiple targets or a "ring of light" it is also possible to have a partial integration of the subsystems. This is shown for the "ring of light" in figure 5, where the collimating optics 43 are in fact the eyepiece 11 of the device. In order for the positioning target (discrete targets or "ring of light") to be observed sharply through the eyepiece, the positioning target needs to be in the intermediate image plane (see 19 in figure 2) of the imaging system, where an intermediate image of the retina is formed by the eyepiece. It is important to have the targets mounted at fields outside the field of interest for the retina imaging, in order to avoid shadows formed by the target on the retina image.
In the case of the use of a full 2D-image for the eye positioning, a full coupling of both optical subsystems is needed, as it is necessary to also use the centre of the viewing field. This is illustrated in figures 6 and 7. The coupling of the imaging and eye positioning optics can be performed by the beam splitter 49, or some other partially reflecting surface. Both subsystems can use common optics, i.e. the eyepiece 11 and possibly the optical element 20.
It is advisable to prevent the visible light from the image which is imaged on the retina, as illustrated in figures 6 and 7, from forming an image on the sensor, as this would affect the homogeneity of the illumination. This can be done by the use of a wavelength selective optical element. If the first light source is an infrared source, this can be a filter at some stage between the eye positioning optics and the imaging sensor, which filter only accepts near infrared light, or the beam splitter 49 being wavelength selective, reflecting visible light but transmitting infrared light. Configurations with an infrared reflecting and visible transmitting beam splitter can of course also be envisaged. The use of a wavelength selective beam splitter yields a better transmission in the imaging optics channel than the alternatives with regular beam splitters and filters.
Figure 8 shows an alternative embodiment of the positioning means in which use is made of a microlens array. The second light source 65 is placed in the focal plane of a microlens array 66. The second light source 65 can be formed by a diffuse slide illuminated by a source 67. The slide 50 is placed close to the microlens array. A repetitive pattern 68 is introduced in the source, which pattern has a pitch 69 equal to that of the microlenses. Consider for example the second light source as a red plane with a matrix of green dots 68 aligned with respect to the microlenses. The dot size is chosen in such a way that the image of one dot, as imaged by one microlens and the eyepiece 11, has the eye pupil dimension. In such a manner, a coupling is generated between the observed colour of the pattern and the eye position. If the eye is correctly positioned, the user sees a homogeneous green plane. If the distance between the eye and the eyepiece is too large, the user will see green in the centre and a combination of green and red at the edges. If, at too large a distance from the eyepiece, the user moves his eye off the axis, he will see a lateral colour distribution in the image plane
from green on one side to red on the other. By moving his eye, the user can then find the correct position.
Another alternative for increasing the feedback to the user upon a lateral displacement of his eye is to use wedges, as illustrated in figure 9. Wedges 70 with a small deflection angle are positioned close to the central part of the slide 50, which is illuminated by collimated second light beams 55. This collimated light is produced by a collimating light source, for instance a light source 30 with a field stop 54, illuminating a collimator lens 36. The placement of a wedge serves to displace the image of the source 71 away from the nominal position 56. The size of the source is chosen small enough, and the angle deviation produced by the wedges is chosen in such a way, that the three aligned images of the stop (not deviated, through the left wedge and through the right wedge) can all enter the eye pupil together. If the user displaces his eye laterally, the light passing through one of the wedges will no longer reach the eye, and the corresponding part of the slide will appear dark. Each wedge placed close to plane 50 deviates light in a particular direction. Four wedges can be used to produce the above described effect in four different directions. The number of wedges is not restricted, and even a continuous cone could be formed.
Instead of using a single slide 50, a few slides can be stacked, each bearing a target. These targets have to be aligned by the user by placing his eye correctly in lateral position. The intensity of the eye positioning targets can be made variable, dependent on the ambient light level. This can enhance the user comfort when using the device. The slide could also be formed by an imaging micro-display displaying still or moving video images.
Figure 10 illustrates an example of a retina 16. The latter comprises a central part being the fovea 82 used by a person to observe details, i.e. when a person stares at a given point, the image of that point
will be imaged on the fovea. The retina further comprises a vein pattern 83 around the white spot 81 at the place where the optic nerve is connected to the eye. This is located at about 15.5° (84) from the fovea 82, considered in a substantially horizontal direction. For authentication using retina imaging, it is important to choose a particular part of the retina area, which part will then be projected on the image sensor. For the selection of this retina part, the user is asked to stare at a given target, which will be imaged on the fovea. As long as the user stares at the target, the eye orientation will be fixed. The efficiency of the eye fixation is increased by generating the fixation targets in a pulsed mode by using a pulsed light source, at a frequency of less than 50 Hz and preferably between 4 and 12 Hz. The eye fixation target can be combined with one of the eye positioning means or can be independent. If the fixation target is on the optical axis of the imaging channel, the fovea spot and its surroundings will be imaged on the sensor. A disadvantage of using the area around the fovea, however, is that the blood veins there are much narrower than, for example, around the optic nerve, and thus much more difficult to observe. For viewing other parts of the retina, fixation targets offset from the imaging axis 18 can be used. If the fixation target is at about 15.5° right (left) of the optical axis in the horizontal plane, the optic nerve will be in view when the user uses his left (right) eye.
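The placement of an off-axis fixation target can be sketched as follows. Only the 15.5° angle and the 4 to 12 Hz pulsing range come from the description; the eyepiece focal length and the sign convention are assumptions for illustration.

```python
import math

# Angular offset of the optic nerve from the fovea, per the description.
OPTIC_NERVE_ANGLE_DEG = 15.5

def target_offset_mm(eyepiece_focal_mm: float, eye: str) -> float:
    """Lateral offset of the fixation target in the aerial image plane needed
    to bring the optic nerve into view. The focal length value and the sign
    convention (positive = user's right of the axis) are assumptions."""
    off = eyepiece_focal_mm * math.tan(math.radians(OPTIC_NERVE_ANGLE_DEG))
    return off if eye == "left" else -off

def valid_pulse_frequency_hz(f: float) -> bool:
    """Preferred pulsing range for the fixation target per the description."""
    return 4.0 <= f <= 12.0
```

The offsets for the two eyes are mirror images of each other, which is why the device must know which eye is presented before choosing the target.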
When using the optical nerve, the device needs to know if the user presents his left or right eye, in order to be able to offer the appropriate fixation target (otherwise the system would look at the wrong side of the fovea). A solution thereto is to use external proximity detectors on the device to "see" the position of the user's head and to deduce whether the left or right eye is offered. The detectors work for example with capacitive, ultrasonic, pyro-electric or opto-electronic
sensors. Two or more detectors are placed symmetrically with respect to the vertical plane passing through the eyepiece. When the user has positioned his eye in front of the eyepiece, one sensor will be close to the face, while the other will be more distant. The detectors can also be used to activate the device from a stand-by mode.
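The left-or-right deduction from two symmetrically placed proximity detectors can be sketched as follows. The function name, the normalised readings, the threshold and the mapping of "closer detector" to eye side are illustrative assumptions, since they depend on the sensor type and the mounting geometry:

```python
def detect_eye_side(left_reading, right_reading, presence_threshold=0.2):
    """Deduce which eye is presented from two proximity detectors mounted
    symmetrically with respect to the vertical plane through the eyepiece.

    Readings are assumed to be normalised proximity values in [0, 1];
    a higher value means the user's face is closer to that detector.
    """
    # Neither detector sees a face: the device may stay in stand-by mode.
    if max(left_reading, right_reading) < presence_threshold:
        return None
    # The detector closest to the face indicates on which side the head
    # lies; the mapping below assumes the head extends towards the side
    # opposite to the presented eye (a mounting-dependent assumption).
    return "right" if left_reading > right_reading else "left"
```

The same readings can serve to wake the device from stand-by: any reading above the presence threshold indicates an approaching user.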
Figure 11 shows a detailed embodiment of an autonomous fixation target 96, which is intended to be positioned at a given angle from the optical axis and which aims a collimated third beam of visible light, produced by a third light source, directly at the eye. The target comprises a LED 97 generating a bright illumination of the field stop 95 placed beyond the LED 97. The light crossing the field stop is collected by the lens 93, beyond which the aperture stop 92 is positioned.
Figure 12 illustrates an implementation of the fixation target using the eyepiece 11 of the illumination and imaging channel. The third light source is placed in the aerial image plane 19 of the retina. The fixation targets are visible light sources 98 with an aperture 95 in order to limit their spatial extent as seen by the user. The eyepiece 11 has to be designed for large field angles because the targets are viewed through it, i.e. at angles close to 15.5°. A disadvantage of this set-up is that the eye fixation targets can block light from the imaging or illumination channel.
Figure 13 illustrates an implementation of the fixation target using a beam splitter 72. In practice this could be the first beam splitter 4, introduced for coupling the illumination and the imaging channel, or the second beam splitter 49, introduced for coupling the positioning and the imaging channel. Using the second beam splitter for projecting the targets makes it possible to project targets on or around the optical axis 18 without generating shadows on the retina image. The complete field in plane 19 can therefore be used for illumination and imaging purposes. The targets and field stop for eye fixation are disposed, as in figure 12, on the aerial retinal image plane or a plane
coinciding with the retinal image plane. If the first beam splitter is used for coupling the fixation targets to the illumination channel, and wavelength filtering is performed to allow only infrared light onto the image sensor, the first beam splitter cannot be placed behind the second beam splitter, as seen from the user's side. This would inject visible light into that part of the device where only infrared light should be present; consequently, this visible light would not reach the eyepiece and the imaging channel would be disturbed.
A disadvantage, when using the optical nerve or some other retinal feature located elsewhere than the fovea, is that the user has to stare off-axis if the imaging, illumination and eye positioning optics are all axial, as was assumed until now. It is however also possible to have the eye positioning and eye fixation optics substantially axial with respect to the eyepiece, and the illumination and imaging optics off-axis. Two illumination and imaging optics are then needed to allow the use of either the left or the right eye. A possible set-up for this is shown in figure 14, which illustrates a device with central viewing that can be used with both eyes. A wide-angle eyepiece (typically a 50° field) is used. The eye positioning target is implemented on the optical axis of the eyepiece, as was done in figure 6. The eye fixation target is now in the centre of the target on the optical axis. It can be included as a feature in the slide of the positioning subsystem, or can be a light source in the image plane 19 on the optical axis 18. On both sides of the optical axis, at about 15.5° if the optical nerves are to be imaged, an imaging subsystem is mounted, each equipped with an illumination subsystem, exactly as shown in figure 2. The eyepiece 11 is common to all optical subsystems. In figure 14 only the imaging optical path is shown in the upper half, while the lower half only shows the illumination optical path. Of course, both are to be used in the same subsystem in order to have the device operational.
If the area of interest is chosen above or below the fovea, or if the same eye is always presented, the same approach of off-axis imaging and illumination can be used, but with only one imaging/illumination subsystem. Care has to be taken that, when imaging parts of the retina other than the fovea, the part of the retina that is imaged is sensitive to rotation of the eye around the optical axis of the system (the system might then, for instance, look above or below the optical nerve). For this reason, the orientation of the head should be fixed. This can be done by ergonomic features of the housing, by a second dummy eyepiece, or by using two parallel systems (with on-axis targets and off-axis illumination and imaging optics, as shown in figure 14).
The image sensor 8 is connected to image processing means (not shown), which are generally formed by a microprocessor and a memory. After being recorded by the image sensor, the image of the illuminated retina area is transmitted to the processing means in order to be grabbed and formed into a picture. That picture is generally used for authentication purposes, which means that a comparison with stored patterns is required. Figure 15 illustrates schematically, by means of a flow chart, the different operations performed for analysing an image of a retina part and generating biometric templates.
The processing is started (100) once an image is formed on the image plane. The analogue image formed on the image plane is grabbed (101) by the processor and converted into a digital picture (102), for example by an A/D converter. The picture is then processed (103), whereby several operations can be performed, such as a check whether the picture comprises sufficient information for extracting the data necessary for authentication purposes, a verification that retina data is indeed available, or a verification of picture sharpness or illumination intensity.
The check could also include a verification that a sufficient part of the region of interest of the retina is imaged, that no artefacts are present in the picture, etc. If it is established that the picture does not comprise useful data (103 N), an error message is generated (104) and supplied to the operator. This error message may include a feedback message in order to adapt the image grabbing. After generation of an error message the process is restarted. The different checks are for example realised by using grey-scaling techniques. If the picture is accepted by the processor (103 Y), it is further improved (105), for example by using digital filters in order to reduce the noise, increase the contrast, sharpen the picture, eliminate artefacts, etc. It is also possible to combine different pictures to form an average picture. Besides highlighting the distinctive features, the processing could also suppress possibly variable features in the eye or artefacts in the picture. In a retinal picture it is mainly the vascular pattern which is stable. The present processing step could also comprise a selection of a region of interest, digital filtering and other picture processing (106).
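A minimal sketch of such a picture-acceptance check, using simple grey-scale statistics; the function name and the threshold values are illustrative assumptions, not values taken from the device description:

```python
def picture_acceptable(pixels, min_mean=30.0, max_mean=225.0, min_spread=20):
    """Crude acceptance test (cf. step 103) for a grabbed retina picture.

    pixels: flat sequence of 8-bit grey values (0..255).
    Rejects pictures that are under- or over-exposed, or too flat to
    contain usable vessel contrast. All thresholds are illustrative.
    """
    mean = sum(pixels) / len(pixels)
    if not (min_mean <= mean <= max_mean):
        return False  # illumination intensity out of range
    # Require a minimum grey-level spread: a nearly uniform picture
    # cannot contain an observable vascular pattern.
    return (max(pixels) - min(pixels)) >= min_spread
```

A real implementation would add the further checks named in the text (region-of-interest coverage, artefact detection, sharpness), but they all follow the same pattern: compute a grey-scale statistic and compare it against a bound.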
The filtering (106) is for example a quadruple convolution realised with the four kernels described below (kernel vertical, kernel horizontal, kernel diagonal 1, kernel diagonal 2), as illustrated in table 1. From the four pictures obtained by these convolutions, a result picture is obtained by holding, for each pixel, the maximum pixel value of the four pictures. Alternative kernels highlighting linear structures can be used.
A binary picture is generated by setting to one the pixels whose value is greater than a predetermined threshold, and to zero the pixels whose value does not exceed the threshold.
TABLE 1

Kernel diagonal 1

-0.20 -0.57 0.00 0.00 0.00 0.00 0.00 -0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
-0.57 0.00 0.00 0.00 0.00 0.00 -0.02 -0.05 -0.03 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 -0.02 -0.05 -0.03 0.07 0.21 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 -0.02 -0.05 -0.03 0.07 0.21 0.25 0.05 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 -0.02 -0.05 -0.03 0.07 0.21 0.25 0.05 -0.28 -0.45 0.00 0.00 0.00 0.00
0.00 0.00 -0.02 -0.05 -0.03 0.07 0.21 0.25 0.05 -0.28 -0.45 -0.28 0.05 0.00 0.00 0.00
0.00 -0.02 -0.05 -0.03 0.07 0.21 0.25 0.05 -0.28 -0.45 -0.28 0.05 0.25 0.21 0.00 0.00
-0.02 -0.05 -0.03 0.07 0.21 0.25 0.05 -0.28 -0.45 -0.28 0.05 0.25 0.21 0.07 -0.03 0.00
0.00 -0.03 0.07 0.21 0.25 0.05 -0.28 -0.45 -0.28 0.05 0.25 0.21 0.07 -0.03 -0.05 -0.02
0.00 0.00 0.21 0.25 0.05 -0.28 -0.45 -0.28 0.05 0.25 0.21 0.07 -0.03 -0.05 -0.02 0.00
0.00 0.00 0.00 0.05 -0.28 -0.45 -0.28 0.05 0.25 0.21 0.07 -0.03 -0.05 -0.02 0.00 0.00
0.00 0.00 0.00 0.00 -0.45 -0.28 0.05 0.25 0.21 0.07 -0.03 -0.05 -0.02 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.05 0.25 0.21 0.07 -0.03 -0.05 -0.02 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.21 0.07 -0.03 -0.05 -0.02 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.03 -0.05 -0.02 0.00 0.00 0.00 0.00 0.00 -0.57
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.02 0.00 0.00 0.00 0.00 0.00 -0.57 -0.20

Kernel vertical

0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 -0.03 -0.05 0.08 0.28 0.09 -0.40 -0.40 0.09 0.28 0.08 -0.05 -0.03 0.00 0.00
0.00 0.00 -0.03 -0.05 0.08 0.28 0.09 -0.40 -0.40 0.09 0.28 0.08 -0.05 -0.03 0.00 0.00
0.00 0.00 -0.03 -0.05 0.08 0.28 0.09 -0.40 -0.40 0.09 0.28 0.08 -0.05 -0.03 0.00 0.00
0.00 0.00 -0.03 -0.05 0.08 0.28 0.09 -0.40 -0.40 0.09 0.28 0.08 -0.05 -0.03 0.00 0.00
0.00 0.00 -0.03 -0.05 0.08 0.28 0.09 -0.40 -0.40 0.09 0.28 0.08 -0.05 -0.03 0.00 0.00
0.00 0.00 -0.03 -0.05 0.08 0.28 0.09 -0.40 -0.40 0.09 0.28 0.08 -0.05 -0.03 0.00 0.00
0.00 0.00 -0.03 -0.05 0.08 0.28 0.09 -0.40 -0.40 0.09 0.28 0.08 -0.05 -0.03 0.00 0.00
0.00 0.00 -0.03 -0.05 0.08 0.28 0.09 -0.40 -0.40 0.09 0.28 0.08 -0.05 -0.03 0.00 0.00
0.00 0.00 -0.03 -0.05 0.08 0.28 0.09 -0.40 -0.40 0.09 0.28 0.08 -0.05 -0.03 0.00 0.00
0.00 0.00 -0.03 -0.05 0.08 0.28 0.09 -0.40 -0.40 0.09 0.28 0.08 -0.05 -0.03 0.00 0.00
0.00 0.00 -0.03 -0.05 0.08 0.28 0.09 -0.40 -0.40 0.09 0.28 0.08 -0.05 -0.03 0.00 0.00
0.00 0.00 -0.03 -0.05 0.08 0.28 0.09 -0.40 -0.40 0.09 0.28 0.08 -0.05 -0.03 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Kernel diagonal 2 is the mirror image of Kernel diagonal 1 about the vertical axis. Kernel horizontal is the transpose of the Kernel vertical matrix.
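The quadruple convolution and the subsequent thresholding of the result picture can be sketched as follows. For brevity the sketch uses illustrative 3×3 line-highlighting kernels instead of the 16×16 kernels of table 1; the function names are illustrative, and since all four kernels here are symmetric, plain correlation coincides with true convolution:

```python
def convolve2d(img, kernel):
    """Same-size 2D correlation with zero padding (pure Python).
    For the symmetric kernels used here this equals convolution."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    iy, ix = y + ky - oy, x + kx - ox
                    if 0 <= iy < h and 0 <= ix < w:
                        acc += img[iy][ix] * kernel[ky][kx]
            out[y][x] = acc
    return out

# Illustrative 3x3 line-highlighting kernels standing in for the 16x16
# kernels of table 1 (vertical, horizontal and two diagonal kernels).
K_V = [[-1, 2, -1], [-1, 2, -1], [-1, 2, -1]]
K_H = [list(r) for r in zip(*K_V)]     # horizontal = transpose of vertical
K_D1 = [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]
K_D2 = [row[::-1] for row in K_D1]     # diagonal 2 = vertical mirror of diagonal 1

def filter_and_binarise(img, threshold):
    """Quadruple convolution: per pixel keep the maximum response of the
    four kernels, then binarise the result against a fixed threshold."""
    responses = [convolve2d(img, k) for k in (K_V, K_H, K_D1, K_D2)]
    h, w = len(img), len(img[0])
    return [[1 if max(r[y][x] for r in responses) > threshold else 0
             for x in range(w)] for y in range(h)]
```

The relations between the four kernels mirror those stated in the text: the horizontal kernel is the transpose of the vertical one, and the second diagonal kernel is the vertical mirror of the first.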
A first generated picture then forms (107) an initial standard biometric template of the considered retina part. The standard template is created for each user and uniquely identifies him. The operations 100 to 103 can be repeated a predetermined number of times, and
each generated picture is compared with the initial standard biometric template in order to improve the reliability of the standard template. If the compared templates are substantially similar, the last generated template is stored as the standard template; if not, the last generated template is rejected. If too many rejections have been observed, the whole process is restarted. For this purpose, each rejection is memorised, for example by means of a counter. The standard biometric template preferably has the form of a standard code comprising the distinctive features of the retina of the user. This template may be encrypted and should preferably be independent of the design parameters of the retinal imaging device.
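The template-confirmation loop with its rejection counter can be sketched as below; the function names, the similarity predicate and the limits are illustrative assumptions:

```python
def build_standard_template(acquire, similar, max_rejections=3, n_confirm=4):
    """Create an initial standard biometric template (cf. step 107) and
    confirm it with repeated acquisitions.

    acquire: callable returning a freshly generated template
    similar: callable(template_a, template_b) -> bool
    Returns the confirmed standard template, or None if too many
    rejections occurred (in which case the whole process is restarted).
    """
    standard = acquire()          # initial standard template
    rejections = 0                # rejection counter from the text
    confirmed = 0
    while confirmed < n_confirm:
        candidate = acquire()
        if similar(candidate, standard):
            standard = candidate  # keep the last accepted template
            confirmed += 1
        else:
            rejections += 1
            if rejections > max_rejections:
                return None       # too many rejections: restart
    return standard
```

With templates modelled as simple numbers and similarity as a tolerance test, `build_standard_template(acquire, lambda a, b: abs(a - b) <= 2)` tolerates an occasional outlier acquisition while converging on a stable template.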
The generation of a standard template is followed by a check (108) evaluating the template properties themselves, or comparing them to independently acquired biometric properties of the same eye. If the operation only comprises the generation of a standard biometric template, that template is stored in a memory (109) and the processing is stopped thereafter. If, however, an authentication operation has to be performed, for example for enabling access, the process continues with a comparison operation (110), where the just acquired template is compared with the one assigned to the user. If the comparison matches (110 Y), access is allowed (112); if not, an error message is generated (110 N) and access is refused. The biometric template can be stored in a local, central or distributed memory.
The computing device can base its decision (110) on one or more evaluations of similarity between templates. The authentication device according to the invention can be used:
- to enrol a user, i.e. after a check of his identity, record his retinal biometric template and store it in a database together with identity information for later authentication;
- to authenticate a user, based on the comparison of a previously stored template and one or more freshly acquired templates, after the user claimed a given identity;
- to identify a user that enrolled before, based on the comparison of a series of stored templates and one or more freshly acquired templates;
- to check that a user was not enrolled yet, based on the comparison of a series of stored templates and one or more freshly acquired templates;
- to verify the template stored for a given user, based on the comparison of a stored template and one or more freshly acquired templates.
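The use cases listed above all reduce to one-to-one or one-to-many template comparisons. A minimal sketch, assuming templates are opaque values compared by a boolean match predicate (all names illustrative):

```python
def authenticate(stored, fresh_list, match):
    """1:1 check: does at least one freshly acquired template match the
    template stored for the claimed identity?"""
    return any(match(stored, fresh) for fresh in fresh_list)

def identify(database, fresh_list, match):
    """1:N search: return the identifier of an enrolled user whose stored
    template matches a fresh acquisition, or None if no user matches.
    database: mapping of user identifier -> stored template."""
    for user_id, stored in database.items():
        if authenticate(stored, fresh_list, match):
            return user_id
    return None
```

Checking that a user was not yet enrolled then amounts to `identify(...) is None`, and verifying the template stored for a given user is `authenticate` applied to that user's stored entry.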