WO2006025129A1 - Personal authentication system - Google Patents

Personal authentication system

Info

Publication number
WO2006025129A1
WO2006025129A1
Authority
WO
WIPO (PCT)
Prior art keywords
iris
image
recognition
pupil
personal authentication
Application number
PCT/JP2005/004214
Other languages
French (fr)
Japanese (ja)
Inventor
Kiyomi Nakamura
Hironobu Takano
Original Assignee
Toyama-Prefecture
Application filed by Toyama-Prefecture
Publication of WO2006025129A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris

Definitions

  • The present invention relates to a personal authentication device that acquires an iris pattern of a human eye to identify and authenticate an individual.
  • An advantage of iris-based personal identification is that the iris pattern becomes stable about two years after birth and does not change thereafter, so re-registration is unnecessary and counterfeiting is difficult. Another advantage is that the eye is less likely to be injured than a finger or face. Iris recognition can therefore be expected to serve as a security system for personal computer and mobile phone passwords and for entrance/exit gate management.
  • Patent Document 1 discloses a method of identifying an eye position by detecting the center position and diameter of a pupil and an iris in iris recognition.
  • Patent Document 2 discloses a method of determining impersonation by extracting the density of a specific region of a human eye in iris recognition.
  • Patent Document 3 discloses an iris authentication device that detects whether or not an iris is a living iris by eye movement or the like in iris recognition.
  • Patent Document 4 discloses an iris imaging device that enables iris imaging even if the identified person moves after the position of the identified person is measured in iris recognition.
  • Patent Document 5 discloses a personal identification device that makes it possible to capture and identify the iris of a person to be identified.
  • Patent Document 6 discloses an iris image acquisition device that enables the imaging of an iris image by searching for the position of the eye from the silhouette of the person to be identified in iris recognition.
  • The conventional pattern matching method used for iris recognition is vulnerable to rotation of the iris being recognized, because the iris is circular, and recognition takes time because the input must be matched against the registered images. There was also the problem that, when using the system, the subject had to align the eyes with a position designated by the system. These are serious problems when an iris recognition system is miniaturized and used for a password on a PC or mobile phone.
  • As disclosed in Non-Patent Documents 1 and 2, the present inventors have announced a rotation-diffusion neural network that can recognize the rotational orientation and shape of an object, focusing on the information processing of the spatial recognition and memory system (parietal association area) of the brain.
  • This rotation-diffusion neural network performs polar coordinate conversion and is suitable for recognizing the shape and rotational orientation of concentric patterns such as irises.
  • Orientation-invariant shape recognition and shape-invariant orientation recognition are possible at the same time.
  • Since the memory matrix is created during learning (registration), the recognition time is very short.
  • Patent Document 1: Japanese Patent Application Laid-Open No. 2004-21406
  • Patent Document 2: Japanese Patent Application Laid-Open No. 2002-312772
  • Patent Document 3: Japanese Patent Application Laid-Open No. 2001-34754
  • Patent Document 4: Japanese Patent Application Laid-Open No. 2000-11163
  • Patent Document 5: Japanese Patent Application Laid-Open No. 10-137220
  • Patent Document 6: Japanese Patent Application Laid-Open No. 10-137219
  • Non-Patent Document 1: IEICE Transactions
  • Non-Patent Document 2: IEICE Technical Report, NC2002-207, pp. 25-30, 2003

Disclosure of Invention
  • The conventional rotation-diffusion neural network recognizes characters and faces using still images, and its use has been limited to off-line recognition of still images, so it was lacking in practicality. In practice, it is impossible to capture the iris pattern at recognition time in the same orientation as the learning (registered) image. Because the iris image actually used for recognition was not rotation-corrected, a recognition rate of practical accuracy was not reached. Furthermore, the rejection rate for unregistered iris patterns had not been investigated and verified, which is insufficient for application to personal authentication devices.
  • The present invention has been made in view of the above-described problems of the prior art. Its purpose is to provide a personal authentication device that can recognize a human iris image to identify and authenticate a person accurately and at high speed, and that can be widely used in information systems and other management tasks.
  • The present invention includes a camera capable of capturing a moving image and a storage device that stores images captured by the camera at a predetermined cycle. Iris/pupil detection means compares the image information of a person's face captured by the camera with a template falling within the size range of the iris or pupil of a human eye, and scans the image to detect the iris or pupil portion of the eye. Iris pattern acquisition means captures the iris pattern detected by this iris/pupil detection means.
  • Memory conversion means converts the rotational orientation and shape of the iris pattern with a rotation-diffusion neural network, which creates a diffusion pattern by multiplying a polar-coordinate-transformed image by a periodic Gaussian function, and learns and stores the orientation and shape as vector information.
  • Iris pattern shape determination means compares an arbitrarily acquired iris pattern, converted by the rotation-diffusion neural network in the same manner, with the vector information learned and stored by the memory conversion means.
  • In the shape determination, the orientation of each piece of vector information to be compared is recognized and corrected, an orientation memory matrix and a shape memory matrix are formed from the vector information, and shape recognition is performed by correlating the orientation recognition neurons and shape recognition neurons that are the outputs of the rotation-diffusion neural network with each recognized iris shape; the invention is a personal authentication device configured in this way.
  • At the time of learning, an original image of the iris at a predetermined 0° azimuth, or its polar-coordinate-converted image, is registered as vector information. At the time of recognition, the iris pattern shape determination means corrects an input iris image of arbitrary orientation using the recognition orientation obtained by the rotation-diffusion neural network, obtains the rotation-diffusion neural network output using the original image or its polar-coordinate-converted image as the vector information, and compares it with a preset threshold value.
  • The iris pattern acquisition means includes pupil center position detection by labeling, center correction by the least squares method, iris edge detection using a Laplacian filter, and iris size standardization using linear interpolation.
  • At least one of the inner product and the minimum distance of each piece of vector information is obtained, and whether the vector information matches or not is judged by comparison with a preset threshold value to perform personal identification.
  • The iris pattern acquisition means includes a flash light emitting device that causes a pupillary reaction, an infrared light source for imaging, and an infrared transmission filter attached to the lens of the camera, and an iris image is acquired while the size of the pupil is substantially constant. Furthermore, the iris pattern acquisition means continuously measures the time change of the pupil diameter, so it can counter impersonation using a still image such as a photograph.
  • In place of correcting the azimuth recognized by the rotation-diffusion neural network, the azimuth correction in the shape determination by the iris pattern shape determination means may learn the diffusion pattern of the iris used for recognition, recognize the input azimuth, and correct the azimuth by vector synthesis.
  • The iris pattern acquisition means may search for the position of the face and eyes of the person to be recognized from a thinned-out, low-resolution image, identify the position of the eyes, and then detect the iris region from the high-resolution image.
  • Each pixel value in a certain area of an image including the human eye is measured to obtain an average luminance, which is standardized to a constant value to set the luminance of the iris. Furthermore, the luminance average and standard deviation of the pupil and iris of the measurer are obtained, and this personal authentication device determines the pupil and iris using a binarization threshold determined by the ratio of the standard deviations.
  • The pupil diameter change during the light reflex caused by flash light irradiation is measured, and the biological reaction is detected from the difference or ratio between the maximum and minimum pupil diameters; this personal authentication device determines the measured iris image to be an impersonation if the difference or ratio is equal to or less than a reference value.
  • The personal authentication device can identify an individual accurately and quickly using an iris pattern, reduces the burden on the subject at the time of authentication, and can reliably prevent impersonation by iris counterfeiting. In particular, since rotation and displacement of the iris pattern can be corrected, the device is not limited by usage conditions such as how it is installed.
  • FIG. 1 is a schematic diagram showing a device configuration of a personal authentication device according to an embodiment of the present invention.
  • FIG. 2 is a conceptual diagram showing the rotation-diffusion neural network used in the personal authentication device of this embodiment.
  • FIG. 3 is a conceptual explanatory view showing image conversion of the rotation-diffusion neural network used in the personal authentication device of this embodiment.
  • FIG. 4 is a schematic flowchart showing iris image acquisition using the rotation-diffusion neural network used in the personal authentication device of this embodiment.
  • FIG. 5 is a graph showing changes in pupil diameter due to flash irradiation of the personal authentication device of this embodiment.
  • FIG. 6 is a front view showing an eye and a template in iris image acquisition by the personal authentication device of this embodiment.
  • FIG. 7 is a schematic diagram showing labeling of the personal authentication device of this embodiment.
  • FIG. 8 is a front view showing a display screen when an iris image is acquired by the personal authentication device of this embodiment.
  • FIG. 9 is a front view showing an image for obtaining an average luminance for acquiring an iris image by the personal authentication device of this embodiment.
  • FIG. 10 is a graph showing the luminance frequency values and cumulative pixel numbers of the iris and pupil in iris image acquisition by the personal authentication device of this embodiment.
  • FIG. 11 is a schematic flowchart showing processing of the personal authentication device of this embodiment.
  • FIG. 16 is a graph showing the orientation recognition characteristic (a) and the shape recognition characteristic (b) of the recognition experiment result by the example of the personal authentication device of the present invention.
  • FIG. 18 is a graph showing the false rejection rate of the registered person and the false acceptance rate of others according to an embodiment of the personal authentication device of the present invention.
  • 22 Flash light emitting device, 24 Infrared transmission filter
  • The rotation-diffusion neural network creates a diffusion pattern by multiplying a polar-coordinate-transformed image by a periodic Gaussian function in the rotation direction, and consists of an orientation recognition system that recognizes the orientation of an object and a shape recognition system that recognizes its shape.
  • Figure 2 shows a conceptual diagram of the rotation-diffusion neural network of this embodiment.
  • The network's orientation recognition memory system neurons (orientation recognition neurons) are arranged on a circle, for example 30 of them at 12° intervals, and an appropriate number of shape recognition memory system neurons (shape recognition neurons), for example 10, correspond one each to the shape of each object.
  • This rotation-diffusion neural network inputs the converted image on polar coordinates generated from the original image to the diffusion layer, and diffuses the rotation information into the surrounding space.
  • the object orientation and shape are recognized using the diffusion pattern that is the output of the diffusion layer.
  • As shown in the explanatory diagram of FIG. 3, the coordinate system is rotated 90° counterclockwise to match the position vector.
  • The object orientation is defined as the counterclockwise rotation angle from the non-rotated state, with the non-rotated state taken as 0°.
  • Displacement of the object position is a problem in object orientation recognition, but it can be dealt with by another method. Therefore, in object orientation recognition, the figure (object) is positioned at the center of the image, and the center of rotation of the object coincides with the origin of the xy coordinates of the original image.
  • The original image used for learning and recall (the Arabic numeral 1) is a binary image of 480 × 480 dots; the original image is divided in polar coordinates at fixed radius and angle intervals, and the image thus created is used as the converted image.
  • Figure 3 shows an example of the Arabic numeral 1 at a rotation angle of 0° and its converted image. Object orientation recognition is performed on the premise that object position recognition has already been done, so the effect of object position deviation is considered absent. The converted image is generated using Equation (1).
  • T is the pixel value at coordinates (r, θ) on the converted image.
  • Formula (1) divides the original image, over a radius of 200 dots, into 20 radial divisions and angular divisions every 3°, further divides each small region bounded by these boundaries into 10 × 10 points, and uses the sum of the pixel values at these points as the value of one element of the transformed image. For this reason, one element of the converted image takes a value from 0 to 100, r takes values from 1 to 20, and θ takes integer values from 1 to 120.
  • Figure 3 illustrates how the pixel value T at coordinates (r, θ) of the transformed image is calculated: the corresponding small region (radius 10 dots × angle 3°) of the original image is divided into 10 × 10 points, each point is represented by (x, y), and the pixel values I(x, y) are summed.
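The sector-summing conversion just described can be sketched as follows. Equation (1) itself is not reproduced in the extracted text, so the function below (`polar_transform`, a hypothetical name) reconstructs it from the stated parameters: a 200-dot radius split into 20 rings, 3° sectors, and 10 × 10 sampling points per small region.

```python
import numpy as np

def polar_transform(img, n_r=20, n_theta=120, r_max=200, sub=10):
    """Polar conversion sketched from the text: the radius (r_max dots) is
    split into n_r rings and the angle into n_theta sectors; each element
    T(r, theta) is the sum of pixel values sampled on a sub x sub grid
    inside the corresponding small region."""
    h, w = img.shape
    cx, cy = w / 2.0, h / 2.0            # rotation centre = image centre
    dr = r_max / n_r                     # ring thickness (10 dots)
    dth = 2.0 * np.pi / n_theta          # sector width (3 degrees)
    T = np.zeros((n_r, n_theta))
    for ri in range(n_r):
        for ti in range(n_theta):
            s = 0
            for a in range(sub):         # radial sampling points
                for b in range(sub):     # angular sampling points
                    r = ri * dr + (a + 0.5) * dr / sub
                    th = ti * dth + (b + 0.5) * dth / sub
                    x = int(cx + r * np.cos(th))
                    y = int(cy - r * np.sin(th))   # image y axis points down
                    if 0 <= x < w and 0 <= y < h:
                        s += img[y, x]
            T[ri, ti] = s
    return T
```

For a binary original image each element then falls in the range 0 to 100, matching the range stated above.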
  • At the time of iris recognition, an input pattern of 300 × 300 pixels is converted into a polar coordinate image of 25 × 120 pixels, excluding the pupil (center) portion.
  • The polar coordinate conversion image is input to the diffusion layer to obtain a diffusion pattern, which is the vector information.
  • This diffusion pattern is multiplied by the orientation memory matrix and the shape memory matrix.
  • Orientation recognition neuron outputs and shape recognition neuron outputs are thereby obtained.
  • The orientation is recognized from the resulting 30 orientation recognition neuron outputs using the population vector method.
  • Shape recognition associates each recognition object with a different shape recognition neuron on a one-to-one basis, and shape recognition is performed using the maximum output of these neurons.
  • The learning of this rotation-diffusion neural network is performed using an orthogonal learning method in the learning process.
  • Learning is performed between the diffusion pattern V of the transformed image for learning and the orientation recognition neuron teacher signal TO and the shape recognition neuron teacher signal TF, according to equations (2)-(7).
  • At the time of recognition, the transformed image of the input iris image in an arbitrary orientation is input to the diffusion layer, and the output is the product of the diffusion pattern V and the orientation memory matrix M.
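Equations (2)-(7) are not reproduced in the extracted text. As a hedged sketch, an orthogonal learning rule can be written in closed form with the pseudo-inverse, which yields a memory matrix M that exactly reproduces the teacher signals when the learning diffusion patterns are linearly independent; the patent's iterative formulation may differ.

```python
import numpy as np

def learn_memory_matrix(V, T):
    """Closed-form orthogonal learning sketch: solve M @ V ~= T in the
    least-squares sense. V holds one diffusion pattern per column
    (dim x n_patterns); T holds the teacher signals
    (n_neurons x n_patterns). Returns M (n_neurons x dim)."""
    return T @ np.linalg.pinv(V)
```

At recall, the neuron outputs for a new diffusion pattern v are simply `M @ v`, which is why recognition requires only one matrix product and is fast.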
  • FIG. 1 shows a system configuration diagram of one embodiment of the present invention.
  • This system has a small camera 14 with a lens 15 capable of capturing a moving image of the iris 12 of a human eye 10, a computer 16 that captures the photographed iris image, and a display 18.
  • the main body of the computer 16 includes an image input board for capturing image data into the CPU, a DLL (Dynamic Link Library) for manipulating and processing iris images, and other storage devices.
  • The small camera 14 is also fitted with a near-infrared projector 20, an infrared light source for clearly capturing iris patterns while cutting visible light noise reflected in the iris 12, a flash light emitting device 22 for causing the pupillary reflex, and a plastic infrared transmission filter 24.
  • the light emitting device 22 can emit light at an arbitrary timing (in frame units) in synchronization with an external trigger output signal from the image input board of the computer 16.
  • The input image is a grayscale image with 256 gradations and 640 × 480 pixels. This system is capable of real-time image capture at approximately 13 frames/s.
  • A commercially available personal computer and operating system were used for the computer 16.
  • the processing flow of the iris recognition system is shown in the flow chart of FIG.
  • The eye 10 of the person to be recognized is photographed with the small camera 14 (s10).
  • The imaging iris diameter is initialized (s11).
  • An image of the iris 12 is acquired when the pupil diameter is a constant pupil size (2.9 mm-3.0 mm).
  • The average luminance of a certain region including the eye 10, as shown in FIG. 9, is used to normalize the image, and the pupil 27 is detected using the one-eye partial template 26 shown in FIG. 6 (s13).
  • Labeling is a method that detects specific parts such as the iris by attaching the same label (number) to all connected pixels (connected components) and assigning different numbers to different connected components.
  • The pupil center, pupil diameter, and pupil area are measured simultaneously, and pupil detection is completed (s15).
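A minimal sketch of the labeling step just described, assuming the pupil is the largest dark connected component below a binarization threshold; the function name, the 4-neighbour connectivity, and the equivalent-circle diameter are assumptions of this sketch.

```python
import numpy as np
from collections import deque

def find_pupil(gray, threshold):
    """Label connected dark regions (pixel < threshold) by breadth-first
    search and return the centre, equivalent-circle diameter and area of
    the largest component, taken to be the pupil."""
    dark = gray < threshold
    h, w = gray.shape
    labels = np.zeros((h, w), dtype=int)
    next_label, best_pts = 0, []
    for sy in range(h):
        for sx in range(w):
            if dark[sy, sx] and labels[sy, sx] == 0:
                next_label += 1                 # new connected component
                labels[sy, sx] = next_label
                q, pts = deque([(sy, sx)]), []
                while q:
                    y, x = q.popleft()
                    pts.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and dark[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                if len(pts) > len(best_pts):    # keep the largest region
                    best_pts = pts
    area = len(best_pts)
    ys = np.array([p[0] for p in best_pts], dtype=float)
    xs = np.array([p[1] for p in best_pts], dtype=float)
    diameter = 2.0 * np.sqrt(area / np.pi)      # equivalent circle diameter
    return (xs.mean(), ys.mean()), diameter, area
```

The centroid returned here is what the least squares correction of the next step would then refine.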
  • the pupil center measured by the above labeling is corrected using the least squares method.
  • The iris diameter is measured. If the iris diameter has been initialized, the previously measured value is used as is (s16).
  • Laplacian processing is performed in the measurement of iris diameter.
  • With 0° taken directly above the center of the pupil and the counterclockwise direction positive, cumulative pixel values are computed over the angle range 75°-135° for the left iris edge and -75° to -135° for the right iris edge.
  • The positions with the maximum cumulative pixel value are taken as the right edge of the iris and the left edge of the iris, respectively.
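The Laplacian processing mentioned above can be sketched with a plain 4-neighbour Laplacian; this is a generic filter chosen for illustration, not necessarily the exact kernel of the embodiment.

```python
import numpy as np

def laplacian(img):
    """4-neighbour Laplacian used to emphasise luminance edges such as the
    iris boundary; borders are left at zero for simplicity."""
    out = np.zeros_like(img, dtype=float)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4.0 * img[1:-1, 1:-1])
    return out
```

Accumulating the filtered values along radial directions then peaks at the iris edge, which is how the edge position is located.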
  • From the relative ratio between the measured sizes (in pixels) of the iris 12 and the pupil on the image, an iris image is acquired when the pupil diameter is 2.9 mm-3.0 mm.
  • The pupil diameter is set to the range 2.9 mm-3.0 mm, allowing a slight size error, in order to make acquisition of the iris image easier.
  • Fig. 8 shows the screen of the display 18 when an iris image is acquired. From the obtained image, a 300 × 300 pixel image centered on the pupil is cut out (s18).
  • The sizes of the iris and pupil on the image change depending on the distance from the camera 14 and the zoom, so the size is standardized using a known linear interpolation method in order to keep the iris size constant.
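The size standardization by linear interpolation can be sketched as a plain bilinear resize; the function name is illustrative and the embodiment may use a different interpolation variant.

```python
import numpy as np

def resize_bilinear(img, out_h, out_w):
    """Bilinear (linear-interpolation) resize used to bring the iris to a
    standard size regardless of camera distance and zoom."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)        # source row coordinates
    xs = np.linspace(0, w - 1, out_w)        # source column coordinates
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    a = img[np.ix_(y0, x0)]; b = img[np.ix_(y0, x1)]
    c = img[np.ix_(y1, x0)]; d = img[np.ix_(y1, x1)]
    return (a * (1 - wy) * (1 - wx) + b * (1 - wy) * wx
            + c * wy * (1 - wx) + d * wy * wx)
```

Scaling the cut-out so that the measured iris diameter maps to a fixed pixel count gives the constant-size reference pattern used below.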
  • The average luminance of multiple images is obtained and a correction coefficient is set to normalize the luminance (s19).
  • This standardized image is defined as the reference iris pattern (the input image of the rotation-diffusion neural network), which is learned and memorized (s20).
  • Each pixel value in a certain area A of the image including the eyes is measured to obtain an average luminance, which is standardized to a constant value for each measurement image, thereby eliminating the variation in luminance of the images acquired for each measurer.
  • The average brightness of area A is measured so that the sclera, iris, and pupil remain clear after luminance standardization; the standardized value is set to the middle of the 256-level range.
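A minimal sketch of this luminance standardization, assuming a single global scale factor that maps the mean of region A to the middle of the 0-255 range; the function and parameter names are illustrative.

```python
import numpy as np

def standardize_luminance(img, region, target=128.0):
    """Scale the whole image so the mean luminance of `region` (a pair of
    slices selecting area A) becomes `target`, then clip to 0..255."""
    y, x = region
    mean = img[y, x].mean()
    out = img.astype(float) * (target / mean)
    return np.clip(out, 0.0, 255.0)
```

With the region mean pinned to a fixed value, the pupil/iris binarization threshold derived below can be shared across measurers.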
  • Range B surrounded by the inner line is the pupil detection range.
  • A method for determining the binarization threshold for pupil detection will now be described. After luminance standardization, the luminance of the iris and pupil is measured; the measurement method is the same as when the luminance standard is determined. The optimum binarization threshold is determined from the ratio of the standard deviations obtained by calculating the average luminance and standard deviation of the pupils and irises of multiple measurers. The threshold Y is determined by the following formula.
  • AV_p is the average pupil luminance.
  • AV_i is the average iris luminance.
  • SD_p is the pupil luminance standard deviation.
  • Fig. 10 shows a graph of the cumulative number of pixels for each luminance value using the data of all subjects after luminance standardization. According to Fig. 10, the iris luminance and pupil luminance are clearly separated by the binarization threshold. As a result, the center of the pupil is detected by the one-eye template, and the pupil edge and iris edge are accurately detected.
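The formula for Y is not reproduced in the extracted text. One plausible form consistent with the description (a threshold between the two mean luminances, positioned according to the ratio of their standard deviations) is sketched below; treat the exact expression as an assumption, not the patent's formula.

```python
def binarization_threshold(av_pupil, sd_pupil, av_iris, sd_iris):
    """Assumed form of the pupil/iris threshold Y: the point between the
    pupil and iris means, weighted so that the class with the larger
    spread gets the larger margin."""
    return (av_pupil * sd_iris + av_iris * sd_pupil) / (sd_pupil + sd_iris)
```

Any threshold of this form lies strictly between the two class means, which is what makes the separation in Fig. 10 possible.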
  • An iris input pattern at recognition time is obtained by the same procedure as that for acquiring the reference iris pattern.
  • personal identification is performed by shape recognition.
  • The normalized diffusion pattern is used to perform learning and recognition with the above-described rotation-diffusion neural network.
  • a new shape recognition criterion is added in order to improve the discrimination accuracy of the unlearned iris.
  • The new shape recognition criteria use the inner product and the minimum distance, which are often used to investigate vector similarity.
  • However, these methods are known to be vulnerable to pattern variations.
  • In iris recognition there are various pattern variations, one of which is orientation deviation.
  • It is almost impossible to capture the learning image and the recognition image in exactly the same orientation with the camera 14. Therefore, the inner product and the minimum distance can be introduced as shape recognition criteria by first correcting the orientation using the orientation recognition that is a feature of the rotation-diffusion neural network.
  • Although the orientation of the learning image is defined as 0°, the orientation of the input image at recognition time is not necessarily 0° and differs from capture to capture. Therefore, in authentication using the rotation-diffusion neural network, the orientation is first recognized and the orientation of the input image is corrected.
  • Fig. 11 shows the flow of personal authentication using the rotation-diffusion neural network.
  • the direction correction range, step angle, and recognition method are selected.
  • The lower limit of azimuth correction is set to -3°,
  • the upper limit is set to +3°,
  • and the step angle is set to 1°.
  • The correction azimuths are therefore -3° to +3°, and iris patterns with seven different azimuth corrections are obtained.
  • The reason the correction azimuth is not restricted to a single angle is to take into account the resolution and error of the recognition azimuth obtained by the rotation-diffusion neural network.
  • Individual iris pattern (shape) authentication is performed by the specified recognition method (inner product, minimum distance, or shape recognition neuron output).
  • In recognition by inner product and minimum distance, the vector calculations (inner product, minimum distance) are performed between the learning image and the input image whose rotation has been corrected within the specified range based on the rotation orientation recognized by the rotation-diffusion neural network.
  • The inner product and minimum distance between the learning image and the input images subjected to the seven types of rotation correction are calculated.
  • The maximum value is used for the inner product, and the minimum value is used for the minimum distance.
  • If the maximum inner product is greater than the determination threshold, the person is identified as the registered person; if it is less than the threshold, the person is determined to be another person.
  • If the minimum distance is smaller than the determination threshold, the person is identified as the registered person; if it is larger than the threshold, the person is determined to be another person.
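The decision rule just described can be sketched as follows. For simplicity the rotation corrections are applied as circular shifts of whole 3° columns of the polar image rather than the 1° steps of the embodiment, so the shift granularity is an assumption; the function and threshold names are illustrative.

```python
import numpy as np

def authenticate(input_polar, learned_polar, ip_thresh, dist_thresh, span=3):
    """Try several rotation corrections of the input polar image (a rotation
    is a circular shift along the angle axis), compute the normalized inner
    product and Euclidean distance against the registered image, and accept
    if the max inner product exceeds ip_thresh or the min distance falls
    below dist_thresh."""
    v_learn = learned_polar.ravel().astype(float)
    best_ip, best_dist = -np.inf, np.inf
    for k in range(-span, span + 1):            # candidate corrections
        v_in = np.roll(input_polar, k, axis=1).ravel().astype(float)
        ip = v_in @ v_learn / (np.linalg.norm(v_in) * np.linalg.norm(v_learn))
        best_ip = max(best_ip, ip)              # max rule for inner product
        best_dist = min(best_dist, np.linalg.norm(v_in - v_learn))
    return bool(best_ip > ip_thresh or best_dist < dist_thresh)
```

Taking the best value over the candidate corrections is what makes the criterion tolerant of small residual rotation errors.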
  • When the shape recognition neuron output is used for shape determination (personal authentication),
  • the orientation-corrected image is input again to the rotation-diffusion neural network, and the determination is made from the shape recognition neuron outputs. If the shape recognition neuron output representing a registered person is larger than the preset determination threshold, the input iris image is judged to be the iris of the person that neuron represents. If no shape recognition neuron output exceeds the determination threshold, the image is judged to be unregistered.
  • V is a vector representing the normalized diffusion pattern of the learning iris image,
  • V' is that of the recognition image,
  • and |V| and |V'| are the absolute values of V and V', representing the length of each vector.
  • The minimum distance is the magnitude of the vector difference |V - V'|.
  • FIG. 13 shows a flowchart of recognition by the rotation-diffusion neural network in the above-described iris pattern recognition according to this embodiment.
  • Orientation correction is performed in the recognition process when the inner product and the minimum distance are used because the 0° iris image is used as the learning (registered) image, so the recognition target must be brought to the 0° orientation.
  • The orientation of the iris pattern can be recognized by the rotation-diffusion neural network, so rotational change can be handled by correcting the orientation. Furthermore, by introducing the inner product and the minimum distance as shape recognition criteria, a 0% false acceptance rate of others can be achieved using the orientation-corrected iris pattern. Moreover, the pupil center, pupil edge, and iris edge can be detected automatically by pupil center position detection by labeling, center correction by the least squares method, and edge detection using a Laplacian filter.
  • The iris and pupil sizes on the image change depending on the distance from the camera 14 and the zoom, but the image can be scaled up or down using linear interpolation and the iris size standardized to accommodate the size change. Furthermore, by shining a flash light on the eye 10 to induce a pupillary response and measuring the temporal change in the pupil diameter, impersonation using an iris photograph or the like can be rejected.
  • the personal authentication device of the present invention creates a memory matrix that characterizes each learned iris. Therefore, the recognition time is short because the amount of calculation is small. In addition, since the pupil center position, pupil edge, and iris edge are automatically detected, eye alignment is not required and it can be used in a wide range of applications.
  • The personal authentication device of the present invention is not limited to the above-described embodiment; it suffices to have a camera capable of capturing moving images. If the iris image can be captured directly by the CPU, the image input board and the DLL for manipulating and processing iris images are not always necessary.
  • The personal authentication device of the present invention can also be used without a flash light emitting device. In this case, an iris image converted to a fixed relative pupil size by the linear interpolation is captured.
  • The vector information used for recognition is not limited to the original image captured as described above or its polar-coordinate-converted image; it may be information obtained by processing the original image with a diffusion pattern or a Laplacian filter.
  • the following processing is performed to further prevent impersonation.
  • The processing from steps s16 and s17 onward is performed as shown in FIG.
  • The LED is flashed to obtain a constant pupil size,
  • and impersonation is determined by comparing the relative pupil diameter before and after the light emission.
  • The relative pupil diameter from when the first LED emits light during recognition until the light reflex of the pupil occurs is used for the impersonation determination.
  • Since the first flash light is emitted at the 20th frame of the measurement image, the light response occurs by the 30th frame.
  • The relative pupil diameters of frames 20 to 29, during which the flash light is emitted and the light reflex occurs, are stored.
  • The maximum and minimum are calculated from the stored relative pupil diameters, and their difference or ratio is computed. If the difference or ratio is equal to or less than the reference value, the image is determined to be an impersonation.
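A minimal sketch of this liveness rule; the function name is illustrative, and the use of the max/min over the stored frames follows the description above.

```python
def is_spoof(pupil_diameters, reference, use_ratio=False):
    """Impersonation check: a live pupil constricts after the flash, so the
    difference (or max/min ratio) of the stored relative pupil diameters
    must exceed the reference value; at or below it, the image is judged
    to be a spoof such as a photograph."""
    d_max, d_min = max(pupil_diameters), min(pupil_diameters)
    metric = d_max / d_min if use_ratio else d_max - d_min
    return metric <= reference
```

A photograph gives an essentially flat diameter series, so the metric stays at or below the reference and the sample is rejected.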
  • Fig. 15 shows the iris images (300 × 300 pixels) used for learning and recognition. All learning and recognition were performed with the iris image of each subject's right eye. The learning orientations were six orientations at 60° intervals over 0° to 360°. The number of learning patterns is given by (number of recognized irises) × (number of learning orientations): 18, 30, and 60 patterns for the respective numbers of subjects.
  • Fig. 16(a) shows the orientation recognition characteristics and Fig. 16(b) the shape recognition characteristics for the recognition experiment performed with 10 subjects.
  • In Fig. 16(a), the horizontal axis is the input rotation orientation of the iris and the vertical axis is the recognized orientation.
  • In Fig. 16(b), the horizontal axis is the input rotation orientation of the iris and the vertical axis is the shape-recognition neuron output; ● represents the average output of the target neuron and × the average output of the non-target neurons.
  • The vertical bar at each input orientation represents the standard deviation.
  • The average target-neuron output is approximately 1.0 and the average non-target-neuron output approximately 0.0, so the target-neuron output is consistently higher than the non-target-neuron output.
  • Next, the number of subjects whose irises were used for learning and recognition was changed to three and five, and after learning in advance, real-time recognition experiments were performed.
  • The iris images (300 × 300 pixels) used for learning were selected from Fig. 15; learning and recognition used the right-eye irises of the same subjects.
  • The number of learning patterns is given by (number of registered irises) × (number of learning orientations): 18 and 30 patterns for 3 and 5 subjects, respectively.
  • The images used for learning were taken in advance on the day of the experiment.
  • Fig. 17(a) shows the orientation recognition characteristics and Fig. 17(b) the shape recognition characteristics when the recognition experiment was performed with five subjects.
  • In Fig. 17(a), the horizontal axis is the input rotation orientation of the iris and the vertical axis is the recognized orientation. Good linearity was seen between the input rotation orientation and the recognized orientation, showing that the orientation could be recognized almost correctly.
  • In Fig. 17(b), the horizontal axis is the input rotation orientation of the iris and the vertical axis is the shape-recognition neuron output; ● represents the average output of the target neuron and × the average output of the non-target neurons.
  • The vertical bar at each input orientation represents the standard deviation.
  • The average target-neuron output is approximately 1.0, the average non-target-neuron output is approximately 0.0, and the target-neuron output is always larger than the non-target-neuron output.
  • A person authentication experiment was then performed using images captured from the camera.
  • The experiment was conducted with 10 subjects: 5 enrolled (learned) and 5 unenrolled.
  • The images used for learning were taken in advance on the day of the authentication experiment. Learning was performed for a total of 10 sets, replacing the five enrolled iris images one person at a time.
  • For the learning iris images, the iris image at an input rotation orientation of 0° was used. Since all 10 subjects were tested against each of the 10 learning sets, a total of 100 trials was obtained (50 trials for enrolled persons and 50 for unenrolled persons).
  • FIG. 18 shows the error rates when the shape-recognition neuron output is used as the criterion, FIG. 19 when the inner product is used, and FIG. 20 when the minimum distance is used.
  • The dotted line represents the false rejection rate (the rate of mistakenly rejecting the enrolled person) and the solid line the false acceptance rate (the rate of mistakenly accepting another person).
  • The vertical axis represents each error rate and the horizontal axis the decision threshold. The false rejection rate was obtained by counting the trials in which the enrolled person was rejected, that is, in which the shape-neuron output or inner product corresponding to that person was smaller than the decision threshold, or the minimum distance was larger than the decision threshold.
  • The false acceptance rate was likewise obtained by counting the trials in which another person was accepted. From the experimental results: with the shape-recognition neuron output as the criterion, the false rejection and false acceptance curves crossed at a threshold of about 0.78, giving an error rate of about 43%. With the inner product as the criterion, the curves crossed at about 0.94, giving an error rate of about 15%; however, with a decision threshold of 0.96 the false rejection rate rises to 20% but all other persons are rejected. With the minimum distance as the criterion, the curves crossed at a decision threshold of about 0.35, giving an error rate of about 13%; with a decision threshold of 0.25 the false rejection rate was 26%, but all other persons could be rejected.
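As an illustration of how such curves are obtained, the sketch below sweeps a decision threshold over similarity scores of enrolled (genuine) and unenrolled (impostor) trials. The score values are fabricated for illustration; for the minimum-distance criterion the inequalities would simply be reversed:

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """Compute (false rejection rate, false acceptance rate) at one threshold
    for similarity-type scores (neuron output or inner product)."""
    # Enrolled person rejected when the score falls below the threshold.
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # Another person accepted when the score reaches the threshold.
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

# Illustrative (fabricated) inner-product scores for 5 genuine / 5 impostor trials:
genuine = [0.97, 0.95, 0.99, 0.92, 0.98]
impostor = [0.90, 0.85, 0.93, 0.80, 0.88]
for t in (0.91, 0.94, 0.96):
    print(t, error_rates(genuine, impostor, t))
```

Sweeping the threshold over its full range and plotting both rates reproduces the dotted and solid curves; their crossing point is the equal error rate discussed above.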

Abstract

A personal authentication system that can identify and authenticate a person accurately, with high precision and at high speed, by recognizing the person's iris image, and that is applicable to information systems and other management operations. The system comprises means for imaging an eye (10) with a camera (14) capable of capturing moving images and detecting the iris (12) or pupil (27), means for capturing the detected iris pattern, and storing/converting means for converting the rotational orientation and shape of the iris pattern with a rotation-diffusion neural network and learning/storing them as vector information. The system further comprises iris pattern judging means for comparing the vector information obtained by converting an arbitrarily imaged iris pattern through the storing/converting means with the vector information of iris patterns previously learned and stored through the same conversion. Personal identification is carried out by recognizing and correcting the orientation of the pieces of vector information being compared, determining at least one of the inner product and the minimum distance of the vectors, and then judging whether they match.

Description

Specification
Personal Authentication Device
Technical Field
[0001] The present invention relates to a device that acquires the iris pattern of a human eye to identify and authenticate an individual, and particularly to a personal authentication device using a neural network.
Background Art
[0002] With the development of computer technology and the Internet, computers have come to be widely used not only in companies but also in ordinary households, and security requirements are increasing. In addition, because of the rise in international crime and the need for safety measures at various facilities, research on identifying individuals from physical characteristics such as fingerprints and faces has been actively conducted. In recent years, the iris has attracted attention as information for personal identification using physical features (biometric authentication). The iris begins to form in the seventh to eighth month of pregnancy and stabilizes within two years after birth. The iris is composed of complex random patterns formed by pigments and muscles, and the iris pattern differs even between identical twins or between the left and right eyes of the same person. The advantages of iris-based personal identification are that the iris is stable from two years after birth and does not change thereafter, so re-registration is unnecessary and counterfeiting is difficult; moreover, the iris is less likely to be injured than a finger or face. Iris recognition can therefore be expected to serve as a security system for personal computer and mobile phone passwords, entrance/exit gate management, and the like.
[0003] Techniques conventionally used for iris recognition include iris pattern matching methods, as disclosed in Patent Documents 1 to 6. Patent Document 1 discloses a method of identifying the eye position in iris recognition by detecting the center position and diameter of the pupil and iris. Patent Document 2 discloses a method of detecting impersonation in iris recognition by extracting the density of a specific region of the human eye. Patent Document 3 discloses an iris authentication device that detects, from eyeball movement and the like, whether an iris belongs to a living body. Patent Document 4 discloses an iris imaging device that, after measuring the position of the person to be identified, enables iris imaging even if that person moves. Patent Document 5 discloses a personal identification device that images and identifies the iris of a person located at a distance. Patent Document 6 discloses an iris image acquisition device that searches for the eye position from the silhouette of the person to be identified and thereby enables capture of an iris image.
[0004] However, because the iris is circular, the conventional pattern matching methods used for iris recognition are vulnerable to rotation of the iris to be recognized, and recognition takes time because the input must be matched against registered images. In addition, when using such a system, the subject had to align the eye with a position designated by the system. This becomes a serious problem when the iris recognition system is miniaturized and used, for example, for PC or mobile phone passwords.
[0005] Meanwhile, focusing on the information processing of the brain's spatial recognition and memory system (the parietal association cortex), a rotation-diffusion neural network that can recognize the rotational orientation and shape of an object has been published by the present inventors, as disclosed in Non-Patent Documents 1 and 2. Because this rotation-diffusion neural network performs polar coordinate conversion, it is well suited to recognizing the shape and rotational orientation of concentric patterns such as the iris. Orientation-invariant shape recognition and shape-invariant orientation recognition are possible simultaneously. Furthermore, since the memory matrix is created at learning (registration) time, the recognition time is very short.
Patent Document 1: Japanese Patent Application Laid-Open No. 2004-21406
Patent Document 2: Japanese Patent Application Laid-Open No. 2002-312772
Patent Document 3: Japanese Patent Application Laid-Open No. 2001-34754
Patent Document 4: Japanese Patent Application Laid-Open No. 2000-11163
Patent Document 5: Japanese Patent Application Laid-Open No. H10-137220
Patent Document 6: Japanese Patent Application Laid-Open No. H10-137219
Non-Patent Document 1: Transactions of the IEICE, D-II, Vol. J81-D-II, No. 6, pp. 1194-1204, 1998
Non-Patent Document 2: IEICE Technical Report, NC2002-207, pp. 25-30, 2003

Disclosure of the Invention
Problems to Be Solved by the Invention
[0006] However, the conventional rotation-diffusion neural network performed character and face recognition on still images, so its use was limited to recognition by offline processing of still images, and it lacked practicality. In practice, it is also impossible to capture the rotational orientation of an iris pattern at recognition time in the same orientation as the learned (registered) image. The iris images actually used for recognition were therefore not rotation-corrected, and a recognition rate of practical accuracy had not been achieved. Furthermore, the identification (rejection) rate for unregistered iris patterns had not been investigated or verified, which was insufficient for application to a personal authentication device.
[0007] The present invention has been made in view of the above problems of the prior art, and its object is to provide a personal authentication device that can recognize a person's iris image, identify and authenticate the individual accurately, with high precision and at high speed, and that can be widely used for information systems and other management operations.
Means for Solving the Problems
[0008] The present invention is a personal authentication device comprising: a camera capable of capturing moving images and a storage device that stores the images captured by the camera at a predetermined cycle; iris/pupil detection means for scanning the image information of a person's face captured by the camera while comparing it with a template falling within the size range of the iris or pupil of a human eye, thereby detecting the iris or pupil portion of the eye; iris pattern acquisition means for capturing the iris pattern detected by the iris/pupil detection means; storing/converting means for converting the rotational orientation and shape of the iris pattern with a rotation-diffusion neural network, which creates a diffusion pattern by multiplying the polar-coordinate-converted image by a periodic Gaussian function, and for learning and storing the rotational orientation and shape of the iris pattern as vector information; and iris pattern shape judging means for comparing the vector information of iris patterns learned and stored through the storing/converting means with the vector information obtained by converting, in the same manner, an arbitrarily acquired iris pattern through the storing/converting means with the rotation-diffusion neural network. In the shape judgment by the iris pattern shape judging means, the orientations of the pieces of vector information to be compared are recognized and corrected, an orientation memory matrix and a shape memory matrix are formed from the vector information, the orientation recognition neuron outputs and shape recognition neuron outputs of the rotation-diffusion neural network are obtained, and shape recognition is performed by associating the shape recognition neuron of each iris to be recognized with the corresponding stored shape recognition neuron.
[0009] At learning time, the iris pattern acquisition means registers, as vector information, the original image of the iris at a predetermined 0° orientation or its polar coordinate conversion image; at recognition time, the iris pattern shape judging means corrects an iris image input at an arbitrary orientation using the recognized orientation obtained with the rotation-diffusion neural network, obtains the rotation-diffusion neural network output with the original image or its polar coordinate conversion image as vector information, and compares it with a preset threshold.
[0010] Further, the iris pattern acquisition means includes pupil center position detection means using labeling, center correction by the least squares method, iris edge detection means using a Laplacian filter, and iris size normalization means using inter-line correction.
[0011] At least one of the inner product and the minimum distance of the pieces of vector information is obtained, and personal identification is performed by judging whether the vector information matches through comparison with a preset threshold.
[0012] The iris pattern acquisition means includes a flash light emitting device that elicits a pupil reaction, an infrared light source for imaging, and an infrared transmission filter attached to the lens of the camera, and acquires the iris image while the pupil size is substantially constant. Furthermore, the iris pattern acquisition means continuously measures the change of the pupil diameter over time, making it possible to counter impersonation using a still image such as a photograph.
[0013] In the orientation correction of the shape judgment by the iris pattern shape judging means, instead of correcting with the orientation recognized by the rotation-diffusion neural network, the input orientation may be recognized and corrected by obtaining at least one of the inner product and the minimum distance between the diffusion pattern of the iris used for recognition and the learned diffusion patterns of a plurality of orientations, and combining the results as vectors.
[0014] The iris pattern acquisition means may search for the position of the face and eyes of the person to be recognized in a low-resolution image obtained by pixel thinning, identify the eye position, and then detect the iris region in a high-resolution image.
[0015] Further, each pixel value in a fixed region of an image including the human eye is measured to obtain the average luminance, which is normalized to a constant value to set the luminance of the iris. Furthermore, the personal authentication device obtains the luminance mean and standard deviation of the subject's pupil and iris, and determines the pupil and iris with a binarization threshold determined from the ratio of the standard deviations.
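As a hedged sketch of this binarization step: the text fixes only that the threshold is derived from the pupil/iris luminance statistics and the ratio of their standard deviations, so the exact formula below, which splits the interval between the two mean luminances in proportion to the standard deviations, is an illustrative assumption rather than the patented formula:

```python
from statistics import mean, stdev

def binarization_threshold(pupil_pixels, iris_pixels):
    """Assumed threshold rule: cut between the pupil and iris mean
    luminances, placed closer to the distribution with smaller spread."""
    mu_p, sd_p = mean(pupil_pixels), stdev(pupil_pixels)
    mu_i, sd_i = mean(iris_pixels), stdev(iris_pixels)
    return (sd_i * mu_p + sd_p * mu_i) / (sd_p + sd_i)

# Dark pupil pixels vs. brighter iris pixels (illustrative luminance values):
t = binarization_threshold([20, 22, 25, 21, 23], [90, 100, 95, 105, 110])
print(round(t, 1))  # 37.4
```

Pixels darker than the threshold are then labeled pupil and the rest iris, which feeds the labeling-based pupil center detection of paragraph [0010].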
[0016] The personal authentication device also measures the change in pupil diameter during the pupillary light reflex caused by flash light irradiation, detects a biological reaction from the difference or ratio between the maximum and minimum pupil diameters, and judges the measured iris image to be an impersonation if this difference or ratio is equal to or less than a reference value.
Effects of the Invention
[0017] The personal authentication device of the present invention can identify individuals accurately and quickly using the iris pattern, places little burden on the subject at authentication, and reliably prevents impersonation by iris forgery. In particular, since rotation and positional displacement of the iris pattern can be corrected, the device is not restricted by usage conditions such as its installation situation.
Brief Description of the Drawings
[0018] [FIG. 1] Schematic diagram showing the device configuration of a personal authentication device according to one embodiment of the present invention.
[FIG. 2] Conceptual diagram showing the rotation-diffusion neural network used in the personal authentication device of this embodiment.
[FIG. 3] Conceptual explanatory diagram showing the image conversion of the rotation-diffusion neural network used in the personal authentication device of this embodiment.
[FIG. 4] Schematic flowchart showing iris image acquisition using the rotation-diffusion neural network in the personal authentication device of this embodiment.
[FIG. 5] Graph showing the change in pupil diameter due to flash irradiation in the personal authentication device of this embodiment.
[FIG. 6] Front view showing the eye and the template in iris image acquisition by the personal authentication device of this embodiment.
[FIG. 7] Schematic diagram showing labeling in the personal authentication device of this embodiment.
[FIG. 8] Front view showing the display screen during iris image acquisition by the personal authentication device of this embodiment.
[FIG. 9] Front view showing the image used to obtain the average luminance for iris image acquisition by the personal authentication device of this embodiment.
[FIG. 10] Graph showing the luminance values and cumulative pixel counts of the iris and pupil in iris image acquisition by the personal authentication device of this embodiment.
[FIG. 11] Schematic flowchart showing the processing of the personal authentication device of this embodiment.
[FIG. 12] Vector diagram showing the relation between the inner product and the minimum distance used in the processing of the personal authentication device of this embodiment.
[FIG. 13] Flowchart showing the recognition processing of the personal authentication device of this embodiment.
[FIG. 14] Flowchart of impersonation determination in the recognition processing of the personal authentication device of this embodiment.
[FIG. 15] Eye images for recognizing the iris patterns used in one example of the personal authentication device of the present invention.
[FIG. 16] Graphs showing the orientation recognition characteristics (a) and shape recognition characteristics (b) of a recognition experiment with one example of the personal authentication device of the present invention.
[FIG. 17] Graphs showing the orientation recognition characteristics (a) and shape recognition characteristics (b) of a real-time recognition experiment with five persons using one example of the personal authentication device of the present invention.
[FIG. 18] Graph showing the false rejection rate and false acceptance rate based on the shape-recognition neuron output for one example of the personal authentication device of the present invention.
[FIG. 19] Graph showing the false rejection rate and false acceptance rate using the inner product as the criterion for one example of the personal authentication device of the present invention.
[FIG. 20] Graph showing the false rejection rate and false acceptance rate using the minimum distance as the criterion for one example of the personal authentication device of the present invention.
Explanation of Reference Numerals
10  Eye
12  Iris
14  Camera
15  Lens
16  Computer
18  Display
20  Near-infrared projector
22  Flash light emitting device
24  Infrared transmission filter
26  Template
27  Pupil
BEST MODE FOR CARRYING OUT THE INVENTION
[0020] A first embodiment of the personal authentication device of the present invention will now be described with reference to FIGS. 1 to 15. The personal authentication device of this embodiment uses a rotation-diffusion neural network, which is first briefly explained. The rotation-diffusion neural network creates a diffusion pattern by multiplying a polar-coordinate-converted image by a Gaussian function periodic in the rotational orientation, and consists of an orientation recognition system that recognizes the rotation angle of an object and a shape recognition system that recognizes its shape. A conceptual diagram of the rotation-diffusion neural network of this embodiment is shown in FIG. 2. The orientation recognition memory neurons (orientation recognition neurons) of this network are arranged, for example, 30 on a circle at 12° intervals, while the shape recognition memory neurons (shape recognition neurons) number as appropriate, for example 10, each corresponding to the shape of one object.
[0021] This rotation-diffusion neural network inputs the converted image, generated on polar coordinates from the original image, to a diffusion layer, which diffuses the rotation information into the surrounding space. Object orientation and shape are then recognized using the diffusion pattern output by the diffusion layer. In this invention, to align the conventionally used non-rotated state with the rotation reference point on the general xy plane of the mathematical Cartesian coordinate system (e.g., the position vector from the origin pointing to (x, y) = (1, 0)), the coordinate system is rotated 90° counterclockwise. As shown in the explanatory diagram of FIG. 3, a point (x, y) on the Cartesian coordinates corresponds to a point (R, Θ) on the polar coordinates, with x = R cos Θ and y = R sin Θ. The object orientation is defined as the counterclockwise rotation angle from the non-rotated state, the non-rotated state being a rotation angle of 0°. Positional displacement of the object is an issue in object orientation recognition, but it can be corrected by a separate method. Accordingly, in object orientation recognition, the figure (object) is located at the image center, and the rotation center of the object coincides with the origin of the xy coordinates of the original image.
[0022] In the explanatory diagram of FIG. 3, the original image used for learning and recall (the Arabic numeral 1) is a binary image of 480 × 480 dots; the original image is divided on polar coordinates at fixed radii and angles, and the resulting image is the converted image. FIG. 3 shows the Arabic numeral 1 at a rotation angle of 0° and an example of its converted image. Object orientation recognition is performed on the premise that object position recognition has already been done, so positional displacement of the object is assumed to have no effect. The converted image is generated by Equation (1).
[Equation 1]

    T_rΘ = Σ_{i=0}^{9} Σ_{j=0}^{9} I(x_ij, y_ij)    (1)
[0023] Here, I(x_ij, y_ij) is the pixel value at a point (x_ij = R cos Θ, y_ij = R sin Θ) on the xy coordinates whose origin is the center of the original image, and T_rΘ is the pixel value at the coordinates (r, Θ) of the converted image. Equation (1) divides the 200-dot radius of the original image into 20 parts and the angle into 3° steps, further divides each small region bounded by those lines into 10 × 10 points, examines the value at each point, and uses their sum as the value of one element of the converted image. Each element of the converted image therefore has a value of 0 to 100, r takes integer values from 1 to 20, and Θ from 1 to 120. FIG. 3 shows how, to obtain the pixel value T_rΘ at the coordinates (r, Θ) of the converted image, the corresponding small region of radius 10 dots × angle 3° in the original image is divided into 10 × 10 points, each point is denoted (x_ij, y_ij), and the sum of the pixel values I(x_ij, y_ij) is computed.
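The conversion of Equation (1) can be sketched as follows (Python; the exact sample positions inside each 10-dot × 3° region are not specified in the text, so the regular grid used below is an assumption):

```python
import math

def polar_transform(image, size=480, n_r=20, n_theta=120, sub=10):
    """Compute T[r][theta] = sum of 10 x 10 sampled pixel values per region,
    per Equation (1); image is a size x size array of 0/1 pixel values."""
    cx = cy = size // 2                      # rotation centre = image centre
    T = [[0] * n_theta for _ in range(n_r)]
    for r in range(n_r):
        for t in range(n_theta):
            total = 0
            for i in range(sub):
                for j in range(sub):
                    # assumed sample grid: radius inside [r*10, (r+1)*10),
                    # angle inside the 3-degree sector starting at t*3 degrees
                    R = r * 10 + (i + 0.5)
                    th = math.radians(t * 3 + (j + 0.5) * 0.3)
                    x = cx + int(R * math.cos(th))
                    y = cy + int(R * math.sin(th))
                    if 0 <= x < size and 0 <= y < size:
                        total += image[y][x]
            T[r][t] = total                  # 0..100 for a binary image
    return T

# Demo: a uniformly white binary image gives the maximum element value 100.
white = [[1] * 480 for _ in range(480)]
T = polar_transform(white)
print(T[0][0], T[19][119])  # 100 100
```

Each converted-image element thus ranges over 0 to 100, matching the description in paragraph [0023].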
[0024] In the case of the rotation-diffusion neural network of this embodiment shown in FIG. 2, an input pattern of 300 × 300 pixels is first converted into a 25 × 120 pixel image on polar coordinates, excluding the pupil (center) portion. Next, the polar-coordinate-converted image is input to the diffusion layer to obtain the diffusion pattern, which is the vector information. By applying the orientation memory matrix and the shape memory matrix to this diffusion pattern, the orientation recognition neuron outputs and the shape recognition neuron outputs are obtained. The orientation is recognized from the outputs of the 30 orientation recognition neurons using the population vector method. For shape recognition, a different shape recognition neuron is assigned one-to-one to each object to be recognized, and shape recognition is performed from the maximum output of these neurons.
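A hedged sketch of the population vector decoding for the 30 orientation neurons spaced every 12°: each neuron output weights a unit vector at that neuron's preferred orientation, and the angle of the vector sum is the recognized orientation (the Gaussian activity bump used as input is illustrative, not taken from the patent):

```python
import math

def population_vector(outputs, step_deg=12):
    """Decode an orientation (degrees) from orientation-neuron outputs;
    neuron k is assumed to prefer the orientation k * step_deg degrees."""
    vx = sum(o * math.cos(math.radians(k * step_deg)) for k, o in enumerate(outputs))
    vy = sum(o * math.sin(math.radians(k * step_deg)) for k, o in enumerate(outputs))
    return math.degrees(math.atan2(vy, vx)) % 360

# Illustrative Gaussian activity bump centred on neuron 5 (preferred 60 degrees),
# using circular distance so the bump wraps around the ring of 30 neurons:
bump = [math.exp(-min(abs(k - 5), 30 - abs(k - 5)) ** 2 / 8) for k in range(30)]
print(round(population_vector(bump)))  # 60
```

Because the decoded angle interpolates between neighboring neurons, the recognized orientation is not limited to the 12° grid of preferred orientations.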
[0025] The operation of this rotation-diffusion-type neural network is as follows. In the memorization process, the network learns by the orthogonal learning method. The number of learning patterns is given by (number of learned irises) × (number of learned orientations); for example, for 10 irises, 10 (irises) × 6 (orientations) = 60 (patterns). In the memorization process that forms the memory, learning is performed by Equations (2)–(7) between the diffusion patterns V of the transformed learning images, the teacher signals TO of the orientation-recognition neurons, and the teacher signals TF of the shape-recognition neurons, generating the orientation memory matrix M_O and the shape memory matrix M_F.
[Equation 2]
V = [V(1), V(2), …, V(60)]   (2)

[Equation 3]
X = (Vᵀ V)⁻¹ Vᵀ   (3)

where X is the generalized inverse of V.

[Equation 4]
M_O = TO X   (4)

[Equation 5]
TO = [TO(1), TO(2), …, TO(60)]   (5)

[Equation 6]
M_F = TF X   (6)

[Equation 7]
TF = [TF(1), TF(2), …, TF(60)]   (7)

Next, in the recall (recognition) process that retrieves the memory, the transformed image of an input iris image at an arbitrary orientation is fed into the diffusion layer; the orientation-recognition neuron output YO is obtained as the product of the resulting diffusion pattern V and the orientation memory matrix M_O, and the shape-recognition neuron output YF as the product of the diffusion pattern V and the shape memory matrix M_F (Equations (8) and (9)).

[Equation 8]
YO = M_O V   (8)

[Equation 9]
YF = M_F V   (9)
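Assuming the orthogonal learning of Equations (2)–(7) amounts to the standard pseudo-inverse solution M = T·X (so that the memory matrices exactly reproduce the teacher signals for each stored pattern), the memorization and recall steps might be sketched numerically as follows. The pattern dimension (300), the random stand-in patterns, and the one-hot shape teachers repeated over six orientations are our assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(300, 60))       # columns: diffusion patterns of 60 learning images (Eq. 2)
TO = rng.normal(size=(30, 60))       # orientation teacher signals (Eq. 5)
TF = np.eye(10).repeat(6, axis=1)    # shape teachers: 10 irises x 6 orientations each (Eq. 7)
X = np.linalg.pinv(V)                # generalized inverse of V (Eq. 3)
M_O = TO @ X                         # orientation memory matrix (Eq. 4)
M_F = TF @ X                         # shape memory matrix (Eq. 6)
YO = M_O @ V[:, 7]                   # recall: orientation-neuron outputs for pattern 7 (Eq. 8)
YF = M_F @ V[:, 7]                   # recall: shape-neuron outputs for pattern 7 (Eq. 9)
```

Because the 60 columns of V are linearly independent here, recall of any stored pattern reproduces its teacher signals exactly.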
[0027] Next, Fig. 1 shows the system configuration of one embodiment of the present invention. The system comprises a small camera 14 capable of capturing moving images, with a lens 15, for photographing the iris 12 of a human eye 10; a computer 16 that captures the photographed iris images; and a display 18. The main body of the computer 16 is equipped with an image input board for transferring image data to the CPU, a DLL (Dynamic Link Library) for manipulating and processing the iris images, a storage device, and so on. The small camera 14 is fitted with a near-infrared illuminator 20, an infrared light source for photographing the iris pattern clearly; a flash light-emitting device 22 for eliciting the pupillary reflex; and a plastic infrared-transmitting filter 24 that cuts the visible-light noise reflected in the iris 12.
[0028] The light-emitting device 22 can fire at an arbitrary timing (in frame units), synchronized with an external trigger output signal from the image input board of the computer 16. The input image is a 640 × 480 pixel grayscale image with 256 gray levels. The system can capture images in real time at about 13 frames/s. A commercially available personal computer and its operating system were used for the computer 16.
[0029] Next, the processing flow of the iris recognition system of this embodiment is shown in the flowchart of Fig. 4. First, the eye 10 of the person to be recognized is photographed with the small camera 14 (s10). Next, when the number of captured frames is 19 or 50n + 19 (n = 1, 2, …), the measured iris diameter is initialized (s11). Then, while the frame number is 20, or between 50n + 20 and 50n + 20 + n (n = 1, 2, …), flash light from the flash light-emitting device 22 is applied to the eye 10 (s12). As shown in Fig. 5, the system uses the pupillary response to acquire an image of the iris 12 when the pupil diameter is within a fixed range (2.9 mm to 3.0 mm). To acquire the iris image, the brightness is normalized to a constant value using the average brightness of a fixed region containing the eye 10, as shown in Fig. 9, and the pupil 27 is detected in the captured face image using the one-eye partial template 26 shown in Fig. 6 (s13). Whether the part detected by the one-eye partial template is really the pupil 27 is then judged using labeling, an existing decision method (s14). As shown in Fig. 7, labeling detects a specific part such as the iris by assigning the same label (number) to all connected pixels (a connected component) and different numbers to different connected components.
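The labeling step described above can be sketched as a minimal 4-connected component labeling pass over a binarized image. This is a generic illustration of the technique, not the patent's code; the function name and the 4-connectivity choice are assumptions.

```python
from collections import deque

def label_components(img):
    """Assign the same label to every pixel in a 4-connected run of
    1-pixels and different labels to different components, as used here
    to decide whether a detected dark blob is really the pupil."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] == 1 and labels[sy][sx] == 0:
                current += 1                     # start a new component
                q = deque([(sy, sx)])
                labels[sy][sx] = current
                while q:                         # breadth-first flood fill
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           img[ny][nx] == 1 and labels[ny][nx] == 0:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current
```

A candidate blob can then be validated by checking the area and roundness of its component.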
[0030] When the detected part is judged to be the pupil, the pupil center, pupil diameter, and pupil area are measured simultaneously and pupil detection ends (s15). After pupil detection, the pupil center measured by the above labeling is corrected using the least-squares method. After the pupil-center correction, the iris diameter is measured if iris-diameter initialization has been performed; if not, the previously measured value is used as it is (s16). Laplacian processing is further applied in measuring the iris diameter. At each radius outside the pupil, with 0° taken as straight up from the pupil center and counterclockwise as positive, pixel values are summed in 1° steps over the angles −75° to −135° for the right iris and 75° to 135° for the left iris. The radii at which the accumulated pixel value is largest are taken as the right iris edge and the left iris edge, respectively.
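The radial accumulation used to locate an iris edge might be sketched as below. The angular convention (0° pointing up from the pupil center, counterclockwise positive, image y-axis pointing down) follows the text; the search range, the 1° sampling, and the function name are our assumptions.

```python
import numpy as np

def iris_edge_radius(edge_img, cx, cy, r_min, r_max, ang_lo, ang_hi):
    """For each candidate radius, accumulate edge-strength values (e.g.
    a Laplacian-processed image) over the given angular range in 1-degree
    steps and return the radius with the largest sum."""
    best_r, best_sum = r_min, -np.inf
    for r in range(r_min, r_max + 1):
        s = 0.0
        for a in range(ang_lo, ang_hi + 1):
            t = np.deg2rad(a)
            # 0 deg points straight up; counterclockwise is positive
            x = int(np.round(cx - r * np.sin(t)))
            y = int(np.round(cy - r * np.cos(t)))
            if 0 <= y < edge_img.shape[0] and 0 <= x < edge_img.shape[1]:
                s += edge_img[y, x]
        if s > best_sum:
            best_r, best_sum = r, s
    return best_r
```

On a synthetic edge map with a circular ridge, the accumulator peaks at the ridge radius.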
[0031] Next, using the fact that the iris diameter is nearly constant from person to person, the pupil diameter is derived from the relative ratio of the measured sizes (in pixels) of the iris 12 and the pupil on the image, and the image captured when that diameter is 2.9 mm to 3.0 mm is acquired as the reference image (s17). The pupil diameter is allowed the range 2.9 mm to 3.0 mm so that a small size error is tolerated and iris-image acquisition proceeds smoothly. Fig. 8 shows the screen of the display 18 at the time of iris-image acquisition. From the obtained image, a 300 × 300 pixel region centered on the pupil is cut out (s18). Since the measured sizes of the iris and pupil on the image change with the distance from the camera 14 and with zoom, the size is normalized by a known linear interpolation method so that the iris size is constant. To reduce the influence of the surrounding light environment, the average brightness of a plurality of images is obtained, a correction coefficient is set, and the brightness is normalized (s19). This normalized image is taken as the reference iris pattern (the input image of the rotation-diffusion-type neural network) (s20), and the network learns and memorizes the reference iris pattern.
[0032] Here, the method of determining the brightness-normalization standard for pupil and iris detection is described. As shown in Fig. 9, each pixel value in a fixed region A of the image containing the eye is measured to obtain an average brightness, and this average is normalized to a constant value for each measured image, eliminating the variation in brightness of the acquired images from one subject to another. The average brightness of region A is measured so that the sclera, iris, and pupil remain distinct even after brightness normalization: the brightness of the iris, which has intermediate brightness among the sclera, iris, and pupil, is set to the middle of the 256-level scale. The range B enclosed by the inner line is the pupil detection range.
[0033] Next, the method of determining the binarization threshold for pupil detection is described. After brightness normalization, the brightness of the iris and of the pupil is measured, in the same way as when the brightness-normalization standard was determined. The optimum binarization threshold is determined from the average brightness and standard deviation of the pupils and irises of a plurality of subjects, using the ratio of the standard deviations. The threshold Y is determined by the following formula:

Y = (AV2 − AV1) · SD1 / (SD1 + SD2) + AV1

where AV1 is the average pupil brightness, AV2 is the average iris brightness, SD1 is the standard deviation of the pupil brightness, and SD2 is the standard deviation of the iris brightness.
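The threshold formula places Y between the two mean brightnesses, shifted toward the distribution with the smaller spread. A direct sketch (function name ours):

```python
def binarization_threshold(av1, sd1, av2, sd2):
    """Threshold between pupil (mean av1, SD sd1) and iris (mean av2,
    SD sd2) brightness: Y = (AV2 - AV1) * SD1 / (SD1 + SD2) + AV1.
    The offset above the pupil mean is proportional to SD1 / (SD1 + SD2)."""
    return (av2 - av1) * sd1 / (sd1 + sd2) + av1
```

For example, with a pupil at 20 ± 5 and an iris at 120 ± 15 the threshold lands a quarter of the way up from the pupil mean.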
[0034] Fig. 10 shows a graph of the cumulative number of pixels at each brightness value, using the data of all subjects after brightness normalization. Fig. 10 shows that the binarization threshold clearly separates the iris brightness from the pupil brightness. As a result, the center of the pupil is detected by the one-eye template, and the pupil edge and iris edge are detected accurately.
[0035] The iris input pattern used when recognizing another person's iris is obtained by the same procedure as the reference iris pattern. Personal identification is then performed on the obtained image by shape recognition. This system uses the normalized diffusion pattern and performs the learning and recognition with the rotation-diffusion-type neural network described above.
[0036] Further, in this invention, additional shape-recognition criteria are introduced to raise the discrimination accuracy for unlearned irises. The new criteria are the inner product and the minimum distance, which are commonly used to examine the similarity of vectors. These measures, however, are known to be sensitive to pattern variation. In iris recognition there are various kinds of pattern variation, one of which is orientation misalignment: when recognition is performed on images from the camera 14, it is almost impossible to capture the learning image and the recognition image at exactly the same orientation. By correcting the orientation using the orientation recognition that is a feature of the rotation-diffusion-type neural network, the inner product and the minimum distance can be introduced as shape-recognition criteria.
[0037] Next, the orientation correction used in personal authentication with the rotation-diffusion-type neural network is described. If the orientation of the learning image is defined as 0°, the orientation of an input image at recognition time is not necessarily 0°; even for the genuine person, the input iris pattern then differs from the one at learning time and the person would be judged to be someone else. In authentication with the rotation-diffusion-type neural network, therefore, orientation recognition is performed first and the orientation of the input image is corrected.
[0038] Fig. 11 shows the flow of personal authentication with the rotation-diffusion-type neural network. First, the orientation-correction range, the step angle, and the recognition method are selected. Consider the case where the lower limit of the orientation correction is set to 3°, the upper limit to 3°, and the step angle to 1°. If the orientation recognized by the rotation-diffusion-type neural network is 10°, the input image used for recognition is rotated 10° counterclockwise relative to the learning image (the registered person's iris pattern). To align the orientation of the input image with that of the learning image, the input image is therefore rotated back by 10° ± 3° in 1° steps. In this case the correction orientations run from −13° to −7°, yielding seven orientation-corrected iris patterns. The correction is not limited to exactly −10° in order to allow for the resolution and error of the orientation recognized by the rotation-diffusion-type neural network. Using the orientation-corrected iris patterns, the individual's iris pattern (shape) is authenticated by the selected recognition method (inner product, minimum distance, or shape-recognition neuron output). In recognition by inner product or minimum distance, the vector operation (inner product or minimum distance) is performed between the learning image and the input images rotation-corrected over the specified range, based on the rotation orientation recognized by the rotation-diffusion-type neural network. In this example, the inner products and minimum distances between the learning image and the input images subjected to the seven rotation corrections (correction orientations −13° to −7°) are computed.
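The correction-and-match loop of Fig. 11 might be sketched as follows for the inner-product criterion. For simplicity this sketch assumes polar patterns whose columns are 1° angle bins (the embodiment uses 3° bins), unit normalization, and circular shifting as the rotation; these and the function name are our assumptions.

```python
import numpy as np

def orientation_corrected_match(inp, learned, recognized_deg, half_range=3):
    """Rotate the input pattern back by recognized_deg +/- half_range in
    1-degree steps, compute the inner product with the learned pattern for
    each candidate, and return the best (largest) value."""
    best = -np.inf
    for d in range(recognized_deg - half_range, recognized_deg + half_range + 1):
        cand = np.roll(inp, -d, axis=1)          # undo a d-degree rotation
        v = cand.ravel() / np.linalg.norm(cand)  # normalized candidate
        w = learned.ravel() / np.linalg.norm(learned)
        best = max(best, float(v @ w))           # inner product (Eq. 10)
    return best
```

When the recognized orientation is close to the true one, one of the seven candidates realigns the pattern and the inner product approaches 1.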
[0039] For the inner product the maximum value is used, and for the minimum distance the minimum value, each compared with a preset threshold. For the inner product, if the maximum value is larger than the decision threshold the input is judged to be the registered person; if it is smaller, the input is judged to be another person. For the minimum distance, if the minimum value is smaller than the decision threshold the input is judged to be the registered person; if it is larger, the input is judged to be another person. These vector operations are carried out for all registered learning images to identify the individual. If the input is not judged to be the genuine person for any learning image, it is judged to be unregistered. When the shape-recognition neuron output is used for shape decision (personal authentication), the orientation-corrected image is fed again into the rotation-diffusion-type neural network as the input image, and the decision is made from the shape-recognition neuron outputs. If the output of the shape-recognition neuron representing a registered person exceeds the preset decision threshold, the input iris image is judged to be the iris of the person that neuron represents. If no shape-recognition neuron output exceeds the decision threshold, the input is judged to be unregistered.
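The accept/reject rule above can be condensed into a small sketch. The mapping of scores to persons and the function name are our illustration; the thresholds and comparison directions follow the text.

```python
def identify(scores, threshold, criterion="inner"):
    """`scores` maps each registered person to the best inner product (or
    minimum distance) of the corrected input against that person's learned
    pattern.  Inner product: accept the best person if the maximum exceeds
    the threshold; minimum distance: accept if the minimum falls below it;
    otherwise report the input as unregistered (None)."""
    if criterion == "inner":
        person = max(scores, key=scores.get)
        return person if scores[person] > threshold else None
    person = min(scores, key=scores.get)        # minimum-distance criterion
    return person if scores[person] < threshold else None
```

A `None` result corresponds to the "unregistered" outcome in the text.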
[0040] The inner product is defined as

[Equation 10]
cos θ = (V_L · V_R) / (|V_L| |V_R|)   (10)

where V_L is the vector representing the normalized diffusion pattern of the learned iris image and V_R is the vector representing the normalized diffusion pattern of the iris image used for recognition. |V_L| and |V_R| are the absolute values of V_L and V_R, representing the lengths of the vectors. Here the vectors are normalized so that |V_L| = 1 and |V_R| = 1. Therefore

[Equation 11]
cos θ = (V_L · V_R) / (|V_L| |V_R|) = V_L · V_R   (11)

and cos θ represents the similarity of the two vectors V_L and V_R. That is, when V_L = V_R, cos θ = 1, indicating the highest similarity.
[0041] The minimum distance is the vector difference (distance) |V_L − V_R| between the two vectors, as shown in Fig. 12. When V_L = V_R the distance becomes zero, indicating the highest similarity. When the magnitudes of V_L and V_R are normalized to 1, the square of the minimum distance can be expressed through the inner product:
[Equation 12]
|V_L − V_R|² = (V_L − V_R) · (V_L − V_R)
             = |V_L|² + |V_R|² − 2 V_L · V_R
             = 2 − 2 V_L · V_R   (12)

The relationship between the minimum distance and the inner product is therefore expressed as follows.

[Equation 13]
|V_L − V_R| = √(2 − 2 V_L · V_R)   (13)

[0042] From this expression, when the similarity of the two vectors is highest (V_L = V_R), the inner product takes its maximum value V_L · V_R = 1, so the minimum distance takes its minimum value |V_L − V_R| = 0. Accordingly, when the magnitudes of V_L and V_R are normalized to 1, shape identification (personal authentication) by the minimum distance and by the inner product gives the same result.
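The identity of Equations (12) and (13) can be checked numerically for arbitrary unit vectors; the random stand-in patterns below are for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
v_l = rng.normal(size=50); v_l /= np.linalg.norm(v_l)   # learned pattern, |V_L| = 1
v_r = rng.normal(size=50); v_r /= np.linalg.norm(v_r)   # input pattern,  |V_R| = 1
inner = float(v_l @ v_r)                    # Eq. (11): cos(theta)
dist = float(np.linalg.norm(v_l - v_r))     # minimum distance
assert abs(dist ** 2 - (2.0 - 2.0 * inner)) < 1e-12     # Eq. (12)
assert abs(dist - np.sqrt(2.0 - 2.0 * inner)) < 1e-12   # Eq. (13)
```

Since the distance is a monotonically decreasing function of the inner product on unit vectors, thresholding either measure orders candidates identically, which is the point made in [0042].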
[0043] Fig. 13 shows the recognition flowchart of the rotation-diffusion-type neural network in the iris-pattern recognition of this embodiment described above. Orientation correction is performed in the recognition process because, when the inner product and the minimum distance are used, the learning (registered) images are iris images at 0°, so the recognition target must also be at the 0° orientation.
[0044] According to the personal authentication device of this embodiment, the rotation-diffusion-type neural network can recognize the orientation of the iris pattern, so rotational changes can be handled by orientation correction. Furthermore, by introducing the inner product and the minimum distance as shape-recognition criteria, a 0% false acceptance rate can be achieved with orientation-corrected iris patterns. The pupil center, pupil edge, and iris edge can be detected automatically from the acquired iris image by pupil-center detection with labeling, center correction by the least-squares method, and edge detection with a Laplacian filter. Although the iris and pupil sizes on the image change with the distance from the camera 14 and with zoom, enlargement and reduction by linear interpolation normalize the iris size, so size changes can be handled. Moreover, by applying flash light to the eye 10 to induce the pupillary response and measuring the temporal change of the pupil diameter, spoofing with an iris photograph or the like can be rejected.
[0045] The personal authentication device of this invention creates a memory matrix characterizing each learned iris, so the amount of computation is small and the recognition time is short. Since the pupil center position, pupil edge, and iris edge are detected automatically, no eye alignment is required, and the device can be used in a wide range of applications.
[0046] The personal authentication device of this invention is not limited to the above embodiment. As long as a camera capable of capturing moving images can transfer iris images directly to the CPU, the image input board and the DLL for manipulating and processing iris images are not always necessary. The personal authentication device of this invention can also be used without the flash light-emitting device; in this case, an iris image converted to a fixed relative pupil size by the linear interpolation described above is captured.
[0047] Furthermore, the vector information used for recognition is not limited to the captured original image or its polar-coordinate transform described above; it may also be a diffusion pattern, or the original image processed with a Laplacian filter.
[0048] As a countermeasure against spoofing, performing the following processing prevents spoofing still more reliably. Here, the input is judged to be a spoof if the pupillary light reflex does not occur; in the flowchart of Fig. 4, the processing from s16 and s17 onward is performed as shown in Fig. 14. In this processing, the LED is flashed to obtain a fixed pupil size, and the spoofing decision is made by comparing the relative pupil diameters before and after the flash. The processing of Fig. 14 uses for the spoofing decision the relative pupil diameters from the first LED flash at recognition time until the pupillary light reaction occurs. In this embodiment, the first flash is fired at frame 20 of the measurement images, so the light reaction occurs by frame 30. The relative pupil diameters of frames 20 to 29, during which the flash fires and the light reflex occurs, are therefore stored. The maximum and minimum of the stored relative pupil diameters are found and their difference or ratio is computed; if the difference or ratio is at or below a reference value, the input is judged to be a spoof.
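The spoof decision might be sketched as follows, using the max-min difference variant. The reference value of 0.05 and the function name are assumed for illustration; the patent leaves the reference value unspecified.

```python
def is_spoof(relative_diameters, reference=0.05):
    """Anti-spoofing sketch: after the flash, a live pupil constricts, so
    the relative pupil diameters stored over frames 20-29 must change.
    If the difference between their maximum and minimum is at or below
    the reference value, judge the input a spoof (e.g. a photograph)."""
    d_max = max(relative_diameters)
    d_min = min(relative_diameters)
    return (d_max - d_min) <= reference
```

A photograph shows an essentially constant pupil and is flagged, while a live eye's constriction clears the test.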
Example 1
[0049] The results of implementing the personal authentication device based on the iris pattern of this invention are described below. First, recognition experiments were carried out while varying the number of iris images used for learning and recognition among 3, 5, and 10 subjects. Fig. 15 shows the iris images (300 × 300 pixels) used for learning and recognition. All learning and recognition used iris images of the subjects' right eyes. The learning orientations were six orientations from 0° to 360° in 60° steps. The number of learning patterns, given by (number of recognized irises) × (number of learning orientations), was 18, 30, and 60 patterns for the respective numbers of subjects.
[0050] Fig. 16(a) shows the orientation-recognition characteristics and Fig. 16(b) the shape-recognition characteristics when the recognition experiment was performed with 10 subjects. In the orientation-recognition characteristics of Fig. 16(a), the horizontal axis is the input rotation orientation of the iris and the vertical axis is the recognized orientation. Very good linearity is seen between the input rotation orientation and the recognized orientation, showing that the orientation was recognized correctly. The orientation error was mean ± standard deviation = −0.83 ± 0.75°. The rotation orientations of the 10 irises can therefore be recognized at all orientations. In the shape-recognition characteristics of Fig. 16(b), the horizontal axis is the input rotation orientation of the iris and the vertical axis is the shape-recognition neuron output; ○ denotes the mean of the target-neuron outputs and × the mean of the non-target-neuron outputs. The vertical bars at each input orientation represent the standard deviation. The target-neuron outputs were 0.67 to 1.15 (mean ± standard deviation = 0.94 ± 0.11) and the non-target-neuron outputs were −0.86 to 0.51 (0.02 ± 0.18). Over input rotation orientations from 0° to 360°, the mean target-neuron output was approximately 1.0 and the mean non-target-neuron output approximately 0.0, and the target-neuron output was always larger than the non-target-neuron output. Iris recognition for the 10 subjects is therefore possible at all orientations. The recognition experiments with 3 and 5 subjects likewise recognized both orientation and shape correctly, as in the 10-subject case.
[0051] Next, real-time recognition experiments were performed after learning in advance, varying the number of iris images used for learning and recognition between 3 and 5 subjects. The iris images used for learning (300 × 300 pixels) were selected from Fig. 15. All learning and recognition used the iris of the same subject's right eye. The number of learning patterns, given by (number of recognized irises) × (number of learning orientations), was 18 and 30 patterns for 3 and 5 subjects, respectively. The images used for learning were taken in advance on the day of the experiment.
[0052] Fig. 17(a) shows the orientation recognition characteristics and Fig. 17(b) the shape recognition characteristics when the recognition experiment was performed with five subjects. In the orientation recognition characteristics of Fig. 17(a), the horizontal axis is the input rotation orientation of the iris and the vertical axis is the recognized orientation. Good linearity was observed between the input rotation orientation and the recognized orientation, showing that the orientation was recognized almost correctly. The orientation error was mean ± standard deviation = −3.92 ± 0.22°. From this, the personal authentication device of this embodiment can recognize the rotation orientations of the five irises in real time over all orientations. In the shape recognition characteristics of Fig. 17(b), the horizontal axis is the input rotation orientation of the iris and the vertical axis is the shape recognition neuron output; ○ denotes the mean of the target neuron output and × the mean of the non-target neuron output. The vertical bar at each input orientation represents the standard deviation. The target neuron output was 0.67 to 1.27 (mean ± standard deviation = 0.99 ± 0.20), and the non-target neuron output was −0.69 to 0.58 (0.03 ± 0.24). Over input rotation orientations of the iris from 0° to 360°, the target neuron output averaged approximately 1.0 and the non-target neuron output approximately 0.0, with the target neuron output always exceeding the non-target neuron output. This shows that real-time iris recognition of the five subjects is possible at all orientations. Real-time recognition experiments with three subjects likewise recognized both orientation and shape correctly.
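The acceptance condition reported above — target neuron output always exceeding the non-target output across all input orientations — can be sketched as follows. This is an illustrative summary routine, not part of the patent disclosure; the function name and the toy output values are hypothetical.

```python
from statistics import mean, stdev

def summarize_shape_outputs(target, non_target):
    """Summarize shape-recognition neuron outputs over input orientations.

    target / non_target: lists of neuron outputs, one value per tested
    input rotation orientation (hypothetical data layout). Returns the
    mean/std of each output stream and whether the target output
    exceeded every non-target output, the condition reported in [0052].
    """
    always_greater = all(t > n for t, n in zip(target, non_target))
    return {
        "target_mean": mean(target), "target_std": stdev(target),
        "non_target_mean": mean(non_target), "non_target_std": stdev(non_target),
        "target_always_greater": always_greater,
    }

# Toy outputs at four input orientations (0°, 90°, 180°, 270°):
stats = summarize_shape_outputs([0.95, 1.05, 1.02, 0.98],
                                [0.10, -0.05, 0.02, 0.01])
```

With outputs resembling the experiment (target mean near 1.0, non-target mean near 0.0), the identity decision reduces to the `target_always_greater` check.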
[0053] Next, using the authentication system according to the flowchart shown in Fig. 13, a person authentication experiment was performed on images captured from the camera. The experiment was conducted with 10 subjects: 5 of the 10 were learned and 5 were left unlearned. The images used for learning were captured in advance on the day of the authentication experiment. By swapping the five learned iris images one person at a time, learning was performed on a total of 10 sets. The learning iris image was the iris image captured at an input rotation orientation of 0°. Since all 10 subjects were tested against each of the 10 learned sets, recognition results were obtained for a total of 100 trials (50 trials with learned persons, 50 trials with unlearned persons).
[0054] Fig. 18 shows the error rates obtained when the shape recognition neuron output is used as the decision criterion, Fig. 19 when the inner product is used, and Fig. 20 when the minimum distance is used. The dotted line represents the false rejection rate (the rate of erroneously rejecting the genuine person) and the solid line the false acceptance rate (the rate of erroneously accepting another person). The vertical axis shows each error rate and the horizontal axis the decision threshold of the evaluation criterion. The false rejection rate was computed by counting the trials in which the genuine person was rejected: those in which the output value of the shape recognition neuron or inner product corresponding to the genuine person was smaller than the decision threshold, or the minimum-distance output corresponding to the genuine person was larger than the decision threshold. As special cases of false rejection, the output of the shape recognition neuron or inner product corresponding to the genuine person may exceed the decision threshold while an output corresponding to another person exceeds it further, or the minimum-distance output corresponding to the genuine person may be smaller than the decision threshold while an output corresponding to another person is smaller still. In the recognition experiments, such special cases of false rejection occurred in 8% of trials for the shape recognition neuron output and 2% for both the inner product and the minimum distance. Conversely, the false acceptance rate was computed by counting the trials in which another person was accepted: those in which the maximum non-target neuron output or inner product was larger than the decision threshold, or the minimum value of the minimum distance was smaller than the decision threshold. From the experimental results, when the shape recognition neuron output was used as the criterion, the false rejection and false acceptance curves crossed at a decision threshold of about 0.78, giving an equal error rate of about 43%. When the inner product was used, the curves crossed at a decision threshold of about 0.94, giving an equal error rate of about 15%; with a decision threshold of 0.96, however, the false rejection rate rises to 20% but other persons are rejected completely. When the minimum-distance criterion was used, the curves crossed at a decision threshold of about 0.35, giving an equal error rate of about 13%; with a decision threshold of 0.25, however, the false rejection rate rises to 26% but other persons are rejected completely.
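The error curves and their crossing point (the equal error rate) described in [0054] can be computed from per-trial scores by sweeping the decision threshold. The sketch below is illustrative and not the authors' implementation; it assumes a similarity score where higher means more genuine (as for the neuron output and inner-product criteria — for the minimum-distance criterion the comparisons are inverted, as noted in the comments).

```python
def error_rates(genuine, impostor, threshold):
    """FRR/FAR at one threshold for similarity scores (higher = more genuine).

    A genuine trial is falsely rejected when its score falls below the
    threshold; an impostor trial is falsely accepted when its score
    reaches the threshold. For a distance criterion, such as the
    minimum-distance measure in [0054], both comparisons are inverted.
    """
    frr = sum(s < threshold for s in genuine) / len(genuine)
    far = sum(s >= threshold for s in impostor) / len(impostor)
    return frr, far

def equal_error_rate(genuine, impostor, steps=1000):
    """Sweep the threshold; return (threshold, rate) where |FRR - FAR| is smallest."""
    lo, hi = min(genuine + impostor), max(genuine + impostor)
    best = None
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        frr, far = error_rates(genuine, impostor, t)
        gap = abs(frr - far)
        if best is None or gap < best[0]:
            best = (gap, t, (frr + far) / 2)
    return best[1], best[2]

# Toy scores: well-separated genuine vs. impostor trials (hypothetical data).
t, eer = equal_error_rate([0.97, 0.95, 0.96, 0.99], [0.40, 0.55, 0.50, 0.45])
```

For perfectly separated toy scores the sweep finds a threshold between the two score clusters with an equal error rate of zero; with overlapping real scores it reproduces crossing points like the 13–43% values reported in the experiment.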

Claims

[1] A personal authentication device comprising:
a camera capable of capturing moving images and a storage device that stores the images captured by the camera at a predetermined cycle;
iris/pupil detection means for detecting the iris or pupil portion of an eye by scanning the image information of a person's face captured by the camera while comparing it against a template sized to fall within the range of the iris or pupil of a human eye;
iris pattern acquisition means for capturing the iris pattern detected by the iris/pupil detection means;
memory conversion means for converting the rotation orientation and shape of the iris pattern by a rotation spreading neural network, which produces a spread pattern by multiplying the polar-coordinate-transformed image by a periodic Gaussian function, and for learning and storing the result as vector information; and
iris pattern shape determination means for comparing, against the vector information of the iris pattern learned and stored by the memory conversion means, the vector information of an arbitrary iris pattern acquired in the same way and likewise converted by the memory conversion means using the rotation spreading neural network,
wherein the shape determination by the iris pattern shape determination means recognizes and corrects the orientations of the pieces of vector information being compared, forms an orientation memory matrix and a shape memory matrix from the vector information, obtains orientation recognition neurons and shape recognition neurons as the outputs of the rotation spreading neural network, and performs shape recognition by associating the shape recognition neuron of each iris to be recognized with its corresponding stored shape recognition neuron.
[2] The personal authentication device according to claim 1, wherein, at the time of learning by the iris pattern acquisition means, the original image of the iris at a predetermined orientation or its polar-coordinate-transformed image is registered as vector information, and
at the time of recognition, the iris pattern shape determination means corrects an iris image input at an arbitrary orientation using the recognized orientation obtained by the rotation spreading neural network, and compares the original image or its polar-coordinate-transformed image, as vector information, against a preset threshold.
[3] The personal authentication device according to claim 1, wherein the iris pattern acquisition means comprises pupil center position detection means based on labeling, center correction by the least-squares method, iris edge detection means using a Laplacian filter, and iris size normalization means using interline correction.
[4] The personal authentication device according to claim 1, wherein at least one of the inner product and the minimum distance between the pieces of vector information is obtained, and personal identification is performed by judging the match or mismatch of the vector information against a preset threshold.
[5] The personal authentication device according to claim 1, wherein the iris pattern acquisition means comprises a flash light emitting device that induces a pupil reaction in the iris portion, an infrared light source for imaging, and an infrared transmission filter attached to the lens of the camera, and acquires the iris image in a state where the pupil size of the iris is substantially constant.
[6] The personal authentication device according to claim 5, wherein the iris pattern acquisition means continuously measures the temporal change of the pupil diameter.
[7] The personal authentication device according to claim 2, wherein, for the orientation correction in the shape determination by the iris pattern shape determination means, instead of correcting with the orientation recognized by the rotation spreading neural network, the input orientation is recognized and corrected by obtaining at least one of the inner product and the minimum distance between the spread pattern of the iris used for recognition and the learned spread patterns of a plurality of orientations, and performing vector synthesis.
[8] The personal authentication device according to claim 1, wherein the iris pattern acquisition means searches for the position of the face and eyes of the person to be recognized in a low-resolution image obtained by decimation, and, after locating the eyes, detects the iris region in a high-resolution image.
[9] The personal authentication device according to claim 1, wherein each pixel value in a fixed region of the image including the human eye is measured to obtain an average luminance, which is normalized to a fixed value to set the luminance of the iris.
[10] The personal authentication device according to claim 1, wherein the luminance mean and standard deviation of the pupil and iris of the person being measured are obtained, and the pupil and iris are determined by a binarization threshold decided from the ratio of the standard deviations.
[11] The personal authentication device according to claim 1, wherein the change in pupil diameter during the light reflex caused by flash light irradiation is measured, a biological reaction is detected from the difference or ratio between the maximum and minimum pupil diameters, and the measured iris image is judged to be a spoof if this difference or ratio is at or below a reference value.
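The liveness test of claim 11 — measuring pupil diameter during the flash-induced light reflex and comparing the maximum-to-minimum change against a reference value — can be sketched as follows. This is an illustrative sketch, not the claimed implementation; the function name, the ratio criterion, and the 1.15 reference value are all hypothetical (the patent specifies no numeric reference).

```python
def is_spoof(pupil_diameters_mm, min_ratio=1.15):
    """Claim-11-style liveness test (illustrative; the 1.15 reference
    ratio is a made-up value, not taken from the patent).

    pupil_diameters_mm: diameters sampled while a flash triggers the
    light reflex. A live eye constricts, so max/min should exceed the
    reference ratio; a printed or otherwise static fake shows almost
    no change.
    """
    d_max, d_min = max(pupil_diameters_mm), min(pupil_diameters_mm)
    return (d_max / d_min) <= min_ratio  # at/below reference -> judged a spoof

live_eye = [4.8, 4.5, 3.9, 3.4, 3.6, 4.0]   # constriction, then recovery
printed_photo = [4.8, 4.8, 4.7, 4.8, 4.8]   # essentially static
```

The claim also allows the max-minus-min difference in place of the ratio; the same structure applies with `d_max - d_min` compared against a reference difference.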
PCT/JP2005/004214 2004-08-30 2005-03-10 Personal authentication system WO2006025129A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004-249933 2004-08-30
JP2004249933 2004-08-30

Publications (1)

Publication Number Publication Date
WO2006025129A1 true WO2006025129A1 (en) 2006-03-09

Family

ID=35999793

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/004214 WO2006025129A1 (en) 2004-08-30 2005-03-10 Personal authentication system

Country Status (1)

Country Link
WO (1) WO2006025129A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05288520A (en) * 1992-04-14 1993-11-02 Matsushita Electric Ind Co Ltd Pattern matching method
JP2001167284A (en) * 1999-12-06 2001-06-22 Oki Electric Ind Co Ltd Device and method for detecting reflection of spectacles
JP2002006474A (en) * 2000-06-21 2002-01-09 Toppan Printing Co Ltd Method for processing mask pattern image
JP2003030659A (en) * 2001-07-16 2003-01-31 Matsushita Electric Ind Co Ltd Iris authentication device and iris image pickup device
JP2003187247A (en) * 2001-12-14 2003-07-04 Fujitsu Ltd Lip shape specification program, speech intention detection program and image generation program for face recognition


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ARIMURA K ET AL: "Kaiten Kakusangata Neural Net ni yoru 2 jigen Teiji Ichi ni Taisuru Ichi Fuhen na Buttai Hoi to Keijo no Doji Ninshiki.", THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS., vol. 100, no. 490, 1 December 2000 (2000-12-01), pages 23 - 30, XP002996087 *
MURAKAMI M ET AL: "Kaiten Kakusangata Neural Net o Mochiita Kosai ni yoru Real time Kojin Shikibetsu.", THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS., vol. 103, no. 733, 11 March 2004 (2004-03-11), pages 55 - 60, XP002996086 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104781830A (en) * 2012-11-19 2015-07-15 虹膜技术公司 Method and apparatus for identifying living eye
CN106778567A (en) * 2016-12-05 2017-05-31 望墨科技(武汉)有限公司 A kind of method that iris recognition is carried out by neutral net
CN106778567B (en) * 2016-12-05 2019-05-28 望墨科技(武汉)有限公司 A method of iris recognition is carried out by neural network
CN107330395A (en) * 2017-06-27 2017-11-07 中国矿业大学 A kind of iris image encryption method based on convolutional neural networks
CN107330395B (en) * 2017-06-27 2018-11-09 中国矿业大学 A kind of iris image encryption method based on convolutional neural networks

Similar Documents

Publication Publication Date Title
JP3855025B2 (en) Personal authentication device
US20220165087A1 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
US10650261B2 (en) System and method for identifying re-photographed images
US9361507B1 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
US9064145B2 (en) Identity recognition based on multiple feature fusion for an eye image
US8265347B2 (en) Method and system for personal identification using 3D palmprint imaging
CN109800643A (en) A kind of personal identification method of living body faces multi-angle
US20100202669A1 (en) Iris recognition using consistency information
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
WO2009158700A1 (en) Assessing biometric sample quality using wavelets and a boosted classifier
JP4507679B2 (en) Image recognition apparatus, image extraction apparatus, image extraction method, and program
WO2006025129A1 (en) Personal authentication system
US9977889B2 (en) Device for checking the authenticity of a fingerprint
Ribarić et al. Personal recognition based on the Gabor features of colour palmprint images
CN113240043A (en) False identification method, device and equipment based on multi-picture difference and storage medium
CN114092679A (en) Target identification method and apparatus
Takano et al. Rotation invariant iris recognition method adaptive to ambient lighting variation
Ahlawat et al. Online invigilation: A holistic approach
EP3958218A1 (en) Object identification method and device thereof
WO2024042674A1 (en) Information processing device, authentication method, and storage medium
CN112183202B (en) Identity authentication method and device based on tooth structural features
Takano et al. Rotation independent iris recognition by the rotation spreading neural network
Shobhakumar et al. Face Recognition in Attendance Management Systems
Burghardt Inside iris recognition
Sohel et al. Robust Pose Invariant Shape and Texture based Hand Recognition

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP