CN111507198B - Training method for a printed iris detection model, and printed iris detection method and device - Google Patents

Training method for a printed iris detection model, and printed iris detection method and device

Info

Publication number
CN111507198B
CN111507198B
Authority
CN
China
Prior art keywords
iris
printed
image
interest
detection model
Prior art date
Legal status
Active
Application number
CN202010219620.XA
Other languages
Chinese (zh)
Other versions
CN111507198A (en)
Inventor
张小亮
Inventor name withheld upon request
王秀贞
戚纪纲
杨占金
Current Assignee
Beijing Superred Technology Co Ltd
Original Assignee
Beijing Superred Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Superred Technology Co Ltd
Priority to CN202010219620.XA
Publication of CN111507198A
Application granted
Publication of CN111507198B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Biomedical Technology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The disclosure relates to a training method for a printed iris detection model, and to a printed iris detection method and device. The printed iris detection model is applied to the detection of printed irises, and its training method comprises the following steps: acquiring a training set, wherein the training set comprises real iris images and printed iris images; obtaining, based on iris detection, an iris region of interest of each real iris image and of each printed iris image in the training set; constructing the printed iris detection model; and training the printed iris detection model based on the iris regions of interest of the real iris images and the printed iris images. By means of the method and device, interference and attacks from printed irises during identity verification with iris recognition technology can be effectively prevented.

Description

Training method for a printed iris detection model, and printed iris detection method and device
Technical Field
The disclosure relates to the technical field of printed iris detection, and in particular to a training method for a printed iris detection model, and to a printed iris detection method and device.
Background
With the rise of biometric recognition technology, techniques such as face recognition, iris recognition, and fingerprint recognition are receiving great attention. Iris recognition is widely applied because of its high stability, strong anti-counterfeiting capability, and uniqueness.
With the popularity of biometric technology, iris counterfeiting techniques such as printed irises have also emerged. Printed irises create significant interference with, and attacks on, identity verification processes that use iris recognition technology.
Disclosure of Invention
In order to overcome the problems in the prior art, the present disclosure provides a training method for a printed iris detection model, a printed iris detection method, and corresponding devices.
In a first aspect, an embodiment of the present disclosure provides a training method for printing an iris detection model. The printed iris detection model is applied to the detection of the printed iris, and the training method of the printed iris detection model comprises the following steps: acquiring a training set, wherein the training set comprises a real iris image and a printed iris image; based on iris detection, an iris region of interest of a real iris image and an iris region of interest of a printed iris image in a training set are obtained, wherein the iris region of interest of the real iris image comprises an iris of the real iris image and a pupil of the real iris image, and the iris region of interest of the printed iris image comprises an iris of the printed iris image and a pupil of the printed iris image; constructing a printing iris detection model; based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image, the printed iris detection model is trained.
In one embodiment, constructing the printed iris detection model includes: constructing the printed iris detection model from one 3×3 standard convolution layer, three bottleneck structures, four max-pooling layers, one 1×1 convolution layer, and one mean-pooling layer.
In another embodiment, training the printed iris detection model based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image comprises: training a printed iris detection model based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image; and when the loss value obtained based on the loss function is smaller than or equal to a first threshold value, training the printed iris detection model is completed, wherein the loss function is determined according to the cross entropy loss function and the center loss function.
In yet another embodiment, determining the loss function from the cross entropy loss function and the center loss function includes: determining the sum of the cross entropy loss function and the product of the center loss function and a preset parameter as the loss function, wherein the center loss function is:
L_C = (1/2) Σ_i ||x_i − c_{y_i}||²
where the sum runs over the input samples x_i, m represents the total number of categories of iris images, and c_{y_i} represents the center of the category to which the i-th sample belongs.
The cross entropy loss function is:
L_S = −Σ_i log( exp(e_{y_i}) / Σ_{j=1..m} exp(e_j) )
where m represents the total number of categories of iris images, n represents the number of input feature layers, e_y = W^T · x + b, W^T is the parameter matrix to be trained, x is the input data matrix, and b is the bias term.
In yet another embodiment, the training method for the printed iris detection model further includes: preprocessing the real iris image and the printed iris image, wherein the preprocessing includes at least one of adjusting image brightness, adjusting image size, and adding image noise. Training the printed iris detection model based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image then further includes: training the printed iris detection model based on the iris region of interest of the preprocessed real iris image and the iris region of interest of the preprocessed printed iris image.
In a second aspect, embodiments of the present disclosure provide a printed iris detection method. The printed iris detection method comprises the following steps: acquiring an iris image to be detected; obtaining, based on iris detection, an iris region of interest of the iris image to be detected, wherein the iris region of interest includes the iris and the pupil; inputting the iris region of interest into a printed iris detection model to obtain a detection result for the iris image to be detected, wherein the printed iris detection model is the printed iris detection model according to the first aspect or any embodiment of the first aspect; and determining, based on the detection result, whether the iris image to be detected belongs to a printed iris.
In an embodiment, determining whether the iris image to be detected belongs to a printed iris based on the detection result includes: if the detection result indicates that the probability of a printed iris is greater than the probability of a real iris, judging whether the probability of a printed iris is greater than a second threshold; if the probability of a printed iris is greater than the second threshold, determining that the iris image to be detected belongs to a printed iris; and if the probability of a printed iris is less than or equal to the second threshold, determining that the iris image to be detected does not belong to a printed iris.
In another embodiment, based on iris detection, obtaining an iris region of interest of an iris image to be detected includes: based on iris detection, obtaining an initial iris region of interest of an iris image to be detected; and scaling the initial iris region of interest to a preset size to obtain the iris region of interest of the iris image to be detected.
In a third aspect, embodiments of the present disclosure provide a training apparatus for a printed iris detection model. The printed iris detection model is applied to the detection of printed irises, and the training apparatus comprises: a training set acquisition module, configured to acquire a training set, wherein the training set includes a real iris image and a printed iris image; an iris region-of-interest module, configured to obtain, based on iris detection, an iris region of interest of the real iris image and an iris region of interest of the printed iris image in the training set, wherein the iris region of interest of the real iris image includes the iris and the pupil of the real iris image, and the iris region of interest of the printed iris image includes the iris and the pupil of the printed iris image; a construction module, configured to construct the printed iris detection model; and a training module, configured to train the printed iris detection model based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image.
In one embodiment, the construction module is configured to: construct the printed iris detection model from one 3×3 standard convolution layer, three bottleneck structures, four max-pooling layers, one 1×1 convolution layer, and one mean-pooling layer.
In another embodiment, the training module is to: training a printed iris detection model based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image; and when the loss value obtained based on the loss function is smaller than or equal to a first threshold value, training the printed iris detection model is completed, wherein the loss function is determined according to the cross entropy loss function and the center loss function.
In yet another embodiment, the training module is configured to: determine the sum of the cross entropy loss function and the product of the center loss function and a preset parameter as the loss function, wherein the center loss function is:
L_C = (1/2) Σ_i ||x_i − c_{y_i}||²
where the sum runs over the input samples x_i, m represents the total number of categories of iris images, and c_{y_i} represents the center of the category to which the i-th sample belongs.
The cross entropy loss function is:
L_S = −Σ_i log( exp(e_{y_i}) / Σ_{j=1..m} exp(e_j) )
where m represents the total number of categories of iris images, n represents the number of input feature layers, e_y = W^T · x + b, W^T is the parameter matrix to be trained, x is the input data matrix, and b is the bias term.
In yet another embodiment, the training apparatus for the printed iris detection model further includes: a preprocessing module, configured to preprocess the real iris image and the printed iris image, wherein the preprocessing includes at least one of adjusting image brightness, adjusting image size, and adding image noise. The training module is then configured to: train the printed iris detection model based on the iris region of interest of the preprocessed real iris image and the iris region of interest of the preprocessed printed iris image.
In a fourth aspect, embodiments of the present disclosure provide a printed iris detection apparatus. The printed iris detection apparatus includes: an acquisition module, configured to acquire an iris image to be detected; an iris region-of-interest acquisition module, configured to obtain, based on iris detection, an iris region of interest of the iris image to be detected, wherein the iris region of interest includes the iris and the pupil; a processing module, configured to input the iris region of interest into a printed iris detection model to obtain a detection result for the iris image to be detected, wherein the printed iris detection model is the printed iris detection model according to the first aspect or any embodiment of the first aspect; and a judgment module, configured to determine, based on the detection result, whether the iris image to be detected belongs to a printed iris.
In one embodiment, the judgment module is configured to: if the detection result indicates that the probability of a printed iris is greater than the probability of a real iris, judge whether the probability of a printed iris is greater than a second threshold; if the probability of a printed iris is greater than the second threshold, determine that the iris image to be detected belongs to a printed iris; and if the probability of a printed iris is less than or equal to the second threshold, determine that the iris image to be detected does not belong to a printed iris.
In another embodiment, the iris region of interest module is configured to: based on iris detection, obtaining an initial iris region of interest of an iris image to be detected; and scaling the initial iris region of interest to a preset size to obtain the iris region of interest of the iris image to be detected.
In a fifth aspect, embodiments of the present disclosure provide an electronic device, where the electronic device includes: a memory for storing instructions; and a processor for invoking the instructions stored in the memory to perform the training method of the printed iris detection model described in the first aspect or any implementation manner of the first aspect of the disclosure, or the printed iris detection method described in the second aspect or any implementation manner of the second aspect of the disclosure.
In a sixth aspect, an embodiment of the disclosure provides a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions that, when executed by a processor, perform a training method for printing an iris detection model as described in the first aspect or any embodiment of the first aspect of the disclosure, or a method for printing an iris detection as described in the second aspect or any embodiment of the second aspect of the disclosure.
The disclosure provides a training method for a printed iris detection model, which realizes the detection of the printed iris based on a deep learning technology. The printed iris detection model obtained based on the training method can efficiently detect the printed iris, and further can effectively prevent interference and attack of the printed iris in the process of confirming identity by using the iris recognition technology.
Drawings
The above, as well as additional purposes, features, and advantages of embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 illustrates a flowchart of a training method for printing an iris detection model provided by an embodiment of the disclosure;
FIG. 2A shows a schematic representation of an iris image;
FIG. 2B shows a schematic representation of an iris region of interest of an iris image;
FIG. 3 is a flowchart showing the steps of training a print iris detection model in a training method of the print iris detection model provided in an embodiment of the disclosure;
FIG. 4 illustrates a flowchart of another training method for printing an iris detection model provided by embodiments of the disclosure;
FIG. 5 illustrates a flowchart of a method of print iris detection provided by an embodiment of the disclosure;
FIG. 6 illustrates a schematic diagram of a training apparatus for printing an iris detection model provided by an embodiment of the disclosure;
FIG. 7 shows a schematic diagram of a printed iris detection apparatus provided by an embodiment of the disclosure;
fig. 8 shows a schematic diagram of an electronic device provided in an embodiment of the disclosure.
Detailed Description
The principles and spirit of the present disclosure will be described below with reference to several exemplary embodiments. It should be understood that these embodiments are presented merely to enable one skilled in the art to better understand and practice the present disclosure and are not intended to limit the scope of the present disclosure in any way.
It should be noted that, although the terms "first", "second", etc. are used herein to describe various modules, steps, data, etc. of the embodiments of the present disclosure, the terms "first", "second", etc. are merely for distinguishing between different modules, steps, data, etc. and not to indicate a particular order or importance. Indeed, the expressions "first", "second", etc. may be used entirely interchangeably.
At present, iris recognition has the characteristics of high stability, high anti-counterfeiting performance, uniqueness and the like, and becomes an important biological feature recognition technology.
With the advent of iris counterfeiting techniques, such as printing the iris, significant interference and attacks are created in the process of validating identities using iris recognition techniques.
The training method for the printed iris detection model provided by the present disclosure achieves efficient detection of printed irises based on deep learning technology, and can effectively prevent interference and attacks from printed irises during identity verification with iris recognition technology.
Fig. 1 shows a flowchart of a training method for printing an iris detection model according to an embodiment of the disclosure.
As shown in fig. 1, in an exemplary embodiment of the present disclosure, a printed iris detection model is applied to the detection of a printed iris. The training method for printing the iris detection model comprises a step S101, a step S102, a step S103 and a step S104. Step S101, step S102, step S103, and step S104 will be described below, respectively.
In step S101, a training set is acquired.
The training set comprises a real iris image and a printed iris image.
The real iris image and the printed iris image are obtained by collecting real iris samples and printed iris samples. The real iris image corresponds to a real iris image label and is set as a positive sample; the printed iris image corresponds to a printed iris image label and is set as a negative sample.
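For illustration only, the following sketch shows one way the labeled training set could be assembled, with real iris samples treated as positive samples (label 1) and printed iris samples as negative samples (label 0). The directory layout, the PNG file format, and the use of a PyTorch Dataset are assumptions, not details given by the present disclosure.

```python
# Minimal sketch (assumed setup): pair each image path with its label,
# 1 for real iris samples (positive), 0 for printed iris samples (negative).
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class IrisTrainingSet(Dataset):
    def __init__(self, real_dir, printed_dir, transform=None):
        real = [(p, 1) for p in Path(real_dir).glob("*.png")]        # real iris -> positive sample
        printed = [(p, 0) for p in Path(printed_dir).glob("*.png")]  # printed iris -> negative sample
        self.samples = real + printed
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = Image.open(path).convert("L")  # iris images are assumed to be grayscale
        if self.transform is not None:
            image = self.transform(image)
        return image, label
```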
In step S102, based on the iris detection, an iris region of interest of the real iris image in the training set and an iris region of interest of the printed iris image are obtained.
Fig. 2A shows a schematic diagram of an iris image.
As shown in fig. 2A, an iris image, whether a real iris image or a printed iris image, is composed of a sclera 1, an iris 2, and a pupil 3.
The features of the sclera 1 are usually not needed to determine whether an iris image belongs to a printed iris, and including the sclera 1 would also increase the computation required to train the printed iris detection model. Therefore, to reduce this computation, an iris region of interest can be extracted from the iris image, and the printed iris detection model can be trained on the iris region of interest.
Fig. 2B shows a schematic representation of an iris region of interest of an iris image.
As shown in fig. 2B, the region of interest of the iris image is constituted by the iris 2 and the pupil 3.
The iris region of interest of the real iris image comprises the iris of the real iris image and the pupil of the real iris image; the iris region of interest of the printed iris image comprises the iris of the printed iris image and the pupil of the printed iris image.
In step S103, a print iris detection model is constructed.
The main feature of the iris is its texture, and texture features mainly reside in the shallow layers of a convolutional neural network; high-level semantic features contribute little to distinguishing printed irises from real irises, so the overall convolutional network does not need to be deep. A printed iris detection model constructed in this way reduces the computation required for training and ensures that the trained model has low latency when inferring whether an iris is printed.
In step S104, a print iris detection model is trained based on the iris region of interest of the real iris image and the iris region of interest of the print iris image.
Training the printed iris detection model on the iris region of interest of the real iris image and the iris region of interest of the printed iris image reduces the computation required for training while maintaining the accuracy of the trained printed iris detection model.
The disclosure provides a training method for a printed iris detection model, which realizes the detection of the printed iris based on a deep learning technology. The printed iris detection model obtained based on the training method can efficiently detect the printed iris, and further can effectively prevent interference and attack of the printed iris in the process of confirming identity by using the iris recognition technology.
In an exemplary embodiment of the present disclosure, in the step of constructing the printed iris detection model, the printed iris detection model may be constructed from one 3×3 standard convolution layer, three bottleneck structures, four max-pooling layers, one 1×1 convolution layer, and one mean-pooling layer.
Constructing the printed iris detection model in this way, by reusing shallow layers and keeping the network shallow, minimizes the loss of iris texture detail during feature extraction and ensures that the trained printed iris detection model has low latency when inferring whether an iris is printed.
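For illustration only, the following sketch shows how a network with these building blocks (one 3×3 standard convolution layer, three bottleneck structures, four max-pooling layers, one 1×1 convolution layer, and one mean-pooling layer) might be assembled. The channel widths, the internal ResNet-style bottleneck design, the grayscale input, and the use of PyTorch are assumptions made for the sketch; the present disclosure does not specify them.

```python
# Illustrative sketch only; channel counts and the bottleneck internals are assumptions.
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Assumed 1x1 -> 3x3 -> 1x1 bottleneck with a residual connection."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        mid = channels // reduction
        self.block = nn.Sequential(
            nn.Conv2d(channels, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1, bias=False), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1, bias=False), nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.block(x))

class PrintedIrisNet(nn.Module):
    def __init__(self, num_classes=2, channels=32):
        super().__init__()
        self.stem = nn.Sequential(                        # one 3x3 standard convolution layer
            nn.Conv2d(1, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # max-pooling layer 1
        )
        self.body = nn.Sequential(                        # three bottlenecks, three more max-pooling layers
            Bottleneck(channels), nn.MaxPool2d(2),
            Bottleneck(channels), nn.MaxPool2d(2),
            Bottleneck(channels), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Conv2d(channels, num_classes, 1),          # one 1x1 convolution layer
            nn.AdaptiveAvgPool2d(1),                      # mean-pooling layer
        )

    def forward(self, x):
        return self.head(self.body(self.stem(x))).flatten(1)  # logits for the two classes
```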
Fig. 3 shows a flowchart of a step of training a print iris detection model in a training method of the print iris detection model provided in an embodiment of the disclosure.
As shown in fig. 3, in an exemplary embodiment of the present disclosure, training the printed iris detection model based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image of step S104 includes step S1041 and step S1042. Step S1041 and step S1042 will be described below, respectively.
In step S1041, the printed iris detection model is trained based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image.
In step S1042, when the loss value obtained based on the loss function is less than or equal to the first threshold, training of the print iris detection model is completed.
Wherein the loss function is determined from the cross entropy loss function and the center loss function.
During training of the printed iris detection model, initialization parameters are set, including the learning rate, the number of images per batch fed to the network, and the model optimizer parameters. The parameters are adjusted continuously according to the results on the validation set until the trained printed iris detection model is obtained.
The loss result on the validation set is the loss value obtained from the loss function.
When the loss value is less than or equal to the first threshold, training of the printed iris detection model is complete.
The first threshold is adjusted according to the actual situation, and in the present disclosure, the first threshold is not specifically limited.
In one embodiment, the number of training epochs is set to 80; the initial learning rate is 0.001 and is reduced to 0.0001 at the 30th epoch; the L2 regularization parameter is set to 0.0004; the training batch size is 128; and the Adam optimizer is selected to optimize the loss function.
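For illustration only, the hyperparameters of this embodiment could be realized as in the following sketch, which reuses the PrintedIrisNet and IrisTrainingSet sketches above and the combined_loss sketch given with the loss function below. The framework, data paths, and input size are assumptions.

```python
# Sketch of the training configuration of this embodiment (80 epochs, lr 0.001 -> 0.0001
# at epoch 30, L2 regularization 0.0004, batch size 128, Adam). Paths are placeholders.
import torch
from torch.utils.data import DataLoader
from torchvision import transforms

model = PrintedIrisNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=4e-4)             # Adam + L2 reg 0.0004
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30], gamma=0.1)  # 0.001 -> 0.0001

preprocess = transforms.Compose([
    transforms.Resize((128, 128)),   # assumed preset ROI size
    transforms.ToTensor(),
])
train_loader = DataLoader(IrisTrainingSet("real/", "printed/", transform=preprocess),
                          batch_size=128, shuffle=True)

for epoch in range(80):                       # 80 training epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        logits = model(images)
        loss = combined_loss(logits, labels)  # L = L_S + alpha * L_C, sketched below
        loss.backward()
        optimizer.step()
    scheduler.step()
```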
In an exemplary embodiment of the present disclosure, in the step of determining the loss function from the cross entropy loss function and the center loss function, the loss function may be determined as the sum of the cross entropy loss function and the product of the center loss function and a preset parameter.
For ease of illustration, let the loss function be L, the center loss function be Lc, and the cross entropy loss function be Ls.
The loss function is:
L = L_S + α · L_C
α is a preset parameter that is adjusted according to the actual situation; the present disclosure does not specifically limit its value.
The center loss function is:
L_C = (1/2) Σ_i ||x_i − c_{y_i}||²
where the sum runs over the input samples x_i, m represents the total number of categories of iris images, and c_{y_i} represents the center of the category to which the i-th sample belongs.
The categories of iris images include both real iris images and printed iris images.
The cross entropy loss function is:
L_S = −Σ_i log( exp(e_{y_i}) / Σ_{j=1..m} exp(e_j) )
where m represents the total number of categories of iris images, n represents the number of input feature layers, e_y = W^T · x + b, W^T is the parameter matrix to be trained, x is the input data matrix, and b is the bias term.
Because iris images are easily affected by the surrounding environment during acquisition, the loss function combines a cross entropy loss function with a center loss function, so that the inter-class difference between printed irises and real irises can be separated effectively and the trained printed iris detection model can correctly distinguish printed irises from real irises even when the difference between them is small.
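For illustration only, the following sketch shows one way the combined loss L = L_S + α·L_C could be implemented. Using the two class logits as the features for the center loss, the value of α, and the PyTorch realization are assumptions; the present disclosure does not specify which feature layer is centered.

```python
# Minimal sketch (assumed implementation) of L = L_S + alpha * L_C.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLoss(nn.Module):
    """L_C = 1/2 * sum_i ||x_i - c_{y_i}||^2, with one learnable center per class."""
    def __init__(self, num_classes=2, feat_dim=2):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # distance of each sample's features to the center of its own class
        return 0.5 * (features - self.centers[labels]).pow(2).sum(dim=1).mean()

center_loss = CenterLoss()
alpha = 0.01  # preset parameter alpha; its value is not fixed by the present disclosure
# Note: the centers are learnable, so in practice they should also be handed to the
# optimizer, e.g. optimizer.add_param_group({"params": center_loss.parameters()}).

def combined_loss(logits, labels):
    l_s = F.cross_entropy(logits, labels)   # cross entropy (softmax) loss L_S
    l_c = center_loss(logits, labels)       # center loss L_C, here computed on the logits
    return l_s + alpha * l_c
```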
Fig. 4 shows a flowchart of another training method for printing an iris detection model provided by an embodiment of the disclosure.
As shown in fig. 4, in an exemplary embodiment of the present disclosure, a training method of printing an iris detection model includes steps S201, S202, S203, and S204.
Step S201 is a step of acquiring a training set; step S203 is a step of constructing a print iris detection model. Since step S201 and step S203 have been described in detail above, step S201 and step S203 are not described in detail herein. Step S202 and step S204 will be described below.
In step S202, preprocessing is performed on the real iris image and the print iris image.
Wherein the preprocessing includes at least one of adjusting image brightness, adjusting image size, and increasing image noise.
In step S204, a print iris detection model is trained based on the iris region of interest of the preprocessed real iris image and the iris region of interest of the print iris image.
Through the preprocessing step, several corresponding iris images can be obtained from each real iris image or printed iris image. In this way, multiple training samples can be derived from a single collected iris image, which increases the diversity of the training samples and, in turn, the generalization capability of the trained printed iris detection model.
In one embodiment, the training set includes a real iris image A and a printed iris image B. If noise-adding preprocessing is applied to the real iris image A, a real iris image A1 is obtained; if brightness-reducing preprocessing is applied to the real iris image A, a real iris image A2 is obtained; and so on, until a real iris image An is obtained by applying yet another preprocessing operation.
Likewise, if noise-adding preprocessing is applied to the printed iris image B, a printed iris image B1 is obtained; if brightness-reducing preprocessing is applied, a printed iris image B2 is obtained; and so on, up to a printed iris image Bn.
After preprocessing, the iris image label corresponding to the real iris image A is still the real iris image label, and the label corresponding to the printed iris image B is still the printed iris image label. In this way, with the effort of collecting only one iris image, several real iris images or printed iris images can be obtained, which increases the diversity of the training samples and lays a foundation for training a high-quality printed iris detection model.
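For illustration only, the following sketch shows how one collected iris image could be expanded into several training variants by adjusting brightness, resizing, and adding noise, with the label left unchanged. The specific enhancement factors and noise level are assumptions, not values given by the present disclosure.

```python
# Sketch of the preprocessing/augmentation step; parameter values are assumptions.
import numpy as np
from PIL import Image, ImageEnhance

def augment(image: Image.Image) -> list:
    """Return several preprocessed variants of one iris image (its label stays the same)."""
    variants = []
    variants.append(ImageEnhance.Brightness(image).enhance(0.7))            # reduce brightness
    variants.append(ImageEnhance.Brightness(image).enhance(1.3))            # increase brightness
    variants.append(image.resize((image.width // 2, image.height // 2)))    # adjust image size
    noisy = np.asarray(image, dtype=np.float32)
    noisy += np.random.normal(0.0, 8.0, noisy.shape)                        # add Gaussian noise
    variants.append(Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8)))
    return variants
```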
Based on the same inventive concept, a second aspect of the embodiments of the present disclosure provides a print iris detection method.
Fig. 5 shows a flowchart of a print iris detection method provided by an embodiment of the present disclosure.
As shown in fig. 5, in an exemplary embodiment of the present disclosure, the print iris detection method includes step S301, step S302, step S303, and step S304. Step S301, step S302, step S303, and step S304 will be described below, respectively.
In step S301, an iris image to be detected is acquired.
In step S302, an iris region of interest of an iris image to be detected is obtained based on iris detection.
Wherein the iris region of interest includes the iris and pupil. The size of the iris region of interest corresponds to the size of the iris region of interest of the real iris image or the iris region of interest of the printed iris image applied in training the printed iris detection model.
In step S303, the iris region of interest is input to the printed iris detection model, and a detection result of the iris image to be detected is obtained.
The printed iris detection model is the printed iris detection model according to the first aspect of the disclosure or any embodiment of the first aspect.
The iris region of interest is input into the trained printed iris detection model to obtain the probability that the iris region of interest belongs to a printed iris.
In step S304, based on the detection result, it is determined whether or not the iris image to be detected belongs to the print iris.
The probability that the iris image to be detected belongs to a printed iris is determined from the probability obtained for the iris region of interest, and based on this probability it is determined whether the iris image to be detected belongs to a printed iris.
According to the printed iris detection method, the iris region of interest of the iris image to be detected is input into the printed iris detection model, so that the probability that the iris image to be detected belongs to the printed iris can be obtained rapidly and accurately. Based on the probabilities, it may be determined whether the iris image to be detected belongs to the printed iris. Further, interference and attack of the printed iris on the identification process using the iris recognition technology are effectively prevented.
In an exemplary embodiment of the present disclosure, determining whether an iris image to be detected belongs to a printed iris based on a detection result includes the following steps.
If the detection result indicates that the probability of a printed iris is greater than the probability of a real iris, it is judged whether the probability of a printed iris is greater than a second threshold.
If the probability of a printed iris is greater than the second threshold, it is determined that the iris image to be detected belongs to a printed iris.
If the probability of a printed iris is less than or equal to the second threshold, it is determined that the iris image to be detected does not belong to a printed iris.
In order to further ensure the accuracy of judging whether the iris image to be detected belongs to the printed iris, the detection result output according to the printed iris detection model can be further processed.
Suppose the detection result output by the printed iris detection model is that the probability that the iris image to be detected belongs to a printed iris is 0.6; the probability that it belongs to a real iris is then 0.4.
According to the output result of the printed iris detection model, the possibility that the iris image to be detected belongs to the printed iris is higher.
However, in order to further ensure the accuracy of the final judgment result, further judgment is required for the output result of the printed iris detection model (the probability that the iris image to be detected belongs to the printed iris is 0.6).
Further judging whether the output result is larger than a second threshold value.
The second threshold may be adjusted according to actual situations, and in the present disclosure, the second threshold is not specifically limited.
Suppose the second threshold is 0.7. Because the output of the printed iris detection model (the probability of 0.6 that the iris image to be detected belongs to a printed iris) is smaller than the second threshold, the iris image to be detected is not considered to belong to a printed iris.
In another embodiment, the second threshold is set to 0.5. Since the output of the printed iris detection model (0.6) is greater than the second threshold, the iris image to be detected is considered to belong to a printed iris.
In one embodiment, if the detection result output by the printed iris detection model is that the probability that the iris image to be detected belongs to a printed iris is 0.4, the probability that it belongs to a real iris is 0.6. In that case it can be directly judged that the iris image to be detected belongs to a real iris.
In this way, the accuracy of judging whether the iris image to be detected belongs to a printed iris is ensured.
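For illustration only, the decision rule of this embodiment could be implemented as in the following sketch: the printed-iris probability must both exceed the real-iris probability and exceed the second threshold (0.7 in the example above) before the image is judged to be a printed iris. The softmax output and the class ordering follow the earlier labeling sketch and are assumptions.

```python
# Sketch of the threshold-based decision on the model output; class index 0 = printed,
# index 1 = real, matching the labeling sketch above (an assumption).
import torch
import torch.nn.functional as F

SECOND_THRESHOLD = 0.7  # value used in the example embodiment above

def is_printed_iris(model, iris_roi: torch.Tensor) -> bool:
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(iris_roi.unsqueeze(0)), dim=1)[0]
    p_printed, p_real = probs[0].item(), probs[1].item()
    if p_printed > p_real:
        return p_printed > SECOND_THRESHOLD   # printed only if also above the second threshold
    return False                              # otherwise treated as a real iris
```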
In an exemplary embodiment of the present disclosure, obtaining an iris region of interest of an iris image to be detected based on iris detection may be achieved in the following manner.
Based on iris detection, an initial iris region of interest of an iris image to be detected is obtained.
And scaling the initial iris region of interest to a preset size to obtain the iris region of interest of the iris image to be detected.
The preset size is adjusted according to actual situations, and in the present disclosure, the preset size is not specifically limited.
In one embodiment, the preset size may be the size of the iris region of interest of the real iris image or the iris region of interest of the printed iris image applied in training the printed iris detection model.
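For illustration only, the following sketch crops the initial iris region of interest returned by an external iris detector and scales it to a preset size. The 128×128 size and the bounding-box format are assumptions; the present disclosure only requires that the preset size match the size used when training the printed iris detection model.

```python
# Sketch of ROI extraction and scaling; the iris detector that produces the box
# is external and not specified by the present disclosure.
from PIL import Image

PRESET_SIZE = (128, 128)  # assumed preset size

def extract_iris_roi(image: Image.Image, box) -> Image.Image:
    """box = (left, top, right, bottom) bounding the iris and pupil, from an iris detector."""
    roi = image.crop(box)             # initial iris region of interest (iris + pupil)
    return roi.resize(PRESET_SIZE)    # scale to the preset size expected by the model
```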
Based on the same inventive concept, a third aspect of the embodiments of the present disclosure further provides a training device for printing an iris detection model.
Fig. 6 shows a schematic diagram of a training apparatus for printing an iris detection model according to an embodiment of the disclosure.
As shown in fig. 6, in an exemplary embodiment of the present disclosure, the printed iris detection model is applied to the detection of printed irises, and the training apparatus for the printed iris detection model includes a training set acquisition module 201, an iris region-of-interest module 202, a construction module 203, and a training module 204.
The training set acquisition module 201 is configured to acquire a training set, where the training set includes a real iris image and a printed iris image.
The iris region of interest module 202 is configured to obtain, based on iris detection, an iris region of interest of a real iris image in a training set and an iris region of interest of a printed iris image, where the iris region of interest of the real iris image includes an iris of the real iris image and a pupil of the real iris image, and the iris region of interest of the printed iris image includes an iris of the printed iris image and a pupil of the printed iris image.
A construction module 203 is configured to construct a printed iris detection model.
The training module 204 is configured to train the printed iris detection model based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image.
In an exemplary embodiment of the present disclosure, the construction module 203 is configured to: construct the printed iris detection model from one 3×3 standard convolution layer, three bottleneck structures, four max-pooling layers, one 1×1 convolution layer, and one mean-pooling layer.
In an exemplary embodiment of the present disclosure, training module 204 is to: training a printed iris detection model based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image; and when the loss value obtained based on the loss function is smaller than or equal to a first threshold value, training the printed iris detection model is completed, wherein the loss function is determined according to the cross entropy loss function and the center loss function.
In an exemplary embodiment of the present disclosure, the training module 204 is configured to: determine the sum of the cross entropy loss function and the product of the center loss function and a preset parameter as the loss function, wherein the center loss function is:
L_C = (1/2) Σ_i ||x_i − c_{y_i}||²
where the sum runs over the input samples x_i, m represents the total number of categories of iris images, and c_{y_i} represents the center of the category to which the i-th sample belongs.
The cross entropy loss function is:
L_S = −Σ_i log( exp(e_{y_i}) / Σ_{j=1..m} exp(e_j) )
where m represents the total number of categories of iris images, n represents the number of input feature layers, e_y = W^T · x + b, W^T is the parameter matrix to be trained, x is the input data matrix, and b is the bias term.
In an exemplary embodiment of the present disclosure, the training apparatus for the printed iris detection model further includes: a preprocessing module, configured to preprocess the real iris image and the printed iris image, wherein the preprocessing includes at least one of adjusting image brightness, adjusting image size, and adding image noise. The training module 204 is then configured to: train the printed iris detection model based on the iris region of interest of the preprocessed real iris image and the iris region of interest of the preprocessed printed iris image.
Based on the same inventive concept, the fourth aspect of the embodiments of the present disclosure also provides a printed iris detection apparatus.
Fig. 7 shows a schematic diagram of a print iris detection apparatus provided in an embodiment of the disclosure.
As shown in fig. 7, in an exemplary embodiment of the present disclosure, the print iris detection apparatus includes an acquisition module 301, an iris region of interest acquisition module 302, a processing module 303, and a judgment module 304. The acquisition module 301, the iris region of interest acquisition module 302, the processing module 303, and the judgment module 304 will be described below, respectively.
The acquiring module 301 is configured to acquire an iris image to be detected.
The iris region-of-interest acquisition module 302 is configured to obtain, based on iris detection, an iris region of interest of the iris image to be detected, where the iris region of interest includes the iris and the pupil.
The processing module 303 is configured to input the iris region of interest to a printed iris detection model, and obtain a detection result of an iris image to be detected, where the printed iris detection model is a printed iris detection model according to the first aspect of the present disclosure or any embodiment of the first aspect.
The judging module 304 is configured to determine whether the iris image to be detected belongs to the printed iris based on the detection result.
In an exemplary embodiment of the present disclosure, the judgment module 304 is configured to: if the detection result indicates that the probability of a printed iris is greater than the probability of a real iris, judge whether the probability of a printed iris is greater than a second threshold; if the probability of a printed iris is greater than the second threshold, determine that the iris image to be detected belongs to a printed iris; and if the probability of a printed iris is less than or equal to the second threshold, determine that the iris image to be detected does not belong to a printed iris.
In an exemplary embodiment of the present disclosure, the iris region-of-interest acquisition module 302 is configured to: obtain, based on iris detection, an initial iris region of interest of the iris image to be detected; and scale the initial iris region of interest to a preset size to obtain the iris region of interest of the iris image to be detected.
Fig. 8 shows a schematic diagram of an electronic device provided in an embodiment of the disclosure.
As shown in fig. 8, one embodiment of the present disclosure provides an electronic device 30, wherein the electronic device 30 includes a memory 310, a processor 320, and an Input/Output (I/O) interface 330. The memory 310 is used to store instructions. The processor 320 is used to invoke the instructions stored in the memory 310 to perform the training method for the printed iris detection model or the printed iris detection method of the present disclosure. The processor 320 is coupled to the memory 310 and the I/O interface 330, for example via a bus system and/or another form of connection mechanism (not shown). The memory 310 may be used to store programs and data, including the programs for training the printed iris detection model or for printed iris detection referred to in embodiments of the present disclosure, and the processor 320 performs the various functional applications and data processing of the electronic device 30 by running the programs stored in the memory 310.
The processor 320 in embodiments of the present disclosure may be implemented in hardware as at least one of a digital signal processor (Digital Signal Processing, DSP), a field-programmable gate array (Field-Programmable Gate Array, FPGA), or a programmable logic array (Programmable Logic Array, PLA); the processor 320 may also be a central processing unit (Central Processing Unit, CPU) or a combination of one or more other forms of processing units having data processing and/or instruction execution capabilities.
Memory 310 in embodiments of the present disclosure may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (Random Access Memory, RAM) and/or cache memory (cache), etc. The nonvolatile Memory may include, for example, a Read-Only Memory (ROM), a Flash Memory (Flash Memory), a Hard Disk (HDD), a Solid State Drive (SSD), or the like.
In the embodiment of the present disclosure, the I/O interface 330 may be used to receive input instructions (e.g., numeric or character information, and generate key signal inputs related to user settings and function control of the electronic device 30, etc.), and may also output various information (e.g., images or sounds, etc.) to the outside. The I/O interface 330 in embodiments of the present disclosure may include one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a mouse, joystick, trackball, microphone, speaker, touch panel, etc.
In some embodiments, the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, perform any of the methods described above.
Although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
The methods and apparatus of the present disclosure can be implemented using standard programming techniques with various method steps being performed using rule-based logic or other logic. It should also be noted that the words "apparatus" and "module" as used herein and in the claims are intended to include implementations using one or more lines of software code and/or hardware implementations and/or equipment for receiving inputs.
Any of the steps, operations, or procedures described herein may be performed or implemented using one or more hardware or software modules alone or in combination with other devices. In one embodiment, the software modules are implemented using a computer program product comprising a computer readable medium containing computer program code capable of being executed by a computer processor for performing any or all of the described steps, operations, or programs.
The foregoing description of implementations of the present disclosure has been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or to limit the disclosure to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosure. The embodiments were chosen and described in order to explain the principles of the present disclosure and its practical application to enable one skilled in the art to utilize the present disclosure in various embodiments and with various modifications as are suited to the particular use contemplated.

Claims (16)

1. A training method of a printed iris detection model, wherein the printed iris detection model is applied to detection of a printed iris, the training method of the printed iris detection model comprising:
acquiring a training set, wherein the training set comprises a real iris image and a printed iris image;
based on iris detection, obtaining an iris region of interest of the real iris image and an iris region of interest of the printed iris image in the training set, wherein the iris region of interest of the real iris image comprises an iris of the real iris image and a pupil of the real iris image, and the iris region of interest of the printed iris image comprises an iris of the printed iris image and a pupil of the printed iris image;
constructing the printed iris detection model;
training the printed iris detection model based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image;
wherein, based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image, training the printed iris detection model comprises:
training the printed iris detection model based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image;
and when the loss value obtained based on the loss function is smaller than or equal to a first threshold value, training the printed iris detection model is completed, wherein the loss function is determined according to a cross entropy loss function and a center loss function.
2. The method of training a printed iris detection model of claim 1, wherein the constructing the printed iris detection model comprises:
constructing the printed iris detection model from one 3×3 standard convolution layer, three bottleneck structures, four max-pooling layers, one 1×1 convolution layer, and one mean-pooling layer.
3. The method of training a printed iris detection model of claim 1, wherein the loss function is determined from a cross entropy loss function and a center loss function, comprising:
and determining the product of the center loss function and a preset parameter and the sum of the product of the center loss function and the cross entropy loss function as the loss function, wherein the center loss function is as follows:
m represents the total number of categories of the iris image; c y i represents the center of the ith category;
the cross entropy loss function is:
m represents the total number of categories of the iris image; n represents the number of input layer feature layers; e, e y =W T *x+b,W T For the parameter matrix to be trained, x is the input data matrix and b is the bias option.
4. The method for training a printed iris detection model of claim 1, wherein,
the training method for printing the iris detection model further comprises the following steps: preprocessing the real iris image and the printed iris image, wherein the preprocessing comprises at least one of adjusting image brightness, adjusting image size and increasing image noise;
training the printed iris detection model based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image, further comprising: training the printed iris detection model based on the iris region of interest of the preprocessed real iris image and the iris region of interest of the printed iris image.
5. A printed iris detection method, the printed iris detection method comprising:
acquiring an iris image to be detected;
based on iris detection, obtaining an iris region of interest of the iris image to be detected, wherein the iris region of interest comprises an iris and a pupil;
inputting the iris region of interest into a printed iris detection model to obtain a detection result of the iris image to be detected, wherein the printed iris detection model is the printed iris detection model according to any one of claims 1 to 4;
and determining whether the iris image to be detected belongs to a printed iris or not based on the detection result.
6. The method according to claim 5, wherein determining whether the iris image to be detected belongs to a printed iris based on the detection result comprises:
if the detection result indicates that the probability of the printed iris is greater than the probability of the real iris, judging whether the probability of the printed iris is greater than a second threshold;
if the probability of the printed iris is greater than the second threshold, determining that the iris image to be detected belongs to the printed iris;
and if the probability of the printed iris is less than or equal to the second threshold, determining that the iris image to be detected does not belong to the printed iris.
7. The method for printing an iris detection according to claim 5, wherein the obtaining the iris region of interest of the iris image to be detected based on iris detection comprises:
based on iris detection, obtaining an initial iris region of interest of the iris image to be detected;
and scaling the initial iris region of interest to a preset size to obtain the iris region of interest of the iris image to be detected.
8. A training device for a printed iris detection model, wherein the printed iris detection model is applied to detection of a printed iris, the training device for the printed iris detection model comprising:
the training set acquisition module is used for acquiring a training set, wherein the training set comprises a real iris image and a printed iris image;
the iris region-of-interest module is used for obtaining an iris region-of-interest of the real iris image and an iris region-of-interest of the printed iris image in the training set based on iris detection, wherein the iris region-of-interest of the real iris image comprises an iris of the real iris image and a pupil of the real iris image, and the iris region-of-interest of the printed iris image comprises an iris of the printed iris image and a pupil of the printed iris image;
the construction module is used for constructing the printed iris detection model;
the training module is used for training the printed iris detection model based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image;
wherein the training module is used for:
training the printed iris detection model based on the iris region of interest of the real iris image and the iris region of interest of the printed iris image;
and when the loss value obtained based on the loss function is smaller than or equal to a first threshold value, training the printed iris detection model is completed, wherein the loss function is determined according to a cross entropy loss function and a center loss function.
9. The training device for a printed iris detection model according to claim 8, wherein the construction module is configured to:
the printed iris detection model was constructed based on a 3*3 normal convolution layer, three bottleneck structures, four max pooling layers, a 1*1 roll base layer, and a mean pooling layer.
10. The training device for a printed iris detection model according to claim 9, wherein the training module is configured to:
determine, as the loss function, the sum of the cross entropy loss function and the product of the center loss function and a preset parameter, wherein, in the center loss function, m represents the total number of categories of iris images and c_{y_i} represents the center of the i-th category; and in the cross entropy loss function, m represents the total number of categories of iris images, n represents the number of features of the input layer, e_y = W^T · x + b, where W^T is the parameter matrix to be trained, x is the input data matrix, and b is the bias term.
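A hedged sketch of the loss in claim 10: total loss = cross entropy + preset parameter × center loss. The published claim gives the two formulas as images; the standard center-loss form L_C = ½ Σ_i ||x_i − c_{y_i}||² is assumed here and, for simplicity, it is applied to the classifier outputs rather than to a separate feature embedding.

    # Sketch of the combined loss of claim 10 under the assumptions stated above.
    import torch
    import torch.nn as nn

    class CenterLoss(nn.Module):
        def __init__(self, num_classes=2, feat_dim=2):
            super().__init__()
            # One learnable center c_{y_i} per class.
            self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        def forward(self, feats, labels):
            # L_C = 1/2 * sum_i ||x_i - c_{y_i}||^2, averaged over the batch
            return 0.5 * (feats - self.centers[labels]).pow(2).sum(dim=1).mean()

    class CombinedLoss(nn.Module):
        def __init__(self, num_classes=2, feat_dim=2, preset_lambda=0.1):
            super().__init__()
            self.ce = nn.CrossEntropyLoss()
            self.center = CenterLoss(num_classes, feat_dim)
            self.preset_lambda = preset_lambda   # the "preset parameter" of claim 10
        def forward(self, outputs, labels):
            return self.ce(outputs, labels) + self.preset_lambda * self.center(outputs, labels)

The preset parameter (0.1 here, an assumed value) trades off how tightly the live and printed classes are pulled toward their centers against the plain classification objective.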
11. The training device for a printed iris detection model according to claim 8, wherein the training device further comprises:
the preprocessing module is used for preprocessing the real iris image and the printed iris image, wherein the preprocessing comprises at least one of adjusting image brightness, adjusting image size, and adding image noise;
the training module is used for: training the printed iris detection model based on the iris region of interest of the preprocessed real iris image and the iris region of interest of the preprocessed printed iris image.
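A small sketch of the preprocessing in claim 11 (brightness adjustment, resizing, added noise); the brightness range, target size and Gaussian noise model are assumptions made for illustration.

    # Sketch of claim 11 preprocessing: adjust brightness, adjust size, add noise.
    import random
    import cv2
    import numpy as np

    def preprocess(image, target_size=(224, 224)):
        out = image.astype(np.float32)
        out = np.clip(out * random.uniform(0.7, 1.3), 0, 255)      # adjust image brightness
        out = cv2.resize(out, target_size)                          # adjust image size
        out = out + np.random.normal(0.0, 5.0, out.shape)           # add image noise
        return np.clip(out, 0, 255).astype(np.uint8)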
12. A printed iris detection apparatus, the printed iris detection apparatus comprising:
the acquisition module is used for acquiring an iris image to be detected;
the iris region-of-interest acquisition module is used for acquiring an iris region-of-interest of the iris image to be detected based on iris detection, wherein the iris region-of-interest comprises an iris and a pupil;
the processing module is used for inputting the iris region of interest into a printed iris detection model to obtain a detection result of the iris image to be detected, wherein the printed iris detection model is trained by the training method according to any one of claims 1 to 4;
and the judging module is used for determining whether the iris image to be detected belongs to the printed iris or not based on the detection result.
13. The printed iris detection apparatus according to claim 12, wherein the judgment module is configured to:
if the detection result indicates that the probability of the printed iris is greater than the probability of the live iris, judging whether the probability of the printed iris is greater than a second threshold value;
if the probability of the printed iris is greater than the second threshold value, determining that the iris image to be detected belongs to a printed iris;
and if the probability of the printed iris is less than or equal to the second threshold value, determining that the iris image to be detected does not belong to a printed iris.
14. The printed iris detection apparatus according to claim 12, wherein the iris region-of-interest acquisition module is configured to:
based on iris detection, obtaining an initial iris region of interest of the iris image to be detected;
and scaling the initial iris region of interest to a preset size to obtain the iris region of interest of the iris image to be detected.
15. An electronic device, the electronic device comprising:
a memory for storing instructions; and
a processor for invoking the instructions stored in the memory to perform the training method of the printed iris detection model of any one of claims 1-4, or the printed iris detection method of any one of claims 5-7.
16. A computer-readable storage medium, wherein
the computer-readable storage medium stores computer-executable instructions that, when executed by a processor, perform the training method of the printed iris detection model according to any one of claims 1 to 4, or the printed iris detection method according to any one of claims 5 to 7.
CN202010219620.XA 2020-03-25 2020-03-25 Training method for printing iris detection model, and printing iris detection method and device Active CN111507198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010219620.XA CN111507198B (en) 2020-03-25 2020-03-25 Training method for printing iris detection model, and printing iris detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010219620.XA CN111507198B (en) 2020-03-25 2020-03-25 Training method for printing iris detection model, and printing iris detection method and device

Publications (2)

Publication Number Publication Date
CN111507198A CN111507198A (en) 2020-08-07
CN111507198B true CN111507198B (en) 2023-11-28

Family

ID=71864606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010219620.XA Active CN111507198B (en) 2020-03-25 2020-03-25 Training method for printing iris detection model, and printing iris detection method and device

Country Status (1)

Country Link
CN (1) CN111507198B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014077447A1 (en) * 2012-11-19 2014-05-22 아이리텍 인크 Method and apparatus for identifying living eye
CN104781830A (en) * 2012-11-19 2015-07-15 虹膜技术公司 Method and apparatus for identifying living eye
WO2015200247A1 (en) * 2014-06-25 2015-12-30 Kodak Alaris Inc. Adaptable eye artifact identification and correction system
CN107437064A (en) * 2017-07-05 2017-12-05 北京中科虹霸科技有限公司 Living iris detection method based on spectrum analysis
CN108388858A (en) * 2018-02-11 2018-08-10 北京京东金融科技控股有限公司 Iris method for anti-counterfeit and device
CN109409342A (en) * 2018-12-11 2019-03-01 北京万里红科技股份有限公司 A kind of living iris detection method based on light weight convolutional neural networks

Also Published As

Publication number Publication date
CN111507198A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
US11704409B2 (en) Post-training detection and identification of backdoor-poisoning attacks
Rodrigues et al. Robustness of multimodal biometric fusion methods against spoof attacks
CN111767400B (en) Training method and device for text classification model, computer equipment and storage medium
CN111860674A (en) Sample class identification method and device, computer equipment and storage medium
Miller et al. When not to classify: Anomaly detection of attacks (ADA) on DNN classifiers at test time
US20170004351A1 (en) Method and apparatus for detecting fake fingerprint, and method and apparatus for recognizing fingerprint
JP2022141931A (en) Method and device for training living body detection model, method and apparatus for living body detection, electronic apparatus, storage medium, and computer program
CN108053545B (en) Certificate verification method and device, server and storage medium
CN110602120B (en) Network-oriented intrusion data detection method
CN111694954B (en) Image classification method and device and electronic equipment
US20230147685A1 (en) Generalized anomaly detection
US20240135211A1 (en) Methods and apparatuses for performing model ownership verification based on exogenous feature
CN108985151B (en) Handwriting model training method, handwritten character recognition method, device, equipment and medium
CN111507198B (en) Training method for printing iris detection model, and printing iris detection method and device
CN110070017B (en) Method and device for generating human face artificial eye image
CN114567512B (en) Network intrusion detection method, device and terminal based on improved ART2
Mazumdar et al. Siamese convolutional neural network‐based approach towards universal image forensics
CN116564315A (en) Voiceprint recognition method, voiceprint recognition device, voiceprint recognition equipment and storage medium
CN116386117A (en) Face recognition method, device, equipment and storage medium
CN116340752A (en) Predictive analysis result-oriented data story generation method and system
Bisogni et al. Multibiometric score-level fusion through optimization and training
Bokade et al. An ArmurMimus multimodal biometric system for Khosher authentication
CN115565548A (en) Abnormal sound detection method, abnormal sound detection device, storage medium and electronic equipment
CN114373098A (en) Image classification method and device, computer equipment and storage medium
CN113361314A (en) Anti-spoofing method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 100081 room 701, floor 7, Fuhai international port, Haidian District, Beijing
Applicant after: Beijing wanlihong Technology Co.,Ltd.
Address before: 100081 1504, floor 15, Fuhai international port, Daliushu Road, Haidian District, Beijing
Applicant before: BEIJING SUPERRED TECHNOLOGY Co.,Ltd.
GR01 Patent grant