CN111507195A - Iris segmentation neural network model training method, iris segmentation method and device - Google Patents

Iris segmentation neural network model training method, iris segmentation method and device

Info

Publication number
CN111507195A
Authority
CN
China
Prior art keywords
iris
segmentation
network model
neural network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010202969.2A
Other languages
Chinese (zh)
Other versions
CN111507195B (en)
Inventor
张小亮
王秀贞
戚纪纲
杨占金
Other inventors have requested that their names not be disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Superred Technology Co Ltd
Original Assignee
Beijing Superred Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Superred Technology Co Ltd filed Critical Beijing Superred Technology Co Ltd
Priority to CN202010202969.2A priority Critical patent/CN111507195B/en
Publication of CN111507195A publication Critical patent/CN111507195A/en
Application granted granted Critical
Publication of CN111507195B publication Critical patent/CN111507195B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding

Abstract

The disclosure relates to a training method of an iris segmentation neural network model, an iris segmentation method and an iris segmentation device. The training method of the iris segmentation neural network model comprises the following steps: acquiring an iris recognition sample set, wherein the iris recognition sample set comprises a plurality of iris recognition samples, and the iris recognition samples comprise iris recognition labels; training an iris recognition network model based on a network automatic search technology and the iris recognition sample set; determining an iris segmentation neural network model, wherein the iris segmentation neural network model takes the iris recognition network model as a basic feature network; acquiring an iris segmentation sample set, wherein the iris segmentation sample set comprises a plurality of iris segmentation samples, and the iris segmentation samples comprise iris segmentation labels; and training the iris segmentation neural network model based on the iris segmentation sample set. With the method and device, the consumption of computing resources can be reduced while segmentation accuracy is maintained.

Description

Iris segmentation neural network model training method, iris segmentation method and device
Technical Field
The disclosure relates to the technical field of iris segmentation, in particular to a training method of an iris segmentation neural network model, an iris segmentation method and an iris segmentation device.
Background
With the rise of artificial intelligence, biometric identification technologies such as face recognition, iris recognition, and fingerprint recognition have received great attention. Among them, iris recognition is considered one of the most stable, accurate, and reliable verification methods.
Iris image segmentation is one of the difficult problems in iris recognition. To segment the iris in images captured in complex and variable scenes, iris segmentation techniques based on deep learning have recently emerged. However, deep-learning-based iris segmentation consumes a large amount of computing resources when segmenting iris images.
Disclosure of Invention
In order to overcome the problems in the prior art, the present disclosure provides a training method of an iris segmentation neural network model, an iris segmentation method and an iris segmentation device.
In a first aspect, an embodiment of the present disclosure provides a training method of an iris segmentation neural network model, including: acquiring an iris recognition sample set, wherein the iris recognition sample set comprises a plurality of iris recognition samples, and the iris recognition samples comprise iris recognition labels; training an iris recognition network model based on a network automatic search technology and the iris recognition sample set; determining an iris segmentation neural network model, wherein the iris segmentation neural network model takes the iris recognition network model as a basic feature network; acquiring an iris segmentation sample set, wherein the iris segmentation sample set comprises a plurality of iris segmentation samples, and the iris segmentation samples comprise iris segmentation labels; and training the iris segmentation neural network model based on the iris segmentation sample set.
In one embodiment, training the iris segmentation neural network model based on the iris segmentation sample set further comprises: based on the quantization technology, the weight parameters in the iris segmentation neural network model are quantized, and the iris segmentation neural network model after the weight parameters are quantized is used as the iris segmentation neural network model obtained through final training.
In another embodiment, training an iris recognition network model based on a network automatic search technique and an iris recognition sample set comprises: dividing the iris recognition sample set into an iris recognition training set and an iris recognition verification set; and determining, based on the network automatic search technology and the iris recognition sample set, a first parameter and a second parameter in a neural network search objective function so as to minimize the loss on the iris recognition verification set and the loss on the iris recognition training set, wherein the neural network search objective function is:
min_α ζ_val(ω*(α), α)
s.t. ω*(α) = argmin_ω ζ_train(ω, α)
where α is the first parameter, ω is the second parameter, ζ_val is the loss on the iris recognition verification set, and ζ_train is the loss on the iris recognition training set; and obtaining a trained iris recognition network model based on the determined first parameter and second parameter.
In yet another embodiment, determining the iris segmentation neural network model includes: determining the iris segmentation neural network model based on an encoding-decoding structure, wherein the basic feature network of the encoding part in the encoding-decoding structure is the iris recognition network model.
In another embodiment, training an iris segmentation neural network model based on an iris segmentation sample set includes: training an iris segmentation neural network model determined based on an encoding-decoding structure based on the iris segmentation sample set; when the loss function is smaller than or equal to a first threshold value, training the iris segmentation neural network model is completed, wherein the loss function is as follows:
ζ(p_t) = -α_t(1-p_t)^γ log(p_t)
p_t = p, if y = 1 ("is an iris"); p_t = 1-p, otherwise
where α_t is the first adjustment coefficient, γ is the second adjustment coefficient, y is the class of the iris segmentation label, and p is the probability of belonging to the iris positive sample.
In another embodiment, quantizing the weight parameters in the iris segmentation neural network model based on a quantization technique includes: training to obtain cluster centers of the weight parameters in the iris segmentation neural network model based on the iris segmentation sample set; quantizing the cluster centers; and when the error of the segmentation results on the iris segmentation sample set before and after the weight parameters are quantized is less than or equal to a second threshold, the quantization of the weight parameters of the iris segmentation neural network model is completed.
In a second aspect, an embodiment of the present disclosure provides an iris segmentation method, where the iris segmentation method includes: acquiring an iris image to be processed; acquiring an iris area of an iris image to be processed based on iris detection; inputting the iris region into an iris segmentation neural network model, and acquiring an iris region binary image of an iris image to be processed, wherein the iris segmentation neural network model is the iris segmentation neural network model in the first aspect of the disclosure or any embodiment of the first aspect; and based on the iris region binary image, segmenting the iris image to be processed to obtain an iris image segmentation image.
In one embodiment, inputting the iris region into the iris segmentation neural network model, and acquiring the iris region binary image of the iris image to be processed, includes: inputting the iris region into an iris segmentation neural network model to obtain an initial iris region binary image of the iris image to be processed; removing isolated points in the initial iris region binary image to obtain an iris region binary image of the iris image to be processed; in the initial iris area binary image, the numerical value corresponding to the isolated point is different from the numerical values corresponding to other surrounding pixel points.
In another embodiment, acquiring an iris region of an iris image to be processed based on iris detection includes: based on iris detection, obtaining the vertex coordinates of an iris candidate region of the iris image to be processed and the length and width of the iris candidate region; selecting the larger size of the length and the width as a reference size, and amplifying the reference size by a first multiple to obtain a first size; and based on the vertex coordinates and the first size, intercepting a square area on the iris image to be processed to obtain an iris area.
In another embodiment, based on the vertex coordinates and the first size, a square region is cut out from the iris image to be processed, and the iris region is obtained, including: based on the vertex coordinates and the first size, cutting a square area on the iris image to be processed to obtain a first iris area; and amplifying the first iris area by a preset multiple to obtain the iris area.
In a third aspect, an embodiment of the present disclosure provides an iris segmentation apparatus, where the iris segmentation apparatus includes: the module for acquiring the iris image to be processed is used for acquiring the iris image to be processed; the iris acquisition module is used for acquiring an iris area of an iris image to be processed based on iris detection; an iris region binary image obtaining module, configured to input an iris region into an iris segmentation neural network model, and obtain an iris region binary image of an iris image to be processed, where the iris segmentation neural network model is the iris segmentation neural network model according to the first aspect of the present disclosure or any embodiment of the first aspect; and the processing module is used for segmenting the iris image to be processed based on the iris region binary image to obtain an iris image segmentation image.
In one embodiment, the iris region binary map obtaining module is configured to: inputting the iris region into an iris segmentation neural network model to obtain an initial iris region binary image of the iris image to be processed; removing isolated points in the initial iris region binary image to obtain an iris region binary image of the iris image to be processed; in the initial iris area binary image, the numerical value corresponding to the isolated point is different from the numerical values corresponding to other surrounding pixel points.
In another embodiment, the acquire iris region module is to: based on iris detection, obtaining the vertex coordinates of an iris candidate region of the iris image to be processed and the length and width of the iris candidate region; selecting the larger size of the length and the width as a reference size, and amplifying the reference size by a first multiple to obtain a first size; and based on the vertex coordinates and the first size, intercepting a square area on the iris image to be processed to obtain the iris area.
In yet another embodiment, the acquire iris region module is to: based on the vertex coordinates and the first size, cutting a square area on the iris image to be processed to obtain a first iris area; and amplifying the first iris area by a preset multiple to obtain the iris area.
In a fourth aspect, an embodiment of the present disclosure provides an electronic device, where the electronic device includes: a memory to store instructions; and a processor for calling the instructions stored in the memory to execute the iris segmentation method in the second aspect of the present disclosure or any one of the embodiments of the second aspect of the present disclosure.
In a fifth aspect, the present disclosure provides a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and when executed by a processor, the computer-executable instructions perform the iris segmentation method described in the second aspect or any one of the implementation manners of the second aspect.
The present disclosure provides a training method of an iris segmentation neural network model; segmenting iris images with the iris segmentation neural network model obtained by the training can reduce the consumption of computing resources while ensuring segmentation accuracy.
Drawings
The above and other objects, features and advantages of the embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
fig. 1 shows a flowchart of a training method of an iris segmentation neural network model provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating steps of training an iris recognition network model in a training method of an iris segmentation neural network model provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating the steps of training an iris segmentation neural network model in a training method of the iris segmentation neural network model provided by an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a step of quantifying weight parameters in an iris segmentation neural network model in a training method of the iris segmentation neural network model provided by an embodiment of the present disclosure;
fig. 5 is a flowchart illustrating an iris segmentation method provided by an embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating an iris segmentation apparatus provided in an embodiment of the present disclosure;
fig. 7 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
The principles and spirit of the present disclosure will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the present disclosure, and are not intended to limit the scope of the present disclosure in any way.
It should be noted that, although the expressions "first", "second", etc. are used herein to describe different modules, steps, data, etc. of the embodiments of the present disclosure, the expressions "first", "second", etc. are merely used to distinguish between different modules, steps, data, etc. and do not indicate a particular order or degree of importance. Indeed, the terms "first," "second," and the like are fully interchangeable.
Iris image segmentation is an important part of iris recognition and directly determines the performance of the whole iris recognition system. With traditional iris segmentation algorithms, it is difficult to segment the desired iris region in complex and variable scenes. Therefore, a completely new technique is required to solve this problem.
The emergence of deep learning offers a glimmer of hope. Although deep-learning-based iris segmentation solves the segmentation bottleneck in some complex scenes, its consumption of computing resources remains a challenge.
According to the training method of the iris segmentation neural network model, the iris image is segmented based on the iris segmentation neural network model obtained through training, and consumption of computing resources can be reduced on the premise that segmentation accuracy is guaranteed.
Fig. 1 shows a flowchart of a training method of an iris segmentation neural network model according to an embodiment of the present disclosure.
As shown in fig. 1, the training method of the iris segmentation neural network model provided by the present disclosure includes step S101, step S102, step S103, step S104, and step S105. The steps will be described separately below.
In step S101, an iris recognition sample set is acquired.
The iris recognition sample set comprises a plurality of iris recognition samples, and the iris recognition samples comprise iris recognition labels.
The iris recognition samples correspond one-to-one with the iris recognition labels. For example, iris recognition sample A corresponds to iris recognition label a, and iris recognition sample B corresponds to iris recognition label b.
The iris recognition sample set is used for training an iris recognition network model.
In step S102, an iris recognition network model is trained based on the network automatic search technique and the iris recognition sample set.
In step S103, an iris segmentation neural network model is determined.
The iris segmentation neural network model takes the iris recognition network model as its basic feature network.
Deep convolutional neural networks have strong feature expression capability; also taking training convergence speed into account, the iris recognition network model can be trained first, so that the iris segmentation neural network model built on it expresses features better and converges faster during training. The trained iris recognition network model then serves as the basic feature network of the iris segmentation neural network model.
In step S104, an iris segmentation sample set is acquired.
The iris segmentation sample set includes a plurality of iris segmentation samples, and the iris segmentation samples include iris segmentation labels.
The iris segmentation samples correspond one-to-one with the iris segmentation labels. For example, iris segmentation sample A' corresponds to iris segmentation label a', and iris segmentation sample B' corresponds to iris segmentation label b'. The iris segmentation labels are of two kinds: "is an iris" and "is not an iris".
For convenience of illustration, the "is an iris" label may be set to 1, and the "is not an iris" label may be set to 0.
The iris segmentation sample set is used for training an iris segmentation neural network model.
In step S105, an iris segmentation neural network model is trained based on the iris segmentation sample set.
Based on the iris segmentation sample set, the iris segmentation neural network model, which takes the iris recognition network model as its basic feature network, is trained by fine-tuning the weight parameters of the iris recognition network model and training the remaining weight parameters of the iris segmentation neural network model.
In the process of training the iris segmentation neural network model based on the iris segmentation sample set, the weight parameters in the iris recognition network model determined after training can be adjusted, so that the convergence rate of the iris segmentation neural network model is effectively accelerated.
According to the training method of the iris segmentation neural network model, the iris image is segmented based on the iris segmentation neural network model obtained through training, and consumption of computing resources can be reduced on the premise that segmentation accuracy is guaranteed.
In an exemplary embodiment of the present disclosure, in step S105, training the iris segmentation neural network model based on the iris segmentation sample set further includes: quantizing the weight parameters in the iris segmentation neural network model based on a quantization technique. The iris segmentation neural network model with quantized weight parameters is used as the iris segmentation neural network model obtained by the final training.
Quantizing the weight parameters in the iris segmentation neural network model converts them from floating-point numbers to integers, yielding a lightweight iris segmentation neural network model. Segmenting iris images with this lightweight model reduces the consumption of computing resources.
Fig. 2 shows a flowchart of steps of training an iris recognition network model in a training method of an iris segmentation neural network model provided by an embodiment of the disclosure.
As shown in fig. 2, in an exemplary embodiment of the present disclosure, the training of the iris recognition network model based on the network automatic search technology and the iris recognition sample set of step S102 includes step S1021, step S1022, and step S1023. The steps will be described separately below.
In step S1021, the iris recognition sample set is divided into an iris recognition training set and an iris recognition verification set.
In step S1022, based on the network automatic search technique and the iris recognition sample set, the first parameter and the second parameter in the neural network search objective function are determined so as to minimize the loss on the iris recognition verification set and the loss on the iris recognition training set.
The neural network searching objective function is as follows:
min_α ζ_val(ω*(α), α)
s.t. ω*(α) = argmin_ω ζ_train(ω, α)
wherein α is the first parameter, ω is the second parameter, ζ_val is the loss on the iris recognition verification set, and ζ_train is the loss on the iris recognition training set.
In step S1023, a trained iris recognition network model is obtained based on the determined first parameter and second parameter.
By relaxing the search space of the neural network from a discrete space to a continuous space, the neural network search objective function can be optimized by gradient descent using the loss on the iris recognition verification set and the loss on the iris recognition training set.
In the process of training the iris recognition network model based on the iris recognition training set and the iris recognition verification set, the first parameter α and the second parameter ω are continuously adjusted to minimize the loss of the iris recognition verification set and the loss of the iris recognition training set.
By the method, the iris recognition network model with higher accuracy can be obtained, and a foundation is laid for reducing the consumption of calculated amount in the process of training the iris segmentation neural network model.
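The patent does not disclose a concrete search implementation, but the bilevel objective above matches DARTS-style gradient-based architecture search, in which the discrete choice among candidate operations is relaxed into a softmax over architecture weights. The following is a minimal, hypothetical PyTorch sketch of the alternating first-order updates; the candidate operations, class names, and optimizer setup are illustrative assumptions, not the patent's actual network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Continuous relaxation of a discrete operation choice: the output is a
    softmax(alpha)-weighted sum of candidate operations (assumed candidates)."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),  # candidate: 3x3 conv
            nn.Conv2d(channels, channels, 5, padding=2),  # candidate: 5x5 conv
            nn.Identity(),                                # candidate: skip
        ])
        # alpha: the "first parameter" (architecture weights) of the objective
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

def search_step(model, loss_fn, train_batch, val_batch, w_opt, a_opt):
    """One alternating step of the bilevel search (first-order approximation):
    omega (network weights) is updated on the training split, then alpha
    (architecture weights) is updated on the verification split."""
    x_t, y_t = train_batch
    w_opt.zero_grad()
    loss_fn(model(x_t), y_t).backward()  # zeta_train(omega, alpha)
    w_opt.step()

    x_v, y_v = val_batch
    a_opt.zero_grad()
    loss_fn(model(x_v), y_v).backward()  # zeta_val(omega*(alpha), alpha)
    a_opt.step()
```

Here w_opt would be built over all parameters except the alpha tensors and a_opt over the alpha tensors only; the exact second-order term of the objective is dropped for brevity, as is common in first-order search.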
In an exemplary embodiment of the present disclosure, the determining the iris segmentation neural network model in step S103 includes: based on the encoding-decoding structure, an iris segmentation neural network model is determined.
The basic feature network of the encoding part in the encoding-decoding structure is the iris recognition network model, and the decoding part of the encoding-decoding structure adopts a progressive upsampling operation.
The iris segmentation neural network model determined in the mode can reduce the consumption of calculated amount in the process of training the iris segmentation neural network model.
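The patent does not fix the layer configuration; the following hypothetical PyTorch sketch shows one way to realize such an encoding-decoding model, with the (searched and pretrained) iris recognition network as the encoder and a decoder that restores resolution through progressive 2x upsampling. The channel count and the assumed 8x downsampling of the encoder are illustrative assumptions.

```python
import torch.nn as nn

class IrisSegNet(nn.Module):
    """Encoding-decoding sketch: the encoder is the searched iris recognition
    backbone; the decoder upsamples progressively (2x per stage) and ends with
    a 1x1 convolution producing a per-pixel iris logit."""
    def __init__(self, backbone: nn.Module, backbone_channels: int = 256):
        super().__init__()
        self.encoder = backbone  # assumed to downsample the input by 8x
        c = backbone_channels

        def up_block(cin, cout):
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(cin, cout, 3, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
            )

        self.decoder = nn.Sequential(
            up_block(c, c // 2),
            up_block(c // 2, c // 4),
            up_block(c // 4, c // 8),
            nn.Conv2d(c // 8, 1, 1),  # 1-channel map of per-pixel iris logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```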
Fig. 3 is a flowchart illustrating steps of training an iris segmentation neural network model in a training method of the iris segmentation neural network model provided by an embodiment of the present disclosure.
As shown in fig. 3, in an exemplary embodiment of the present disclosure, training the iris segmentation neural network model based on the iris segmentation sample set in step S105 includes steps S1051 and S1052. The steps will be described below.
In step S1051, an iris segmentation neural network model determined based on the encoding-decoding structure is trained based on the iris segmentation sample set.
In step S1052, when the loss function is less than or equal to the first threshold, the training of the iris segmentation neural network model is completed.
Wherein the loss function is:
ζ(p_t) = -α_t(1-p_t)^γ log(p_t)
p_t = p, if y = 1 ("is an iris"); p_t = 1-p, otherwise
wherein α_t is the first adjustment coefficient, γ is the second adjustment coefficient, y is the class of the iris segmentation label, and p is the probability of belonging to the iris positive sample.
In one embodiment, if the iris segmentation label corresponding to an iris segmentation sample is "is an iris", that is, the class y of the iris segmentation label is iris, the corresponding probability p_t is p; if the iris segmentation label is "is not an iris", that is, the class y is not iris, the corresponding probability p_t is (1-p).
The first adjustment coefficient α_t may take any value between 0 and 1.
The value of the second adjustment coefficient γ may be any value between 0 and 5.
The first threshold may be adjusted according to actual conditions, and in the present disclosure, the first threshold is not particularly limited.
In the process of training the iris segmentation neural network model based on the iris segmentation sample set, the weight parameters in the iris segmentation neural network model are continuously adjusted so that the loss function is smaller than or equal to the first threshold value. By the method, the iris segmentation neural network model with higher accuracy can be obtained.
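This loss has the form of the focal loss; a minimal PyTorch sketch matching the formula above is given below. The defaults α_t = 0.25 and γ = 2 are common example values inside the stated ranges, not values fixed by the patent.

```python
import torch

def focal_loss(p: torch.Tensor, y: torch.Tensor,
               alpha_t: float = 0.25, gamma: float = 2.0,
               eps: float = 1e-7) -> torch.Tensor:
    """zeta(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), where
    p_t = p if y == 1 ("is an iris") and p_t = 1 - p otherwise.
    p: predicted per-pixel iris probabilities in (0, 1); y: 0/1 labels."""
    p_t = torch.where(y == 1, p, 1.0 - p)
    return (-alpha_t * (1.0 - p_t).pow(gamma) * torch.log(p_t + eps)).mean()
```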
Fig. 4 shows a flowchart of a step of quantifying weight parameters in an iris segmentation neural network model in a training method of the iris segmentation neural network model provided by the embodiment of the disclosure.
As shown in fig. 4, in an exemplary embodiment of the present disclosure, quantizing the weight parameters in the iris segmentation neural network model based on a quantization technique includes step S201, step S202, and step S203. The steps will be described separately below.
In step S201, based on the iris segmentation sample set, a cluster center of the weight parameter in the iris segmentation neural network model is obtained through training.
The weight parameters in the trained iris segmentation neural network model are clustered to obtain the cluster centers of the weight parameters.
In step S202, the cluster center is quantized.
Because the cluster centers of the weight parameters are floating-point numbers, quantizing the cluster centers converts them into integers, and the weight parameters in the iris segmentation neural network model are replaced by the quantized integers. In this way, a lightweight iris segmentation neural network model can be obtained, which reduces the computational load in the process of segmenting iris images based on the model.
In step S203, when the error of the segmentation results on the iris segmentation sample set before and after the weight parameters are quantized is less than or equal to the second threshold, the quantization of the weight parameters of the iris segmentation neural network model is completed.
The second threshold may be adjusted according to actual conditions, and the second threshold is not specifically limited in this disclosure.
In the process of quantization training of the weight parameters of the iris segmentation neural network model based on the iris segmentation sample set, the quantized weight parameters are continuously adjusted so that the error of the segmentation results on the iris segmentation sample set is less than or equal to the second threshold. In this way, an iris segmentation neural network model with quantized weight parameters and high accuracy can be obtained.
After weight parameter quantization, the iris segmentation neural network model reduces the amount of computation consumed during iris image segmentation while maintaining the accuracy of iris image segmentation based on the model.
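The patent leaves the clustering and integer format open; the sketch below illustrates the idea for a single layer with a simple NumPy k-means, mapping each floating-point weight to an 8-bit index into a small table of cluster centers. The choice of 256 clusters and uint8 indices is an assumption.

```python
import numpy as np

def quantize_layer_weights(weights: np.ndarray, n_clusters: int = 256,
                           n_iter: int = 20):
    """Cluster a layer's float weights (1-D k-means) and replace each weight
    by the index of its nearest cluster center; the small table of centers is
    stored once per layer, so the bulk of the model becomes integer indices."""
    w = weights.ravel()
    centers = np.linspace(w.min(), w.max(), n_clusters)  # uniform init
    for _ in range(n_iter):
        # assign each weight to its nearest center (fine for a sketch; the
        # distance matrix is n_weights x n_clusters)
        idx = np.abs(w[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(n_clusters):
            members = w[idx == k]
            if members.size:
                centers[k] = members.mean()
    idx = np.abs(w[:, None] - centers[None, :]).argmin(axis=1)
    return idx.astype(np.uint8).reshape(weights.shape), centers
```

At inference time the layer reconstructs its weights as centers[idx]; the quantization is accepted once the segmentation error before and after quantization stays within the second threshold, as described above.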
Based on the same inventive concept, a second aspect of the embodiments of the present disclosure provides an iris segmentation method.
Fig. 5 shows a flowchart of an iris segmentation method provided by an embodiment of the present disclosure.
As shown in fig. 5, in an exemplary embodiment of the present disclosure, the iris segmentation method includes step S301, step S302, step S303, and step S304. The steps will be described separately below.
In step S301, an iris image to be processed is acquired.
The iris image to be processed is an image containing an iris, and the iris in the iris image to be processed needs to be segmented.
In step S302, based on iris detection, an iris region of an iris image to be processed is acquired.
The iris region is an image region including an iris. The iris region alone does not clearly determine which pixels belong to the iris and which do not, so the iris within the iris region must be further segmented.
In step S303, the iris region is input into the iris segmentation neural network model to obtain an iris region binary image of the iris image to be processed.
The iris segmentation neural network model is the iris segmentation neural network model according to the first aspect of the present disclosure or any embodiment of the first aspect.
The iris region binary image is an image that distinguishes iris pixels from non-iris pixels. Two numerical identifiers are included in the iris region binary map.
In one embodiment, the iris portion in the iris region binary image may be labeled as "1" and the non-iris portion in the iris region binary image may be labeled as "0". The user can definitely segment the iris in the iris image to be processed based on the number marked in the iris region binary image.
In step S304, the iris image to be processed is segmented based on the iris region binary image, so as to obtain an iris image segmentation map.
The iris image segmentation map is an image of the segmented iris.
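As a concrete illustration, the binary map and the segmentation map can be produced from the model's per-pixel probabilities as sketched below; the 0.5 threshold and the single-channel image assumption are examples, not values specified by the patent.

```python
import numpy as np

def segment_iris(image: np.ndarray, prob_map: np.ndarray,
                 threshold: float = 0.5):
    """Threshold per-pixel iris probabilities into a binary map (1 = iris,
    0 = not iris), then mask the source image so only iris pixels remain.
    Assumes a single-channel image with the same shape as prob_map."""
    binary_map = (prob_map >= threshold).astype(np.uint8)
    segmentation_map = image * binary_map
    return segmentation_map, binary_map
```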
According to the iris segmentation method, the iris region is input into the iris segmentation neural network model, the iris region binary image of the iris image to be processed can be rapidly obtained on the premise of ensuring the segmentation precision and reducing the consumption of computing resources, and the iris is segmented on the basis of the iris region binary image.
In an exemplary embodiment of the disclosure, inputting the iris region into the iris segmentation neural network model, and acquiring the iris region binary image of the iris image to be processed comprises the following steps.
Inputting the iris region into the iris segmentation neural network model to obtain an initial iris region binary image of the iris image to be processed.
And removing the isolated points in the initial iris region binary image to obtain the iris region binary image of the iris image to be processed. In the initial iris area binary image, the numerical value corresponding to the isolated point is different from the numerical values corresponding to other surrounding pixel points.
The iris in the iris image to be processed is usually composed of many adjacent pixel points. If the numerical value of a certain pixel point in the initial iris region binary image differs from the numerical values of all surrounding adjacent pixel points, that pixel point is an isolated point and is judged to be an error. The isolated point needs to be removed, i.e., its value adjusted to the opposite value.
By processing isolated points in the obtained initial iris region binary image, an iris region binary image with higher accuracy can be obtained.
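A minimal sketch of this clean-up, assuming an isolated point is a pixel whose value differs from all eight of its neighbors and is flipped to the opposite value as described above; a morphological opening or closing would be a common alternative implementation.

```python
import numpy as np

def remove_isolated_points(binary_map: np.ndarray) -> np.ndarray:
    """Flip any interior pixel whose value differs from all eight neighbors."""
    out = binary_map.copy()
    h, w = binary_map.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            center = binary_map[i, j]
            neighbor_sum = binary_map[i - 1:i + 2, j - 1:j + 2].sum() - center
            # a lone 1 among zeros, or a lone 0 among ones
            if (center == 1 and neighbor_sum == 0) or \
               (center == 0 and neighbor_sum == 8):
                out[i, j] = 1 - center  # adjust to the opposite value
    return out
```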
In an exemplary embodiment of the present disclosure, obtaining an iris region of an iris image to be processed based on iris detection includes the following steps.
And obtaining the vertex coordinates of the iris candidate region of the iris image to be processed and the length and width of the iris candidate region based on the iris detection.
The iris candidate region is a rectangular region, and the iris candidate region includes an iris.
Based on the iris detection, the vertex coordinates of the iris candidate region may be determined, for example, the top left vertex coordinates (x, y), and the length L and width W of the iris candidate region may also be determined.
And selecting the larger size of the length and the width as a reference size, and amplifying the reference size by a first multiple to obtain a first size.
The first multiple may be adjusted according to actual conditions, and for example, may be 1.1, and in the present disclosure, the first multiple is not specifically limited.
And based on the vertex coordinates and the first size, intercepting a square area on the iris image to be processed to obtain an iris area.
The iris region input into the iris segmentation neural network model needs to be square. Therefore, the first size can be used as the side length of a square region, and the square region is cut from the iris image to be processed according to the vertex coordinates. This lays a foundation for feeding the iris segmentation neural network model and obtaining a more accurate iris region binary image of the iris image to be processed.
In one embodiment, if the length L of the iris candidate region is greater than the width W, the larger dimension is L, i.e., the reference size is L. If the first multiple is 1.1, the first size is 1.1L. Based on the vertex coordinates (x, y) and the first size 1.1L, taking the first size as the side length of a square region, the square region is cut out of the iris image to be processed to obtain the iris region of the iris image to be processed.
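The cropping in this embodiment can be sketched as follows; anchoring the square at the top-left vertex (x, y) and clipping at the image border are illustrative assumptions where the text leaves details open.

```python
import numpy as np

def crop_iris_region(image: np.ndarray, x: int, y: int,
                     length: float, width: float,
                     first_multiple: float = 1.1) -> np.ndarray:
    """Cut a square region of side first_multiple * max(length, width)
    from the image, anchored at the detected top-left vertex (x, y)."""
    side = int(round(first_multiple * max(length, width)))
    h, w = image.shape[:2]
    x0, y0 = max(0, int(x)), max(0, int(y))
    return image[y0:min(y0 + side, h), x0:min(x0 + side, w)]
```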
In an exemplary embodiment of the present disclosure, the step of intercepting a square region on the iris image to be processed based on the vertex coordinates and the first size to obtain the iris region includes the following steps.
And based on the vertex coordinates and the first size, intercepting a square area on the iris image to be processed to obtain a first iris area.
And amplifying the first iris area by a preset multiple to obtain the iris area.
The preset multiple may be determined according to actual conditions, and in the present disclosure, the preset multiple is not specifically limited.
In application, the first iris region is enlarged by the preset multiple by scaling its side length by the preset multiple, thereby obtaining the iris region.
Based on the same inventive concept, a third aspect of the embodiments of the present disclosure also provides an iris segmentation apparatus.
Fig. 6 shows a schematic diagram of an iris segmentation apparatus provided by an embodiment of the present disclosure.
As shown in fig. 6, in an exemplary embodiment of the present disclosure, the iris segmentation apparatus includes a module 201 for acquiring an iris image to be processed, a module 202 for acquiring an iris region, a module 203 for acquiring a binary image of the iris region, and a processing module 204. Each module will be described separately below.
And the module 201 for acquiring the iris image to be processed is used for acquiring the iris image to be processed.
An iris region acquiring module 202, configured to acquire an iris region of an iris image to be processed based on iris detection.
The iris region binary image obtaining module 203 is configured to input the iris region into the iris segmentation neural network model, and obtain an iris region binary image of the iris image to be processed.
The iris segmentation neural network model is the iris segmentation neural network model according to the first aspect of the embodiments of the present disclosure or any embodiment of the first aspect.
And the processing module 204 is configured to segment the iris image to be processed based on the iris region binary image to obtain an iris image segmentation map.
In an exemplary embodiment of the disclosure, the obtain iris region binary map module 203 is configured to: inputting the iris region into an iris segmentation neural network model to obtain an initial iris region binary image of the iris image to be processed; removing isolated points in the initial iris region binary image to obtain an iris region binary image of the iris image to be processed; in the initial iris area binary image, the numerical value corresponding to the isolated point is different from the numerical values corresponding to other surrounding pixel points.
In an exemplary embodiment of the present disclosure, the acquire iris region module 202 is configured to: based on iris detection, obtaining the vertex coordinates of an iris candidate region of the iris image to be processed and the length and width of the iris candidate region; selecting the larger size of the length and the width as a reference size, and amplifying the reference size by a first multiple to obtain a first size; and based on the vertex coordinates and the first size, intercepting a square area on the iris image to be processed to obtain an iris area.
In an exemplary embodiment of the present disclosure, the acquire iris region module 202 is configured to: based on the vertex coordinates and the first size, cutting a square area on the iris image to be processed to obtain a first iris area; and amplifying the first iris area by a preset multiple to obtain the iris area.
Fig. 7 illustrates an electronic device 30 provided by an embodiment of the present disclosure.
As shown in fig. 7, an embodiment of the present disclosure provides an electronic device 30, where the electronic device 30 includes a memory 310, a processor 320, and an Input/Output (I/O) interface 330. The memory 310 is used for storing instructions. A processor 320 for calling the instructions stored in the memory 310 to execute the iris segmentation method of the present disclosure. The processor 320 is connected to the memory 310 and the I/O interface 330, respectively, for example, via a bus system and/or other connection mechanism (not shown). The memory 310 may be used to store programs and data, including programs related to iris segmentation in the embodiments of the present disclosure, and the processor 320 executes various functional applications and data processing of the electronic device 30 by executing the programs stored in the memory 310.
In the embodiment of the present disclosure, the processor 320 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), and the processor 320 may be one or a combination of several Central Processing Units (CPUs) or other forms of processing units with data processing capability and/or instruction execution capability.
Memory 310 in embodiments of the present disclosure may comprise one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile Memory may include, for example, a Random Access Memory (RAM), a cache Memory (cache), and/or the like. The nonvolatile Memory may include, for example, a Read-only Memory (ROM), a Flash Memory (Flash Memory), a Hard Disk Drive (HDD), a Solid-State Drive (SSD), or the like.
In the disclosed embodiment, the I/O interface 330 may be used to receive input instructions (e.g., numeric or character information, and generate key signal inputs related to user settings and function control of the electronic device 30, etc.), and may also output various information (e.g., images or sounds, etc.) to the outside. The I/O interface 330 in embodiments of the present disclosure may include one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a mouse, a joystick, a trackball, a microphone, a speaker, a touch panel, and the like.
In some embodiments, the present disclosure provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, perform any of the methods described above.
Although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
The methods and apparatus of the present disclosure can be implemented with standard programming techniques, with rule-based logic or other logic to accomplish the various method steps. It should also be noted that the words "means" and "module," as used herein and in the claims, are intended to encompass implementations using one or more lines of software code, and/or hardware implementations, and/or equipment for receiving inputs.
Any of the steps, operations, or procedures described herein may be performed or implemented using one or more hardware or software modules, alone or in combination with other devices. In one embodiment, the software modules are implemented using a computer program product comprising a computer readable medium containing computer program code, which is executable by a computer processor for performing any or all of the described steps, operations, or procedures.
The foregoing description of the implementations of the disclosure has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosure. The embodiments were chosen and described in order to explain the principles of the disclosure and its practical application to enable one skilled in the art to utilize the disclosure in various embodiments and with various modifications as are suited to the particular use contemplated.

Claims (16)

1. A training method of an iris segmentation neural network model is characterized by comprising the following steps:
acquiring an iris recognition sample set, wherein the iris recognition sample set comprises a plurality of iris recognition samples, and the iris recognition samples comprise iris recognition labels;
training an iris recognition network model based on a network automatic search technology and the iris recognition sample set;
determining the iris segmentation neural network model, wherein the iris segmentation neural network model takes the iris recognition network model as a basic feature network;
acquiring an iris segmentation sample set, wherein the iris segmentation sample set comprises a plurality of iris segmentation samples, and the iris segmentation samples comprise iris segmentation labels;
and training the iris segmentation neural network model based on the iris segmentation sample set.
2. The method for training an iris segmentation neural network model according to claim 1, wherein the training the iris segmentation neural network model based on the iris segmentation sample set further comprises:
based on a quantization technology, the weight parameters in the iris segmentation neural network model are quantized, and the iris segmentation neural network model after the weight parameters are quantized is used as the iris segmentation neural network model obtained through final training.
3. The method for training an iris segmentation neural network model according to claim 1, wherein the training an iris recognition network model based on the network automatic search technology and the iris recognition sample set comprises:
dividing the iris recognition sample set into an iris recognition training set and an iris recognition verification set;
based on a network automatic search technology and the iris recognition sample set, determining a first parameter and a second parameter in a neural network search objective function so as to minimize the loss of the iris recognition verification set and the loss of the iris recognition training set, wherein the neural network search objective function is as follows:
min_α ζ_val(ω*(α), α)
s.t. ω*(α) = argmin_ω ζ_train(ω, α)
where α is the first parameter, ω is the second parameter, ζ_val is the loss on the iris recognition verification set, and ζ_train is the loss on the iris recognition training set;
and obtaining the trained iris recognition network model based on the determined first parameter and the second parameter.
4. The method for training an iris segmentation neural network model according to claim 1, wherein the determining the iris segmentation neural network model comprises:
determining the iris segmentation neural network model based on a coding-decoding structure, wherein the basic feature network of a coding part in the coding-decoding structure is the iris recognition network model.
5. The method for training an iris segmentation neural network model according to claim 4, wherein training the iris segmentation neural network model based on the iris segmentation sample set comprises:
training the iris segmentation neural network model determined based on the encoding-decoding structure based on the iris segmentation sample set;
completing the training of the iris segmentation neural network model when a loss function is smaller than or equal to a first threshold, wherein the loss function is as follows:
ζ(p_t) = -α_t(1-p_t)^γ log(p_t)
p_t = p, if y = 1 ("is an iris"); p_t = 1-p, otherwise
where α_t is the first adjustment coefficient, γ is the second adjustment coefficient, y is the class of the iris segmentation label, and p is the probability of belonging to the iris positive sample.
6. The method for training an iris segmentation neural network model according to claim 2, wherein the quantizing the weight parameters in the iris segmentation neural network model based on a quantization technique comprises:
training to obtain a clustering center of a weight parameter in the iris segmentation neural network model based on the iris segmentation sample set;
quantizing the cluster centers;
and when the error of the segmentation results on the iris segmentation sample set before and after the weight parameters are quantized is less than or equal to a second threshold, the quantization of the weight parameters of the iris segmentation neural network model is completed.
7. An iris segmentation method, comprising:
acquiring an iris image to be processed;
acquiring an iris area of the iris image to be processed based on iris detection;
inputting the iris region into an iris segmentation neural network model, and acquiring an iris region binary image of the iris image to be processed, wherein the iris segmentation neural network model is the iris segmentation neural network model of any one of claims 1 to 6;
and segmenting the iris image to be processed based on the iris region binary image to obtain an iris image segmentation image.
8. The iris segmentation method as claimed in claim 7, wherein the inputting the iris region into the iris segmentation neural network model to obtain the iris region binary image of the iris image to be processed comprises:
inputting the iris region into the iris segmentation neural network model to obtain an initial iris region binary image of the iris image to be processed;
removing isolated points in the initial iris region binary image to obtain an iris region binary image of the iris image to be processed;
in the initial iris area binary image, the numerical value corresponding to the isolated point is different from the numerical values corresponding to other surrounding pixel points.
9. The iris segmentation method as claimed in claim 7, wherein the obtaining of the iris region of the iris image to be processed based on iris detection comprises:
based on the iris detection, obtaining the vertex coordinates of the iris candidate region of the iris image to be processed, and the length and the width of the iris candidate region;
selecting the larger size of the length and the width as a reference size, and amplifying the reference size by a first multiple to obtain a first size;
and based on the vertex coordinates and the first size, a square area is intercepted on the iris image to be processed, and the iris area is obtained.
10. The iris segmentation method as claimed in claim 9, wherein the intercepting a square area on the iris image to be processed based on the vertex coordinates and the first size to obtain the iris area comprises:
based on the vertex coordinates and the first size, cutting a square area on the iris image to be processed to obtain a first iris area;
and amplifying the first iris area by a preset multiple to obtain the iris area.
11. An iris segmentation apparatus, characterized in that the iris segmentation apparatus comprises:
the module for acquiring the iris image to be processed is used for acquiring the iris image to be processed;
the iris acquisition module is used for acquiring an iris area of the iris image to be processed based on iris detection;
an iris region binary image obtaining module, configured to input the iris region into an iris segmentation neural network model, and obtain an iris region binary image of the iris image to be processed, where the iris segmentation neural network model is the iris segmentation neural network model according to any one of claims 1 to 6;
and the processing module is used for segmenting the iris image to be processed based on the iris region binary image to obtain an iris image segmentation image.
12. An iris segmentation device as claimed in claim 11, wherein the iris region binary map acquiring module is used for:
inputting the iris region into the iris segmentation neural network model to obtain an initial iris region binary image of the iris image to be processed;
removing isolated points in the initial iris region binary image to obtain an iris region binary image of the iris image to be processed;
in the initial iris area binary image, the numerical value corresponding to the isolated point is different from the numerical values corresponding to other surrounding pixel points.
13. The iris segmentation apparatus as claimed in claim 11, wherein the iris region acquisition module is configured to:
based on the iris detection, obtaining the vertex coordinates of the iris candidate region of the iris image to be processed, and the length and the width of the iris candidate region;
selecting the larger size of the length and the width as a reference size, and amplifying the reference size by a first multiple to obtain a first size;
and based on the vertex coordinates and the first size, a square area is intercepted on the iris image to be processed, and the iris area is obtained.
14. The iris segmentation apparatus as claimed in claim 13, wherein the iris region acquisition module is configured to:
based on the vertex coordinates and the first size, cutting a square area on the iris image to be processed to obtain a first iris area;
and amplifying the first iris area by a preset multiple to obtain the iris area.
15. An electronic device, characterized in that the electronic device comprises:
a memory to store instructions; and
a processor for invoking the memory-stored instructions to perform the iris segmentation method of any one of claims 7-10.
16. A computer-readable storage medium, characterized in that,
the computer-readable storage medium stores computer-executable instructions that, when executed by a processor, perform the iris segmentation method of any one of claims 7-10.
CN202010202969.2A 2020-03-20 2020-03-20 Iris segmentation neural network model training method, iris segmentation method and device Active CN111507195B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010202969.2A CN111507195B (en) 2020-03-20 2020-03-20 Iris segmentation neural network model training method, iris segmentation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010202969.2A CN111507195B (en) 2020-03-20 2020-03-20 Iris segmentation neural network model training method, iris segmentation method and device

Publications (2)

Publication Number Publication Date
CN111507195A true CN111507195A (en) 2020-08-07
CN111507195B CN111507195B (en) 2023-10-03

Family

ID=71864576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010202969.2A Active CN111507195B (en) 2020-03-20 2020-03-20 Iris segmentation neural network model training method, iris segmentation method and device

Country Status (1)

Country Link
CN (1) CN111507195B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287872A (en) * 2020-11-12 2021-01-29 北京建筑大学 Iris image segmentation, positioning and normalization method based on multitask neural network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030209893A1 (en) * 1992-05-05 2003-11-13 Breed David S. Occupant sensing system
CN102306289A (en) * 2011-09-16 2012-01-04 兰州大学 Method for extracting iris features based on pulse couple neural network (PCNN)
CN109782902A (en) * 2018-12-17 2019-05-21 中国科学院深圳先进技术研究院 A kind of operation indicating method and glasses
CN110866507A (en) * 2019-11-20 2020-03-06 北京工业大学 Method for protecting mobile phone chatting content based on iris recognition
CN115062351A (en) * 2022-08-19 2022-09-16 北京万里红科技有限公司 Privacy protection circuit and method and electronic equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030209893A1 (en) * 1992-05-05 2003-11-13 Breed David S. Occupant sensing system
CN102306289A (en) * 2011-09-16 2012-01-04 兰州大学 Method for extracting iris features based on pulse couple neural network (PCNN)
CN109782902A (en) * 2018-12-17 2019-05-21 中国科学院深圳先进技术研究院 A kind of operation indicating method and glasses
CN110866507A (en) * 2019-11-20 2020-03-06 北京工业大学 Method for protecting mobile phone chatting content based on iris recognition
CN115062351A (en) * 2022-08-19 2022-09-16 北京万里红科技有限公司 Privacy protection circuit and method and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
蔡烜, 等 (Cai Xuan, et al.): "基于音视频分析的区域安防管控平台" [Regional security management and control platform based on audio and video analysis], vol. 35, no. 6, pages 17-20

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287872A (en) * 2020-11-12 2021-01-29 北京建筑大学 Iris image segmentation, positioning and normalization method based on multitask neural network

Also Published As

Publication number Publication date
CN111507195B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN108764195B (en) Handwriting model training method, handwritten character recognition method, device, equipment and medium
CN109948149B (en) Text classification method and device
WO2022042123A1 (en) Image recognition model generation method and apparatus, computer device and storage medium
CN111259397B (en) Malware classification method based on Markov graph and deep learning
US11302108B2 (en) Rotation and scaling for optical character recognition using end-to-end deep learning
US20200082213A1 (en) Sample processing method and device
WO2022134805A1 (en) Document classification prediction method and apparatus, and computer device and storage medium
US11550996B2 (en) Method and system for detecting duplicate document using vector quantization
CN114332500A (en) Image processing model training method and device, computer equipment and storage medium
CN111444807A (en) Target detection method, device, electronic equipment and computer readable medium
US11687712B2 (en) Lexical analysis training of convolutional neural network by windows of different lengths with matrix of semantic vectors
US11507744B2 (en) Information processing apparatus, information processing method, and computer-readable recording medium
US20200167655A1 (en) Method and apparatus for re-configuring neural network
CN114444668A (en) Network quantization method, network quantization system, network quantization apparatus, network quantization medium, and image processing method
CN111507195B (en) Iris segmentation neural network model training method, iris segmentation method and device
Fonseca et al. Model-agnostic approaches to handling noisy labels when training sound event classifiers
KR102097724B1 (en) Method and apparatus for mitigating the catastrophic forgetting problem in cnn for image recognition
CN113762294B (en) Feature vector dimension compression method, device, equipment and medium
US20210397946A1 (en) Method and apparatus with neural network data processing
CN111091198A (en) Data processing method and device
CN111783088A (en) Malicious code family clustering method and device and computer equipment
CN115881103B (en) Speech emotion recognition model training method, speech emotion recognition method and device
CN113743448B (en) Model training data acquisition method, model training method and device
EP4125010A1 (en) Adaptive learning based systems and methods for optimization of unsupervised clustering
CN111507198B (en) Training method for printing iris detection model, and printing iris detection method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100081 room 701, floor 7, Fuhai international port, Haidian District, Beijing

Applicant after: Beijing wanlihong Technology Co.,Ltd.

Address before: 100081 1504, floor 15, Fuhai international port, Daliushu Road, Haidian District, Beijing

Applicant before: BEIJING SUPERRED TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant