WO2021056808A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents
Image processing method and apparatus, electronic device, and storage medium
- Publication number
- WO2021056808A1 (PCT/CN2019/121695)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- iris
- feature
- image
- feature map
- processing
- Prior art date
Classifications
- G06V 40/197 — Eye characteristics, e.g. of the iris: Matching; Classification
- G06F 18/25 — Pattern recognition: Fusion techniques
- G06F 18/251 — Fusion techniques of input or preprocessed data
- G06V 10/26 — Segmentation of patterns in the image field; detection of occlusion
- G06V 10/462 — Salient features, e.g. scale-invariant feature transforms (SIFT)
- G06V 10/751 — Comparing pixel values or feature values having positional relevance, e.g. template matching
- G06V 10/803 — Fusion of input or preprocessed data at the sensor, preprocessing, feature extraction, or classification level
- G06V 10/82 — Image or video recognition or understanding using neural networks
- G06V 40/193 — Eye characteristics: Preprocessing; Feature extraction
Definitions
- the present disclosure relates to the field of computer vision technology, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
- Iris recognition technology uses the lifelong stability and uniqueness of the iris for identity authentication.
- the superiority of iris recognition gives it great application prospects in finance, e-commerce, security, immigration control, and other fields.
- the present disclosure proposes a technical solution for image processing.
- an image processing method, which includes: acquiring an iris image group, the iris image group including at least two iris images to be compared; detecting the iris position in each iris image and the segmentation result of the iris region in the iris image; performing multi-scale feature extraction and multi-scale feature fusion processing on the image region corresponding to the iris position to obtain the iris feature map corresponding to the iris image; and performing comparison processing using the segmentation results and the iris feature maps respectively corresponding to the at least two iris images, and determining, based on the comparison result of the comparison processing, whether the at least two iris images correspond to the same object.
- multi-scale feature extraction can be used to extract feature information of multiple scales.
- low-level and high-level feature information can be obtained at the same time; multi-scale feature fusion then yields a feature map of higher accuracy, enabling a more accurate comparison and improving the accuracy of the comparison result.
- the performing multi-scale feature extraction and multi-scale feature fusion processing on the image region corresponding to the iris position to obtain the iris feature map corresponding to the iris image includes: performing the multi-scale feature extraction processing on the image region corresponding to the iris position to obtain feature maps of multiple scales; using the feature maps of the multiple scales to form at least one feature group, the feature group including feature maps of at least two scales among the multi-scale feature maps; performing, based on the attention mechanism, the multi-scale feature fusion processing on the feature maps in the feature group to obtain the grouped feature map corresponding to the feature group; and obtaining the iris feature map corresponding to the iris image based on the grouped feature map corresponding to the feature group. Based on the above configuration, the obtained feature maps of multiple scales can be grouped, and the attention mechanism can be introduced to determine the grouped feature map of each group, further improving the accuracy of the obtained iris feature map.
- the performing the multi-scale feature fusion processing on the feature maps in the feature group based on the attention mechanism to obtain the grouped feature map corresponding to the feature group includes: performing first convolution processing on the connected feature map of the feature maps of the at least two scales in the group to obtain a first sub-feature map; performing second convolution processing and activation function processing on the first sub-feature map to obtain a second sub-feature map, the second sub-feature map representing the attention coefficients corresponding to the first sub-feature map; adding the product of the first sub-feature map and the second sub-feature map to the first sub-feature map to obtain a third sub-feature map; and performing third convolution processing on the third sub-feature map to obtain the grouped feature map corresponding to the feature group.
- the obtaining the iris feature map corresponding to the iris image based on the grouped feature map corresponding to the feature group includes: performing weighted summation on the grouped feature maps corresponding to each feature group to obtain the iris feature map corresponding to the iris image. Merging the grouped features of each group by weighted sum realizes effective fusion of the feature information.
- the segmentation result includes a mask image corresponding to the iris region in the iris image, where the first identifier in the mask image represents the iris region, and the second identifier in the mask image represents the area outside the iris region.
- the detecting the position of the iris in the iris image and the segmentation result of the iris region in the iris image includes: performing target detection processing on the iris image to determine the iris position and the pupil position in the iris image; and performing segmentation processing on the iris image based on the determined iris position and pupil position to obtain the segmentation result of the iris region in the iris image. Based on the above configuration, the detection position corresponding to the iris in the iris image and the segmentation result of the iris region can be accurately determined.
- the detecting the position of the iris in the iris image and the segmentation result of the iris region in the iris image further includes: performing normalization processing on the image area corresponding to the iris position of the iris image and on the segmentation result, respectively; and the performing multi-scale feature extraction and multi-scale feature fusion processing on the image region corresponding to the iris position to obtain the iris feature map corresponding to the iris image further includes: performing the multi-scale feature extraction and the multi-scale feature fusion processing on the normalized image region corresponding to the iris position to obtain the iris feature map corresponding to the iris image. Based on the above configuration, normalization processing can be performed on the image area of the iris position and on the segmentation result, which improves applicability.
- using the segmentation results and the iris feature maps respectively corresponding to the at least two iris images to perform the comparison processing includes: determining, using the segmentation results respectively corresponding to the at least two iris images, the first position that is an iris region in both of the at least two iris images; determining the fourth sub-feature map corresponding to the first position in the iris feature map of each of the at least two iris images; and determining the comparison result of the at least two iris images according to the degree of association between the fourth sub-feature maps respectively corresponding to the iris images.
- the segmentation results corresponding to different iris images can thus be used to determine the position of the common iris region in the compared iris images, and the features corresponding to that position can be compared to obtain the comparison result. This reduces the interference of features from regions outside the iris area and improves the accuracy of the comparison.
- the determining whether the at least two iris images correspond to the same object based on the comparison result includes: when the degree of association between the fourth sub-feature maps respectively corresponding to the at least two iris images is greater than a first threshold, determining that the at least two iris images correspond to the same object. Based on the above configuration, the setting of the first threshold allows flexible adaptation to different scenarios, and the comparison result can be conveniently obtained.
- the determining whether the at least two iris images correspond to the same object based on the comparison result may further include: when the degree of association between the fourth sub-feature maps corresponding to the at least two iris images is less than or equal to the first threshold, determining that the at least two iris images correspond to different objects.
- the image processing method is implemented by a convolutional neural network. Based on the above configuration, the comparison result of the two iris images can be obtained accurately, conveniently and quickly through the neural network.
- an image processing device, which includes: an acquisition module for acquiring an iris image group, the iris image group including at least two iris images to be compared; a detection module for detecting the position of the iris in the iris image and the segmentation result of the iris region in the iris image; a feature processing module for performing multi-scale feature extraction and multi-scale feature fusion processing on the image region corresponding to the iris position to obtain the iris feature map corresponding to the iris image; and a comparison module, configured to perform comparison processing using the segmentation results and the iris feature maps corresponding to the at least two iris images, and to determine, based on the comparison result of the comparison processing, whether the at least two iris images correspond to the same object.
- the feature processing module is further configured to perform the multi-scale feature extraction processing on the image region corresponding to the iris position in the iris image to obtain feature maps of multiple scales; use the feature maps of the multiple scales to form at least one feature group, the feature group including feature maps of at least two scales among the multi-scale feature maps; perform, based on the attention mechanism, the multi-scale feature fusion processing on the feature maps in the feature group to obtain the grouped feature map corresponding to the feature group; and obtain the iris feature map corresponding to the iris image based on the grouped feature map corresponding to the feature group.
- the feature processing module is further configured to perform weighted summation on the grouped feature maps corresponding to each feature group to obtain the iris feature map corresponding to the iris image.
- the segmentation result includes a mask image corresponding to the iris region in the iris image, where the first identifier in the mask image represents the iris region, and the second identifier in the mask image represents the area outside the iris region.
- the detection module is further configured to perform target detection processing on the iris image to determine the iris position and the pupil position of the iris image, and to perform segmentation processing on the iris image based on the determined iris position and pupil position to obtain the segmentation result of the iris region in the iris image.
- the detection module is further configured to perform normalization processing on the image area corresponding to the iris position of the iris image and the segmentation result respectively;
- the feature processing module is further configured to perform the multi-scale feature extraction and the multi-scale feature fusion processing on the image region corresponding to the iris position after the normalization processing, to obtain the iris feature map corresponding to the iris image.
- the comparison module is further configured to use the segmentation results respectively corresponding to the at least two iris images to determine the first position that is an iris region in both of the at least two iris images; to determine the fourth sub-feature map corresponding to the first position in the iris feature map of each iris image; and to determine the comparison result of the at least two iris images according to the degree of association between the fourth sub-feature maps respectively corresponding to the at least two iris images.
- the comparison module is further configured to determine that the at least two iris images correspond to the same object when the degree of association between the fourth sub-feature maps respectively corresponding to the at least two iris images is greater than a first threshold.
- the comparison module is further configured to determine that the at least two iris images correspond to different objects when the degree of association between the fourth sub-feature maps corresponding to the at least two iris images is less than or equal to the first threshold.
- the device includes a neural network, and the neural network includes the acquisition module, the detection module, the feature processing module, and the comparison module.
- an electronic device including:
- a memory for storing processor-executable instructions; and
- a processor configured to call the instructions stored in the memory to execute the method described in any one of the first aspect.
- a computer-readable storage medium having computer program instructions stored thereon, and when the computer program instructions are executed by a processor, the method described in any one of the first aspect is implemented.
- a computer program including computer-readable code, where, when the computer-readable code runs in an electronic device, a processor in the electronic device executes the above image processing method.
- the iris region in the iris image is located and segmented to obtain the iris position and the iris segmentation result.
- multi-scale feature extraction and multi-scale feature fusion can then be performed on the iris image to obtain a high-precision iris feature map, and the segmentation result and the iris feature map are used to perform identity recognition and determine whether the iris images correspond to the same object.
- the extracted low-level features and high-level features can be fully integrated through multi-scale feature extraction and multi-scale feature fusion, so that the resulting iris feature takes into account both the low-level texture features and the high-level classification features, improving the accuracy of feature extraction.
- the combination of the segmentation result and the iris feature map also allows only the feature part of the iris area to be considered, reducing the influence of other areas, so that whether the iris images correspond to the same object is identified more accurately and the detection accuracy is higher.
- Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
- Fig. 2 shows a schematic process diagram of an image processing method according to an embodiment of the present disclosure.
- Fig. 3 shows a flowchart of step S20 in an image processing method according to an embodiment of the present disclosure.
- Fig. 4 shows a schematic diagram of preprocessing of an iris image according to an embodiment of the present disclosure.
- Fig. 5 shows a flowchart of step S30 in an image processing method according to an embodiment of the present disclosure.
- Fig. 6 shows a schematic structural diagram of a neural network implementing an image processing method according to an embodiment of the present disclosure.
- Fig. 7 shows a flowchart of step S33 in an image processing method according to an embodiment of the present disclosure.
- Fig. 8 shows a flowchart of step S40 in an image processing method according to an embodiment of the present disclosure.
- Fig. 9 shows a block diagram of an image processing device according to an embodiment of the present disclosure.
- Fig. 10 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- Fig. 11 shows a block diagram of another electronic device according to an embodiment of the present disclosure.
- the embodiments of the present disclosure provide an image processing method that can be used to determine, based on the iris features corresponding to iris images, whether the objects corresponding to the iris images are the same object, for example, whether the images are iris images of the same person.
- the execution subject of the image processing method may be an image processing device.
- the image processing method may be executed by a terminal device or a server or other processing device.
- the terminal device may be a user equipment (UE), mobile device, user terminal, terminal, cellular phone, cordless phone, personal digital assistant (PDA), handheld device, computing device, in-vehicle device, wearable device, etc.
- the server can be a local server or a cloud server.
- the image processing method may be implemented by a processor invoking computer-readable instructions stored in a memory.
- Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure. As shown in Fig. 1, the image processing method includes:
- S10 Acquire an iris image group, the iris image group including at least two iris images to be compared;
- identity verification can be performed through an iris image to identify the identity of an object corresponding to the iris image, or to determine whether the corresponding object has authority.
- the embodiments of the present disclosure may perform feature processing on the iris images, realize the comparison of the iris images based on the obtained features, and confirm whether the objects corresponding to the iris images are the same object.
- the corresponding verification operation may be further performed according to whether the determined iris image corresponds to the same object.
- the embodiments of the present disclosure can first obtain the iris images to be compared; the iris images to be compared form an iris image group, which includes at least two iris images.
- the iris image to be compared in the embodiment of the present disclosure may be collected by an iris camera, or may be transmitted and received by other devices, or may be read from a memory.
- the foregoing is only an exemplary description, and the present disclosure is not specifically limited thereto.
- S20 Detect the position of the iris in the iris image and the segmentation result of the iris region in the iris image;
- preprocessing may be performed on the iris image first, where the preprocessing may include locating the iris and pupil in the iris image and determining the positions of the iris and the pupil.
- the iris position and the pupil position can be respectively expressed as the detection frame of the iris and the position corresponding to the detection frame of the pupil.
- segmentation processing can be further performed on the iris region to obtain corresponding segmentation results, where the segmentation results can be expressed as a mask image.
- the mask image can be expressed in the form of a vector or a matrix, and the mask image can correspond to the pixels of the iris image one-to-one.
- the mask image may include a first identifier and a second identifier, where the first identifier indicates that the corresponding pixel is an iris area, and the second identifier indicates that the corresponding pixel is a non-iris area.
- the first identifier may be "1" and the second identifier may be "0", so that the area where the iris is located can be determined from the positions of the pixels bearing the first identifier in the mask image.
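- as an illustration, a minimal NumPy sketch of such a mask image (the toy size and values are our own; a real mask matches the iris image pixel-for-pixel):

```python
import numpy as np

# Toy 4x6 mask: 1 (first identifier) marks iris pixels,
# 0 (second identifier) marks everything outside the iris area.
mask = np.array([
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 0, 0],
], dtype=np.uint8)

iris_pixels = np.argwhere(mask == 1)  # positions forming the iris area
```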
- S30 Perform multi-scale feature extraction and multi-scale feature fusion processing on the image region corresponding to the iris position to obtain an iris feature map corresponding to the iris image;
- a multi-scale feature extraction process can be performed on the image area corresponding to the iris position; for example, feature maps of at least two scales can be obtained, and convolution processing can then be performed on the feature maps to realize feature fusion and obtain the iris feature map of the iris image.
- multiple feature maps of different scales of the image region corresponding to the iris position can be obtained in the process of feature extraction.
- feature extraction can be performed through a residual network to obtain the feature maps of multiple scales, and convolution processing can then be performed at least once on these feature maps to obtain an iris feature map that fuses features of different scales.
- through multi-scale feature extraction, low-level and high-level feature information can be obtained at the same time; multi-scale feature fusion can then effectively fuse this information and improve the accuracy of the iris feature map.
- the attention mechanism can also be used to obtain different attention coefficients for different features.
- the attention coefficients can indicate the importance of features; using the attention coefficients to perform feature fusion yields more robust and more discriminative features.
- S40 Use the segmentation results and the iris feature maps respectively corresponding to the at least two iris images to perform comparison processing, and determine whether the at least two iris images correspond to the same object.
- the segmentation results of the at least two iris images to be compared can be used to obtain the positions that are iris regions in both images, and the distance between the at least two iris images can be obtained based on the features corresponding to those positions. If the distance is less than the first threshold, the two compared iris images correspond to the same object, that is, they are iris images belonging to the same object; otherwise, if the distance is greater than or equal to the first threshold, the two iris images do not belong to the same object.
- any two iris images can be compared separately to determine whether they correspond to the same object; according to the comparison results of the iris images, the iris images belonging to the same object in the iris image group can be determined, and the number of objects corresponding to the iris images in the iris image group can also be counted.
- Fig. 2 shows a schematic diagram of the process of an image processing method according to an embodiment of the present disclosure. First, two iris images A and B can be acquired, and preprocessing is performed on the two images to obtain the iris position, the pupil position, and the segmentation result corresponding to the iris region (such as the mask image) in each image. The iris feature extraction module can then perform feature extraction and fusion processing on the image area corresponding to the iris position to obtain the iris feature map of each iris image, and the comparison module uses the mask maps and the corresponding iris feature maps to obtain the comparison result (score) indicating whether image A and image B correspond to the same object.
- the embodiments of the present disclosure can locate and segment the iris region in the iris image by performing target detection on the iris image, obtaining a segmentation result corresponding to the iris region; at the same time, multi-scale feature extraction and feature fusion can be performed on the iris image to obtain a high-precision feature map, and the segmentation result and the feature map are then used to perform identity recognition of the iris images and determine whether each iris image corresponds to the same object.
- by fusing features, the extracted low-level features and high-level features can be fully integrated, so that the final iris features take into account both the low-level texture features and the high-level classification features, improving the accuracy of feature extraction; moreover, combining the segmentation result with the feature map means only the feature part of the iris area is considered, which reduces the influence of other areas, identifies more accurately whether the iris images correspond to the same object, and yields higher detection accuracy.
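- to make the Fig. 2 flow concrete, here is a minimal PyTorch-style sketch of the whole pipeline; `detector`, `segmenter`, and `extractor` are hypothetical stand-ins for the trained networks described above, and the 0.7 threshold and box format are assumed values:

```python
import torch
import torch.nn.functional as F

def crop(img, box):
    # box assumed as (x1, y1, x2, y2) pixel coordinates of a detection frame
    x1, y1, x2, y2 = box
    return img[..., y1:y2, x1:x2]

def compare_iris_images(img_a, img_b, detector, segmenter, extractor, threshold=0.7):
    results = []
    for img in (img_a, img_b):
        iris_box, pupil_box = detector(img)          # preprocessing: locate iris and pupil
        mask = segmenter(img, iris_box, pupil_box)   # segmentation result (mask map), (H, W)
        feat = extractor(crop(img, iris_box))        # multi-scale iris feature map, (C, H, W)
        results.append((mask, feat))
    (m1, f1), (m2, f2) = results
    common = (m1 * m2).bool()                        # positions that are iris in both masks
    score = F.cosine_similarity(f1[:, common].flatten(),
                                f2[:, common].flatten(), dim=0)
    return score > threshold                         # True: A and B match the same object
```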
- Fig. 3 shows a flowchart of step S20 in an image processing method according to an embodiment of the present disclosure.
- the detecting the position of the iris in the iris image and the segmentation result of the iris area in the iris image includes:
- S21 Perform target detection processing on the iris image, and determine the iris position and the pupil position of the iris image;
- S22 Perform the segmentation process on the iris image based on the determined iris position and the pupil position to obtain a segmentation result of the iris region in the iris image.
- preprocessing may be performed on the iris image to obtain the iris position of the iris image and the segmentation result of the iris region.
- the target detection process of the iris image can be performed first through a neural network capable of performing target detection.
- the neural network may be a convolutional neural network, which is trained to recognize the position of the iris and the position of the pupil in the iris image.
- Fig. 4 shows a schematic diagram of preprocessing of an iris image according to an embodiment of the present disclosure.
- A represents the iris image, and the iris position and pupil position in the iris image can be determined after the target detection process is executed.
- B represents the image area corresponding to the iris position in the iris image.
- the iris position and the pupil position obtained by performing target detection can be expressed as the positions of the detection frames of the iris and the pupil, respectively, and each position can be expressed as (x1, x2, y1, y2), where (x1, y1) and (x2, y2) are the position coordinates of two diagonal vertices of the detection frame of the iris or the pupil; the location of the area corresponding to the detection frame can be determined from the coordinates of these two vertices.
- the position of the detection frame may also be expressed in other forms, which is not specifically limited in the present disclosure.
- the neural network that performs the target detection processing in the embodiments of the present disclosure may include a Faster R-CNN (a convolutional neural network for fast target recognition) or a RetinaNet (a single-stage target detection network), but the present disclosure is not specifically limited thereto.
- the segmentation of the iris area in the iris image can be further performed, so that the iris area can be segmented and distinguished from other parts such as eyelids and pupils.
- the iris position and the pupil position can be used to directly segment the iris region in the iris image: the image area corresponding to the pupil position is deleted from the image area corresponding to the iris position, the remaining image area is determined as the segmentation result of the iris region, the iris region is assigned the mask value of the first identifier, and the remaining areas are assigned the mask value of the second identifier, yielding the mask map corresponding to the iris region.
- This method has the characteristics of simplicity and convenience, and improves the processing speed.
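- a minimal NumPy sketch of this direct box-subtraction segmentation (the box coordinate order is our assumption):

```python
import numpy as np

def box_subtraction_mask(h, w, iris_box, pupil_box):
    """Mark the iris detection frame with the first identifier (1),
    then delete the pupil detection frame, leaving the second
    identifier (0) everywhere else. Boxes: (x1, y1, x2, y2)."""
    mask = np.zeros((h, w), dtype=np.uint8)
    x1, y1, x2, y2 = iris_box
    mask[y1:y2, x1:x2] = 1          # image area of the iris position
    px1, py1, px2, py2 = pupil_box
    mask[py1:py2, px1:px2] = 0      # remove the pupil area
    return mask

m = box_subtraction_mask(480, 640, iris_box=(200, 150, 440, 390),
                         pupil_box=(290, 240, 350, 300))
```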
- the iris position, the pupil position, and the corresponding iris image can be input to a neural network for performing iris segmentation, and the neural network outputs a mask map corresponding to the iris region in the iris image.
- the neural network for performing iris segmentation may be trained to be able to determine the iris region in the iris image and generate a corresponding mask map.
- the neural network may also be a convolutional neural network, for example, PSPNet (pyramid scene parsing network) or UNet (U-shaped network), but the present disclosure is not limited thereto.
- C in Fig. 4 shows a schematic diagram of the iris region corresponding to the iris image, where the black part represents the image area outside the iris region, whose mask value is the second identifier, and the white part represents the iris region, whose mask value is the first identifier.
- the iris area can be accurately detected, and the accuracy of subsequent comparison processing can be improved.
- after obtaining the iris position in the iris image and the mask map of the iris region (the segmentation result), the embodiments of the present disclosure may also perform normalization processing on the image area corresponding to the iris position and on the mask image, so that the normalized image area and mask image are adjusted to preset specifications.
- the embodiment of the present disclosure can adjust the image area of the iris position and the mask map to a height of 64 pixels and a width of 512 pixels.
- the specific dimensions of the aforementioned preset specifications are not specifically limited in this disclosure.
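- a short sketch of this normalization step in PyTorch, assuming the cropped iris region and mask are batched tensors; the interpolation modes are our choice:

```python
import torch
import torch.nn.functional as F

def normalize_to_spec(iris_region, mask, size=(64, 512)):
    """iris_region: (N, C, H, W) crop at the iris position;
    mask: (N, 1, H, W) with 0/1 identifiers."""
    iris_region = F.interpolate(iris_region, size=size,
                                mode='bilinear', align_corners=False)
    mask = F.interpolate(mask.float(), size=size, mode='nearest')
    return iris_region, mask

region, m = normalize_to_spec(torch.randn(1, 3, 240, 240),
                              torch.ones(1, 1, 240, 240))
```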
- feature processing of the corresponding image area can then be performed based on the obtained iris position to obtain the iris features.
- the embodiment of the present disclosure can also perform multi-scale feature processing on the image region corresponding to the normalized iris position, so as to further improve the feature accuracy.
- the following takes as an example directly performing multi-scale feature extraction and multi-scale fusion processing on the image area corresponding to the iris position; the multi-scale feature processing and multi-scale fusion processing of the normalized image area corresponding to the iris position are the same and are not described repeatedly.
- Fig. 5 shows a flowchart of step S30 in an image processing method according to an embodiment of the present disclosure.
- the performing multi-scale feature extraction and multi-scale feature fusion processing on the image region corresponding to the iris position to obtain the iris feature map corresponding to the iris image includes:
- S31 Perform the multi-scale feature extraction process on the image area corresponding to the iris position in the iris image to obtain feature maps of multiple scales;
- the feature extraction processing may be performed first on the image region corresponding to the iris position in the iris image, where the feature extraction processing may be performed using a feature extraction neural network; for example, a residual network or a feature pyramid network may execute the feature extraction to obtain feature maps of multiple scales corresponding to the image area at the iris position of the iris image.
- Fig. 6 shows a schematic structural diagram of a neural network according to an image processing method implementing an embodiment of the present disclosure.
- the network structure M represents the part of the neural network that performs feature extraction, which may be a residual network such as ResNet-18, although the present disclosure is not limited thereto.
- the iris image and the iris position of the iris image can be input to the feature extraction neural network, which performs feature extraction to obtain the features corresponding to the image area at the iris position of the iris image, that is, the feature maps of multiple scales.
- alternatively, the image area corresponding to the iris position may first be cropped from the iris image and input to the feature extraction neural network to obtain feature maps of multiple scales, where the feature maps can be output from different convolutional layers of the network, yielding at least two feature maps of different scales.
- the obtained feature maps of multiple scales can include low-level feature information (feature maps from the earlier convolutional layers of the network architecture) and high-level feature information (feature maps from the later convolutional layers); fusing these features yields a more accurate and comprehensive iris feature.
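- a hedged sketch of tapping multi-scale feature maps from different stages of a ResNet-18 backbone; which stages to tap, and the backbone itself, are our assumptions:

```python
import torch
import torchvision

class MultiScaleExtractor(torch.nn.Module):
    def __init__(self):
        super().__init__()
        net = torchvision.models.resnet18(weights=None)
        self.stem = torch.nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
        self.layer1, self.layer2, self.layer3 = net.layer1, net.layer2, net.layer3

    def forward(self, x):
        x = self.stem(x)
        f1 = self.layer1(x)   # earlier layer: low-level texture information
        f2 = self.layer2(f1)  # intermediate layer
        f3 = self.layer3(f2)  # later layer: high-level semantic information
        return f1, f2, f3     # feature maps of three different scales

feats = MultiScaleExtractor()(torch.randn(1, 3, 64, 512))
```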
- S32 Use the feature maps of the multiple scales to form at least one feature group, where the feature group includes feature maps of at least two scales in the feature maps of the multiple scales;
- At least one feature group may be formed based on the feature maps of the multiple scales.
- the feature maps of multiple scales can be regarded as one feature group on which subsequent feature fusion processing is performed, or at least two feature groups can be formed, each including at least two feature maps of different scales; different feature groups formed by the embodiments of the present disclosure may include the same feature map, but any two feature groups differ in at least one feature map.
- the multi-scale feature map obtained in step S31 may include F1, F2, and F3, and the scales of the three feature maps are different.
- a first preset number of feature groups may be formed, and the first preset number may be an integer greater than or equal to 1, for example, the first preset number may take a value of 2 in the embodiment of the present disclosure.
- each feature group can be assigned a second preset number of feature maps, where the second preset number of feature maps can be randomly selected from the feature maps of multiple scales to form a feature group; a feature map that has been selected can still be selected by other feature groups.
- the second preset number may be an integer greater than or equal to 2.
- the second preset number in the embodiment of the present disclosure may take a value of 2.
- for example, the feature maps in one formed feature group are F1 and F2, and the feature maps in another feature group can be F1 and F3.
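- the grouping itself is just bookkeeping; a tiny sketch with stand-in tensors (the shapes here are arbitrary):

```python
import torch

f1 = torch.randn(1, 64, 32, 256)    # low-level feature map F1
f2 = torch.randn(1, 128, 16, 128)   # mid-level feature map F2
f3 = torch.randn(1, 256, 8, 64)     # high-level feature map F3

# Two feature groups sharing F1, as in the example above;
# any two groups differ in at least one feature map.
feature_groups = [(f1, f2), (f1, f3)]
```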
- S33 Perform the multi-scale feature fusion processing on the feature maps in the feature group based on the attention mechanism to obtain the grouped feature map corresponding to the feature group;
- feature fusion processing may be performed on the feature maps in each feature group.
- a spatial attention mechanism is adopted.
- the convolution processing based on the attention mechanism can be realized through the spatial attention neural network, and the obtained feature map further highlights the important features.
- the importance of each position of the spatial feature can be learned adaptively, forming an attention coefficient for the feature at each position; the coefficient values lie in the interval [0, 1].
- in Fig. 6, the spatial attention neural network corresponds to the network structure N.
- grouped convolution and standard convolution processing can then be performed to further fuse the features of the feature maps in each feature group, yielding the grouped feature map.
- S34 Obtain an iris feature map corresponding to the iris image based on the grouped feature map corresponding to the feature grouping.
- feature fusion can be performed on the grouped feature maps of different feature groups to obtain the iris features corresponding to the iris image.
- the sum of the grouped feature maps of the feature groups can be used as the iris feature map, or the weighted sum of the grouped feature maps can be used as the iris feature map, where the weighting coefficients of the weighted sum can be set according to requirements and scenarios; the present disclosure does not specifically limit this.
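- a one-line sketch of the weighted-sum fusion; the weights are assumed hyperparameters, since the text leaves them to the requirements of the scene:

```python
import torch

g1 = torch.randn(1, 64, 64, 512)   # grouped feature map of group (F1, F2)
g2 = torch.randn(1, 64, 64, 512)   # grouped feature map of group (F1, F3)

w1, w2 = 0.5, 0.5                  # assumed weighting coefficients
iris_feature_map = w1 * g1 + w2 * g2
```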
- fusion can thus be performed across different feature maps; the attention mechanism further increases the weight of important features, and the fusion of the grouped feature maps of different feature groups integrates the features of each part more comprehensively.
- Fig. 7 shows a flowchart of step S33 in an image processing method according to an embodiment of the present disclosure.
- the performing the multi-scale feature fusion processing on the feature maps in the feature group based on the attention mechanism to obtain the grouped feature map corresponding to the feature group includes:
- S331 Perform a first convolution process on the connected feature maps of the feature maps of the at least two scales in the feature group to obtain a first sub-feature map;
- connection processing may be performed on the feature maps in each feature group, for example by concatenating them in the channel direction, to obtain a connected feature map; Fig. 6 shows an enlarged view of the SAFFM, the neural network implementing the attention mechanism.
- the scale of the connection feature map obtained by the connection process can be expressed as (C, H, W), C represents the number of channels of the connection feature map, H represents the height of the connection feature map, and W represents the width of the connection feature map.
- the feature maps F1 and F2 in the above feature group can be connected, and the feature maps F1 and F3 in the other feature group can be connected, to obtain the corresponding connected feature maps respectively.
- the first convolution processing can be performed on each connected feature map, for example using a 3×3 convolution kernel, followed by batch normalization and activation function processing, to obtain the first sub-feature map corresponding to the connected feature map; the scale of the first sub-feature map can be expressed as (C/2, H, W), and the first convolution processing reduces the number of parameters in the feature map, lowering the subsequent computation cost.
- S332 Perform second convolution processing and activation function processing on the first sub-feature map to obtain a second sub-feature map, where the second sub-feature map represents the attention coefficient corresponding to the first sub-feature map;
- the second convolution processing may be performed on the obtained first sub-feature map using two convolutional layers: the first convolutional layer, followed by batch normalization and activation function processing, produces a first intermediate feature map whose scale can be expressed as (C/8, H, W); the second convolutional layer then applies a 1×1 convolution kernel to the intermediate feature map to obtain a second intermediate feature map of scale (1, H, W).
- dimensionality reduction processing can be performed on the first sub-feature map to obtain a single-channel second intermediate feature map.
- the sigmoid function can be used to perform the activation function processing on the second intermediate feature map; after this processing, the second sub-feature map corresponding to the first sub-feature map is obtained, where each element represents the attention coefficient of the feature value of the corresponding pixel in the first sub-feature map, with values in the range [0, 1].
- S333 Add the product result of the first sub-feature map and the second sub-feature map to the first sub-feature map to obtain a third sub-feature map;
- product processing may be performed on the first sub-feature map and the second sub-feature map, i.e., multiplying corresponding elements; the product result is then added to the first sub-feature map element-wise to obtain the third sub-feature map.
- the feature map output by the SAFFM is this third sub-feature map; since the input feature groups differ, the third sub-feature maps obtained also differ.
- S334 Perform a third convolution process on the third sub-feature map to obtain a grouped feature map corresponding to the feature group.
- a third convolution process may be performed on the third sub-feature map, and the third convolution process may include at least one of grouped convolution processing and standard convolution processing.
- the third convolution process can further realize the further fusion of the feature information in each feature group.
- the third convolution processing can include grouped convolution (depthwise convolution) and standard convolution with a 1×1 convolution kernel, where grouped convolution can speed up the convolution while improving the accuracy of the convolved features.
- the grouped feature map corresponding to each feature group can be finally output.
- the grouped feature map effectively integrates the feature information of each feature map in the feature group.
- the weighted sum or addition of the grouped feature maps can be used to obtain the iris feature map corresponding to the iris image.
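- putting steps S331-S334 together, here is a hedged PyTorch sketch of such a fusion module; the channel reductions (C/2, C/8, 1) follow the text, while the padding, output channels, and the requirement that the two input maps be resized to a common spatial size beforehand are our assumptions:

```python
import torch
import torch.nn as nn

class SAFFM(nn.Module):
    def __init__(self, c, out_c):
        super().__init__()
        self.conv1 = nn.Sequential(                    # S331: 3x3 conv + BN + activation
            nn.Conv2d(c, c // 2, 3, padding=1),
            nn.BatchNorm2d(c // 2), nn.ReLU(inplace=True))
        self.att = nn.Sequential(                      # S332: two conv layers + sigmoid
            nn.Conv2d(c // 2, c // 8, 1),
            nn.BatchNorm2d(c // 8), nn.ReLU(inplace=True),
            nn.Conv2d(c // 8, 1, 1),
            nn.Sigmoid())                              # attention coefficients in [0, 1]
        self.conv3 = nn.Sequential(                    # S334: depthwise + 1x1 standard conv
            nn.Conv2d(c // 2, c // 2, 3, padding=1, groups=c // 2),
            nn.Conv2d(c // 2, out_c, 1))

    def forward(self, fa, fb):
        x = torch.cat([fa, fb], dim=1)  # connected feature map (assumes equal H, W)
        s1 = self.conv1(x)              # first sub-feature map, (C/2, H, W)
        s2 = self.att(s1)               # second sub-feature map, (1, H, W)
        s3 = s1 * s2 + s1               # S333: attended features plus residual
        return self.conv3(s3)           # grouped feature map for this feature group

g = SAFFM(c=192, out_c=64)(torch.randn(1, 64, 16, 128), torch.randn(1, 128, 16, 128))
```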
- Fig. 8 shows a flowchart of step S40 in an image processing method according to an embodiment of the present disclosure.
- the performing the comparison processing using the mask maps and the iris feature maps corresponding to the at least two iris images in the iris image group includes:
- S41 Determine, using the segmentation results respectively corresponding to the at least two iris images, the first position that is an iris region in both of the at least two iris images;
- the segmentation result may be expressed as a mask image indicating the location of the iris region in the iris image; based on this, the first position of the common iris region in the iris images to be compared can be determined according to the mask map of each iris image.
- the first identifier in the mask image indicates the position of the iris region; if a pixel at the same position in the mask images of two iris images has the first identifier as its mask value in both, the pixel lies in the iris region of both images, and the positions of all such pixels determine the first position that is an iris region in both images.
- the first position of the common iris region can also be determined from the product of the mask images of the two iris images: the positions of the pixels whose product result is still the first identifier constitute the first position of the common iris region.
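- a minimal NumPy sketch of this mask-product step:

```python
import numpy as np

mask_a = np.array([[0, 1, 1], [1, 1, 0]], dtype=np.uint8)
mask_b = np.array([[1, 1, 0], [1, 1, 1]], dtype=np.uint8)

common = mask_a * mask_b                   # product of the two mask images
first_position = np.argwhere(common == 1)  # pixels that are iris in both images
```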
- S42 Determine, in the iris feature maps of the at least two iris images, the fourth sub-feature maps respectively corresponding to the first position;
- the embodiment of the present disclosure can obtain the feature corresponding to the above-mentioned first position in the iris feature map of each iris image, that is, the fourth sub-feature map.
- the embodiments of the present disclosure may determine the feature values of the corresponding pixels according to the coordinates of the first position, and form the fourth sub-feature map from the determined feature values and the corresponding pixels.
- in this way, the features at the positions that are iris regions in both images are obtained from the iris feature maps of the two iris images to be compared; performing the comparison using these features reduces the influence of feature information from non-iris areas and improves the accuracy of the comparison.
- S43 Determine a comparison result of the at least two iris images according to the degree of association between the fourth sub-feature maps respectively corresponding to the at least two iris images.
- the degree of association between the fourth sub-feature maps corresponding to the two iris images to be compared can be obtained, and the correlation between the two iris images to be compared, that is, the comparison result, can be determined from this degree of association.
- the above-mentioned correlation degree may be Euclidean distance, or may also be cosine similarity, which is not specifically limited in the present disclosure.
- the comparison processing of two iris images to be compared can be expressed as a score SD(f1, f2), where SD(f1, f2) represents the comparison result (degree of association) between the two iris images, m1 and m2 represent the mask images of the two iris images, and f1 and f2 represent the iris feature maps of the two iris images.
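- a hedged reading of this comparison in PyTorch; the exact form of SD is not reproduced above, so this sketch simply restricts both feature maps to the common iris region and measures their degree of association, with cosine similarity and (negated) Euclidean distance as the two options the text mentions:

```python
import torch
import torch.nn.functional as F

def comparison_score(f1, f2, m1, m2, use_cosine=True):
    common = (m1 * m2).bool()          # first position: iris region in both masks
    v1 = f1[:, common].flatten()       # fourth sub-feature map of image 1
    v2 = f2[:, common].flatten()       # fourth sub-feature map of image 2
    if use_cosine:
        return F.cosine_similarity(v1, v2, dim=0)  # higher = stronger association
    return -torch.dist(v1, v2)                     # negated distance, same ordering

# same_object = comparison_score(f1, f2, m1, m2) > first_threshold
```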
- for the two iris images to be compared, it can be determined whether they correspond to the same person object according to the comparison result.
- if the degree of association between the fourth sub-feature maps corresponding to the two iris images to be compared is greater than the first threshold, the correlation between the two iris images is high, and it can be determined that the two iris images to be compared correspond to the same object.
- if the degree of association between the fourth sub-feature maps corresponding to the two iris images to be compared is less than or equal to the first threshold, the correlation between the two iris images is low, and it is determined that the two iris images to be compared correspond to different objects.
- the first threshold may be a preset value, such as 70%, but it is not a specific limitation of the present disclosure.
- the image processing method provided by the embodiment of the present disclosure can be implemented using a neural network, for example, can be implemented using the network structure shown in FIG. 6, and the process of training the neural network will be described below.
- the training image group can be obtained.
- the training images can include iris images of at least two person objects, each person object having at least one iris image; the resolution, image quality, and size of the iris images can differ, which improves the applicability of the neural network.
- the neural network can be used to perform image processing on the training images, the grouped feature maps corresponding to the feature groups obtained by processing each training image can be collected, and the network loss of the neural network can then be obtained based on the obtained grouped feature maps.
- if the network loss is less than the loss threshold, the detection accuracy of the neural network meets the requirements and the network can be applied; if the network loss is greater than or equal to the loss threshold, the network parameters, such as the convolution parameters, are adjusted through feedback until the obtained loss is less than the loss threshold.
- the loss threshold may be a value set according to requirements, such as 0.1, but it is not a specific limitation of the present disclosure.
- the embodiments of the present disclosure may determine the network loss according to the minimum degree of association between the iris feature maps of the same person object and the maximum degree of association between different person objects.
- the loss function can be expressed as: L_total = λ1·L1 + λ2·L2
- Ls represents the network loss corresponding to the grouped feature maps obtained from one feature fusion branch;
- P represents the total number of person objects;
- K represents the total number of iris images of each person object;
- s represents the index of the feature group;
- m represents the common iris area;
- B represents the number of columns in the grouped feature map;
- the MMSD function represents the degree of association between features; MMSD(f1,s, f2,s) represents the degree of association between the grouped feature maps of two training images, where the grouped feature map f1,s is the feature map obtained after column transposition;
- L_total represents the weighted sum of the network losses corresponding to the grouped feature maps obtained from the different feature fusion branches, that is, the network loss of the entire neural network; λ1 and λ2 respectively represent the weighting coefficients, and L1 and L2 respectively represent the network losses of the two groups.
- In summary, in the embodiments of the present disclosure, the iris region in the iris image is first located and segmented to obtain the iris position and the segmentation result corresponding to the iris region.
- Multi-scale feature extraction and multi-scale feature fusion can then be performed on the iris image to obtain a high-precision iris feature map, and the segmentation result and the iris feature map are used to perform identity recognition on the iris images, determining whether the iris images correspond to the same object.
- Through multi-scale feature extraction and multi-scale feature fusion, the extracted low-level features and high-level features can be fully integrated, so that the finally obtained iris feature takes into account both the low-level texture features and the high-level classification features, improving the accuracy of feature extraction.
- In addition, combining the segmentation result with the iris feature map makes it possible to consider only the feature part of the iris area and reduce the influence of other areas, so whether the iris images correspond to the same object can be identified more accurately, and the accuracy of the detection result is higher.
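- As a hedged, end-to-end illustration of this pipeline (every component named below, `detector`, `segmenter`, `feature_net`, the `crop` helper, and the similarity threshold, is a hypothetical stand-in, not the disclosed implementation):

```python
import torch
import torch.nn.functional as F

def crop(img, box):
    # Assumed helper: box = (x0, y0, x1, y1) in pixel coordinates.
    x0, y0, x1, y1 = box
    return img[..., y0:y1, x0:x1]

def compare_iris_images(img_a, img_b, detector, segmenter, feature_net,
                        threshold=0.8):
    results = []
    for img in (img_a, img_b):
        iris_pos, pupil_pos = detector(img)          # locate iris and pupil
        mask = segmenter(img, iris_pos, pupil_pos)   # segmentation result (iris mask)
        region = crop(img, iris_pos)                 # image region at the iris position
        feat = feature_net(region)                   # multi-scale extraction + fusion
        results.append((feat, mask))
    (f_a, m_a), (f_b, m_b) = results
    # Assumes the feature maps are spatially aligned with the masks.
    common = (m_a > 0) & (m_b > 0)                   # positions that are iris in both
    score = F.cosine_similarity(f_a[..., common].flatten(),
                                f_b[..., common].flatten(), dim=0)
    return score > threshold                         # same object if association is high
```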
- In view of the different importance of texture regions in the iris image, the embodiments of the present disclosure can adopt a spatial attention mechanism in the neural network to allow the network to adaptively learn iris features.
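- One plausible form of such a spatial attention fusion block, consistent with the first/second/third convolution steps recited in the claims, is sketched below; the kernel sizes, channel widths, and sigmoid activation are assumptions:

```python
import torch
import torch.nn as nn

class SpatialAttentionFusion(nn.Module):
    # Fuses the concatenated feature maps of one feature group:
    # conv1 -> first sub-feature map; conv2 + activation -> attention
    # coefficients (second sub-feature map); product + residual add ->
    # third sub-feature map; conv3 -> grouped feature map.
    def __init__(self, in_channels, mid_channels, out_channels):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, mid_channels, 3, padding=1)
        self.conv2 = nn.Conv2d(mid_channels, mid_channels, 3, padding=1)
        self.conv3 = nn.Conv2d(mid_channels, out_channels, 3, padding=1)

    def forward(self, concat_features):
        f1 = self.conv1(concat_features)      # first sub-feature map
        attn = torch.sigmoid(self.conv2(f1))  # attention coefficients
        f3 = f1 * attn + f1                   # residual weighting by spatial attention
        return self.conv3(f3)                 # grouped feature map
```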
- It should be understood that the writing order of the steps does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible inner logic.
- the present disclosure also provides image processing devices, electronic equipment, computer-readable storage media, and programs, all of which can be used to implement any image processing method provided in the present disclosure.
- Fig. 9 shows a block diagram of an image processing device according to an embodiment of the present disclosure. As shown in Fig. 9, the image processing device includes:
- the acquiring module 10 is configured to acquire an iris image group, the iris image group including at least two iris images to be compared;
- the detection module 20 is used to detect the position of the iris in the iris image and the segmentation result of the iris area in the iris image;
- the feature processing module 30 is configured to perform multi-scale feature extraction and multi-scale feature fusion processing on the image region corresponding to the iris position to obtain an iris feature map corresponding to the iris image;
- the comparison module 40 is configured to perform comparison processing using the segmentation results and the iris feature maps respectively corresponding to the at least two iris images, and determine, based on the comparison result of the comparison processing, whether the at least two iris images to be compared correspond to the same object.
- the feature processing module is further configured to perform the multi-scale feature extraction processing on the image region corresponding to the iris position in the iris image to obtain feature maps of multiple scales; use the feature maps of the multiple scales to form at least one feature group, the feature group including feature maps of at least two scales among the feature maps of the multiple scales; perform, based on an attention mechanism, the multi-scale feature fusion processing on the feature maps in the feature group to obtain a grouped feature map corresponding to the feature group; and obtain the iris feature map corresponding to the iris image based on the grouped feature map corresponding to the feature group.
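- A hedged sketch of this flow, reusing the SpatialAttentionFusion class from the previous sketch (the backbone, the number of scales, and the particular grouping of scales are illustrative assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFeatureProcessing(nn.Module):
    # Extracts feature maps at several scales, forms feature groups of at
    # least two scales each, and fuses each group with an attention block
    # into a grouped feature map.
    def __init__(self, channels=(16, 32, 64), out_channels=64):
        super().__init__()
        self.stages = nn.ModuleList()
        c_in = 1  # assumes a single-channel (grayscale) iris region
        for c in channels:
            self.stages.append(nn.Sequential(
                nn.Conv2d(c_in, c, 3, stride=2, padding=1), nn.ReLU()))
            c_in = c
        # Two example groups: (scale 0, scale 1) and (scale 1, scale 2).
        self.fuse01 = SpatialAttentionFusion(channels[0] + channels[1], 32, out_channels)
        self.fuse12 = SpatialAttentionFusion(channels[1] + channels[2], 32, out_channels)

    def forward(self, x):
        feats, h = [], x
        for stage in self.stages:
            h = stage(h)
            feats.append(h)
        size = feats[0].shape[-2:]
        up = [F.interpolate(f, size=size, mode='bilinear', align_corners=False)
              for f in feats]  # bring all scales to a common resolution
        g01 = self.fuse01(torch.cat([up[0], up[1]], dim=1))
        g12 = self.fuse12(torch.cat([up[1], up[2]], dim=1))
        return g01, g12  # grouped feature maps, one per feature group
```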
- the feature processing module is further configured to perform weighted summation on the grouped feature maps corresponding to each of the feature groups to obtain the iris feature map corresponding to the iris image.
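- For instance, the weighted summation could be realized as follows; making the weights learnable scalars is an assumption:

```python
import torch
import torch.nn as nn

class WeightedSum(nn.Module):
    # Combines the grouped feature maps into the final iris feature map via
    # a weighted sum; the weights here are learnable scalars (an assumption).
    def __init__(self, num_groups):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_groups) / num_groups)

    def forward(self, grouped_maps):
        return sum(w * g for w, g in zip(self.weights, grouped_maps))
```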
- the segmentation result includes a mask map corresponding to the iris region in the iris image; a first identifier in the mask map represents the iris region, and a second identifier in the mask map represents the location area outside the iris region.
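- As an illustration, taking 1 as the first identifier and 0 as the second identifier (a common convention, not mandated by the disclosure), a mask map and its application might look like this:

```python
import numpy as np

# Hypothetical example: 1 marks iris pixels, 0 marks everything else
# (pupil, eyelids, background).
mask = np.zeros((64, 512), dtype=np.uint8)
mask[8:56, :] = 1                      # pretend this band is the iris region

iris_features = np.random.rand(64, 512)
masked = iris_features * mask          # keep only features inside the iris region
```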
- the detection module is further configured to perform target detection processing on the iris image to determine the iris position and the pupil position of the iris image, and to perform, based on the determined iris position and pupil position, the segmentation processing on the iris image to obtain the segmentation result of the iris region in the iris image.
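- The disclosure performs this detection with a neural network; purely for illustration, a classical Hough-circle stand-in for locating the pupil and iris could look like the following (the OpenCV parameters are assumptions and would need tuning per dataset):

```python
import cv2
import numpy as np

def locate_iris_and_pupil(gray):
    # Illustrative stand-in: Hough circle detection for the pupil (smaller,
    # darker circle) and the iris (larger, roughly concentric circle).
    blurred = cv2.medianBlur(gray, 5)
    pupil = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                             param1=100, param2=30, minRadius=15, maxRadius=60)
    iris = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                            param1=100, param2=30, minRadius=60, maxRadius=150)
    if pupil is None or iris is None:
        return None, None
    return tuple(iris[0][0]), tuple(pupil[0][0])   # (x, y, r) for each circle
```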
- the detection module is further configured to perform normalization processing respectively on the image region corresponding to the iris position of the iris image and on the segmentation result; the feature processing module is further configured to perform the multi-scale feature extraction and the multi-scale feature fusion processing on the normalized image region corresponding to the iris position to obtain the iris feature map corresponding to the iris image.
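- The disclosure does not spell out the normalization; a common choice in iris recognition is Daugman's rubber-sheet model, which unwraps the annular iris region into a fixed-size rectangle. A minimal sketch, assuming concentric pupil and iris circles centered at (cx, cy):

```python
import numpy as np

def rubber_sheet_normalize(gray, cx, cy, r_pupil, r_iris, height=64, width=512):
    # Map the annulus between the pupil and iris boundaries onto a
    # height x width rectangle (radial x angular sampling).
    thetas = np.linspace(0, 2 * np.pi, width, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, height)
    rr, tt = np.meshgrid(radii, thetas, indexing='ij')
    xs = (cx + rr * np.cos(tt)).astype(np.int32).clip(0, gray.shape[1] - 1)
    ys = (cy + rr * np.sin(tt)).astype(np.int32).clip(0, gray.shape[0] - 1)
    return gray[ys, xs]
```

The same mapping can be applied to the segmentation mask so that the normalized image region and the normalized segmentation result stay aligned.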
- the comparison module is further configured to use the segmentation results respectively corresponding to the at least two iris images to determine a first position at which both of the at least two iris images are iris regions; respectively determine, in the iris feature maps of the at least two iris images, the fourth sub-feature maps corresponding to the first position; and determine the comparison result of the at least two iris images according to the degree of association between the fourth sub-feature maps respectively corresponding to the at least two iris images.
- the comparison module is further configured to determine that the at least two iris images correspond to the same object when the degree of association between the fourth sub-feature maps respectively corresponding to the at least two iris images is greater than a first threshold.
- the comparison module is further configured to determine that the at least two iris images correspond to different objects when the degree of association between the fourth sub-feature maps respectively corresponding to the at least two iris images is less than or equal to the first threshold.
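- A hedged sketch of this masked comparison (cosine similarity again stands in for the unspecified degree-of-association measure, and the first threshold value is an assumption):

```python
import torch
import torch.nn.functional as F

def compare(feat_a, mask_a, feat_b, mask_b, first_threshold=0.8):
    # First position: locations that are iris region in BOTH masks.
    common = (mask_a > 0) & (mask_b > 0)
    # Fourth sub-feature maps: features restricted to the common positions.
    sub_a = feat_a[..., common].flatten()
    sub_b = feat_b[..., common].flatten()
    association = F.cosine_similarity(sub_a, sub_b, dim=0)
    return bool(association > first_threshold)  # True: same object
```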
- the device includes a neural network
- the neural network includes the acquisition module, the detection module, the feature processing module, and the comparison module.
- the functions or modules contained in the device provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments.
- the embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the above-mentioned method when executed by a processor.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the above method.
- the electronic device can be provided as a terminal, server or other form of device.
- Fig. 10 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
- the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
- the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
- the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
- the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
- the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
- the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
- the power supply component 806 provides power for various components of the electronic device 800.
- the power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
- the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
- the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (MIC), and when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal.
- the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
- the audio component 810 further includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
- the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 814 includes one or more sensors for providing state assessment of various aspects for the electronic device 800. For example, the sensor component 814 can detect the on/off status of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor component 814 can also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a temperature change of the electronic device 800.
- the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
- the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
- the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, to implement the above methods.
- a non-volatile computer-readable storage medium is also provided, such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
- the embodiments of the present disclosure also provide a computer program product, including computer readable code, and when the computer readable code runs on the device, the processor in the device executes instructions for implementing the method provided in any of the above embodiments.
- the computer program product can be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
- Fig. 11 shows a block diagram of another electronic device according to an embodiment of the present disclosure.
- the electronic device 1900 may be provided as a server. Referring to Fig. 11, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs.
- the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
- the processing component 1922 is configured to execute instructions to perform the above-described methods.
- the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958.
- the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
- a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the foregoing method.
- the present disclosure may be a system, method and/or computer program product.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- A non-exhaustive list of computer-readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, and mechanical encoding devices, such as punch cards or raised structures in grooves with instructions stored thereon.
- the computer-readable storage medium used here is not to be interpreted as transient signals themselves, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, light pulses through fiber-optic cables), or electrical signals transmitted through wires.
- the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device.
- the computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- Computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
- In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be customized by using the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to realize various aspects of the present disclosure.
- These computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or other programmable data processing device, thereby producing a machine such that, when these instructions are executed by the processor of the computer or other programmable data processing device, a device that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams is produced. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions make computers, programmable data processing apparatuses, and/or other devices work in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowcharts or block diagrams may represent a module, program segment, or part of an instruction, and the module, program segment, or part of an instruction contains one or more executable instructions for realizing the specified logical function. In some alternative implementations, the functions marked in the blocks may also occur in a different order from that marked in the drawings; for example, two consecutive blocks can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
- each block in the block diagrams and/or flowcharts, and the combination of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Human Computer Interaction (AREA)
- Ophthalmology & Optometry (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
Description
Claims (25)
- 1. An image processing method, characterized by comprising: acquiring an iris image group, the iris image group including at least two iris images to be compared; detecting the iris position in the iris image and a segmentation result of the iris region in the iris image; performing multi-scale feature extraction and multi-scale feature fusion processing on the image region corresponding to the iris position to obtain an iris feature map corresponding to the iris image; performing comparison processing using the segmentation results and the iris feature maps respectively corresponding to the at least two iris images, and determining, based on the comparison result obtained by the comparison processing, whether the at least two iris images correspond to the same object.
- 2. The method according to claim 1, characterized in that performing the multi-scale feature extraction and multi-scale feature fusion processing on the image region corresponding to the iris position to obtain the iris feature map corresponding to the iris image comprises: performing the multi-scale feature extraction processing on the image region corresponding to the iris position in the iris image to obtain feature maps of multiple scales; using the feature maps of the multiple scales to form at least one feature group, the feature group including feature maps of at least two scales among the feature maps of the multiple scales; performing, based on an attention mechanism, the multi-scale feature fusion processing on the feature maps in the feature group to obtain a grouped feature map corresponding to the feature group; obtaining the iris feature map corresponding to the iris image based on the grouped feature map corresponding to the feature group.
- 3. The method according to claim 2, characterized in that performing, based on the attention mechanism, the multi-scale feature fusion processing on the feature maps in the feature group to obtain the grouped feature map corresponding to the feature group comprises: performing first convolution processing on a concatenated feature map of the feature maps of the at least two scales in the feature group to obtain a first sub-feature map; performing second convolution processing and activation function processing on the first sub-feature map to obtain a second sub-feature map, the second sub-feature map representing the attention coefficients corresponding to the first sub-feature map; adding the product of the first sub-feature map and the second sub-feature map to the first sub-feature map to obtain a third sub-feature map; performing third convolution processing on the third sub-feature map to obtain the grouped feature map corresponding to the feature group.
- 4. The method according to claim 2 or 3, characterized in that obtaining the iris feature map corresponding to the iris image based on the grouped feature map corresponding to the feature group comprises: performing weighted summation on the grouped feature maps corresponding to each of the feature groups to obtain the iris feature map corresponding to the iris image.
- 5. The method according to any one of claims 1-4, characterized in that the segmentation result includes a mask map corresponding to the iris region in the iris image, a first identifier in the mask map represents the iris region, and a second identifier in the mask map represents the location area outside the iris region.
- 6. The method according to any one of claims 1-5, characterized in that detecting the iris position in the iris image and the segmentation result of the iris region in the iris image comprises: performing target detection processing on the iris image to determine the iris position and the pupil position of the iris image; performing, based on the determined iris position and pupil position, the segmentation processing on the iris image to obtain the segmentation result of the iris region in the iris image.
- 7. The method according to any one of claims 1-6, characterized in that detecting the iris position in the iris image and the segmentation result of the iris region in the iris image further comprises: performing normalization processing respectively on the image region corresponding to the iris position of the iris image and the segmentation result; and performing the multi-scale feature extraction and the multi-scale feature fusion processing on the image region corresponding to the iris position to obtain the iris feature map corresponding to the iris image further comprises: performing the multi-scale feature extraction and the multi-scale feature fusion processing on the normalized image region corresponding to the iris position to obtain the iris feature map corresponding to the iris image.
- 8. The method according to any one of claims 1-7, characterized in that performing comparison processing using the segmentation results and the iris feature maps respectively corresponding to the at least two iris images comprises: using the segmentation results respectively corresponding to the at least two iris images to determine a first position at which both of the at least two iris images are iris regions; respectively determining, in the iris feature maps of the at least two iris images, fourth sub-feature maps corresponding to the first position; performing the comparison processing on the at least two iris images according to the degree of association between the fourth sub-feature maps respectively corresponding to the at least two iris images.
- 9. The method according to claim 8, characterized in that determining, based on the comparison result obtained by the comparison processing, whether the at least two iris images correspond to the same object comprises: determining that the at least two iris images correspond to the same object when the degree of association between the fourth sub-feature maps respectively corresponding to the at least two iris images is greater than a first threshold.
- 10. The method according to claim 8 or 9, characterized in that determining, based on the comparison result, whether the at least two iris images correspond to the same object further comprises: determining that the at least two iris images correspond to different objects when the degree of association between the fourth sub-feature maps respectively corresponding to the at least two iris images is less than or equal to the first threshold.
- 11. The method according to any one of claims 1-10, characterized in that the image processing method is implemented by a convolutional neural network.
- 12. An image processing device, characterized by comprising: an acquisition module configured to acquire an iris image group, the iris image group including at least two iris images to be compared; a detection module configured to detect the iris position in the iris image and a segmentation result of the iris region in the iris image; a feature processing module configured to perform multi-scale feature extraction and multi-scale feature fusion processing on the image region corresponding to the iris position to obtain an iris feature map corresponding to the iris image; a comparison module configured to perform comparison processing using the segmentation results and the iris feature maps respectively corresponding to the at least two iris images, and determine, based on the comparison result of the comparison processing, whether the at least two iris images correspond to the same object.
- 13. The device according to claim 12, characterized in that the feature processing module is further configured to: perform the multi-scale feature extraction processing on the image region corresponding to the iris position in the iris image to obtain feature maps of multiple scales; use the feature maps of the multiple scales to form at least one feature group, the feature group including feature maps of at least two scales among the feature maps of the multiple scales; perform, based on an attention mechanism, the multi-scale feature fusion processing on the feature maps in the feature group to obtain a grouped feature map corresponding to the feature group; obtain the iris feature map corresponding to the iris image based on the grouped feature map corresponding to the feature group.
- 14. The device according to claim 13, characterized in that the feature processing module is further configured to: perform the multi-scale feature extraction processing on the image region corresponding to the iris position in the iris image to obtain feature maps of multiple scales; use the feature maps of the multiple scales to form at least one feature group, the feature group including feature maps of at least two scales among the feature maps of the multiple scales; perform, based on an attention mechanism, the multi-scale feature fusion processing on the feature maps in the feature group to obtain a grouped feature map corresponding to the feature group; obtain the iris feature map corresponding to the iris image based on the grouped feature map corresponding to the feature group.
- 15. The device according to claim 13 or 14, characterized in that the feature processing module is further configured to perform weighted summation on the grouped feature maps corresponding to each of the feature groups to obtain the iris feature map corresponding to the iris image.
- 16. The device according to any one of claims 12-15, characterized in that the segmentation result includes a mask map corresponding to the iris region in the iris image, a first identifier in the mask map represents the iris region, and a second identifier in the mask map represents the location area outside the iris region.
- 17. The device according to any one of claims 12-16, characterized in that the detection module is further configured to perform target detection processing on the iris image to determine the iris position and the pupil position of the iris image, and to perform, based on the determined iris position and pupil position, the segmentation processing on the iris image to obtain the segmentation result of the iris region in the iris image.
- 18. The device according to any one of claims 12-17, characterized in that the detection module is further configured to perform normalization processing respectively on the image region corresponding to the iris position of the iris image and the segmentation result; and the feature processing module is further configured to perform the multi-scale feature extraction and the multi-scale feature fusion processing on the normalized image region corresponding to the iris position to obtain the iris feature map corresponding to the iris image.
- 19. The device according to any one of claims 12-18, characterized in that the comparison module is further configured to use the segmentation results respectively corresponding to the at least two iris images to determine a first position at which both of the at least two iris images are iris regions; respectively determine, in the iris feature maps of the at least two iris images, fourth sub-feature maps corresponding to the first position; and determine the comparison result of the at least two iris images according to the degree of association between the fourth sub-feature maps respectively corresponding to the at least two iris images.
- 20. The device according to claim 19, characterized in that the comparison module is further configured to determine that the at least two iris images correspond to the same object when the degree of association between the fourth sub-feature maps respectively corresponding to the at least two iris images is greater than a first threshold.
- 21. The device according to claim 19 or 20, characterized in that the comparison module is further configured to determine that the at least two iris images correspond to different objects when the degree of association between the fourth sub-feature maps respectively corresponding to the at least two iris images is less than or equal to the first threshold.
- 22. The device according to any one of claims 12-21, characterized in that the device includes a neural network, and the neural network includes the acquisition module, the detection module, the feature processing module, and the comparison module.
- 23. An electronic device, characterized by comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the method according to any one of claims 1 to 11.
- 24. A computer-readable storage medium on which computer program instructions are stored, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 11.
- 25. A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes the method according to any one of claims 1 to 11.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217008623A KR20210047336A (ko) | 2019-09-26 | 2019-11-28 | 이미지 처리 방법 및 장치, 전자 기기 및 기억 매체 |
SG11202013254VA SG11202013254VA (en) | 2019-09-26 | 2019-11-28 | Image processing method and device, electronic apparatus and storage medium |
JP2021500196A JP7089106B2 (ja) | 2019-09-26 | 2019-11-28 | 画像処理方法及び装置、電子機器、コンピュータ読取可能記憶媒体及びコンピュータプログラム |
US17/137,819 US11532180B2 (en) | 2019-09-26 | 2020-12-30 | Image processing method and device and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910919121.9A CN110688951B (zh) | 2019-09-26 | 2019-09-26 | 图像处理方法及装置、电子设备和存储介质 |
CN201910919121.9 | 2019-09-26 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/137,819 Continuation US11532180B2 (en) | 2019-09-26 | 2020-12-30 | Image processing method and device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021056808A1 true WO2021056808A1 (zh) | 2021-04-01 |
Family
ID=69110423
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/121695 WO2021056808A1 (zh) | 2019-09-26 | 2019-11-28 | 图像处理方法及装置、电子设备和存储介质 |
Country Status (7)
Country | Link |
---|---|
US (1) | US11532180B2 (zh) |
JP (1) | JP7089106B2 (zh) |
KR (1) | KR20210047336A (zh) |
CN (1) | CN110688951B (zh) |
SG (1) | SG11202013254VA (zh) |
TW (1) | TWI724736B (zh) |
WO (1) | WO2021056808A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113642393A (zh) * | 2021-07-07 | 2021-11-12 | 重庆邮电大学 | 基于注意力机制的多特征融合视线估计方法 |
CN113643305A (zh) * | 2021-08-10 | 2021-11-12 | 珠海复旦创新研究院 | 一种基于深度网络上下文提升的人像检测与分割方法 |
CN114519723A (zh) * | 2021-12-24 | 2022-05-20 | 上海海洋大学 | 一种基于金字塔影像分割的陨石坑自动提取方法 |
WO2023169582A1 (zh) * | 2022-03-11 | 2023-09-14 | 北京字跳网络技术有限公司 | 图像增强方法、装置、设备及介质 |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111292343B (zh) * | 2020-01-15 | 2023-04-28 | 东北大学 | 一种基于多视角下的肺叶分割方法和装置 |
CN111260627B (zh) * | 2020-01-15 | 2023-04-28 | 东北大学 | 一种基于肺叶的肺气肿区域判断方法和装置 |
CN111401145B (zh) * | 2020-02-26 | 2022-05-03 | 三峡大学 | 一种基于深度学习与ds证据理论的可见光虹膜识别方法 |
CN111858989B (zh) * | 2020-06-09 | 2023-11-10 | 西安工程大学 | 一种基于注意力机制的脉冲卷积神经网络的图像分类方法 |
CN111681273B (zh) * | 2020-06-10 | 2023-02-03 | 创新奇智(青岛)科技有限公司 | 图像分割方法、装置、电子设备及可读存储介质 |
CN111862034B (zh) * | 2020-07-15 | 2023-06-30 | 平安科技(深圳)有限公司 | 图像检测方法、装置、电子设备及介质 |
CN112184635A (zh) * | 2020-09-10 | 2021-01-05 | 上海商汤智能科技有限公司 | 目标检测方法、装置、存储介质及设备 |
CN112288723B (zh) * | 2020-10-30 | 2023-05-23 | 北京市商汤科技开发有限公司 | 缺陷检测方法、装置、计算机设备及存储介质 |
CN112287872B (zh) * | 2020-11-12 | 2022-03-25 | 北京建筑大学 | 基于多任务神经网络的虹膜图像分割、定位和归一化方法 |
CN113076926B (zh) * | 2021-04-25 | 2022-11-18 | 华南理工大学 | 一种带语义引导的多尺度目标检测方法及系统 |
CN113486815B (zh) * | 2021-07-09 | 2022-10-21 | 山东力聚机器人科技股份有限公司 | 一种行人重识别系统和方法、计算机设备及存储介质 |
CN113591843B (zh) * | 2021-07-12 | 2024-04-09 | 中国兵器工业计算机应用技术研究所 | 仿初级视觉皮层的目标检测方法、装置及设备 |
CN114359120B (zh) * | 2022-03-21 | 2022-06-21 | 深圳市华付信息技术有限公司 | 遥感影像处理方法、装置、设备及存储介质 |
CN114998980B (zh) * | 2022-06-13 | 2023-03-31 | 北京万里红科技有限公司 | 一种虹膜检测方法、装置、电子设备及存储介质 |
CN115100730B (zh) * | 2022-07-21 | 2023-08-08 | 北京万里红科技有限公司 | 虹膜活体检测模型的训练方法、虹膜活体检测方法及装置 |
CN116704666A (zh) * | 2023-06-21 | 2023-09-05 | 合肥中科类脑智能技术有限公司 | 售卖方法及计算机可读存储介质、自动售卖机 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778664A (zh) * | 2016-12-29 | 2017-05-31 | 天津中科智能识别产业技术研究院有限公司 | 一种虹膜图像中虹膜区域的分割方法及其装置 |
CN107506754A (zh) * | 2017-09-19 | 2017-12-22 | 厦门中控智慧信息技术有限公司 | 虹膜识别方法、装置及终端设备 |
CN108229531A (zh) * | 2017-09-29 | 2018-06-29 | 北京市商汤科技开发有限公司 | 对象特征处理方法、装置、存储介质和电子设备 |
US20180218213A1 (en) * | 2017-02-02 | 2018-08-02 | Samsung Electronics Co., Ltd. | Device and method of recognizing iris |
CN109426770A (zh) * | 2017-08-24 | 2019-03-05 | 合肥虹慧达科技有限公司 | 虹膜识别方法 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6247813B1 (en) * | 1999-04-09 | 2001-06-19 | Iritech, Inc. | Iris identification system and method of identifying a person through iris recognition |
JP3586431B2 (ja) * | 2001-02-28 | 2004-11-10 | 松下電器産業株式会社 | 個人認証方法および装置 |
JP2004206444A (ja) * | 2002-12-25 | 2004-07-22 | Matsushita Electric Ind Co Ltd | 個人認証方法および虹彩認証装置 |
CN100498837C (zh) * | 2004-05-10 | 2009-06-10 | 松下电器产业株式会社 | 虹彩注册方法、虹彩注册装置 |
KR100629550B1 (ko) * | 2004-11-22 | 2006-09-27 | 아이리텍 잉크 | 다중스케일 가변영역분할 홍채인식 방법 및 시스템 |
CN101539990B (zh) * | 2008-03-20 | 2011-05-11 | 中国科学院自动化研究所 | 一种虹膜图像鲁棒特征选择和快速比对的方法 |
CN102844766B (zh) * | 2011-04-20 | 2014-12-24 | 中国科学院自动化研究所 | 基于人眼图像的多特征融合身份识别方法 |
CN104063872B (zh) * | 2014-07-04 | 2017-02-15 | 西安电子科技大学 | 基于改进视觉注意模型的序列图像显著区域检测方法 |
CN106326841A (zh) * | 2016-08-12 | 2017-01-11 | 合肥虹视信息工程有限公司 | 一种快速虹膜识别算法 |
RU2670798C9 (ru) * | 2017-11-24 | 2018-11-26 | Самсунг Электроникс Ко., Лтд. | Способ аутентификации пользователя по радужной оболочке глаз и соответствующее устройство |
CN110059589B (zh) * | 2019-03-21 | 2020-12-29 | 昆山杜克大学 | 一种基于Mask R-CNN神经网络的虹膜图像中虹膜区域的分割方法 |
-
2019
- 2019-09-26 CN CN201910919121.9A patent/CN110688951B/zh active Active
- 2019-11-28 KR KR1020217008623A patent/KR20210047336A/ko not_active Application Discontinuation
- 2019-11-28 WO PCT/CN2019/121695 patent/WO2021056808A1/zh active Application Filing
- 2019-11-28 JP JP2021500196A patent/JP7089106B2/ja active Active
- 2019-11-28 SG SG11202013254VA patent/SG11202013254VA/en unknown
-
2020
- 2020-01-06 TW TW109100344A patent/TWI724736B/zh active
- 2020-12-30 US US17/137,819 patent/US11532180B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778664A (zh) * | 2016-12-29 | 2017-05-31 | 天津中科智能识别产业技术研究院有限公司 | 一种虹膜图像中虹膜区域的分割方法及其装置 |
US20180218213A1 (en) * | 2017-02-02 | 2018-08-02 | Samsung Electronics Co., Ltd. | Device and method of recognizing iris |
CN109426770A (zh) * | 2017-08-24 | 2019-03-05 | 合肥虹慧达科技有限公司 | 虹膜识别方法 |
CN107506754A (zh) * | 2017-09-19 | 2017-12-22 | 厦门中控智慧信息技术有限公司 | 虹膜识别方法、装置及终端设备 |
CN108229531A (zh) * | 2017-09-29 | 2018-06-29 | 北京市商汤科技开发有限公司 | 对象特征处理方法、装置、存储介质和电子设备 |
Non-Patent Citations (2)
Title |
---|
YULIN SI ET AL.: "Novel Approaches to Improve Robustness, Accuracy and Rapidity of Iris Recognition Systems", IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, vol. 8, no. 1, 19 January 2012 (2012-01-19), XP011398081, ISSN: 1551-3203 * |
ZHAO, YANMING: "THE IRIS RECOGNITION ALGORITHM BASED ON SCALE CORRELATION MULTI-FEATURE EXTRACTION AND FUSION", COMPUTER APPLICATIONS AND SOFTWARE, vol. 30, no. 7, 15 July 2013 (2013-07-15), pages 188 - 192, XP055794560, ISSN: 1000-386X * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113642393A (zh) * | 2021-07-07 | 2021-11-12 | 重庆邮电大学 | 基于注意力机制的多特征融合视线估计方法 |
CN113642393B (zh) * | 2021-07-07 | 2024-03-22 | 重庆邮电大学 | 基于注意力机制的多特征融合视线估计方法 |
CN113643305A (zh) * | 2021-08-10 | 2021-11-12 | 珠海复旦创新研究院 | 一种基于深度网络上下文提升的人像检测与分割方法 |
CN113643305B (zh) * | 2021-08-10 | 2023-08-25 | 珠海复旦创新研究院 | 一种基于深度网络上下文提升的人像检测与分割方法 |
CN114519723A (zh) * | 2021-12-24 | 2022-05-20 | 上海海洋大学 | 一种基于金字塔影像分割的陨石坑自动提取方法 |
CN114519723B (zh) * | 2021-12-24 | 2024-05-28 | 上海海洋大学 | 一种基于金字塔影像分割的陨石坑自动提取方法 |
WO2023169582A1 (zh) * | 2022-03-11 | 2023-09-14 | 北京字跳网络技术有限公司 | 图像增强方法、装置、设备及介质 |
Also Published As
Publication number | Publication date |
---|---|
TWI724736B (zh) | 2021-04-11 |
US11532180B2 (en) | 2022-12-20 |
JP2022511217A (ja) | 2022-01-31 |
CN110688951B (zh) | 2022-05-31 |
TW202113756A (zh) | 2021-04-01 |
JP7089106B2 (ja) | 2022-06-21 |
US20210117674A1 (en) | 2021-04-22 |
CN110688951A (zh) | 2020-01-14 |
SG11202013254VA (en) | 2021-04-29 |
KR20210047336A (ko) | 2021-04-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021056808A1 (zh) | 图像处理方法及装置、电子设备和存储介质 | |
CN111310616B (zh) | 图像处理方法及装置、电子设备和存储介质 | |
WO2021051650A1 (zh) | 人脸和人手关联检测方法及装置、电子设备和存储介质 | |
TWI747325B (zh) | 目標對象匹配方法及目標對象匹配裝置、電子設備和電腦可讀儲存媒介 | |
CN109522910B (zh) | 关键点检测方法及装置、电子设备和存储介质 | |
WO2021031609A1 (zh) | 活体检测方法及装置、电子设备和存储介质 | |
US10007841B2 (en) | Human face recognition method, apparatus and terminal | |
US20120321193A1 (en) | Method, apparatus, and computer program product for image clustering | |
WO2021208667A1 (zh) | 图像处理方法及装置、电子设备和存储介质 | |
CN109934275B (zh) | 图像处理方法及装置、电子设备和存储介质 | |
CN109977860B (zh) | 图像处理方法及装置、电子设备和存储介质 | |
CN110532956B (zh) | 图像处理方法及装置、电子设备和存储介质 | |
CN111435432B (zh) | 网络优化方法及装置、图像处理方法及装置、存储介质 | |
WO2019205605A1 (zh) | 人脸特征点的定位方法及装置 | |
CN111259967A (zh) | 图像分类及神经网络训练方法、装置、设备及存储介质 | |
US20220270352A1 (en) | Methods, apparatuses, devices, storage media and program products for determining performance parameters | |
TWI770531B (zh) | 人臉識別方法、電子設備和儲存介質 | |
WO2023155393A1 (zh) | 特征点匹配方法、装置、电子设备、存储介质和计算机程序产品 | |
CN111723715B (zh) | 一种视频显著性检测方法及装置、电子设备和存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2021500196 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20217008623 Country of ref document: KR Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19946496 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19946496 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.05.2023) |
|