CN113505716B - Training method of vein recognition model, and recognition method and device of vein image - Google Patents


Info

Publication number
CN113505716B
CN113505716B (application CN202110807293.4A)
Authority
CN
China
Prior art keywords
vein
training
image
training image
labels
Prior art date
Legal status
Active
Application number
CN202110807293.4A
Other languages
Chinese (zh)
Other versions
CN113505716A (en)
Inventor
秦华锋
巩长庆
杨公平
王军
潘在宇
Current Assignee
Chongqing Weimai Zhilian Technology Co ltd
Original Assignee
Chongqing Technology and Business University
Priority date
Filing date
Publication date
Application filed by Chongqing Technology and Business University
Priority to CN202110807293.4A
Publication of CN113505716A
Application granted
Publication of CN113505716B
Legal status: Active
Anticipated expiration


Classifications

    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24137: Distances to cluster centroids
    • G06F18/2414: Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G06F18/2415: Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/25: Fusion techniques
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application provides a training method for a vein recognition model, a vein image recognition method and apparatus, an electronic device, and a storage medium. The training method comprises the following steps: determining a first high-dimensional vein feature corresponding to a first training image and the predicted distribution of the label in the first training image; obtaining a plurality of second high-dimensional vein features; inputting the first high-dimensional vein feature and the plurality of second high-dimensional vein features into a graph neural network and determining a first distribution of the label in the first training image; determining the target distribution of the label in the first training image from the first distribution of the label and the label of the first training image; and determining the vein recognition model based on the target distribution of the label and the predicted distribution of the label. On the basis of learning the label distribution of the input image's class, the vein recognition model provided by the application synchronously learns a target label distribution that encodes the relationships between labels, so that the recognition accuracy of a conventional prediction model is improved and the recognition effect is further enhanced.

Description

Training method of vein recognition model, and recognition method and device of vein image
Technical Field
The present application relates to the field of data analysis technologies, and in particular, to a training method for a vein recognition model, a vein image recognition method, a vein image recognition device, an electronic device, and a storage medium.
Background
The finger vein biological recognition technology is a method for carrying out identity authentication by using a blood vessel distribution pattern formed when blood flows through a subcutaneous superficial blood vessel of a finger as biological characteristics.
However, in the prior art, finger vein features are extracted using tags associated with the vein information in a finger vein picture. Such a tag reflects the basic problem of an example and can describe it, but a single tag cannot accurately represent all the relationships of the example; that is, one example does not necessarily map to only one tag. Because the relationships between tags are ignored, the accuracy of the extracted vein features is low, resulting in a poor recognition effect.
Disclosure of Invention
In view of the above, an object of the present application is to provide a training method for a vein recognition model, a vein image recognition method and apparatus, an electronic device, and a storage medium. A vein feature prediction classifier is combined with a graph neural network to deeply learn the relationship between the features of an input first training image and the features of second training images of other types, and a first distribution of the tags is determined. The first distribution of the tags is fused with the tag of the input image to determine a target distribution, and the vein recognition model is determined based on the target distribution of the tags and the predicted distribution of the tags. On the basis of learning the tag distribution of the input image's class, the vein recognition model provided in the present application synchronously learns a target tag distribution that encodes the relationships between tags, so that the recognition accuracy of a conventional prediction model is improved while the recognition effect is further enhanced.
The application mainly comprises the following aspects:
in a first aspect, an embodiment of the present application provides a training method for a vein recognition model, where the training method includes:
acquiring a training image set; wherein the training image set comprises a plurality of types of training image subsets; and each training image in the training image set comprises a label of vein information; the label is used for representing a real identity type corresponding to the vein information in each training image;
inputting any first training image in any type of training image subset into a vein feature prediction classifier, determining a first high-dimensional vein feature corresponding to the first training image and prediction distribution of a label in the first training image, and determining corresponding initial identity type probability according to the prediction distribution;
inputting any second training image in a plurality of training image subsets of other types into a vein feature extraction model to obtain a plurality of second high-dimensional vein features;
inputting the first high-dimensional vein feature and a plurality of second high-dimensional vein features into a neural network of a graph, and determining a first distribution of labels in the first training image;
and determining the target distribution of the labels in the first training image according to the first distribution of the labels and the labels of the first training image, and determining a vein recognition model based on the target distribution of the labels and the predicted distribution of the labels.
In one possible embodiment, the target distribution of labels in the first training image is determined from the first distribution of labels and the labels of the first training image by:
and performing weighted fusion on the first distribution of the labels and the labels of the first training image to determine the target distribution of the labels in the first training image.
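As an illustration, the weighted fusion above might be sketched as follows. The fusion weight `alpha` and the renormalization step are assumptions for this sketch; the patent does not state the exact fusion formula.

```python
import numpy as np

def target_distribution(first_dist, label_index, num_classes, alpha=0.5):
    """Weighted fusion of the graph-network label distribution with the
    ground-truth one-hot label.  `alpha` is a hypothetical fusion weight."""
    one_hot = np.zeros(num_classes)
    one_hot[label_index] = 1.0
    target = alpha * first_dist + (1.0 - alpha) * one_hot
    return target / target.sum()   # renormalize to a valid distribution

# Example: 4 identity classes; the graph network already favors class 2.
first = np.array([0.1, 0.2, 0.6, 0.1])
t = target_distribution(first, label_index=2, num_classes=4, alpha=0.5)
```

With `alpha=0.5`, the fused target keeps the inter-label relationships learned by the graph network while pulling probability mass toward the true label.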
In one possible implementation, the set of training images is obtained by:
acquiring an initial training image set;
carrying out binarization and edge-extraction processing on each initial training image in the initial training image set, and determining the image of interest corresponding to each initial training image
and optimizing the rotation amount and the translation amount of each image of interest and each initial training image to determine a training image set.
In one possible embodiment, the vein feature prediction classifier is obtained by:
obtaining a plurality of sample images and labels of sample vein information in each sample image; the label is used for representing a real sample identity type corresponding to the sample vein information in each sample image;
inputting the image of the sample vein information in each sample image into an initial classification network to obtain a predicted sample identity type corresponding to the sample vein information in each sample image;
and when the loss value between the real sample identity type corresponding to each sample vein information and the prediction sample identity type corresponding to the sample vein information is smaller than a preset threshold value, training is cut off, and a vein feature prediction classifier is obtained.
In a possible embodiment, the determining a vein recognition model based on the target distribution of the tags and the predicted distribution of the tags includes:
determining a loss value between the target distribution of tags and the predicted distribution of tags;
and when the loss value between the target distribution of the label and the predicted distribution of the label is smaller than a preset threshold value, training is cut off, and a vein recognition model is determined.
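The patent does not name the loss function used to compare the two distributions. A common choice for matching a predicted label distribution to a target label distribution is the KL divergence, sketched here under that assumption (the threshold value is also illustrative):

```python
import numpy as np

def kl_divergence(target, predicted, eps=1e-12):
    """KL(target || predicted): a plausible distribution-matching loss for
    label-distribution learning; the patent only speaks of 'a loss value'."""
    target = np.clip(target, eps, 1.0)
    predicted = np.clip(predicted, eps, 1.0)
    return float(np.sum(target * np.log(target / predicted)))

def should_stop(target, predicted, threshold=1e-3):
    # Training is cut off once the loss falls below the preset threshold.
    return kl_divergence(target, predicted) < threshold

identical = np.array([0.05, 0.10, 0.80, 0.05])
loss_zero = kl_divergence(identical, identical)
```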
In a second aspect, an embodiment of the present application further provides a vein image recognition method, where the recognition method performs the steps of the training method in any one of the possible implementations of the first aspect:
acquiring an image to be identified;
determining the identity type probability corresponding to the vein information in the image to be recognized based on the vein recognition model;
determining a final identity type based on the identity type probability.
In a third aspect, an embodiment of the present application further provides a training apparatus for a vein recognition model, where the recognition apparatus includes:
the first acquisition module is used for acquiring a training image set; wherein the training image set comprises a plurality of types of training image subsets; and each training image in the set of training images includes a label of vein information; the label is used for representing a real identity type corresponding to the vein information in each training image;
the first determination module is used for inputting any first training image in any type of training image subset into a vein feature prediction classifier, determining a first high-dimensional vein feature corresponding to the first training image and prediction distribution of a label in the first training image, and determining corresponding initial identity type probability according to the prediction distribution;
the second acquisition module is used for inputting any second training image in the plurality of training image subsets of other types into the vein feature extraction model to acquire a plurality of second high-dimensional vein features;
a second determining module, configured to input the first high-dimensional vein feature and a plurality of second high-dimensional vein features into a graph neural network, and determine a first distribution of labels in the first training image;
and the third determining module is used for determining the target distribution of the labels in the first training image according to the first distribution of the labels and the labels of the first training image, and determining a vein recognition model based on the target distribution of the labels and the predicted distribution of the labels.
In a fourth aspect, an embodiment of the present application further provides an apparatus for recognizing a vein image, where the apparatus includes:
the third acquisition module is used for acquiring an image to be identified;
the fourth determining module is used for determining the identity type probability corresponding to the vein information in the image to be recognized based on the vein recognition model;
and the fifth determining module is used for determining the final identity type based on the identity type probability.
In a fifth aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions being executable by the processor to perform the steps of the training method as described in any one of the possible embodiments of the first aspect or the steps of the recognition method as described in any one of the possible embodiments of the second aspect.
In a sixth aspect, the present embodiments also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, where the computer program is executed by a processor to perform the steps of the training method as described in any one of the possible embodiments of the first aspect or the steps of the recognition method as described in any one of the possible embodiments of the second aspect.
In the embodiment of the application, the vein feature prediction classifier is combined with the graph neural network, the relationship between the features of the input first training image and the features of the second training images of other types is deeply learned, the first distribution of the labels is determined, the first distribution of the labels is fused with the labels of the input images, the target distribution is determined, and the vein recognition model is determined based on the target distribution of the labels and the prediction distribution of the labels.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a flowchart illustrating a training method of a vein recognition model according to an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a vein image recognition method according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram illustrating a training apparatus for a vein recognition model according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram illustrating a vein image recognition apparatus provided in an embodiment of the present application;
fig. 5 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Description of the main element symbols:
in the figure: 300-a training device; 310-a first obtaining module; 320-a first determination module; 330-a second obtaining module; 340-a second determination module; 350-a third determination module; 400-identification means; 410-a third obtaining module; 420-a fourth determination module; 430-a fifth determining module; 500-an electronic device; 510-a processor; 520-a memory; 530-bus.
Detailed Description
To make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Further, it should be understood that the schematic drawings are not drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and that steps without logical context may be performed in reverse order or concurrently. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
To enable those skilled in the art to utilize the present disclosure, the following embodiments are presented in conjunction with a specific application scenario, "vein recognition model," which it would be apparent to those skilled in the art that the general principles defined herein may be applied to other embodiments and application scenarios without departing from the spirit and scope of the present application.
The method, the system, the electronic device and the storage medium in the embodiments of the present application may be applied to any scene that needs to perform vein image recognition, and the embodiments of the present application do not limit specific application scenes, and any scheme that uses the training method of the vein recognition model, the vein image recognition method, the vein image recognition apparatus, the electronic device and the storage medium provided in the embodiments of the present application is within the scope of the present application.
It is noted that, through research conducted before the present application was proposed, it was found that finger vein features are extracted using tags associated with the vein information in a finger vein picture. Such a tag reflects the basic problem of an example and can describe it, but a single tag cannot accurately represent all the relationships of the example; that is, in the prior art, one example does not necessarily map to only one tag. Because the relationships between tags are ignored, the accuracy of the extracted vein features is low, which results in a poor recognition effect.
Based on this, embodiments of the present application provide a training method for a vein recognition model, a vein image recognition method and apparatus, an electronic device, and a storage medium. A vein feature prediction classifier is combined with a graph neural network to deeply learn the relationship between the features of an input first training image and the features of second training images of other types, and a first distribution of the tags is determined. The first distribution of the tags is fused with the tag of the input image to determine a target distribution, and the vein recognition model is determined based on the target distribution of the tags and the predicted distribution of the tags. On the basis of learning the tag distribution of the input image's class, the vein recognition model provided by the present application synchronously learns a target tag distribution that encodes the relationships between tags, so that the recognition accuracy of a conventional prediction model is improved while the recognition effect is further enhanced.
For the convenience of understanding of the present application, the technical solutions provided in the present application will be described in detail with reference to specific embodiments.
Referring to fig. 1, fig. 1 is a flowchart illustrating a training method of a vein recognition model according to an embodiment of the present disclosure. As shown in fig. 1, the training method provided in the embodiment of the present application includes the following steps:
s101, acquiring a training image set; wherein the training image set comprises a plurality of types of training image subsets; and each training image in the set of training images includes a label of vein information; the label is used for representing the real identity type corresponding to the vein information in each training image.
In the step, a large number of training image sets are obtained and used for carrying out model training on the vein recognition model provided by the application, the training image sets comprise a large number of training image subsets of multiple types, and the purpose of carrying out training by using the training image subsets of multiple types is to enable the recognition type of the obtained vein recognition model to be more accurate.
Here, the training image set may be divided into a plurality of training image subsets according to, but not limited to, the following: each training image subset may correspond to one class, for example the finger vein images of a single user, possibly captured from different shooting angles.
Further, a training image set is obtained by:
an initial training image set is acquired.
Wherein the initial training image set is the original image set without preprocessing; it may be, but is not limited to, a set of acquired finger vein images.
And carrying out binarization and marginalization processing on each initial training image in the initial training image set, and determining an interested image corresponding to each initial training image.
Firstly, edge extraction and binarization are performed on the finger region of each initial training image in the initial training image set to obtain an edge image and a binary image; the edge image is then subtracted from the binary image to obtain a difference image. In the difference image, the finger region and the other regions have been separated. Because the area of the finger region is significantly larger than that of the background regions, the other regions are removed using a second threshold, yielding a binary image that contains only the finger region, i.e., the image of interest corresponding to each initial training image.
Here, the process of determining the image of interest corresponding to each of the initial training images, i.e., the process of determining the binary image of the finger region, is described by the following embodiments:
firstly, the image I is binarized with a first threshold to obtain a binary image D, and an edge image J is simultaneously obtained with the Canny edge-detection algorithm.

The binarization of the image I is:

D(x, y) = 1 if I(x, y) > t, otherwise D(x, y) = 0,

where I(x, y) represents the gray value at the pixel point (x, y) and t is the first threshold.
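A minimal sketch of the first-threshold binarization described above (the threshold value used here is illustrative, not a value from the patent):

```python
import numpy as np

def binarize(I, t):
    """First-threshold binarization of a gray image I:
    pixels brighter than t become 1, all others 0."""
    return (I > t).astype(np.uint8)

I = np.array([[10, 200],
              [150, 30]])
D = binarize(I, t=100)
```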
The edge image J is obtained with the Canny edge-detection templates; the specific template matrices shown as figures in the original document are not reproduced here.

Thus, the difference image is represented as B = D - J.
In the above, the difference image is used to extract the image of interest.
The specific extraction process is as follows. Suppose the difference image B contains N 8-connected regions, where the i-th region has area R_i (i = 1, 2, ..., N) and corresponding coordinate set z_i. The image of interest is then

B(z_k), where k = argmax_i R_i,

that is, the binary mask restricted to the largest connected region, which is the finger region.
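The largest-region selection above can be sketched as follows; this is a plain breadth-first labelling of 8-connected regions for illustration, not the patent's implementation:

```python
import numpy as np
from collections import deque

def largest_region_mask(B):
    """Keep only the largest 8-connected foreground region of the binary
    difference image B (the finger region, whose area dominates)."""
    h, w = B.shape
    seen = np.zeros_like(B, dtype=bool)
    best = set()
    for sy in range(h):
        for sx in range(w):
            if B[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                      # BFS over 8-neighbours
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w \
                                    and B[ny, nx] and not seen[ny, nx]:
                                seen[ny, nx] = True
                                q.append((ny, nx))
                if len(comp) > len(best):
                    best = set(comp)
    out = np.zeros_like(B)
    for y, x in best:
        out[y, x] = 1
    return out

B = np.array([[1, 1, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
roi = largest_region_mask(B)   # keeps the 3-pixel region, drops the 2-pixel one
```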
And optimizing the rotation amount and the translation amount of each image of interest and each initial training image to determine a training image set.
Here, optimization in both directions of the amount of rotation and the amount of translation is performed for the image of interest, and after the optimization, the optimized image of interest is determined as a training image set for training use.
Thus, the optimization of the image of interest in terms of rotation amount is:
calculating the p + q step m of the image binary image Dpq: and further derive its central moment from these moments.
First, a p + q step m of the binary image D is calculatedpq
Figure RE-GDA0003209413850000111
And center of gravity
Figure RE-GDA0003209413850000112
And
Figure RE-GDA0003209413850000113
Figure RE-GDA0003209413850000114
Figure RE-GDA0003209413850000115
then, its p + q order center distance upqIs calculated as follows
Figure RE-GDA0003209413850000116
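The moment and central-moment computations can be sketched as below, assuming a binary image (pixel values 0 or 1), so that summing over the nonzero pixels equals weighting each term by D(x, y):

```python
import numpy as np

def raw_moment(D, p, q):
    """m_pq = sum_x sum_y x^p * y^q * D(x, y) for a binary image D."""
    ys, xs = np.nonzero(D)          # coordinates of foreground pixels
    return float(np.sum(xs**p * ys**q))

def central_moment(D, p, q):
    """u_pq about the centre of gravity (x0, y0) = (m10/m00, m01/m00)."""
    m00 = raw_moment(D, 0, 0)
    x0 = raw_moment(D, 1, 0) / m00
    y0 = raw_moment(D, 0, 1) / m00
    ys, xs = np.nonzero(D)
    return float(np.sum((xs - x0)**p * (ys - y0)**q))

# A vertical bar: centroid at (1, 1), symmetric about the x-axis.
D = np.array([[0, 1, 0],
              [0, 1, 0],
              [0, 1, 0]])
```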
Then, the central moment is used for calculating the rotation offset delta theta of the binary image D and the original image I in the horizontal direction, so that the original image I and the binary image D are corrected, and the corrected images are I 'and D' respectively.
After the central moments u_11, u_20 and u_02 are calculated by the above equation, the rotation angle is computed as:

theta = (1/2) * arctan( 2 * u_11 / (u_20 - u_02) ).
the original image I and the binary image D are corrected by the rotation angle theta, and the corrected images are the first original image I' and the first binary image D', respectively.
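A sketch of the rotation-angle computation from the central moments; this is the standard moment-based orientation formula, assumed here to be the angle used for the correction described above, with atan2 used for robustness when u_20 = u_02:

```python
import math

def rotation_angle(u20, u02, u11):
    """theta = 0.5 * atan2(2*u11, u20 - u02): moment-based orientation
    of a binary region (standard formula; values below are illustrative)."""
    return 0.5 * math.atan2(2.0 * u11, u20 - u02)

# Equal spread along both axes with positive cross-moment: a 45-degree tilt.
theta = rotation_angle(u20=4.0, u02=4.0, u11=2.0)
```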
Here, the optimization of the image of interest in terms of translation amount is:
calculating the offsets Delta_x and Delta_y of the finger region in the horizontal and vertical directions from the moments of the binary image D.

The moments m_00, m_10 and m_01 of the image D are calculated, and the offsets in the horizontal and vertical directions are computed as Delta_x = m_10 / m_00 and Delta_y = m_01 / m_00, i.e., the centre of gravity of the finger region.

Normalizing the images I' and D' in the horizontal and vertical directions by these offsets yields I'' and D''.
At this time, the optimized image of interest, i.e. the training image set, is:
P=I″*D″
where * denotes element-wise multiplication of the corresponding entries of the two matrices.
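The translation offsets and the final element-wise product P = I'' * D'' might be sketched as (values illustrative):

```python
import numpy as np

def translation_offsets(D):
    """Delta_x = m10/m00, Delta_y = m01/m00: the centre of gravity of the
    binary finger mask, used to normalize I'' and D''."""
    ys, xs = np.nonzero(D)
    m00 = len(xs)
    return xs.sum() / m00, ys.sum() / m00

D2 = np.array([[0, 0, 0],
               [0, 1, 1],
               [0, 1, 1]])
dx, dy = translation_offsets(D2)

# P = I'' * D'': element-wise product masks the image to the finger region.
P_demo = np.arange(9).reshape(3, 3) * D2
```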
S102, inputting any first training image in any type of training image subset into a vein feature prediction classifier, determining a first high-dimensional vein feature corresponding to the first training image and prediction distribution of a label in the first training image, and determining corresponding initial identity type probability according to the prediction distribution.
In this step, after the training image set has been divided by category into a plurality of types of training image subsets, a first training image from any type of training image subset is input into the vein feature prediction classifier to obtain the predicted distribution of the label in the first training image, and the first high-dimensional vein feature is taken from the fully connected layer of the vein feature prediction classifier. The predicted distribution of the label in the first training image represents the initial identity type probability corresponding to the first training image, and the vein feature prediction classifier is trained using the learned target distribution as supervision information.
Here, image Embedding is the technical operation of converting an input 2-dimensional training image into a one-dimensional first high-dimensional vein feature through a fully connected layer; it effectively extracts the key feature information of the training image. One of the Embedding networks used is described below as an example:
The image Embedding network uses 6 convolutional layers for feature extraction and 4 max-pooling layers to pool the outputs of the convolutional layers, i.e., the training image is divided into non-overlapping sub-blocks and the maximum value of each sub-block is retained to generate the pooled image. The network also contains 6 BatchNorm layers and 3 Dropout layers, and uses LeakyReLU as the activation function. A fully connected layer at the end of the Embedding flattens the output into the input format of the graph convolutional network, i.e., a one-dimensional second high-dimensional vein feature. The one-dimensional second high-dimensional vein features and the first high-dimensional vein feature are concatenated into a vector and input into the graph neural network.
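The non-overlapping max pooling described for the Embedding network, i.e., splitting the image into disjoint sub-blocks and keeping each block's maximum, can be sketched as (dimensions assumed divisible by the block size):

```python
import numpy as np

def max_pool(img, k):
    """Non-overlapping k x k max pooling: reshape into (h/k, k, w/k, k)
    blocks and reduce each block to its maximum value."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).max(axis=(1, 3))

x = np.array([[1, 2, 5, 0],
              [3, 4, 1, 1],
              [0, 9, 2, 2],
              [8, 0, 3, 7]])
pooled = max_pool(x, 2)
```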
Further, the vein feature prediction classifier is obtained by:
obtaining a plurality of sample images and labels of sample vein information in each sample image; the label is used for representing the identity type of the real sample corresponding to the sample vein information in each sample image.
Before the training image set is obtained and divided by category, a plurality of sample images and the labels of the sample vein information in each sample image are first acquired; the labels are used to pre-train an initial classification network, so that the initial classification network can identify finger vein features and determine the identity types corresponding to those features.
Inputting the image of the sample vein information in each sample image into an initial classification network to obtain a predicted sample identity type corresponding to the sample vein information in each sample image.
The initial classification network is a general deep neural network classifier, which may include, but is not limited to, classifiers such as VGG, ResNet, and GoogLeNet.
Here, the image containing the sample vein information in each sample image is input into the initial classification network, generating the predicted sample identity type corresponding to the sample vein information in each sample image; each input sample image may be preprocessed to a size of 224 × 224.
When the loss value between the real sample identity type corresponding to each piece of sample vein information and the predicted sample identity type corresponding to that sample vein information is smaller than a preset threshold, training is terminated and the vein feature prediction classifier is obtained.
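The pre-training loop described above can be sketched as follows. The backbone, optimizer, learning rate, and loss threshold are all assumptions; the only elements taken from the text are the (image, identity label) supervision and the stop condition on the loss value.

```python
import torch
import torch.nn as nn

def pretrain_classifier(model, loader, num_epochs=50, loss_threshold=0.05, lr=1e-3):
    # Train the initial classification network on (sample image, identity label)
    # pairs; stop once the mean loss between the true and predicted identity
    # types falls below the preset threshold. Hyperparameters are assumptions.
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for epoch in range(num_epochs):
        total, n = 0.0, 0
        for images, labels in loader:     # images assumed preprocessed (e.g. 224x224)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            total += loss.item() * len(images)
            n += len(images)
        if total / n < loss_threshold:    # training is terminated early
            break
    return model
```

The returned network is the vein feature prediction classifier; its fully connected layer later supplies the first high-dimensional vein feature.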
S103, inputting any second training image in the plurality of training image subsets of other types into the vein feature extraction model, and obtaining a plurality of second high-dimensional vein features.
In this step, any second training image in a plurality of training image subsets of other types than the above type, which are generated by dividing the training image set, is input into a vein feature extraction model, so as to obtain a plurality of second high-dimensional vein features extracted by the vein feature extraction model.
Wherein each of the second high-dimensional vein features is a high-dimensional vein feature of a finger vein in any one of a plurality of training image subsets of other types than the above-mentioned type.
Here, data amplification and label amplification may be performed on the acquired first training image and on the second training images in the other types of training image subsets; specifically, each acquired first training image and each of the second training images may be amplified so that two copies of each first training image and of each second training image are obtained.
Correspondingly, assuming the training image set contains N types in total, the labels are expanded to twice their original dimension, so that the predicted distributions of the labels become 2N-dimensional one-dimensional vectors, as follows:
For example, if the predicted-distribution label (i.e., the logit label) of the label in the first training image is (0, 1, 0), it becomes (0, 0, 1, 1, 0, 0) after amplification; in addition to the first high-dimensional vein feature, the amplified label is then used when fusing the first distribution of the labels output by the graph neural network model.
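Reading the example (0, 1, 0) → (0, 0, 1, 1, 0, 0) as each label component being repeated twice (one copy per amplified image), the 2× dimensional expansion can be sketched as:

```python
import numpy as np

def amplify_label(label):
    # Expand an N-dimensional label vector to a 2N-dimensional vector by
    # repeating each component twice, matching the two amplified copies of
    # each training image. This reading of the expansion is an assumption.
    return np.repeat(np.asarray(label), 2)

print(amplify_label([0, 1, 0]))  # [0 0 1 1 0 0]
```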
S104, inputting the first high-dimensional vein feature and the plurality of second high-dimensional vein features into a graph neural network, and determining the first distribution of the labels in the first training image.
In the step, the first high-dimensional vein feature output by the vein feature prediction classifier and a plurality of second high-dimensional vein features extracted by the vein feature extraction model are input into a graph neural network, so that a first distribution of labels in the first training image is obtained.
The first distribution of the labels in the first training image is used for learning the relation between the first training image and other types of training images and for performing deep training and adjustment on the trained vein feature prediction model subsequently.
Here, the graph neural network has the ability to learn the relationship between samples, which in this document can efficiently learn the first distribution of labels in the first training image.
Thus, the graph neural network may be specifically represented by:
the graph neural network is dynamically composed of N layers of graph model networks, and each layer of graph model network is composed of a relation matrix learning block and a Gconv block. For the relationship matrix learning layer, consisting of 5 layers of convolution and 4 layers of BatchNorm, the convolution kernels are both 1 in size and using Leaky ReLU as the activation function and using the discard method to randomly release some neurons to prevent overfitting.
In the Gconv block, all the full link layers and the BatchNorm are formed, and after output, the softmax function is used for predicting the label probability distribution of the input samples to each class.
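One layer of such a network can be sketched as follows. The stated layer counts (five 1 × 1 convolutions and four BatchNorm layers in the relation block; fully connected plus BatchNorm plus softmax in the Gconv block) come from the description; computing the relation matrix from pairwise feature differences, the hidden width, and the dropout rate are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationBlock(nn.Module):
    # Relation matrix learning block sketch: five 1x1 convolutions and four
    # BatchNorm layers applied to pairwise feature differences, with LeakyReLU
    # activations and Dropout; outputs an n x n row-normalized relation matrix.
    def __init__(self, dim, hidden=64, p_drop=0.3):
        super().__init__()
        chans = [dim, hidden, hidden, hidden, hidden, 1]
        layers = []
        for i in range(5):
            layers.append(nn.Conv2d(chans[i], chans[i + 1], kernel_size=1))
            if i < 4:
                layers += [nn.BatchNorm2d(chans[i + 1]),
                           nn.LeakyReLU(0.1),
                           nn.Dropout(p_drop)]
        self.net = nn.Sequential(*layers)

    def forward(self, v):                                   # v: (n, dim) node features
        diff = (v[:, None, :] - v[None, :, :]).abs()        # (n, n, dim) pairwise differences
        a = self.net(diff.permute(2, 0, 1).unsqueeze(0))    # (1, 1, n, n)
        return torch.softmax(a.squeeze(0).squeeze(0), dim=-1)

class GconvBlock(nn.Module):
    # Gconv block sketch: fully connected layer plus BatchNorm; softmax on the
    # output predicts the label probability distribution of each input sample.
    def __init__(self, dim, num_classes):
        super().__init__()
        self.fc = nn.Linear(dim, num_classes)
        self.bn = nn.BatchNorm1d(num_classes)

    def forward(self, a, v):              # aggregate neighbor features, then classify
        return F.softmax(self.bn(self.fc(a @ v)), dim=-1)
```

In use, the concatenated first and second high-dimensional vein features form the node set `v`; the relation block learns the relationships between samples and the Gconv block produces the first distribution of the labels.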
S105, determining target distribution of the labels in the first training image according to the first distribution of the labels and the labels of the first training image, and determining a vein recognition model based on the target distribution of the labels and the predicted distribution of the labels.
In this step, the target distribution of the labels in the first training image is determined according to the first distribution of the labels and the labels of the first training image by:
and performing weighted fusion on the first distribution of the labels and the labels of the first training image to determine the target distribution of the labels in the first training image.
The obtained first distribution of the labels is fused with the label of the first training image; the fusion mode includes, but is not limited to, weighting by a fusion coefficient, and the target distribution of the labels in the first training image is determined after the weighted fusion.
The purpose of the fusion is that the original labels in the training image set ignore the degree of correlation between a training image's own type and the training images of other types; the fusion highlights the image's membership in its own class while enhancing the supervision information of the training images, thereby yielding better training and prediction results.
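The weighted fusion can be sketched as a convex combination governed by a fusion coefficient; the coefficient value and the renormalization step are assumptions:

```python
import numpy as np

def fuse_target_distribution(first_dist, one_hot_label, alpha=0.5):
    # Weighted fusion of the first distribution of the labels (from the graph
    # neural network) with the one-hot label of the first training image;
    # alpha is an assumed fusion coefficient.
    first_dist = np.asarray(first_dist, dtype=float)
    one_hot_label = np.asarray(one_hot_label, dtype=float)
    target = alpha * one_hot_label + (1.0 - alpha) * first_dist
    return target / target.sum()   # keep the target a valid probability distribution

print(fuse_target_distribution([0.2, 0.5, 0.3], [0.0, 1.0, 0.0], alpha=0.6))
```

Raising `alpha` emphasizes the image's own class; lowering it preserves more of the learned inter-class relationships.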
Here, determining the target distribution of the labels serves to strengthen the relationship between the first training image and the training images of other types.
After the target distribution of the labels is determined, further deep training is carried out on the vein feature prediction classifier based on the target distribution of the labels and the prediction distribution of the labels, and a vein recognition model is determined.
Further, the determining a vein recognition model based on the target distribution of the tags and the predicted distribution of the tags includes:
a loss value between the target distribution of tags and the predicted distribution of tags is determined.
When the loss value between the target distribution of the labels and the predicted distribution of the labels is smaller than a preset threshold, training is terminated and the vein recognition model is determined.
The loss value between the target distribution of the labels and the predicted distribution of the labels is calculated using the KL divergence, and the vein feature prediction classifier is trained in depth using stochastic gradient descent.
The target distribution of the labels, obtained by fusion with the label of the first training image, serves as the supervision information y_s, and the KL divergence computes the loss L between the target distribution of the labels and the predicted distribution ŷ of the labels:

L = KL(y_s ‖ ŷ) = Σ_i y_s(i) log( y_s(i) / ŷ(i) )
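A KL-divergence loss between the target and predicted label distributions can be sketched with PyTorch's `kl_div`, which expects the log of the predicted distribution as its first argument; the epsilon guard is an assumption for numerical safety:

```python
import torch
import torch.nn.functional as F

def kl_loss(target_dist, pred_dist, eps=1e-8):
    # L = KL(y_s || y_hat) = sum_i y_s(i) * log(y_s(i) / y_hat(i)).
    # F.kl_div takes log-probabilities as the first (input) argument.
    return F.kl_div((pred_dist + eps).log(), target_dist, reduction='batchmean')

y_s = torch.tensor([[0.1, 0.8, 0.1]])     # fused target distribution of the labels
y_hat = torch.tensor([[0.2, 0.6, 0.2]])   # predicted distribution of the labels
loss = kl_loss(y_s, y_hat)
```

This loss can then be minimized with stochastic gradient descent, as described above.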
Here, after the vein recognition model is trained, the vein recognition model is verified using a verification image set; the verification may use, but is not limited to, stochastic gradient descent.
Then, after the vein recognition model is verified, it is tested using the test image set; the output for each test image is the probability that the test image belongs to each type, and the type corresponding to the maximum probability value is selected as the final classification result.
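Selecting the type with the maximum probability as the classification result amounts to an argmax over the output distribution:

```python
import torch

def classify(probs):
    # probs: (batch, num_classes) probabilities that each test image belongs
    # to each type; the type with the maximum probability is the final result.
    return probs.argmax(dim=1)

probs = torch.tensor([[0.1, 0.7, 0.2],
                      [0.6, 0.3, 0.1]])
print(classify(probs))  # tensor([1, 0])
```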
Compared with the prior art, the vein recognition model training method combines the vein feature prediction classifier with the graph neural network to deeply learn the relationship between the features of the input first training image and the features of the second training images of other types, and determines the first distribution of the labels. The first distribution of the labels is fused with the label of the input image to determine the target distribution, and the vein recognition model is determined based on the target distribution of the labels and the predicted distribution of the labels. In this way, on the basis of learning the label distribution of the input-type images, the vein recognition model provided by the present application synchronously learns the target label distribution reflecting the relationship between labels, so that the recognition accuracy of a traditional prediction model is improved and the recognition effect is further enhanced.
Referring to fig. 2, fig. 2 is a flowchart illustrating a vein image recognition method according to an embodiment of the present disclosure. As shown in fig. 2, the identification method provided in the embodiment of the present application includes the following steps:
S201, acquiring an image to be identified.
In this step, an image to be identified is acquired and preprocessed to obtain the image of interest of the image to be identified.
S202, based on the vein recognition model, determining the identity type probability corresponding to the vein information in the image to be recognized.
In the step, the vein recognition model is utilized to extract the features of the image to be recognized, and the features are matched to realize identity authentication.
S203, determining the final identity type based on the identity type probability.
In the step, the vein recognition model can effectively extract vein features, obtain better vein identity class probability distribution and improve the recognition precision of vein recognition.
When a registered image and a test image exist, the target distribution of the labels of the corresponding template image is retrieved from the registration database for the registered image, the predicted distribution of the test labels is determined by passing the test image through the trained vein recognition model, and the two label distributions are then matched. The matching method includes, but is not limited to, maximum-value matching: if the matching value of the two is smaller than a specific threshold, the authentication fails; otherwise, the authentication succeeds.
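The thresholded matching step can be sketched as follows. The text does not fix the exact matching score, so cosine similarity between the two label distributions is used here as an illustrative stand-in; the threshold value is likewise an assumption.

```python
import numpy as np

def verify(template_dist, test_dist, threshold=0.5):
    # Match the template image's target label distribution against the test
    # image's predicted label distribution; a score below the threshold means
    # authentication fails. Cosine similarity is an assumed stand-in for the
    # matching score described in the text.
    t = np.asarray(template_dist, dtype=float)
    p = np.asarray(test_dist, dtype=float)
    score = float(t @ p / (np.linalg.norm(t) * np.linalg.norm(p)))
    return score >= threshold

print(verify([0.1, 0.8, 0.1], [0.15, 0.7, 0.15], threshold=0.9))
```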
Compared with the prior art, the vein image recognition method provided by the embodiment of the application deeply learns the relationship between the features of the input first training image and the features of other types of second training images by combining the vein feature prediction classifier with the graph neural network, determines the first distribution of the labels, fuses the first distribution of the labels and the labels of the input images to determine the target distribution, and determines the vein recognition model based on the target distribution of the labels and the predicted distribution of the labels.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a training apparatus for vein recognition models according to an embodiment of the present application, and as shown in fig. 3, the training apparatus 300 includes:
a first obtaining module 310, configured to obtain a training image set; wherein the training image set comprises a plurality of types of training image subsets; and each training image in the set of training images includes a label of vein information; the label is used for representing the real identity type corresponding to the vein information in each training image.
Further, the first obtaining module 310 obtains the training image set by:
an initial training image set is obtained.
And carrying out binarization and marginalization processing on each initial training image in the initial training image set, and determining an interested image corresponding to each initial training image.
And optimizing the rotation amount and the translation amount of each image of interest and each initial training image to determine a training image set.
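The binarization and region-of-interest steps above can be sketched as follows. This is a minimal NumPy stand-in: the mean-value threshold and the bounding-box crop are assumptions, and the rotation/translation optimization is omitted.

```python
import numpy as np

def extract_roi(image, threshold=None):
    # Binarize the grayscale vein image (mean threshold is an assumption),
    # then crop the bounding box of the foreground pixels as the region of
    # interest corresponding to the initial training image.
    img = np.asarray(image, dtype=float)
    t = img.mean() if threshold is None else threshold
    mask = img > t
    ys, xs = np.nonzero(mask)
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```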
The first determining module 320 is configured to input any first training image in any type of training image subset into a vein feature prediction classifier, determine a first high-dimensional vein feature corresponding to the first training image and a prediction distribution of a label in the first training image, and determine a corresponding initial identity type probability according to the prediction distribution.
Further, the first determining module 320 obtains the vein feature prediction classifier by:
obtaining a plurality of sample images and labels of sample vein information in each sample image; the label is used for representing the identity type of the real sample corresponding to the sample vein information in each sample image.
Inputting the image of the sample vein information in each sample image into an initial classification network to obtain a predicted sample identity type corresponding to the sample vein information in each sample image.
When the loss value between the real sample identity type corresponding to each piece of sample vein information and the predicted sample identity type corresponding to that sample vein information is smaller than a preset threshold, training is terminated and the vein feature prediction classifier is obtained.
The second obtaining module 330 is configured to input any one of the second training images in the plurality of training image subsets of other types into the vein feature extraction model, and obtain a plurality of second high-dimensional vein features.
A second determining module 340, configured to input the first high-dimensional vein feature and the plurality of second high-dimensional vein features into a graph neural network, and determine a first distribution of labels in the first training image.
Further, the second determining module 340 determines the target distribution of the labels in the first training image according to the first distribution of the labels and the labels of the first training image by:
and performing weighted fusion on the first distribution of the labels and the labels of the first training image to determine the target distribution of the labels in the first training image.
A third determining module 350, configured to determine, according to the first distribution of the labels and the labels of the first training image, a target distribution of the labels in the first training image, and determine a vein recognition model based on the target distribution of the labels and the predicted distribution of the labels.
Further, the determining, in the third determining module 350, a vein recognition model based on the target distribution of the tags and the predicted distribution of the tags includes:
a loss value between the target distribution of tags and the predicted distribution of tags is determined.
When the loss value between the target distribution of the labels and the predicted distribution of the labels is smaller than a preset threshold, training is terminated and the vein recognition model is determined.
Compared with the prior art, the training device 300 provided by the embodiment of the application deeply learns the relationship between the features of the input first training image and the features of other types of second training images by combining the vein feature prediction classifier with the graph neural network, determines the first distribution of the labels, fuses the first distribution of the labels and the labels of the input images, determines the target distribution, and determines the vein recognition model based on the target distribution of the labels and the predicted distribution of the labels.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a vein image recognition apparatus according to an embodiment of the present disclosure, and as shown in fig. 4, the recognition apparatus 400 includes:
and a third obtaining module 410, configured to obtain an image to be identified.
A fourth determining module 420, configured to determine, based on the vein recognition model, an identity type probability corresponding to vein information in the image to be recognized.
A fifth determining module 430, configured to determine a final identity type based on the identity type probability.
Compared with the prior art, the recognition device 400 provided by the embodiment of the application deeply learns the relationship between the features of the input first training image and the features of other types of second training images by combining the vein feature prediction classifier with the graph neural network, determines the first distribution of the labels, fuses the first distribution of the labels and the labels of the input images, determines the target distribution, and determines the vein recognition model based on the target distribution of the labels and the predicted distribution of the labels.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device 500 according to an embodiment of the present disclosure, including: a processor 510, a memory 520 and a bus 530, the memory 520 storing machine-readable instructions executable by the processor 510, the processor 510 and the memory 520 communicating via the bus 530 when the electronic device 500 is in operation, the machine-readable instructions being executable by the processor 510 to perform the steps of the training method or the identification method steps as described in any of the above embodiments.
In particular, the machine readable instructions, when executed by the processor 510, may perform the following:
acquiring a training image set; wherein the training image set comprises a plurality of types of training image subsets; and each training image in the set of training images includes a label of vein information; the label is used for representing the real identity type corresponding to the vein information in each training image.
Inputting any first training image in any type of training image subset into a vein feature prediction classifier, determining a first high-dimensional vein feature corresponding to the first training image and prediction distribution of a label in the first training image, and determining corresponding initial identity type probability according to the prediction distribution.
And inputting any second training image in the plurality of training image subsets of other types into the vein feature extraction model to obtain a plurality of second high-dimensional vein features.
Inputting the first high-dimensional vein feature and a plurality of second high-dimensional vein features into a graph neural network, and determining a first distribution of labels in the first training image.
And determining the target distribution of the labels in the first training image according to the first distribution of the labels and the labels of the first training image, and determining a vein recognition model based on the target distribution of the labels and the predicted distribution of the labels.
And performing the following processes:
and acquiring an image to be identified.
And determining the identity type probability corresponding to the vein information in the image to be recognized based on the vein recognition model.
Determining a final identity type based on the identity type probability.
In the embodiment of the application, the vein feature prediction classifier is combined with the graph neural network, the relationship between the features of the input first training image and the features of the second training images of other types is deeply learned, the first distribution of the labels is determined, the first distribution of the labels is fused with the labels of the input images, the target distribution is determined, and the vein recognition model is determined based on the target distribution of the labels and the prediction distribution of the labels.
Based on the same application concept, embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the training method or the recognition method provided by the above embodiments.
Specifically, the storage medium can be a general storage medium, such as a mobile disk, a hard disk, or the like, and when a computer program on the storage medium is run, the training method can be executed, and target label distribution of the relationship between labels is synchronously learned on the basis of learning label distribution of an input type image, so that the recognition accuracy of a traditional prediction model is improved, and meanwhile, the recognition effect is further improved.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the units into only one type of logical function may be implemented in other ways, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A training method of a vein recognition model, the training method comprising:
acquiring a training image set; wherein the training image set comprises a plurality of types of training image subsets; and each training image in the set of training images includes a label of vein information; the label is used for representing a real identity type corresponding to the vein information in each training image;
inputting any first training image in any type of training image subset into a vein feature prediction classifier, determining a first high-dimensional vein feature corresponding to the first training image and prediction distribution of a label in the first training image, and determining corresponding initial identity type probability according to the prediction distribution;
inputting any second training image in a plurality of training image subsets of other types into a vein feature extraction model to obtain a plurality of second high-dimensional vein features;
inputting the first high-dimensional vein feature and a plurality of second high-dimensional vein features into a graph neural network, and determining a first distribution of labels in the first training image;
and determining the target distribution of the labels in the first training image according to the first distribution of the labels and the labels of the first training image, and determining a vein recognition model based on the target distribution of the labels and the predicted distribution of the labels.
2. Training method according to claim 1, characterized in that the target distribution of labels in the first training image is determined from the first distribution of labels and the labels of the first training image by:
and performing weighted fusion on the first distribution of the labels and the labels of the first training image to determine the target distribution of the labels in the first training image.
3. Training method according to claim 1, wherein the set of training images is obtained by:
acquiring an initial training image set;
carrying out binarization and marginalization processing on each initial training image in the initial training image set, and determining an interested image corresponding to each initial training image;
and optimizing the rotation amount and the translation amount of each image of interest and each initial training image to determine a training image set.
4. The training method of claim 1, wherein the vein feature prediction classifier is obtained by:
obtaining a plurality of sample images and labels of sample vein information in each sample image; the label is used for representing a real sample identity type corresponding to the sample vein information in each sample image;
inputting the image of the sample vein information in each sample image into an initial classification network to obtain a predicted sample identity type corresponding to the sample vein information in each sample image;
and when the loss value between the real sample identity type corresponding to each sample vein information and the predicted sample identity type corresponding to the sample vein information is smaller than a preset threshold value, training is terminated, and a vein feature prediction classifier is obtained.
5. Training method according to claim 1 or 2, wherein the determining a vein recognition model based on the target distribution of labels and the predicted distribution of labels comprises:
determining a loss value between the target distribution of tags and the predicted distribution of tags;
and when the loss value between the target distribution of the label and the predicted distribution of the label is smaller than a preset threshold value, training is terminated, and a vein recognition model is determined.
6. A method for recognizing vein images, characterized in that the training method according to any one of claims 1-5 is used, and the recognition method comprises:
acquiring an image to be identified;
determining the identity type probability corresponding to the vein information in the image to be recognized based on the vein recognition model;
determining a final identity type based on the identity type probability.
7. A training apparatus for a vein recognition model, the training apparatus comprising:
the first acquisition module is used for acquiring a training image set; wherein the training image set comprises a plurality of types of training image subsets; and each training image in the set of training images includes a label of vein information; the label is used for representing a real identity type corresponding to the vein information in each training image;
the first determining module is used for inputting any first training image in any type of training image subset into a vein feature prediction classifier, determining a first high-dimensional vein feature corresponding to the first training image and prediction distribution of a label in the first training image, and determining corresponding initial identity type probability according to the prediction distribution;
the second acquisition module is used for inputting any second training image in the plurality of training image subsets of other types into the vein feature extraction model to acquire a plurality of second high-dimensional vein features;
a second determining module, configured to input the first high-dimensional vein feature and a plurality of the second high-dimensional vein features into a graph neural network, and determine a first distribution of labels in the first training image;
and the third determining module is used for determining the target distribution of the labels in the first training image according to the first distribution of the labels and the labels of the first training image, and determining a vein recognition model based on the target distribution of the labels and the predicted distribution of the labels.
8. A recognition apparatus of a vein image, characterized in that, using the training apparatus of claim 7, the recognition apparatus comprises:
the third acquisition module is used for acquiring an image to be identified;
the fourth determining module is used for determining the identity type probability corresponding to the vein information in the image to be recognized based on the vein recognition model;
and the fifth determining module is used for determining the final identity type based on the identity type probability.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is operated, the machine-readable instructions being executable by the processor to perform the training method of any one of claims 1 to 5 or the recognition method steps of claim 6.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon which, when executed by a processor, performs the steps of the training method according to any one of claims 1 to 5 or of the recognition method according to claim 6.
CN202110807293.4A 2021-07-16 2021-07-16 Training method of vein recognition model, and recognition method and device of vein image Active CN113505716B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110807293.4A CN113505716B (en) 2021-07-16 2021-07-16 Training method of vein recognition model, and recognition method and device of vein image


Publications (2)

Publication Number Publication Date
CN113505716A 2021-10-15
CN113505716B 2022-07-01

Family

ID=78013090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110807293.4A Active CN113505716B (en) 2021-07-16 2021-07-16 Training method of vein recognition model, and recognition method and device of vein image

Country Status (1)

Country Link
CN (1) CN113505716B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022960B (en) * 2022-01-05 2022-06-14 阿里巴巴达摩院(杭州)科技有限公司 Model training and behavior recognition method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740407A * 2018-08-27 2019-05-10 广州麦仑信息科技有限公司 Palm vein feature extraction method based on graph networks
CN110427832A * 2019-07-09 2019-11-08 华南理工大学 Neural-network-based finger vein recognition method for small datasets
WO2021027364A1 (en) * 2019-08-13 2021-02-18 平安科技(深圳)有限公司 Finger vein recognition-based identity authentication method and apparatus
WO2021056972A1 (en) * 2019-09-27 2021-04-01 五邑大学 Finger vein segmentation method and apparatus based on neural network and probabilistic graphical model

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6490747B2 (en) * 2017-06-08 2019-03-27 三菱電機株式会社 Object recognition device, object recognition method, and vehicle control system
US20200293838A1 (en) * 2019-03-13 2020-09-17 Deepmind Technologies Limited Scheduling computation graphs using neural networks
US20200342968A1 (en) * 2019-04-24 2020-10-29 GE Precision Healthcare LLC Visualization of medical device event processing
CN110263681B (en) * 2019-06-03 2021-07-27 腾讯科技(深圳)有限公司 Facial expression recognition method and device, storage medium and electronic device
CN110555399B (en) * 2019-08-23 2022-04-29 北京智脉识别科技有限公司 Finger vein identification method and device, computer equipment and readable storage medium
US11606389B2 (en) * 2019-08-29 2023-03-14 Nec Corporation Anomaly detection with graph adversarial training in computer systems
US20210158155A1 (en) * 2019-11-26 2021-05-27 Nvidia Corp. Average power estimation using graph neural networks
CN111191526B (en) * 2019-12-16 2023-10-10 汇纳科技股份有限公司 Pedestrian attribute recognition network training method, system, medium and terminal
CN111062345B (en) * 2019-12-20 2024-03-29 上海欧计斯软件有限公司 Training method and device for vein recognition model and vein image recognition device
CN111950408B (en) * 2020-07-28 2023-07-11 深圳职业技术学院 Finger vein image recognition method and device based on rule diagram and storage medium
CN112016438B (en) * 2020-08-26 2021-08-10 北京嘀嘀无限科技发展有限公司 Method and system for identifying certificate based on graph neural network
CN112149717B (en) * 2020-09-03 2022-12-02 清华大学 Confidence weighting-based graph neural network training method and device



Similar Documents

Publication Publication Date Title
CN110414432B (en) Training method of object recognition model, object recognition method and corresponding device
US10762376B2 (en) Method and apparatus for detecting text
US11003941B2 (en) Character identification method and device
CN105550657B (en) Improvement SIFT face feature extraction method based on key point
CN109190470B (en) Pedestrian re-identification method and device
CN111709311A (en) Pedestrian re-identification method based on multi-scale convolution feature fusion
WO2020164278A1 (en) Image processing method and device, electronic equipment and readable storage medium
CN112132099A (en) Identity recognition method, palm print key point detection model training method and device
CN113420690A (en) Vein identification method, device and equipment based on region of interest and storage medium
CN111275070B (en) Signature verification method and device based on local feature matching
CN116311214B (en) License plate recognition method and device
CN111488798B (en) Fingerprint identification method, fingerprint identification device, electronic equipment and storage medium
CN113705749A (en) Two-dimensional code identification method, device and equipment based on deep learning and storage medium
CN109145704A Face portrait recognition method based on facial attributes
CN113505716B (en) Training method of vein recognition model, and recognition method and device of vein image
CN113706550A (en) Image scene recognition and model training method and device and computer equipment
CN117351192A (en) Object retrieval model training, object retrieval method and device and electronic equipment
Shreya et al. Gan-enable latent fingerprint enhancement model for human identification system
CN114022684B (en) Human body posture estimation method and device
CN113762249A (en) Image attack detection and image attack detection model training method and device
CN111582404A (en) Content classification method and device and readable storage medium
CN111428670A (en) Face detection method, face detection device, storage medium and equipment
US20140119641A1 (en) Character recognition apparatus, character recognition method, and computer-readable medium
Aly et al. Adaptive feature selection and data pruning for 3D facial expression recognition using the Kinect
Mali et al. Sign Language Recognition Using Long Short-Term Memory Deep Learning Model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230418

Address after: 401329 2F, Building 18, Section I, Science Valley, Hangu Town, Jiulongpo District, Chongqing

Patentee after: Chongqing Financial Technology Research Institute

Patentee after: Qin Huafeng

Address before: No.19, Xuefu Avenue, Nan'an District, Chongqing, 400000

Patentee before: CHONGQING TECHNOLOGY AND BUSINESS University

TR01 Transfer of patent right

Effective date of registration: 20240611

Address after: Building 18, 2nd Floor, Section 1, Science Valley Phase 1, Hangu Town, Jiulongpo District, Chongqing, 400000

Patentee after: Chongqing Weimai Zhilian Technology Co.,Ltd.

Country or region after: China

Address before: 401329 2F, Building 18, Section I, Science Valley, Hangu Town, Jiulongpo District, Chongqing

Patentee before: Chongqing Financial Technology Research Institute

Country or region before: China

Patentee before: Qin Huafeng