CN112801186A - Verification image generation method, device and equipment - Google Patents

Verification image generation method, device and equipment

Info

Publication number
CN112801186A
Authority
CN
China
Prior art keywords
image
sample image
loss
verification
label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110123699.0A
Other languages
Chinese (zh)
Inventor
杨宇喆
王娜
姜璐
钟华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202110123699.0A
Publication of CN112801186A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31: User authentication
    • G06F 21/36: User authentication by graphic or iconic representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/088: Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of this specification provide a verification image generation method, apparatus, and device. The method comprises the following steps: obtaining an original sample image, the original sample image comprising a first sample image and a second sample image, where the first sample image corresponds to a sample image label; determining a first prediction label of the first sample image; determining a prior loss from the first prediction label and the sample image label; generating a candidate verification image from the second sample image using an image generation algorithm; obtaining a second prediction label for the second sample image and a third prediction label for the candidate verification image; calculating a posterior loss from the second and third prediction labels; and, when the prior loss and the posterior loss satisfy the image application condition, determining the candidate verification image to be a verification image. With this method, images that meet the requirements can be screened out, expanding the number of images available for training while preserving the effectiveness of image verification, and thereby meeting verification demand.

Description

Verification image generation method, device and equipment
Technical Field
The embodiment of the specification relates to the technical field of artificial intelligence, in particular to a verification image generation method, device and equipment.
Background
To ensure the information security of users, their real identities must be verified. When a user performs account login, a financial transaction, or another network service, a corresponding verification code image is usually displayed to the user, and the user inputs the characters recognized from it. This authenticates the user's identity and achieves effects such as human-machine recognition.
At present, with the popularization of the internet and the needs of various network transactions, demand for verification code images keeps increasing. To meet this demand, verification code images are currently generated by a corresponding algorithm from a certain number of sample verification images. However, when the number of sample verification images provided in the initial state is small, the number of verification code images that can be obtained is also small. To further increase the number of generated images while avoiding high repetition among them, the images generally need to be heavily distorted or altered, so the resulting verification code images often require manual checking and secondary labeling, which greatly reduces generation efficiency. A method of generating a large number of verification code images quickly and efficiently is therefore needed.
Disclosure of Invention
An object of the embodiments of the present specification is to provide a verification image generation method, device and apparatus, so as to solve the technical problem of how to effectively generate a large number of available verification images.
In order to solve the above technical problem, an embodiment of the present specification provides a verification image generation method, comprising: acquiring a first prediction label of a first sample image, the first sample image corresponding to a sample image label, wherein the sample image label describes the characters presented in the first sample image and the first prediction label represents the classification category obtained by classifying and recognizing the first sample image; determining a prior loss from the first prediction label and the sample image label, the prior loss representing the proportion of first prediction labels corresponding to their first sample images; generating a candidate verification image from a second sample image using an image generation algorithm, the image generation algorithm constructing the difference of the candidate verification image relative to the second sample image; calculating a posterior loss using the second sample image and the candidate verification image, the posterior loss representing the degree of difference between the second sample image and the candidate verification image; and determining the candidate verification image as a verification image when the prior loss and the posterior loss satisfy the image application condition, the verification image being used to verify the identity of the user.
An embodiment of the present specification further provides a verification image generation apparatus, comprising: a first prediction label determining module, configured to acquire a first prediction label of a first sample image, the first sample image corresponding to a sample image label, wherein the sample image label describes the characters presented in the first sample image and the first prediction label represents the classification category obtained by classifying and recognizing the first sample image; a prior loss determining module, configured to determine a prior loss from the first prediction label and the sample image label, the prior loss representing the proportion of first prediction labels corresponding to their first sample images; a candidate verification image generation module, configured to generate a candidate verification image from a second sample image using an image generation algorithm, the image generation algorithm constructing the difference of the candidate verification image relative to the second sample image; a posterior loss calculating module, configured to calculate a posterior loss using the second sample image and the candidate verification image, the posterior loss representing the degree of difference between the second sample image and the candidate verification image; and a verification image determining module, configured to determine the candidate verification image as a verification image when the prior loss and the posterior loss satisfy the image application condition, the verification image being used to verify the identity of the user.
A verification image generation device comprises a memory and a processor, the memory storing computer program instructions and the processor executing the computer program instructions to implement the following steps: acquiring a first prediction label of a first sample image, the first sample image corresponding to a sample image label, wherein the sample image label describes the characters presented in the first sample image and the first prediction label represents the classification category obtained by classifying and recognizing the first sample image; determining a prior loss from the first prediction label and the sample image label, the prior loss representing the proportion of first prediction labels corresponding to their first sample images; generating a candidate verification image from a second sample image using an image generation algorithm, the image generation algorithm constructing the difference of the candidate verification image relative to the second sample image; calculating a posterior loss using the second sample image and the candidate verification image, the posterior loss representing the degree of difference between the second sample image and the candidate verification image; and determining the candidate verification image as a verification image when the prior loss and the posterior loss satisfy the image application condition, the verification image being used to verify the identity of the user.
As can be seen from the technical solutions provided by the embodiments of this specification, after a first sample image with a sample image label is acquired, a first prediction label representing the category of the first sample image is determined and a prior loss is computed. A candidate verification image is then generated from the second sample image, and a posterior loss is calculated from the second sample image and the candidate verification image, so that after the prior loss and the posterior loss are combined, verification images can be selected from the candidates for verifying the identity of the user. In this way, the sample images do not all need to be manually labeled one by one, and the generated images are prevented from deviating too far from the originals, so the generated verification images can be used effectively while the time and resources consumed by manual labeling are reduced.
Drawings
To more clearly illustrate the embodiments of this specification or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in this specification, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a verification image generation method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a process for calculating the prior loss according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a process of obtaining a candidate verification image according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a process for calculating the posterior loss in an embodiment of the present disclosure;
FIG. 5 is a diagram illustrating a process of calculating a final loss and verifying an image application according to an embodiment of the present disclosure;
FIG. 6 is a diagram illustrating an overall process of a verification image generation method according to an embodiment of the present disclosure;
FIG. 7 is a block diagram of an authentication image generating apparatus according to an embodiment of the present disclosure;
fig. 8 is a block diagram of an authentication image generating apparatus according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of this specification will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort shall fall within the protection scope of this specification.
In order to solve the above technical problem, an embodiment of the present specification first proposes a verification image generation method. The execution subject of the verification image generation method is verification image generation equipment, and the verification image generation equipment comprises but is not limited to a server, an industrial personal computer, a PC (personal computer) and the like. As shown in fig. 1, the verification image generation method may specifically include the following steps.
S110: acquiring a first prediction label of a first sample image; the first sample image corresponds to a sample image label; the sample image label is used for describing the character presented by the first sample image; the first prediction label represents a classification category obtained by classifying and identifying the first sample image.
The first sample image and the second sample image may both belong to the original sample image. The original sample image may be an image acquired before the verification image generation method is performed, and it may be used to verify the identity of the user. For example, the original sample image contains distorted, colored, or rotated characters. When the characters in a verification image are distorted to a certain degree, a computer may be unable to recognize them directly, whereas a person usually still can. The original sample image thus enables human-machine recognition, verifies the user's identity, and reduces the probability of illegal behaviors such as automated credential scanning and database dragging carried out by computer programs.
When a verification image is used to verify a user's identity, the server performing the verification needs to store the actual label corresponding to each verification image. The actual label represents the actual content of the verification image. After receiving the input fed back by the client, the server compares it with the actual label to judge whether the input corresponds to the characters in the verification image, thereby completing the identity verification process. Therefore, before a verification image can be applied, its actual label must be determined.
In the initial stage, the original sample images may not have corresponding label content, so the labels corresponding to the original sample images need to be determined before the images can be put to use. However, when the number of original sample images is large, manually labeling them one by one consumes a great deal of labor and time, which is clearly unsuitable for practical application scenarios.
Therefore, in this embodiment, the original sample image may be divided into a first sample image and a second sample image. The ratio between the numbers of first and second sample images can be set according to the requirements of the practical application; for example, they may be split in a 1:1 ratio or in a 1:5 ratio. The specific ratio is not limited and can be adjusted flexibly according to the actual application.
After the first sample images are determined, they may be labeled, that is, the sample image label of each first sample image is determined in turn. The sample image label describes the characters presented in the first sample image. For example, when a first sample image presents a distorted "8ER0", the corresponding sample image label should be "8ER0". Sample image labels include, but are not limited to, Arabic numerals, uppercase English letters, lowercase English letters, simplified Chinese, and traditional Chinese. Expanding the variety of sample image labels increases the difficulty for a machine of recognizing the verification image and effectively ensures the human-machine recognition effect.
The specific labeling process may be as follows: the first sample image is pushed to a client, an operator recognizes the characters in the first sample image and inputs the corresponding label, and after the client feeds the label back, it is stored as the sample image label of that first sample image. The labeling process can be set according to the requirements of the practical application and is not limited to the above example, so it is not described further here.
After a first sample image corresponding to a sample image label is acquired, a first prediction label of the first sample image may also be determined. The first prediction label may be a label obtained by classifying and recognizing the first sample image: specifically, the characters presented in the first sample image as determined by an image recognition method, or merely the preset category into which the image is divided. The first prediction label may contain a certain error, that is, it may fail to correspond to the actual sample image label.
In some embodiments, the first prediction label may be obtained with a classification model, after training that model. In this embodiment, the classification model may be a mathematical model for determining the category to which the characters in a picture belong; for example, a Bayesian classification model, a Support Vector Machine (SVM) classification model, or a Convolutional Neural Network (CNN) classification model. The classification model may be a risk classification model, an emotion classification model, a topic classification model, or the like. Of course, the classification model can be used not only to identify the first prediction label but also to identify the labels of other images.
Specifically, the first sample images and their sample image labels may be used as training samples to train the classification model. Because the number of first sample images is small in the initial state, the classification model can be built as a neural network with few layers and trained for a small number of rounds. Since the training data carries labels, the classification model can be trained by supervised learning. The specific training process depends on the model configuration and the actual operation and is not described further here.
After the classification model is obtained, the first prediction labels may be identified with it. For example, each first sample image may be input directly into the classification model to obtain its corresponding first prediction label.
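As a dependency-free illustration of this prediction step, the sketch below replaces the shallow neural network described above with a nearest-centroid classifier over flattened pixel vectors. The names `train_centroids` and `predict` are hypothetical, not part of the patent; the point is only the train-then-predict data flow.

```python
def train_centroids(samples, labels):
    """Average the pixel vectors of each label to get one centroid per class
    (a toy stand-in for training the patent's classification model)."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """First prediction label: the class whose centroid is nearest
    (squared Euclidean distance) to the input image vector."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda y: dist(centroids[y]))
```

A real implementation would substitute a shallow CNN trained by supervised learning, as the text describes; the interface (labeled first sample images in, predicted labels out) stays the same.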
Through the above steps, model training is achieved directly with the original sample data, without acquiring additional sample data, which simplifies execution; the identified first prediction labels can also be used in the calculations of subsequent steps.
S120: determining prior loss according to the first prediction label and a sample image label; the a priori loss is used to represent a proportion of the first prediction label corresponding to the first sample image.
After the first prediction label is obtained, the prior loss can be determined by comparing it with the sample image label. The prior loss describes the proportion of first prediction labels that correspond to their first sample images. Accordingly, when the first prediction label is determined by the classification model, the prior loss also describes the accuracy of the classification model in identifying labels, so that subsequent steps can comprehensively evaluate whether a generated image meets the standard of a verification image.
Specifically, the conditional probability P_θ(y|x) of the label given the sample can be determined from the sample image label and the first prediction label, and the prior loss L_prior can be calculated using this conditional probability. For example, the conditional probability can be used directly as the prior loss.
An example scenario for obtaining the a priori loss is described below with reference to fig. 2. As shown in fig. 2, after the first sample image 21 is acquired, the sample image label 22 of the first sample image 21 is also acquired. A classification model 23 is trained using the first sample image 21, and a first predictive label 24 corresponding to the first sample image 21 is determined using the classification model 23. The a priori losses 25 can be obtained by combining the correspondence of the first prediction label 24 and the sample image label 22.
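One plausible realization of the prior loss, assuming the conditional probability P_θ(y|x) is turned into a loss via the mean negative log-likelihood of the ground-truth label (the patent leaves the exact functional form open):

```python
import math

def prior_loss(probs, true_indices):
    """Mean negative log-likelihood of the ground-truth label under the
    classifier's predicted distribution P_theta(y|x). `probs` is one
    probability vector per first sample image; `true_indices` gives the
    index of each sample image label."""
    total = 0.0
    for p, y in zip(probs, true_indices):
        total += -math.log(max(p[y], 1e-12))  # clamp to avoid log(0)
    return total / len(probs)
```

A perfectly accurate classifier yields a loss near zero; the worse the first prediction labels match the sample image labels, the larger L_prior becomes.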
S130: generating a candidate verification image using the second sample image based on an image generation algorithm; the image generation algorithm is used to construct the difference of the candidate verification image compared to the second sample image.
A candidate verification image may be generated from the second sample image. In order to increase the number of verification images, the sample image needs to be modified to some extent on the basis of the original sample image, so that a certain difference is formed between the candidate verification image and the second sample image. Since the second sample image is not labeled, the corresponding generated candidate verification image is also not labeled.
The second sample image may be altered using an image generation algorithm when generating the candidate verification image. The image generation algorithm is used to construct the difference of the candidate verification image compared to the second sample image. Specifically, the generation of the candidate verification image can be regarded as one propagation of the second sample image, during which some information is lost; this loss can itself be regarded as a conditional probability. Assuming the second sample image is X and the generated candidate verification image is X̂, the image generation process is X → X̂, and the image generation algorithm is the algorithm that determines P(X̂ | X).
In some embodiments, the image generation algorithm may be a GAN (Generative Adversarial Network) algorithm. A GAN is a deep learning model based on unsupervised learning, and one important use of GANs is image generation: for example, after receiving certain noise data, a GAN can generate a new, distinguishable picture. The specific method of generating candidate verification images with a GAN can be set according to the requirements of the actual application and is not described further here. The degree of difference between the second sample image and the candidate verification image is embodied in the GAN itself, so the degree of alteration of the second sample image can be varied by adjusting the GAN. A GAN can generate candidate verification images conveniently and quickly, accelerating the execution of the procedure.
A specific scenario example of this step is described below with reference to fig. 3, and as shown in fig. 3, after the second sample image 31 is acquired, the second sample image 31 may be directly input into the GAN network 32 to obtain the candidate verification image 33.
Generating candidate verification images in this way increases the number of sample images, so that after subsequent screening the number of applicable verification images also increases, meeting the demand for verification images.
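To illustrate the data flow of step S130 without a trained GAN, the toy generator below perturbs the pixels of a second sample image to construct a candidate that differs from it. `generate_candidate` is a hypothetical stand-in for the patent's GAN-based generator; only the X → X̂ propagation it models is taken from the text.

```python
import random

def generate_candidate(image, strength=0.1, seed=0):
    """Toy stand-in for the GAN generator: perturb each pixel of the
    second sample image X to produce a candidate X_hat that differs
    from it. `strength` plays the role of the adjustable degree of
    alteration the text attributes to the GAN itself."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, v + rng.uniform(-strength, strength)))
            for v in image]
```

In a real system this call would be replaced by a forward pass through the generator network; the screening steps that follow (S140, S150) consume the candidate image regardless of how it was produced.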
S140: calculating a posterior loss using the second sample image and the candidate verification image; the posterior loss is used to represent a degree of difference between the second sample image and a candidate verification image.
After the candidate verification image is acquired, a second prediction label corresponding to the second sample image and a third prediction label corresponding to the candidate verification image may be acquired, respectively. The second prediction tag may be a tag obtained by performing classification and identification on the second sample image, and the third prediction tag may be a tag obtained by performing classification and identification on the candidate verification image, that is, the second prediction tag and the third prediction tag may respectively represent results obtained by performing identification on characters in corresponding images.
In some embodiments, the classification model trained in step S110 may be used to determine a second prediction label corresponding to the second sample image and a third prediction label corresponding to the candidate verification image, respectively. For a specific classification identification process, reference may be made to the description in step S110, which is not described herein again.
After the second prediction tag and the third prediction tag are obtained, the posterior loss can be calculated according to the second prediction tag and the third prediction tag. The posterior loss may be used to represent a degree of difference between the candidate verification image and the second sample image. In the subsequent step, whether the generated candidate verification image has overlarge deviation or not can be determined by integrating the prior loss and the posterior loss, so that whether the candidate verification image can be actually applied or not is judged.
In some embodiments, to calculate the posterior loss, a first conditional probability may first be calculated from the second prediction label and the second sample image, representing the correspondence between the second prediction label and the second sample image. A second conditional probability is then calculated from the third prediction label and the candidate verification image, representing the correspondence between the third prediction label and the candidate verification image. Combining the first and second conditional probabilities completes the calculation of the posterior loss.
Specifically, the first conditional probability may be calculated by softmax (l (x)/τ), and the conditional probability that the second prediction tag corresponds to the second sample image is obtained as
Figure BDA0002922993550000071
Where l (x) is the logical distribution probability of the predicted result, τ is the adjustment factor, and the larger τ, the more dispersed the distribution. Soft max is a function which is important in machine learning, and is widely used in a multi-classification scene particularly. He maps some inputs to real numbers between 0-1 and the normalization guarantees a sum of 1, so the sum of the probabilities for the multi-classes is also exactly 1. In the process of carrying out classification and identification on the second sample image, the accuracy of the result of the classification and identification is ensured.
Accordingly, the second conditional probability can be obtained in the same manner. Assuming that the second sample image is X, the generated candidate verification image is
Figure BDA0002922993550000072
Combining the obtained third prediction label to obtain the corresponding conditional probability of the third prediction label and the candidate verification image as
Figure BDA0002922993550000073
After the first conditional probability and the second conditional probability are calculated, the posterior loss can be calculated from them. In particular, the formula

L_posterior = D_KL( P(y | X) ‖ P(ỹ | X̃) )

may be used, where L_posterior is the posterior loss, D_KL is the KL divergence, P(y | X) is the first conditional probability, and P(ỹ | X̃) is the second conditional probability. This formula determines the posterior loss as the divergence between the two conditional probabilities, so that the deviation of the candidate verification image from the second sample image is measured by how closely the two distributions agree, supporting accurate calculation in the subsequent steps.
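A minimal sketch of this KL-divergence posterior loss, reusing the temperature-scaled softmax from above (function names, the epsilon guard, and the sample logits are illustrative assumptions):

```python
import numpy as np

def temperature_softmax(logits, tau=1.0):
    z = np.asarray(logits, dtype=float) / tau
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def posterior_loss(logits_x, logits_x_tilde, tau=1.0, eps=1e-12):
    """D_KL(P(y|X) || P(y~|X~)) between the class distributions of the
    second sample image and the candidate verification image."""
    p = temperature_softmax(logits_x, tau)
    q = temperature_softmax(logits_x_tilde, tau)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Identical logits give zero divergence; diverging logits give a positive loss.
same = posterior_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
diff = posterior_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

A near-zero value indicates the candidate verification image is classified almost identically to its source, while a large value flags excessive deviation.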
To illustrate with a specific scenario example, as shown in fig. 4, after the second sample image 41 is acquired, it may be input into a GAN network 43 to obtain a candidate verification image 44; the first conditional probability 42 for the second sample image 41 and the second conditional probability 45 for the candidate verification image 44 are then obtained, and finally the two are combined to obtain the posterior loss 46.
S150: determining the candidate verification image as a verification image under the condition that the prior loss and the posterior loss accord with the picture application condition; the verification image is used for verifying the identity of the user.
After the prior loss and the posterior loss are obtained, they can be combined to judge whether the generated candidate verification image meets the requirements of practical application. Since the prior loss represents the accuracy of the classification model, and the posterior loss represents the degree of difference between the second sample image and the candidate verification image as determined by the classification model, the two can be combined to determine whether the candidate verification image differs too much from the second sample image.
In some embodiments, when judging whether the prior loss and the posterior loss meet the picture application condition, a final loss may first be obtained by combining the prior loss and the posterior loss, and the judgment may then be made based on the final loss.
In practical applications, for a given second sample image, a plurality of candidate verification images can be generated by the image generation algorithm. To ensure the usability of the final image, the minimum posterior loss among these candidates can be selected as the target posterior loss; the corresponding candidate therefore has the smallest degree of difference from the second sample image. The prior loss and the target posterior loss can then be combined to obtain a final loss, which is compared with a judgment threshold; if the final loss is not greater than the judgment threshold, the candidate verification image corresponding to the target posterior loss is determined to be the verification image.
The determination threshold may be a predetermined determination criterion for calibrating a maximum difference between the candidate verification image and the sample image. The determination threshold may be set directly by an operator according to work experience, or may be data obtained by deep learning model training, which is not limited to this.
In some specific examples, the formula

L = L_prior + λ · L_posterior

may be used to calculate the final loss, where L is the final loss, L_prior is the prior loss, λ is the adjustment coefficient, and L_posterior is the target posterior loss. The adjustment coefficient may be tuned according to application requirements; in general, it may be set to 1.
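The selection of the minimum posterior loss and the threshold test on the final loss might be sketched as follows (a hypothetical helper: the function name, the threshold value, and the sample inputs are assumptions, not values from the patent):

```python
def select_verification_image(prior_loss, candidates, posterior_losses,
                              lam=1.0, threshold=0.5):
    """Pick the candidate with the minimal posterior loss, then accept it
    only if the final loss L = L_prior + lam * L_posterior does not exceed
    the judgment threshold; otherwise return None for the image."""
    idx = min(range(len(posterior_losses)), key=posterior_losses.__getitem__)
    final_loss = prior_loss + lam * posterior_losses[idx]
    if final_loss <= threshold:
        return candidates[idx], final_loss
    return None, final_loss

# Toy usage: the second candidate has the smaller posterior loss and passes.
chosen, loss = select_verification_image(0.1, ["img_a", "img_b"], [0.9, 0.2])
```

A rejected candidate (final loss above the threshold) simply yields `None`, matching the idea that only sufficiently similar candidates become verification images.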
In practical application, the prior loss and the posterior loss may also be combined in other ways, with corresponding picture application conditions set to determine the verification image; this is not limited to the above example and is not described further here.
After the verification image is acquired, it can be applied to verify users. In some embodiments, the verification image may be labeled after acquisition so that it can serve its verification function. Since the verification image can carry the same label as its corresponding second sample image, the time consumed by labeling is reduced.
In some embodiments, after the verification image is determined, it may be used as a first sample image to iteratively train the classification model, because the verification image has a corresponding label. Since the number of first sample images initially available for training is small, the trained classification model may lack accuracy; the newly generated verification images can therefore be used as additional training data to retrain the model, improving its classification accuracy so that verification images can be obtained more quickly and with higher quality in subsequent steps. The specific training process can be set according to the actual application and is not described further here.
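This iterative retraining step might be sketched as follows (the `train_fn` callable, the label-inheritance convention, and the data layout are all assumptions made for illustration):

```python
def iterative_training(train_fn, labeled_images, labels, verification_images):
    """Append newly generated verification images (which inherit the label
    of their source second sample image) to the training set, then retrain
    the classification model on the enlarged set."""
    for image, label in verification_images:
        labeled_images.append(image)
        labels.append(label)
    return train_fn(labeled_images, labels)

# Toy usage: a stand-in train_fn that just reports the training-set size.
model = iterative_training(lambda xs, ys: {"n_samples": len(xs)},
                           ["s1", "s2"], ["A", "B"], [("v1", "A")])
```

Each round of generation thus grows the labeled pool without any new manual annotation, which is the point of the label-migration step.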
In some embodiments, after the verification images are determined, some of them may also be screened out to optimize the image generation algorithm. Since the degree of adjustment applied to the candidate verification images is determined by the image generation algorithm, an adjustment that is too strong or too weak relative to the second sample image degrades the quality of the generated candidates, whereas the screened verification images are, by construction, usable for verifying user identity. The image generation algorithm can therefore be optimized against the standard set by the verification images, improving the quality of subsequently generated candidates so that verification images can be acquired more quickly and with higher quality in later steps. The specific process of optimizing the image generation algorithm with the verification images can be set according to practical requirements and is not described further here.
To explain the above steps with a specific scenario example, as shown in fig. 5, after the prior loss 51 and the posterior loss 52 are obtained, they are combined to obtain a final loss 53. Step 54 is then performed: a portion of the images with small final loss undergoes label migration and is used to train the classification model, while another portion of unlabeled verification images is fed into the GAN for training.
Based on the verification image generation method, the overall flow is described with a specific scenario example. As shown in fig. 6, after an original sample image 610 is acquired, it is first split into a first sample image 620 and a second sample image 660. Step 630 is then executed to annotate the first sample image. In step 640, a classification model is trained using the labeled first sample image, and in step 650 a prior loss is calculated from the classification result of the model and the annotation result. For the second sample image 660, step 670 is performed first: the second sample image is input into the GAN network to obtain a candidate verification image. In step 680, the second sample image and the candidate verification image undergo a consistency determination, and according to the result, step 690 calculates the posterior loss. Thereafter, step 6100 may be performed to compute a final loss by combining the prior loss and the posterior loss, and a verification image is obtained by judging the final loss. For one portion of the verification images, step 6110 adds labels and step 6120 adds the labeled images to the first sample images, so that the classification model is iteratively trained. For another portion of the verification images, step 6130 may be performed to use them to train the GAN network.
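The end-to-end flow of fig. 6 might be sketched at a high level as follows (every callable here is a stand-in supplied by the caller; the function names, the even split of the sample set, and the threshold are assumptions, not details fixed by the patent):

```python
def verification_image_pipeline(samples, annotate, train_classifier,
                                gan_generate, posterior_loss,
                                lam=1.0, threshold=0.5):
    """Split the samples, train the classifier on the annotated first half,
    generate candidates from the second half with the GAN, and keep each
    best candidate whose final loss passes the judgment threshold."""
    half = len(samples) // 2
    first, second = samples[:half], samples[half:]
    labels = [annotate(x) for x in first]
    model, prior = train_classifier(first, labels)  # model and prior loss
    verified = []
    for x in second:
        candidates = gan_generate(x)
        losses = [posterior_loss(model, x, c) for c in candidates]
        best = min(range(len(losses)), key=losses.__getitem__)
        if prior + lam * losses[best] <= threshold:
            verified.append(candidates[best])
    return verified

# Toy usage with stub callables standing in for annotation, training,
# GAN generation, and the posterior-loss computation.
out = verification_image_pipeline(
    ["a", "b", "c", "d"],
    annotate=lambda x: x.upper(),
    train_classifier=lambda xs, ys: ("model", 0.1),
    gan_generate=lambda x: [x + "1", x + "2"],
    posterior_loss=lambda m, x, c: 0.2 if c.endswith("2") else 0.9,
)
```

The retraining of the classifier and of the GAN with the resulting verification images (steps 6110–6130) would then run as separate loops around this pipeline.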
By the above verification image generation method, after the first sample image with its sample image label is acquired, the first prediction label representing the category of the first sample image is determined, and the prior loss is computed. A candidate verification image is then generated from the second sample image, and after the corresponding second prediction label and third prediction label are determined, the posterior loss is calculated from them, so that once the prior loss and the posterior loss are combined, a verification image can be selected from the candidates to verify the identity of the user. With this method, not all sample images need to be manually labeled one by one, and the generated images are prevented from differing too much from the originals, so the generated verification images are used effectively and the time and resources consumed by manual labeling are reduced.
Based on the verification image generation method, embodiments of the present specification further provide a verification image generation apparatus, which may be integrated in a verification image generation device. As shown in fig. 7, the apparatus may include the following specific modules.
A first prediction tag determination module 710, configured to obtain a first prediction tag of a first sample image; the first sample image corresponds to a sample image label; the sample image label is used for describing the character presented by the first sample image; the first prediction label represents a classification category obtained after the first sample image is classified and identified;
a priori loss determination module 720, configured to determine a priori loss according to the first prediction label and the sample image label; the prior loss is used for representing the proportion of the first prediction label corresponding to the first sample image;
a candidate verification image generation module 730 for generating a candidate verification image using the second sample image based on an image generation algorithm; the image generation algorithm is used for constructing the difference of the candidate verification image compared with the second sample image;
a posterior loss calculation module 740 for calculating posterior loss using the second sample image and the candidate verification image; the posterior loss is used to represent a degree of difference between the second sample image and a candidate verification image;
a verification image determining module 750, configured to determine the candidate verification image as a verification image if the prior loss and the posterior loss meet a picture application condition; the verification image is used for verifying the identity of the user.
Based on the verification image generation method, the embodiment of the specification further provides verification image generation equipment. As shown in fig. 8, the verification image generation device may include a memory and a processor.
In this embodiment, the memory may be implemented in any suitable manner. For example, the memory may be a read-only memory, a mechanical hard disk, a solid-state drive, a USB flash drive, or the like. The memory may be used to store computer program instructions.
In this embodiment, the processor may be implemented in any suitable manner. For example, the processor may take the form of, for example, a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so forth.
The processor may execute the computer program instructions to perform the steps of: acquiring a first prediction label of a first sample image; the first sample image corresponds to a sample image label; the sample image label is used for describing the character presented by the first sample image; the first prediction label represents a classification category obtained after the first sample image is classified and identified; determining prior loss according to the first prediction label and a sample image label; the prior loss is used for representing the proportion of the first prediction label corresponding to the first sample image; generating a candidate verification image using the second sample image based on an image generation algorithm; the image generation algorithm is used for constructing the difference of the candidate verification image compared with the second sample image; calculating a posterior loss using the second sample image and the candidate verification image; the posterior loss is used to represent a degree of difference between the second sample image and a candidate verification image; determining the candidate verification image as a verification image under the condition that the prior loss and the posterior loss accord with the picture application condition; the verification image is used for verifying the identity of the user.
In the 1990s, an improvement to a technology could be clearly distinguished as either a hardware improvement (for example, an improvement to a circuit structure such as a diode, transistor, or switch) or a software improvement (an improvement to a method flow). With the development of technology, however, many of today's method-flow improvements can be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, such programming is now mostly implemented with "logic compiler" software rather than by manually making the integrated-circuit chips; this software is similar to the compiler used in program development, and the source code to be compiled is written in a particular programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present.
It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained simply by programming the method flow into an integrated circuit using one of the hardware description languages described above.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
From the above description of the embodiments, it is clear to those skilled in the art that the present specification can be implemented by software plus a necessary general hardware platform. Based on such understanding, the technical solutions of the present specification may be essentially or partially implemented in the form of software products, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and include instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments of the present specification.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The description is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
While the specification has been described with examples, those skilled in the art will appreciate that there are numerous variations and permutations of the specification that do not depart from the spirit of the specification, and it is intended that the appended claims include such variations and modifications that do not depart from the spirit of the specification.

Claims (14)

1. A verification image generation method, comprising:
acquiring a first prediction label of a first sample image; the first sample image corresponds to a sample image label; the sample image label is used for describing the character presented by the first sample image; the first prediction label represents a classification category obtained after the first sample image is classified and identified;
determining prior loss according to the first prediction label and a sample image label; the prior loss is used for representing the proportion of the first prediction label corresponding to the first sample image;
generating a candidate verification image using the second sample image based on an image generation algorithm; the image generation algorithm is used for constructing the difference of the candidate verification image compared with the second sample image;
calculating a posterior loss using the second sample image and the candidate verification image; the posterior loss is used to represent a degree of difference between the second sample image and a candidate verification image;
determining the candidate verification image as a verification image under the condition that the prior loss and the posterior loss accord with the picture application condition; the verification image is used for verifying the identity of the user.
2. The method of claim 1, wherein the sample image labels comprise at least one of arabic numerals, uppercase english letters, lowercase english letters, simplified chinese, and traditional chinese.
3. The method of claim 1, wherein the image generation algorithm comprises a GAN algorithm.
4. The method of claim 1, wherein said obtaining a first prediction tag for a first sample image comprises:
training a classification model by using the first sample image and the sample image label; the classification model is used for identifying a label corresponding to the image;
determining a first prediction label for the first sample image using the classification model.
5. The method of claim 4, wherein calculating the posterior loss using the second sample image and the candidate verification image comprises:
respectively acquiring a second prediction label corresponding to the second sample image and a third prediction label corresponding to the candidate verification image; the second prediction label and the third prediction label respectively represent classification categories obtained after classification and identification are carried out on the second sample image and the candidate verification image;
calculating a posterior loss using the second predictive label and the third predictive label.
6. The method of claim 5, wherein separately obtaining a second prediction label corresponding to a second sample image and a third prediction label corresponding to a candidate verification image comprises:
a second prediction label corresponding to the second sample image and a third prediction label corresponding to a candidate verification image are respectively determined using the classification model.
7. The method of claim 4, wherein after determining that the candidate verification image is a verification image, further comprising:
and performing iterative training on the classification model by using the verification image.
8. The method of claim 5, wherein said calculating a posterior loss using said second predictive tag and a third predictive tag comprises:
calculating a first conditional probability according to the second prediction label and a second sample image; the first conditional probability represents the corresponding proportion of the second prediction label and the second sample image;
calculating a second conditional probability according to the third predictive label and the candidate verification image; the second conditional probability represents a ratio of a third prediction label to a candidate verification image;
and calculating the posterior loss according to the first conditional probability and the second conditional probability.
9. The method of claim 8, wherein said calculating the posterior loss according to said first conditional probability and said second conditional probability comprises:
using the formula

L_posterior = D_KL( P(y | X) ‖ P(ỹ | X̃) )

to calculate the posterior loss, wherein L_posterior is the posterior loss, D_KL is the KL divergence, P(y | X) is the first conditional probability, and P(ỹ | X̃) is the second conditional probability.
10. The method of claim 1, wherein determining the candidate verification image as the verification image in the case that the prior loss and the posterior loss meet a picture application condition comprises:
selecting the minimum posterior loss as a target posterior loss;
synthesizing the prior loss and the target posterior loss to obtain final loss;
and under the condition that the final loss is not greater than a judgment threshold value, determining the candidate verification image corresponding to the target posterior loss as a verification image.
11. The method of claim 10, wherein said combining said a priori losses and target a posteriori losses to obtain a final loss comprises:
using the formula

L = L_prior + λ · L_posterior

to calculate the final loss, wherein L is the final loss, L_prior is the prior loss, λ is the adjustment coefficient, and L_posterior is the target posterior loss.
12. The method of claim 1, wherein after determining that the candidate verification image is a verification image, further comprising:
optimizing the image generation algorithm using the verification image.
13. A verification image generation apparatus, characterized by comprising:
the first prediction tag determining module is used for acquiring a first prediction tag of a first sample image; the first sample image corresponds to a sample image label; the sample image label is used for describing the character presented by the first sample image; the first prediction label represents a classification category obtained after the first sample image is classified and identified;
the prior loss determining module is used for determining prior loss according to the first prediction label and the sample image label; the prior loss is used for representing the proportion of the first prediction label corresponding to the first sample image;
a candidate verification image generation module for generating a candidate verification image using the second sample image based on an image generation algorithm; the image generation algorithm is used for constructing the difference of the candidate verification image compared with the second sample image;
the posterior loss calculating module is used for calculating posterior loss by utilizing the second sample image and the candidate verification image; the posterior loss is used to represent a degree of difference between the second sample image and a candidate verification image;
the verification image determining module is used for determining the candidate verification image as a verification image under the condition that the prior loss and the posterior loss accord with the picture application condition; the verification image is used for verifying the identity of the user.
14. A verification image generation device comprising a memory and a processor;
the memory to store computer program instructions;
the processor to execute the computer program instructions to implement the steps of: acquiring a first prediction label of a first sample image; the first sample image corresponds to a sample image label; the sample image label is used for describing the character presented by the first sample image; the first prediction label represents a classification category obtained after the first sample image is classified and identified; determining prior loss according to the first prediction label and a sample image label; the prior loss is used for representing the proportion of the first prediction label corresponding to the first sample image; generating a candidate verification image using the second sample image based on an image generation algorithm; the image generation algorithm is used for constructing the difference of the candidate verification image compared with the second sample image; calculating a posterior loss using the second sample image and the candidate verification image; the posterior loss is used to represent a degree of difference between the second sample image and a candidate verification image; determining the candidate verification image as a verification image under the condition that the prior loss and the posterior loss accord with the picture application condition; the verification image is used for verifying the identity of the user.
CN202110123699.0A 2021-01-29 2021-01-29 Verification image generation method, device and equipment Pending CN112801186A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110123699.0A CN112801186A (en) 2021-01-29 2021-01-29 Verification image generation method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110123699.0A CN112801186A (en) 2021-01-29 2021-01-29 Verification image generation method, device and equipment

Publications (1)

Publication Number Publication Date
CN112801186A true CN112801186A (en) 2021-05-14

Family

ID=75812704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110123699.0A Pending CN112801186A (en) 2021-01-29 2021-01-29 Verification image generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN112801186A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298774A (en) * 2021-05-20 2021-08-24 复旦大学 Image segmentation method and device based on dual condition compatible neural network
CN115661619A (en) * 2022-11-03 2023-01-31 北京安德医智科技有限公司 Network model training method, ultrasonic image quality evaluation method, device and electronic equipment


Similar Documents

Publication Publication Date Title
US20210334706A1 (en) Augmentation device, augmentation method, and augmentation program
CN109101537B (en) Multi-turn dialogue data classification method and device based on deep learning and electronic equipment
CN110069709B (en) Intention recognition method, device, computer readable medium and electronic equipment
EP3869385B1 (en) Method for extracting structural data from image, apparatus and device
WO2021164481A1 (en) Neural network model-based automatic handwritten signature verification method and device
CN112395979B (en) Image-based health state identification method, device, equipment and storage medium
CN106372624B (en) Face recognition method and system
CN113837205B (en) Method, apparatus, device and medium for image feature representation generation
JP2022512065A (en) Image classification model training method, image processing method and equipment
US11915500B2 (en) Neural network based scene text recognition
CN108898181B (en) Image classification model processing method and device and storage medium
Sagayam et al. A probabilistic model for state sequence analysis in hidden Markov model for hand gesture recognition
CN111612081B (en) Training method, device, equipment and storage medium for recognition model
CN114677565B (en) Training method and image processing method and device for feature extraction network
CN112801186A (en) Verification image generation method, device and equipment
KR20210149530A (en) Method for training image classification model and apparatus for executing the same
CN117197904A (en) Training method of human face living body detection model, human face living body detection method and human face living body detection device
CN112418320A (en) Enterprise association relation identification method and device and storage medium
WO2022126917A1 (en) Deep learning-based face image evaluation method and apparatus, device, and medium
US20230267709A1 (en) Dataset-aware and invariant learning for face recognition
Tymoshenko et al. Real-Time Ukrainian Text Recognition and Voicing.
CN111783812A (en) Method and device for identifying forbidden images and computer readable storage medium
Hu et al. Attention‐guided evolutionary attack with elastic‐net regularization on face recognition
CN117315758A (en) Facial expression detection method and device, electronic equipment and storage medium
WO2020170803A1 (en) Augmentation device, augmentation method, and augmentation program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination