CN111191568A - Method, device, equipment and medium for identifying copied image - Google Patents

Method, device, equipment and medium for identifying copied image

Info

Publication number
CN111191568A
CN111191568A (application number CN201911366794.2A)
Authority
CN
China
Prior art keywords
image
phase quantization
local phase
recognized
probability
Prior art date
Legal status
Pending
Application number
CN201911366794.2A
Other languages
Chinese (zh)
Inventor
喻晨曦
Current Assignee
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN201911366794.2A
Publication of CN111191568A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention discloses a method and a device for identifying a copied image, a computer device, and a storage medium. The method comprises the following steps: obtaining an image to be identified; inputting the image to be identified into a head portrait detection model and extracting the head portrait of the image; performing grayscale processing on the head portrait to obtain a grayscale image of the head portrait; transforming the grayscale image by a local phase quantization method to obtain a local phase quantization feature map of the grayscale image; inputting the local phase quantization feature map into a trained shallow neural network model and performing copy-recognition processing on it through the shallow neural network model to obtain a copying probability; and, when the copying probability is greater than or equal to a preset probability threshold, determining that the image to be identified is a copied image. Automatic identification of copied images is thereby realized, and identification accuracy, efficiency, and reliability are improved.

Description

Method, device, equipment and medium for identifying copied image
Technical Field
The invention relates to the field of image classification, in particular to a method and a device for identifying a copied image, computer equipment and a storage medium.
Background
With the development of the credit society, more and more application scenarios (for example, in finance, insurance, and security) need to verify a user's identity through certificate recognition and face recognition. In the prior art, identity verification is mainly performed manually, which consumes a large amount of human resources and waiting time. Moreover, as photographing technology improves, lawless persons increasingly attempt to pass identity verification with copied (recaptured) images. The accuracy of manually recognizing copied images is low and mistakes are easy to make, so if a copied image is not recognized during identity verification, the security of user information may be compromised.
Disclosure of Invention
The invention provides a method and a device for identifying a copied image, a computer device, and a storage medium, which realize automatic identification of copied images and improve identification accuracy, efficiency, and reliability.
A method for recognizing a copied image comprises the following steps:
acquiring an image to be identified;
inputting the image to be recognized into a head portrait detection model, and extracting a head portrait of the image to be recognized;
carrying out gray level processing on the head portrait of the image to be identified to obtain a gray level image of the head portrait;
acquiring phase information of each pixel point of the gray image and a preset local area block corresponding to each pixel point;
processing the phase information of each pixel point and the preset local area block corresponding to each pixel point by a local phase quantization method, and calculating a local phase quantization characteristic value corresponding to each pixel point;
arranging the local phase quantization characteristic values of all the pixel points to generate a local phase quantization characteristic diagram of the gray level image;
inputting the local phase quantization characteristic diagram into a trained shallow neural network model, and identifying the local phase quantization characteristic diagram through the shallow neural network model to obtain the reproduction probability of the local phase quantization characteristic diagram;
and when the copying probability is greater than or equal to a preset probability threshold value, determining the image to be identified as a copied image.
A reproduction image recognition apparatus comprising:
the first acquisition module is used for acquiring an image to be identified;
the first extraction module is used for inputting the image to be recognized into a head portrait detection model and extracting a head portrait of the image to be recognized;
the conversion module is used for carrying out gray processing on the head portrait of the image to be identified to obtain a gray image of the head portrait;
the second acquisition module is used for acquiring phase information of each pixel point of the gray image and a preset local area block corresponding to each pixel point;
the first calculation module is used for processing the phase information of each pixel point and the preset local area block corresponding to each pixel point through a local phase quantization method and calculating a local phase quantization characteristic value corresponding to each pixel point;
the generating module is used for arranging the local phase quantization characteristic values of all the pixel points to generate a local phase quantization characteristic diagram of the gray level image;
the identification module is used for inputting the local phase quantization characteristic diagram into a trained shallow neural network model, and identifying the local phase quantization characteristic diagram through the shallow neural network model so as to obtain the reproduction probability of the local phase quantization characteristic diagram;
and the determining module is used for determining the image to be identified as the copied image when the copying probability is greater than or equal to a preset probability threshold.
The invention provides a method and a device for identifying a copied image, a computer device, and a storage medium. The method comprises: obtaining an image to be identified; inputting the image to be identified into a head portrait detection model and extracting the head portrait of the image; performing grayscale processing on the head portrait to obtain a grayscale image of the head portrait; transforming the grayscale image by a local phase quantization method to obtain a local phase quantization feature map of the grayscale image; inputting the local phase quantization feature map into a trained shallow neural network model and performing copy-recognition processing on it through the shallow neural network model to obtain a copying probability; and, when the copying probability is greater than or equal to a preset probability threshold, determining that the image to be identified is a copied image. In this way, the head portrait in the image to be identified is extracted and converted into a grayscale image, the grayscale image is transformed into a local phase quantization feature map by the local phase quantization method, and the feature map is input into the trained shallow neural network model, which recognizes its texture features and outputs a statistical recognition result, namely the copying probability of the image to be identified. When the copying probability exceeds the preset threshold, the image to be identified is determined to be a copied image. Automatic identification of copied images is thus realized, and identification accuracy, efficiency, and reliability are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a schematic diagram of an application environment of a method for recognizing a copied image according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for identifying a copied image according to an embodiment of the present invention;
FIG. 3 is a flow chart of a method for identifying a copied image according to another embodiment of the present invention;
FIG. 4 is a flowchart illustrating step S20 of the method for recognizing a copied image according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating step S70 of the method for recognizing a copied image according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating the method for recognizing a copied image before step S70 according to an embodiment of the present invention;
FIG. 7 is a schematic block diagram of a device for recognizing a copied image according to an embodiment of the present invention;
FIG. 8 is a block diagram of a first extraction module of the apparatus for recognizing a copied image according to an embodiment of the present invention;
FIG. 9 is a functional block diagram of an identification module in the apparatus for recognizing a copied image according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a computer device in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The method for recognizing a copied image provided by the invention can be applied in the application environment shown in fig. 1, in which a client (computer device) communicates with a server through a network. The client (computer device) includes, but is not limited to, personal computers, notebook computers, smart phones, tablet computers, cameras, and portable wearable devices. The server may be implemented as a stand-alone server or as a server cluster consisting of a plurality of servers.
In an embodiment, as shown in fig. 2, a method for identifying a copied image is provided, which mainly includes the following steps S10-S80:
and S10, acquiring the image to be recognized.
The image to be recognized may be a color image of the front of an identity card comprising RGB channels (the three channels being a red channel, a green channel, and a blue channel). In one embodiment, the server may obtain the image to be identified by capturing it with a camera installed on the terminal, or by reading it from a local album stored on the terminal.
And S20, inputting the image to be recognized into a head portrait detection model, and extracting the head portrait of the image to be recognized.
Understandably, the image to be recognized is input into a head portrait detection model that uses the mtcnn (multi-task convolutional neural network) algorithm. The mtcnn algorithm jointly detects the head portrait region above the clavicle and the corresponding key points, and outputs the final head portrait region and feature point positions. Head portrait detection is performed on the image to be identified through the mtcnn algorithm in the head portrait detection model to determine whether the image contains a head portrait; when the head portrait detection model detects that the image to be recognized contains a head portrait, the corresponding region is extracted and marked as the head portrait of the image to be recognized. For example: a picture of the front of an identity card is input into the head portrait detection model; if a head portrait is detected, the picture is cropped according to the detected head portrait region (usually a rectangular region), and the cropped image is the head portrait of the identity card picture.
In an embodiment, as shown in fig. 4, step S20, namely inputting the image to be recognized into the head portrait detection model and extracting the head portrait of the image to be recognized, includes:
S201, inputting the image to be recognized into a head portrait detection model, and performing head portrait detection on the image to be recognized through the mtcnn algorithm in the head portrait detection model to obtain a head portrait confidence probability of the image to be recognized;
understandably, the image to be recognized is input into an avatar detection model, avatar detection is performed on the image to be recognized through an mtcnn algorithm in the avatar detection model, and the avatar confidence probability of the image to be recognized can be extracted. For example, the image to be recognized is the positive image of the identity card, and the accuracy rate of detecting the positive image of the identity card can be 99.8% according to the experimental data.
S202, when the head portrait confidence probability of the image to be recognized is larger than a preset threshold value, extracting a head portrait area of the image to be recognized, and marking the head portrait area as a head portrait of the image to be recognized.
Understandably, when the head portrait confidence probability is greater than the preset threshold, the image to be recognized is judged to contain a head portrait and its head portrait region can be identified, the head portrait region being the rectangular region of the person's head, viewed from the front and above the clavicle, in the image to be recognized. Whether the image to be recognized contains a head portrait is therefore determined from the head portrait confidence probability, for example: when the preset threshold is set to 90%, an image whose head portrait confidence probability exceeds 90% is determined to contain a head portrait.
Recognizing the head portrait with the mtcnn algorithm therefore reduces data interference from image regions outside the head portrait area; because the texture features of the image to be identified are mainly concentrated in the head portrait area, the accuracy of copy recognition is improved.
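As an illustration of steps S201 and S202, the Python sketch below uses the open-source mtcnn package as a stand-in for the head portrait detection model; the package choice, the helper function name extract_head_portrait, and the 0.9 confidence threshold are illustrative assumptions rather than part of the patent.

```python
# Sketch of steps S201/S202: detect the head portrait with an MTCNN detector
# and keep it only if the confidence exceeds a preset threshold.
import cv2                      # pip install opencv-python
from mtcnn import MTCNN         # pip install mtcnn

def extract_head_portrait(image_bgr, conf_threshold=0.9):
    """Return the cropped head portrait region, or None if no confident detection."""
    detector = MTCNN()
    image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    detections = detector.detect_faces(image_rgb)   # list of dicts with 'box' and 'confidence'
    if not detections:
        return None
    best = max(detections, key=lambda d: d["confidence"])
    if best["confidence"] < conf_threshold:
        return None
    x, y, w, h = best["box"]
    x, y = max(x, 0), max(y, 0)
    return image_bgr[y:y + h, x:x + w]
```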
And S30, performing gray level processing on the head portrait of the image to be recognized to obtain a gray level image of the head portrait.
Understandably, the head portrait of the image to be identified contains the three RGB channels, that is, each pixel point has three component values: an R component value, a G component value, and a B component value. Grayscale processing is applied to each pixel point, and the gray value of each pixel is obtained through a weighted-average formula, thereby generating the grayscale image of the head portrait. In this way the three-channel head portrait is converted into a single-channel grayscale image, so only one channel needs to be processed instead of three, reducing the processing load.
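A minimal sketch of the weighted-average grayscale conversion of step S30 follows; the text only says "weighted average method", so the ITU-R BT.601 weights 0.299/0.587/0.114 used here are an assumed choice.

```python
# Sketch of step S30: convert the three-channel head portrait to a single-channel
# grayscale image with a weighted average of the R, G and B component values.
import numpy as np

def to_grayscale(head_rgb: np.ndarray) -> np.ndarray:
    weights = np.array([0.299, 0.587, 0.114])          # assumed BT.601 weights
    gray = head_rgb[..., :3].astype(np.float64) @ weights
    return gray.astype(np.uint8)
```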
And S40, acquiring phase information of each pixel point of the gray image and a preset local area block corresponding to each pixel point.
Understandably, the grayscale image is composed of pixel points. For example, an 80 x 60 grayscale image, that is, an image 80 pixels long and 60 pixels wide, has 4800 pixel points in total. The preset local area block corresponding to each pixel point is a square area with side length M centered on that pixel point, where M may be an odd number such as 3, 5, or 7, so that the center of the preset local area block coincides with the pixel point.
And S50, processing the phase information of each pixel point and the preset local area block corresponding to each pixel point by a local phase quantization method, and calculating a local phase quantization characteristic value corresponding to each pixel point.
Understandably, the Local Phase Quantization (LPQ) method performs a Fourier transform on the grayscale image to obtain its phase information, from which the phase information of each pixel point and of the preset local area block corresponding to each pixel point is obtained; the preset local area block may be a square area with side length M. The phase information is a set of phases in four directions for a pixel point, and the local phase quantization feature value corresponding to each pixel point is calculated by the local phase quantization method. The local phase quantization feature value is also called the LPQ feature value: quantizing each pixel point by the LPQ method yields a corresponding integer value from 0 to 255.
In this way, image feature extraction is performed after the grayscale image is generated, and the local phase quantization method exploits the blur invariance of the Fourier transform phase, so the phase information of the original image is preserved and the accuracy and reliability of recognition are improved.
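The sketch below illustrates steps S40 to S60 with the classical LPQ formulation: the short-term Fourier transform of an M x M neighbourhood is evaluated at four low frequencies, and the signs of the real and imaginary parts are packed into an 8-bit code from 0 to 255. The window size M = 7 and the use of same-size convolution are assumptions made for illustration, not values fixed by the patent.

```python
# Sketch of steps S40-S60: a basic LPQ transform producing a code image the
# same size as the grayscale input, with integer values in 0..255.
import numpy as np
from scipy.signal import convolve2d

def lpq_feature_map(gray: np.ndarray, M: int = 7) -> np.ndarray:
    gray = gray.astype(np.float64)
    r = (M - 1) // 2
    x = np.arange(-r, r + 1)
    f = 1.0 / M                                  # lowest non-zero frequency
    w0 = np.ones_like(x, dtype=complex)          # DC filter
    w1 = np.exp(-2j * np.pi * f * x)             # frequency-f filter
    w2 = np.conj(w1)

    def stft(row_filt, col_filt):
        # Separable complex convolution over the M x M neighbourhood.
        tmp = convolve2d(gray, row_filt.reshape(-1, 1), mode="same")
        return convolve2d(tmp, col_filt.reshape(1, -1), mode="same")

    # Four low-frequency points: (f,0), (0,f), (f,f), (f,-f)
    responses = [stft(w0, w1), stft(w1, w0), stft(w1, w1), stft(w1, w2)]

    code = np.zeros(gray.shape, dtype=np.int32)
    for i, resp in enumerate(responses):
        code += (np.real(resp) > 0).astype(np.int32) << (2 * i)
        code += (np.imag(resp) > 0).astype(np.int32) << (2 * i + 1)
    return code.astype(np.uint8)                 # arrangement matrix = feature map
```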
And S60, arranging the local phase quantization characteristic values of all the pixel points to generate a local phase quantization characteristic diagram of the gray level image.
Understandably, the local phase quantization feature values of all the pixel points are arranged according to the positions of their corresponding pixel points to generate the local phase quantization feature map of the grayscale image. For example, if the grayscale image is 80 pixels long and 60 pixels wide, i.e. an arrangement matrix of 4800 pixel points in total, the local phase quantization feature map is likewise an 80 x 60 arrangement matrix of 4800 feature values.
In this way, because the phase information obtained by the Fourier transform has blur invariance, extracting texture features from the phase information improves the robustness of head portrait recognition. Image analysis shows that copied images exhibit distinctive texture features such as ripples and abnormal stripes, and the local phase quantization method is efficient and highly discriminative when applied to texture classification; therefore, recognizing texture features through local phase quantization for copy detection improves recognition accuracy and reliability.
And S70, inputting the local phase quantization characteristic diagram into a trained shallow neural network model, and identifying the local phase quantization characteristic diagram through the shallow neural network model to obtain the reproduction probability of the local phase quantization characteristic diagram.
Understandably, the shallow neural network model is a trained neural network model. The local phase quantization feature map is input into the shallow neural network model, which recognizes its texture features and produces a statistical recognition result, from which the copying probability of the local phase quantization feature map is obtained. Because the input to the shallow neural network model is already quantized image data, no complex image transformation is needed at the input layer; the model therefore has a high processing speed and a small size, and can be deployed on portable mobile terminals with limited storage.
In an embodiment, as shown in fig. 5, in the step S70, that is, inputting the local phase quantization feature map into a trained shallow neural network model, and performing a copying recognition process on the local phase quantization feature map through the shallow neural network model to obtain a copying probability, the method includes:
and S701, extracting a local phase quantization feature histogram from the local phase quantization feature map.
Understandably, all local phase quantization feature values in the feature map are summarized statistically. Since the feature values are integers from 0 to 255, the summary yields 256-dimensional statistics, which are represented as a histogram: the local phase quantization feature histogram. Its abscissa is the 256 possible feature values, and its ordinate is the number of pixels whose local phase quantization feature value equals the corresponding abscissa value.
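A short sketch of the 256-bin histogram extraction of step S701; the function name is illustrative.

```python
# Sketch of step S701: summarize the LPQ code image into a 256-bin histogram.
import numpy as np

def lpq_histogram(code_map: np.ndarray) -> np.ndarray:
    """256-dimensional vector; bin i counts pixels whose LPQ value equals i."""
    return np.bincount(code_map.ravel(), minlength=256).astype(np.float32)
```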
S702, enhancing the local phase quantization feature histogram through a Gaussian noise algorithm to obtain neuron data.
Understandably, the local phase quantization feature histogram is enhanced by a function in the Gaussian noise algorithm: the mean and variance of the histogram are calculated, the histogram is converted into a Gaussian distribution diagram using this mean and variance, and the parameter data in the Gaussian distribution diagram (comprising a plurality of neurons) is marked as the neuron data.
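One plausible reading of step S702 is to normalize the histogram with its own mean and variance and optionally perturb it with small Gaussian noise during training; the patent does not give the exact formula, so the sketch below is an assumption rather than the claimed enhancement.

```python
# Assumed sketch of step S702: Gaussian-style standardization of the histogram,
# with optional Gaussian noise as a training-time enhancement.
import numpy as np

def enhance_histogram(hist: np.ndarray, noise_std: float = 0.0) -> np.ndarray:
    mean, std = hist.mean(), hist.std() + 1e-8
    normalized = (hist - mean) / std
    if noise_std > 0:
        normalized = normalized + np.random.normal(0.0, noise_std, size=hist.shape)
    return normalized.astype(np.float32)          # "neuron data" fed to the network
```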
S703, inputting the neuron data into a random inactivation layer in the shallow neural network, performing texture feature extraction on the neuron data through the random inactivation layer, and acquiring the prediction probability output by the random inactivation layer and matched with the texture feature.
Understandably, the random inactivation layer, also called a dropout layer, weights the neurons in the neuron data according to its parameters and outputs a probability. That is, the random inactivation layer feeds the neurons into a parameter-weighting function, which extracts the texture feature of each neuron and computes the texture-feature probability corresponding to each neuron; the output texture-feature probability is marked as the prediction probability matched with the texture feature.
S704, inputting the prediction probability into an activation layer in the shallow neural network, and processing the prediction probability by the activation layer through a sigmoid function to obtain the reproduction probability of the local phase quantization feature map.
Understandably, the prediction probability is input into the sigmoid function in the activation layer, which classifies it. The sigmoid function can be configured for multiple classes as required, i.e. it outputs a probability value for each category; preferably it is configured for two classes, a copying category and a non-copying category, so that it outputs the probability of copying and the probability of non-copying. The probability of copying output by the sigmoid function is marked as the copying probability of the local phase quantization feature map.
Thus, because the shallow neural network model receives input whose texture features have already been enhanced, an efficient and simple network structure is sufficient for processing the local phase quantization feature map.
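A minimal Keras sketch of the shallow network of steps S703 and S704, with a dropout (random inactivation) layer and a sigmoid activation layer, follows. The layer sizes, the dropout rate, and the use of a single sigmoid output unit (instead of the two-class sigmoid output described above) are assumptions for illustration.

```python
# Sketch of S703/S704: a shallow network taking the 256-d enhanced LPQ
# histogram and outputting the copying probability through a sigmoid unit.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_shallow_model(input_dim: int = 256) -> tf.keras.Model:
    model = models.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        layers.Dense(64, activation="relu"),       # small dense stack (assumed size)
        layers.Dropout(0.5),                        # random inactivation layer
        layers.Dense(1, activation="sigmoid"),      # activation layer -> copying probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```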
In an embodiment, as shown in fig. 6, before step S70, that is, before inputting the local phase quantization feature map into the trained shallow neural network model and performing copy-recognition processing on it through the shallow neural network model to obtain the copying probability, the method includes:
s705, obtaining an image sample, wherein the image sample comprises a copied image sample and a non-copied image sample, and inputting the image sample into the shallow neural network model; wherein each of the image samples is associated with a copy label.
The image samples include copied image samples and non-copied image samples, and each image sample is associated with a copy label (copied image or non-copied image). The image samples are images that have been gray-processed and transformed by the local phase quantization method. Because many business scenarios require copy recognition of identity cards, the selected copied and non-copied identity card images may each account for 50% of the image samples; the aim is to train the shallow neural network model to cover all usage scenarios and thereby improve its reliability.
S706, extracting texture features of the image sample through the shallow neural network model containing initial parameters.
Understandably, the shallow neural network model comprises an input layer, a convolutional layer, and a fully connected layer, and its structure can be set as required. Since the image obtained after the local phase quantization transform shows obvious texture features, the model can be a simple neural network for a binary classification problem, and the texture features of the image sample, including ripple (moire) patterns and abnormal stripe features, are extracted through the convolutional layer.
And S707, obtaining an identification result output by the shallow neural network model according to the texture feature, and determining a loss value according to the identification result and the matching degree of the reproduction tag.
Understandably, the shallow neural network model outputs a two-class recognition result, i.e. a probability of copying and a probability of non-copying, and the loss value is computed by the loss function of the shallow neural network model from the recognition result and the copy label associated with the image sample.
And S708, when the loss value reaches a preset convergence condition, finishing training of the shallow neural network model.
Understandably, the preset convergence condition may be that the loss value is small and no longer decreases after 5000 iterations, i.e. when the loss value becomes small and stops decreasing after 5000 iterations, training is stopped and the converged shallow neural network model is recorded as the trained shallow neural network model. The preset convergence condition may also be that the loss value is smaller than a set threshold, i.e. when the loss value falls below the set threshold, training is stopped and the converged shallow neural network model is recorded as the trained shallow neural network model.
The shallow neural network model is used to recognize the texture features of the image samples; after being trained for texture-feature recognition, its accuracy can reach 90%.
As described above, because the input to the shallow neural network model is quantized image data, no complex image transformation is needed at the input layer and only texture features need to be recognized; the model therefore has a high processing speed and a small size (generally about 300 bytes), and can be applied on portable mobile terminals with limited storage.
In an embodiment, when the loss value does not reach the preset convergence condition, the initial parameters of the shallow neural network model are iteratively updated until the loss value reaches the preset convergence condition, and the converged model is recorded as the trained shallow neural network model. Iteratively updating the initial parameters means that new parameter values are computed with loss-function optimization algorithms matched to different ranges of the loss value; updating the initial parameters through these optimization algorithms improves the efficiency of model recognition.
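A sketch of the training loop of steps S705 to S708: LPQ histogram features of copied and non-copied samples, with 0/1 copy labels, are fed to the shallow model, and training stops when the loss no longer improves, approximating the preset convergence condition. The patience value and variable names are illustrative assumptions.

```python
# Sketch of S705-S708: train the shallow model on labeled LPQ histograms and
# stop when the loss stops decreasing (a stand-in for the convergence condition).
import numpy as np
import tensorflow as tf

def train_shallow_model(model, features: np.ndarray, labels: np.ndarray):
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="loss", patience=10, restore_best_weights=True)
    model.fit(features, labels,
              epochs=5000,            # upper bound; training usually stops earlier
              batch_size=32,
              callbacks=[early_stop],
              verbose=0)
    return model
```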
And S80, when the copying probability is larger than or equal to a preset probability threshold value, determining that the image to be identified is a copied image.
Understandably, the preset probability threshold can be set as required; preferably it is set to 90%, with which the accuracy of the shallow neural network model reaches 96%. When the copying probability is greater than or equal to the preset probability threshold, the image to be identified is determined to be a copied image.
The method obtains an image to be identified; inputs it into a head portrait detection model and extracts the head portrait; transforms the head portrait by the local phase quantization method to obtain a local phase quantization feature map of the head portrait; inputs the feature map into the trained shallow neural network model, which performs copy-recognition processing on it to obtain the copying probability; and, when the copying probability is greater than or equal to the preset probability threshold, determines that the image to be identified is a copied image. By extracting the head portrait, converting it into a local phase quantization feature map, and recognizing the texture features of the feature map with the trained shallow neural network model, a statistical texture-feature recognition result, namely the copying probability of the image to be identified, is obtained; when the copying probability exceeds the preset threshold, the image is determined to be a copied image. Automatic recognition of copied images is thus realized, and recognition accuracy, efficiency, and reliability are improved.
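For illustration only, the helper functions sketched in the preceding sections (assumed to be defined in the same module) can be chained into an end-to-end check; the 0.9 threshold follows the preset probability threshold mentioned above.

```python
# Illustrative end-to-end use of the sketches above; all helper names
# (extract_head_portrait, to_grayscale, lpq_feature_map, lpq_histogram,
# enhance_histogram) are the assumed functions defined earlier in this module.
import cv2

def is_copied_image(image_path: str, model, threshold: float = 0.9) -> bool:
    image = cv2.imread(image_path)
    head = extract_head_portrait(image)
    if head is None:
        raise ValueError("no head portrait detected in the image")
    gray = to_grayscale(cv2.cvtColor(head, cv2.COLOR_BGR2RGB))
    hist = enhance_histogram(lpq_histogram(lpq_feature_map(gray)))
    probability = float(model.predict(hist.reshape(1, -1), verbose=0)[0, 0])
    return probability >= threshold
```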
In another embodiment, as shown in fig. 3, after step S70, that is, after the local phase quantization feature map is input into the trained shallow neural network model and copy-recognition processing is performed on it to obtain the copying probability, the method further includes:
and S90, judging whether the copying probability is smaller than the preset probability threshold value.
Understandably, the copying probability obtained in step S70 is compared with the preset probability threshold.
S100, if the copying probability is smaller than the preset probability threshold value, determining that the image to be identified is a non-copying image.
Understandably, if the copying probability does not reach the preset probability threshold, the texture features of the image to be recognized are not pronounced, so the image to be recognized is determined to be a non-copied image, that is, a genuine image.
S110, obtaining user identity information associated with the image to be recognized, and storing the image to be recognized to a specified storage position associated with the user identity information.
Understandably, the user identity information associated with the image to be recognized, such as the user's identification number and name, is obtained, and the image to be recognized is stored in a specified storage location associated with that identity information; the specified storage location may be a user table in a database. Because the image to be recognized is a genuine image, this records the user identity information and also makes it convenient to subsequently read or verify the user's identity.
In an embodiment, a device for recognizing a copied image is provided, and the device corresponds one-to-one to the method for recognizing a copied image in the above embodiments. As shown in fig. 7, the device for recognizing a copied image includes a first acquiring module 11, a first extracting module 12, a converting module 13, a second acquiring module 14, a first calculating module 15, a generating module 16, a recognizing module 17, and a determining module 18. The functional modules are explained in detail as follows:
the first obtaining module 11 is configured to obtain an image to be identified.
And the first extraction module 12 is configured to input the image to be recognized into the head portrait detection model, and extract a head portrait of the image to be recognized.
And the conversion module 13 is configured to perform gray processing on the head portrait of the image to be identified to obtain a gray image of the head portrait.
The second obtaining module 14 is configured to obtain phase information of each pixel point of the grayscale image and a preset local area block corresponding to each pixel point.
The first calculating module 15 is configured to process the phase information of each pixel point and the preset local area block corresponding to each pixel point by using a local phase quantization method, and calculate a local phase quantization characteristic value corresponding to each pixel point.
And the generating module 16 is configured to arrange the local phase quantization feature values of all the pixel points, and generate a local phase quantization feature map of the grayscale image.
And the recognition module 17 is configured to input the local phase quantization feature map into a trained shallow neural network model, and perform recognition processing on the local phase quantization feature map through the shallow neural network model to obtain a reproduction probability of the local phase quantization feature map.
And the determining module 18 is configured to determine that the image to be identified is a copied image when the copying probability is greater than or equal to a preset probability threshold.
In one embodiment, as shown in fig. 8, the first extraction module 12 includes:
the detection module 21 is configured to input the image to be recognized into an avatar detection model, perform avatar detection on the image to be recognized through an mtcnn algorithm in the avatar detection model, and obtain an avatar confidence probability of the image to be recognized;
the marking module 22 is configured to, when the head portrait confidence probability of the image to be recognized is greater than a preset threshold, extract a head portrait region of the image to be recognized, and mark the head portrait region as a head portrait of the image to be recognized.
In one embodiment, as shown in fig. 9, the identification module 17 includes:
a second extraction module 71, configured to extract a local phase quantization feature histogram from the local phase quantization feature map;
the enhancing module 72 is configured to perform enhancement processing on the local phase quantization feature histogram through a gaussian noise algorithm to obtain neuron data;
a second calculating module 73, configured to input the neuron data into a random inactivation layer in the shallow neural network, perform texture feature extraction on the neuron data through the random inactivation layer, and obtain a prediction probability output by the random inactivation layer and matching the texture feature;
and the output module 74 is configured to input the prediction probability into an activation layer in the shallow neural network, where the activation layer processes the prediction probability through a sigmoid function to obtain a reproduction probability of the local phase quantization feature map.
In one embodiment, the device for recognizing a copied image further includes:
the judging module is used for judging whether the copying probability is smaller than the preset probability threshold value or not;
the confirming module is used for determining the image to be identified as a non-reproduction image if the reproduction probability is smaller than the preset probability threshold;
and the storage module is used for acquiring the user identity information associated with the image to be recognized and storing the image to be recognized to the specified storage position associated with the user identity information.
In one embodiment, the identification module 17 further comprises:
the acquisition sample unit is used for acquiring an image sample, wherein the image sample comprises a copied image sample and a non-copied image sample, and the image sample is input into the shallow neural network model; wherein each of the image samples is associated with a copy label;
the training extraction unit is used for extracting the texture features of the image sample through the shallow neural network model containing initial parameters;
the training and identifying unit is used for acquiring an identification result output by the shallow neural network model according to the textural features and determining a loss value according to the identification result and the matching degree of the reproduced label;
and the training completion unit is used for completing the training of the shallow neural network model when the loss value reaches a preset convergence condition.
For specific limitations of the device for recognizing a copied image, reference may be made to the above limitations of the method for recognizing a copied image, which are not repeated here. All or part of the modules in the device can be implemented by software, by hardware, or by a combination thereof. The modules can be embedded in hardware in, or be independent of, a processor in the computer device, or be stored in software form in a memory of the computer device, so that the processor can call them and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of recognizing a copied image.
In one embodiment, a computer device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the method for recognizing a copied image in the above embodiments is implemented.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, implements the method of recognizing a copied image in the above-described embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A method for recognizing a copied image is characterized by comprising the following steps:
acquiring an image to be identified;
inputting the image to be recognized into a head portrait detection model, and extracting a head portrait of the image to be recognized;
carrying out gray level processing on the head portrait of the image to be identified to obtain a gray level image of the head portrait;
acquiring phase information of each pixel point of the gray image and a preset local area block corresponding to each pixel point;
processing the phase information of each pixel point and the preset local area block corresponding to each pixel point by a local phase quantization method, and calculating a local phase quantization characteristic value corresponding to each pixel point;
arranging the local phase quantization characteristic values of all the pixel points to generate a local phase quantization characteristic diagram of the gray level image;
inputting the local phase quantization characteristic diagram into a trained shallow neural network model, and identifying the local phase quantization characteristic diagram through the shallow neural network model to obtain the reproduction probability of the local phase quantization characteristic diagram;
and when the copying probability is greater than or equal to a preset probability threshold value, determining the image to be identified as a copied image.
2. The method for recognizing the copied image according to claim 1, wherein the inputting the image to be recognized into a head portrait detection model and extracting the head portrait of the image to be recognized comprises:
inputting the image to be recognized into an avatar detection model, and performing avatar detection on the image to be recognized through an mtcnn algorithm in the avatar detection model to obtain an avatar confidence probability of the image to be recognized;
and when the head portrait confidence probability of the image to be recognized is greater than a preset threshold value, extracting a head portrait area of the image to be recognized, and marking the head portrait area as a head portrait of the image to be recognized.
3. The method for recognizing the copied image according to claim 1, wherein the inputting the local phase quantization feature map into a trained shallow neural network model, and performing recognition processing on the local phase quantization feature map through the shallow neural network model to obtain the copying probability of the local phase quantization feature map comprises:
extracting a local phase quantization feature histogram from the local phase quantization feature map;
enhancing the local phase quantization characteristic histogram through a Gaussian noise algorithm to obtain neuron data;
inputting the neuron data into a random inactivation layer in the shallow neural network, performing texture feature extraction on the neuron data through the random inactivation layer, and acquiring a prediction probability output by the random inactivation layer and matched with the texture feature;
and inputting the prediction probability into an activation layer in the shallow neural network, and processing the prediction probability by the activation layer through a sigmoid function to obtain the reproduction probability of the local phase quantization characteristic diagram.
4. The method for recognizing the copied image according to claim 1, wherein after the copy-recognition processing is performed on the local phase quantization feature map through the shallow neural network model to obtain the copying probability, the method comprises:
judging whether the copying probability is smaller than the preset probability threshold value or not;
if the copying probability is smaller than the preset probability threshold value, determining that the image to be identified is a non-copying image;
and acquiring user identity information associated with the image to be recognized, and storing the image to be recognized to a specified storage position associated with the user identity information.
5. The method for recognizing the copied image according to claim 1, wherein before the input of the local phase quantization feature map into the trained shallow neural network model and the copying recognition processing of the local phase quantization feature map by the shallow neural network model to obtain the copying probability, the method comprises:
acquiring an image sample, wherein the image sample comprises a copied image sample and a non-copied image sample, and inputting the image sample into the shallow neural network model; wherein each of the image samples is associated with a copy label;
extracting texture features of the image sample through the shallow neural network model containing initial parameters;
acquiring an identification result output by the shallow neural network model according to the textural features, and determining a loss value according to the identification result and the matching degree of the reproduction label;
and when the loss value reaches a preset convergence condition, finishing the training of the shallow neural network model.
6. A reproduction image recognition apparatus, comprising:
the first acquisition module is used for acquiring an image to be identified;
the first extraction module is used for inputting the image to be recognized into a head portrait detection model and extracting a head portrait of the image to be recognized;
the conversion module is used for carrying out gray processing on the head portrait of the image to be identified to obtain a gray image of the head portrait;
the second acquisition module is used for acquiring phase information of each pixel point of the gray image and a preset local area block corresponding to each pixel point;
the first calculation module is used for processing the phase information of each pixel point and the preset local area block corresponding to each pixel point through a local phase quantization method and calculating a local phase quantization characteristic value corresponding to each pixel point;
the generating module is used for arranging the local phase quantization characteristic values of all the pixel points to generate a local phase quantization characteristic diagram of the gray level image;
the identification module is used for inputting the local phase quantization characteristic diagram into a trained shallow neural network model, and identifying the local phase quantization characteristic diagram through the shallow neural network model so as to obtain the reproduction probability of the local phase quantization characteristic diagram;
and the determining module is used for determining the image to be identified as the copied image when the copying probability is greater than or equal to a preset probability threshold.
7. The apparatus for recognizing a copied image according to claim 6, wherein said first extraction module comprises:
the detection module is used for inputting the image to be recognized into an avatar detection model, and performing avatar detection on the image to be recognized through an mtcnn algorithm in the avatar detection model so as to obtain an avatar confidence probability of the image to be recognized;
and the marking module is used for extracting the head portrait area of the image to be recognized and marking the head portrait area as the head portrait of the image to be recognized when the head portrait confidence probability of the image to be recognized is greater than a preset threshold value.
8. The apparatus for recognizing a reproduction image according to claim 6, wherein the recognition module comprises:
the second extraction module is used for extracting a local phase quantization feature histogram from the local phase quantization feature map;
the enhancement module is used for enhancing the local phase quantization characteristic histogram through a Gaussian noise algorithm to obtain neuron data;
the second calculation module is used for inputting the neuron data into a random inactivation layer in the shallow neural network, performing texture feature extraction on the neuron data through the random inactivation layer, and acquiring a prediction probability which is output by the random inactivation layer and matched with the texture feature;
and the output module is used for inputting the prediction probability into an activation layer in the shallow neural network, and the activation layer processes the prediction probability through a sigmoid function to obtain the reproduction probability of the local phase quantization characteristic diagram.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method of recognizing a copied image according to any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the method for recognizing a copied image according to any one of claims 1 to 5.
CN201911366794.2A 2019-12-26 2019-12-26 Method, device, equipment and medium for identifying copied image Pending CN111191568A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911366794.2A CN111191568A (en) 2019-12-26 2019-12-26 Method, device, equipment and medium for identifying copied image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911366794.2A CN111191568A (en) 2019-12-26 2019-12-26 Method, device, equipment and medium for identifying copied image

Publications (1)

Publication Number Publication Date
CN111191568A (en) 2020-05-22

Family

ID=70705848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911366794.2A Pending CN111191568A (en) 2019-12-26 2019-12-26 Method, device, equipment and medium for identifying copied image

Country Status (1)

Country Link
CN (1) CN111191568A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754166A (en) * 2020-05-29 2020-10-09 大亚湾核电运营管理有限责任公司 Nuclear power station spare part cross-power station allocation method and system and storage medium
CN111666890B (en) * 2020-06-08 2023-06-30 平安科技(深圳)有限公司 Spine deformation crowd identification method and device, computer equipment and storage medium
CN111666890A (en) * 2020-06-08 2020-09-15 平安科技(深圳)有限公司 Spine deformation crowd identification method and device, computer equipment and storage medium
CN112116564A (en) * 2020-09-03 2020-12-22 深圳大学 Method and device for generating countermeasure sample for anti-copying detection and storage medium
CN112116564B (en) * 2020-09-03 2023-10-20 深圳大学 Anti-beat detection countermeasure sample generation method, device and storage medium
CN112258481A (en) * 2020-10-23 2021-01-22 北京云杉世界信息技术有限公司 Portal photo reproduction detection method
CN112396058A (en) * 2020-11-11 2021-02-23 深圳大学 Document image detection method, device, equipment and storage medium
CN112396058B (en) * 2020-11-11 2024-04-09 深圳大学 Document image detection method, device, equipment and storage medium
CN113128521A (en) * 2021-04-30 2021-07-16 西安微电子技术研究所 Method and system for extracting features of miniaturized artificial intelligence model, computer equipment and storage medium
CN113128521B (en) * 2021-04-30 2023-07-18 西安微电子技术研究所 Method, system, computer equipment and storage medium for extracting characteristics of miniaturized artificial intelligent model
CN114005019A (en) * 2021-10-29 2022-02-01 北京有竹居网络技术有限公司 Method for identifying copied image and related equipment thereof
CN114005019B (en) * 2021-10-29 2023-09-22 北京有竹居网络技术有限公司 Method for identifying flip image and related equipment thereof
CN114333037B (en) * 2022-02-25 2022-05-13 北京结慧科技有限公司 Identification method and system for copied photo containing identity card
CN114333037A (en) * 2022-02-25 2022-04-12 北京结慧科技有限公司 Identification method and system for copied photo containing identity card
CN114565827A (en) * 2022-04-29 2022-05-31 深圳爱莫科技有限公司 Cigarette display anti-cheating detection method based on image recognition and model training method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination