CN111191568B - Method, device, equipment and medium for identifying flip image - Google Patents

Method, device, equipment and medium for identifying flip image

Info

Publication number
CN111191568B
Authority
CN
China
Prior art keywords
image
identified
phase quantization
local phase
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911366794.2A
Other languages
Chinese (zh)
Other versions
CN111191568A (en)
Inventor
喻晨曦 (Yu Chenxi)
Current Assignee
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd
Priority to CN201911366794.2A
Publication of CN111191568A
Application granted
Publication of CN111191568B
Current legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, a device, computer equipment and a storage medium for identifying a flipped (recaptured) image. The method comprises the following steps: acquiring an image to be identified; inputting the image to be identified into a head portrait detection model and extracting the head photo of the image to be identified; performing gray processing on the head photo to obtain a gray image of the head photo; transforming the gray image by a local phase quantization method to obtain a local phase quantization feature map of the gray image; inputting the local phase quantization feature map into a trained shallow neural network model, which performs flip recognition on the feature map to obtain a flip probability; and when the flip probability is greater than or equal to a preset probability threshold, determining that the image to be identified is a flipped image. In this way, automatic identification of flipped images is realized, and the accuracy, efficiency and reliability of identification are improved.

Description

Method, device, equipment and medium for identifying flip image
Technical Field
The present invention relates to the field of image classification, and in particular, to a method and apparatus for identifying a flipped image, a computer device, and a storage medium.
Background
With the development of the credit society, more and more application scenarios (such as those in finance, insurance and security) need to verify a user's identity through certificate recognition and face recognition. In the prior art, identity verification is mainly performed manually, which consumes a large amount of human resources and waiting time. Moreover, as digital photography improves, lawbreakers increasingly attempt to pass identity verification with recaptured (flipped) images. Manual identification of such flipped images has low accuracy and is error-prone, and if a flipped image goes undetected during identity verification, user information is exposed to security risks.
Disclosure of Invention
The invention provides a method, a device, computer equipment and a storage medium for identifying a flipped image, which realize automatic identification of flipped images and improve the accuracy, efficiency and reliability of identification.
A method for identifying a flipped image, comprising:
acquiring an image to be identified;
inputting the image to be identified into a head portrait detection model, and extracting a head portrait of the image to be identified;
carrying out gray processing on the head photo of the image to be identified to obtain a gray image of the head photo;
acquiring phase information of each pixel point of the gray level image and a preset local area block corresponding to each pixel point;
processing the phase information of each pixel point and the preset local area block corresponding to each pixel point through a local phase quantization method, and calculating a local phase quantization characteristic value corresponding to each pixel point;
arranging the local phase quantization characteristic values of all the pixel points to generate a local phase quantization characteristic map of the gray image;
inputting the local phase quantization feature map into a trained shallow neural network model, and performing flip recognition on the local phase quantization feature map through the shallow neural network model to obtain a flip probability;
and when the flip probability is greater than or equal to a preset probability threshold, determining that the image to be identified is a flipped image.
A flipped-image recognition device, comprising:
a first acquisition module for acquiring an image to be identified;
a first extraction module for inputting the image to be identified into a head portrait detection model and extracting the head photo of the image to be identified;
a conversion module for performing gray processing on the head photo of the image to be identified to obtain a gray image of the head photo;
a second acquisition module for acquiring phase information of each pixel point of the gray image and a preset local area block corresponding to each pixel point;
a first calculation module for processing the phase information of each pixel point and the preset local area block corresponding to each pixel point through a local phase quantization method, and calculating a local phase quantization feature value corresponding to each pixel point;
a generation module for arranging the local phase quantization feature values of all the pixel points to generate a local phase quantization feature map of the gray image;
a recognition module for inputting the local phase quantization feature map into a trained shallow neural network model and performing flip recognition on the feature map through the shallow neural network model to obtain a flip probability;
and a determination module for determining that the image to be identified is a flipped image when the flip probability is greater than or equal to a preset probability threshold.
The invention provides a method, a device, computer equipment and a storage medium for identifying a flipped image. An image to be identified is obtained; the image is input into a head portrait detection model and its head photo is extracted; the head photo is converted to gray scale to obtain a gray image; the gray image is transformed by a local phase quantization method into a local phase quantization feature map; the feature map is input into a trained shallow neural network model, which performs flip recognition to obtain a flip probability; and when the flip probability is greater than or equal to a preset probability threshold, the image to be identified is determined to be a flipped image. Because the shallow neural network model classifies the feature map by its texture features, a statistical texture-based recognition result and thus a flip probability can be obtained. In this way, automatic identification of flipped images is realized, and the accuracy, efficiency and reliability of identification are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic view of an application environment of a method for identifying a flipped image in an embodiment of the invention;
FIG. 2 is a flow chart of a method for identifying a flipped image in an embodiment of the invention;
FIG. 3 is a flow chart of a method for identifying a flipped image in another embodiment of the invention;
FIG. 4 is a flowchart of step S20 of a method for identifying a flipped image in an embodiment of the present invention;
FIG. 5 is a flowchart of step S70 of a method for identifying a flipped image in an embodiment of the invention;
FIG. 6 is a flowchart before step S70 of a method for identifying a flipped image in accordance with an embodiment of the present invention;
FIG. 7 is a schematic block diagram of a flipped image recognition device in accordance with an embodiment of the present invention;
FIG. 8 is a schematic block diagram of a first extraction module in a flipped image recognition device in accordance with an embodiment of the present invention;
FIG. 9 is a schematic block diagram of an identification module in a flipped image recognition device in accordance with an embodiment of the present invention;
FIG. 10 is a schematic diagram of a computer device in accordance with an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The method for identifying the flipped image provided by the invention can be applied to an application environment as shown in fig. 1, wherein a client (computer equipment) communicates with a server through a network. Among them, clients (computer devices) include, but are not limited to, personal computers, notebook computers, smartphones, tablet computers, cameras, and portable wearable devices. The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
In an embodiment, as shown in fig. 2, a method for identifying a flipped image is provided, and the technical scheme mainly includes the following steps S10-S80:
s10, acquiring an image to be identified.
The image to be identified may be, for example, a front-side color image of an identity card comprising RGB channels (red channel, green channel, blue channel). In one embodiment, the server may obtain the image to be identified from a picture captured by a camera installed on the terminal, or from the local album stored on the terminal.
S20, inputting the image to be identified into a head portrait detection model, and extracting the head portrait of the image to be identified.
Understandably, the image to be identified is input into a head portrait detection model. The head portrait detection model uses the MTCNN (multi-task convolutional neural network) algorithm, which jointly performs detection of the head region (collarbone and above) and of its key points, and outputs the final head region and feature point positions. Head portrait detection is performed on the image to be identified through the MTCNN algorithm to detect whether the image contains a head portrait; when the model detects that the image contains a head portrait, the corresponding region is extracted and recorded as the head photo of the image to be identified. For example: the front photo of an identity card is input into the head portrait detection model; the model detects that the photo contains a head portrait, the front photo is cropped according to the detected head region (generally a rectangular area), and the cropped image is taken as the head photo of the identity card front photo.
In an embodiment, as shown in fig. 4, step S20, that is, inputting the image to be identified into a head portrait detection model and extracting the head portrait of the image to be identified, includes:
s201, inputting the image to be identified into a head portrait detection model, and performing head portrait detection on the image to be identified through mtcnn algorithm in the head portrait detection model so as to obtain head portrait confidence probability of the image to be identified;
Understandably, the image to be identified is input into a head portrait detection model, and head portrait detection is performed on the image through the MTCNN algorithm in the model, so that the head portrait confidence probability of the image to be identified can be obtained. For example, when the image to be identified is the front image of an identity card, experimental data show that the detection accuracy reaches 99.8%.
S202, when the head portrait confidence probability of the image to be identified is larger than a preset threshold value, extracting a head portrait region of the image to be identified, and marking the head portrait region as a head portrait of the image to be identified.
Understandably, when the head portrait confidence probability is greater than a preset threshold, the image to be identified is judged to contain a head portrait, and the head portrait region can be identified; the head portrait region is the rectangular region covering the head (collarbone and above) of the front-facing person in the image. Whether the image contains a head portrait is judged according to its head portrait confidence probability, for example: when the threshold is set at 90%, an image whose head portrait confidence probability exceeds 90% is determined to contain a head portrait.
Therefore, performing head portrait recognition through the MTCNN algorithm reduces the interference from image data outside the head region; since the texture features of the image to be identified are mainly concentrated in the head region, the accuracy of flip recognition can be improved.
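The confidence-gated cropping of steps S201-S202 can be sketched as follows. The helper below is hypothetical: it assumes detections in the dictionary format produced by common MTCNN implementations (a 'box' of [x, y, width, height] and a 'confidence' score), which the patent does not specify.

```python
import numpy as np

def extract_head(image, detections, threshold=0.90):
    """Return the head-photo crop for the first detection whose
    confidence exceeds the preset threshold (S202), else None."""
    for det in detections:
        if det["confidence"] >= threshold:
            x, y, w, h = det["box"]
            return image[y:y + h, x:x + w]
    return None  # no head portrait found in the image
```

With a preset threshold of 90%, an image whose best detection scores below 0.90 is treated as containing no head portrait.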
S30, carrying out gray processing on the head photo of the image to be identified to obtain a gray image of the head photo.
Understandably, the head photo of the image to be identified is a three-channel RGB image; that is, each pixel point has three component values, an R component value, a G component value and a B component value. Each pixel point of the head photo is subjected to gray processing, and the gray value of each pixel point is obtained by the weighted-average formula, thereby generating the gray image of the head photo.
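The weighted-average grayscale conversion of S30 can be sketched as below; the ITU-R BT.601 weights (0.299, 0.587, 0.114) are a common choice, though the patent does not name the exact coefficients.

```python
import numpy as np

def to_gray(rgb):
    """Weighted average of the R, G and B component values per pixel."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return np.round(0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
```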
S40, acquiring phase information of each pixel point of the gray level image and a preset local area block corresponding to each pixel point.
It is understood that the gray image is composed of pixel points; for example, an 80×60 gray image, that is, an image 80 pixels long and 60 pixels wide, contains 4800 pixel points in total. The preset local area block corresponding to each pixel point is a square area centered on that pixel point with side length M, where M is an odd number such as 3, 5 or 7, so that the center of the preset local area block coincides with the pixel point.
S50, processing the phase information of each pixel point and the preset local area block corresponding to each pixel point through a local phase quantization method, and calculating a local phase quantization characteristic value corresponding to each pixel point.
Understandably, the local phase quantization method (Local Phase Quantization, abbreviated LPQ) performs a Fourier transform on the gray image to obtain its phase information; the phase information of one pixel point is the set of its phases in four frequency directions. According to the phase information of each pixel point and the preset local area block corresponding to it (a square area with side length M), the local phase quantization feature value corresponding to each pixel point is calculated by the local phase quantization method. The local phase quantization feature value, also called the LPQ feature value, is an integer from 0 to 255 obtained by quantizing each pixel point with the LPQ method.
In this way, feature extraction is performed after the gray image is generated, and the blur invariance of the Fourier transform phase is exploited during the local phase quantization process, so that the phase information of the original image is retained and the accuracy and reliability of identification are improved.
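A minimal sketch of steps S40-S50: for each pixel point, the short-term Fourier transform is evaluated over the M×M local area block at the four lowest non-zero frequency points, and the signs of the real and imaginary parts of the four coefficients are packed into an 8-bit LPQ code (an integer from 0 to 255). The choice of frequency points and the sign quantization follow the standard LPQ formulation; the patent itself does not spell out these details.

```python
import numpy as np

def lpq_feature_map(gray, win=3):
    """Per-pixel LPQ codes over a win x win local area block (win odd)."""
    r = win // 2
    a = 1.0 / win                       # lowest non-zero frequency
    x = np.arange(-r, r + 1)
    w0 = np.ones(win, dtype=complex)    # DC basis vector
    w1 = np.exp(-2j * np.pi * a * x)    # frequency-a basis vector
    w2 = np.conj(w1)
    # 2-D filters for the four frequency points (a,0), (0,a), (a,a), (a,-a)
    filters = [np.outer(w0, w1), np.outer(w1, w0),
               np.outer(w1, w1), np.outer(w1, w2)]
    g = np.pad(gray.astype(float), r, mode="edge")
    h, w = gray.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            patch = g[i:i + win, j:j + win]
            code = 0
            for k, f in enumerate(filters):
                coeff = (patch * f).sum()            # STFT coefficient
                code |= int(coeff.real >= 0) << (2 * k)
                code |= int(coeff.imag >= 0) << (2 * k + 1)
            codes[i, j] = code
    return codes
```

Arranging the per-pixel codes at their pixel positions is implicit here: `codes` is already the local phase quantization feature map, the same size as the gray image.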
S60, arranging the local phase quantization feature values of all the pixel points to generate a local phase quantization feature map of the gray image.
It can be understood that the local phase quantization feature values of all pixel points are arranged according to the positions of their corresponding pixel points to generate the local phase quantization feature map of the gray image. For example, if the gray image is 80 pixels long and 60 pixels wide, i.e. an arrangement matrix of 4800 pixel points in total, then the local phase quantization feature map is likewise an 80×60 arrangement matrix of 4800 values.
In this way, since the phase information obtained by the Fourier transform is blur-invariant, extracting texture features from it improves the robustness of head photo recognition. Image analysis shows that flipped images contain distinctive texture features such as ripples (moiré) and abnormal speckles. The local phase quantization method is efficient and discriminative when applied to texture classification, so performing flip detection by recognizing texture features through local phase quantization improves the accuracy and reliability of identification.
S70, inputting the local phase quantization feature map into a trained shallow neural network model, and performing flip recognition on the feature map through the shallow neural network model to obtain the flip probability of the local phase quantization feature map.
The shallow neural network model is a trained neural network model; the local phase quantization feature map is input into it and recognized by its texture features, so a statistical texture-based recognition result can be obtained and the flip probability of the feature map computed. Because the input of the shallow neural network model is already quantized image data, no complex image transformation is required at the input layer; the model is therefore fast and small, and can be deployed on mobile terminals with limited capacity.
In an embodiment, as shown in fig. 5, step S70, that is, inputting the local phase quantization feature map into a trained shallow neural network model and performing flip recognition on it through the shallow neural network model to obtain a flip probability, includes:
S701, extracting a local phase quantization feature histogram from the local phase quantization feature map.
Understandably, all local phase quantization feature values in the feature map are counted together. Since the feature values are integers from 0 to 255, the count yields a 256-dimensional statistic, represented as a histogram: the abscissa of the local phase quantization feature histogram is the 256 possible feature values, and the ordinate is the number of pixel points whose feature value equals that abscissa value.
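Step S701 is a straightforward 256-bin count; a minimal sketch:

```python
import numpy as np

def lpq_histogram(codes):
    """Bin k holds the number of pixels whose LPQ code equals k (S701)."""
    return np.bincount(codes.ravel(), minlength=256)
```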
S702, performing enhancement processing on the local phase quantization feature histogram through a Gaussian noise algorithm to obtain neuron data.
Understandably, the local phase quantization feature histogram is enhanced by functions in the Gaussian noise algorithm: the mean and variance of the histogram are computed, the histogram is converted into a Gaussian profile using this mean and variance, and the parameter data of the profile (comprising a plurality of neurons) is recorded as the neuron data.
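The "Gaussian" enhancement of S702 is described only loosely; one plausible reading, sketched below under that assumption, is to standardize the histogram by its own mean and variance so that the 256 values form the neuron data fed to the network.

```python
import numpy as np

def to_neuron_data(hist):
    """Standardize the 256-bin histogram by its mean and variance."""
    h = hist.astype(float)
    return (h - h.mean()) / (h.std() + 1e-8)  # epsilon guards a flat histogram
```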
S703, inputting the neuron data into a random inactivation layer in the shallow neural network, extracting the texture features of the neuron data through the random inactivation layer, and obtaining the prediction probability, matched to the texture features, that the random inactivation layer outputs.
The random inactivation layer, also called a dropout layer, weights the neurons in the neuron data according to its parameters and outputs a probability. The layer feeds the neurons into a parameter weighting function, which extracts the texture feature of each neuron, that is, computes and outputs a texture-feature probability for each neuron; the output texture-feature probability is recorded as the prediction probability matched with the texture features.
S704, inputting the prediction probability into an activation layer in the shallow neural network, where the activation layer processes the prediction probability through a sigmoid function to obtain the flip probability of the local phase quantization feature map.
The prediction probability is input into the sigmoid function in the activation layer, which classifies it; multiple classes can be configured as required, i.e. the sigmoid function outputs a probability value for each class. Preferably the function is set up for two classes, flip and non-flip: it outputs the probability of flip and the probability of non-flip, and the flip probability it outputs is recorded as the flip probability of the local phase quantization feature map.
In this manner, since the input received by the shallow neural network model is an image with enhanced texture features, a shallow neural network with an efficient and simple structure suffices for the local phase quantization feature map.
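Steps S703-S704 amount to a forward pass through a tiny network: a weighted layer regularized by dropout (a no-op at inference time), then a sigmoid giving the two-class flip/non-flip probability. The layer sizes and random weights below are invented for illustration; the patent does not give the exact architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters: 256-dim neuron data -> 32 hidden units -> 1 output
W1 = rng.normal(scale=0.05, size=(256, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.05, size=(32, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def flip_probability(neuron_data, training=False, drop=0.5):
    h = np.maximum(neuron_data @ W1 + b1, 0.0)        # weighted layer (S703)
    if training:                                      # random inactivation
        h *= (rng.random(h.shape) >= drop) / (1.0 - drop)
    return float(sigmoid(h @ W2 + b2)[0])             # activation layer (S704)
```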
In an embodiment, as shown in fig. 6, before step S70, that is, before inputting the local phase quantization feature map into the trained shallow neural network model and performing flip recognition to obtain a flip probability, the method includes:
S705, acquiring an image sample, wherein the image sample comprises a flipped image sample and a non-flipped image sample, and inputting the image sample into the shallow neural network model; wherein each of the image samples is associated with a flip label.
The image samples include flipped image samples and non-flipped image samples, and each image sample is associated with a flip label (flipped or non-flipped). The image samples are images that have undergone gray processing and the local phase quantization transform. Because many business scenarios require recognizing flipped identity cards, flipped and non-flipped identity-card images may each constitute 50% of the image samples; the purpose is to train the shallow neural network model to cover all usage scenarios and thereby improve its reliability.
S706, extracting texture features of the image sample through the shallow neural network model containing initial parameters.
It is understood that the shallow neural network model includes an input layer, a convolution layer and a fully connected layer, and its structure may be configured as required. Since the image obtained after the local phase quantization transform already shows obvious texture features, the model can be a simple neural network for the two-class problem; the texture features of the image sample, including ripple (moiré) patterns and abnormal speckle features, are extracted through the convolution layer.
S707, acquiring an identification result output by the shallow neural network model according to the texture features, and determining a loss value according to the identification result and the matching degree of the flip label.
Understandably, the shallow neural network model outputs a two-class recognition result, that is, the recognition result includes two results (a probability corresponding to a flip and a probability corresponding to a non-flip), and the loss value is a value obtained by a loss function of the shallow neural network model according to the recognition result output by the shallow neural network model and a flip label associated with the image sample.
S708, when the loss value reaches a preset convergence condition, training of the shallow neural network model is completed.
Understandably, the preset convergence condition may be that after 5000 iterations the loss value is very small and no longer decreases; that is, when this holds, training is stopped and the converged shallow neural network model is recorded as the trained shallow neural network model. Alternatively, the preset convergence condition may be that the loss value is smaller than a set threshold: when the loss value falls below the threshold, training is stopped and the converged model is recorded as the trained shallow neural network model.
The shallow neural network model recognizes the texture features of the image samples; by training it to recognize texture features, its accuracy can reach 90%.
In this way, because the input of the shallow neural network model is quantized image data, no complex image transformation is required at the input layer and only texture features need to be recognized; the model is therefore fast and small (generally about 300 bytes), so it can be deployed on mobile terminals with limited capacity.
In an embodiment, when the loss value does not reach the preset convergence condition, the initial parameters of the shallow neural network model are updated iteratively until the loss value reaches the preset convergence condition, and the converged model is recorded as the trained shallow neural network model. Iteratively updating the initial parameters means that parameter values computed by different loss-function optimization algorithms are matched to different ranges of the loss value and used to update the initial parameters of the shallow neural network model; updating the initial parameters through these loss-function optimization algorithms improves the recognition efficiency of the model.
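As an illustrative sketch of matching parameter updates to different ranges of the loss value, the following chooses a coarser or finer update step depending on the current loss. The specific ranges, step sizes, and the plain gradient-descent update are assumptions, since the patent does not name the optimization algorithms involved.

```python
def pick_step_size(loss):
    # Illustrative schedule: coarse updates while the loss is large,
    # finer updates as it shrinks (ranges and rates are assumed).
    if loss > 1.0:
        return 0.1
    if loss > 0.1:
        return 0.01
    return 0.001

def train_step(params, grads, loss):
    # One iterative update of the model parameters by gradient descent,
    # with the step size matched to the current loss range.
    lr = pick_step_size(loss)
    return [p - lr * g for p, g in zip(params, grads)]
```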
And S80, when the flip probability is greater than or equal to a preset probability threshold, determining that the image to be identified is a flipped image.
Understandably, the preset probability threshold may be set as required. Preferably, the preset probability threshold is set to 90%; with this setting, the accuracy of the shallow neural network model reaches 96%. When the flip probability is greater than or equal to the preset probability threshold, the image to be identified is determined to be a flipped image.
The method comprises: acquiring an image to be identified; inputting the image to be identified into a head portrait detection model and extracting the head photo of the image to be identified; transforming the head photo of the image to be identified by a local phase quantization method to obtain a local phase quantization feature map of the head photo; inputting the local phase quantization feature map into a trained shallow neural network model and performing flip recognition processing on the local phase quantization feature map through the shallow neural network model to obtain a flip probability; and, when the flip probability is greater than or equal to a preset probability threshold, determining that the image to be identified is a flipped image. In this way, by extracting the head photo from the image to be identified, converting the head photo into a local phase quantization feature map by the local phase quantization method, inputting the local phase quantization feature map into the trained shallow neural network model, identifying the texture features of the local phase quantization feature map, and obtaining a recognition result from the texture-feature statistics, the flip probability is obtained; when the flip probability exceeds the preset threshold, the image to be identified is determined to be a flipped image. Automatic identification of flipped images is thus realized, improving recognition accuracy, efficiency, and reliability.
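The overall flow summarized above can be sketched end to end as follows. Here `detect_head`, `lpq_transform`, and `model` are hypothetical stand-ins for the head portrait detection model, the local phase quantization transform, and the trained shallow neural network; the 90% default threshold follows the preferred value stated earlier.

```python
def is_flipped_image(image, detect_head, lpq_transform, model, threshold=0.9):
    # `detect_head`, `lpq_transform`, and `model` are placeholder
    # callables, not APIs defined by the patent.
    head = detect_head(image)             # extract the head photo
    if head is None:
        return None                       # no head found: undecidable
    feature_map = lpq_transform(head)     # local phase quantization map
    flip_probability = model(feature_map) # shallow network inference
    return flip_probability >= threshold  # flipped iff probability clears threshold
```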
In another embodiment, as shown in fig. 3, after the step S70, the method further includes:
S90, judging whether the flip probability is smaller than the preset probability threshold.
Understandably, the flip probability is obtained and compared against the preset probability threshold.
And S100, if the flip probability is smaller than the preset probability threshold, determining that the image to be identified is a non-flipped image.
Understandably, if the flip probability does not reach the preset probability threshold, the texture features of the image to be identified are not pronounced, so the image to be identified is determined to be a non-flipped image, that is, a real image.
S110, acquiring user identity information associated with the image to be identified, and storing the image to be identified in a designated storage position associated with the user identity information.
Understandably, user identity information associated with the image to be identified, such as the identity card number and name of the user, is acquired, and the image to be identified is stored in a designated storage location associated with that user identity information. The designated storage location may be a user table in a database that records user identity information; since the image to be identified is a real image, this also facilitates subsequent reading or verification of the user identity.
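Step S110 can be sketched as follows; the table name and column layout are assumptions, not part of the patent, and SQLite is used only to keep the sketch self-contained.

```python
import sqlite3

def store_verified_image(conn, id_number, name, image_bytes):
    # Persist a verified (non-flipped) image against the user's identity
    # record. Table and column names are illustrative assumptions;
    # `conn` is an open sqlite3 connection.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS user_images "
        "(id_number TEXT, name TEXT, image BLOB)"
    )
    conn.execute(
        "INSERT INTO user_images VALUES (?, ?, ?)",
        (id_number, name, image_bytes),
    )
    conn.commit()
```

In practice the designated storage location could equally be a user table in any relational database keyed by the same identity fields.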
In one embodiment, a device for identifying a flipped image is provided, where the flipped-image identifying device corresponds one-to-one to the flipped-image identifying method in the above embodiment. As shown in fig. 7, the apparatus for recognizing a flipped image includes a first acquisition module 11, a first extraction module 12, a conversion module 13, a second acquisition module 14, a first calculation module 15, a generation module 16, a recognition module 17, and a determination module 18. The functional modules are described in detail as follows:
a first acquiring module 11, configured to acquire an image to be identified.
The first extraction module 12 is configured to input the image to be identified into a head portrait detection model, and extract a head portrait of the image to be identified.
And the conversion module 13 is used for carrying out gray processing on the head photo of the image to be identified to obtain a gray image of the head photo.
The second obtaining module 14 is configured to obtain phase information of each pixel point of the gray-scale image and a preset local area block corresponding to each pixel point.
The first calculating module 15 is configured to process, by using a local phase quantization method, phase information of each pixel point and the preset local area block corresponding to each pixel point, and calculate a local phase quantization feature value corresponding to each pixel point.
And the generating module 16 is configured to arrange the local phase quantization characteristic values of all the pixel points, and generate a local phase quantization characteristic map of the gray image.
The recognition module 17 is configured to input the local phase quantization feature map into a trained shallow neural network model, and perform flip recognition processing on the local phase quantization feature map through the shallow neural network model, so as to obtain the flip probability of the local phase quantization feature map.
The determining module 18 is configured to determine that the image to be identified is a flipped image when the flip probability is greater than or equal to a preset probability threshold.
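The work of the conversion, acquisition, calculation, and generation modules (13 to 16) can be sketched as a minimal local phase quantization implementation. The 3×3 window and the four low-frequency points follow common LPQ practice and are assumptions here, as the patent does not fix them at this point.

```python
import numpy as np

def lpq_feature_map(gray, win=3):
    # For every pixel, short-term Fourier coefficients at four low
    # frequencies are computed over a `win`-by-`win` neighbourhood, and
    # the signs of their real and imaginary parts are packed into an
    # 8-bit local phase quantization code.
    gray = np.asarray(gray, dtype=float)
    r = win // 2
    n = np.arange(-r, r + 1)
    a = 1.0 / win                        # lowest non-zero frequency
    w0 = np.ones(win)                    # DC basis vector
    w1 = np.exp(-2j * np.pi * a * n)     # basis vector at frequency a
    # Four 2-D frequency points: (0,a), (a,0), (a,a), (a,-a).
    freqs = [np.outer(w0, w1), np.outer(w1, w0),
             np.outer(w1, w1), np.outer(w1, np.conj(w1))]
    h, w = gray.shape
    codes = np.zeros((h - 2 * r, w - 2 * r), dtype=np.uint8)
    for i in range(r, h - r):
        for j in range(r, w - r):
            block = gray[i - r:i + r + 1, j - r:j + r + 1]
            bits = []
            for f in freqs:
                c = np.sum(block * f)    # STFT coefficient for this window
                bits += [bool(c.real >= 0), bool(c.imag >= 0)]
            # Quantize the phase signs into one 8-bit LPQ code.
            codes[i - r, j - r] = sum(b << k for k, b in enumerate(bits))
    return codes
```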
In one embodiment, as shown in fig. 8, the first extraction module 12 includes:
The detection module 21 is configured to input the image to be identified into a head portrait detection model, and perform head portrait detection on the image to be identified through the MTCNN algorithm in the head portrait detection model, so as to obtain the head portrait confidence probability of the image to be identified;
and the marking module 22 is configured to extract a head portrait area of the image to be identified and mark the head portrait area as a head portrait of the image to be identified when the head portrait confidence probability of the image to be identified is greater than a preset threshold.
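The detection and marking modules above can be sketched together; `detector` is a hypothetical callable standing in for the MTCNN-based head portrait detection model, and the 0.9 default confidence threshold is an assumption.

```python
def extract_head(image, detector, confidence_threshold=0.9):
    # `detector` is a placeholder returning
    # (head confidence probability, head region).
    confidence, region = detector(image)
    # Keep the region only when confidence clears the preset threshold.
    return region if confidence > confidence_threshold else None
```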
In one embodiment, as shown in fig. 9, the identification module 17 includes:
a second extraction module 71 for extracting a local phase quantization feature histogram from the local phase quantization feature map;
an enhancement module 72, configured to perform enhancement processing on the local phase quantization characteristic histogram by using a gaussian noise algorithm, so as to obtain neuron data;
A second calculation module 73, configured to input the neuron data into a random inactivation (dropout) layer in the shallow neural network, perform texture feature extraction on the neuron data through the random inactivation layer, and obtain a prediction probability output by the random inactivation layer and matched with the texture feature;
And the output module 74 is used for inputting the prediction probability into an activation layer in the shallow neural network, and the activation layer processes the prediction probability through a sigmoid function to obtain the flip probability of the local phase quantization characteristic map.
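The identification pipeline of modules 71 to 74 can be sketched as follows. The single-layer weights, the noise standard deviation, and the dropout rate are illustrative assumptions, and dropout ("random inactivation") is applied only during training, as is conventional.

```python
import math
import random

def sigmoid(x):
    # Squashes a score into a probability in (0, 1), as in module 74.
    return 1.0 / (1.0 + math.exp(-x))

def shallow_forward(histogram, weights, bias,
                    noise_std=0.01, drop_rate=0.5, train=False):
    # Gaussian-noise enhancement of the LPQ histogram (module 72).
    x = ([v + random.gauss(0.0, noise_std) for v in histogram]
         if train else list(histogram))
    # Random inactivation (dropout) layer (module 73), training only.
    if train:
        x = [0.0 if random.random() < drop_rate else v / (1 - drop_rate)
             for v in x]
    # Weighted score through the sigmoid activation (module 74),
    # giving the flip probability.
    score = sum(w * v for w, v in zip(weights, x)) + bias
    return sigmoid(score)
```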
In an embodiment, the apparatus for identifying a flipped image further includes:
The judging module is used for judging whether the flip probability is smaller than the preset probability threshold;
The confirming module is used for confirming that the image to be identified is a non-flipped image if the flip probability is smaller than the preset probability threshold;
And the storage module is used for acquiring the user identity information associated with the image to be identified and storing the image to be identified to a designated storage position associated with the user identity information.
In an embodiment, the identification module 17 further comprises:
The sample acquisition unit is used for acquiring image samples, the image samples comprising flipped image samples and non-flipped image samples, and inputting the image samples into the shallow neural network model; wherein each of the image samples is associated with a flip label;
The training extraction unit is used for extracting texture features of the image sample through the shallow neural network model containing initial parameters;
the training recognition unit is used for acquiring a recognition result output by the shallow neural network model according to the texture features and determining a loss value according to the degree of matching between the recognition result and the flip label;
and the training completion unit is used for completing the training of the shallow neural network model when the loss value reaches a preset convergence condition.
For specific limitations of the apparatus for identifying a flipped image, reference may be made to the above limitation of the method for identifying a flipped image, and no further description is given here. The above-described respective modules in the apparatus for recognizing a flip image may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 10. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of flipped image recognition.
In one embodiment, a computer device is provided that includes a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the method for identifying a flipped image of the above embodiment when executing the computer program.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the method for identifying a flipped image in the above embodiment.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may comprise the steps of the method embodiments described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), and Direct Rambus Dynamic RAM (DRDRAM), among others.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (7)

1. A method for identifying a flip image, comprising:
acquiring an image to be identified;
inputting the image to be identified into a head portrait detection model, and extracting a head portrait of the image to be identified;
carrying out gray processing on the head photo of the image to be identified to obtain a gray image of the head photo;
acquiring phase information of each pixel point of the gray level image and a preset local area block corresponding to each pixel point;
processing the phase information of each pixel point and the preset local area block corresponding to each pixel point through a local phase quantization method, and calculating a local phase quantization characteristic value corresponding to each pixel point;
arranging the local phase quantization characteristic values of all the pixel points to generate a local phase quantization characteristic map of the gray image;
Inputting the local phase quantization feature map into a trained shallow neural network model, and identifying the local phase quantization feature map through the shallow neural network model so as to obtain the flip probability of the local phase quantization feature map;
When the flip probability is greater than or equal to a preset probability threshold, determining that the image to be identified is a flipped image;
The step of inputting the local phase quantization feature map into a trained shallow neural network model, and performing recognition processing on the local phase quantization feature map through the shallow neural network model to obtain the flip probability of the local phase quantization feature map, comprises:
extracting a local phase quantization characteristic histogram from the local phase quantization characteristic map;
carrying out enhancement processing on the local phase quantization characteristic histogram through a Gaussian noise algorithm to obtain neuron data;
Inputting the neuron data into a random inactivation layer in the shallow neural network, extracting texture features of the neuron data through the random inactivation layer, and obtaining the prediction probability which is output by the random inactivation layer and is matched with the texture features;
inputting the prediction probability into an activation layer in the shallow neural network, and processing the prediction probability by the activation layer through a sigmoid function to obtain the flip probability of the local phase quantization characteristic map;
the step of inputting the local phase quantization feature map into a trained shallow neural network model and performing flip recognition processing on the local phase quantization feature map through the shallow neural network model comprises, before obtaining the flip probability, the following steps:
Acquiring image samples, wherein the image samples comprise flipped image samples and non-flipped image samples, and inputting the image samples into the shallow neural network model; wherein each of the image samples is associated with a flip label;
Extracting texture features of the image sample through the shallow neural network model containing initial parameters;
acquiring a recognition result output by the shallow neural network model according to the texture features, and determining a loss value according to the degree of matching between the recognition result and the flip label;
And when the loss value reaches a preset convergence condition, the training of the shallow neural network model is completed.
2. The method for recognizing a flip image as claimed in claim 1, wherein said inputting the image to be recognized into a head portrait detection model, extracting a head portrait of the image to be recognized, comprises:
inputting the image to be identified into a head portrait detection model, and performing head portrait detection on the image to be identified through the MTCNN algorithm in the head portrait detection model, so as to obtain the head portrait confidence probability of the image to be identified;
And when the head portrait confidence probability of the image to be identified is larger than a preset threshold value, extracting a head portrait region of the image to be identified, and marking the head portrait region as the head portrait of the image to be identified.
3. The method for recognizing a flipped image according to claim 1, wherein, after the flip recognition processing is performed on the local phase quantization feature map and the flip probability is obtained, the method comprises:
judging whether the flip probability is smaller than the preset probability threshold;
If the flip probability is smaller than the preset probability threshold, determining that the image to be identified is a non-flipped image;
And acquiring user identity information associated with the image to be identified, and storing the image to be identified to a designated storage position associated with the user identity information.
4. A flip image recognition device, comprising:
the first acquisition module is used for acquiring an image to be identified;
the first extraction module is used for inputting the image to be identified into a head portrait detection model and extracting a head portrait of the image to be identified;
the conversion module is used for carrying out gray processing on the head photo of the image to be identified to obtain a gray image of the head photo;
The second acquisition module is used for acquiring phase information of each pixel point of the gray level image and a preset local area block corresponding to each pixel point;
The first calculation module is used for processing the phase information of each pixel point and the preset local area block corresponding to each pixel point through a local phase quantization method and calculating a local phase quantization characteristic value corresponding to each pixel point;
the generation module is used for arranging the local phase quantization characteristic values of all the pixel points and generating a local phase quantization characteristic map of the gray image;
The identification module is used for inputting the local phase quantization feature map into a trained shallow neural network model, and performing flip recognition processing on the local phase quantization feature map through the shallow neural network model so as to obtain the flip probability of the local phase quantization feature map;
The determining module is used for determining that the image to be identified is a flipped image when the flip probability is greater than or equal to a preset probability threshold;
The identification module comprises:
A second extraction module for extracting a local phase quantization feature histogram from the local phase quantization feature map;
The enhancement module is used for enhancing the local phase quantization characteristic histogram through a Gaussian noise algorithm to obtain neuron data;
The second calculation module is used for inputting the neuron data into a random inactivation layer in the shallow neural network, extracting texture features of the neuron data through the random inactivation layer, and acquiring prediction probability which is output by the random inactivation layer and is matched with the texture features;
The output module is used for inputting the prediction probability into an activation layer in the shallow neural network, and the activation layer processes the prediction probability through a sigmoid function to obtain the flip probability of the local phase quantization characteristic map;
the identification module further comprises:
The sample acquisition unit is used for acquiring image samples, the image samples comprising flipped image samples and non-flipped image samples, and inputting the image samples into the shallow neural network model; wherein each of the image samples is associated with a flip label;
The training extraction unit is used for extracting texture features of the image sample through the shallow neural network model containing initial parameters;
the training recognition unit is used for acquiring a recognition result output by the shallow neural network model according to the texture features and determining a loss value according to the degree of matching between the recognition result and the flip label;
and the training completion unit is used for completing the training of the shallow neural network model when the loss value reaches a preset convergence condition.
5. The flip image recognition device according to claim 4, wherein said first extraction module comprises:
The detection module is used for inputting the image to be identified into a head portrait detection model, and performing head portrait detection on the image to be identified through the MTCNN algorithm in the head portrait detection model, so as to obtain the head portrait confidence probability of the image to be identified;
And the marking module is used for extracting the head portrait area of the image to be identified and marking the head portrait area as the head portrait of the image to be identified when the head portrait confidence probability of the image to be identified is greater than a preset threshold value.
6. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of identifying a flip image as claimed in any one of claims 1 to 3 when the computer program is executed by the processor.
7. A computer-readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the method of identifying a flip image as claimed in any one of claims 1 to 3.
CN201911366794.2A 2019-12-26 2019-12-26 Method, device, equipment and medium for identifying flip image Active CN111191568B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911366794.2A CN111191568B (en) 2019-12-26 2019-12-26 Method, device, equipment and medium for identifying flip image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911366794.2A CN111191568B (en) 2019-12-26 2019-12-26 Method, device, equipment and medium for identifying flip image

Publications (2)

Publication Number Publication Date
CN111191568A CN111191568A (en) 2020-05-22
CN111191568B true CN111191568B (en) 2024-06-14

Family

ID=70705848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911366794.2A Active CN111191568B (en) 2019-12-26 2019-12-26 Method, device, equipment and medium for identifying flip image

Country Status (1)

Country Link
CN (1) CN111191568B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754166A (en) * 2020-05-29 2020-10-09 大亚湾核电运营管理有限责任公司 Nuclear power station spare part cross-power station allocation method and system and storage medium
CN111666890B (en) * 2020-06-08 2023-06-30 平安科技(深圳)有限公司 Spine deformation crowd identification method and device, computer equipment and storage medium
CN112116564B (en) * 2020-09-03 2023-10-20 深圳大学 Anti-beat detection countermeasure sample generation method, device and storage medium
CN112258481A (en) * 2020-10-23 2021-01-22 北京云杉世界信息技术有限公司 Portal photo reproduction detection method
CN112396058B (en) * 2020-11-11 2024-04-09 深圳大学 Document image detection method, device, equipment and storage medium
CN113128521B (en) * 2021-04-30 2023-07-18 西安微电子技术研究所 Method, system, computer equipment and storage medium for extracting characteristics of miniaturized artificial intelligent model
CN114005019B (en) * 2021-10-29 2023-09-22 北京有竹居网络技术有限公司 Method for identifying flip image and related equipment thereof
CN114333037B (en) * 2022-02-25 2022-05-13 北京结慧科技有限公司 Identification method and system for copied photo containing identity card
CN114565827A (en) * 2022-04-29 2022-05-31 深圳爱莫科技有限公司 Cigarette display anti-cheating detection method based on image recognition and model training method

Citations (2)

Publication number Priority date Publication date Assignee Title
CN108549836A (en) * 2018-03-09 2018-09-18 通号通信信息集团有限公司 Reproduction detection method, device, equipment and the readable storage medium storing program for executing of photo
CN109754059A (en) * 2018-12-21 2019-05-14 平安科技(深圳)有限公司 Reproduction image-recognizing method, device, computer equipment and storage medium

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN106446754A (en) * 2015-08-11 2017-02-22 阿里巴巴集团控股有限公司 Image identification method, metric learning method, image source identification method and devices
CN109815960A (en) * 2018-12-21 2019-05-28 深圳壹账通智能科技有限公司 Reproduction image-recognizing method, device, equipment and medium based on deep learning
CN109815970B (en) * 2018-12-21 2023-04-07 平安科技(深圳)有限公司 Method and device for identifying copied image, computer equipment and storage medium
CN109886275A (en) * 2019-01-16 2019-06-14 深圳壹账通智能科技有限公司 Reproduction image-recognizing method, device, computer equipment and storage medium

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN108549836A (en) * 2018-03-09 2018-09-18 通号通信信息集团有限公司 Reproduction detection method, device, equipment and the readable storage medium storing program for executing of photo
CN109754059A (en) * 2018-12-21 2019-05-14 平安科技(深圳)有限公司 Reproduction image-recognizing method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111191568A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
CN111191568B (en) Method, device, equipment and medium for identifying flip image
CN109508638B (en) Face emotion recognition method and device, computer equipment and storage medium
CN111275685B (en) Method, device, equipment and medium for identifying flip image of identity document
CN111476268B (en) Training of flip recognition model, image recognition method, device, equipment and medium
CN109543690B (en) Method and device for extracting information
WO2021027336A1 (en) Authentication method and apparatus based on seal and signature, and computer device
Rattani et al. A survey of mobile face biometrics
Fourati et al. Anti-spoofing in face recognition-based biometric authentication using image quality assessment
CN105488463B (en) Lineal relative's relation recognition method and system based on face biological characteristic
WO2021042505A1 (en) Note generation method and apparatus based on character recognition technology, and computer device
WO2020143325A1 (en) Electronic document generation method and device
CN112381775A (en) Image tampering detection method, terminal device and storage medium
WO2021137946A1 (en) Forgery detection of face image
WO2020164278A1 (en) Image processing method and device, electronic equipment and readable storage medium
CN106056083B (en) A kind of information processing method and terminal
CN112200136A (en) Certificate authenticity identification method and device, computer readable medium and electronic equipment
CN113111880B (en) Certificate image correction method, device, electronic equipment and storage medium
CN110796145A (en) Multi-certificate segmentation association method based on intelligent decision and related equipment
CN114067431A (en) Image processing method, image processing device, computer equipment and storage medium
CN113378609B (en) Agent proxy signature identification method and device
CN110889341A (en) Form image recognition method and device based on AI (Artificial Intelligence), computer equipment and storage medium
CN110942067A (en) Text recognition method and device, computer equipment and storage medium
CN110956102A (en) Bank counter monitoring method and device, computer equipment and storage medium
Younis et al. IFRS: An indexed face recognition system based on face recognition and RFID technologies
US11087121B2 (en) High accuracy and volume facial recognition on mobile platforms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant