CN112560683A - Method and device for identifying copied image, computer equipment and storage medium

Info

Publication number: CN112560683A (application CN202011487387.XA)
Authority: CN (China)
Prior art keywords: image, copied, face recognition, image information, reproduction
Legal status: Pending
Application number: CN202011487387.XA
Other languages: Chinese (zh)
Inventors: 赖众程, 李会璟, 王小红, 梁俊杰, 王晟宇, 洪叁亮, 郑松辉, 施国灏
Current and original assignee: Ping An Technology Shenzhen Co Ltd
Application filed by Ping An Technology Shenzhen Co Ltd; priority CN202011487387.XA; published as CN112560683A.

Classifications

    • G06V40/168 Feature extraction; Face representation (human faces)
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06F18/2411 Classification based on the proximity to a decision surface, e.g. support vector machines
    • G06V10/40 Extraction of image or video features
    • G06V40/172 Classification, e.g. identification (human faces)
    • G06V40/45 Detection of the body part being alive (spoof detection)
    • G06V10/467 Encoded features or binary features, e.g. local binary patterns [LBP]

Abstract

The embodiment of the application belongs to the technical field of face recognition in artificial intelligence, and relates to a method and a device for recognizing a copied image, a computer device and a storage medium. The application also relates to blockchain technology: the current image information acquired by the image acquisition device can be stored in a blockchain. According to the method and the device, an untrained original classification model is trained in advance to obtain a trained classification prediction model, so that the classification prediction model can identify copied face images or videos before any subsequent face recognition operation is performed; this effectively prevents unlocking, login, payment and the like from being achieved by presenting copied images directly to face recognition.

Description

Method and device for identifying copied image, computer equipment and storage medium
Technical Field
The present application relates to the field of face recognition technology in artificial intelligence, and in particular, to a method and an apparatus for recognizing a copied image, a computer device, and a storage medium.
Background
The need for reliable personal identity authentication has become increasingly urgent in all sectors of society, and biometric identification technology has therefore developed rapidly in recent decades. As an intrinsic attribute of a person, the human face exhibits strong stability and individual distinctiveness; compared with fingerprint identification and other modalities, face recognition is non-mandatory, contactless and can be performed in parallel, which makes it an ideal basis for automatic identity verification.
The existing face recognition method is a biometric technology that identifies a person based on facial feature information. It comprises a series of related techniques in which a camera or video camera collects images or video streams containing faces, the faces in the images are automatically detected and tracked, and face recognition is then performed on the detected faces.
However, the applicant has found that the traditional face recognition method is not robust: face feature information can still be collected from copied (recaptured) face images or videos displayed on various electronic screens (such as computers, tablets, mobile phones and televisions), and the copied face images or videos can then be used to complete face-verification operations and thereby achieve unlocking, login, payment and the like. Attackers can obtain improper benefits in this way and seriously harm the interests of the attacked individuals and companies. The traditional face recognition method therefore cannot solve the problem of malicious face recognition performed with copied face images.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method and an apparatus for recognizing a copied image, a computer device and a storage medium, so as to address the inability of traditional face recognition methods to detect malicious face recognition performed with copied face images.
In order to solve the above technical problem, an embodiment of the present application provides a method for identifying a captured image, which adopts the following technical solutions:
receiving current image information displayed by image display equipment and acquired by image acquisition equipment to obtain copied image information;
performing feature extraction operation on the copied image information to obtain copied feature vectors;
pre-labeling the copied feature vector to obtain a copied image sample;
inputting the copied image sample into an original classification model to perform classification training operation to obtain a classification prediction model;
when a user carries out face recognition, receiving current image information acquired by the image acquisition equipment;
inputting the current image information into the classification prediction model to perform classification prediction operation to obtain a classification prediction result;
judging whether the current image information belongs to a copied image or not based on the classification prediction result;
if the current image information belongs to the copied image, outputting a face recognition failure signal;
and if the current image information does not belong to the copied image, inputting the current image information into an optimized face recognition model for face recognition operation to obtain a face recognition result.
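For ease of understanding only, the following Python sketch illustrates this train-then-screen flow. It assumes scikit-learn's SVC as the classifier; extract_features and run_face_recognition are hypothetical helper names, not part of the disclosure (a composition of extract_features is sketched in the first embodiment below).

```python
import numpy as np
from sklearn.svm import SVC

def train_copied_image_classifier(copied_images, genuine_images):
    # Extract copied feature vectors, pre-label them, and train the original
    # classification model to obtain a classification prediction model.
    X = [extract_features(img) for img in copied_images + genuine_images]
    y = [1] * len(copied_images) + [0] * len(genuine_images)
    model = SVC(kernel="linear")
    model.fit(np.array(X), np.array(y))
    return model

def screen_before_face_recognition(model, current_image):
    # Classify the current image before any face recognition runs.
    if model.predict([extract_features(current_image)])[0] == 1:
        return "face recognition failure"        # copied image detected
    return run_face_recognition(current_image)   # hypothetical optimized model
```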
In order to solve the above technical problem, an embodiment of the present application further provides a device for recognizing a captured image, which adopts the following technical solutions:
the system comprises a reproduction image acquisition module, a reproduction image acquisition module and a reproduction image display module, wherein the reproduction image acquisition module is used for receiving current image information displayed by image display equipment and acquired by image acquisition equipment to obtain reproduction image information;
the characteristic extraction module is used for carrying out characteristic extraction operation on the copied image information to obtain copied characteristic vectors;
the pre-labeling module is used for performing pre-labeling operation on the copied feature vector to obtain a copied image sample;
the classification training module is used for inputting the copied image sample into an original classification model to perform classification training operation to obtain a classification prediction model;
the current image acquisition module is used for receiving current image information acquired by the image acquisition equipment when a user performs face recognition;
the classification prediction module is used for inputting the current image information into the classification prediction model to perform classification prediction operation to obtain a classification prediction result;
the copied image judging module is used for judging whether the current image information belongs to the copied image or not based on the classification prediction result;
the reproduction image confirmation module is used for outputting a face recognition failure signal if the current image information belongs to the reproduction image;
and the copied image non-recognition module is used for inputting the current image information into an optimized face recognition model to perform face recognition operation if the current image information does not belong to the copied image, so as to obtain a face recognition result.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
comprising a memory in which computer readable instructions are stored and a processor which, when executing the computer readable instructions, implements the steps of the method of identifying a copied image described above.
In order to solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, which adopts the following technical solutions:
the computer readable storage medium has stored thereon computer readable instructions which, when executed by a processor, implement the steps of a method of identifying a copied image as described below.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
the method for recognizing the copied image comprises the following steps: receiving current image information displayed by image display equipment and acquired by image acquisition equipment to obtain copied image information; performing feature extraction operation on the copied image information to obtain copied feature vectors; pre-labeling the reproduction characteristic vector to obtain a reproduction image sample; inputting the copied image sample into an original classification model to perform classification training operation to obtain a classification prediction model; when a user carries out face recognition, receiving current image information acquired by the image acquisition equipment; inputting the current image information into the classification prediction model to perform classification prediction operation to obtain a classification prediction result; judging whether the current image information belongs to a copied image or not based on the classification prediction result; if the current image information belongs to the copied image, outputting a face recognition failure signal; and if the current image information does not belong to the copied image, inputting the current image information into an optimized face recognition model for face recognition operation to obtain a face recognition result. The method has the advantages that the untrained original classification model is classified and trained in advance to obtain the trained classification prediction model, so that the classification prediction model can identify the copied face image or video, and further perform subsequent face identification operation, and the purposes of unlocking, logging in, payment and the like caused by direct face identification are effectively avoided.
Drawings
In order to illustrate the solution of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present application, and those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating an implementation of a method for recognizing a copied image according to an embodiment of the present application;
FIG. 2 is a flowchart of an implementation of step S102 in FIG. 1;
FIG. 3 is a flowchart of an implementation of step S202 in FIG. 2;
fig. 4 is a flowchart of an implementation of obtaining an optimized face recognition model according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a copied image recognition apparatus according to a second embodiment of the present application;
FIG. 6 is a schematic diagram of the structure of the pre-labeling module 130 in FIG. 5;
FIG. 7 is a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
Example one
As shown in fig. 1, a flowchart of an implementation of a method for recognizing a captured image according to an embodiment of the present application is shown, and for convenience of description, only relevant portions of the present application are shown.
The method for recognizing the copied image comprises the following steps:
step S101: and receiving the current image information displayed by the image display equipment and acquired by the image acquisition equipment to obtain the copied image information.
In the embodiment of the present application, an image capturing device refers to camera apparatus that focuses on and records a picture of a captured object. The image capturing device includes at least a camera assembly, and may be a camera, a video camera, a mobile terminal with a camera assembly, or the like.
In the embodiment of the present application, in a scene where a displayed face image is reproduced, an image display device refers to a device for displaying the face image, and the image display device at least comprises a display component.
In the embodiment of the present application, the copied image information is image data obtained by recapturing a displayed face image; because the image data is obtained by recapture, it cannot genuinely carry the characteristics needed to verify a real person.
Step S102: and performing feature extraction operation on the copied image information to obtain a copied feature vector.
In the embodiment of the application, the feature extraction operation may be as follows: performing a high-pass filtering operation on the copied image information to obtain a copied residual image; performing a texture description operation on the copied residual image based on a local binary pattern description method to obtain a copied texture image; performing a matrix conversion operation on the copied texture image to obtain a copied co-occurrence matrix; and performing a normalization operation on the copied co-occurrence matrix to obtain the copied feature vector.
In the embodiment of the application, the reproduction feature vector is mainly used for describing feature information of reproduction image information through vector data, and the feature information can be used as a basis for distinguishing the type of the reproduction image.
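Under the assumption that each sub-operation is implemented as sketched in the corresponding sections below, the whole feature extraction operation can be composed in a few lines (all helper names are illustrative):

```python
def extract_features(image):
    residual = highpass_residual(image)  # S201: high-pass filtering
    texture = lbp_mean(residual)         # S202: LBP-mean texture description
    matrix = cooccurrence(texture)       # S203: co-occurrence matrix
    return normalize(matrix)             # S204: normalization -> feature vector
```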
Step S103: and pre-labeling the copying characteristic vector to obtain a copying image sample.
In the embodiment of the present application, the pre-labeling operation mainly marks the acquired copied image information as a "copied image", thereby forming the copied image sample.
In the embodiment of the application, the copied image sample is mainly used for training the original classification model.
Step S104: and inputting the copied image sample into the original classification model to perform classification training operation to obtain a classification prediction model.
In the embodiment of the application, classification prediction can be performed using a support vector machine (SVM) or an ensemble classifier. When the classifier is trained, an image library containing a number of copied and non-copied images is first selected; a feature vector is extracted from each image through the feature extraction operation described above, and the feature vectors, together with labels indicating whether each image is a copied image, are input into the classifier to obtain the trained classifier.
In the embodiments of the present application, since a method for training a classifier is known to those skilled in the art, further description will not be made here.
Step S105: when the user carries out face recognition, the current image information collected by the image collecting equipment is received.
In the embodiment of the application, the current image information refers to the image acquired by the image acquisition device when face recognition detection is actually performed; the current image information may be either a copied image or a real face image.
Step S106: and inputting the current image information into a classification prediction model to perform classification prediction operation, so as to obtain a classification prediction result.
In the embodiment of the present application, since the trained classification prediction model itself has the ability to identify whether image information is a copied image, a prediction result for the current image information can be obtained by inputting it into the trained classification prediction model for classification prediction, where the prediction result is either "copied image" or "non-copied image".
In the embodiment of the application, the classification prediction result is mainly used for uniquely identifying whether the current image information is a copied image.
Step S107: and judging whether the current image information belongs to the copied image or not based on the classification prediction result.
Step S108: and if the current image information belongs to the copied image, outputting a face recognition failure signal.
In the embodiment of the application, when the classification prediction result is a "copied image", it indicates that the current object subjected to face recognition is not a real person, and there is a suspicion of identity theft, so that subsequent face recognition operation cannot be performed.
Step S109: and if the current image information does not belong to the copied image, inputting the current image information into the optimized face recognition model for face recognition operation to obtain a face recognition result.
In the embodiment of the application, when the classification prediction result is a non-reproduction image, it indicates that the current object for face recognition is a real person, and there is no suspicion of identity theft, so that subsequent face recognition operation can be performed.
The method for recognizing the copied image comprises the following steps: receiving current image information displayed by an image display device and acquired by an image acquisition device to obtain copied image information; performing a feature extraction operation on the copied image information to obtain copied feature vectors; pre-labeling the copied feature vectors to obtain copied image samples; inputting the copied image samples into an original classification model for a classification training operation to obtain a classification prediction model; when a user performs face recognition, receiving the current image information acquired by the image acquisition device; inputting the current image information into the classification prediction model for a classification prediction operation to obtain a classification prediction result; judging whether the current image information belongs to a copied image based on the classification prediction result; if so, outputting a face recognition failure signal; and if not, inputting the current image information into the optimized face recognition model for a face recognition operation to obtain a face recognition result. By performing the classification training operation on the untrained original classification model in advance, a trained classification prediction model is obtained that can identify copied face images or videos before any subsequent face recognition operation is performed, effectively preventing unlocking, login, payment and the like from being achieved by presenting copied images directly to face recognition.
With continuing reference to fig. 2, a flowchart for implementing step S102 in fig. 1 is shown, and for convenience of illustration, only relevant portions of the present application are shown.
In some optional implementation manners of this embodiment, step S102 specifically includes: step S201, step S202, step S203, and step S204.
Step S201: and carrying out high-pass filtering operation on the information of the copied image to obtain a copied residual image.
In the embodiment of the present application, the electronic image may be high-pass filtered using a filtering method from the spatial rich model (SRM) used in steganalysis. The SRM defines six residual classes, and each class contains multiple "spam" or "minmax" type residuals, yielding 39 residual submodels in total; the method proposed by the present application may use any one of these residual models. It should be understood that this example of the high-pass filtering operation is only for convenience of understanding and is not intended to limit the present application.
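As a minimal illustration of step S201, the sketch below convolves a grayscale image with one second-order residual kernel of the kind used in the SRM; the particular kernel is an assumption, since the patent leaves the choice of residual model open.

```python
import numpy as np
from scipy.signal import convolve2d

# One illustrative SRM-style second-order residual kernel; any of the
# residual submodels could be substituted here.
KERNEL = np.array([[0,  0, 0],
                   [1, -2, 1],
                   [0,  0, 0]], dtype=np.float64)

def highpass_residual(gray_image):
    # High-pass filtering suppresses image content and keeps the noise-like
    # residual in which recapture traces are most visible.
    return convolve2d(gray_image.astype(np.float64), KERNEL, mode="same")
```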
Step S202: and performing texture description operation on the copied residual image based on a local binary pattern description method to obtain a copied texture image.
In the embodiment of the application, the texture description operation may be as follows: based on the LBP-mean method, obtain the target pixel of the copied residual image and the average residual value l_m of the eight pixels surrounding it; calculate the LBP value of the target pixel p_c; and obtain the copied texture image once the LBP values of all pixels of the copied residual image have been obtained.
Step S203: and carrying out matrix conversion operation on the copied texture image to obtain a copied co-occurrence matrix.
In the embodiment of the present application, the matrix conversion operation may generate the co-occurrence matrix according to the number of occurrences of pairs of horizontally or vertically adjacent LBP values in the copied texture image, where each value in the co-occurrence matrix represents the number of occurrences of one such pair of adjacent LBP values.
In the embodiment of the application, counting the occurrences of values at adjacent positions effectively reveals changes in the natural statistical characteristics of the image. The present invention uses a co-occurrence matrix generated from the occurrence counts of horizontally or vertically adjacent LBP values, i.e. each value in the co-occurrence matrix represents the number of times one pair of adjacent LBP values appears in the texture description image. For example, if pairs of two adjacent LBP values are counted, the resulting co-occurrence matrix is a two-dimensional matrix A, where each value A[m, n] represents the number of times the adjacent LBP value pair (m, n) appears in the texture description image.
Step S204: and carrying out normalization operation on the copying co-occurrence matrix to obtain a copying characteristic vector.
In this embodiment, the co-occurrence matrix may be converted into a vector, the maximum value in the vector found, and each value in the vector divided by that maximum used as the normalized vector value, thereby generating the copied feature vector.
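A minimal sketch of steps S203 and S204, assuming the LBP-mean values lie in [0, 15] as derived below:

```python
import numpy as np

def cooccurrence(texture, levels=16):
    # Count how often each pair (m, n) of horizontally or vertically
    # adjacent LBP values appears in the copied texture image.
    t = texture.astype(int)
    A = np.zeros((levels, levels), dtype=np.int64)
    for a, b in ((t[:, :-1], t[:, 1:]),    # horizontally adjacent pairs
                 (t[:-1, :], t[1:, :])):   # vertically adjacent pairs
        np.add.at(A, (a.ravel(), b.ravel()), 1)
    return A

def normalize(A):
    # Convert the matrix to a vector and divide every value by the maximum.
    v = A.flatten().astype(np.float64)
    return v / v.max()
```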
With continuing reference to fig. 3, a flowchart for implementing step S202 in fig. 2 is shown, and for convenience of illustration, only relevant portions of the present application are shown.
In some optional implementation manners of this embodiment, the step S202 specifically includes: step S301, step S302, and step S303.
Step S301: based on the LBP-mean method, obtain the target pixel of the copied residual image and the average residual value l_m of the eight pixels surrounding it. The average residual value l_m is expressed as:

l_m = mean(l_0, l_1, …, l_7, l_c)

where mean() represents the averaging function; l_0, l_1, …, l_7 represent the residual values of the eight pixels surrounding the target pixel; l_c represents the residual value of the target pixel p_c; and l_m represents the average residual value.
Step S302: calculate the LBP value of the target pixel p_c, expressed as:

LBP_{p_c} = Σ_{i=0}^{3} s(l_{2i} − l_m) · 2^i

where LBP_{p_c} represents the LBP value of the target pixel p_c; s() represents the sign function, i.e. s is 1 if the argument is positive and 0 otherwise; l_{2i} represents the residual value of each of the four pixels sharing a common edge with the target pixel p_c, i.e. pixels p_0, p_2, p_4 and p_6; and l_m is the average residual value.
The 3×3 neighborhood of the target pixel p_c is laid out as follows:

p_7 p_0 p_1
p_6 p_c p_2
p_5 p_4 p_3
In the embodiment of the present application, it can be seen from the formula that richer local texture information is taken into account, while the value range is greatly reduced compared with the conventional LBP method: according to the formula the value range is [0, 15], which greatly reduces the complexity of the subsequent co-occurrence matrix calculation.
Step S303: and obtaining a reproduction texture image after obtaining LBP values of all pixels of the reproduction residual image.
In the embodiment of the present application, the LBP description method can describe the local texture around each pixel. The conventional LBP description method uses the eight pixel points surrounding a pixel to calculate that pixel's LBP value, which makes it poorly suited to describing fine image modification traces from multiple directions and angles.
In the embodiment of the present application, the conventional LBP description method also easily loses local texture information. In addition, the LBP value range of a target pixel obtained with the conventional method is large ([0, 255]), so the generated co-occurrence matrix becomes large (especially when the dimensionality of the co-occurrence matrix is high), which hinders the later extraction of feature vectors and classification by the classifier. To reduce this complexity while avoiding the loss of texture information, the method selects richer local texture information whose value range, [0, 15] according to the formula above, is greatly reduced compared with the conventional LBP method, so the complexity of the subsequent co-occurrence matrix calculation is greatly reduced.
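A direct, unoptimized sketch of the LBP-mean computation of steps S301 to S303 (loop-based for clarity; a vectorized version would behave identically):

```python
import numpy as np

def lbp_mean(residual):
    # For each interior pixel p_c: l_m = mean(l_0, ..., l_7, l_c) over its
    # 3x3 block; threshold the four edge-sharing neighbours p_0, p_2, p_4,
    # p_6 against l_m and pack the four bits, giving values in [0, 15].
    r = residual.astype(np.float64)
    out = np.zeros(r.shape, dtype=np.uint8)
    for y in range(1, r.shape[0] - 1):
        for x in range(1, r.shape[1] - 1):
            l_m = r[y-1:y+2, x-1:x+2].mean()
            neighbours = (r[y-1, x], r[y, x+1],   # p_0 (above), p_2 (right)
                          r[y+1, x], r[y, x-1])   # p_4 (below), p_6 (left)
            out[y, x] = sum((1 if l - l_m > 0 else 0) << i
                            for i, l in enumerate(neighbours))
    return out
```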
With continuing reference to fig. 4, a flowchart for implementing obtaining an optimized face recognition model provided in an embodiment of the present application is shown, and for convenience of description, only relevant portions of the present application are shown.
In some optional implementations of this embodiment, before step S109, the method further includes: step S401, step S402, and step S403.
Step S401: and reading the local database, and acquiring the unoccluded face image and the occlusion object image in the local database.
In the embodiments of the present application, a local database refers to a database residing on the machine that runs the client application. A local database provides the fastest response time because there is no network transfer between the client (application) and a server. The local database stores in advance standard face images not occluded by any occluder (i.e. the unoccluded face images described above) together with occluder images.
In the embodiment of the present application, an occluder refers to an object that blocks a human face, such as a mask, a veil, a face shield or a scarf. An occluder image is the image corresponding to such an occluder, for example any of various mask images.
As a possible situation, the occluder image may be obtained by the terminal device capturing a separately placed occluder, or by performing image segmentation on a face image, collected by the terminal device, in which the occluder is worn, and so on; this is not limited here.
Step S402: and carrying out fusion operation on the non-shielded face image and the shielding object image to obtain a fusion training image.
In the embodiment of the application, the fusion training image refers to a face image blocked by a blocking object. Such as an image of a face wearing a mask, etc.
In the embodiment of the application, after the non-occluded face image and the multiple occlusion object images are acquired, the multiple occlusion object images can be respectively fused to the specified positions of the non-occluded face image, so that multiple fusion training images are generated.
As a possible implementation manner, the plurality of occluder images may be fused to the designated positions of the unoccluded face image to obtain a plurality of fusion training images, as shown in the sketch after this paragraph. For example, if the occluder images are mask images, a plurality of mask images may be fused to the mask-wearing position of the unoccluded face image so as to cover the nose, mouth and chin of the face, and a plurality of fusion training images are then obtained by image fusion.
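A minimal sketch of such a fusion, assuming OpenCV and a hand-picked paste region; in practice the region would come from facial landmarks:

```python
import cv2

def fuse_occluder(face_img, occluder_img, box):
    # Paste the occluder image over the designated face region (x, y, w, h).
    x, y, w, h = box
    fused = face_img.copy()
    fused[y:y+h, x:x+w] = cv2.resize(occluder_img, (w, h))
    return fused

# e.g. cover the nose, mouth and chin of a 224x224 face with a mask image
# (the box coordinates below are illustrative assumptions):
# fused = fuse_occluder(face, mask_img, box=(40, 120, 144, 104))
```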
Step S403: and inputting the unshielded face image and the fusion training image into a face recognition model to be optimized for training operation to obtain an optimized face recognition model.
In the embodiment of the application, the face recognition model to be optimized refers to the existing model capable of accurately recognizing the collected non-occluded face image.
In the embodiment of the application, after the unoccluded face image and the fusion training image are obtained, they can be input into the face recognition model and the parameters of the face recognition model adjusted; the model is optimized through this parameter adjustment, and the optimized face recognition model can accurately recognize both occluded and unoccluded face images.
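A minimal parameter-adjustment sketch, assuming a PyTorch classification model and a data loader yielding batches of mixed unoccluded and fusion training images with identity labels (loss, optimizer and epoch count are illustrative choices):

```python
import torch
from torch import nn, optim

def finetune(face_model, loader, epochs=5, lr=1e-4):
    # Adjust the parameters of the face recognition model to be optimized on
    # unoccluded face images plus fusion training images.
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(face_model.parameters(), lr=lr)
    face_model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(face_model(images), labels)
            loss.backward()
            optimizer.step()
    return face_model
```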
In some optional implementations of this embodiment, the face recognition model includes:
a feature extraction network and identification module;
the feature extraction network is used for extracting the weight according to the preset features and obtaining a feature map of the face image;
and the recognition module is used for comparing the characteristic graph of the face image with the characteristic graphs stored in the model base in advance so as to determine a face recognition result according to the comparison result.
In the embodiment of the present application, the face recognition model in the related art extracts feature information from each region of the face, such as the eyes, mouth and nose, relatively uniformly, and then compares that feature information. However, after a mask is worn, the mouth and nose are occluded, so their features cannot be extracted normally and a large amount of feature information is lost. To improve the recognition accuracy of the face recognition model and ensure that it can recognize both unoccluded and occluded face images, feature extraction for the eye region can be strengthened; that is, a higher extraction weight may be set for the eye region, and the feature map of the face image is then acquired according to the preset feature extraction weights.
In the embodiment of the application, the face recognition model comprises a model base of a feature map corresponding to an unoccluded image and a model base of a feature map corresponding to an occluded image, and after the feature extraction network extracts the feature map of the face image, the feature map of the face image can be compared with the feature map stored in the model base in advance, so that a face recognition result is determined according to the comparison result.
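As one possible reading of the recognition module, the sketch below compares a feature map against the model base by weighted cosine similarity; the weighting scheme (up-weighting eye-region features) and the threshold value are assumptions for illustration:

```python
import numpy as np

def recognise(feature, model_base, weights=None, threshold=0.6):
    # Compare the extracted feature map with the feature maps stored in the
    # model base and return the best-matching identity, if any.
    f = (feature * weights if weights is not None else feature).ravel()
    best_id, best_sim = None, -1.0
    for identity, stored in model_base.items():
        s = (stored * weights if weights is not None else stored).ravel()
        sim = float(np.dot(f, s) /
                    (np.linalg.norm(f) * np.linalg.norm(s) + 1e-12))
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id if best_sim >= threshold else None
```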
In summary, the method for identifying a copied image provided by the present application includes: receiving current image information displayed by an image display device and acquired by an image acquisition device to obtain copied image information; performing a feature extraction operation on the copied image information to obtain copied feature vectors; pre-labeling the copied feature vectors to obtain copied image samples; inputting the copied image samples into an original classification model for a classification training operation to obtain a classification prediction model; when a user performs face recognition, receiving the current image information acquired by the image acquisition device; inputting the current image information into the classification prediction model for a classification prediction operation to obtain a classification prediction result; judging whether the current image information belongs to a copied image based on the classification prediction result; if so, outputting a face recognition failure signal; and if not, inputting the current image information into the optimized face recognition model for a face recognition operation to obtain a face recognition result. By performing the classification training operation on the untrained original classification model in advance, a trained classification prediction model is obtained that can identify copied face images or videos before any subsequent face recognition operation is performed, effectively preventing unlocking, login, payment and the like from being achieved by presenting copied images directly to face recognition. Meanwhile, by selecting richer local texture information whose value range ([0, 15] according to the formula above) is greatly reduced compared with the conventional LBP method, the complexity of the subsequent co-occurrence matrix calculation is greatly reduced. Furthermore, after the unoccluded face image and the fusion training image are obtained, they can be input into the face recognition model and its parameters adjusted so as to optimize it; the optimized face recognition model can accurately recognize both occluded and unoccluded face images.
It should be emphasized that, to further ensure the privacy and security of the current image information acquired by the image acquisition device, the current image information may also be stored in a node of a blockchain.

The blockchain referred to in this application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, each containing a batch of network transaction information used to verify the validity (anti-counterfeiting) of the information and to generate the next block. A blockchain may comprise a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware associated with computer readable instructions, which can be stored in a computer readable storage medium, and when executed, can include processes of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict restriction on their execution order, and they may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and their execution order is not necessarily sequential: they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
Example two
With further reference to fig. 5, as an implementation of the method shown in fig. 1, the present application provides an embodiment of a device for recognizing a captured image, where the embodiment of the device corresponds to the embodiment of the method shown in fig. 1, and the device can be applied to various electronic devices.
As shown in fig. 5, the reproduced image recognition apparatus 100 of the present embodiment includes: the system comprises a copied image acquisition module 110, a feature extraction module 120, a pre-labeling module 130, a classification training module 140, a current image acquisition module 150, a classification prediction module 160, a copied image judgment module 170, a copied image confirmation module 180 and a copied image denial module 190. Wherein:
a copied image obtaining module 110, configured to receive current image information displayed by an image display device and acquired by an image acquisition device, and obtain copied image information;
the feature extraction module 120 is configured to perform feature extraction operation on the copied image information to obtain a copied feature vector;
the pre-labeling module 130 is configured to perform pre-labeling operation on the captured feature vectors to obtain captured image samples;
the classification training module 140 is configured to input the copied image sample to an original classification model for classification training operation, so as to obtain a classification prediction model;
a current image obtaining module 150, configured to receive current image information acquired by an image acquisition device when a user performs face recognition;
the classification prediction module 160 is configured to input the current image information to a classification prediction model to perform a classification prediction operation, so as to obtain a classification prediction result;
a copied image judgment module 170, configured to judge whether the current image information belongs to a copied image based on the classification prediction result;
the copied image confirming module 180 is used for outputting a face recognition failure signal if the current image information belongs to the copied image;
and the copied image non-confirmation module 190 is configured to, if the current image information does not belong to the copied image, input the current image information into the optimized face recognition model to perform face recognition operation, and obtain a face recognition result.
In the embodiment of the present application, an image capturing device refers to camera apparatus that focuses on and records a picture of a captured object. The image capturing device includes at least a camera assembly, and may be a camera, a video camera, a mobile terminal with a camera assembly, or the like.
In the embodiment of the present application, in a scene where a displayed face image is reproduced, an image display device refers to a device for displaying the face image, and the image display device at least comprises a display component.
In the embodiment of the present application, the copied image information is image data obtained by recapturing a displayed face image; because the image data is obtained by recapture, it cannot genuinely carry the characteristics needed to verify a real person.
In the embodiment of the application, the feature extraction operation may be as follows: performing a high-pass filtering operation on the copied image information to obtain a copied residual image; performing a texture description operation on the copied residual image based on a local binary pattern description method to obtain a copied texture image; performing a matrix conversion operation on the copied texture image to obtain a copied co-occurrence matrix; and performing a normalization operation on the copied co-occurrence matrix to obtain the copied feature vector.
In the embodiment of the application, the reproduction feature vector is mainly used for describing feature information of reproduction image information through vector data, and the feature information can be used as a basis for distinguishing the type of the reproduction image.
In the embodiment of the present application, the pre-labeling operation mainly marks the acquired current image information as a "copied image", so as to form the copied image sample.
In the embodiment of the application, the copied image sample is mainly used for training the original classification model.
In the embodiment of the application, classification prediction can be performed using a support vector machine (SVM) or an ensemble classifier. When the classifier is trained, an image library containing a number of copied and non-copied images is first selected; a feature vector is extracted from each image through the feature extraction operation described above, and the feature vectors, together with labels indicating whether each image is a copied image, are input into the classifier to obtain the trained classifier.
In the embodiments of the present application, since a method for training a classifier is known to those skilled in the art, further description will not be made here.
In the embodiment of the application, the current image information refers to the image acquired by the image acquisition device when face recognition detection is actually performed; the current image information may be either a copied image or a real face image.
In the embodiment of the present application, since the trained classification prediction model itself has the ability to identify whether image information is a copied image, a prediction result for the current image information can be obtained by inputting it into the trained classification prediction model for classification prediction, where the prediction result is either "copied image" or "non-copied image".
In the embodiment of the application, the classification prediction result is mainly used for uniquely identifying whether the current image information is a copied image.
In the embodiment of the application, when the classification prediction result is a "copied image", it indicates that the current object subjected to face recognition is not a real person, and there is a suspicion of identity theft, so that subsequent face recognition operation cannot be performed.
In the embodiment of the application, when the classification prediction result is a non-reproduction image, it indicates that the current object for face recognition is a real person, and there is no suspicion of identity theft, so that subsequent face recognition operation can be performed.
The application provides a copied image recognition device. By performing the classification training operation on an untrained original classification model in advance, a trained classification prediction model is obtained that can recognize copied face images or videos before the subsequent face recognition operation is performed. This effectively prevents unlocking, login, payment and the like from being achieved by presenting copied images directly to face recognition, stops lawless persons from obtaining improper benefits and harming the interests of attacked individuals and companies, and effectively solves the problem of malicious face recognition performed with copied face images.
With continued reference to fig. 6, a schematic structural diagram of the pre-labeling module 130 in fig. 5 is shown, which is only shown in relevant parts of the present application for convenience of explanation.
In some optional implementations of this embodiment, the pre-labeling module 130 specifically includes: a high-pass filtering sub-module 131, a texture description sub-module 132, a matrix conversion sub-module 133, and a normalization sub-module 134. Wherein:
the high-pass filtering submodule 131 is used for performing high-pass filtering operation on the copied image information to obtain a copied residual image;
the texture description submodule 132 is configured to perform texture description operation on the copied residual image based on a local binary pattern description method to obtain a copied texture image;
the matrix conversion submodule 133 is configured to perform matrix conversion operation on the copied texture image to obtain a copied co-occurrence matrix;
and the normalization submodule 134 is configured to perform normalization operation on the flip co-occurrence matrix to obtain a flip feature vector.
In the embodiment of the present application, the electronic image may be high-pass filtered using a filtering method from the spatial rich model (SRM) used in steganalysis. The SRM defines six residual classes, and each class contains multiple "spam" or "minmax" type residuals, yielding 39 residual submodels in total; the method proposed by the present application may use any one of these residual models. It should be understood that this example of the high-pass filtering operation is only for convenience of understanding and is not intended to limit the present application.
In the embodiment of the application, the texture description operation may be as follows: based on the LBP-mean method, obtain the target pixel of the copied residual image and the average residual value l_m of the eight pixels surrounding it; calculate the LBP value of the target pixel p_c; and obtain the copied texture image once the LBP values of all pixels of the copied residual image have been obtained.
In the embodiment of the present application, the matrix conversion operation may generate the co-occurrence matrix according to the number of occurrences of pairs of horizontally or vertically adjacent LBP values in the copied texture image, where each value in the co-occurrence matrix represents the number of occurrences of one such pair of adjacent LBP values.
In the embodiment of the application, counting the occurrences of values at adjacent positions effectively reveals changes in the natural statistical characteristics of the image. The present invention uses a co-occurrence matrix generated from the occurrence counts of horizontally or vertically adjacent LBP values, i.e. each value in the co-occurrence matrix represents the number of times one pair of adjacent LBP values appears in the texture description image. For example, if pairs of two adjacent LBP values are counted, the resulting co-occurrence matrix is a two-dimensional matrix A, where each value A[m, n] represents the number of times the adjacent LBP value pair (m, n) appears in the texture description image.
In this embodiment, the co-occurrence matrix may be converted into a vector, the maximum value in the vector found, and each value in the vector divided by that maximum used as the normalized vector value, thereby generating the copied feature vector.
In some optional implementations of this embodiment, the texture description sub-module 132 specifically includes:
an average residual value obtaining unit, configured to obtain, based on the LBP-mean method, the target pixel of the copied residual image and the average residual value l_m of the eight pixels surrounding it, where the average residual value l_m is expressed as:

l_m = mean(l_0, l_1, …, l_7, l_c)

where mean() represents the averaging function; l_0, l_1, …, l_7 represent the residual values of the eight pixels surrounding the target pixel; l_c represents the residual value of the target pixel p_c; and l_m represents the average residual value;
an LBP value calculating subunit, configured to calculate the LBP value of the target pixel p_c, expressed as:

LBP_{p_c} = Σ_{i=0}^{3} s(l_{2i} − l_m) · 2^i

where LBP_{p_c} represents the LBP value of the target pixel p_c; s() represents the sign function, i.e. s is 1 if the argument is positive and 0 otherwise; l_{2i} represents the residual value of each of the four pixels sharing a common edge with the target pixel p_c, i.e. pixels p_0, p_2, p_4 and p_6; and l_m is the average residual value;
and the copying texture image obtaining subunit is used for obtaining the copying texture image after obtaining the LBP values of all pixels of the copying residual image.
In some optional implementations of the present embodiment, the above-mentioned copied image recognition apparatus 100 further includes:
the reading database module is used for reading a local database and acquiring an unoccluded face image and an occlusion object image in the local database;
the fusion operation module is used for carrying out fusion operation on the unoccluded face image and the sheltered object image to obtain a fusion training image;
and the training operation module is used for inputting the unoccluded face image and the fusion training image into the face recognition model to be optimized for training operation to obtain the optimized face recognition model.
In some optional implementations of this embodiment, the face recognition model includes:
a feature extraction network and a recognition module;

the feature extraction network is used for extracting a feature map of the face image according to preset feature extraction weights;

and the recognition module is used for comparing the feature map of the face image with feature maps pre-stored in the model library, so as to determine a face recognition result according to the comparison result.
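The text does not fix the comparison metric; cosine similarity between the extracted feature and each feature pre-stored in the model library is a common choice. A sketch under that assumption, with the gallery layout and threshold as illustrative values:

```python
import numpy as np

def recognize(feature: np.ndarray, gallery: dict, threshold: float = 0.5):
    """Return the identity of the best-matching pre-stored feature,
    or None when no similarity exceeds the threshold."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    best_id, best_sim = None, threshold
    for identity, ref in gallery.items():   # gallery: {identity: feature}
        sim = cos(feature, ref)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id
```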
To sum up, the copied image recognition apparatus provided by the present application performs a classification training operation on a previously untrained original classification model to obtain a trained classification prediction model, so that the classification prediction model can identify copied face images or videos before any subsequent face recognition operation is performed. This effectively prevents copied images from passing directly through face recognition for purposes such as unlocking, login and payment, keeps lawbreakers from obtaining improper benefits, protects the interests of the attacked individuals and companies, and effectively solves the problem of malicious face recognition with copied face images. Meanwhile, by screening richer local texture information, the value range of the local texture information is greatly reduced compared with the traditional LBP method; according to the above formula the value range is [0, 15], so the complexity of the subsequent co-occurrence matrix computation is greatly reduced. In addition, after the unoccluded face image and the fusion training image are obtained, they can be input into the face recognition model to adjust its parameters; the face recognition model with the adjusted parameters is the optimized face recognition model, which can accurately recognize both occluded and unoccluded face images.
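Viewed end to end, the apparatus gates face recognition behind the copied-image check; the sketch below is purely schematic, with classifier and face_model as hypothetical callables standing in for the trained classification prediction model and the optimized face recognition model:

```python
def recognize_if_genuine(image, classifier, face_model):
    """Run face recognition only when the copied-image classifier
    judges the current image information to be genuine."""
    if classifier(image) == "copied":
        # Copied image detected: emit the face recognition failure signal.
        return {"status": "failure", "reason": "copied image detected"}
    # Not a copied image: proceed to the optimized face recognition model.
    return {"status": "ok", "result": face_model(image)}
```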
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 7, fig. 7 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 200 includes a memory 210, a processor 220, and a network interface 230 communicatively coupled to each other via a system bus. Note that only a computer device 200 with components 210 to 230 is shown, but it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The computer device can interact with a user through a keyboard, a mouse, a remote controller, a touch panel, a voice control device, or the like.
The memory 210 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 210 may be an internal storage unit of the computer device 200, such as a hard disk or internal memory of the computer device 200. In other embodiments, the memory 210 may also be an external storage device of the computer device 200, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card) equipped on the computer device 200. Of course, the memory 210 may also include both an internal storage unit and an external storage device of the computer device 200. In this embodiment, the memory 210 is generally used for storing the operating system and various application software installed on the computer device 200, such as the computer-readable instructions of the method for recognizing a copied image. In addition, the memory 210 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 220 may, in some embodiments, be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 220 generally controls the overall operation of the computer device 200. In this embodiment, the processor 220 is configured to execute the computer-readable instructions stored in the memory 210 or to process data, for example to execute the computer-readable instructions of the copied image recognition method.
The network interface 230 may include a wireless network interface or a wired network interface, and the network interface 230 is generally used to establish a communication connection between the computer device 200 and other electronic devices.
According to the method for recognizing a copied image provided by the present application, an untrained original classification model is classified and trained in advance to obtain a trained classification prediction model, so that the classification prediction model can recognize a copied face image or video before any subsequent face recognition operation is performed, which effectively prevents copied images from being used directly in face recognition for purposes such as unlocking, login and payment.
The present application further provides another embodiment, namely a computer-readable storage medium storing computer-readable instructions which are executable by at least one processor, so as to cause the at least one processor to perform the steps of the copied image recognition method described above.
According to the method for recognizing a copied image provided by the present application, an untrained original classification model is classified and trained in advance to obtain a trained classification prediction model, so that the classification prediction model can recognize a copied face image or video before any subsequent face recognition operation is performed, which effectively prevents copied images from being used directly in face recognition for purposes such as unlocking, login and payment.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, although in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and containing instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, but not all, embodiments of the present application, and that the appended drawings illustrate preferred embodiments without limiting the scope of protection. The present application may be embodied in many different forms; these embodiments are provided so that the disclosure of the application will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their features. All equivalent structures made using the contents of the specification and drawings of the present application, and applied directly or indirectly in other related technical fields, likewise fall within the scope of protection of the present application.

Claims (10)

1. A method for recognizing a copied image is characterized by comprising the following steps:
receiving current image information displayed by image display equipment and acquired by image acquisition equipment to obtain copied image information;
performing a feature extraction operation on the copied image information to obtain a copied feature vector;

performing a pre-labeling operation on the copied feature vector to obtain a copied image sample;

inputting the copied image sample into an original classification model for a classification training operation, so as to obtain a classification prediction model;
when a user carries out face recognition, receiving current image information acquired by the image acquisition equipment;
inputting the current image information into the classification prediction model to perform a classification prediction operation, so as to obtain a classification prediction result;

judging whether the current image information belongs to a copied image based on the classification prediction result;

if the current image information belongs to a copied image, outputting a face recognition failure signal;

and if the current image information does not belong to a copied image, inputting the current image information into an optimized face recognition model for a face recognition operation, so as to obtain a face recognition result.
2. The method for recognizing the copied image according to claim 1, wherein the step of performing the feature extraction operation on the copied image information to obtain the copied feature vector specifically comprises:
performing a high-pass filtering operation on the copied image information to obtain a copied residual image;

performing a texture description operation on the copied residual image based on a local binary pattern description method to obtain a copied texture image;

performing a matrix conversion operation on the copied texture image to obtain a copied co-occurrence matrix;

and performing a normalization operation on the copied co-occurrence matrix to obtain the copied feature vector.
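As an illustrative aside, outside the claim language: the high-pass residual in the first step is commonly obtained by convolving the image with a high-pass kernel. The specific 3 x 3 Laplacian-style kernel below is an assumption for illustration, not the filter prescribed by the application:

```python
import numpy as np
from scipy.ndimage import convolve

def highpass_residual(gray: np.ndarray) -> np.ndarray:
    """Return a high-pass residual that preserves fine texture such as
    the moire and screen-grid artifacts typical of copied images."""
    kernel = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=np.float64) / 8.0
    return convolve(gray.astype(np.float64), kernel, mode="nearest")
```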
3. The method for recognizing the copied image according to claim 2, wherein the step of performing texture description operation on the copied residual image based on the local binary pattern description method to obtain the copied texture image specifically comprises:
obtaining a target pixel of the copied residual image and the average residual value l_m of the target pixel and the eight pixels around it based on the LBP-mean method, the average residual value l_m being expressed as:

$$l_m = \mathrm{mean}(l_0, l_1, \ldots, l_7, l_c)$$

where mean() represents the averaging function; l_0, l_1, ..., l_7 respectively represent the residual values of the eight pixels around the target pixel; l_c represents the residual value of the target pixel p_c; and l_m represents the average residual value;
calculating the LBP value of the target pixel p_c, the LBP value being expressed as:

$$\mathrm{LBP}_{p_c} = \sum_{i=0}^{3} s(l_{2i} - l_m) \cdot 2^i$$

where LBP_{p_c} represents the LBP value of the target pixel p_c; s() represents the sign function, i.e., s is 1 if the argument is positive and 0 otherwise; l_{2i} represents the residual values of the four pixels sharing a common edge with the target pixel p_c, namely pixels p_0, p_2, p_4 and p_6; and l_m is the average residual value;

and obtaining the copied texture image after obtaining the LBP values of all pixels of the copied residual image.
4. The method for recognizing the copied image according to claim 1, wherein before the step of inputting the current image information to an optimized face recognition model for face recognition operation to obtain a face recognition result if the current image information does not belong to the copied image, the method further comprises:
reading a local database, and acquiring an unoccluded face image and an occlusion object image from the local database;

performing a fusion operation on the unoccluded face image and the occlusion object image to obtain a fusion training image;

and inputting the unoccluded face image and the fusion training image into a face recognition model to be optimized for a training operation, so as to obtain the optimized face recognition model.
5. The method according to claim 1, wherein the face recognition model comprises:
a feature extraction network and a recognition module;

the feature extraction network is used for extracting a feature map of the face image according to preset feature extraction weights;

and the recognition module is used for comparing the feature map of the face image with feature maps pre-stored in the model library, so as to determine a face recognition result according to the comparison result.
6. The method for recognizing the copied image according to claim 5, further comprising, after the step of receiving the current image information collected by the image collecting device when the user performs face recognition:
and storing the current image information into a blockchain.
7. A copied image recognition apparatus, characterized by comprising:

a copied image acquisition module, which is used for receiving current image information displayed by an image display device and collected by an image acquisition device, so as to obtain copied image information;

a feature extraction module, which is used for performing a feature extraction operation on the copied image information to obtain a copied feature vector;

a pre-labeling module, which is used for performing a pre-labeling operation on the copied feature vector to obtain a copied image sample;

a classification training module, which is used for inputting the copied image sample into an original classification model for a classification training operation, so as to obtain a classification prediction model;

a current image acquisition module, which is used for receiving current image information collected by the image acquisition device when a user performs face recognition;

a classification prediction module, which is used for inputting the current image information into the classification prediction model for a classification prediction operation, so as to obtain a classification prediction result;

a copied image judging module, which is used for judging whether the current image information belongs to a copied image based on the classification prediction result;

a copied image confirmation module, which is used for outputting a face recognition failure signal if the current image information belongs to a copied image;

and a non-copied image recognition module, which is used for inputting the current image information into an optimized face recognition model for a face recognition operation if the current image information does not belong to a copied image, so as to obtain a face recognition result.
8. The apparatus according to claim 7, wherein the feature extraction module comprises:
the high-pass filtering submodule is used for performing a high-pass filtering operation on the copied image information to obtain a copied residual image;

the texture description submodule is used for performing a texture description operation on the copied residual image based on a local binary pattern description method to obtain a copied texture image;

the matrix conversion submodule is used for performing a matrix conversion operation on the copied texture image to obtain a copied co-occurrence matrix;

and the normalization submodule is used for performing a normalization operation on the copied co-occurrence matrix to obtain the copied feature vector.
9. A computer device, comprising a memory in which computer-readable instructions are stored and a processor which, when executing the computer-readable instructions, implements the steps of the method for identifying a copied image according to any one of claims 1 to 6.
10. A computer-readable storage medium, having computer-readable instructions stored thereon, which, when executed by a processor, implement the steps of the method for identifying a copied image according to any one of claims 1 to 6.
CN202011487387.XA 2020-12-16 2020-12-16 Method and device for identifying copied image, computer equipment and storage medium Pending CN112560683A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011487387.XA CN112560683A (en) 2020-12-16 2020-12-16 Method and device for identifying copied image, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011487387.XA CN112560683A (en) 2020-12-16 2020-12-16 Method and device for identifying copied image, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112560683A true CN112560683A (en) 2021-03-26

Family

ID=75064003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011487387.XA Pending CN112560683A (en) 2020-12-16 2020-12-16 Method and device for identifying copied image, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112560683A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105117729A (en) * 2015-05-11 2015-12-02 杭州金培科技有限公司 Method and device for recognizing copied image
CN111241873A (en) * 2018-11-28 2020-06-05 马上消费金融股份有限公司 Image reproduction detection method, training method of model thereof, payment method and payment device
WO2020147445A1 (en) * 2019-01-16 2020-07-23 深圳壹账通智能科技有限公司 Rephotographed image recognition method and apparatus, computer device, and computer-readable storage medium
CN111914628A (en) * 2020-06-19 2020-11-10 北京百度网讯科技有限公司 Training method and device of face recognition model

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113938615A (en) * 2021-08-26 2022-01-14 秒针信息技术有限公司 Method and device for acquiring human face anti-counterfeiting data set and electronic equipment
CN114677769A (en) * 2022-04-08 2022-06-28 中国平安人寿保险股份有限公司 Method and device for identifying copied certificate, computer equipment and storage medium
CN115619410A (en) * 2022-10-19 2023-01-17 闫雪 Self-adaptive financial payment platform
CN115619410B (en) * 2022-10-19 2024-01-26 闫雪 Self-adaptive financial payment platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination