CN113205035B - Identity recognition method, device, equipment and storage medium - Google Patents


Info

Publication number: CN113205035B (application number CN202110463971.XA)
Authority: CN (China)
Prior art keywords: image, complement, pseudo, training, preset
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN113205035A
Inventors: 李晓风, 许金林
Current and original assignee: Anhui Zhongke Lattice Technology Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application: filed by Anhui Zhongke Lattice Technology Co ltd; priority to CN202110463971.XA (the priority date is an assumption and is not a legal conclusion)
Publications: CN113205035A (application), CN113205035B (grant)

Classifications

    • G06V40/172: recognition of biometric patterns in image or video data (G06V40/00) → human or animal bodies (G06V40/10) → human faces (G06V40/16) → classification, e.g. identification
    • G06F18/253: pattern recognition (G06F18/00) → analysing (G06F18/20) → fusion techniques (G06F18/25) → fusion techniques of extracted features
    • G06N3/045: neural networks (G06N3/02) → architecture, e.g. interconnection topology (G06N3/04) → combinations of networks
    • G06N3/08: neural networks (G06N3/02) → learning methods
    • G06V40/168: human faces (G06V40/16) → feature extraction; face representation
    • Y02T10/40: climate change mitigation technologies related to transportation (Y02T) → internal combustion engine based vehicles (Y02T10/10) → engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention belongs to the technical field of identity recognition and discloses an identity recognition method, device, equipment and storage medium. The invention collects a current image of a user and detects whether the face image in the current image is partially blocked; when the face image is partially blocked, it extracts features from the current image through a preset feature extraction model to obtain current image features; it performs image complement on the current image features through a preset pseudo-graph generator to obtain a complete user image after complement; and it carries out identity recognition on the complete user image, generates an identity recognition result and displays it. Because the current image is complemented through the preset feature extraction model and the preset pseudo-graph generator when the face image in the user's current image is partially blocked, a complete user image usable for identity recognition is obtained, and the identity of the user can still be determined through face recognition even when a complete face image of the user cannot be obtained.

Description

Identity recognition method, device, equipment and storage medium
Technical Field
The present invention relates to the field of identity identification technologies, and in particular, to an identity identification method, apparatus, device, and storage medium.
Background
Today identity recognition is applied extremely widely. By recognition mode it can mainly be divided into fingerprint recognition and face recognition. Because security-evasion means such as fingerprint film copying already exist, fingerprint recognition may carry a security risk due to fingerprint forgery. Face recognition works at a greater distance than fingerprint recognition, is more convenient to use, and, since no practical means of evading it has yet appeared, is also very safe; it is therefore now widely used to check the validity of a person's identity. However, depending on the actual usage scene, a complete face image of a person can be difficult to obtain in some scenes, causing face recognition to fail and the person's identity to go unrecognized. For example: in scenes with complex terrain, the limited angle of the image acquisition equipment makes it difficult to capture a complete face image of a person; in application scenes such as hospitals, or construction sites and factories with heavy dust, shielding objects such as caps and masks make it difficult to capture complete face images; and in application scenes such as cinemas and stations, where many people may be present and may block one another, complete face images of people cannot be captured.
The foregoing is provided merely to facilitate understanding of the technical solution of the present invention and does not constitute an admission that it is prior art.
Disclosure of Invention
The invention mainly aims to provide an identity recognition method, an identity recognition device, identity recognition equipment and a storage medium, and aims to solve the technical problem that the identity of a person cannot be determined through face recognition when the complete face image of the person cannot be obtained in the prior art.
In order to achieve the above object, the present invention provides an identification method, which includes the following steps:
collecting a current image of a user, and detecting whether a face image in the current image is partially blocked;
when the face image is partially blocked, extracting the characteristics of the current image through a preset characteristic extraction model to obtain the characteristics of the current image;
performing image complement on the current image features through a preset pseudo-graph generator to obtain a complete user image after complement;
and carrying out identity recognition on the complete user image, generating an identity recognition result and displaying the identity recognition result.
Optionally, before the step of collecting the current image of the user and detecting whether the face image in the current image is partially blocked, the method further includes:
acquiring a clear complete image sample set, and traversing the clear complete image sample set to obtain a current clear complete image sample;
carrying out random face shielding treatment on the clear complete image sample to obtain a partial shielding image sample;
constructing an image complement training sample according to the clear complete image sample and the partial shielding image sample;
when the traversing is finished, constructing an image complement training sample set according to all the obtained image complement training samples;
training an initial image complement model through the image complement training sample set to obtain a preset feature extraction model and a preset pseudo-graph generator, wherein the initial image complement model comprises the pseudo-graph generator and the feature extraction model.
Optionally, the initial image complement model further includes an image identifier;
the step of training the initial image complement model through the image complement training sample set to obtain a preset feature extraction model and a preset pseudo-graph generator comprises the following steps:
training the feature extraction model through a feature extraction training set to obtain a preset feature extraction model;
selecting a current image complement training sample from the image complement training sample set;
analyzing the current image complement training sample to obtain a clear complete image and a partial shielding image;
extracting the characteristics of the partial shielding image through the characteristic extraction model to obtain the characteristics of the shielding image;
image complement is carried out on the shielding image characteristics through the pseudo-image generator so as to obtain a complement pseudo-image;
the image discriminator judges the authenticity of the clear complete image, the partial shielding image and the complement pseudo-image to obtain a discrimination result;
determining a training loss value according to the clear complete image, the partial shielding image, the complement pseudo-graph and the discrimination result;
and when the training loss value meets a preset training ending condition, judging that training is completed, and taking the training-completed pseudo-graph generator as a preset pseudo-graph generator.
Optionally, after the step of determining the training loss value according to the clear complete image, the partial occlusion image, the complement pseudo-graph and the discrimination result, the method further includes:
and when the training loss value does not meet the preset training ending condition, performing parameter optimization on the pseudo-graph generator and the image discriminator, and returning to the step of selecting the current image complement training sample from the image complement training sample set.
Optionally, the step of performing image complement on the occlusion image feature by the pseudo-graph generator to obtain a complement pseudo-graph includes:
acquiring random noise characteristics, and carrying out characteristic fusion on the shielding image characteristics and the random noise characteristics to acquire fusion image characteristics;
performing image complementation on the fused image features through the pseudo-image generator to obtain a complement pseudo-image;
correspondingly, the step of determining the training loss value according to the clear complete image, the partial shielding image, the complement pseudo-graph and the discrimination result comprises the following steps:
and determining a training loss value according to the clear complete image, the partial shielding image, the complement pseudo-graph, the random noise characteristic and the discrimination result.
Optionally, the step of determining a training loss value according to the clear complete image, the partial occlusion image, the complement pseudo-graph, the random noise feature and the discrimination result includes:
determining a training loss value according to the clear complete image, the partial shielding image, the complement pseudo-graph, the random noise characteristic and the discrimination result through a preset loss function;
The preset loss function is as follows:
F = E_{x,y}[log(D(x,y))] + E_{y,z}[log(1 − D(y, G(y,z)))] + λ·E_{x,y,z}[‖(F_x − F_y) − (F_{G(y,z)} − F_y)‖²]
wherein F is the training loss value; G is the pseudo-graph generator; D is the image discriminator; λ is a preset constant coefficient; x is the clear complete image, y is the partial shielding image, and z is the random noise feature; F_x is the image features of the clear complete image, F_y is the image features of the partial shielding image, and F_{G(y,z)} is the image features of the complement pseudo-graph; E denotes mathematical expectation; D(x,y) is the true-false discrimination result for the clear complete image and the partial shielding image, and D(y,G(y,z)) is the true-false discrimination result for the partial shielding image and the complement pseudo-graph.
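As a minimal sketch, the preset loss function above might be computed as follows in NumPy. The function name, the array shapes, and the default value of λ are illustrative assumptions, not part of the patent:

```python
import numpy as np

def completion_loss(d_real, d_fake, feat_x, feat_y, feat_g, lam=10.0):
    """F = E[log D(x,y)] + E[log(1 - D(y, G(y,z)))]
         + lam * E[||(F_x - F_y) - (F_G(y,z) - F_y)||^2]

    d_real: discriminator scores D(x, y) for (clear, occluded) pairs, in (0, 1)
    d_fake: discriminator scores D(y, G(y, z)) for (occluded, completed) pairs
    feat_x, feat_y, feat_g: feature vectors of the clear image, the partially
        shielded image, and the complement pseudo-graph, shape (batch, dim)
    """
    adv_real = np.mean(np.log(d_real))
    adv_fake = np.mean(np.log(1.0 - d_fake))
    # Feature-consistency term: the completed image should shift the shielded
    # image's features in the same direction the clear image would.
    diff = (feat_x - feat_y) - (feat_g - feat_y)
    feat_term = np.mean(np.sum(diff ** 2, axis=1))
    return adv_real + adv_fake + lam * feat_term
```

When the completed features exactly match the clear-image features, the third term vanishes and only the two adversarial terms remain.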
Optionally, when the training loss value meets a preset training ending condition, determining that training is completed, and after the step of using the trained pseudo-graph generator as the preset pseudo-graph generator, further includes:
acquiring an image complement test sample set, and performing image complement on each image complement sample in the image complement test sample set through the preset feature extraction model and a preset pseudo-graph generator to acquire a model complement image set;
acquiring a standard clear image set corresponding to the image complement test sample set;
determining an image complement accuracy according to the standard clear image set and the model complement image set;
and returning to the step of obtaining the clear complete image sample set when the image complement accuracy is lower than a preset accuracy threshold.
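The accuracy check described in these steps could be sketched as below. The patent does not specify how a completed image is compared with its standard clear image, so per-pixel mean absolute error under a tolerance is an assumed closeness criterion, and the function and parameter names are illustrative:

```python
import numpy as np

def completion_accuracy(standard_images, completed_images, tol=8):
    """Fraction of test samples whose model-completed image is close enough
    to the corresponding standard clear image (assumed criterion: per-pixel
    mean absolute error no larger than `tol`)."""
    hits = 0
    for std, comp in zip(standard_images, completed_images):
        mae = np.mean(np.abs(std.astype(float) - comp.astype(float)))
        hits += mae <= tol
    return hits / len(standard_images)
```

If the returned accuracy falls below the preset threshold, training would restart from sample-set acquisition as the step above describes.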
In addition, in order to achieve the above purpose, the invention also provides an identity recognition device, which comprises the following modules:
the image acquisition module is used for acquiring a current image of a user and detecting whether a face image in the current image is partially blocked;
the feature extraction module is used for extracting features of the current image through a preset feature extraction model when the face image is partially blocked, so as to obtain current image features;
the image complement module is used for carrying out image complement on the current image characteristics through a preset pseudo-image generator so as to obtain a complete user image after complement;
and the identity recognition module is used for carrying out identity recognition on the complete user image, generating an identity recognition result and displaying the identity recognition result.
In addition, to achieve the above object, the present invention also proposes an identification device, comprising: a processor, a memory, and an identification program stored on the memory and executable on the processor, wherein the identification program, when executed by the processor, implements the steps of the identification method described above.
In addition, in order to achieve the above object, the present invention also proposes a computer-readable storage medium having stored thereon an identification program which, when executed, implements the steps of the identification method as described above.
The invention collects a current image of a user and detects whether the face image in the current image is partially blocked; when the face image is partially blocked, extracts features from the current image through a preset feature extraction model to obtain current image features; performs image complement on the current image features through a preset pseudo-graph generator to obtain a complete user image after complement; and carries out identity recognition on the complete user image, generates an identity recognition result and displays it. Because the current image is complemented through the preset feature extraction model and the preset pseudo-graph generator when the face image in the user's current image is partially blocked, a complete user image usable for identity recognition is obtained, and the identity of the user can still be determined through face recognition even when a complete face image of the user cannot be obtained.
Drawings
FIG. 1 is a schematic diagram of an electronic device of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart of a first embodiment of the identification method of the present invention;
FIG. 3 is a flowchart of a second embodiment of the identification method of the present invention;
fig. 4 is a block diagram of a first embodiment of an identification device according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of an identification device structure of a hardware running environment according to an embodiment of the present invention.
As shown in fig. 1, the electronic device may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to implement connected communication between these components. The user interface 1003 may include a display and an input unit such as a keyboard, and may optionally further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed Random Access Memory (RAM) or a stable Non-Volatile Memory (NVM), such as disk storage. The memory 1005 may optionally also be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the structure shown in fig. 1 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or may be arranged in different components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and an identification program may be included in the memory 1005 as one type of storage medium.
In the electronic device shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 in the electronic device of the present invention may be disposed in the identification device, where the electronic device invokes the identification program stored in the memory 1005 through the processor 1001, and executes the identification method provided by the embodiment of the present invention.
An embodiment of the present invention provides an identity recognition method, referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of the identity recognition method of the present invention.
In this embodiment, the identification method includes the following steps:
step S10: and collecting a current image of a user, and detecting whether a face image in the current image is partially blocked.
It should be noted that, the execution body of the embodiment may be the identification device, and the identification device may be an electronic device such as a personal computer, a server, or other devices capable of implementing the same or similar functions, which is not limited in this embodiment, and in this embodiment and the following embodiments, the identification device is taken as an example to describe the identification method of the present invention.
It should be noted that the current image may be an image of the user acquired at the current time. It may be captured by an image acquisition device when a sensor senses that the user is in the image acquisition area; the sensor may be an infrared sensor, and the image acquisition device may be a camera, video camera, webcam, or other device with an image acquisition function, which this embodiment does not limit. Detecting whether the face image in the current image is partially blocked may proceed by first locating the position of the face image in the current image through pattern recognition, then cropping the face image from the current image according to that position, detecting and locating each facial organ in the face image, and finally judging whether the face image is partially blocked according to the number of facial organs located.
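The "count the located facial organs" check described above might be sketched as follows. The organ detector itself is assumed and not shown; the set of expected organs and the function name are illustrative:

```python
# Facial organs an unoccluded face is expected to expose. The landmark/organ
# detector that produces `detected_organs` is assumed and stubbed out here.
EXPECTED_ORGANS = {"left_eye", "right_eye", "nose", "mouth", "chin"}

def is_partially_occluded(detected_organs):
    """Judge partial blocking by how many facial organs the detector
    managed to locate, as the embodiment describes."""
    found = set(detected_organs) & EXPECTED_ORGANS
    return len(found) < len(EXPECTED_ORGANS)
```

A face whose mouth and chin are covered by a mask would yield only the eye and nose organs, so the check reports partial blocking.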
Step S20: and when the face image is partially blocked, extracting the characteristics of the current image through a preset characteristic extraction model so as to obtain the characteristics of the current image.
It should be noted that the preset feature extraction model may be a neural network model trained in advance to perform feature extraction on images. The current image features may be the image features of the part of the user that is not blocked in the current image. Feature extraction may be performed on the whole current image through the preset feature extraction model to obtain the current image features; alternatively, the user image may first be cropped from the current image through the preset feature extraction model and feature extraction then performed on that user image to obtain the current image features.
Step S30: and carrying out image complementation on the current image features through a preset pseudo-image generator so as to obtain a completed complete user image.
It should be noted that the preset pseudo-graph generator may be a neural network model trained in advance for completing images. The preset pseudo-graph generator may decompose the input current image features, determine the region of each image feature and thereby the region of the blocked part of the current image, extract the neighborhood features of the blocked part, infer the picture features of the blocked part from those neighborhood features, and combine the inferred picture features of the blocked part with the current image features to complete the image features; finally it generates and outputs a complete user image from the completed image features. According to actual usage requirements, before the current image features are input into the preset pseudo-graph generator, a random image feature may be obtained as a random noise feature and fused with the current image features, and the fused image features are then input into the preset pseudo-graph generator.
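The noise-fusion step mentioned above might look like the following sketch. The patent does not fix the fusion operator, so fusion by concatenation, along with the function name and the noise dimension, are assumptions:

```python
import numpy as np

def fuse_with_noise(current_features, noise_dim=64, rng=None):
    """Fuse the current image features with a random noise feature before
    feeding the pseudo-graph generator. Concatenation is an assumed choice
    of fusion; the patent only requires that the two be fused."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(noise_dim).astype(current_features.dtype)
    return np.concatenate([current_features, z])
```

The fused vector, rather than the raw current image features, would then be handed to the preset pseudo-graph generator.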
Step S40: and carrying out identity recognition on the complete user image, generating an identity recognition result and displaying the identity recognition result.
In practical use, after the complete user image is obtained through complement, face recognition can be performed on it, the corresponding personnel identity can be searched for in a personnel identity management library according to the recognition result, and an identity recognition result can be generated from the search result and shown on a corresponding display device. The personnel identity management library may be a database, preset by an administrator, containing the personnel identity information of every person allowed to pass; the display device may be a display screen or other equipment with a display function.
It can be understood that when the corresponding personnel identity can be found in the personnel identity management library according to the identification result, the identification result can be determined as passing of the identification; when the corresponding personnel identity is not found in the personnel identity management library according to the identification result, the identification result can be determined that the identification is not passed.
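The pass/fail decision described in the two paragraphs above can be sketched as a simple library lookup. The data shapes, the matching predicate, and all names here are illustrative assumptions:

```python
def identify(face_embedding, identity_db, match):
    """Search the personnel identity management library for the recognized
    face and turn the search outcome into an identity recognition result:
    found => recognition passes, not found => recognition does not pass."""
    for person_id, ref_embedding in identity_db.items():
        if match(face_embedding, ref_embedding):
            return {"passed": True, "identity": person_id}
    return {"passed": False, "identity": None}
```

The returned result would then be rendered on the display device.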
Further, in order to facilitate problem investigation, after step S40 of the present embodiment, the method may further include:
generating an identity recognition report according to the current image, the current image characteristics, the complete user image and the identity recognition result; and storing the identification report into a preset identification report library.
It can be understood that in actual use, phenomena such as recognition failure or recognition abnormality may occur, and if no relevant process information is recorded, troubleshooting becomes very difficult. An identity recognition report can therefore be generated from the current image, the current image features, the complete user image and the identity recognition result, so that every processing result produced while executing the identity recognition method of this embodiment is contained in the report.
This embodiment collects the current image of the user and detects whether the face image in the current image is partially blocked; when the face image is partially blocked, extracts features from the current image through a preset feature extraction model to obtain current image features; performs image complement on the current image features through a preset pseudo-graph generator to obtain a complete user image after complement; and carries out identity recognition on the complete user image, generates an identity recognition result and displays it. Because the current image is complemented through the preset feature extraction model and the preset pseudo-graph generator when the face image in the user's current image is partially blocked, a complete user image usable for identity recognition is obtained, and the identity of the user can still be determined through face recognition even when a complete face image of the user cannot be obtained.
Referring to fig. 3, fig. 3 is a flowchart of a second embodiment of an identification method according to the present invention.
Based on the above first embodiment, the identity recognition method of this embodiment further includes, before the step S10:
step S01: and acquiring a clear complete image sample set, and traversing the clear complete image sample set to obtain a current clear complete image sample.
It should be noted that the clear complete image sample set may be a set constructed from a plurality of clear complete image samples, where a clear complete image sample may be a high-definition image of a person acquired in advance, or a person image on which face recognition has already succeeded. Traversing the clear complete image sample set to obtain the current clear complete image sample may mean traversing the set according to its index and taking the clear complete image sample obtained at the current traversal stage as the current clear complete image sample.
Step S02: and carrying out random face shielding treatment on the clear complete image sample to obtain a partial shielding image sample.
It should be noted that performing random face shielding processing on the clear complete image sample to obtain the partial shielding image sample may mean identifying the face image position in the clear complete image sample to obtain the face image, obtaining a random shielding object, and covering the face image with the random shielding object to obtain the partial shielding image sample. The random shielding object may be a shielding object such as a mask or a cap, and may be obtained by random selection from a preset shielding object sample library, which may be a pre-built sample library storing shielding object samples of various types and styles.
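A minimal sketch of this random shielding step follows. The occluder library, the choice of which face region to cover, and the flat gray patch standing in for a rendered shielding object are all illustrative assumptions:

```python
import random
import numpy as np

OCCLUDER_LIBRARY = ["mask", "cap", "scarf"]  # stand-in for the sample library

def random_occlusion(image, face_box, rng=None):
    """Cover part of the located face region with a randomly chosen shielding
    object to produce a partial shielding image sample. The occluder is drawn
    here as a flat gray patch purely for illustration."""
    rng = random.Random() if rng is None else rng
    x0, y0, x1, y1 = face_box
    occluder = rng.choice(OCCLUDER_LIBRARY)
    occluded = image.copy()
    # Cover the lower half of the face box, roughly where a mask would sit.
    occluded[(y0 + y1) // 2:y1, x0:x1] = 128
    return occluded, occluder
```

Pairing each output with its untouched source image yields one image complement training sample, as the next step describes.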
Step S03: and constructing an image complement training sample according to the clear complete image sample and the partial shielding image sample.
In practical use, the clear complete image sample and the partial shielding image sample are combined into a group of data, and then the image complement training sample can be obtained.
Step S04: and at the end of the traversal, constructing an image complement training sample set according to all the obtained image complement training samples.
It will be appreciated that a plurality of image complement training samples may be obtained at the end of the traversal, and that a set of image complement training samples may be obtained by adding all of the obtained image complement training samples to a pre-created set.
Step S05: training an initial image complement model through the image complement training sample set to obtain a preset feature extraction model and a preset pseudo-graph generator, wherein the initial image complement model comprises the pseudo-graph generator and the feature extraction model.
The initial image complement model may be a model constructed based on a Conditional Generative Adversarial Network (CGAN) and may include an image discriminator, a pseudo-graph generator, and a feature extraction model. The feature extraction model may be a Convolutional Neural Network (CNN) model built on a classical architecture such as AlexNet or VGGNet, and may include 5 convolutional layers, 5 pooling layers, and 2 fully-connected layers. All convolution kernels are 3×3 with ReLU (Rectified Linear Unit) activations, and the numbers of convolution kernels in the successive layers are 64, 128, 256, 320 and 380. All pooling layers use 2×2 pooling units and feed the pooled feature vectors, after batch normalization, to the fully-connected layers, whose neuron counts are 4096 and 1000. The input of the pseudo-graph generator is image features; its structure is similar to the feature extraction model but has no pooling layers, and the convolutional layers are replaced by 5 deconvolution layers, all with 3×3 kernels. The first 4 deconvolution layers use the ReLU activation function and the last uses the Tanh activation function, so the generator directly outputs the completed complement pseudo-graph.
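The spatial behavior of the described 5-stage CNN can be traced with simple arithmetic. The patent states the kernel and pooling sizes but not stride or padding, so 'same'-padded 3×3 convolutions and stride-2 2×2 pooling are assumed, as is the 224-pixel input size:

```python
CONV_CHANNELS = [64, 128, 256, 320, 380]  # kernels per conv layer, as described
FC_UNITS = [4096, 1000]                   # neurons in the two FC layers

def feature_map_sizes(input_size=224):
    """Trace the spatial size through the 5 conv+pool stages, assuming each
    3x3 convolution preserves size ('same' padding) and each 2x2 pooling
    with stride 2 halves it."""
    sizes = [input_size]
    for _ in CONV_CHANNELS:
        sizes.append(sizes[-1] // 2)  # conv keeps size; pooling halves it
    return sizes
```

Under these assumptions a 224×224 input shrinks to 7×7 before the fully-connected layers, matching the usual VGG-style layout the text alludes to.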
The input of the image discriminator is an image feature data pair, i.e. a data pair formed from two different image features, and the discriminator judges whether the two features correspond to the same image. The discriminator has only one layer containing only one neuron, whose activation function is the Sigmoid activation function; its output is 0 or 1, where 0 means the discrimination result is false (the features do not correspond to the same image) and 1 means the discrimination result is true (they do).
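As a rough illustration, the layer arithmetic of the components described above can be traced in a few lines of Python. The 224×224 input size and the "same" convolution padding are assumptions for illustration; the patent does not fix them.

```python
# Trace the spatial size and channel count through the feature extraction
# model: five 3x3 convolutions (assumed stride 1, 'same' padding, so the
# side is unchanged) each followed by 2x2 pooling, then two fully
# connected layers.
CONV_CHANNELS = [64, 128, 256, 320, 380]
FC_SIZES = [4096, 1000]

def trace_extractor(input_side=224):
    """Return the (spatial side, channels) shape after each conv+pool stage."""
    stages = []
    side = input_side
    for channels in CONV_CHANNELS:
        side //= 2                  # 2x2 pooling halves each dimension
        stages.append((side, channels))
    return stages

stages = trace_extractor()
print(stages)    # [(112, 64), (56, 128), (28, 256), (14, 320), (7, 380)]
print(FC_SIZES)  # [4096, 1000]

# The pseudo-graph generator mirrors this with five deconvolution layers
# (ReLU on the first four, Tanh on the last) and no pooling; the image
# discriminator is a single sigmoid neuron emitting 0 (false) or 1 (true).
```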
Further, to describe how to perform model training, step S05 of this embodiment may include:
training the feature extraction model through a feature extraction training set to obtain a preset feature extraction model; selecting a current image complement training sample from the image complement training sample set; analyzing the current image complement training sample to obtain a clear complete image and a partial shielding image; extracting the characteristics of the partial shielding image through the characteristic extraction model to obtain the characteristics of the shielding image; image complement is carried out on the shielding image characteristics through the pseudo-image generator so as to obtain a complement pseudo-image; the image discriminator judges the authenticity of the clear complete image, the partial shielding image and the complement pseudo-image to obtain a discrimination result; determining a training loss value according to the clear complete image, the partial shielding image, the complement pseudo-graph and the discrimination result; and when the training loss value meets a preset training ending condition, judging that training is completed, and taking the training-completed pseudo-graph generator as a preset pseudo-graph generator.
It should be noted that the feature extraction training set may be a set constructed from a plurality of feature extraction training samples, each built from a feature extraction image combined with its standard image features. The feature extraction training set is used to train the feature extraction model and adjust each of its model parameters; when the feature extraction error of the model falls below a preset error threshold, which may be set according to actual needs, training is judged complete and the preset feature extraction model is obtained. The trained preset feature extraction model can then perform feature extraction on an input image.
It should be noted that the occlusion image features may be the image features of the unoccluded portion of the partial occlusion image. Performing image complement on the occlusion image features through the pseudo-graph generator to obtain a complement pseudo-graph means generating the complement pseudo-graph with the occlusion image features as the image generation condition. Using the occlusion image features as the generation condition constrains the degrees of freedom of the pseudo-graph generator's image generation and reduces the diversity of the categories it can generate, which raises the convergence rate of the pseudo-graph generator during training and improves training efficiency.
It should be noted that performing authenticity discrimination on the clear complete image, the partial occlusion image, and the complement pseudo-graph through the image discriminator to obtain a discrimination result may proceed as follows: feature extraction is performed on the clear complete image, the partial occlusion image, and the complement pseudo-graph through the preset feature extraction model to obtain the complete image features, the occlusion image features, and the pseudo-graph features respectively; the complete image features and the occlusion image features are input to the image discriminator as a first data pair for authenticity discrimination, yielding a first authenticity discrimination result (that of the clear complete image and the partial occlusion image); the occlusion image features and the pseudo-graph features are input as a second data pair, yielding a second authenticity discrimination result (that of the partial occlusion image and the complement pseudo-graph); finally, the first and second discrimination results are combined into the discrimination result.
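The pairing just described can be sketched minimally as follows. The 380-dimensional feature vectors, the NumPy representation, and concatenation as the pairing operation are illustrative assumptions, not details fixed by the patent.

```python
import numpy as np

def make_discriminator_pairs(f_complete, f_occluded, f_pseudo):
    """Form the two image feature data pairs fed to the discriminator.

    The (complete, occluded) pair derives from the same real image and
    should be judged true (label 1); the (occluded, pseudo) pair involves
    a generated image and should be judged false (label 0) until the
    generator manages to fool the discriminator.
    """
    first = (np.concatenate([f_complete, f_occluded]), 1)
    second = (np.concatenate([f_occluded, f_pseudo]), 0)
    return first, second

f = np.ones(380)
(pair1, y1), (pair2, y2) = make_discriminator_pairs(f, 0.5 * f, 0.1 * f)
print(pair1.shape, y1, pair2.shape, y2)   # (760,) 1 (760,) 0
```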
In actual use, the preset training ending condition can be judged to be met when the training loss value is smaller than a preset loss threshold; it can also be judged to be met when the training loss value is not smaller than the preset loss threshold but all the image complement training samples in the image complement training sample set have been selected.
It can be understood that if the training loss value does not meet the preset training ending condition, the parameters of the pseudo-graph generator and the image discriminator may be optimized and a new current image complement training sample selected for training. Therefore, after the step of determining the training loss value according to the clear complete image, the partial occlusion image, the complement pseudo-graph, and the discrimination result, this embodiment may further include:
and when the training loss value does not meet the preset training ending condition, performing parameter optimization on the pseudo-graph generator and the image discriminator, and returning to the step of selecting the current image complement training sample from the image complement training sample set.
In practical use, the parameter optimization of the pseudo-graph generator and the image discriminator may be performed according to the training loss value.
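The select / compute loss / check end condition / optimize / reselect cycle described above can be sketched as a plain training loop. The `loss_fn` and `grad_fn` helpers are hypothetical placeholders, and a plain gradient step is assumed for "parameter optimization according to the training loss value"; the patent does not prescribe the update rule.

```python
def train(samples, params, loss_fn, grad_fn, loss_threshold=0.01, lr=0.1):
    """Select each training sample in turn; training ends when the loss
    falls below the threshold or the sample set is exhausted (the two
    preset training ending conditions described above)."""
    loss = None
    for sample in samples:
        loss = loss_fn(params, sample)
        if loss < loss_threshold:
            return params, loss          # end condition 1: loss small enough
        # Otherwise optimize the parameters from the loss and reselect.
        params = [p - lr * g for p, g in zip(params, grad_fn(params, sample))]
    return params, loss                  # end condition 2: set exhausted

# Toy usage: fit a single parameter toward the constant sample value 3.0.
loss_fn = lambda params, s: (params[0] - s) ** 2
grad_fn = lambda params, s: [2.0 * (params[0] - s)]
params, loss = train([3.0] * 50, [0.0], loss_fn, grad_fn)
```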
Further, in order to improve the generalization capability of the model, the step of performing image complementation on the occlusion image feature by the pseudo-graph generator to obtain a complement pseudo-graph in this embodiment may include:
acquiring random noise characteristics, and carrying out characteristic fusion on the shielding image characteristics and the random noise characteristics to acquire fusion image characteristics; and performing image complementation on the fused image features through the pseudo-image generator to obtain a complement pseudo-image.
Correspondingly, the step of determining the training loss value according to the clear complete image, the partial occlusion image, the complement pseudo-graph and the discrimination result in this embodiment may include:
and determining a training loss value according to the clear complete image, the partial shielding image, the complement pseudo-graph, the random noise characteristic and the discrimination result.
It should be noted that if image complement were performed on the occlusion image features directly by the pseudo-graph generator, without introducing a random element, the model might overfit, leaving the trained pseudo-graph generator with low generalization and no ability to complete diverse occluded images. Fusing the random noise features with the occlusion image features to obtain the fused image features may consist of vector stitching, that is, concatenating the feature vector of the random noise features with the occlusion image feature vector.
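The vector stitching just described amounts to a concatenation. A short sketch follows; the 100-dimensional standard-normal noise vector is an assumed choice, since the patent fixes neither the noise length nor its distribution.

```python
import numpy as np

def fuse_features(occlusion_features, noise_dim=100, rng=None):
    """Concatenate a random noise vector onto the occlusion-image feature
    vector to obtain the fused image feature fed to the generator."""
    rng = rng if rng is not None else np.random.default_rng(0)
    z = rng.standard_normal(noise_dim)
    fused = np.concatenate([occlusion_features, z])
    return fused, z

fused, z = fuse_features(np.zeros(380))
print(fused.shape)   # (480,)
```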
In actual use, the training loss value may be determined from the clear complete image, the partial occlusion image, the complement pseudo-graph, the random noise features, and the discrimination result through a preset loss function.
The preset loss function may be:
F = E_{x,y}[log(D(x, y))] + E_{y,z}[log(1 - D(y, G(y, z)))] + λE_{x,y,z}[||(F_x - F_y) - (F_{G(y,z)} - F_y)||^2]
wherein F is the training loss value; G is the pseudo-graph generator; D is the image discriminator; λ is a preset coefficient whose value is a constant; x is the clear complete image, y the partial occlusion image, and z the random noise feature; F_x is the image feature of the clear complete image, F_y the image feature of the partial occlusion image, and F_{G(y,z)} the image feature of the complement pseudo-graph; E denotes mathematical expectation; D(x, y) is the authenticity discrimination result of the clear complete image and the partial occlusion image within the discrimination result, and D(y, G(y, z)) is the authenticity discrimination result of the partial occlusion image and the complement pseudo-graph within the discrimination result.
The image features of the clear complete image, the image features of the partial shielding image and the image features of the complement pseudo-graph can be obtained by respectively extracting the features of the clear complete image, the partial shielding image and the complement pseudo-graph through a preset feature extraction model.
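A NumPy rendering of the preset loss function over a batch follows. The value λ = 10 is an assumed example for the constant coefficient, and the batch-mean treatment of the expectations is an illustrative choice.

```python
import numpy as np

def preset_loss(d_real, d_fake, f_x, f_y, f_g, lam=10.0):
    """F = E[log D(x,y)] + E[log(1 - D(y,G(y,z)))]
         + lam * E[||(F_x - F_y) - (F_G - F_y)||^2]

    d_real and d_fake are batches of discriminator outputs D(x, y) and
    D(y, G(y, z)); f_x, f_y, f_g are batches of image features.
    """
    adversarial = np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
    # (F_x - F_y) - (F_G - F_y) simplifies to F_x - F_G: the penalty
    # measures how far the pseudo-graph features stray from the features
    # of the clear complete image.
    feature_l2 = np.mean(np.sum((f_x - f_g) ** 2, axis=-1))
    return adversarial + lam * feature_l2

# When the pseudo-graph features match the clear ones and the
# discriminator is maximally unsure (0.5), only 2*log(0.5) remains.
f = np.ones((4, 380))
F = preset_loss(np.full(4, 0.5), np.full(4, 0.5), f, 0.3 * f, f)
print(F)
```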
It should be noted that, in actual use, stochastic gradient descent (SGD) may be adopted when computing the training loss value from the clear complete image, the partial occlusion image, the complement pseudo-graph, the random noise features, and the discrimination result through the preset loss function; stochastic gradient descent can improve model training efficiency to a certain extent and shorten the model training time.
It should be noted that the pseudo-graph generator and the image discriminator are constructed according to the principle of the conditional generative adversarial network: the optimization objective of the pseudo-graph generator is to make its generated pseudo-graphs so close to reality that the image discriminator cannot tell them apart, while the optimization objective of the image discriminator is to distinguish true from false correctly. Through this continual adversarial game, the pseudo-graphs produced by the generator are driven toward the optimum. The overall objective function can therefore be:
G* = arg min_G max_D L_cGAN(G, D) + λL_L2(G)
wherein G is the pseudo-graph generator, D is the image discriminator, and λ is a preset coefficient whose value is a constant; L_cGAN(G, D) is the conditional generative adversarial network loss function, and L_L2(G) is an additional generator loss function, the L2 norm between image features, which penalizes and constrains the completion effect on the missing image region and whose value is to be minimized.
L_cGAN(G, D) and L_L2(G) can be expressed as follows:
L_cGAN(G, D) = E_{x,y}[log(D(x, y))] + E_{y,z}[log(1 - D(y, G(y, z)))]
L_L2(G) = E_{x,y,z}[||(F_x - F_y) - (F_{G(y,z)} - F_y)||^2]
Substituting the adversarial network loss function and the generator additional loss function into the overall objective function yields the preset loss function.
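Written out in full, this substitution can be expressed as follows (reconstructed here from the component losses above):

```latex
% Overall objective with the two component losses substituted in:
G^{*} = \arg\min_{G}\max_{D}\;
    \mathbb{E}_{x,y}\!\left[\log D(x,y)\right]
  + \mathbb{E}_{y,z}\!\left[\log\bigl(1 - D(y, G(y,z))\bigr)\right]
  + \lambda\, \mathbb{E}_{x,y,z}\!\left[\bigl\|(F_x - F_y) - (F_{G(y,z)} - F_y)\bigr\|^{2}\right]
```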
According to this embodiment, a clear complete image sample set is obtained and traversed to obtain a current clear complete image sample; random face occlusion processing is performed on the clear complete image sample to obtain a partial occlusion image sample; an image complement training sample is constructed from the clear complete image sample and the partial occlusion image sample; when the traversal is finished, an image complement training sample set is constructed from all the obtained image complement training samples; and the initial image complement model is trained through the image complement training sample set to obtain the preset feature extraction model and the preset pseudo-graph generator. Because the initial image complement model is constructed based on a conditional generative adversarial network, the pseudo-graph generator and the image discriminator continually play an adversarial game during training, so the preset pseudo-graph generator obtained when training completes can generate pseudo-graphs indistinguishable from real images, ensuring image complement accuracy and improving the reliability of the identity recognition method.
In addition, the embodiment of the invention also provides a storage medium, wherein the storage medium stores an identification program, and the identification program realizes the steps of the identification method when being executed by a processor.
Referring to fig. 4, fig. 4 is a block diagram illustrating a first embodiment of an identification device according to the present invention.
As shown in fig. 4, the identity recognition device provided in the embodiment of the present invention includes:
the image acquisition module 401 is configured to acquire a current image of a user, and detect whether a face image in the current image is partially blocked;
the feature extraction module 402 is configured to perform feature extraction on the current image through a preset feature extraction model when the face image is partially blocked, so as to obtain a current image feature;
the image complement module 403 is configured to perform image complement on the current image feature through a preset pseudo-image generator, so as to obtain a completed user image after complement;
and the identity recognition module 404 is used for carrying out identity recognition on the complete user image, generating an identity recognition result and displaying the identity recognition result.
The embodiment detects whether the face image in the current image is partially blocked or not by collecting the current image of the user; when the face image is partially blocked, extracting features of the current image through a preset feature extraction model to obtain the features of the current image; performing image complementation on the current image features through a preset pseudo-image generator to obtain a completed user image; and carrying out identity recognition on the complete user image, generating an identity recognition result and displaying the identity recognition result. Because the image of the current image is complemented by the preset feature extraction model and the preset pseudo-graph generator when the face image of the current image of the user is partially blocked, the complete user image which can be used for carrying out identity recognition is obtained, and the identity of the user can still be determined through face recognition when the complete face image of the user cannot be obtained.
Further, the image acquisition module 401 is further configured to obtain a clear complete image sample set, and traverse the clear complete image sample set to obtain a current clear complete image sample; carrying out random face shielding treatment on the clear complete image sample to obtain a partial shielding image sample; constructing an image complement training sample according to the clear complete image sample and the partial shielding image sample; when the traversing is finished, constructing an image complement training sample set according to all the obtained image complement training samples; training an initial image complement model through the image complement training sample set to obtain a preset feature extraction model and a preset pseudo-graph generator, wherein the initial image complement model comprises the pseudo-graph generator and the feature extraction model.
Further, the image acquisition module 401 is further configured to train the feature extraction model through a feature extraction training set to obtain a preset feature extraction model; selecting a current image complement training sample from the image complement training sample set; analyzing the current image complement training sample to obtain a clear complete image and a partial shielding image; extracting the characteristics of the partial shielding image through the characteristic extraction model to obtain the characteristics of the shielding image; image complement is carried out on the shielding image characteristics through the pseudo-image generator so as to obtain a complement pseudo-image; the image discriminator judges the authenticity of the clear complete image, the partial shielding image and the complement pseudo-image to obtain a discrimination result; determining a training loss value according to the clear complete image, the partial shielding image, the complement pseudo-graph and the discrimination result; and when the training loss value meets a preset training ending condition, judging that training is completed, and taking the training-completed pseudo-graph generator as a preset pseudo-graph generator.
Further, the image acquisition module 401 is further configured to perform parameter optimization on the pseudo-graph generator and the image identifier when the training loss value does not meet a preset training end condition, and return to the step of selecting the current image complement training sample from the image complement training sample set.
Further, the image acquisition module 401 is further configured to acquire a random noise feature, and perform feature fusion on the occlusion image feature and the random noise feature to obtain a fused image feature; performing image complementation on the fused image features through the pseudo-image generator to obtain a complement pseudo-image;
the image acquisition module 401 is further configured to determine a training loss value according to the clear complete image, the partial occlusion image, the complement pseudo-graph, the random noise feature, and the discrimination result.
Further, the image acquisition module 401 is further configured to determine a training loss value according to the clear complete image, the partial occlusion image, the complement pseudo-graph, the random noise feature and the discrimination result through a preset loss function;
the preset loss function is as follows:
F = E_{x,y}[log(D(x, y))] + E_{y,z}[log(1 - D(y, G(y, z)))] + λE_{x,y,z}[||(F_x - F_y) - (F_{G(y,z)} - F_y)||^2]
Wherein F is the training loss value; G is the pseudo-graph generator; D is the image discriminator; λ is a preset coefficient whose value is a constant; x is the clear complete image, y the partial occlusion image, and z the random noise feature; F_x is the image feature of the clear complete image, F_y the image feature of the partial occlusion image, and F_{G(y,z)} the image feature of the complement pseudo-graph; E denotes mathematical expectation; D(x, y) is the authenticity discrimination result of the clear complete image and the partial occlusion image within the discrimination result, and D(y, G(y, z)) is the authenticity discrimination result of the partial occlusion image and the complement pseudo-graph within the discrimination result.
Further, the image acquisition module 401 is further configured to obtain an image complement test sample set, and perform image complement on each image complement sample in the image complement test sample set through the preset feature extraction model and the preset pseudo-graph generator to obtain a model complement image set; acquiring a standard clear image set corresponding to the image complement test sample set; determining an image complement accuracy according to the standard clear image and the model complement image set; and returning to the step of obtaining the clear complete image sample set when the image complement accuracy is lower than a preset accuracy threshold.
It should be understood that the foregoing is illustrative only and is not limiting, and that in specific applications, those skilled in the art may set the invention as desired, and the invention is not limited thereto.
It should be noted that the above-described working procedure is merely illustrative, and does not limit the scope of the present invention, and in practical application, a person skilled in the art may select part or all of them according to actual needs to achieve the purpose of the embodiment, which is not limited herein.
In addition, technical details not described in detail in this embodiment may refer to the identity recognition method provided in any embodiment of the present invention, which is not described herein.
Furthermore, it should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, though in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied, essentially or in the part contributing to the prior art, in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, or optical disk) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (6)

1. An identity recognition method is characterized by comprising the following steps: collecting a current image of a user, and detecting whether a face image in the current image is partially blocked; when the face image is partially blocked, extracting the characteristics of the current image through a preset characteristic extraction model to obtain the characteristics of the current image; performing image complementation on the current image features through a preset pseudo-image generator to obtain a completed complete user image; carrying out identity recognition on the complete user image, generating an identity recognition result and displaying the identity recognition result;
before the step of collecting the current image of the user and detecting whether the face image in the current image is partially blocked, the method further comprises the following steps: acquiring a clear complete image sample set, and traversing the clear complete image sample set to obtain a current clear complete image sample; carrying out random face shielding treatment on the clear complete image sample to obtain a partial shielding image sample; constructing an image complement training sample according to the clear complete image sample and the partial shielding image sample; when the traversing is finished, constructing an image complement training sample set according to all the obtained image complement training samples; training an initial image complement model through the image complement training sample set to obtain a preset feature extraction model and a preset pseudo-graph generator, wherein the initial image complement model comprises the pseudo-graph generator and the feature extraction model;
The initial image complement model further comprises an image discriminator; the step of training the initial image complement model through the image complement training sample set to obtain a preset feature extraction model and a preset pseudo-graph generator comprises the following steps: training the feature extraction model through a feature extraction training set to obtain a preset feature extraction model; selecting a current image complement training sample from the image complement training sample set; analyzing the current image complement training sample to obtain a clear complete image and a partial shielding image; extracting the characteristics of the partial shielding image through the characteristic extraction model to obtain the characteristics of the shielding image; image complement is carried out on the shielding image characteristics through the pseudo-image generator so as to obtain a complement pseudo-image; the image discriminator judges the authenticity of the clear complete image, the partial shielding image and the complement pseudo-image to obtain a discrimination result; determining a training loss value according to the clear complete image, the partial shielding image, the complement pseudo-graph and the discrimination result; when the training loss value meets a preset training ending condition, judging that training is completed, and taking a pseudo-graph generator after training is completed as a preset pseudo-graph generator;
The step of performing image complementation on the occlusion image features by the pseudo-graph generator to obtain a complement pseudo-graph includes: acquiring random noise characteristics, and carrying out characteristic fusion on the shielding image characteristics and the random noise characteristics to acquire fusion image characteristics; performing image complementation on the fused image features through the pseudo-image generator to obtain a complement pseudo-image; correspondingly, the step of determining the training loss value according to the clear complete image, the partial shielding image, the complement pseudo-graph and the discrimination result comprises the following steps: determining a training loss value according to the clear complete image, the partial shielding image, the complement pseudo-graph, the random noise characteristic and the discrimination result;
the step of determining a training loss value according to the clear complete image, the partial shielding image, the complement pseudo-graph, the random noise feature and the discrimination result comprises the following steps: determining a training loss value according to the clear complete image, the partial shielding image, the complement pseudo-graph, the random noise characteristic and the discrimination result through a preset loss function; the preset loss function is as follows:
F = E_{x,y}[log(D(x, y))] + E_{y,z}[log(1 - D(y, G(y, z)))] + λE_{x,y,z}[||(F_x - F_y) - (F_{G(y,z)} - F_y)||^2]
wherein F is the training loss value; G is the pseudo-graph generator; D is the image discriminator; λ is a preset coefficient whose value is a constant; x is the clear complete image, y the partial occlusion image, and z the random noise feature; F_x is the image feature of the clear complete image, F_y the image feature of the partial occlusion image, and F_{G(y,z)} the image feature of the complement pseudo-graph; E denotes mathematical expectation; D(x, y) is the authenticity discrimination result of the clear complete image and the partial occlusion image within the discrimination result, and D(y, G(y, z)) is the authenticity discrimination result of the partial occlusion image and the complement pseudo-graph within the discrimination result.
2. The method for identifying an identity according to claim 1, wherein after the step of determining a training loss value according to the clear complete image, the partial occlusion image, the complement pseudo-graph, and the discrimination result, the method further comprises: and when the training loss value does not meet the preset training ending condition, performing parameter optimization on the pseudo-graph generator and the image discriminator, and returning to the step of selecting the current image complement training sample from the image complement training sample set.
3. The method for identifying an identity according to claim 1, wherein when the training loss value satisfies a preset training end condition, the step of determining that training is completed and using a trained pseudo-graph generator as a preset pseudo-graph generator further comprises: acquiring an image complement test sample set, and performing image complement on each image complement sample in the image complement test sample set through the preset feature extraction model and a preset pseudo-graph generator to acquire a model complement image set; acquiring a standard clear image set corresponding to the image complement test sample set; determining an image complement accuracy according to the standard clear image and the model complement image set; and returning to the step of obtaining the clear complete image sample set when the image complement accuracy is lower than a preset accuracy threshold.
4. An identity recognition device, characterized in that the identity recognition device comprises the following modules:
the image acquisition module is used for acquiring a current image of a user and detecting whether a face image in the current image is partially blocked;
the feature extraction module is used for extracting features of the current image through a preset feature extraction model when the face image is partially blocked, so as to obtain current image features;
the image complement module is used for carrying out image complement on the current image characteristics through a preset pseudo-image generator so as to obtain a complete user image after complement;
the identity recognition module is used for carrying out identity recognition on the complete user image, generating an identity recognition result and displaying the identity recognition result;
before the step of collecting the current image of the user and detecting whether the face image in the current image is partially blocked, the method further comprises the following steps: acquiring a clear complete image sample set, and traversing the clear complete image sample set to obtain a current clear complete image sample; carrying out random face shielding treatment on the clear complete image sample to obtain a partial shielding image sample; constructing an image complement training sample according to the clear complete image sample and the partial shielding image sample; when the traversing is finished, constructing an image complement training sample set according to all the obtained image complement training samples; training an initial image complement model through the image complement training sample set to obtain a preset feature extraction model and a preset pseudo-graph generator, wherein the initial image complement model comprises the pseudo-graph generator and the feature extraction model;
The initial image complement model further comprises an image discriminator; the step of training the initial image complement model through the image complement training sample set to obtain a preset feature extraction model and a preset pseudo-graph generator comprises the following steps: training the feature extraction model through a feature extraction training set to obtain a preset feature extraction model; selecting a current image complement training sample from the image complement training sample set; analyzing the current image complement training sample to obtain a clear complete image and a partial shielding image; extracting the characteristics of the partial shielding image through the characteristic extraction model to obtain the characteristics of the shielding image; image complement is carried out on the shielding image characteristics through the pseudo-image generator so as to obtain a complement pseudo-image; the image discriminator judges the authenticity of the clear complete image, the partial shielding image and the complement pseudo-image to obtain a discrimination result; determining a training loss value according to the clear complete image, the partial shielding image, the complement pseudo-graph and the discrimination result; when the training loss value meets a preset training ending condition, judging that training is completed, and taking a pseudo-graph generator after training is completed as a preset pseudo-graph generator;
The step of performing image complement on the occlusion image features through the pseudo-graph generator to obtain a complement pseudo-graph comprises: acquiring random noise features, and performing feature fusion on the occlusion image features and the random noise features to obtain fused image features; and performing image complement on the fused image features through the pseudo-graph generator to obtain the complement pseudo-graph. Correspondingly, the step of determining a training loss value from the clear complete image, the partial occlusion image, the complement pseudo-graph and the discrimination result comprises: determining the training loss value from the clear complete image, the partial occlusion image, the complement pseudo-graph, the random noise features and the discrimination result;
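The feature-fusion step admits several implementations; one minimal sketch, assuming simple vector concatenation of the occlusion image features with the sampled noise vector (the claim does not fix a particular fusion operator):

```python
import numpy as np

def fuse(occlusion_features, noise_features):
    """Fuse occlusion image features with random noise features by
    concatenation, producing the fused features fed to the generator."""
    return np.concatenate([occlusion_features, noise_features])

rng = np.random.default_rng(42)
feats = rng.normal(size=8)   # occlusion image features (size is illustrative)
noise = rng.normal(size=4)   # random noise features
fused = fuse(feats, noise)
```

The noise component gives the generator a stochastic input, so repeated completions of the same occluded face can differ.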
The step of determining a training loss value from the clear complete image, the partial occlusion image, the complement pseudo-graph, the random noise features and the discrimination result comprises: determining the training loss value from the clear complete image, the partial occlusion image, the complement pseudo-graph, the random noise features and the discrimination result through a preset loss function; the preset loss function is as follows:
F = E_{x,y}[log D(x, y)] + E_{y,z}[log(1 − D(y, G(y, z)))] + λ·E_{x,y,z}[‖x − G(y, z)‖₁]
wherein F is the training loss value; G is the pseudo-graph generator; D is the image discriminator; λ is a preset constant coefficient; x is the clear complete image, y is the partial occlusion image and z is the random noise feature; G(y, z) is the complement pseudo-graph generated from the occlusion image features and the random noise feature; E is the mathematical expectation; D(x, y) is the authenticity discrimination result, within the discrimination result, for the clear complete image and the partial occlusion image; and D(y, G(y, z)) is the authenticity discrimination result, within the discrimination result, for the partial occlusion image and the complement pseudo-graph.
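The preset loss function can be evaluated numerically as follows. This sketch takes the two discriminator outputs as given scalars and treats the images as flattened vectors; the small epsilon guard and the example values of λ, d_real and d_fake are illustrative assumptions, not values fixed by the claim.

```python
import numpy as np

def preset_loss(x, pseudo, d_real, d_fake, lam=100.0):
    """F = log D(x, y) + log(1 - D(y, G(y, z))) + lam * E[|x - G(y, z)|],
    with D(x, y) = d_real and D(y, G(y, z)) = d_fake passed in directly."""
    eps = 1e-8  # numerical guard so the logs stay finite
    adversarial = np.log(d_real + eps) + np.log(1.0 - d_fake + eps)
    reconstruction = lam * np.abs(x - pseudo).mean()  # L1 term weighted by lam
    return adversarial + reconstruction

x = np.array([0.2, 0.8, 0.5])       # clear complete image (flattened)
pseudo = np.array([0.2, 0.7, 0.5])  # complement pseudo-graph
f = preset_loss(x, pseudo, d_real=0.9, d_fake=0.1, lam=100.0)
```

A well-trained generator drives d_fake upward and the L1 term downward, while the discriminator pushes d_real up and d_fake down, giving the usual adversarial trade-off.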
5. An identification device, characterized in that it comprises: a processor, a memory and an identification program stored on the memory and executable on the processor, which identification program when executed by the processor implements the steps of the identification method according to any of claims 1-3.
6. A computer-readable storage medium, characterized in that it has stored thereon an identification program which, when executed, implements the steps of the identification method according to any of claims 1-3.
CN202110463971.XA 2021-04-27 2021-04-27 Identity recognition method, device, equipment and storage medium Active CN113205035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110463971.XA CN113205035B (en) 2021-04-27 2021-04-27 Identity recognition method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113205035A CN113205035A (en) 2021-08-03
CN113205035B true CN113205035B (en) 2023-06-30

Family

ID=77029166


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886167A (en) * 2019-02-01 2019-06-14 中国科学院信息工程研究所 One kind blocking face identification method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10474881B2 (en) * 2017-03-15 2019-11-12 Nec Corporation Video retrieval system based on larger pose face frontalization
CN108229348B (en) * 2017-12-21 2020-04-28 中国科学院自动化研究所 Identification device for shielding face image
CN108520503B (en) * 2018-04-13 2020-12-22 湘潭大学 Face defect image restoration method based on self-encoder and generation countermeasure network
CN109145745B (en) * 2018-07-20 2022-02-11 上海工程技术大学 Face recognition method under shielding condition
CN111985281B (en) * 2019-05-24 2022-12-09 内蒙古工业大学 Image generation model generation method and device and image generation method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Face image completion under identity-preserving constraints; Wang Xudong; Wei Hongquan; Gao Chao; Huang Ruiyang; Chinese Journal of Network and Information Security (08); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant