CN113936298A - Feature recognition method and device and computer readable storage medium - Google Patents


Info

Publication number
CN113936298A
CN113936298A (application CN202111172914.2A)
Authority
CN
China
Prior art keywords
preset
feature
sample
representation
feature representation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111172914.2A
Other languages
Chinese (zh)
Inventor
陈卓
吴一超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202111172914.2A priority Critical patent/CN113936298A/en
Publication of CN113936298A publication Critical patent/CN113936298A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

The embodiment discloses a feature recognition method and apparatus, and a computer-readable storage medium. The method includes: upon receiving an initial feature representation, inputting the initial feature representation into a preset image generation network to obtain a reconstructed image corresponding to the initial feature representation; inputting the reconstructed image into a preset feature extraction network to obtain a current feature representation corresponding to the reconstructed image; and, upon obtaining a feature representation to be recognized, determining whether the feature representation to be recognized matches the current feature representation, thereby completing the feature recognition process.

Description

Feature recognition method and device and computer readable storage medium
Description of related application
The present disclosure is a divisional application of Chinese patent application No. 201910381801.X, filed on May 8, 2019, and entitled "Feature recognition method and apparatus, and computer readable storage medium"; the divisional application remains within the scope described in that Chinese patent application, the entire content of which is incorporated herein by reference.
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a feature recognition method and apparatus, and a computer-readable storage medium.
Background
Biometric recognition includes face recognition, fingerprint recognition, and the like. Its application scenarios are very broad, including intelligent access control, security monitoring, and mobile phone unlocking; it has important application value in identity authentication and has become an important research object in the field of computer vision. In recent years, biometric recognition has developed rapidly with the introduction of algorithms such as deep learning and the improved performance of core computing units.
Specifically, the feature recognition systems used in different application scenarios differ, so the feature representations learned by different feature recognition systems for the same biometric feature also differ. When the feature recognition system in an application scenario is replaced, or the feature representations learned by one feature recognition system need to be applied to another feature recognition system, the biometric feature must be input into the other feature recognition system again to obtain that system's feature representation. This makes the implementation of feature recognition cumbersome and reduces its intelligence.
Disclosure of Invention
The present embodiment provides a feature recognition method and apparatus, and a computer-readable storage medium, which can simplify the implementation of feature recognition and improve its intelligence when different feature recognition systems are used.
The technical scheme of the disclosure is realized as follows:
the embodiment provides a feature identification method, which comprises the following steps:
upon receiving an initial feature representation, inputting the initial feature representation into a preset image generation network to obtain a reconstructed image corresponding to the initial feature representation;
inputting the reconstructed image into a preset feature extraction network to obtain a current feature representation corresponding to the reconstructed image;
and, upon obtaining a feature representation to be recognized, determining whether the feature representation to be recognized matches the current feature representation, so as to complete the feature recognition process.
In the above method, before the inputting the initial feature representation into a preset image generation network, the method further includes:
and training a transposed convolutional neural network according to a preset image sample and an initial feature representation sample to obtain the preset image generation network.
In the above method, before the inputting the reconstructed image into the preset feature extraction network, the method further includes:
and training a convolutional neural network according to the preset image sample and the current feature representation sample to obtain the preset feature extraction network.
In the above method, the training a transposed convolutional neural network according to a preset image sample and an initial feature representation sample to obtain the preset image generation network includes:
inputting the initial feature representation sample into the transposed convolutional neural network to obtain a reconstructed image sample;
determining a preset index value according to the reconstructed image sample, the preset image sample and/or an adversarial network;
and adjusting the transposed convolutional neural network based on the preset index value.
In the above method, the determining a preset index value according to the reconstructed image sample, the preset image sample and/or the adversarial network includes:
inputting the reconstructed image sample and the preset image sample into the adversarial network, determining an adversarial loss value between the reconstructed image sample and the preset image sample, and determining the adversarial loss value as the preset index value.
In the above method, the determining a preset index value according to the reconstructed image sample, the preset image sample and/or the adversarial network includes:
determining a first spatial distance value between the reconstructed image sample and the preset image sample;
and determining the first spatial distance value as the preset index value.
In the above method, the determining a preset index value according to the reconstructed image sample, the preset image sample and/or the adversarial network includes:
inputting the reconstructed image sample and the preset image sample into the adversarial network, and determining an adversarial loss value between the reconstructed image sample and the preset image sample;
determining a first spatial distance value between the reconstructed image sample and the preset image sample;
and determining the preset index value according to the adversarial loss value and the first spatial distance value.
In the above method, the training a transposed convolutional neural network according to a preset image sample and an initial feature representation sample to obtain the preset image generation network includes:
adjusting the transposed convolutional neural network based on the preset index value when the preset index value does not meet a first preset threshold;
and determining the transposed convolutional neural network as the preset image generation network when the preset index value meets the first preset threshold.
In the above method, training a convolutional neural network according to the preset image sample and the current feature representation sample to obtain the preset feature extraction network, includes:
inputting the preset image sample into the convolutional neural network to obtain a first feature representation sample;
determining a second spatial distance value between the first feature representation sample and the current feature representation sample;
adjusting the convolutional neural network based on the second spatial distance value.
In the above method, training a convolutional neural network according to the preset image sample and the current feature representation sample to obtain the preset feature extraction network, includes:
adjusting the convolutional neural network based on the second spatial distance value when the second spatial distance value does not meet a second preset threshold;
and determining the convolutional neural network as the preset feature extraction network when the second spatial distance value meets the second preset threshold.
In the above method, the determining whether the feature representation to be recognized matches the current feature representation to complete the process of feature recognition includes:
determining a similarity value between the feature representation to be identified and the current feature representation;
and determining that the feature recognition of the feature image to be recognized is successful under the condition that the similarity value meets a preset similarity index.
The present embodiment provides a feature recognition apparatus, including:
the first image reconstruction module is used for inputting the initial feature representation into a preset image generation network under the condition of receiving the initial feature representation to obtain a reconstructed image corresponding to the initial feature representation;
the first feature extraction module is used for inputting the reconstructed image into a preset feature extraction network to obtain a current feature representation corresponding to the reconstructed image;
and the feature matching module is used for determining whether the feature representation to be identified is matched with the current feature representation under the condition of acquiring the feature representation to be identified so as to finish the process of feature identification.
In the above apparatus, the apparatus further comprises:
and a preset image generation network training module, configured to train a transposed convolutional neural network according to a preset image sample and an initial feature representation sample to obtain the preset image generation network.
In the above apparatus, the apparatus further comprises:
and the preset feature extraction network training module is used for training the convolutional neural network according to the preset image sample and the current feature representation sample to obtain the preset feature extraction network.
In the above apparatus, the preset image generation network training module includes:
a second image reconstruction module, configured to input the initial feature representation sample into the transposed convolutional neural network to obtain a reconstructed image sample;
a preset index value determining module, configured to determine a preset index value according to the reconstructed image sample, the preset image sample and/or an adversarial network;
and a first neural network updating module, configured to adjust the transposed convolutional neural network based on the preset index value.
In the above apparatus, the preset index value determining module includes:
an adversarial loss value determining module, configured to input the reconstructed image sample and the preset image sample into the adversarial network, and determine an adversarial loss value between the reconstructed image sample and the preset image sample;
and a first preset index value determining submodule, configured to determine the adversarial loss value as the preset index value.
In the above apparatus, the preset index value determining module includes:
a first spatial distance value determining module, configured to determine a first spatial distance value between the reconstructed image sample and the preset image sample;
and the second preset index value determining submodule is used for determining the first spatial distance value as the preset index value.
In the above apparatus, the preset index value determining module includes:
an adversarial loss value determining module, configured to input the reconstructed image sample and the preset image sample into the adversarial network, and determine an adversarial loss value between the reconstructed image sample and the preset image sample;
a first spatial distance value determining module, configured to determine a first spatial distance value between the reconstructed image sample and the preset image sample;
and a third preset index value determining submodule, configured to determine the preset index value according to the adversarial loss value and the first spatial distance value.
In the above apparatus, the first neural network updating module is configured to, when the preset index value does not satisfy a first preset threshold, adjust the transposed convolutional neural network based on the preset index value; and under the condition that the preset index value meets a first preset threshold value, determining the transposed convolutional neural network as the preset image generation network.
In the above apparatus, the preset feature extraction network training module includes:
the second feature extraction module is used for inputting the preset image sample into the convolutional neural network to obtain a first feature representation sample;
a second spatial distance value determination module for determining a second spatial distance value between the first feature representation sample and the current feature representation sample;
a second neural network update module to adjust the convolutional neural network based on the second spatial distance value.
In the above apparatus, the second neural network updating module is configured to, if the second spatial distance value does not satisfy a second preset threshold, adjust the convolutional neural network based on the second spatial distance value; and under the condition that the second spatial distance value meets a second preset threshold value, determining the convolutional neural network as the preset feature extraction network.
In the above apparatus, the feature matching module is configured to determine a similarity value between the feature representation to be identified and the current feature representation; and determining that the feature recognition of the feature image to be recognized is successful under the condition that the similarity value meets a preset similarity index.
The present embodiment provides an image apparatus including:
a memory;
and a processor, connected to the memory and configured to implement the feature recognition method provided by any one of the above by executing computer-executable instructions stored in the memory.
The present embodiment provides a computer-readable storage medium on which a computer program is stored; the computer program is applied to a feature recognition apparatus and, when executed by a processor, implements the feature recognition method provided by any one of the above.
The embodiment discloses a feature recognition method and apparatus, and a computer-readable storage medium. The method includes: upon receiving an initial feature representation, inputting it into a preset image generation network to obtain a reconstructed image corresponding to the initial feature representation; inputting the reconstructed image into a preset feature extraction network to obtain a current feature representation corresponding to the reconstructed image; and, upon obtaining a feature representation to be recognized, determining whether it matches the current feature representation, thereby completing the feature recognition process. With this method, the preset image generation network and the preset feature extraction network are preset in the feature recognition apparatus. When the apparatus determines that the feature recognition system has changed, it uses the two networks to convert the initial feature representation obtained by the initial feature recognition system into the current feature representation corresponding to the current feature recognition system. When the apparatus then performs feature recognition through the current feature recognition system, it recognizes the collected feature image to be recognized directly against the current feature representation, which simplifies the implementation of feature recognition and improves its intelligence.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a first flowchart of a feature identification method according to this embodiment;
fig. 2 is a second flowchart of a feature identification method provided in this embodiment;
Fig. 3 is a schematic diagram of an exemplary network architecture of a recognition conversion system according to this embodiment;
fig. 4 is a schematic structural diagram of a feature recognition apparatus provided in this embodiment;
fig. 5 is a schematic structural diagram of an image apparatus provided in this embodiment.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present disclosure and are not intended to limit it.
The embodiment discloses a feature identification method, as shown in fig. 1, the method may include:
s101, under the condition that the initial feature representation is received, inputting the initial feature representation into a preset image generation network to obtain a reconstructed image corresponding to the initial feature representation.
The feature recognition method provided by the embodiment is suitable for a scene of performing biological feature recognition by using different feature recognition systems.
In this embodiment, the initial feature representation is the feature representation obtained by an initial feature recognition system performing feature recognition on an input original image, where the original image may be an existing image in a database. When the initial feature representation of the original image needs to be converted into a current feature representation whose format is consistent with the output format of the current feature recognition system, a preset recognition conversion system converts the initial feature representation recognized by the initial feature recognition system, so that feature recognition can be performed with the converted representation.
In this embodiment, the feature recognition apparatus includes the initial feature recognition system, the current feature recognition system, and a preset recognition conversion system that performs feature conversion on the initial feature representation. The apparatus trains the preset recognition conversion system using the initial feature recognition system and/or the current feature recognition system, and then uses the trained preset recognition conversion system to convert the initial feature representation.
In this embodiment, when the feature recognition apparatus determines that the current feature recognition system has been upgraded, or that its format or type has changed, the apparatus receives the initial feature representation recognized by the initial feature recognition system; alternatively, when the apparatus receives, on its display interface, a feature recognition conversion instruction for converting the initial feature representation, it obtains the initial feature representation according to that instruction. The specific trigger is selected according to the actual situation, and this embodiment is not specifically limited.
In this embodiment, the preset recognition conversion system includes a preset image generation network and a preset feature extraction network. The preset image generation network is configured to reconstruct an image corresponding to the initial feature representation: the feature recognition apparatus inputs the initial feature representation into the preset image generation network of the preset recognition conversion system and obtains the reconstructed image corresponding to the initial feature representation through multiple transposed convolution operations.
In this embodiment, the existing images in the database and the feature images to be recognized include face images, iris images, fingerprint images, and the like, selected according to the actual situation; this embodiment is not particularly limited.
In this embodiment, a feature representation is a parameter, such as a feature vector, that can express the feature; it is selected according to the actual situation, and this embodiment is not specifically limited.
In this embodiment, the feature recognition apparatus trains a transposed convolutional neural network in advance according to a preset image sample and the initial feature representation sample corresponding to the preset image sample, to obtain the preset image generation network. The preset image sample may be an existing image in a database or an image sample collected from a network, selected according to the actual situation; this embodiment is not specifically limited. Here, the initial feature representation sample is the feature representation sample recognized by the initial feature recognition system. Since the preset recognition conversion system is used to convert the initial feature representation recognized by the initial feature recognition system into the current feature representation recognized by the current feature recognition system, the preset image sample is input into the initial feature recognition system and the current feature recognition system respectively, yielding the initial feature representation sample and the current feature representation sample corresponding to the preset image sample. Specifically, training the transposed convolutional neural network according to the preset image sample and its corresponding initial feature representation sample to obtain the preset image generation network involves the following process: the feature recognition apparatus inputs the initial feature representation sample into the transposed convolutional neural network to obtain a reconstructed image sample; then determines a preset index value according to the reconstructed image sample, the preset image sample and/or an adversarial network; and adjusts the transposed convolutional neural network based on the preset index value.
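The disclosure does not specify the network's layer configuration; as an illustration of the basic operation such a network stacks, the following is a hand-rolled one-dimensional transposed convolution. The input, kernel, and stride are illustrative assumptions, not values taken from the patent:

```python
def transposed_conv1d(x, kernel, stride=2):
    # Each input element scatters a scaled copy of the kernel into a
    # longer output -- the upsampling step that a preset image
    # generation network repeats to grow a feature vector toward
    # image resolution.
    out = [0.0] * ((len(x) - 1) * stride + len(kernel))
    for i, v in enumerate(x):
        for j, k in enumerate(kernel):
            out[i * stride + j] += v * k
    return out

# A length-2 "feature representation" becomes a length-5 "image row".
upsampled = transposed_conv1d([1.0, 2.0], [1.0, 1.0, 1.0], stride=2)
```

Stacking several such operations (in two dimensions, with learned kernels) is what turns a compact feature representation back into a full reconstructed image.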
And S102, inputting the reconstructed image into a preset feature extraction network to obtain a current feature representation corresponding to the reconstructed image.
When the feature recognition apparatus has input the initial feature representation into the preset image generation network and obtained the reconstructed image, it inputs the reconstructed image into the preset feature extraction network to obtain the current feature representation corresponding to the reconstructed image.
In this embodiment, the reconstructed image is input into a preset feature extraction network, and the current feature representation of the reconstructed image is learned by using a convolutional neural network.
In this embodiment, the feature recognition device trains the convolutional neural network in advance according to the preset image sample and the current feature representation sample to obtain the preset feature extraction network. Specifically, the feature recognition device inputs a preset image sample into a convolutional neural network to obtain a first feature representation sample; thereafter, the feature recognition means determines a second spatial distance value between the first feature representation sample and the current feature representation sample; and adjusting the convolutional neural network based on the second spatial distance value. In one possible implementation, the second spatial distance value may be a euclidean distance value.
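The adjustment step can be sketched with a deliberately tiny stand-in for the extraction network. The single-weight model, learning rate, and data below are illustrative assumptions; only the loop structure (forward pass, second spatial distance, parameter adjustment) reflects what the text describes:

```python
def train_extractor(images, targets, steps=200, lr=0.01):
    # Toy stand-in for the preset feature extraction network: one
    # scalar weight w maps an "image" to a "feature representation".
    # Real training would backpropagate through a convolutional
    # network; the loop mirrors the described procedure.
    w = 0.0
    for _ in range(steps):
        grad = 0.0
        for img, tgt in zip(images, targets):
            pred = [w * p for p in img]  # first feature representation sample
            # Gradient of the squared Euclidean (second spatial)
            # distance sum((w*p - t)^2) with respect to w.
            grad += sum(2 * (pr - t) * p for pr, t, p in zip(pred, tgt, img))
        w -= lr * grad / len(images)
    return w

# Targets are exactly 2x the inputs, so w should approach 2.0.
w = train_extractor([[1.0], [2.0]], [[2.0], [4.0]])
```

Training stops, per the claims, once the distance value meets the second preset threshold, at which point the network is taken as the preset feature extraction network.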
S103, under the condition that the feature representation to be recognized is obtained, whether the feature representation to be recognized is matched with the current feature representation is determined, and the feature recognition process is completed.
After obtaining the current feature representation corresponding to the reconstructed image, the feature recognition apparatus, upon obtaining a feature representation to be recognized, determines whether the feature representation to be recognized matches the current feature representation, completing the feature recognition process.
In this embodiment, a feature acquisition module is provided in the feature recognition apparatus. The apparatus collects a feature image to be recognized using the feature acquisition module, inputs the collected image into the current feature recognition system, and outputs the feature representation to be recognized corresponding to that image.
In this embodiment, the feature recognition apparatus determines a similarity value between the feature representation to be recognized and the current feature representation, and compares the similarity value with a preset similarity index. When the similarity value meets the preset similarity index, the apparatus determines that feature recognition of the feature image to be recognized has succeeded; when it does not, the apparatus determines that recognition has failed and collects the feature image to be recognized again.
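The matching decision above can be sketched as follows. Cosine similarity and the 0.8 threshold are assumptions for illustration; the disclosure does not fix a particular similarity measure or index value:

```python
import math

def cosine_similarity(a, b):
    # Similarity value between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def matches(feat_to_recognize, current_feat, threshold=0.8):
    # Recognition succeeds when the similarity value meets the
    # preset similarity index (threshold is an assumed value).
    return cosine_similarity(feat_to_recognize, current_feat) >= threshold

ok = matches([1.0, 0.0, 1.0], [1.0, 0.1, 1.0])
```

When the check fails, the apparatus would re-acquire the feature image rather than accept the identification.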
It can be understood that the feature recognition apparatus is preset with the preset image generation network and the preset feature extraction network. When the apparatus determines that the feature recognition system has changed, it uses the two networks to convert the initial feature representation obtained by the initial feature recognition system into the current feature representation corresponding to the current feature recognition system. When the apparatus then performs feature recognition through the current feature recognition system, it recognizes the collected feature image to be recognized directly against the current feature representation, which simplifies the implementation of feature recognition and improves its intelligence.
Usage scenarios of the feature recognition method provided by this embodiment may include: interaction between a mobile terminal and a cloud face recognition system, for example retrieving and authenticating a face image collected by a mobile phone against a cloud face recognition library, where the mobile terminal and the cloud use different feature recognition systems; upgrading a feature recognition system, where the feature representations of the old model are converted into those of the new model without access to the original face images, so that the system is upgraded at the feature level; and unifying multiple feature recognition systems across application scenarios, where the systems used for mobile phone face unlocking, security monitoring, access control check-in, and the like share the same face recognition library.
Based on the foregoing embodiment, as shown in fig. 2, before the feature recognition apparatus inputs the initial feature representation into the preset image generation network to obtain the corresponding reconstructed image, that is, before S101, the feature recognition method may further include the following steps:
S201, the feature recognition device trains the transposed convolutional neural network according to a preset image sample and an initial feature representation sample to obtain a preset image generation network.
In this embodiment, to train the preset recognition conversion system, the feature recognition device first determines the initial feature representation sample and the current feature representation sample corresponding to a preset image sample. Specifically, the feature recognition device inputs the preset image sample into the initial feature recognition system to obtain the initial feature representation sample corresponding to the preset image sample, and inputs the preset image sample into the current feature recognition system to obtain the current feature representation sample corresponding to the preset image sample.
In this embodiment, the feature recognition device inputs the initial feature representation sample into the transposed convolutional neural network to obtain a reconstructed image sample; then determines a preset index value according to the reconstructed image sample, the preset image sample and/or the countermeasure network; and adjusts the transposed convolutional neural network based on the preset index value.
It should be noted that the stage in which the feature recognition device trains the preset recognition conversion system may include three paths, which are respectively: a reconstruction path, a representation path and a regression path. The feature recognition device trains the transposed convolutional neural network by using the reconstruction path to obtain the preset image generation network.
The reconstruction path, that is, the process in which the feature recognition device trains the transposed convolutional neural network according to the preset image sample and the initial feature representation sample to obtain the preset image generation network, is as follows: a network structure is designed in advance in the feature recognition device, and the network structure is a stacked transposed convolutional neural network, that is, a plurality of transposed convolutional layers stacked from top to bottom; the feature recognition device then inputs the initial feature representation sample into the transposed convolutional neural network to obtain a reconstructed image sample, determines a preset index value according to the reconstructed image sample, the preset image sample and/or the countermeasure network, and adjusts the transposed convolutional neural network based on the preset index value.
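The stacked transposed-convolution design above can be illustrated with a minimal single-channel sketch; the feature-map size, kernel values, and stride below are hypothetical, chosen only to show how each transposed layer enlarges the spatial resolution of a compact feature representation:

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    # Minimal single-channel transposed convolution, no padding:
    # every input pixel scatters a kernel-sized, weighted patch into
    # the output, so resolution grows layer by layer.
    h, w = x.shape
    k = kernel.shape[0]
    out = np.zeros(((h - 1) * stride + k, (w - 1) * stride + k))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + k, j * stride:j * stride + k] += x[i, j] * kernel
    return out

feat = np.ones((4, 4))        # stand-in for a 4x4 initial feature map
kern = np.full((3, 3), 0.1)   # hypothetical learned kernel
img = transposed_conv2d(feat, kern)
print(img.shape)              # (9, 9): (4-1)*2 + 3 in each dimension
```

Stacking several such layers, each followed by a nonlinearity, maps a low-dimensional feature representation back toward image resolution, which is the role played by the preset image generation network.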
Optionally, the process of determining the preset index value by the feature recognition device according to the reconstructed image sample, the preset image sample and/or the countermeasure network is as follows: the feature recognition device inputs the reconstructed image sample and the preset image sample into a countermeasure network, determines a countermeasure loss value between the reconstructed image sample and the preset image sample, and determines the countermeasure loss value as a preset index value.
Optionally, the process of determining the preset index value by the feature recognition device according to the reconstructed image sample, the preset image sample and/or the countermeasure network is as follows: the feature recognition device determines a first spatial distance value between the reconstructed image sample and the preset image sample, and determines the first spatial distance value as the preset index value.
In this embodiment, the spatial distance value may be a Euclidean distance value.
Optionally, the process of determining the preset index value by the feature recognition device according to the reconstructed image sample, the preset image sample and/or the countermeasure network is as follows: the feature recognition device inputs the reconstructed image sample and the preset image sample into the countermeasure network and determines a countermeasure loss value between the reconstructed image sample and the preset image sample; determines a first spatial distance value between the reconstructed image sample and the preset image sample; and then determines the preset index value according to the countermeasure loss value and the first spatial distance value.
In this embodiment, the calculation process of the preset index value is shown in formula (1):
L_G(G, D) = λ_Rec · L_Rec(G) + λ_Adv · L_Adv(G, D)    (1)
In formula (1), L_G(G, D) is the preset index value, L_Rec(G) is the first spatial distance value, λ_Rec is the weight of the first spatial distance value, L_Adv(G, D) is the countermeasure loss value, and λ_Adv is the weight of the countermeasure loss value.
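As a sketch, the weighted combination in formula (1) can be written directly; the default weight values below are hypothetical and would in practice be tuned per task:

```python
def generator_loss(l_rec, l_adv, lambda_rec=1.0, lambda_adv=0.1):
    # L_G(G, D) = lambda_Rec * L_Rec(G) + lambda_Adv * L_Adv(G, D)
    return lambda_rec * l_rec + lambda_adv * l_adv

# example: reconstruction distance 0.5, countermeasure loss 2.0
preset_index_value = generator_loss(0.5, 2.0)
print(preset_index_value)  # 0.7
```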
Specifically, the process of adjusting the transposed convolutional neural network based on the preset index value by the feature recognition device is as follows: a first preset threshold is preset in the feature recognition device and is used for judging whether the transposed convolutional neural network needs to be adjusted. When the feature recognition device judges that the preset index value meets the first preset threshold, it judges that the transposed convolutional neural network does not need to be adjusted further, and determines the transposed convolutional neural network as the preset image generation network. When the feature recognition device judges that the preset index value does not meet the first preset threshold, it adjusts the transposed convolutional neural network based on the preset index value, and determines the transposed convolutional neural network as the preset image generation network once it judges that the preset index value corresponding to the transposed convolutional neural network meets the first preset threshold.
In this embodiment, the feature recognition device may adjust the transposed convolutional neural network by using stochastic gradient descent based on the preset index value.
S202, the feature recognition device trains the convolutional neural network according to the preset image sample and the current feature representation sample to obtain a preset feature extraction network.
After training the transposed convolutional neural network to obtain the preset image generation network, the feature recognition device trains the convolutional neural network according to the preset image sample and the current feature representation sample to obtain the preset feature extraction network.
In this embodiment, the feature recognition device inputs a preset image sample into a convolutional neural network to obtain a first feature representation sample; and determining a second spatial distance value between the first feature representation sample and the current feature representation sample; thereafter, the feature recognition device adjusts the convolutional neural network based on the second spatial distance value. In an alternative embodiment, the second spatial distance value may be a euclidean distance value.
The feature recognition device trains the convolutional neural network by using the representation path to obtain the preset feature extraction network. The representation path, that is, the process in which the feature recognition device trains the convolutional neural network according to the preset image sample and the current feature representation sample to obtain the preset feature extraction network, is as follows: a network structure is designed in advance in the feature recognition device, and the network structure is a stacked convolutional neural network, that is, a plurality of convolutional layers stacked from top to bottom; the feature recognition device then inputs the preset image sample into the convolutional neural network to obtain a first feature representation sample, determines a second spatial distance value between the first feature representation sample and the current feature representation sample, and adjusts the convolutional neural network based on the second spatial distance value.
Specifically, the process of adjusting the convolutional neural network based on the second spatial distance value by the feature recognition device is as follows: a second preset threshold is preset in the feature recognition device and is used for judging whether the convolutional neural network needs to be adjusted. When the feature recognition device judges that the second spatial distance value meets the second preset threshold, it judges that the convolutional neural network does not need to be adjusted, and determines the convolutional neural network as the preset feature extraction network. When the feature recognition device judges that the second spatial distance value does not meet the second preset threshold, it adjusts the convolutional neural network based on the second spatial distance value, and determines the convolutional neural network as the preset feature extraction network once it judges that the second spatial distance value corresponding to the convolutional neural network meets the second preset threshold.
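The threshold-based stopping rule described above can be sketched generically; the one-parameter "network" and the fixed-step update below are toy stand-ins for illustration only, not a real training procedure:

```python
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def train_until_threshold(extract, adjust, image, target, threshold, max_steps=1000):
    # Keep adjusting while the spatial distance value exceeds the
    # preset threshold; stop as soon as the threshold is met.
    for _ in range(max_steps):
        if euclidean(extract(image), target) <= threshold:
            return True
        adjust()
    return False

state = {"w": 0.0}                      # toy network parameter
image = np.array([1.0, 1.0])            # preset image sample (toy)
target = np.array([1.0, 1.0])           # current feature representation sample (toy)
extract = lambda img: state["w"] * img  # toy feature extraction network
adjust = lambda: state.__setitem__("w", state["w"] + 0.1)  # crude fixed-step update

converged = train_until_threshold(extract, adjust, image, target, 0.05)
print(converged)  # True once the distance meets the second preset threshold
```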
S203, the feature recognition device combines the preset image generation network and the preset feature extraction network into a preset recognition conversion system.
After training the convolutional neural network to obtain the preset feature extraction network, the feature recognition device combines the preset image generation network and the preset feature extraction network into the preset recognition conversion system.
And S204, training a preset recognition conversion system by the feature recognition device according to the initial feature representation sample and the current feature representation sample.
After forming the preset recognition conversion system from the preset image generation network and the preset feature extraction network, the feature recognition device trains the preset recognition conversion system according to the initial feature representation sample and the current feature representation sample.
In this embodiment, the feature recognition device trains the preset recognition conversion system by using the regression path; the regression path is the data transmission path along which the feature recognition device trains the preset recognition conversion system according to the initial feature representation sample and the current feature representation sample.
In this embodiment, the feature recognition device inputs the initial feature representation sample into a preset recognition conversion system to obtain a second feature representation sample; and determining a third spatial distance value between the second feature representation sample and the current feature representation sample; then, the feature recognition device adjusts the preset recognition conversion system based on the third spatial distance value.
Specifically, the process of inputting the initial feature representation sample into the preset recognition conversion system by the feature recognition device to obtain the second feature representation sample is as follows: the feature recognition device inputs the initial feature representation sample into the preset image generation network to obtain an initial reconstructed biometric sample, and then inputs the initial reconstructed biometric sample into the preset feature extraction network to obtain the second feature representation sample.
Specifically, the process of adjusting the preset recognition conversion system by the feature recognition device based on the third spatial distance value is as follows: a third preset threshold is preset in the feature recognition device and is used for judging whether the preset recognition conversion system needs to be adjusted. When the feature recognition device judges that the third spatial distance value meets the third preset threshold, it judges that the preset recognition conversion system does not need to be adjusted. When the feature recognition device judges that the third spatial distance value does not meet the third preset threshold, it adjusts the preset recognition conversion system by using a preset network adjustment method, and stops adjusting the preset recognition conversion system once it judges that the third spatial distance value corresponding to the preset recognition conversion system meets the third preset threshold.
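The regression-path criterion can be sketched as the composition of the two networks followed by a distance check; the two lambda functions below are hypothetical toy stand-ins for the preset image generation network and the preset feature extraction network:

```python
import numpy as np

def regression_distance(image_gen, feat_extract, initial_feat, target_feat):
    # Regression path: old-system feature -> reconstructed biometric
    # sample -> second feature representation sample, then measure the
    # third spatial distance value against the current-system target.
    reconstructed = image_gen(initial_feat)
    second_feat = feat_extract(reconstructed)
    return float(np.linalg.norm(second_feat - target_feat))

image_gen = lambda f: f * 2.0         # toy preset image generation network
feat_extract = lambda img: img / 2.0  # toy preset feature extraction network

d3 = regression_distance(image_gen, feat_extract,
                         np.array([1.0, 2.0]), np.array([1.0, 2.0]))
print(d3)  # 0.0: a perfect round trip, so no further adjustment is needed
```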
For example, fig. 3 shows the network structure of a recognition conversion system, which includes an image generation network and a feature extraction network. The feature recognition device first learns the recognition conversion system, and then performs feature conversion using the learned system. The learning phase of the recognition conversion system includes three paths: a reconstruction path, a representation path and a regression path. In the reconstruction path, an original feature is input into the image generation network to obtain a reconstructed face, where the original feature is obtained by inputting a real face into the initial feature recognition system; the feature recognition device calculates the Euclidean distance L_Rec(G) between the reconstructed face and the real face, inputs the reconstructed face and the real face into the countermeasure network, calculates the countermeasure loss L_Adv(G, D) between them, and adjusts the image generation network according to the two criteria L_Rec(G) and L_Adv(G, D) until both meet a preset threshold. In the representation path, the real face is input into the feature extraction network to obtain a first feature; the feature recognition device calculates the Euclidean distance L_Rep(E) between the first feature and a target feature, where the target feature is obtained by inputting the real face into the current feature recognition system, and adjusts the feature extraction network according to the criterion L_Rep(E) until L_Rep(E) meets a preset threshold. In the regression path, the original feature is input into the image generation network to obtain a reconstructed face, and the reconstructed face is then input into the feature extraction network to obtain a second feature; the feature recognition device calculates the Euclidean distance L_Reg(G, E) between the second feature and the target feature, and adjusts the recognition conversion system according to the criterion L_Reg(G, E) until L_Reg(G, E) meets a preset threshold. At this point, the feature recognition device completes the learning phase of the recognition conversion system. In the application stage of the recognition conversion system, the feature recognition device inputs the original feature into the recognition conversion system to obtain the target feature of the original feature in the current feature recognition system.
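In the application stage, an existing gallery of old-system features can be migrated in one pass with no access to the original face images; the two lambda networks below are again hypothetical stand-ins for the learned components:

```python
import numpy as np

image_gen = lambda f: np.asarray(f, float) * 2.0  # toy image generation network
feat_extract = lambda img: img / 2.0              # toy feature extraction network

def convert_gallery(old_features):
    # Map every old-system feature through the recognition conversion
    # system (generate, then re-extract) to get new-system features.
    return [feat_extract(image_gen(f)) for f in old_features]

old_gallery = [[0.1, 0.2], [0.3, 0.4]]  # old feature recognition library (toy)
new_gallery = convert_gallery(old_gallery)
print(len(new_gallery))  # 2
```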
The present embodiment provides a feature recognition apparatus 1, as shown in fig. 4, including:
the first image reconstruction module 10 is configured to, when an initial feature representation is received, input the initial feature representation into a preset image generation network to obtain a reconstructed image corresponding to the initial feature representation;
the first feature extraction module 11 is configured to input the reconstructed image into a preset feature extraction network to obtain a current feature representation corresponding to the reconstructed image;
and the feature matching module 12 is configured to determine whether the feature representation to be recognized is matched with the current feature representation under the condition that the feature representation to be recognized is obtained, so as to complete a process of feature recognition.
Optionally, the apparatus further comprises:
and the preset image generation network training module 13 is configured to train the transposed convolutional neural network according to a preset image sample and the initial feature representation sample, so as to obtain the preset image generation network.
Optionally, the apparatus further comprises:
and the preset feature extraction network training module 14 is configured to train the convolutional neural network according to the preset image sample and the current feature representation sample to obtain the preset feature extraction network.
Optionally, the preset image generation network training module 13 includes:
a second image reconstruction module 130, configured to input the initial feature representation sample into the transposed convolutional neural network to obtain a reconstructed image sample;
a preset index value determining module 131, configured to determine a preset index value according to the reconstructed image sample, the preset image sample, and/or the countermeasure network;
the first neural network updating module 132 is configured to adjust the transposed convolutional neural network based on the preset index value.
Optionally, the preset index value determining module 131 includes:
a countermeasure loss value determination module 1310, configured to input the reconstructed image sample and the preset image sample into the countermeasure network, and determine a countermeasure loss value between the reconstructed image sample and the preset image sample;
a first preset index value determining sub-module 1311, configured to determine the countermeasure loss value as a preset index value.
Optionally, the preset index value determining module 131 includes:
a first spatial distance value determining module 1312 for determining a first spatial distance value between the reconstructed image sample and the preset image sample;
a second preset index value determining sub-module 1313, configured to determine the first spatial distance value as the preset index value.
Optionally, the preset index value determining module 131 includes:
a countermeasure loss value determination module 1310, configured to input the reconstructed image sample and the preset image sample into the countermeasure network, and determine a countermeasure loss value between the reconstructed image sample and the preset image sample;
a first spatial distance value determining module 1312 for determining a first spatial distance value between the reconstructed image sample and the preset image sample;
a third preset index value determining submodule 1314, configured to determine the preset index value according to the countermeasure loss value and the first spatial distance value.
Optionally, the first neural network updating module 132 is configured to, when the preset index value does not satisfy a first preset threshold, adjust the transposed convolutional neural network based on the preset index value; and under the condition that the preset index value meets a first preset threshold value, determining the transposed convolutional neural network as the preset image generation network.
Optionally, the preset feature extraction network training module 14 includes:
the second feature extraction module 140 is configured to input the preset image sample into the convolutional neural network to obtain a first feature representation sample;
a second spatial distance value determining module 141, configured to determine a second spatial distance value between the first feature representation sample and the current feature representation sample;
a second neural network update module 142 to adjust the convolutional neural network based on the second spatial distance value.
Optionally, the second neural network updating module 142 is configured to, if the second spatial distance value does not satisfy a second preset threshold, adjust the convolutional neural network based on the second spatial distance value; and under the condition that the second spatial distance value meets a second preset threshold value, determining the convolutional neural network as the preset feature extraction network.
Optionally, the feature matching module 12 is configured to determine a similarity value between the feature representation to be identified and the current feature representation; and determining that the feature recognition of the feature image to be recognized is successful under the condition that the similarity value meets a preset similarity index.
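The matching step of the feature matching module can be sketched with cosine similarity; the patent does not fix the similarity metric or the preset similarity index, so both the metric and the 0.8 threshold here are assumptions:

```python
import numpy as np

def cosine_similarity(a, b):
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches(query_feature, current_feature, threshold=0.8):
    # Recognition succeeds when the similarity value meets the
    # preset similarity index (threshold is a hypothetical choice).
    return cosine_similarity(query_feature, current_feature) >= threshold

print(matches([1.0, 0.0], [1.0, 0.1]))  # True: nearly identical directions
print(matches([1.0, 0.0], [0.0, 1.0]))  # False: orthogonal features
```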
In the feature recognition apparatus provided in this embodiment, when an initial feature representation is received, the initial feature representation is input into a preset image generation network to obtain a reconstructed image corresponding to the initial feature representation; the reconstructed image is input into a preset feature extraction network to obtain a current feature representation corresponding to the reconstructed image; and, when the feature representation to be recognized is obtained, it is determined whether the feature representation to be recognized matches the current feature representation, so as to complete the feature recognition process. In this way, the preset image generation network and the preset feature extraction network are preset in the feature recognition device. When the feature recognition device determines that the feature recognition system has changed, it converts the initial feature representation obtained by the initial feature recognition system into the current feature representation corresponding to the current feature recognition system by using the preset image generation network and the preset feature extraction network. When the feature recognition device subsequently performs feature recognition through the current feature recognition system, it directly uses the current feature representation to recognize the collected feature image to be recognized, which simplifies the implementation of feature recognition and improves the intelligence of feature recognition.
Fig. 5 is a schematic diagram of a first composition structure of the image apparatus 2 according to the present embodiment, and in practical application, based on the same disclosure concept of the foregoing embodiment, as shown in fig. 5, the image apparatus 2 according to the present embodiment includes: a processor 20, a memory 21, and a communication bus 22.
In a specific embodiment, the first image reconstruction module 10, the first feature extraction module 11, the feature matching module 12, the preset image generation network training module 13, the second image reconstruction module 130, the preset index value determining module 131, the countermeasure loss value determination module 1310, the first preset index value determining sub-module 1311, the first spatial distance value determining module 1312, the second preset index value determining sub-module 1313, the third preset index value determining sub-module 1314, the first neural network updating module 132, the preset feature extraction network training module 14, the second feature extraction module 140, the second spatial distance value determining module 141, and the second neural network updating module 142 may be implemented by a processor 20 located on the image device 2, and the processor 20 may be an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a CPU, a controller, a microcontroller, or a microprocessor. It is understood that the electronic device for implementing the above-mentioned processor function may be another device, and this embodiment is not specifically limited thereto.
In the embodiment of the present disclosure, the communication bus 22 is used to implement connection communication between the processor 20 and the memory 21; the processor 20 is configured to execute the operating program stored in the memory 21 to implement the following steps:
under the condition of receiving an initial feature representation, inputting the initial feature representation into a preset image generation network to obtain a reconstructed image corresponding to the initial feature representation; inputting the reconstructed image into a preset feature extraction network to obtain a current feature representation corresponding to the reconstructed image; and under the condition that the feature representation to be recognized is obtained, determining whether the feature representation to be recognized is matched with the current feature representation or not so as to finish the process of feature recognition.
In this embodiment, the processor 20 is further configured to train the transposed convolutional neural network according to a preset image sample and an initial feature representation sample, so as to obtain the preset image generation network.
In this embodiment, the processor 20 is further configured to train a convolutional neural network according to the preset image sample and the current feature representation sample, so as to obtain the preset feature extraction network.
In this embodiment, the processor 20 is further configured to input the initial feature representation sample into the transposed convolutional neural network to obtain a reconstructed image sample; determining a preset index value according to the reconstructed image sample, the preset image sample and/or the countermeasure network; and adjusting the transposed convolutional neural network based on the preset index value.
In this embodiment, the processor 20 is further configured to input the reconstructed image sample and the preset image sample into the countermeasure network, determine a countermeasure loss value between the reconstructed image sample and the preset image sample, and determine the countermeasure loss value as a preset index value.
In this embodiment, further, the processor 20 is further configured to determine a first spatial distance value between the reconstructed image sample and the preset image sample; and determining the first space distance value as the preset index value.
In this embodiment, the processor 20 is further configured to input the reconstructed image sample and the preset image sample into the countermeasure network, and determine a countermeasure loss value between the reconstructed image sample and the preset image sample; determining a first spatial distance value between the reconstructed image sample and the preset image sample; and determining the preset index value according to the countermeasure loss value and the first space distance value.
In this embodiment, the processor 20 is further configured to, when the preset index value does not satisfy a first preset threshold, adjust the transposed convolutional neural network based on the preset index value; and under the condition that the preset index value meets a first preset threshold value, determining the transposed convolutional neural network as the preset image generation network.
In this embodiment, the processor 20 is further configured to input the preset image sample into the convolutional neural network to obtain a first feature representation sample; determining a second spatial distance value between the first feature representation sample and the current feature representation sample; adjusting the convolutional neural network based on the second spatial distance value.
In this embodiment, the processor 20 is further configured to, in a case that the second spatial distance value does not satisfy a second preset threshold, adjust the convolutional neural network based on the second spatial distance value; and under the condition that the second spatial distance value meets a second preset threshold value, determining the convolutional neural network as the preset feature extraction network.
In this embodiment, further, the processor 20 is further configured to determine a similarity value between the feature representation to be identified and the current feature representation; and determining that the feature recognition of the feature image to be recognized is successful under the condition that the similarity value meets a preset similarity index.
The present embodiment provides a computer-readable storage medium, which stores one or more programs, which are executable by one or more processors and applied to a feature recognition apparatus, and when the programs are executed by the processors, the method for feature recognition according to the above embodiment is implemented.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a feature recognition device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present disclosure.
The above description is only for the preferred embodiment of the present disclosure, and is not intended to limit the scope of the present disclosure.

Claims (15)

1. A method of model training, the method comprising:
training a transposed convolutional neural network according to a preset image sample and an initial feature representation sample to obtain a preset image generation network;
training a convolutional neural network according to the preset image sample and the current feature representation sample to obtain a preset feature extraction network;
training a preset recognition conversion system by using the initial feature representation sample and the current feature representation sample to obtain the trained preset recognition conversion system; the preset recognition conversion system is composed of the preset image generation network and the preset feature extraction network.
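The conversion system of claim 1 can be illustrated with a minimal sketch. Here `generate_image` and `extract_feature` are hypothetical stand-ins (plain linear maps) for the trained preset image generation network and preset feature extraction network; the dimensions are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two trained networks: the image
# generation network maps an old-style feature to a (flattened) image,
# and the feature extraction network maps that image to a new-style feature.
W_gen = rng.standard_normal((64, 16))  # feature (16-d) -> image pixels (64)
W_ext = rng.standard_normal((32, 64))  # image pixels (64) -> feature (32-d)

def generate_image(initial_feature):
    """Preset image generation network (sketch): reconstruct an image."""
    return W_gen @ initial_feature

def extract_feature(image):
    """Preset feature extraction network (sketch): extract the current feature."""
    return W_ext @ image

def convert(initial_feature):
    """Preset recognition conversion system: generator followed by extractor."""
    return extract_feature(generate_image(initial_feature))

old_feature = rng.standard_normal(16)
new_feature = convert(old_feature)
```

The composition mirrors the claim: the system is nothing more than the image generation network feeding the feature extraction network.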
2. The method according to claim 1, wherein the training the transposed convolutional neural network according to the preset image sample and the initial feature representation sample to obtain the preset image generation network comprises:
inputting the initial feature representation sample into the transposed convolutional neural network to obtain a reconstructed image sample;
determining a preset index value according to the reconstructed image sample and the preset image sample, or according to the reconstructed image sample, the preset image sample, and an adversarial network;
and adjusting the transposed convolutional neural network based on the preset index value to obtain the preset image generation network.
3. The method according to claim 2, wherein the determining the preset index value according to the reconstructed image sample, the preset image sample, and/or the adversarial network comprises:
inputting the reconstructed image sample and the preset image sample into the adversarial network, determining an adversarial loss value between the reconstructed image sample and the preset image sample, and determining the adversarial loss value as the preset index value.
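The adversarial ("countermeasure") loss of claim 3 can be sketched with a toy discriminator. The linear scorer below is a hypothetical stand-in for the adversarial network; the loss is the standard GAN discriminator objective, which is one plausible reading of the claim, not the only one.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical discriminator: a fixed linear scorer standing in for the
# adversarial network of the claim, mapping a flattened 64-pixel image
# to a probability of being a real (preset) image sample.
rng = np.random.default_rng(1)
w_disc = rng.standard_normal(64)

def discriminator(image):
    return sigmoid(w_disc @ image)

def adversarial_loss(reconstructed, real):
    """Standard GAN discriminator loss: real image samples should score
    near 1, reconstructed (fake) samples near 0; the loss is the sum of
    the two negative log-likelihoods and is always non-negative."""
    eps = 1e-12  # guard against log(0)
    d_real = discriminator(real)
    d_fake = discriminator(reconstructed)
    return float(-(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)))
```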
4. The method according to claim 2, wherein the determining the preset index value according to the reconstructed image sample, the preset image sample, and/or the adversarial network comprises:
determining a first spatial distance value between the reconstructed image sample and the preset image sample;
and determining the first spatial distance value as the preset index value.
5. The method according to claim 2, wherein the determining the preset index value according to the reconstructed image sample, the preset image sample, and/or the adversarial network comprises:
inputting the reconstructed image sample and the preset image sample into the adversarial network, and determining an adversarial loss value between the reconstructed image sample and the preset image sample;
determining a first spatial distance value between the reconstructed image sample and the preset image sample;
and determining the preset index value according to the adversarial loss value and the first spatial distance value.
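Claims 4 and 5 leave the distance metric and the combination rule open; the sketch below assumes a Euclidean (L2) distance and a weighted sum, both of which are hypothetical choices rather than anything the claims fix.

```python
import numpy as np

def spatial_distance(reconstructed, preset):
    """First spatial distance value between the reconstructed image sample
    and the preset image sample. L2 (Euclidean) distance is an assumption;
    the claim does not name a specific metric."""
    return float(np.linalg.norm(reconstructed - preset))

def preset_index_value(adv_loss, distance, weight=1.0):
    """Claim 5: combine the adversarial loss value and the first spatial
    distance value into one preset index value. A weighted sum is one
    plausible combination; the weight is a hypothetical hyperparameter."""
    return adv_loss + weight * distance
```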
6. The method according to any one of claims 2 to 5, wherein the adjusting the transposed convolutional neural network based on the preset index value to obtain the preset image generation network comprises:
under the condition that the preset index value does not meet a first preset threshold value, adjusting the transposed convolutional neural network based on the preset index value to obtain the preset image generation network;
and under the condition that the preset index value meets a first preset threshold value, determining the transposed convolutional neural network as the preset image generation network.
7. The method of claim 1, wherein training the convolutional neural network according to the preset image sample and the current feature representation sample to obtain a preset feature extraction network comprises:
inputting the preset image sample into the convolutional neural network to obtain a first feature representation sample;
determining a second spatial distance value between the first feature representation sample and the current feature representation sample;
adjusting the convolutional neural network based on the second spatial distance value.
8. The method of claim 7, wherein the adjusting the convolutional neural network based on the second spatial distance value comprises:
adjusting the convolutional neural network based on the second spatial distance value if the second spatial distance value does not satisfy a second preset threshold;
and under the condition that the second spatial distance value meets a second preset threshold value, determining the convolutional neural network as the preset feature extraction network.
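Claims 7 and 8 together describe a threshold-stopped training loop: extract features from the preset image samples, measure the second spatial distance to the current feature representation samples, and keep adjusting until the distance satisfies the second preset threshold. The sketch below uses hypothetical `extract` and `adjust` callables in place of the convolutional network and its parameter update; the toy update merely moves the mapping halfway toward the target each step.

```python
import numpy as np

def train_extractor(extract, adjust, images, targets, threshold, max_steps=100):
    """Loop sketch of claims 7-8. `extract` stands in for the convolutional
    neural network, `adjust` for one training update; both are hypothetical.
    Training stops once the second spatial distance value (L2, an assumed
    metric) satisfies the second preset threshold."""
    distance = float("inf")
    for _ in range(max_steps):
        preds = extract(images)
        distance = float(np.linalg.norm(preds - targets))
        if distance <= threshold:            # threshold met: network is done
            return extract, distance
        extract = adjust(extract, distance)  # otherwise adjust and retry
    return extract, distance

# Toy demonstration: the "network" is a scaling of its input, and each
# adjustment averages it with the identity map, so it converges to targets.
images = np.ones(4)
targets = np.ones(4)
initial_extract = lambda x: 2.0 * x

def adjust(f, distance):
    return lambda x: (f(x) + x) / 2.0

trained, final_distance = train_extractor(
    initial_extract, adjust, images, targets, threshold=0.1)
```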
9. A method of feature recognition, the method comprising:
when detecting that the feature recognition system has changed, acquiring an initial feature representation and inputting it into a trained preset recognition conversion system to obtain a current feature representation corresponding to the initial feature representation; the trained preset recognition conversion system is obtained by training with the method of any one of claims 1-8;
and under the condition that a feature representation to be identified is acquired, determining whether the feature representation to be identified matches the current feature representation, so as to complete the feature recognition process using the current feature recognition system.
10. The method of claim 9, wherein the inputting the initial feature representation into a trained preset recognition conversion system to obtain a current feature representation corresponding to the initial feature representation comprises:
inputting the initial feature representation into a preset image generation network in the trained preset recognition conversion system to obtain a reconstructed image corresponding to the initial feature representation;
and inputting the reconstructed image into a preset feature extraction network in the trained preset recognition conversion system to obtain the current feature representation corresponding to the reconstructed image.
11. The method of claim 9, wherein the determining whether the feature representation to be recognized matches the current feature representation to complete the feature recognition process using the current feature recognition system comprises:
determining a similarity value between the feature representation to be identified and the current feature representation;
and determining that the feature recognition of the feature image to be recognized is successful under the condition that the similarity value meets a preset similarity index.
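The matching step of claim 11 can be sketched as a similarity check. Cosine similarity and the 0.8 threshold are assumptions for illustration; the claim fixes neither the similarity measure nor the preset similarity index.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity value between the feature representation to be identified
    and the current feature representation."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(feature_to_identify, current_feature, preset_similarity=0.8):
    """Claim 11: recognition succeeds when the similarity value meets the
    preset similarity index. The cosine measure and the 0.8 threshold are
    hypothetical choices, not specified by the claim."""
    return cosine_similarity(feature_to_identify, current_feature) >= preset_similarity
```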
12. A model training apparatus, the apparatus comprising:
the preset image generation network training module is used for training a transposed convolutional neural network according to the preset image sample and the initial feature representation sample to obtain a preset image generation network;
the preset feature extraction network training module is used for training the convolutional neural network according to the preset image sample and the current feature representation sample to obtain a preset feature extraction network;
the preset recognition conversion system training module is used for training a preset recognition conversion system by using the initial feature representation sample and the current feature representation sample to obtain a trained preset recognition conversion system; the preset recognition conversion system is composed of the preset image generation network and the preset feature extraction network.
13. An apparatus for feature recognition, the apparatus comprising:
a feature recognition module, which is used for acquiring an initial feature representation when detecting that the feature recognition system has changed, and inputting the initial feature representation into a trained preset recognition conversion system to obtain a current feature representation corresponding to the initial feature representation; the trained preset recognition conversion system is obtained by training with the apparatus of claim 12;
and a feature matching module, which is used for determining, under the condition that a feature representation to be identified is acquired, whether the feature representation to be identified matches the current feature representation, so as to complete the feature recognition process using the current feature recognition system.
14. An image device, characterized in that the image device comprises:
a memory;
a processor coupled to the memory and configured to implement the method of any of claims 1 to 11 by executing computer-executable instructions stored in the memory.
15. A computer storage medium, wherein the computer storage medium stores computer-executable instructions; the computer-executable instructions, when executed by a processor, are capable of implementing the method as provided by any one of claims 1 to 11.
CN202111172914.2A 2019-05-08 2019-05-08 Feature recognition method and device and computer readable storage medium Pending CN113936298A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111172914.2A CN113936298A (en) 2019-05-08 2019-05-08 Feature recognition method and device and computer readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111172914.2A CN113936298A (en) 2019-05-08 2019-05-08 Feature recognition method and device and computer readable storage medium
CN201910381801.XA CN110119746B (en) 2019-05-08 2019-05-08 Feature recognition method and device and computer readable storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910381801.XA Division CN110119746B (en) 2019-05-08 2019-05-08 Feature recognition method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113936298A true CN113936298A (en) 2022-01-14

Family

ID=67521943

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910381801.XA Active CN110119746B (en) 2019-05-08 2019-05-08 Feature recognition method and device and computer readable storage medium
CN202111172914.2A Pending CN113936298A (en) 2019-05-08 2019-05-08 Feature recognition method and device and computer readable storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910381801.XA Active CN110119746B (en) 2019-05-08 2019-05-08 Feature recognition method and device and computer readable storage medium

Country Status (1)

Country Link
CN (2) CN110119746B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110659582A (en) * 2019-08-29 2020-01-07 深圳云天励飞技术有限公司 Image conversion model training method, heterogeneous face recognition method, device and equipment
CN110675312B (en) * 2019-09-24 2023-08-29 腾讯科技(深圳)有限公司 Image data processing method, device, computer equipment and storage medium
CN110956127A (en) * 2019-11-28 2020-04-03 重庆中星微人工智能芯片技术有限公司 Method, apparatus, electronic device, and medium for generating feature vector
CN110956129A (en) * 2019-11-28 2020-04-03 重庆中星微人工智能芯片技术有限公司 Method, apparatus, device and medium for generating face feature vector
CN113298060B (en) * 2021-07-27 2021-10-15 支付宝(杭州)信息技术有限公司 Privacy-protecting biometric feature recognition method and device

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915625B (en) * 2014-03-11 2019-04-26 重庆邮电大学 A kind of method and device of recognition of face
CN104680131B (en) * 2015-01-29 2019-01-11 深圳云天励飞技术有限公司 The auth method of identity-based certificate information and the identification of face multiple characteristics
CN105117712A (en) * 2015-09-15 2015-12-02 北京天创征腾信息科技有限公司 Single-sample human face recognition method compatible for human face aging recognition
CN107239766A (en) * 2017-06-08 2017-10-10 深圳市唯特视科技有限公司 A kind of utilization resists network and the significantly face of three-dimensional configuration model ajusts method
CN107423700B (en) * 2017-07-17 2020-10-20 广州广电卓识智能科技有限公司 Method and device for verifying testimony of a witness
CN107563510A (en) * 2017-08-14 2018-01-09 华南理工大学 A kind of WGAN model methods based on depth convolutional neural networks
CN108229381B (en) * 2017-12-29 2021-01-08 湖南视觉伟业智能科技有限公司 Face image generation method and device, storage medium and computer equipment
CN108257195A (en) * 2018-02-23 2018-07-06 深圳市唯特视科技有限公司 A kind of facial expression synthetic method that generation confrontation network is compared based on geometry
CN108573479A (en) * 2018-04-16 2018-09-25 西安电子科技大学 The facial image deblurring and restoration methods of confrontation type network are generated based on antithesis
CN109003331A (en) * 2018-06-13 2018-12-14 东莞时谛智能科技有限公司 A kind of image reconstructing method
CN109508669B (en) * 2018-11-09 2021-07-23 厦门大学 Facial expression recognition method based on generative confrontation network
CN109726654A (en) * 2018-12-19 2019-05-07 河海大学 A kind of gait recognition method based on generation confrontation network

Also Published As

Publication number Publication date
CN110119746A (en) 2019-08-13
CN110119746B (en) 2021-11-30

Similar Documents

Publication Publication Date Title
CN110119746B (en) Feature recognition method and device and computer readable storage medium
CN108491794B (en) Face recognition method and device
KR20200145827A (en) Facial feature extraction model learning method, facial feature extraction method, apparatus, device, and storage medium
US11410327B2 (en) Location determination apparatus, location determination method and computer program
CN111783506A (en) Method and device for determining target characteristics and computer-readable storage medium
CN114529765A (en) Data processing method, data processing equipment and computer readable storage medium
CN112884147A (en) Neural network training method, image processing method, device and electronic equipment
CN116994188A (en) Action recognition method and device, electronic equipment and storage medium
CN117115595B (en) Training method and device of attitude estimation model, electronic equipment and storage medium
CN116152938A (en) Method, device and equipment for training identity recognition model and transferring electronic resources
CN116758590B (en) Palm feature processing method, device, equipment and medium for identity authentication
KR20210018586A (en) Method and apparatus for identifying video content using biometric features of characters
CN115984977A (en) Living body detection method and system
CN114494809A (en) Feature extraction model optimization method and device and electronic equipment
CN116778534B (en) Image processing method, device, equipment and medium
CN110795972A (en) Pedestrian identity recognition method, device, equipment and storage medium
KR100998842B1 (en) Method and apparatus for face recognition using boosting method
US20230259600A1 (en) Adaptive personalization for anti-spoofing protection in biometric authentication systems
CN116665315A (en) Living body detection model training method, living body detection method and living body detection system
CN116189315A (en) Living body detection method and system
CN116259116A (en) Living body detection method and system
CN117011906A (en) Face key point recognition model training method, face recognition method and storage medium
CN116110135A (en) Living body detection method and system
CN118015386A (en) Image recognition method and device, storage medium and electronic equipment
CN115995028A (en) Living body detection model training method, living body detection method and living body detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination