CN110647864A - Single-person multi-image feature recognition method, device and medium based on a generative adversarial network


Info

Publication number
CN110647864A
Authority
CN
China
Prior art keywords
feature vector
feature
adversarial network
person
recognition method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910945327.9A
Other languages
Chinese (zh)
Inventor
康燕斌
张志齐
Current Assignee
Shanghai Yitu Network Science and Technology Co., Ltd.
Original Assignee
Shanghai Yitu Network Science and Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Yitu Network Science and Technology Co., Ltd.
Priority to CN201910945327.9A priority Critical patent/CN110647864A/en
Publication of CN110647864A publication Critical patent/CN110647864A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A single-person multi-image feature recognition method, device and medium based on a generative adversarial network are provided. The method comprises the following steps: obtaining a face data set of a single person and extracting a feature vector set from it; constructing a generative adversarial network and compressing the feature vector set into a first feature vector with the generator; extracting a second feature vector from the face picture to be recognized; and feeding the two feature vectors into the discriminator, which judges the probability that they have a fusion relation. The method preserves the commonality of the input features during prediction and removes differences between them that do not affect recognition; it retains as much of the information in the input features as possible; and it improves comparison performance when multiple pictures are available.

Description

Single-person multi-image feature recognition method, device and medium based on a generative adversarial network
Technical Field
The invention relates to the technical field of computer image processing, and in particular to a single-person multi-image feature recognition method, device and medium based on a generative adversarial network.
Background
Face recognition technology has broad development prospects and economic benefits in public security investigation, access control systems, target tracking, and other civil security applications.
Machine learning is one of the branches of artificial intelligence and occupies a central position in it. Machine learning enables a computer to learn: it can simulate the learning behavior of human beings, build up a learning ability, and carry out recognition and judgment. Machine learning uses algorithms to analyze massive amounts of data, finds rules in the data to complete learning, and then makes decisions and predictions about real events with the learned model. Machine learning is a method of artificial intelligence, deep learning is a technique for realizing machine learning, and the generative adversarial network is one class of deep learning model.
The convolutional neural network (CNN) is a recognition algorithm based on artificial neural networks and developed in combination with deep learning theory. A convolutional neural network can extract high-level features and improve the expressive power of the features.
Generative adversarial networks (GANs) were first proposed in the 2014 paper "Generative Adversarial Networks" by Ian Goodfellow et al. and introduced into the field of deep learning. A GAN is a probabilistic generative model comprising a generator network G and a discriminator network D; it innovatively trains the two neural networks with an adversarial training mechanism and optimizes their parameters by stochastic gradient descent (SGD). GANs perform prominently in computer vision tasks such as image translation, image super-resolution and image inpainting, and are also an important research technique for face recognition.
The prior art makes poor use of the information in a set of face pictures: it basically uses only single-image information and some simple statistics, does not exploit the richer information available, and therefore suffers from inaccurate and insufficient performance.
Patent application CN201910329770.3 relates to a face image recognition method and apparatus, an electronic device and a storage medium. The method includes: obtaining a plurality of face images; obtaining a plurality of target objects to be identified from the feature vectors extracted from the face images; obtaining a gradient parameter from the feature vectors and a classification reference vector; and classifying the target objects according to the gradient parameter to obtain a classification result, thereby improving the face recognition effect.
Disclosure of Invention
The invention aims to solve the above problems and provides a single-person multi-image feature recognition method, device and medium based on a generative adversarial network.
In order to achieve the above object, the present invention adopts a technical solution comprising the following steps:
S1, obtaining a face data set of a single person and extracting a feature vector set from it;
S2, constructing a generative adversarial network and compressing the feature vector set into a first feature vector with the generator;
S3, extracting a second feature vector from the face picture to be recognized;
S4, feeding the two feature vectors into the discriminator and judging the probability that they have a fusion relation.
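The four steps above can be sketched end to end. The following is a minimal illustration, assuming a pre-trained feature extractor `rec`, a trained `generator`, and a trained `discriminator` are available as callables; all names here are hypothetical and not taken from the patent:

```python
import numpy as np

def extract_features(images, rec):
    """S1/S3: run a pre-trained face recognition model `rec` (a callable
    returning one feature vector per image) over a list of images."""
    return np.stack([rec(img) for img in images])

def recognize(face_set, probe_image, rec, generator, discriminator):
    """Sketch of steps S1-S4: fuse the feature set of one person into a
    first feature vector, extract a second feature vector from the probe
    picture, and let the discriminator score the fusion relation."""
    feature_set = extract_features(face_set, rec)      # S1: feature vector set
    first_vector = generator(feature_set)              # S2: compress set into one vector
    second_vector = rec(probe_image)                   # S3: probe feature vector
    return discriminator(first_vector, second_vector)  # S4: fusion probability
```

For a quick sanity check, `generator` can be as simple as a mean over the set, whereas the patent trains it adversarially.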
Further, in step S4, the fusion relation is judged as follows: if the second feature vector belongs to the feature vector set, the two feature vectors have a fusion relation, i.e. they come from the same person; otherwise they do not.
Further, in S4 the discriminator judges the probability of the fusion relation by calculating the cosine similarity between the first feature vector and the second feature vector.
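Cosine similarity itself is standard; a minimal implementation of the score the discriminator is described as computing might look like this (the `eps` guard against zero vectors is an added safety assumption):

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    """Cosine similarity between the first (fused) and second (probe)
    feature vectors; higher values indicate a stronger fusion relation."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

# Identical directions score close to 1.0 and orthogonal directions close to 0.0;
# comparing the score against a chosen threshold gives the same/different decision.
```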
Further, in the generative adversarial network, the loss function consists of two parts, a GAN loss and a recognition-related loss. [Equation images in the original are not reproduced here.]
Further, in S4, the cosine similarity is calculated by substituting the fused feature of the single person and the feature of the picture to be recognized for A and B, and the probability that they are the same person is then evaluated with the formula similarity = (A · B) / (‖A‖ ‖B‖).
A threshold is set for the comparison: when similarity >= threshold the two are considered the same person, and when similarity < threshold they are considered different persons.
A face recognition system, characterized in that the system comprises: a storage unit for storing a face data set of a single person; and a generative adversarial network comprising: a generator network that takes a feature vector set as input and outputs a first feature vector, i.e. the generator compresses the information of the input features into one feature; and a discriminator network that takes the first and second feature vectors as input and judges the probability that the two feature vectors have a fusion relation, i.e. computes the cosine distance.
An electronic device, characterized in that it comprises a processor configured to perform any of the methods described above, and a memory for storing instructions executable by the processor.
A computer readable medium having computer program instructions stored thereon, characterized in that: the computer program instructions, when executed, perform any of the methods described above.
Compared with the prior art, the method of the invention incorporates the feature distribution information of one person's multiple images on the training set, retains the commonality of the input features during prediction, and removes differences between the input features that do not affect recognition; it retains as much of the information in the input features as possible; and, with recognition performance preserved, the comparison performance over multiple pictures is improved.
The training process of the generator lets it learn the distribution of features in the one-person-multiple-images setting, i.e. the differences and commonalities between the features. Applied to the prediction scenario, this preserves the commonality between the features, yields more accurate features of the person, and removes information irrelevant to recognition.
The presence of the discriminator drives the generator to produce features that contain as much of the information of all the input features as possible, and the presence of the recognition loss function lets the generator produce discriminative features of a single person, improving recognition performance to a certain extent.
Drawings
Fig. 1 is a schematic diagram of the generative adversarial network of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are merely illustrative of the invention and are not intended to limit it.
The embodiment uses an NVIDIA DGX-1 deep learning machine comprising 8 NVIDIA Tesla V100 compute cards, each with more than 21 billion transistors and a die area of 815 square millimeters (other computing resources providing equivalent computing power can also be used), and the TensorFlow training framework (or another deep learning training framework).
The problem solved by this embodiment is to improve the performance of face-set recognition and comparison on the basis of an existing face recognition model. First, an existing face recognition model is obtained and denoted REC. REC can be a multi-task convolutional neural network (MTCNN), which combines face region detection with facial key-point detection and whose overall structure can be divided into the three-stage networks P-Net, R-Net and O-Net for face detection and key-point localization; or FaceNet, which learns to map a face directly to a point in Euclidean space and then decides by comparing distances between points, extracting features for face recognition. Any other existing visible-light recognition model or method can be used for this task.
The input of REC is a face image and the output is a feature vector of the face. For two images p1 and p2, the probability that they show the same person is obtained by computing the cosine distance between the feature vectors REC(p1) and REC(p2).
In the scenario of this embodiment, two sets of pictures, Set_A and Set_B, are stored, where:
Set_A contains several faces of the same person Pa;
Set_B contains several faces of the same person Pb.
To judge whether Pa and Pb are the same person, the following steps are performed:
1. data preparation
Prepare several groups of annotation data to form an annotation set Anno; each group of data in Anno, denoted set_ann, comprises several face images of the same person.
Prepare several groups of data to form a negative set Q; each group of data in Q, denoted q, is a face image of a person who does not appear in Anno.
The formal training set is divided into two training sets, X_0 and X_1. Each group of data in X_0 consists of one set_ann and one image from Q, and each group of data in X_1 consists of one set_ann and one image from that same set_ann.
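As a sketch of the data preparation above, the two training sets might be assembled as follows. This is a minimal illustration under the stated definitions of Anno and Q; the function name, sampling strategy, and one-pair-per-set layout are assumptions:

```python
import random

def build_training_sets(anno, q_neg):
    """Build X_0 (negative pairs) and X_1 (positive pairs) from the
    annotation set `anno` (lists of same-person face features) and the
    negative pool `q_neg` (features of persons absent from `anno`)."""
    x0, x1 = [], []
    for set_ann in anno:
        # X_0: the set paired with an image of a different person -> label 0
        x0.append((set_ann, random.choice(q_neg), 0))
        # X_1: the set paired with one of its own images -> label 1
        x1.append((set_ann, random.choice(set_ann), 1))
    return x0, x1
```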
2. Algorithm design
To address this problem, a generative adversarial network is designed; Fig. 1 illustrates the generative adversarial network of the present invention.
A picture set of one person is processed with REC to obtain a feature vector set FS_A.
The generator G takes a feature vector set as input and outputs a single feature vector; the generator compresses the information of the input features into one feature.
The discriminator D takes two feature vectors as input and judges the probability that they have a fusion relation. One feature vector f comes from the generator's fusion, and the other feature vector fx is extracted with REC from an arbitrary picture.
The judgment rule is: if fx comes from FS_A, D should tend to consider that the two feature vectors have a fusion relation; if fx does not come from FS_A, D should tend to consider that they do not.
Preferably, the discriminator D and the generator G are both multilayer perceptrons adopting a multilayer structure: 2 layers with 1024 neurons each, followed by a global pooling layer.
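As a rough numeric sketch of the generator architecture just described (two MLP layers of 1024 neurons followed by global pooling over the feature set), the forward pass could look like the following. The random placeholder weights, the input dimension of 512, and the output returning to the input dimension are all assumptions for illustration; a real generator would be trained adversarially:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_pool_generator(feature_set, d_in=512, hidden=1024):
    """Toy forward pass of the generator: a 2-layer MLP (1024 units each)
    applied per feature, then global average pooling that fuses the whole
    set into one vector. Weights here are random placeholders."""
    w1 = rng.standard_normal((d_in, hidden)) * 0.01
    w2 = rng.standard_normal((hidden, d_in)) * 0.01
    h = np.maximum(feature_set @ w1, 0.0)  # layer 1 + ReLU
    h = np.maximum(h @ w2, 0.0)            # layer 2, back to feature size (assumption)
    return h.mean(axis=0)                  # global pooling over the set dimension
```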
The loss function of the network consists of two parts, a GAN loss and a recognition-related loss. [Equation images in the original are not reproduced here.]
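The equation images did not survive extraction. Given the description of the loss as one GAN part and one recognition-related part, a standard form consistent with the text would be the usual GAN minimax objective plus a cosine-based recognition term; this reconstruction is an assumption, not the patent's exact formulas:

```latex
% Assumed reconstruction; the original equation images are missing.
% Part 1: standard GAN minimax objective over generator G and discriminator D,
% where FS is an input feature vector set and f a real single-image feature.
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{f \sim p_{\mathrm{data}}}\big[\log D(f)\big]
  + \mathbb{E}_{FS}\big[\log\big(1 - D(G(FS))\big)\big]

% Part 2: a recognition-related loss encouraging the fused feature G(FS)
% to have high cosine similarity with features of the same person.
L_{\mathrm{rec}} = 1 - \cos\big(G(FS),\, f_{\mathrm{same}}\big)
```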
During training, the 4 neural networks are each trained by stochastic gradient descent and back-propagation, and training stops once it has converged. One way to judge convergence of the training is provided here (other criteria can also be used): when the loss value has not decreased for one day, the learning rate is reduced to one tenth of its previous value and training continues; when the loss cannot be reduced further after 5 consecutive such reductions, training is considered converged.
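The convergence rule just described (cut the learning rate to one tenth when the loss stops decreasing, and declare convergence after 5 consecutive cuts with no improvement) can be sketched as a small monitor class. The per-check granularity and all names are assumptions; the text uses one day of no improvement as the trigger:

```python
class ConvergenceMonitor:
    """Sketch of the convergence rule: when the loss has not decreased for
    `patience` consecutive checks, cut the learning rate to one tenth;
    after `max_drops` consecutive cuts with no improvement in between,
    consider training converged."""

    def __init__(self, lr=0.01, patience=1, max_drops=5):
        self.lr = lr
        self.patience = patience
        self.max_drops = max_drops
        self.best = float("inf")
        self.stale = 0
        self.drops = 0
        self.converged = False

    def update(self, loss):
        """Feed the latest loss value; returns the (possibly reduced) lr."""
        if loss < self.best:
            self.best = loss
            self.stale = 0
            self.drops = 0          # an improvement resets the consecutive count
        else:
            self.stale += 1
            if self.stale >= self.patience:
                self.lr /= 10.0     # reduce learning rate to one tenth
                self.drops += 1
                self.stale = 0
                if self.drops >= self.max_drops:
                    self.converged = True
        return self.lr
```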
Preferably, the present embodiment evaluates the probability that Pa and Pb are the same person by computing the cosine similarity, similarity = (A · B) / (‖A‖ ‖B‖), between the two fused features.
A threshold is set for the comparison: when similarity >= threshold the two are considered the same person, and when similarity < threshold they are considered different persons. Cutting different thresholds on a test set yields different false alarm rates and recall rates, so the threshold should be selected according to the application scenario.
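The threshold-selection step can be sketched as a sweep over candidate thresholds on a labeled test set, reporting the false alarm rate and recall at each point; the names and the numpy-based implementation are assumptions:

```python
import numpy as np

def sweep_thresholds(scores, labels, thresholds):
    """For each candidate threshold, compute the false-alarm rate (impostor
    pairs accepted, label 0) and the recall (genuine pairs accepted,
    label 1), so a threshold can be picked per application scenario."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    out = []
    for t in thresholds:
        accept = scores >= t
        far = float(accept[labels == 0].mean()) if (labels == 0).any() else 0.0
        recall = float(accept[labels == 1].mean()) if (labels == 1).any() else 0.0
        out.append((t, far, recall))
    return out
```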
In this embodiment, the generated features are superior to directly selected ones and improve the feature representation, thereby improving face recognition performance. During adversarial training, the generator learns the distribution of features in the one-person-multiple-images setting, i.e. the differences and commonalities between the features; applied to the prediction scenario, this preserves the commonality between the features, yields more accurate features of the person, and removes information irrelevant to recognition.
The presence of the discriminator drives the generator to produce features that contain as much of the information of all the input features as possible, and the presence of the recognition loss function lets the generator produce discriminative features of a single person, improving recognition performance to a certain extent.
In summary, the features generated in this way have the following advantages: they indirectly incorporate the feature distribution information of one person's multiple images on the training set, retain the commonality of the input features during prediction, and remove differences between the input features that do not affect recognition; they contain as much of the information in the input features as possible; and, with recognition performance preserved, the comparison performance over multiple pictures is improved.
Based on the same technical concept, the embodiment further provides a face recognition system, which comprises: a storage unit for storing a face data set of a single person; and a generative adversarial network, in which the generator network takes a feature vector set as input and outputs a first feature vector, i.e. the generator compresses the information of the input features into one feature, and the discriminator network takes the first and second feature vectors as input and judges the probability that the two feature vectors have a fusion relation, i.e. computes the cosine distance.
Based on the same technical concept, the embodiment further provides an electronic device, which includes at least one processor and at least one memory for storing instructions executable by the processor. In the embodiment of the present application the specific connection medium between the processor and the memory is not limited; here they are connected through a bus, and the bus may be divided into an address bus, a data bus, a control bus, etc.
The processor is the control center of the electronic device.
The present embodiments may be implemented or performed with a general-purpose processor such as a central processing unit (CPU), a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The processor may interface with the various parts of the electronic device through various interfaces and lines, and performs any of the methods described above by executing the instructions stored in the memory and invoking the data stored in the memory.
The present embodiment also provides a computer readable medium storing a computer program executable by an electronic device; when the computer program instructions are executed by a terminal device, any of the methods described above is implemented. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus, producing a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the specified functions.
The embodiments of the present invention have been described above in conjunction with the accompanying drawings and examples, which are given by way of illustration and not of limitation; it will be apparent to those skilled in the art that various changes and modifications may be made as required within the scope of the appended claims.

Claims (7)

1. A single-person multi-image feature recognition method based on a generative adversarial network, characterized by comprising the following steps:
S1, obtaining a face data set of a single person and extracting a feature vector set from it;
S2, constructing a generative adversarial network and compressing the feature vector set into a first feature vector with the generator;
S3, extracting a second feature vector from the face picture to be recognized;
S4, feeding the two feature vectors into the discriminator and judging the probability that they have a fusion relation.
2. The single-person multi-image feature recognition method based on a generative adversarial network of claim 1, characterized in that: in S4, the fusion relation is judged as follows: if the second feature vector belongs to the feature vector set, the two feature vectors have a fusion relation, i.e. they come from the same person; otherwise they do not.
3. The single-person multi-image feature recognition method based on a generative adversarial network of claim 1 or 2, characterized in that: in S4, the discriminator judges the probability of the fusion relation by calculating the cosine similarity between the first feature vector and the second feature vector.
4. The single-person multi-image feature recognition method based on a generative adversarial network of claim 1, characterized in that: in the generative adversarial network, the loss function consists of two parts, a GAN loss and a recognition-related loss. [Equation images in the original are not reproduced here.]
The single-person multi-image feature recognition method based on a generative adversarial network of claim 1 or 2, characterized in that: in S4, the cosine similarity is calculated by substituting the fused feature of the single person and the feature of the picture to be recognized for A and B, and the probability of being the same person is then evaluated with the formula similarity = (A · B) / (‖A‖ ‖B‖).
A threshold is set for the comparison: when similarity >= threshold the two are considered the same person, and when similarity < threshold they are considered different persons.
5. A face recognition system, characterized in that the system comprises: a storage unit for storing a face data set of a single person; and
a generative adversarial network, comprising:
a generator network that takes a feature vector set as input and outputs a first feature vector, i.e. the generator compresses the information of the input features into one feature; and
a discriminator network that takes the first and second feature vectors as input and judges the probability that the two feature vectors have a fusion relation, i.e. computes the cosine distance.
6. An electronic device, characterized in that: it comprises a processor, and a memory for storing instructions executable by the processor, the processor being configured to perform the method of any one of claims 1-5.
7. A computer readable medium having computer program instructions stored thereon, characterized in that: the computer program instructions, when executed, implement the method of any one of claims 1-5.
CN201910945327.9A 2019-09-30 2019-09-30 Single multi-graph feature recognition method, equipment and medium based on generation countermeasure network Pending CN110647864A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910945327.9A CN110647864A (en) 2019-09-30 2019-09-30 Single multi-graph feature recognition method, equipment and medium based on generation countermeasure network


Publications (1)

Publication Number Publication Date
CN110647864A true CN110647864A (en) 2020-01-03




Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104517104A (en) * 2015-01-09 2015-04-15 苏州科达科技股份有限公司 Face recognition method and face recognition system based on monitoring scene
CN105991916A (en) * 2015-02-05 2016-10-05 联想(北京)有限公司 Information processing method and electronic equipment
CN106599837A (en) * 2016-12-13 2017-04-26 北京智慧眼科技股份有限公司 Face identification method and device based on multi-image input
CN107958444A (en) * 2017-12-28 2018-04-24 江西高创保安服务技术有限公司 A kind of face super-resolution reconstruction method based on deep learning
CN107977932A (en) * 2017-12-28 2018-05-01 北京工业大学 It is a kind of based on can differentiate attribute constraint generation confrontation network face image super-resolution reconstruction method
CN108229330A (en) * 2017-12-07 2018-06-29 深圳市商汤科技有限公司 Face fusion recognition methods and device, electronic equipment and storage medium
CN108268845A (en) * 2018-01-17 2018-07-10 深圳市唯特视科技有限公司 A kind of dynamic translation system using generation confrontation network synthesis face video sequence
CN108319932A (en) * 2018-03-12 2018-07-24 中山大学 A kind of method and device for the more image faces alignment fighting network based on production
CN108446609A (en) * 2018-03-02 2018-08-24 南京邮电大学 A kind of multi-angle human facial expression recognition method based on generation confrontation network
CN109308725A (en) * 2018-08-29 2019-02-05 华南理工大学 A kind of system that expression interest figure in mobile terminal generates
CN109544656A (en) * 2018-11-23 2019-03-29 南京信息工程大学 A kind of compressed sensing image rebuilding method and system based on generation confrontation network
CN110222668A (en) * 2019-06-17 2019-09-10 苏州大学 Based on the multi-pose human facial expression recognition method for generating confrontation network
CN110288537A (en) * 2019-05-20 2019-09-27 湖南大学 Facial image complementing method based on the depth production confrontation network from attention


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YONGMING RAO ET AL.: "Learning Discriminative Aggregation Network for Video-based Face Recognition", 《2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION》 *
ZHAOXIANG LIU ET AL.: "Fine-grained Attention-based Video Face Recognition", 《ARXIV:1905.01796V1 [CS.CV] 6 MAY 2019》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288032A (en) * 2020-11-18 2021-01-29 上海依图网络科技有限公司 Method and device for quantitative model training based on generation of confrontation network
CN112288032B (en) * 2020-11-18 2022-01-14 上海依图网络科技有限公司 Method and device for quantitative model training based on generation of confrontation network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200103