CN109711546B - Neural network training method and device, electronic equipment and storage medium


Info

Publication number
CN109711546B
CN109711546B (application CN201811573466.5A)
Authority
CN
China
Prior art keywords
training
network
identity
feature
feature data
Prior art date
Legal status
Active
Application number
CN201811573466.5A
Other languages
Chinese (zh)
Other versions
CN109711546A (en)
Inventor
朱烽
赵瑞
陈大鹏
张凯鹏
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd
Priority to CN201811573466.5A
Publication of CN109711546A
Application granted
Publication of CN109711546B


Abstract

The present disclosure relates to a neural network training method and apparatus, an electronic device, and a storage medium. The method includes: respectively inputting a plurality of first feature data included in a pre-stored feature set into a pre-trained decoding network for processing to obtain a plurality of first training images, and performing incremental training on a face recognition network based on the plurality of first training images and a plurality of newly added second training images, thereby improving the performance of the face recognition network.

Description

Neural network training method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a neural network training method and apparatus, an electronic device, and a storage medium.
Background
Face recognition is one of the most important applications in the field of intelligent video surveillance. Traditional face recognition algorithms are generally trained on fixed training data (face data collected by a face recognition research institution) to continuously optimize a model, and are then deployed in a practical application system. However, owing to acquisition cost, privacy, and similar concerns, such training data is difficult to collect at scale and rarely covers all application scenarios, which constrains further improvement in the accuracy of face recognition models.
Disclosure of Invention
The present disclosure provides a neural network training technical solution.
According to an aspect of the present disclosure, there is provided a neural network training method, including:
in a possible implementation manner, a plurality of first feature data included in a pre-stored feature set are respectively input into a pre-trained decoding network for processing, and a plurality of first training images are obtained, wherein the first feature data at least include identity-related features, and the identity-related features are used for representing identity information of an object in the first training image corresponding to the first feature data; and performing incremental training on the face recognition network based on the plurality of first training images and the plurality of newly added second training images.
In a possible implementation manner, a plurality of third training images are respectively input into a pre-trained coding network for processing, so as to obtain the plurality of first feature data.
In one possible implementation, training the decoding network based on the plurality of third training images and the plurality of first feature data, wherein training the decoding network includes: inputting the first characteristic data into the decoding network for processing to obtain a plurality of predicted images; determining a first loss of the decoding network according to the plurality of predicted images and the plurality of third training images; and training a decoding network according to the first loss.
In one possible implementation manner, the encoding network includes a first encoding subnetwork and a second encoding subnetwork, each of the first feature data further includes an identity-independent feature, and the identity-independent feature is used to represent feature information that is independent of the identity of the object in the first training image corresponding to the first feature data, where the plurality of third training images are respectively input into the pre-trained encoding network to be processed, so as to obtain the plurality of first feature data, and the method includes: inputting the third training image into a first coding sub-network for processing to obtain identity-related features of the first feature data; and inputting the third training image into a second coding sub-network for processing to obtain the identity-independent feature of the first feature data.
In one possible implementation, the training of the first coding subnetwork comprises: training the first coding subnetwork based on the labeled third training images.
In one possible implementation, after training the first coding subnetwork based on the labeled third training images, the method includes: training the second coding subnetwork based on a plurality of third training images; training the decoding network based on a plurality of third training images, a plurality of identity-related features, and a plurality of identity-independent features.
In one possible implementation, the feature set is deployed at a first server; the decoding network is deployed at a second server.
In one possible implementation, the feature set is deployed in the first server in an encrypted manner, and the decoding network is deployed in the second server in an encrypted manner.
According to an aspect of the present disclosure, there is provided an image recognition method including: acquiring an image, inputting the acquired image into a face recognition network after incremental training for processing, and acquiring face recognition data; incremental training of the face recognition network includes: respectively inputting a plurality of first feature data included in a pre-stored feature set into a pre-trained decoding network for processing to obtain a plurality of first training images, wherein the first feature data at least include identity-related features which are used for representing identity information of an object in the first training image corresponding to the first feature data; and performing incremental training on the face recognition network based on the plurality of first training images and the plurality of newly added second training images.
According to an aspect of the present disclosure, there is provided a neural network training apparatus including: the first image acquisition module is used for respectively inputting a plurality of first feature data included in a pre-stored feature set into a pre-trained decoding network for processing to obtain a plurality of first training images, wherein the first feature data at least include identity-related features which are used for representing identity information of an object in the first training image corresponding to the first feature data; and the incremental training module is used for carrying out incremental training on the face recognition network based on the plurality of first training images and the plurality of newly added second training images.
In one possible implementation, the apparatus further includes: and the first acquisition module is used for respectively inputting the plurality of third training images into a pre-trained coding network for processing to obtain the plurality of first characteristic data.
In one possible implementation manner, the decoding network training module is configured to train the decoding network based on the third training images and the first feature data, where the decoding network training module includes: the prediction image obtaining sub-module is used for inputting the first characteristic data into the decoding network for processing to obtain a plurality of prediction images; a first loss determining module, configured to determine a first loss of the decoding network according to the plurality of predicted images and the plurality of third training images; and the training submodule is used for training the decoding network according to the first loss.
In one possible implementation, the coding network comprises a first coding sub-network and a second coding sub-network, each first feature data further comprises an identity-independent feature representing object-identity-independent feature information in a first training image corresponding to the first feature data,
wherein the first obtaining module comprises: the first characteristic data acquisition sub-module is used for inputting the third training image into the first coding sub-network for processing to obtain the identity-related characteristic of the first characteristic data; and the second characteristic data acquisition sub-module is used for inputting the third training image into a second coding sub-network for processing to obtain the identity-independent characteristic of the first characteristic data.
In one possible implementation, the first training module is configured to train the first coding subnetwork based on a plurality of labeled third training images.
In one possible implementation, the apparatus further includes: a second training module to train the second coding subnetwork based on a plurality of third training images; a third training module to train the decoding network based on a plurality of third training images, a plurality of identity-related features, and a plurality of identity-independent features.
In one possible implementation, the feature set is deployed at a first server; the decoding network is deployed at a second server.
In one possible implementation, the feature set is deployed in the first server in an encrypted manner, and the decoding network is deployed in the second server in an encrypted manner.
According to an aspect of the present disclosure, there is provided an image recognition apparatus including: the second image acquisition module is used for acquiring images, and the face recognition data acquisition module is used for inputting the acquired images into the face recognition network after the incremental training for processing to acquire face recognition data; incremental training of the face recognition network includes: respectively inputting a plurality of first feature data included in a pre-stored feature set into a pre-trained decoding network for processing to obtain a plurality of first training images, wherein the first feature data at least include identity-related features which are used for representing identity information of an object in the first training image corresponding to the first feature data; and performing incremental training on the face recognition network based on the plurality of first training images and the plurality of newly added second training images.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, the first feature data which cannot be intuitively understood by the user is respectively input into the pre-trained decoding network for processing to obtain the plurality of first training images, and the face recognition network is subjected to incremental training based on the plurality of first training images and the plurality of newly added second training images, so that the storage space is saved, and the performance of the face recognition network is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow diagram of a neural network training method according to an embodiment of the present disclosure.
Fig. 2 shows a flow chart of training a decoding network in a neural network training method according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram illustrating an application scenario of feature decoding in a neural network training method according to an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of an application scenario for extracting first feature data in a neural network training method according to an embodiment of the present disclosure.
FIG. 5 shows a flow diagram for training a first coding subnetwork, a second coding subnetwork, and a decoding network in a neural network training method according to an embodiment of the disclosure.
Fig. 6 is a schematic diagram illustrating an application scenario for training a first coding subnetwork in a neural network training method according to an embodiment of the present disclosure.
Fig. 7 is a schematic diagram illustrating an application scenario of training a second coding sub-network and a decoding network in a neural network training method according to an embodiment of the present disclosure.
Fig. 8 shows a flowchart of an image recognition method according to an embodiment of the present disclosure.
Fig. 9 illustrates a block diagram of a neural network training device, in accordance with an embodiment of the present disclosure.
Fig. 10 illustrates a block diagram of an image recognition apparatus according to an embodiment of the present disclosure.
FIG. 11 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment.
FIG. 12 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, C" may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flow diagram of a neural network training method according to an embodiment of the present disclosure. The neural network training method may be performed by a terminal device or other processing device, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the neural network training method may be implemented by a processor invoking computer-readable instructions stored in a memory.
As shown in fig. 1, the method includes:
step S110, respectively inputting a plurality of first feature data included in a pre-stored feature set into a pre-trained decoding network for processing, and obtaining a plurality of first training images.
In one possible implementation, the neural network training method can be used for incremental training of the neural network, so that the performance of the neural network in a specific use scenario (such as face recognition) is significantly improved.
In a possible implementation manner, the plurality of first feature data may be obtained from a feature set stored in advance, and in order to ensure the security of the data in the face image set, the embodiment may store the face image in a data format of the first feature data (e.g., a feature vector). In this implementation manner, the first training image in the face image set may cover multiple application scenes, and therefore, a neural network suitable for multiple application scenes may be obtained through training of the face image set.
In one possible implementation, the decoding network may be configured to process (restore) the first feature data into the first training image, which may be understood as the inverse process of the image feature extraction. In this implementation, the decoding network may be obtained by pre-training.
In one possible implementation, each of the first feature data includes at least an identity-related feature representing identity information of an object in the first training image corresponding to the first feature data. The identity-related features may include facial features such as skin tone, nose bridge, brow, cheekbones, chin, lips, eyes, pinna, and face shape of the human face.
And step S120, performing incremental training on the face recognition network based on the plurality of first training images and the plurality of newly added second training images.
The plurality of second training images may be face images acquired in a certain use scene, and the face recognition network with higher performance in the certain use scene may be obtained through training of the second training images.
In a possible implementation manner, the face recognition network can be obtained by training on a face image set; such a network can be used in each application scenario and achieves high recognition performance in each of them. The face recognition network may be any type of neural network used in a face recognition scenario, and this embodiment does not limit its network structure.
In one possible implementation, incremental training (Incremental Learning) enables the neural network to continually learn new "knowledge" from new training samples while retaining most of what it has previously learned. In this embodiment, the face recognition network may be incrementally trained on the first training images and the plurality of second training images, so that the performance of the trained face recognition network is significantly improved in a specific use scenario while its original performance is maintained.
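As a hedged illustration of the idea described above (the function name, sampling ratio, and data stand-ins are assumptions, not part of the disclosure), decoded historical images and newly added images could be mixed into training batches as follows:

```python
import random

def build_incremental_batch(first_images, second_images, batch_size, old_ratio=0.5):
    """Mix decoded historical images (first_images) with newly collected
    images (second_images), so the network keeps old knowledge while
    learning the new scenario. old_ratio is an assumed mixing ratio."""
    n_old = int(batch_size * old_ratio)
    batch = random.sample(first_images, n_old)
    batch += random.sample(second_images, batch_size - n_old)
    random.shuffle(batch)
    return batch

old = [("old", i) for i in range(100)]  # stand-ins for images decoded from the feature set
new = [("new", i) for i in range(20)]   # stand-ins for newly added scenario images
batch = build_incremental_batch(old, new, batch_size=8)
```

With `old_ratio=0.5`, each batch of 8 contains 4 historical and 4 new samples; in practice the ratio would be a tuning choice.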
According to the embodiment of the disclosure, the first feature data which cannot be intuitively understood by the user can be respectively input into the pre-trained decoding network for processing to obtain a plurality of first training images, and the face recognition network is subjected to incremental training based on the plurality of first training images and the plurality of newly added second training images, so that the performance of the face recognition network is improved.
In one possible implementation, the neural network training method further includes: and respectively inputting a plurality of third training images into a pre-trained coding network for processing to obtain a plurality of first characteristic data.
The third training image may be a face image obtained from a training image set. The encoding network is used to process the face image into feature data corresponding to the face image (i.e., a process of feature extraction).
In a possible implementation manner, the neural network training method may perform a feature extraction operation on all face images in the training image set to obtain identity-related features included in the first feature data corresponding to each third training image, and store the obtained plurality of identity-related features in the feature set.
In one possible implementation, the present disclosure does not limit the network structure of the coding network, and developers may make a selection according to specific usage scenarios and performance requirements.
In a possible implementation manner, the process of inputting the plurality of third training images into the pre-trained coding network for processing, and obtaining the plurality of first feature data, may be performed before step S110 of the above implementation manner.
In one possible implementation, the method further includes: training the coding network based on the labeled third training images.
The coding network is configured to extract the first feature data from the first training image in the foregoing implementation manner. The first feature data in such an implementation may comprise only identity-related features. The third training images used for training the coding network may be obtained from the face image set of the above implementation; in another implementation, the third training images and the first training images may be obtained from the same data set.
In some possible implementations, the coding network may be trained by means of supervised learning, and the label information used in the supervised learning process may include an identity label (e.g., an identity identification code) of an object in the third training image, and multiple third training images including the same object may be identified by the same identity label.
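The label structure described above — multiple third training images of the same object sharing one identity label — can be sketched as follows (names and data are illustrative assumptions, not from the disclosure):

```python
def build_identity_labels(samples):
    """Assign one integer identity label per person, shared by every
    image of that person (the supervision signal described above)."""
    label_of = {}
    labels = []
    for person, _image in samples:
        if person not in label_of:
            label_of[person] = len(label_of)  # next unused identity code
        labels.append(label_of[person])
    return labels, label_of

# Hypothetical samples: two images of "alice", one of "bob".
samples = [("alice", "a1.jpg"), ("bob", "b1.jpg"), ("alice", "a2.jpg")]
labels, label_of = build_identity_labels(samples)
```

Both "alice" images receive the same label, which is what lets the supervised loss pull their identity-related features together.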
Fig. 2 shows a flow chart of training a decoding network in a neural network training method according to an embodiment of the present disclosure. As shown in fig. 2, the step of training the decoding network may include:
step S210, inputting the plurality of first feature data into the decoding network respectively for processing, and obtaining a plurality of predicted images.
In the embodiment of the present disclosure, the decoding network may be trained by using the first feature data as an input and the third training image as a learning target.
Wherein the first feature data may be obtained from the pre-stored feature set. In the training process, the first feature data are the input data of the decoding network, the predicted images are the output data of the decoding network, and each first feature data has one predicted image corresponding to it.
In a possible implementation manner, the process of training the decoding network may be divided into a plurality of sub-processes, the number of the first feature data input to the decoding network in each sub-process may be preset, and a plurality of predicted images obtained based on the plurality of first feature data in each sub-process may be processed in batch.
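The division of training into fixed-size sub-processes might look like the following minimal sketch (the helper name and batch size are assumptions):

```python
def make_batches(feature_set, batch_size):
    """Split the stored feature set into consecutive sub-process batches
    of a preset size, as described above; the last batch may be smaller."""
    return [feature_set[i:i + batch_size]
            for i in range(0, len(feature_set), batch_size)]

features = [f"feat_{i}" for i in range(10)]  # stand-ins for first feature data
batches = make_batches(features, 4)          # sub-processes of size 4, 4, 2
```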
Step S220, determining a first loss of the decoding network according to the plurality of predicted images and the plurality of third training images.
Wherein the first loss represents the difference between the predicted images and the third training images: the larger the difference, the worse the performance of the decoding network; the smaller the difference, the better.
In one possible implementation, the first loss of the decoding network may be calculated by an L1 or L2 loss function. The present disclosure does not limit the specific type of the first loss.
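The two loss options mentioned above can be sketched as follows (a pure-Python illustration over flattened pixel lists, not the disclosed implementation):

```python
def l1_loss(pred, target):
    """Mean absolute difference between predicted and target pixels."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def l2_loss(pred, target):
    """Mean squared difference between predicted and target pixels."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

pred = [0.0, 0.5, 1.0]    # toy predicted-image pixels
target = [0.0, 1.0, 1.0]  # toy third-training-image pixels
```

Both losses shrink to zero as the predicted image approaches the target, matching the "smaller gap, better decoder" reading above; L2 penalizes large pixel errors more strongly than L1.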
And step S230, training a decoding network according to the first loss.
In one possible implementation, the process of training the decoding network is a process of continuously adjusting parameters of the decoding network. After the determination of the first loss of the decoding network, it may be determined whether the first loss satisfies a training condition, and when the first loss does not satisfy the training condition, a parameter of the decoding network may be adjusted according to the first loss.
In a possible implementation manner, in each subprocess included in the training of the decoding network, the process from step S210 to step S230 is performed once, and each execution may update the parameters in the decoding network until the first loss meets the training condition, and the decoding network completes the training.
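The repeated S210-S230 cycle can be illustrated with a toy linear decoder trained by gradient descent on an L2 loss (all dimensions, data, and the learning rate are illustrative assumptions; a real decoding network would be a deep model rather than a single matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 16-dim "first feature data" and 8-pixel "third training images"
# generated from a fixed unknown linear map (a stand-in for real faces).
W_true = rng.normal(size=(8, 16))
feats = rng.normal(size=(200, 16))
images = feats @ W_true.T

W = np.zeros((8, 16))                       # decoder parameters to learn
lr = 0.2
initial_loss = float(np.mean(images ** 2))  # loss of the untrained decoder
for _ in range(300):
    pred = feats @ W.T                      # S210: decode features into predicted images
    err = pred - images                     # S220: gap measured by the first loss
    grad = err.T @ feats / len(feats)       # gradient of the mean L2 loss (up to a constant)
    W -= lr * grad                          # S230: adjust decoder parameters

final_loss = float(np.mean((feats @ W.T - images) ** 2))
```

Each pass through the loop is one sub-process iteration; training stops once the first loss satisfies the training condition (here, once it has shrunk far below its initial value).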
In one possible implementation, the first feature data further comprises an identity independent feature. The identity-independent features are used to represent object identity-independent feature information in the first training image corresponding to the first feature data.
In one possible implementation, the first training image and the third training image include face images, and the identity-related features include facial features such as skin color, nose bridge, eyebrows, cheekbones, chin, lips, eyes, auricle, and face shape. In contrast, the identity-independent features express feature information unrelated to the identity of the object in the third training image: for example, hair style, makeup, and whether glasses are worn, as well as the state of the object and the background of the image. Through the identity-independent features, face ornaments, image background, and similar information in the third training image can be better represented.
In a possible implementation manner, the first feature data used in the incremental training process for the face recognition network, the process for extracting the first feature data based on the coding network, and the process for training the decoding network may include an identity-independent feature, so as to obtain a better processing result.
As an example, fig. 3 is a schematic diagram illustrating an application scenario of feature decoding in a neural network training method according to an embodiment of the present disclosure. As shown in fig. 3, when the first feature data includes identity-related features and identity-independent features, the feature set 300 may include identity-related features 301 and identity-independent features 302 of a plurality of first training images. In this feature-decoding scenario, the identity-related feature 301 and the identity-independent feature 302 of the same first training image may be simultaneously input into the decoding network 303; the decoding network 303 obtains the first training image 305 from them and stores it into the face image set 304.
In a possible implementation, the coding network comprises a first coding subnetwork and a second coding subnetwork, and the identity-related vector and the identity-independent vector can be extracted separately by the two subnetworks comprised by the coding network. In this implementation manner, the inputting the plurality of third training images into the pre-trained coding network respectively for processing to obtain the plurality of first feature data includes: inputting the third training image into a first coding sub-network for processing to obtain identity-related features of the first feature data; and inputting the third training image into a second coding sub-network for processing to obtain the identity-independent feature of the first feature data.
In this implementation, a first coding sub-network is used to extract identity-related features and a second coding sub-network is used to extract identity-independent features. For a first training image (or a third training image), there is one identity-related feature corresponding to it and one identity-independent feature corresponding to it.
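A minimal sketch of the two-sub-network split, with linear maps standing in for the coding sub-networks (all names and dimensions are assumptions, not part of the disclosure):

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.normal(size=12)        # flattened third training image (toy size)

W_id = rng.normal(size=(4, 12))    # stand-in for the first coding sub-network
W_other = rng.normal(size=(3, 12)) # stand-in for the second coding sub-network

identity_related = W_id @ image       # e.g. facial-geometry information
identity_independent = W_other @ image  # e.g. hairstyle / background information

# The decoding network consumes both parts of the first feature data,
# so the two vectors are kept together (here simply concatenated).
first_feature_data = np.concatenate([identity_related, identity_independent])
```

Each image thus yields exactly one identity-related vector and one identity-independent vector, matching the one-to-one correspondence stated above.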
Fig. 4 is a schematic diagram of an application scenario for extracting first feature data in a neural network training method according to an embodiment of the present disclosure. As shown in fig. 4, in the process of extracting the identity-related and identity-independent vectors with the two sub-networks included in the coding network, a third training image 401 is first randomly extracted from the training image set 400 as a training sample and input into the first coding sub-network 402 and the second coding sub-network 403, respectively. The first coding sub-network 402 extracts the identity-related feature 301 based on the third training image 401, and the second coding sub-network 403 extracts the identity-independent feature 302 based on the third training image 401. A mapping relationship is then established between the identity-related feature 301 and the identity-independent feature 302 of the same training image, and this mapping relationship is stored in the feature set 300.
FIG. 5 shows a flow diagram for training a first coding subnetwork, a second coding subnetwork, and a decoding network in a neural network training method according to an embodiment of the disclosure. As shown in fig. 5, the method includes:
step S510, training the first coding subnetwork based on the labeled third training images.
As an example, the loss function used in training the first coding subnetwork may employ a Softmax loss function (normalized exponential function) and a cross-entropy loss function.
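The Softmax and cross-entropy combination mentioned above can be sketched as follows (a numerically stable NumPy illustration; the logits are made-up identity scores):

```python
import numpy as np

def softmax(logits):
    """Normalized exponential over identity classes (stable form)."""
    z = logits - logits.max()  # subtract max to avoid overflow in exp
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(logits, label):
    """Cross-entropy between the softmax output and the identity label."""
    return -float(np.log(softmax(logits)[label]))

logits = np.array([2.0, 0.5, -1.0])  # scores for three hypothetical identities
loss = cross_entropy(logits, label=0)
```

The loss is small when the true identity receives the highest score and grows sharply when the network favors a wrong identity, which is what drives the first coding sub-network toward discriminative identity-related features.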
The operation described in step S510 of this implementation may refer to the procedure of the first coding sub-network shown in fig. 6.
Fig. 6 is a schematic diagram illustrating an application scenario for training a first coding subnetwork in a neural network training method according to an embodiment of the present disclosure. In the usage scenario shown in fig. 6, a third training image 401 is first randomly extracted from a training image set (not shown) as a training sample, and each third training image 401 is input into the first coding sub-network 402, where the first coding sub-network 402 extracts the identity-related features 301 based on the third training image 401, until the feature extraction operation has been completed for all training images in the training image set.
The third training image may be a face image obtained from a training image set. In one possible implementation, the neural network training method may train the first coding subnetwork on a plurality of labeled third training images from the training image set, updating the parameters of the first coding subnetwork.
In one possible implementation, the present disclosure does not limit the network structure of the coding network, and developers may make a selection according to specific usage scenarios and performance requirements.
step S520, training the second coding subnetwork based on the third training images.
step S530, training the decoding network based on the plurality of third training images, the plurality of identity-related features, and the plurality of identity-independent features.
In one possible implementation, the parameters of the first coding subnetwork no longer change after its training is completed. The second coding subnetwork and the decoding network are then trained based on the third training images and the identity-related features extracted by the trained first coding subnetwork. The identity-related features of the first feature data can be extracted by the trained first coding subnetwork, or can be obtained directly from the feature set.
In one possible implementation, during the training of the second coding subnetwork and the decoding network, the parameters in both the second coding subnetwork and the decoding network are updated until the first loss meets the training condition, at which point the second coding subnetwork and the decoding network complete training.
Fig. 7 is a schematic diagram illustrating an application scenario of training a second coding sub-network and a decoding network in a neural network training method according to an embodiment of the present disclosure, and the present implementation may describe the training process of the second coding sub-network and the decoding network through the usage scenario illustrated in fig. 7.
As shown in fig. 7, a third training image 401 is first randomly extracted from a training image set (not shown) as a training sample and input into the first coding subnetwork 402 and the second coding subnetwork 403, respectively. The identity-related feature 301 is extracted by the first coding subnetwork 402, and the identity-independent feature 302 is extracted by the second coding subnetwork 403. The identity-related feature 301 and the identity-independent feature 302 are then input into the decoding network 303 together and processed by the decoding network 303 to obtain a predicted image 701. A first loss of the decoding network is determined according to the plurality of predicted images 701 and the plurality of third training images 401, and the second coding subnetwork and the decoding network are trained according to the first loss. For the first loss and the predicted image in this implementation, refer to the description in the above implementation.
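As a hedged sketch of how the first loss comparing predicted images with original training images might be computed, the example below uses a pixel-wise mean squared error; the patent does not specify the exact form of the loss, and the flat pixel lists stand in for real images.

```python
def reconstruction_loss(predicted_images, training_images):
    # Mean squared error over all pixels of all image pairs (a common
    # choice for a decoder reconstruction loss; the patent does not fix one).
    total, count = 0.0, 0
    for pred, orig in zip(predicted_images, training_images):
        for p, o in zip(pred, orig):
            total += (p - o) ** 2
            count += 1
    return total / count

# Toy "images" represented as flat pixel lists.
pred = [[0.1, 0.2, 0.3]]
orig = [[0.1, 0.2, 0.3]]
loss = reconstruction_loss(pred, orig)  # identical images -> 0.0
```

Training then updates the second coding subnetwork and the decoding network to drive this loss down, i.e. to make the predicted images match the third training images.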
In a possible implementation, the network structures of the first coding subnetwork, the second coding subnetwork, and the decoding network are not limited, and developers may make a selection according to the specific use scenario and performance requirements. For example, the first coding subnetwork may use a ResNet101 structure whose input is a face image and whose output is a 256-dimensional feature vector, and the second coding subnetwork may use a VGG16 structure whose input is a face image and whose output is also a 256-dimensional feature vector (in general, a longer output vector gives a better coding effect; 256 dimensions are taken here as an example). The network structure of the decoding network can be an inverted VGG16 structure (i.e., the input and output directions of the VGG16 structure are reversed, the original max-pooling layers are all replaced by upsampling layers, and the input and output dimension matching between adjacent layers is adjusted accordingly). In this case, the input of the decoding network may be a 512-dimensional feature vector, and the output may be a face image.
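A minimal sketch of the dimension bookkeeping described above: the two 256-dimensional feature vectors are concatenated into the 512-dimensional decoder input. The vector contents are placeholders; only the shapes follow the example in the text.

```python
DIM = 256  # output dimension of each coding subnetwork (example from the text)

identity_related = [0.0] * DIM      # from the first coding subnetwork
identity_independent = [0.0] * DIM  # from the second coding subnetwork

# The decoding network consumes the concatenation of the two vectors,
# which yields the 512-dimensional input mentioned in the text.
decoder_input = identity_related + identity_independent
```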
In one possible implementation, the second feature data of the newly added second training images may be stored in the feature set to increase the amount of feature data in the feature set.
In one possible implementation, the feature set is deployed at a first server; the decoding network is deployed at a second server.
The first server and the second server described in this implementation represent different servers. Because the decoding network can process the first feature data to recover face images, in order to ensure that the face images are not leaked, this implementation may deploy the feature set comprising the plurality of first feature data and the decoding network on different servers, improving the security of the face images. Moreover, compared with deploying the first training images themselves, deploying the first feature data corresponding to the first training images greatly compresses the required storage space.
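The storage saving claimed above can be made concrete with back-of-the-envelope arithmetic; the image resolution and float width below are assumptions for illustration, not values from the patent.

```python
# Assumed sizes, for illustration only.
IMAGE_BYTES = 112 * 112 * 3  # one 112x112 RGB face crop, 1 byte per channel
FEATURE_BYTES = 2 * 256 * 4  # identity-related + identity-independent vectors,
                             # 256 float32 values (4 bytes) each

compression_ratio = IMAGE_BYTES / FEATURE_BYTES  # ~18x under these assumptions
```

Under these assumptions one image costs about 37 KB while its feature data costs 2 KB, so deploying features instead of images shrinks storage by roughly an order of magnitude.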
In one possible implementation, the feature set is deployed on the first server in an encrypted manner, and the decoding network is deployed on the second server in an encrypted manner. To further ensure the security of the face images, the feature set comprising the plurality of first feature data and the decoding network can be encrypted after being deployed on different servers. As examples, the feature set and the decoding network may be protected by an asymmetric encryption algorithm, a hash algorithm, a message digest algorithm, or a digital signature algorithm.
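The paragraph above lists several protection schemes without fixing one. As a hedged, stdlib-only sketch, the example below shows only the message-digest idea: a SHA-256 digest of a serialized feature set lets a server detect tampering. It is not encryption, and the serialization format is invented for the example.

```python
import hashlib
import json

def feature_set_digest(feature_set):
    # Serialize deterministically, then hash; any modification to the
    # stored feature data changes the digest.
    payload = json.dumps(feature_set, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

features = {"img_001": [0.12, -0.53, 0.88]}  # hypothetical stored features
original = feature_set_digest(features)

features["img_001"][0] = 0.99  # simulated tampering
assert feature_set_digest(features) != original
```

Confidentiality of the feature set or the decoding network would additionally require an actual encryption scheme, such as the asymmetric algorithms the text mentions.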
In a possible implementation manner, the neural network training method is well suited to the following application scenario. City A deploys a face recognition system developed by company B (hereinafter referred to as system C), which captures and compares a large number of face images every day. The face recognition algorithm (face recognition network) used in system C was trained and optimized only on the internal data (face image set) of company B, so its recognition accuracy in the scenes of city A is low. For data security, company B is unwilling to deploy its internal data into system C, and city A is likewise unwilling to copy the data collected by system C to the internal servers of company B. In this case, company B may encode its internal data using the neural network training method and deploy the encoded feature vectors (identity-related features and identity-independent features) and the decoder (decoding network) to system C. Both the feature vectors and the decoder may be encrypted, and obtaining either one alone does not lead to data leakage. During incremental training, the original image data (first training images) are restored through the decoder (decoding network) and used for incremental training. The accuracy of the resulting face recognition model in the current scene is greatly improved, and the accuracy in other scenes is maintained or even slightly improved.
In the application scenario described in the disclosed embodiment, the neural network training method improves the performance of the face recognition network while protecting the data privacy of the internal data. Meanwhile, when the face recognition network is incrementally trained, only the feature vectors corresponding to the internal data need to be deployed, which greatly compresses the storage space; the feature vectors can also be stored as a matrix file, improving the efficiency of managing them.
Fig. 8 shows a flowchart of an image recognition method according to an embodiment of the present disclosure. As shown in fig. 8, the image recognition method includes:
step S810, an image is acquired.
Step S820, inputting the acquired image into the face recognition network after the incremental training for processing, and acquiring face recognition data;
wherein the incremental training of the face recognition network comprises: respectively inputting a plurality of first feature data included in a pre-stored feature set into a pre-trained decoding network for processing to obtain a plurality of first training images, wherein the first feature data at least include identity-related features which are used for representing identity information of an object in the first training image corresponding to the first feature data;
and performing incremental training on the face recognition network based on the plurality of first training images and the plurality of newly added second training images.
The image recognition method may be an application process of the face recognition network after incremental training. The image acquired in this process may be any face image, and the face recognition data obtained correspondingly may be a recognition result of the face image; for example, the face recognition data may be the age, name, and other data of the object in the face image.
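The overall data flow of the incremental training referred to in steps S810 and S820 can be sketched as follows. `decode` is a stand-in for the pre-trained decoding network, which in reality is a neural network rather than the placeholder used here, and the data values are invented for the example.

```python
def decode(feature):
    # Placeholder for the pre-trained decoding network: in the method,
    # it maps stored first feature data back to a first training image.
    return {"image_from_feature": feature}

def build_incremental_training_set(feature_set, second_training_images):
    # 1. Restore the first training images from the stored feature data.
    first_training_images = [decode(f) for f in feature_set]
    # 2. Merge them with the newly added second training images; the face
    #    recognition network is then incrementally trained on the union.
    return first_training_images + second_training_images

stored_features = [[0.1] * 4, [0.2] * 4]  # toy stand-ins for 512-dim vectors
new_images = [{"image": "captured_1"}]    # toy stand-in for new face images
train_set = build_incremental_training_set(stored_features, new_images)
```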
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from principle and logic; due to space limitations, details are not repeated in the present disclosure.
It will be understood by those skilled in the art that, in the above methods, the order in which the steps are written does not imply a strict execution order or any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
Fig. 9 illustrates a block diagram of a neural network training device, in accordance with an embodiment of the present disclosure. As shown in fig. 9, the neural network training device includes a first image acquisition module 901 and an incremental training module 902.
The first image obtaining module 901 is configured to input a plurality of first feature data included in a pre-stored feature set into a pre-trained decoding network, respectively, and process the plurality of first feature data to obtain a plurality of first training images, where the first feature data at least includes an identity-related feature, and the identity-related feature is used to represent identity information of an object in the first training image corresponding to the first feature data;
an incremental training module 902, configured to perform incremental training on the face recognition network based on the multiple first training images and the multiple newly added second training images.
In one possible implementation, the apparatus further includes: a first acquisition module, configured to respectively input a plurality of third training images into a pre-trained coding network for processing to obtain the plurality of first feature data.
In one possible implementation manner, the decoding network training module is configured to train the decoding network based on the plurality of third training images and the plurality of first feature data, where the decoding network training module includes: a predicted image acquisition sub-module, configured to input the plurality of first feature data into the decoding network for processing to obtain a plurality of predicted images; a first loss determining sub-module, configured to determine a first loss of the decoding network according to the plurality of predicted images and the plurality of third training images; and a training sub-module, configured to train the decoding network according to the first loss.
In one possible implementation, the coding network comprises a first coding subnetwork and a second coding subnetwork, and each first feature data further comprises an identity-independent feature, which is used to represent feature information unrelated to the identity of the object in the first training image corresponding to the first feature data,
wherein the first acquisition module comprises: a first feature data acquisition sub-module, configured to input the third training images into the first coding subnetwork for processing to obtain the identity-related features of the first feature data; and a second feature data acquisition sub-module, configured to input the third training images into the second coding subnetwork for processing to obtain the identity-independent features of the first feature data.
In one possible implementation, the first training module is configured to train the first coding subnetwork based on a plurality of labeled third training images.
In one possible implementation, the apparatus further includes: a second training module to train the second coding subnetwork based on a plurality of third training images; a third training module to train the decoding network based on a plurality of third training images, a plurality of identity-related features, and a plurality of identity-independent features.
Fig. 10 illustrates a block diagram of an image recognition apparatus according to an embodiment of the present disclosure. As shown in fig. 10, the image recognition apparatus includes a second image acquisition module 1001 and a face recognition data acquisition module 1002.
The second image acquisition module 1001 is configured to acquire an image;
the face recognition data acquisition module 1002 is configured to input the acquired image into the face recognition network after incremental training for processing, and acquire face recognition data.
incremental training of the face recognition network includes:
respectively inputting a plurality of first feature data included in a pre-stored feature set into a pre-trained decoding network for processing to obtain a plurality of first training images, wherein the first feature data at least include identity-related features which are used for representing identity information of an object in the first training image corresponding to the first feature data;
and performing incremental training on the face recognition network based on the plurality of first training images and the plurality of newly added second training images.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments. For specific implementation, reference may be made to the description of the above method embodiments; for brevity, details are not described here again.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
Fig. 11 is a block diagram illustrating an electronic device 1100 in accordance with an example embodiment. For example, the electronic device 1100 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another similar terminal.
Referring to fig. 11, the electronic device 1100 may include one or more of the following components: a processing component 1102, a memory 1104, a power component 1106, a multimedia component 1108, an audio component 1110, an input/output (I/O) interface 1112, a sensor component 1114, and a communication component 1116.
The processing component 1102 generally controls the overall operation of the electronic device 1100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1102 may include one or more processors 1120 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 1102 may include one or more modules that facilitate interaction between the processing component 1102 and other components. For example, the processing component 1102 may include a multimedia module to facilitate interaction between the multimedia component 1108 and the processing component 1102.
The memory 1104 is configured to store various types of data to support operations at the electronic device 1100. Examples of such data include instructions for any application or method operating on the electronic device 1100, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1104 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1106 provides power to the various components of the electronic device 1100. The power components 1106 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 1100.
The multimedia component 1108 includes a screen that provides an output interface between the electronic device 1100 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1108 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 1100 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 1110 is configured to output and/or input audio signals. For example, the audio component 1110 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 1100 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1104 or transmitted via the communication component 1116. In some embodiments, the audio assembly 1110 further includes a speaker for outputting audio signals.
The I/O interface 1112 provides an interface between the processing component 1102 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1114 includes one or more sensors for providing various aspects of state assessment for the electronic device 1100. For example, the sensor assembly 1114 may detect an open/closed state of the electronic device 1100, the relative positioning of components, such as a display and keypad of the electronic device 1100, the sensor assembly 1114 may also detect a change in the position of the electronic device 1100 or a component of the electronic device 1100, the presence or absence of user contact with the electronic device 1100, orientation or acceleration/deceleration of the electronic device 1100, and a change in the temperature of the electronic device 1100. The sensor assembly 1114 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1114 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1114 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1116 is configured to facilitate wired or wireless communication between the electronic device 1100 and other devices. The electronic device 1100 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1116 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1116 also includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1100 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 1104, is also provided that includes computer program instructions executable by the processor 1120 of the electronic device 1100 to perform the above-described method.
Fig. 12 is a block diagram illustrating an electronic device 1200 in accordance with an example embodiment. For example, the electronic device 1200 may be provided as a server. Referring to fig. 12, electronic device 1200 includes a processing component 1222 that further includes one or more processors, and memory resources, represented by memory 1232, for storing instructions, such as application programs, that are executable by processing component 1222. The application programs stored in memory 1232 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1222 is configured to execute instructions to perform the above-described methods.
The electronic device 1200 may also include a power supply component 1226 configured to perform power management of the electronic device 1200, a wired or wireless network interface 1250 configured to connect the electronic device 1200 to a network, and an input/output (I/O) interface 1258. The electronic device 1200 may operate based on an operating system stored in the memory 1232, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1232, is also provided that includes computer program instructions executable by the processing component 1222 of the electronic device 1200 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA) may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry may execute the computer-readable program instructions so as to implement various aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (18)

1. A neural network training method, comprising:
respectively inputting a plurality of first feature data included in a pre-stored feature set into a pre-trained decoding network for processing, to obtain a plurality of first training images, wherein the first feature data at least comprise identity-related features representing identity information of an object in the first training image corresponding to the first feature data, and further comprise identity-independent features representing feature information unrelated to the identity of the object in the corresponding first training image;
incrementally training a face recognition network based on the plurality of first training images and a plurality of newly added second training images,
wherein the feature set is deployed at a first server; the decoding network is deployed at a second server.
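The two steps of claim 1 can be sketched as follows. This is a minimal, illustrative sketch only: the decoding network is stood in for by a fixed linear map, and all names (`decode`, `feature_set`, the image shapes) are assumptions, not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pre-trained decoding network": a fixed linear map from a
# stored feature vector (identity-related + identity-independent parts)
# to a small grayscale image. Purely illustrative.
FEAT_DIM, IMG_SHAPE = 128, (8, 8)
W = rng.standard_normal((FEAT_DIM, IMG_SHAPE[0] * IMG_SHAPE[1]))

def decode(feature):
    """Map one first-feature-data vector to a first training image."""
    return (feature @ W).reshape(IMG_SHAPE)

# Pre-stored feature set (the "first feature data") on the first server.
feature_set = rng.standard_normal((5, FEAT_DIM))

# Step 1: decode each stored feature into a first training image.
first_training_images = [decode(f) for f in feature_set]

# Step 2: merge the decoded images with newly added second training
# images into one batch for incremental training of the recognizer.
second_training_images = [rng.standard_normal(IMG_SHAPE) for _ in range(3)]
batch = np.stack(first_training_images + second_training_images)
print(batch.shape)  # (8, 8, 8): 5 decoded + 3 new images of 8x8
```

Because only feature vectors (not raw face images) are kept in storage, this arrangement lets the old data contribute to incremental training without retaining the original images.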
2. The method of claim 1, further comprising:
inputting a plurality of third training images respectively into a pre-trained coding network for processing, to obtain the plurality of first feature data.
3. The method of claim 2, further comprising: training the decoding network based on the plurality of third training images and the plurality of first feature data,
wherein training the decoding network comprises:
inputting the plurality of first feature data into the decoding network for processing to obtain a plurality of predicted images;
determining a first loss of the decoding network according to the plurality of predicted images and the plurality of third training images;
training the decoding network according to the first loss.
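The decoder-training loop of claims 2 and 3 (encode the third training images, decode them back, compute the first loss, update the decoder) can be sketched as below. The coding and decoding networks are stood in for by linear maps, and the plain gradient-descent update is an assumption for illustration, not the patented training procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

IMG_DIM, FEAT_DIM = 64, 16
E = rng.standard_normal((IMG_DIM, FEAT_DIM)) * 0.1  # stand-in coding network
D = rng.standard_normal((FEAT_DIM, IMG_DIM)) * 0.1  # stand-in decoding network

# Third training images (flattened, illustrative data).
third_training_images = rng.standard_normal((10, IMG_DIM))

# Encode into first feature data, then decode into predicted images.
first_feature_data = third_training_images @ E
predicted_images = first_feature_data @ D

# First loss: mean squared reconstruction error between the predicted
# images and the third training images.
first_loss = np.mean((predicted_images - third_training_images) ** 2)

# Train the decoding network: one gradient-descent step on D.
grad_D = (2.0 / predicted_images.size) * first_feature_data.T @ (
    predicted_images - third_training_images)
D -= 0.1 * grad_D

new_loss = np.mean((first_feature_data @ D - third_training_images) ** 2)
assert new_loss < first_loss  # the step reduces the first loss
```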
4. The method of claim 2 or 3, wherein the coding network comprises a first coding sub-network and a second coding sub-network,
wherein inputting the plurality of third training images respectively into the pre-trained coding network for processing to obtain the plurality of first feature data comprises:
inputting the third training image into the first coding sub-network for processing to obtain the identity-related features of the first feature data;
inputting the third training image into the second coding sub-network for processing to obtain the identity-independent features of the first feature data.
5. The method of claim 4, wherein the training of the first coding sub-network comprises:
training the first coding sub-network based on a plurality of labeled third training images.
6. The method of claim 5, further comprising:
training the second coding sub-network based on the plurality of third training images; and
training the decoding network based on the plurality of third training images, a plurality of identity-related features, and a plurality of identity-independent features.
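Claims 4 through 6 split the coding network into two sub-networks whose outputs together form the first feature data. A minimal sketch, with both sub-networks stood in by random linear maps (all dimensions and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

IMG_DIM, ID_DIM, NID_DIM = 64, 8, 8

# Stand-ins for the two coding sub-networks.
E_id = rng.standard_normal((IMG_DIM, ID_DIM))    # first sub-network: identity-related
E_nid = rng.standard_normal((IMG_DIM, NID_DIM))  # second sub-network: identity-independent

def encode(image):
    """Produce one first-feature-data vector from a third training image."""
    identity_related = image @ E_id        # carries the object's identity
    identity_independent = image @ E_nid   # carries pose, lighting, etc.
    return np.concatenate([identity_related, identity_independent])

third_training_image = rng.standard_normal(IMG_DIM)
first_feature = encode(third_training_image)
print(first_feature.shape)  # (16,)
```

In the claimed scheme the first sub-network would be trained with identity labels (claim 5), while the second sub-network and the decoding network are trained from the images and both feature parts (claim 6).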
7. The method of claim 1, wherein the feature set is deployed on the first server in encrypted form, and the decoding network is deployed at the second server in encrypted form.
8. An image recognition method, comprising:
acquiring an image;
inputting the acquired image into an incrementally trained face recognition network for processing to obtain face recognition data;
wherein the incremental training of the face recognition network comprises:
respectively inputting a plurality of first feature data included in a pre-stored feature set into a pre-trained decoding network for processing, to obtain a plurality of first training images, wherein the first feature data at least comprise identity-related features representing identity information of an object in the first training image corresponding to the first feature data, and further comprise identity-independent features representing feature information unrelated to the identity of the object in the corresponding first training image;
incrementally training a face recognition network based on the plurality of first training images and a plurality of newly added second training images,
wherein the feature set is deployed at a first server; the decoding network is deployed at a second server.
9. A neural network training device, comprising:
a first image acquisition module for respectively inputting a plurality of first feature data included in a pre-stored feature set into a pre-trained decoding network for processing to obtain a plurality of first training images, wherein the first feature data at least comprise identity-related features representing identity information of an object in the first training image corresponding to the first feature data, and further comprise identity-independent features representing feature information unrelated to the identity of the object in the corresponding first training image;
an incremental training module for incrementally training a face recognition network based on the plurality of first training images and a plurality of newly added second training images,
wherein the feature set is deployed at a first server; the decoding network is deployed at a second server.
10. The apparatus of claim 9, further comprising:
a first acquisition module for respectively inputting a plurality of third training images into a pre-trained coding network for processing to obtain the plurality of first feature data.
11. The apparatus of claim 10, further comprising: a decoding network training module for training the decoding network based on the plurality of third training images and the plurality of first feature data,
wherein, the decoding network training module comprises:
a predicted image obtaining sub-module for inputting the plurality of first feature data into the decoding network for processing to obtain a plurality of predicted images;
a first loss determining sub-module for determining a first loss of the decoding network according to the plurality of predicted images and the plurality of third training images;
a training sub-module for training the decoding network according to the first loss.
12. The apparatus of claim 10 or 11, wherein the coding network comprises a first coding sub-network and a second coding sub-network,
wherein the first obtaining module comprises:
a first feature data acquisition sub-module for inputting the third training image into the first coding sub-network for processing to obtain the identity-related features of the first feature data;
a second feature data acquisition sub-module for inputting the third training image into the second coding sub-network for processing to obtain the identity-independent features of the first feature data.
13. The apparatus of claim 12, further comprising:
a first training module for training the first coding sub-network based on a plurality of labeled third training images.
14. The apparatus of claim 13, further comprising:
a second training module for training the second coding sub-network based on the plurality of third training images;
a third training module for training the decoding network based on the plurality of third training images, a plurality of identity-related features, and a plurality of identity-independent features.
15. The apparatus of claim 9, wherein the feature set is deployed on the first server in encrypted form, and the decoding network is deployed at the second server in encrypted form.
16. An image recognition apparatus, comprising:
a second image acquisition module for acquiring an image;
a face recognition data acquisition module for inputting the acquired image into an incrementally trained face recognition network for processing to obtain face recognition data;
wherein the incremental training of the face recognition network comprises:
respectively inputting a plurality of first feature data included in a pre-stored feature set into a pre-trained decoding network for processing, to obtain a plurality of first training images, wherein the first feature data at least comprise identity-related features representing identity information of an object in the first training image corresponding to the first feature data, and further comprise identity-independent features representing feature information unrelated to the identity of the object in the corresponding first training image;
incrementally training a face recognition network based on the plurality of first training images and a plurality of newly added second training images,
wherein the feature set is deployed at a first server; the decoding network is deployed at a second server.
17. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: performing the method of any one of claims 1 to 8.
18. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 8.
CN201811573466.5A 2018-12-21 2018-12-21 Neural network training method and device, electronic equipment and storage medium Active CN109711546B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811573466.5A CN109711546B (en) 2018-12-21 2018-12-21 Neural network training method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109711546A CN109711546A (en) 2019-05-03
CN109711546B true CN109711546B (en) 2021-04-06

Family

ID=66255946

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811573466.5A Active CN109711546B (en) 2018-12-21 2018-12-21 Neural network training method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109711546B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287346B (en) * 2019-06-28 2021-11-30 深圳云天励飞技术有限公司 Data storage method, device, server and storage medium
CN113159288B (en) * 2019-12-09 2022-06-28 支付宝(杭州)信息技术有限公司 Coding model training method and device for preventing private data leakage
CN112990473B (en) * 2019-12-12 2024-02-02 杭州海康威视数字技术股份有限公司 Model training method, device and system
CN111275055B (en) * 2020-01-21 2023-06-06 北京市商汤科技开发有限公司 Network training method and device, and image processing method and device
CN112565777B (en) * 2020-11-30 2023-04-07 通号智慧城市研究设计院有限公司 Deep learning model-based video data transmission method, system, medium and device
WO2024007281A1 (en) * 2022-07-08 2024-01-11 Qualcomm Incorporated Offline multi-vendor training for cross-node machine learning

Citations (6)

Publication number Priority date Publication date Assignee Title
CN101414351A (en) * 2008-11-03 2009-04-22 章毅 Fingerprint recognition system and control method
CN106408825A (en) * 2016-12-03 2017-02-15 上海腾盛智能安全科技股份有限公司 Home safety monitoring system and method
CN106656506A (en) * 2016-11-18 2017-05-10 哈尔滨工程大学 Finger vein encryption method
CN108446680A (en) * 2018-05-07 2018-08-24 西安电子科技大学 A kind of method for secret protection in face authentication system based on edge calculations
CN108765261A (en) * 2018-04-13 2018-11-06 北京市商汤科技开发有限公司 Image conversion method and device, electronic equipment, computer storage media, program
CN108805258A (en) * 2018-05-23 2018-11-13 北京图森未来科技有限公司 A kind of neural network training method and its device, computer server

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN104778448B (en) * 2015-03-24 2017-12-15 孙建德 A kind of face identification method based on structure adaptive convolutional neural networks
CN106485235B (en) * 2016-10-24 2019-05-03 厦门美图之家科技有限公司 A kind of convolutional neural networks generation method, age recognition methods and relevant apparatus
CN106934364A (en) * 2017-03-09 2017-07-07 腾讯科技(上海)有限公司 The recognition methods of face picture and device
CN107292298B (en) * 2017-08-09 2018-04-20 北方民族大学 Ox face recognition method based on convolutional neural networks and sorter model
CN107545277B (en) * 2017-08-11 2023-07-11 腾讯科技(上海)有限公司 Model training, identity verification method and device, storage medium and computer equipment
CN108304846B (en) * 2017-09-11 2021-10-22 腾讯科技(深圳)有限公司 Image recognition method, device and storage medium
CN108133238B (en) * 2017-12-29 2020-05-19 国信优易数据有限公司 Face recognition model training method and device and face recognition method and device
CN108537135A (en) * 2018-03-16 2018-09-14 北京市商汤科技开发有限公司 The training method and device of Object identifying and Object identifying network, electronic equipment
CN108932299A (en) * 2018-06-07 2018-12-04 北京迈格威科技有限公司 The method and device being updated for the model to inline system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant