CN111553320B - Feature extraction method for protecting personal data privacy, model training method and hardware - Google Patents

Feature extraction method for protecting personal data privacy, model training method and hardware

Info

Publication number
CN111553320B
Authority
CN
China
Prior art keywords
image sequence
sample object
frame image
data
encryption
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010409389.0A
Other languages
Chinese (zh)
Other versions
CN111553320A (en)
Inventor
杨成平
赵凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010409389.0A priority Critical patent/CN111553320B/en
Priority to CN202210067114.2A priority patent/CN114419712A/en
Publication of CN111553320A publication Critical patent/CN111553320A/en
Priority to PCT/CN2021/093367 priority patent/WO2021228148A1/en
Application granted granted Critical
Publication of CN111553320B publication Critical patent/CN111553320B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/20 Ensemble learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/44 Secrecy systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the specification provides a feature extraction method, a model training method and hardware for protecting personal data privacy. The feature extraction method comprises the following steps: a sequence of multi-frame images presenting a sample object is acquired. And performing feature representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption feature data of the multi-frame image sequence corresponding to the sample object, wherein the feature data of the sample object presented by the multi-frame image sequence belongs to the personal data of the sample object. And performing ensemble learning on the initial encrypted feature data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object, wherein the target encrypted feature data can be used as model training data.

Description

Feature extraction method for protecting personal data privacy, model training method and hardware
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a feature extraction method, a model training method, and hardware for protecting personal data privacy.
Background
Deep learning models have become increasingly popular by virtue of their powerful information processing ability. Face recognition is a common business application in the field of deep learning. The principle of face recognition is to approximately match the face features of a user to be recognized against sample face features based on a deep learning model, thereby determining the identity of the user to be recognized. Clearly, the sample face features, which belong to personal data, are exposed to a risk of leakage once they are retained in the backend, and privacy cannot be effectively protected.
In view of the above, there is an urgent need for a technical solution capable of protecting personal data privacy in the deep learning field.
Disclosure of Invention
The embodiment of the specification aims to provide a feature extraction method, a model training method and hardware for protecting personal data privacy, and the personal data privacy can be protected in the deep learning field.
In order to achieve the above object, the embodiments of the present specification are implemented as follows:
in a first aspect, a feature extraction method for protecting personal data privacy is provided, including:
acquiring a multi-frame image sequence presenting a sample object;
performing feature representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption feature data of the multi-frame image sequence corresponding to the sample object, wherein the feature data of the sample object presented by the multi-frame image sequence belongs to personal data of the sample object;
and performing ensemble learning on the initial encrypted feature data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object.
In a second aspect, a model training method for protecting personal data privacy is provided, including:
acquiring a multi-frame image sequence presenting a sample object;
performing feature representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption feature data of the multi-frame image sequence corresponding to the sample object, wherein the feature data of the sample object presented by the multi-frame image sequence belongs to personal data of the sample object;
performing ensemble learning on initial encrypted feature data of a multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object;
and training a preset learning model based on the target encryption characteristic data corresponding to the sample object and the model classification label of the sample object.
In a third aspect, a feature extraction apparatus that protects private data includes:
the image sequence acquisition module is used for acquiring a multi-frame image sequence presenting a sample object;
the characteristic encryption representation module is used for carrying out characteristic representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, wherein the characteristic data of the sample object presented by the multi-frame image sequence belongs to the personal data of the sample object;
and the characteristic ensemble learning module is used for carrying out ensemble learning on the initial encrypted characteristic data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted characteristic data corresponding to the sample object.
In a fourth aspect, an electronic device is provided comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program being executed by the processor to:
acquiring a multi-frame image sequence presenting a sample object;
performing feature representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption feature data of the multi-frame image sequence corresponding to the sample object, wherein the feature data of the sample object presented by the multi-frame image sequence belongs to personal data of the sample object;
and performing ensemble learning on the initial encrypted feature data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object.
In a fifth aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a multi-frame image sequence presenting a sample object;
performing feature representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption feature data of the multi-frame image sequence corresponding to the sample object, wherein the feature data of the sample object presented by the multi-frame image sequence belongs to personal data of the sample object;
and performing ensemble learning on the initial encrypted feature data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object.
In a sixth aspect, a model training apparatus for protecting privacy of personal data is provided, including:
the image sequence acquisition module is used for acquiring a multi-frame image sequence presenting a sample object;
the characteristic encryption representation module is used for carrying out characteristic representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, wherein the characteristic data of the sample object presented by the multi-frame image sequence belongs to the personal data of the sample object;
the characteristic ensemble learning module is used for carrying out ensemble learning on the initial encrypted characteristic data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted characteristic data corresponding to the sample object;
and the model training module is used for training a preset learning model based on the target encryption characteristic data corresponding to the sample object and the model classification label of the sample object.
In a seventh aspect, an electronic device is provided that includes: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program being executed by the processor to:
acquiring a multi-frame image sequence presenting a sample object;
performing feature representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption feature data of the multi-frame image sequence corresponding to the sample object, wherein the feature data of the sample object presented by the multi-frame image sequence belongs to personal data of the sample object;
performing ensemble learning on initial encrypted feature data of a multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object;
and training a preset learning model based on the target encryption characteristic data corresponding to the sample object and the model classification label of the sample object.
In an eighth aspect, a computer readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a multi-frame image sequence presenting a sample object;
performing feature representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption feature data of the multi-frame image sequence corresponding to the sample object, wherein the feature data of the sample object presented by the multi-frame image sequence belongs to personal data of the sample object;
performing ensemble learning on initial encrypted feature data of a multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object;
and training a preset learning model based on the target encryption characteristic data corresponding to the sample object and the model classification label of the sample object.
The scheme of the embodiments of this specification uses nonlinear conversion as an encryption mode to perform feature encryption extraction on a multi-frame image sequence presenting a sample object, obtaining initial encrypted feature data of the multi-frame image sequence corresponding to the sample object, and then integrates the initial encrypted feature data of the multi-frame image sequence through ensemble learning to obtain higher-order target encrypted feature data. Because the whole scheme relies on encrypted image feature data, even if the retained encrypted image feature data is leaked, the personal data of the sample object will not be exposed, thereby achieving privacy protection. Meanwhile, because the target encrypted feature data is obtained by integrating the initial encrypted feature data of the multi-frame image sequence, the loss introduced by image feature encryption can be effectively compensated, and better model performance can be obtained when the target encrypted feature data is subsequently used for model training.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only some embodiments described in the embodiments of the present specification, and for those skilled in the art, other drawings can be obtained according to the drawings without any creative efforts.
Fig. 1 is a schematic flow chart of a feature extraction method provided in an embodiment of the present specification.
Fig. 2 is a schematic flowchart of a model training method provided in an embodiment of the present disclosure.
Fig. 3 is a schematic structural diagram of a feature extraction device provided in an embodiment of this specification.
Fig. 4 is a schematic structural diagram of a model training apparatus provided in an embodiment of the present specification.
Fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of this specification.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present specification, and not all of the embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step should fall within the scope of protection of the present specification.
As mentioned above, the principle of face recognition is to approximately match the face features of the user to be recognized against sample face features based on a deep learning model, so as to determine the identity of the user to be recognized. Training of the deep learning model depends on sample face images. These sample face images belong to the personal data of the user and carry a risk of privacy leakage. For this reason, this document aims to propose a technical solution capable of protecting personal data privacy in the field of deep learning.
Fig. 1 is a flowchart of a feature extraction method for protecting privacy of personal data according to an embodiment of the present disclosure. The method shown in fig. 1 may be performed by a corresponding apparatus, comprising:
step S102, acquiring a multi-frame image sequence presenting the sample object.
Specifically, this step may extract a multi-frame image sequence from a video in which the sample object is presented. For example, the sample object is recorded on video by a camera of a terminal device, and a multi-frame image sequence presenting the sample object is extracted from the video at a preset frame rate.
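For illustration only, the sketch below shows how such a multi-frame sequence might be extracted from a recorded video at a preset frame rate; the OpenCV backend, the file name, the target frame rate, and the frame count are all assumptions rather than details from the patent.

```python
import cv2  # OpenCV, used here only as an illustrative capture backend

def extract_frames(video_path: str, target_fps: float = 5.0, max_frames: int = 16):
    """Extract up to `max_frames` frames from `video_path` at roughly `target_fps`."""
    cap = cv2.VideoCapture(video_path)
    source_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(int(round(source_fps / target_fps)), 1)   # keep every `step`-th frame
    frames, index = [], 0
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)                         # H x W x 3 BGR array presenting the sample object
        index += 1
    cap.release()
    return frames

# Hypothetical usage: sample_video.mp4 is a placeholder, not part of the patent text.
image_sequence = extract_frames("sample_video.mp4")
```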
And step S104, performing characteristic representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, wherein the characteristic data of the sample object presented by the multi-frame image sequence belongs to personal data of the sample object.
Based on the nonlinear conversion, the ratio of the change in the output value (the initial encrypted feature data) to the change in the corresponding input value (the multi-frame image sequence) is not constant, which gives the conversion an encrypting effect.
In practical applications, there is more than one way to implement the nonlinear conversion, and this specification does not impose a particular limitation. Two exemplary options are described below:
in the step, a locality sensitive hashing algorithm can be adopted to perform hashing conversion on the obtained multi-frame image sequence to obtain initial encryption characteristic data of the sample object corresponding to the multi-frame image sequence.
Or,
feature encryption extraction can also be carried out on the multi-frame image sequence by utilizing a convolutional neural network model. Wherein, the convolutional neural network model may include: the convolutional layer is used for carrying out convolution processing on the obtained multi-frame image sequence to obtain a convolutional layer output feature set; the pooling layer is used for pooling the convolution layer output feature set based on a maximum pooling algorithm and/or a mean pooling algorithm to obtain a pooling layer output feature set; and the full connection layer converts the output feature set of the pooling layer into initial encryption feature data of a specified dimension. Obviously, the initial encryption feature data corresponding to the sample object output by the convolutional neural network model can be obtained by inputting the obtained multi-frame image sequence into the convolutional neural network model.
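As an illustration of the first option, the sketch below hashes one frame with a random-projection locality-sensitive hash; choosing the SimHash-style projection family and a 128-bit output are assumptions made here, not details from the patent.

```python
import numpy as np

def lsh_encrypt_frame(frame: np.ndarray, num_bits: int = 128, seed: int = 7) -> np.ndarray:
    """Map one frame (H x W x 3 array) to a binary code via random projections.

    Random projection is one common locality-sensitive hashing family: similar
    frames tend to agree on most bits, while the original pixels cannot be
    trivially recovered from the code.
    """
    rng = np.random.default_rng(seed)                    # the projection matrix plays the role of a key
    flat = frame.astype(np.float32).ravel()
    planes = rng.standard_normal((num_bits, flat.size))  # one random hyperplane per output bit
    return (planes @ flat > 0).astype(np.uint8)
```

As an illustration of the second option, here is a minimal convolutional extractor with the convolution, pooling, and full-connection layers described above; the layer widths, the 128-dimensional output, and the 112x112 input used in the usage lines are assumed values, not values given in the patent.

```python
import torch
import torch.nn as nn

class EncryptingFeatureExtractor(nn.Module):
    """Convolution -> pooling -> full connection, mapping each frame to a fixed-dimension vector.

    The stacked nonlinear layers act as the nonlinear conversion: the original
    frames cannot be trivially recovered from the output features.
    """
    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),  # convolutional layer
            nn.MaxPool2d(2),                                        # max pooling
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),                           # mean pooling to a fixed size
        )
        self.fc = nn.Linear(32 * 4 * 4, feature_dim)                # full connection layer

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (num_frames, 3, H, W); output: one vector of `feature_dim` per frame
        pooled = self.conv(frames)
        return self.fc(pooled.flatten(start_dim=1))                 # initial encrypted feature data

# Hypothetical usage on a 16-frame sequence of 112x112 RGB frames.
extractor = EncryptingFeatureExtractor()
frames = torch.rand(16, 3, 112, 112)
initial_encrypted_features = extractor(frames)                      # shape (16, 128)
```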
And step S106, performing ensemble learning on the initial encrypted feature data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object.
Ensemble learning is an existing machine learning method. It accomplishes a learning task by building and combining multiple individual learners. The individual learners are usually generated from training data by an existing learning algorithm, such as the C4.5 decision tree algorithm or the BP neural network algorithm. An ensemble that contains only individual learners of the same type, such as the decision trees in a "decision tree ensemble" or the neural networks in a "neural network ensemble", is "homogeneous"; the individual learners in a homogeneous ensemble are also referred to as "base learners", and the corresponding learning algorithm is referred to as the "base learning algorithm". An ensemble may also contain individual learners of different types, for example both decision trees and neural networks; such an ensemble is "heterogeneous". The individual learners in a heterogeneous ensemble are generated by different learning algorithms, so there is no longer a base learning algorithm, and they are often referred to as "component learners" or simply as individual learners. The embodiments of this specification may combine the individual learners through a conventional ensemble learning strategy (such as averaging, voting, or a learning method), so as to selectively integrate the initial encrypted feature data and obtain the higher-order target encrypted feature data.
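As an illustration of the simplest such strategy, the sketch below averages the per-frame initial encrypted features into one target vector. Treating each frame's feature vector as the output of one individual learner and combining them by averaging is an assumption made here for clarity; voting or a learned (stacking) combiner could be substituted without changing the interface.

```python
import torch

def integrate_features(initial_encrypted_features: torch.Tensor) -> torch.Tensor:
    """Combine per-frame encrypted features of shape (num_frames, feature_dim) into one vector.

    Averaging is one conventional ensemble strategy; it also smooths out part of
    the per-frame loss introduced by the feature encryption step.
    """
    return initial_encrypted_features.mean(dim=0)

# Continuing the earlier sketch: initial_encrypted_features has shape (16, 128).
target_encrypted_features = integrate_features(initial_encrypted_features)  # shape (128,)
```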
It should be understood that the initial encrypted feature data and/or the target encrypted feature data obtained in this step may be used as model training data. Therefore, only the initial encrypted feature data and/or the target encrypted feature data need to be retained, while the previously acquired multi-frame image sequence can be deleted once the feature representation is complete, i.e., used once and then destroyed.
The feature extraction method of the embodiments of this specification uses nonlinear conversion as an encryption mode to perform feature encryption extraction on a multi-frame image sequence presenting a sample object, obtaining initial encrypted feature data of the multi-frame image sequence corresponding to the sample object, and then integrates the initial encrypted feature data of the multi-frame image sequence through ensemble learning to obtain higher-order target encrypted feature data. Because the whole scheme relies on encrypted image feature data, even if the retained encrypted image feature data is leaked, the personal data of the sample object will not be exposed, thereby achieving privacy protection. Meanwhile, because the target encrypted feature data is obtained by integrating the initial encrypted feature data of the multi-frame image sequence, the loss introduced by image feature encryption can be effectively compensated, and better model performance can be obtained when the target encrypted feature data is subsequently used for model training.
The method of the embodiments of the present disclosure is described below with reference to practical application scenarios.
The process for acquiring the feature data required by face recognition in the application scene comprises the following steps:
the method comprises the steps of firstly, carrying out video shooting on a sample object face, and intercepting a multi-frame face image sequence presenting the sample object face from a video.
And step two, performing feature representation on the obtained multi-frame human face image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption feature data.
And step three, deleting the multi-frame face image sequence of the sample object intercepted from the video.
And step four, storing the initial encrypted feature data corresponding to the sample object into a feature library in a correlated manner.
And fifthly, performing ensemble learning on the initial encrypted feature data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object.
And step six, storing the target encryption characteristic data corresponding to the sample object into a characteristic library in a correlation manner.
Based on the above process, in the subsequent process of training the user identification model, the initial encrypted feature data and/or the target encrypted feature data corresponding to the sample object may be called from the feature library, and the user identification model may be trained.
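Tying the six steps above together, a minimal sketch of this enrollment flow is given below; it reuses the hypothetical helpers from the earlier sketches (extract_frames, EncryptingFeatureExtractor, integrate_features), and the in-memory dictionary standing in for the feature library is likewise an assumption for illustration.

```python
import torch

feature_library = {}  # hypothetical stand-in for the feature library (in practice, a database)

def enroll_sample_object(sample_id: str, video_path: str) -> None:
    """Steps one to six: capture, encrypt, delete the raw frames, store only encrypted features."""
    frames = extract_frames(video_path)                                 # step one: multi-frame face image sequence
    batch = torch.stack([torch.as_tensor(f).permute(2, 0, 1).float() / 255.0 for f in frames])
    initial = extractor(batch).detach()                                 # step two: initial encrypted feature data
    del frames, batch                                                   # step three: raw face images are deleted
    feature_library[sample_id] = {"initial": initial}                   # step four: associated storage
    target = integrate_features(initial)                                # step five: ensemble integration
    feature_library[sample_id]["target"] = target                       # step six: associated storage
```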
Correspondingly, the embodiment of the specification further provides a model training method for protecting personal data privacy. FIG. 2 is a flowchart of a model training method according to an embodiment of the present disclosure. The method shown in fig. 2 may be performed by a corresponding apparatus, comprising:
in step S202, a sequence of multiple frames of images representing a sample object is acquired.
And step S204, performing characteristic representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, wherein the characteristic data of the sample object presented by the multi-frame image sequence belongs to the personal data of the sample object.
And step S206, performing ensemble learning on the initial encrypted feature data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object.
And S208, training a preset learning model based on the target encryption characteristic data corresponding to the sample object and the model classification label of the sample object.
In a specific training process, the target encrypted feature data corresponding to a sample object is used as the input data of the preset learning model, and the model classification label corresponding to the sample user is used as the output data of the preset learning model. After the target encrypted feature data is input into the preset learning model, a training result given by the preset learning model can be obtained. The training result is the predicted classification result of the preset learning model for the sample user, and it may differ from the true classification result indicated by the model classification label of the sample user. In the embodiments of this specification, an error value between the predicted classification result and the true classification result may be calculated based on a loss function derived from maximum likelihood estimation, and the parameters of the preset learning model (for example, the weights of the underlying vectors) may be adjusted to reduce the error value, thereby achieving the training effect.
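The sketch below shows one way such a training step could look. Using cross-entropy (the loss that maximum likelihood estimation yields for classification), a plain linear classifier over 128-dimensional target features, and SGD are illustrative assumptions, not choices stated in the patent.

```python
import torch
import torch.nn as nn

num_classes, feature_dim = 1000, 128                   # assumed values
model = nn.Linear(feature_dim, num_classes)            # hypothetical preset learning model
criterion = nn.CrossEntropyLoss()                      # loss derived from maximum likelihood estimation
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def training_step(target_features: torch.Tensor, labels: torch.Tensor) -> float:
    """One step: predict, measure the error against the labels, adjust the parameters."""
    logits = model(target_features)                    # predicted classification result
    loss = criterion(logits, labels)                   # error value vs. the true classification result
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                   # adjust parameters to reduce the error value
    return loss.item()

# Hypothetical batch: 32 sample objects, one target encrypted feature vector each.
batch_features = torch.rand(32, feature_dim)
batch_labels = torch.randint(0, num_classes, (32,))
training_step(batch_features, batch_labels)
```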
The model training method of the embodiments of this specification uses nonlinear conversion as an encryption mode to perform feature encryption extraction on a multi-frame image sequence presenting a sample object, obtaining initial encrypted feature data of the multi-frame image sequence corresponding to the sample object, and then integrates the initial encrypted feature data of the multi-frame image sequence through ensemble learning to obtain higher-order target encrypted feature data. Because the whole scheme relies on encrypted image feature data, even if the retained encrypted image feature data is leaked, the personal data of the sample object will not be exposed, thereby achieving privacy protection. Meanwhile, because the target encrypted feature data is obtained by integrating the initial encrypted feature data of the multi-frame image sequence, the loss introduced by image feature encryption can be effectively compensated, and better model performance can be obtained after the preset learning model is trained.
It should be understood that the trained preset learning model can be used for prediction and recognition, so as to provide data support for relevant business decisions.
For example, the preset learning model in the embodiment of the present specification may be applied to a face payment service. In the process of user identity authentication for face payment, a multi-frame image sequence presenting a payment object to be authenticated can be acquired; and then, taking the same nonlinear conversion as an encryption mode, and performing characteristic representation on the multi-frame image sequence of the payment object to obtain initial encryption characteristic data of the payment object. Similarly, the initial encryption feature data of the payment object is subjected to ensemble learning to obtain target encryption feature data of the payment object, the target encryption feature data of the payment object is input into the preset learning model, and whether the payment object is a payment authorized user (target user) is judged by the preset learning model. And finally, determining whether to initiate face payment or not according to the recognition result of the preset learning model.
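A sketch of how this authentication flow might be wired together is given below. It reuses the hypothetical helpers from the earlier sketches (extract_frames, extractor, integrate_features, model); the video path, the user id, and the argmax-based decision rule are all illustrative assumptions.

```python
import torch

def authorize_payment(video_path: str, authorized_user_id: int) -> bool:
    """Decide whether to initiate face payment for the object presented in the video."""
    frames = extract_frames(video_path)                              # multi-frame image sequence of the payment object
    batch = torch.stack([torch.as_tensor(f).permute(2, 0, 1).float() / 255.0 for f in frames])
    initial = extractor(batch)                                       # initial encrypted feature data
    target = integrate_features(initial)                             # target encrypted feature data
    logits = model(target.unsqueeze(0))                              # preset learning model
    predicted_user = int(logits.argmax(dim=1))
    return predicted_user == authorized_user_id                      # initiate payment only on a match

# Hypothetical call: both arguments are placeholders.
if authorize_payment("payment_attempt.mp4", authorized_user_id=42):
    print("identity confirmed, initiating face payment")
```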
Corresponding to the feature extraction method, the embodiment of the specification further provides a feature extraction device for protecting private data. Fig. 3 is a schematic structural diagram of a feature extraction apparatus 300 according to an embodiment of the present specification, including:
an image sequence acquiring module 310, configured to acquire a multi-frame image sequence representing a sample object;
the characteristic encryption representing module 320 is configured to perform characteristic representation on the obtained multi-frame image sequence by using nonlinear conversion as an encryption mode to obtain initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, where the characteristic data of the sample object represented by the multi-frame image sequence belongs to personal data of the sample object;
the feature ensemble learning module 330 performs ensemble learning on the initial encrypted feature data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object.
The feature extraction apparatus of the embodiments of this specification uses nonlinear conversion as an encryption mode to perform feature encryption extraction on a multi-frame image sequence presenting a sample object, obtaining initial encrypted feature data of the multi-frame image sequence corresponding to the sample object, and then integrates the initial encrypted feature data of the multi-frame image sequence through ensemble learning to obtain higher-order target encrypted feature data. Because the whole scheme relies on encrypted image feature data, even if the retained encrypted image feature data is leaked, the personal data of the sample object will not be exposed, thereby achieving privacy protection. Meanwhile, because the target encrypted feature data is obtained by integrating the initial encrypted feature data of the multi-frame image sequence, the loss introduced by image feature encryption can be effectively compensated, and better model performance can be obtained when the target encrypted feature data is subsequently used for model training.
Optionally, the feature encryption representing module 320 specifically inputs the obtained multi-frame image sequence into a preset convolutional neural network model, so as to obtain initial encryption feature data of the sample object corresponding to the multi-frame image sequence. Here, the convolutional neural network model includes: the convolutional layer is used for carrying out convolution processing on the obtained multi-frame image sequence to obtain a convolutional layer output feature set; the pooling layer is used for pooling the convolution layer output feature set based on a maximum pooling algorithm and/or a mean pooling algorithm to obtain a pooling layer output feature set; and the full connection layer converts the output feature set of the pooling layer into initial encryption feature data of a specified dimension.
Optionally, the feature encryption representing module 320 may further perform hash conversion on the obtained multi-frame image sequence based on a locality sensitive hash algorithm to obtain initial encryption feature data of the sample object corresponding to the multi-frame image sequence.
Optionally, the feature extraction apparatus 300 of the embodiment of the present specification may further include:
and the storage module is used for storing the sample object, the corresponding initial encryption characteristic data and/or the target encryption characteristic data in a correlation manner.
Optionally, the feature extraction apparatus 300 of the embodiment of the present specification may further include:
and the deleting module is used for deleting the obtained multi-frame image sequence after the initial encryption characteristic data of the sample object corresponding to the multi-frame image sequence is obtained.
Corresponding to the above feature extraction method, an embodiment of the present specification further provides a model training apparatus for protecting privacy of personal data. Fig. 4 is a schematic structural diagram of a model training apparatus 400 according to an embodiment of the present disclosure, including:
an image sequence acquisition module 410, which acquires a multi-frame image sequence presenting a sample object;
the characteristic encryption representing module 420 is configured to perform characteristic representation on the obtained multi-frame image sequence by using nonlinear conversion as an encryption mode to obtain initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, where the characteristic data of the sample object represented by the multi-frame image sequence belongs to personal data of the sample object;
the feature ensemble learning module 430 is configured to perform ensemble learning on the initial encrypted feature data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object;
the model training module 440 trains a preset learning model based on the target encryption feature data corresponding to the sample object and the model classification label of the sample object.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of this specification. Referring to fig. 5, at the hardware level, the electronic device includes a processor, and optionally further includes an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a random-access memory (RAM), and may further include a non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 5, but this does not indicate only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code comprising computer operating instructions. The memory may include both memory and non-volatile storage and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and then runs it, forming the above-mentioned feature extraction apparatus at the logic level. The processor executes the program stored in the memory and is specifically configured to perform the following operations:
a sequence of multi-frame images presenting a sample object is acquired.
And performing feature representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption feature data of the multi-frame image sequence corresponding to the sample object, wherein the feature data of the sample object presented by the multi-frame image sequence belongs to the personal data of the sample object.
And performing ensemble learning on the initial encrypted feature data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object.
Or the processor reads the corresponding computer program from the nonvolatile memory into the memory and then runs the computer program to form the model training device on the logic level. The processor is used for executing the program stored in the memory and is specifically used for executing the following operations:
a sequence of multi-frame images presenting a sample object is acquired.
And performing feature representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption feature data of the multi-frame image sequence corresponding to the sample object, wherein the feature data of the sample object presented by the multi-frame image sequence belongs to the personal data of the sample object.
And performing ensemble learning on the initial encrypted feature data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object.
And training a preset learning model based on the target encryption characteristic data corresponding to the sample object and the model classification label of the sample object.
The feature extraction method disclosed in the embodiment shown in fig. 1 or the model training method disclosed in the embodiment shown in fig. 2 may be applied to or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above methods may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logic blocks disclosed in the embodiments of this specification may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the embodiments of this specification may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules in a decoding processor. The software module may be located in a RAM, a flash memory, a ROM, a PROM or EPROM, a register, or another storage medium well known in the art. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It should be understood that the electronic device of the embodiment of the present specification may implement the functions of the above-described feature extraction apparatus in the embodiment shown in fig. 1, or implement the functions of the above-described model training apparatus in the embodiment shown in fig. 2. Since the principle is the same, the detailed description is omitted here.
Of course, besides the software implementation, the electronic device of this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution body of the processing flows described above is not limited to individual logic units, and may also be hardware or a logic device.
Furthermore, the present specification embodiments also propose a computer-readable storage medium storing one or more programs, the one or more programs including instructions.
Wherein the above instructions, when executed by a portable electronic device comprising a plurality of application programs, enable the portable electronic device to perform the method of the embodiment shown in fig. 1, and are specifically configured to perform the following steps:
a sequence of multi-frame images presenting a sample object is acquired.
And performing feature representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption feature data of the multi-frame image sequence corresponding to the sample object, wherein the feature data of the sample object presented by the multi-frame image sequence belongs to the personal data of the sample object.
And performing ensemble learning on the initial encrypted feature data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object.
Alternatively, the above instructions, when executed by a portable electronic device comprising a plurality of application programs, can cause the portable electronic device to perform the method of the embodiment shown in fig. 2, and is specifically configured to perform the following steps:
a sequence of multi-frame images presenting a sample object is acquired.
And performing feature representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption feature data of the multi-frame image sequence corresponding to the sample object, wherein the feature data of the sample object presented by the multi-frame image sequence belongs to the personal data of the sample object.
And performing ensemble learning on the initial encrypted feature data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object.
And training a preset learning model based on the target encryption characteristic data corresponding to the sample object and the model classification label of the sample object.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification. Moreover, all other embodiments obtained by a person skilled in the art without making any inventive step shall fall within the scope of protection of this document.

Claims (10)

1. A feature extraction method for protecting personal data privacy comprises the following steps:
acquiring a multi-frame image sequence presenting a sample object;
performing characteristic representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, and deleting the obtained multi-frame image sequence after obtaining the initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, wherein the characteristic data of the sample object presented by the multi-frame image sequence belongs to personal data of the sample object;
and performing ensemble learning on initial encrypted feature data of a multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object, wherein the target encrypted feature data are used as model training data.
2. The method according to claim 1, wherein
performing feature representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption feature data of the multi-frame image sequence corresponding to the sample object, wherein the method comprises the following steps:
inputting the obtained multi-frame image sequence into a preset convolutional neural network model to obtain initial encryption characteristic data of the sample object corresponding to the multi-frame image sequence; wherein the convolutional neural network model comprises:
the convolutional layer is used for carrying out convolution processing on the obtained multi-frame image sequence to obtain a convolutional layer output feature set;
the pooling layer is used for pooling the convolution layer output feature set based on a maximum pooling algorithm and/or a mean pooling algorithm to obtain a pooling layer output feature set;
and the full connection layer converts the output feature set of the pooling layer into initial encryption feature data of a specified dimension.
3. The method as set forth in claim 1,
performing feature representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption feature data of the multi-frame image sequence corresponding to the sample object, wherein the method comprises the following steps:
and performing hash conversion on the obtained multi-frame image sequence based on a locality sensitive hash algorithm to obtain initial encryption characteristic data of the sample object corresponding to the multi-frame image sequence.
4. The method of any of claims 1-3, further comprising:
and performing associated storage on the sample object, the corresponding initial encryption characteristic data and/or the target encryption characteristic data.
5. A model training method for protecting personal data privacy comprises the following steps:
acquiring a multi-frame image sequence presenting a sample object;
performing characteristic representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, and deleting the obtained multi-frame image sequence after obtaining the initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, wherein the characteristic data of the sample object presented by the multi-frame image sequence belongs to personal data of the sample object;
performing ensemble learning on initial encrypted feature data of a multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object;
and training a preset learning model based on the target encryption characteristic data corresponding to the sample object and the model classification label of the sample object.
6. A feature extraction apparatus that protects private data, comprising:
the image sequence acquisition module is used for acquiring a multi-frame image sequence presenting a sample object;
the characteristic encryption representing module is used for carrying out characteristic representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, and deleting the obtained multi-frame image sequence after obtaining the initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, wherein the characteristic data of the sample object presented by the multi-frame image sequence belongs to the personal data of the sample object;
and the characteristic ensemble learning module is used for carrying out ensemble learning on the initial encrypted characteristic data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted characteristic data corresponding to the sample object, and the target encrypted characteristic data is used as model training data.
7. An electronic device includes: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program being executed by the processor to:
acquiring a multi-frame image sequence presenting a sample object;
performing characteristic representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, and deleting the obtained multi-frame image sequence after obtaining the initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, wherein the characteristic data of the sample object presented by the multi-frame image sequence belongs to personal data of the sample object;
and performing ensemble learning on initial encrypted feature data of a multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object, wherein the target encrypted feature data are used as model training data.
8. A computer-readable storage medium having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a multi-frame image sequence presenting a sample object;
performing characteristic representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, and deleting the obtained multi-frame image sequence after obtaining the initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, wherein the characteristic data of the sample object presented by the multi-frame image sequence belongs to personal data of the sample object;
and performing ensemble learning on initial encrypted feature data of a multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object, wherein the target encrypted feature data are used as model training data.
9. A model training apparatus for protecting privacy of personal data, comprising:
the image sequence acquisition module is used for acquiring a multi-frame image sequence presenting a sample object;
the characteristic encryption representing module is used for carrying out characteristic representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, and deleting the obtained multi-frame image sequence after obtaining the initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, wherein the characteristic data of the sample object presented by the multi-frame image sequence belongs to the personal data of the sample object;
the characteristic ensemble learning module is used for carrying out ensemble learning on the initial encrypted characteristic data of the multi-frame image sequence corresponding to the sample object to obtain target encrypted characteristic data corresponding to the sample object;
and the model training module is used for training a preset learning model based on the target encryption characteristic data corresponding to the sample object and the model classification label of the sample object.
10. An electronic device includes: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program being executed by the processor to:
acquiring a multi-frame image sequence presenting a sample object;
performing characteristic representation on the obtained multi-frame image sequence by taking nonlinear conversion as an encryption mode to obtain initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, and deleting the obtained multi-frame image sequence after obtaining the initial encryption characteristic data of the multi-frame image sequence corresponding to the sample object, wherein the characteristic data of the sample object presented by the multi-frame image sequence belongs to personal data of the sample object;
performing ensemble learning on initial encrypted feature data of a multi-frame image sequence corresponding to the sample object to obtain target encrypted feature data corresponding to the sample object;
and training a preset learning model based on the target encryption characteristic data corresponding to the sample object and the model classification label of the sample object.
CN202010409389.0A 2020-05-14 2020-05-14 Feature extraction method for protecting personal data privacy, model training method and hardware Active CN111553320B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010409389.0A CN111553320B (en) 2020-05-14 2020-05-14 Feature extraction method for protecting personal data privacy, model training method and hardware
CN202210067114.2A CN114419712A (en) 2020-05-14 2020-05-14 Feature extraction method for protecting personal data privacy, model training method and hardware
PCT/CN2021/093367 WO2021228148A1 (en) 2020-05-14 2021-05-12 Feature extraction method for protecting personal data privacy, model training method, and hardware

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010409389.0A CN111553320B (en) 2020-05-14 2020-05-14 Feature extraction method for protecting personal data privacy, model training method and hardware

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210067114.2A Division CN114419712A (en) 2020-05-14 2020-05-14 Feature extraction method for protecting personal data privacy, model training method and hardware

Publications (2)

Publication Number Publication Date
CN111553320A (en) 2020-08-18
CN111553320B (en) 2021-12-21

Family

ID=72006412

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010409389.0A Active CN111553320B (en) 2020-05-14 2020-05-14 Feature extraction method for protecting personal data privacy, model training method and hardware
CN202210067114.2A Pending CN114419712A (en) 2020-05-14 2020-05-14 Feature extraction method for protecting personal data privacy, model training method and hardware

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210067114.2A Pending CN114419712A (en) 2020-05-14 2020-05-14 Feature extraction method for protecting personal data privacy, model training method and hardware

Country Status (2)

Country Link
CN (2) CN111553320B (en)
WO (1) WO2021228148A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553320B (en) * 2020-05-14 2021-12-21 Alipay Hangzhou Information Technology Co Ltd Feature extraction method for protecting personal data privacy, model training method and hardware
CN114676396B (en) * 2022-05-30 2022-08-30 山东极视角科技有限公司 Protection method and device for deep neural network model, electronic equipment and medium
CN116055651B (en) * 2023-01-06 2023-11-10 广东电网有限责任公司 Shared access method, device, equipment and medium for multi-center energy economic data

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107958244B (en) * 2018-01-12 2020-07-10 成都视观天下科技有限公司 Face recognition method and device based on video multi-frame face feature fusion
CN108681698B (en) * 2018-04-28 2021-03-16 武汉大学 Large-scale iris recognition method with privacy protection function
CN108764486A (en) * 2018-05-23 2018-11-06 哈尔滨工业大学 A kind of feature selection approach and device based on integrated study
CN110110120B (en) * 2018-06-11 2021-05-25 北方工业大学 Image retrieval method and device based on deep learning
CN108960119B (en) * 2018-06-28 2021-06-08 武汉市哈哈便利科技有限公司 Commodity recognition algorithm for multi-angle video fusion of unmanned sales counter
CN108960207B (en) * 2018-08-08 2021-05-11 广东工业大学 Image recognition method, system and related components
CN109359210A (en) * 2018-08-09 2019-02-19 中国科学院信息工程研究所 The face retrieval method and system of double blind secret protection
US10915995B2 (en) * 2018-09-24 2021-02-09 Movidius Ltd. Methods and apparatus to generate masked images based on selective privacy and/or location tracking
CN108898191A (en) * 2018-09-26 2018-11-27 苏州米特希赛尔人工智能有限公司 Convolutional neural networks feature-extraction images sensor
CN109871749B (en) * 2019-01-02 2021-07-16 上海高重信息科技有限公司 Pedestrian re-identification method and device based on deep hash and computer system
CN110427972B (en) * 2019-07-09 2022-02-22 众安信息技术服务有限公司 Certificate video feature extraction method and device, computer equipment and storage medium
CN110363183B (en) * 2019-07-30 2020-05-08 贵州大学 Service robot visual image privacy protection method based on generating type countermeasure network
CN111080593B (en) * 2019-12-07 2023-06-16 上海联影智能医疗科技有限公司 Image processing device, method and storage medium
CN111091102B (en) * 2019-12-20 2022-05-24 华中科技大学 Video analysis device, server, system and method for protecting identity privacy
CN111553320B (en) * 2020-05-14 2021-12-21 支付宝(杭州)信息技术有限公司 Feature extraction method for protecting personal data privacy, model training method and hardware

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100097861A (en) * 2009-02-27 2010-09-06 홍익대학교 산학협력단 Improved performance of face recognition system using auto removal background
US9785796B1 (en) * 2014-05-28 2017-10-10 Snap Inc. Apparatus and method for automated privacy protection in distributed images
CN105631296A (en) * 2015-12-30 2016-06-01 北京工业大学 Design method of safety face verification system based on CNN (convolutional neural network) feature extractor
CN106447625A (en) * 2016-09-05 2017-02-22 北京中科奥森数据科技有限公司 Facial image series-based attribute identification method and device
CN106682650A (en) * 2017-01-26 2017-05-17 北京中科神探科技有限公司 Mobile terminal face recognition method and system based on technology of embedded deep learning
CN108596056A (en) * 2018-04-10 2018-09-28 武汉斑马快跑科技有限公司 A kind of taxi operation behavior act recognition methods and system
CN110087099A (en) * 2019-03-11 2019-08-02 北京大学 A kind of monitoring method and system for protecting privacy
CN110378092A (en) * 2019-07-26 2019-10-25 北京积加科技有限公司 Identification system and client, server and method
CN110633650A (en) * 2019-08-22 2019-12-31 首都师范大学 Convolutional neural network face recognition method and device based on privacy protection
CN110598606A (en) * 2019-09-02 2019-12-20 南京邮电大学 Indoor falling behavior detection method with visual privacy protection advantage
CN110991462A (en) * 2019-10-31 2020-04-10 福建师范大学 Privacy protection CNN-based secret image identification method and system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Image Feature Extraction in Encrypted Domain With Privacy-Preserving Hahn Moments; Tengfei Yang et al.; IEEE Access; 2018-09-21; 47521-47534 *
Secure Outsourcing SIFT: Efficient and Privacy-Preserving Image Feature Extraction in the Encrypted Domain; Linzhi Jiang et al.; IEEE Transactions on Dependable and Secure Computing; 2020-02-28; Vol. 17, No. 1; 179-192 *
Privacy-preserving face recognition based on convolutional neural networks; Zhang Jianwu et al.; Journal of Image and Graphics; 2019-05-31; Vol. 24, No. 5; 744-752 (in Chinese) *
Research on fall detection algorithms based on multi-feature extraction in infrared images; Yang Renbing et al.; Infrared Technology; 2017-12-31; Vol. 39, No. 12; 1131-1138 (in Chinese) *

Also Published As

Publication number Publication date
CN114419712A (en) 2022-04-29
CN111553320A (en) 2020-08-18
WO2021228148A1 (en) 2021-11-18

Similar Documents

Publication Publication Date Title
CN111553320B (en) Feature extraction method for protecting personal data privacy, model training method and hardware
CN112084946B (en) Face recognition method and device and electronic equipment
CN111862035B (en) Training method of light spot detection model, light spot detection method, device and medium
CN107818301B (en) Method and device for updating biological characteristic template and electronic equipment
CN109040594B (en) Photographing method and device
CN112949545B (en) Method, apparatus, computing device and medium for recognizing face image
CN106651797B (en) Method and device for determining effective area of signal lamp
CN111079816A (en) Image auditing method and device and server
CN112016475A (en) Human body detection and identification method and device
CN113505682A (en) Living body detection method and device
CN113850211A (en) Method and device for detecting injected video attack
CN111125713B (en) Method and device for detecting horizontal override vulnerability and electronic equipment
CN111428740A (en) Detection method and device for network-shot photo, computer equipment and storage medium
CN115830668A (en) User authentication method and device based on facial recognition, computing equipment and medium
CN113343295B (en) Image processing method, device, equipment and storage medium based on privacy protection
CN112016057B (en) Privacy protection method and device, evaluation method and device of code file and electronic equipment
CN111898529B (en) Face detection method and device, electronic equipment and computer readable medium
CN113553469B (en) Data processing method, device, electronic equipment and computer storage medium
CN114449308A (en) Automatic video clipping method and device and cloud clipping server
CN110489592B (en) Video classification method, apparatus, computer device and storage medium
CN111783742A (en) Image classification method for defending against attack, service decision method and device
CN111539022B (en) Feature matching method, target object identification method and related hardware
CN114596638A (en) Face living body detection method, device and storage medium
CN113052045A (en) Method, apparatus, computing device and medium for recognizing finger vein image
CN111901673A (en) Video prediction method, device, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40035841

Country of ref document: HK

GR01 Patent grant