CN110765917A - Active learning method, device, terminal and medium suitable for face recognition model training - Google Patents


Info

Publication number
CN110765917A
CN110765917A
Authority
CN
China
Prior art keywords
face
data set
vector
data
recognition model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910988214.7A
Other languages
Chinese (zh)
Inventor
井怡
高鹏
汪宏
何峻
Current Assignee
Shanghai Information Technology Research Center
Shanghai Advanced Research Institute of CAS
Original Assignee
Shanghai Information Technology Research Center
Shanghai Advanced Research Institute of CAS
Priority date
Filing date
Publication date
Application filed by Shanghai Information Technology Research Center, Shanghai Advanced Research Institute of CAS filed Critical Shanghai Information Technology Research Center
Priority to CN201910988214.7A
Publication of CN110765917A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/168: Feature extraction; Face representation
    • G06V40/172: Classification, e.g. identification
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an active learning method, device, terminal and medium suitable for face recognition model training, comprising the following steps: extracting face feature vectors from the labeled data set with the recognition model to form a vector set; extracting a feature vector from each face image of the unlabeled data set with the recognition model, and computing the similarity between each extracted face feature vector and each feature vector in the vector set to form a face data set to be labeled; and performing diversity detection on that face data set so as to retrieve the data to be labeled from it. The invention provides an active learning method for face recognition model training that selects data to be labeled through two stages of screening, has them labeled manually, and adds them to the training sample set used to train the model.

Description

Active learning method, device, terminal and medium suitable for face recognition model training
Technical Field
The present application relates to the field of machine learning, and in particular, to an active learning method, apparatus, terminal, and medium suitable for face recognition model training.
Background
In the field of image analysis, data without class labels is often quite abundant while data with class labels is quite scarce, and labeling data manually is costly. In this case, through active learning the algorithm can actively select which data to label; the selected data are then sent to experts for labeling, and the labeled data are added to the training sample set to train the algorithm.
Generally, different objects have different class labels. A traditional active learning model is trained by adding labeled data, for a fixed set of classes, on the basis of an existing data set, which improves the accuracy of the classifier and refines the pre-trained model. For data of a new class, however, the algorithm cannot actively select which data set the data belongs to, so the data cannot be labeled and added to the labeled data set for training.
Summary of the Application
In view of the above drawbacks of the prior art, an object of the present application is to provide an active learning method, apparatus, terminal, and medium suitable for training a face recognition model, to solve the technical problem that, in the prior art, for data of a new class, an algorithm cannot actively select which data set the data belongs to, and therefore cannot label the data and add it to a labeled data set for training.
To achieve the above and other related objects, a first aspect of the present application provides an active learning method for face recognition model training, which includes: extracting face feature vectors from the labeled data set with the recognition model to form a vector set; extracting a feature vector from each face image of the unlabeled data set with the recognition model, and computing the similarity between each extracted face feature vector and each feature vector in the vector set to form a face data set to be labeled; and performing diversity detection on that face data set so as to retrieve the data to be labeled from it.
In some embodiments of the first aspect of the present application, let the labeled data set be data set A, let the unlabeled data set be data set U, and let C be the vector set formed by extracting the face feature vectors of data set A with the recognition model. The method comprises the following steps: dividing the data set U into a number of equal parts, each part L containing n face identities, so that the face data set is U' = {U_i}, i ∈ L, where n is a natural number greater than 0; and extracting a feature vector from each face image in the face data U_i with the face recognition model and computing its similarity with each feature vector in the vector set C, to obtain a face data set U'' from which data are extracted for labeling.
In some embodiments of the first aspect of the present application, the face data set U'' is computed in a manner comprising: extracting a feature vector from each face image in the face data U_i with the face recognition model and performing a cosine calculation with each feature vector in the vector set C to obtain a set of cosine distances; and taking the w1·n smallest and the w2·n largest distances from the set of cosine distances and recording the corresponding face data as the face data set U''.
In some embodiments of the first aspect of the present application, performing diversity detection on the face data set comprises: for each class of face data U''_i in the face data set U'', computing the cosine distance between every pair of its images; computing the variance of the computed cosine distances to form a data set U2; and taking the w3·[U2] groups with the largest variance from the data set U2 and labeling those images manually, where [U2] denotes the size of U2.
To achieve the above and other related objects, a second aspect of the present application provides an active learning apparatus suitable for face recognition model training, comprising: a feature extraction module for extracting face feature vectors from the labeled data set with the recognition model to form a vector set; a similarity calculation module for extracting a feature vector from each face image of the unlabeled data set with the recognition model and computing the similarity between each extracted face feature vector and each feature vector in the vector set to form a face data set to be labeled; and an active learning module for performing diversity detection on the face data set so as to retrieve data to be labeled from it.
In some implementations of the second aspect of the present application, let the labeled data set be data set A, let the unlabeled data set be data set U, and let C be the vector set formed by extracting the face feature vectors of data set A with the recognition model. The similarity calculation module is configured to: divide the data set U into a number of equal parts, each part L containing n face identities, so that the face data set is U' = {U_i}, i ∈ L, where n is a natural number greater than 0; and extract a feature vector from each face image in the face data U_i with the face recognition model and compute its similarity with each feature vector in the vector set C, to obtain a face data set U'' from which data are extracted for labeling.
To achieve the above and other related objects, a third aspect of the present application provides a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, implements the above active learning method suitable for face recognition model training.
To achieve the above and other related objects, a fourth aspect of the present application provides an electronic terminal comprising: a processor and a memory; the memory is used for storing computer programs, and the processor is used for executing the computer programs stored by the memory so as to enable the terminal to execute the active learning method suitable for face recognition model training.
As described above, the active learning method, device, terminal and medium suitable for face recognition model training according to the present application have the following beneficial effects: the invention provides an active learning method for training a face recognition model, which selects data to be labeled for manual labeling through two-stage screening, and adds the data to a training sample set for training the model.
Drawings
Fig. 1 is a schematic diagram illustrating an active learning algorithm label training process according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a schematic block diagram of an active learning algorithm according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating an active learning method suitable for face recognition model training according to an embodiment of the present application.
Fig. 4 is a block diagram illustrating an active learning method suitable for face recognition model training according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an active learning apparatus suitable for face recognition model training according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic terminal according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is provided by way of specific examples, and other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. The present application is capable of other and different embodiments and its several details are capable of modifications and/or changes in various respects, all without departing from the spirit of the present application. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that, in the following description, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," and/or "including," when used in this specification, specify the presence of stated features, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C." An exception to this definition occurs only when a combination of elements, functions, or operations is inherently mutually exclusive in some way.
Aiming at the technical problem that existing image analysis algorithms cannot actively select the data set to which data of a new class belongs, and thus cannot label such data and add it to the labeled data set for training, the invention provides an active learning method for face recognition model training.
Specifically, as shown in the active learning labeling and training process of Fig. 1, a labeled data set is input into the active learning model and trained to obtain feature vectors. For unlabeled data input into the model, the algorithm actively selects which data to label through active learning; after the data are labeled (for example, by being sent to an expert), the labeled data are added to the training sample set to train the algorithm, further improving the labeling accuracy of the classifier.
Further, as shown in the schematic block diagram of the active learning algorithm in Fig. 2, the labeled data set is passed through the face recognition model, which outputs a set of labeled face feature vectors. The unlabeled face image data and the feature vector set are then processed by the algorithm, which outputs a computation result; the corresponding face data are found through this correspondence and output for subsequent manual labeling.
The above explains the core idea of the invention; hereinafter, the technical solution of the present invention is described in more detail through several embodiments.
Example one
Fig. 3 is a schematic flow chart showing an active learning method suitable for face recognition model training according to an embodiment of the present invention. The active learning method of the present embodiment includes steps S31 to S33.
It is to be noted that the methods in this embodiment and the following embodiments are applicable to various types of hardware devices. The hardware device is, for example, a controller, including but not limited to: ARM (Advanced RISC Machines) controllers, FPGA (Field Programmable Gate Array) controllers, SoC (System on Chip) controllers, DSP (Digital Signal Processing) controllers, or MCU (Micro Controller Unit) controllers, among others. It may also be a computer including components such as memory, a memory controller, one or more processing units (CPUs), peripheral interfaces, RF circuitry, audio circuitry, speakers, a microphone, input/output (I/O) subsystems, a display screen, other output or control devices, and external ports; such computers include, but are not limited to, personal computers such as desktop computers, notebook computers, tablet computers, smart phones, smart televisions, and personal digital assistants (PDAs). The hardware device may also be a server, which may be arranged on one or more physical servers according to factors such as function and load, or may be formed by a distributed or centralized server cluster, which is not limited in this embodiment.
For convenience of description, the face data involved in the present invention are divided into a data set A and a data set U, where data set A is labeled and data set U is unlabeled, as shown in Fig. 4. The labeled data comprise two parts: face images and the identity labels corresponding to them. The unlabeled data comprise face identity labels and several collected candidate face images; this part has not been confirmed manually, so the corresponding images may contain errors.
S31: extracting the face feature vectors of the labeled data set by using the recognition model to form a vector set.
For the labeled data set A, record the number of distinct person identity labels as N and train a face recognition model M on the N classes of face data. For the N classes of labeled faces, select one representative face image as the standard image of each class, extract its face feature vector with the model M, and let the N feature vectors form the set C = {v_1, v_2, …, v_N}.
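As a minimal sketch of this step, the construction of the standard vector set C might look as follows. The patent does not give an implementation, so `model` is a hypothetical callable standing in for the trained recognition model M, and the L2 normalization is an assumption made so that cosine similarity later reduces to a dot product.

```python
import math

def build_standard_set(model, standard_images):
    """Build the vector set C: one feature vector per labeled identity.

    model           -- callable mapping a face image to a feature vector
                       (a hypothetical stand-in for the trained model M)
    standard_images -- dict {identity_label: representative face image}
    Returns {identity_label: L2-normalized feature vector}.
    """
    C = {}
    for label, image in standard_images.items():
        v = list(model(image))
        norm = math.sqrt(sum(x * x for x in v))
        # normalize to unit length so cosine similarity is just a dot product
        C[label] = [x / norm for x in v]
    return C
```

With an identity "model" for testing, `build_standard_set(lambda img: img, {"id1": [3.0, 4.0]})` yields the unit vector `[0.6, 0.8]` for `id1`.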
Divide the unlabeled data set U into k equal parts according to the preset identity labels, each part containing n face identities. For each of the k parts of unlabeled face data, the following steps S32 and S33 are performed in turn.
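The equal division of U can be sketched as below. This is an illustration only: the patent does not say how identities are assigned to parts, so the round-robin assignment here is an assumption.

```python
def split_into_parts(identity_labels, k):
    """Split the unlabeled identity labels of data set U into k near-equal parts.

    Each part plays the role of one 'equal division' L of the method;
    steps S32 and S33 are then run on each part in turn.
    """
    parts = [[] for _ in range(k)]
    for index, label in enumerate(identity_labels):
        parts[index % k].append(label)  # round-robin assignment (an assumption)
    return parts
```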
S32: and extracting a feature vector of each face image of the unmarked data set by using the identification model, and performing similarity calculation on the extracted face feature vector and each feature vector in the vector set to form a face data set to be marked.
Specifically, let L be the set of face identity labels in the current equal part, with size n. The face data set in this part is recorded as U' = {U_i}, where for each face identity label i ∈ L, U_i denotes the set of candidate face images of the person with identity i: U_i = {u_i^1, u_i^2, …}, with u_i^j denoting the j-th candidate face image of the person with identity i. The parameters w1, w2 and w3 are each between 0 and 1.
For each face image in U_i, extract a feature vector with the model M and compute its cosine distance to each feature vector in the set C, where dist(a, b) denotes the cosine distance between vectors a and b. Performing this computation for each U_i yields the distance set D = {D_i} corresponding to U'. From D, take the w1·n smallest and the w2·n largest distances and record the corresponding face data as U''; U'' is the face data set to be labeled selected in this step.
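The first-stage screening of step S32 can be sketched as follows. Note the patent does not specify how the per-image distances of an identity are aggregated into its entry D_i, so this sketch assumes D_i is the minimum cosine distance of any image of identity i to any standard vector; `U_features`, `select_to_label` and that aggregation are all illustrative choices, not part of the application.

```python
import math

def cosine_distance(a, b):
    """Cosine distance dist(a, b) = 1 - cos(a, b)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def select_to_label(U_features, C, w1, w2):
    """First-stage screening (step S32), as a sketch.

    U_features -- dict {identity i: list of feature vectors of its images}
    C          -- dict {labeled identity: standard feature vector}
    Scores each identity by the minimum cosine distance of any of its
    images to the standard set (an assumed aggregation), then keeps the
    w1*n smallest-distance and w2*n largest-distance identities as U''.
    """
    D = {i: min(cosine_distance(f, c) for f in feats for c in C.values())
         for i, feats in U_features.items()}
    ranked = sorted(D, key=D.get)  # ascending distance
    n = len(ranked)
    k_small, k_large = int(w1 * n), int(w2 * n)
    return ranked[:k_small] + (ranked[n - k_large:] if k_large else [])
```

For example, with one standard vector [1, 0] and four identities whose closest images lie at cosine distances 0, 0.4, 1 and 2, setting w1 = w2 = 0.25 keeps the most-similar and the most-dissimilar identity.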
The purpose of the cosine distance calculation is to select for labeling the face data whose cosine distance to the labeled data is small: when two inputs are too similar, they are hard for the model algorithm to distinguish, so extracting such unlabeled data and labeling it manually helps improve recognition accuracy. In the diversity detection, within the same face data set, the data whose computed distances have large variance are extracted for face labeling, which helps improve the diversity of recognition for the same face.
S33: and carrying out diversity detection on the face data set so as to extract data to be labeled from the face data set.
Specifically, in U'', for each class of face data U''_i, compute the cosine distance between every pair of its images, then compute the variance of these distances to form a data set U2. From U2, select the w3·[U2] groups with the largest variance and label those images manually, where [U2] denotes the size of U2. Finally, add the labeled images to the data set A.
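The second-stage screening of step S33 can be sketched as below, under the same caveat that the patent gives no implementation; `groups` and `diversity_select` are illustrative names.

```python
import math
from itertools import combinations

def cosine_distance(a, b):
    """Cosine distance dist(a, b) = 1 - cos(a, b)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def diversity_select(groups, w3):
    """Second-stage screening (step S33), as a sketch.

    groups -- dict {identity: list of feature vectors of its images in U''}
    For each identity, compute the pairwise cosine distances between its
    images and the variance of those distances; keep the w3*|U2| identities
    with the largest variance for manual labeling.
    """
    variances = {}
    for identity, feats in groups.items():
        dists = [cosine_distance(a, b) for a, b in combinations(feats, 2)]
        mean = sum(dists) / len(dists) if dists else 0.0
        variances[identity] = (sum((d - mean) ** 2 for d in dists) / len(dists)
                               if dists else 0.0)
    m = max(1, int(w3 * len(variances)))
    return sorted(variances, key=variances.get, reverse=True)[:m]
```

A group whose images are all identical has zero variance and is skipped, while a group with widely scattered images is selected for labeling, matching the rationale given above.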
Example two
Fig. 5 is a schematic structural diagram of an active learning apparatus suitable for face recognition model training according to an embodiment of the present invention. The active learning apparatus of the present embodiment includes a feature extraction module 51, a similarity calculation module 52, and an active learning module 53.
The feature extraction module 51 is configured to extract a face feature vector of the labeled data set by using the recognition model to form a vector set; the similarity calculation module 52 is configured to perform feature vector extraction on each face image of the unlabeled data set by using the recognition model, and perform similarity calculation on the extracted face feature vector and each feature vector in the vector set to form a face data set to be labeled; the active learning module 53 is configured to perform diversity detection on the face data set for retrieving data to be labeled therefrom.
Optionally, let the labeled data set be data set A, let the unlabeled data set be data set U, and let C be the vector set formed by extracting the face feature vectors of data set A with the recognition model. The similarity calculation module 52 is configured to: divide the data set U into a number of equal parts, each part L containing n face identities, so that the face data set is U' = {U_i}, i ∈ L, where n is a natural number greater than 0; and extract a feature vector from each face image in the face data U_i with the face recognition model and compute its similarity with each feature vector in the vector set C, to obtain a face data set U'' from which data are extracted for labeling.
It should be noted that the embodiment of the active learning apparatus for face recognition model training in this embodiment is similar to the embodiment of the active learning method for face recognition model training in the foregoing embodiment, and therefore is not described again.
It should be understood that the division of the modules of the above apparatus is only a logical division, and the actual implementation may be wholly or partially integrated into one physical entity, or may be physically separated. And these modules can be realized in the form of software called by processing element; or may be implemented entirely in hardware; and part of the modules can be realized in the form of calling software by the processing element, and part of the modules can be realized in the form of hardware. For example, the feature extraction module may be a processing element separately set up, or may be implemented by being integrated in a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code, and a processing element of the apparatus calls and executes the functions of the feature extraction module. Other modules are implemented similarly. In addition, all or part of the modules can be integrated together or can be independently realized. The processing element described herein may be an integrated circuit having signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs). For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor capable of calling program code. For another example, these modules may be integrated together and implemented in the form of a system-on-chip (SoC).
EXAMPLE III
Fig. 6 is a schematic structural diagram of an electronic terminal according to an embodiment of the present invention. This example provides an electronic terminal, includes: a processor 61, a memory 62, a communicator 63; the memory 62 is connected with the processor 61 and the communicator 63 through a system bus and completes mutual communication, the memory 62 is used for storing computer programs, the communicator 63 is used for communicating with other equipment, and the processor 61 is used for operating the computer programs, so that the electronic terminal executes the steps of the active learning method suitable for the face recognition model training.
The above-mentioned system bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus. The communication interface is used to realize communication between the database access device and other equipment (such as a client, a read-write library, and a read-only library). The memory may include random access memory (RAM) and may further include non-volatile memory, such as at least one disk memory.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Example four
In this embodiment, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the active learning method suitable for face recognition model training.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the above method embodiments may be performed by hardware associated with a computer program. The aforementioned computer program may be stored in a computer readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
In summary, the present application provides an active learning method, an active learning device, a terminal, and a medium suitable for training a face recognition model, and the present invention provides an active learning method for training a face recognition model, wherein data to be labeled is selected for manual labeling through two stages of screening, and the data is added into a training sample set to train the model. Therefore, the application effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the application. Any person skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (8)

1. An active learning method suitable for face recognition model training is characterized by comprising the following steps:
extracting face feature vectors from the labeled data set with the recognition model to form a vector set;
extracting a feature vector from each face image of the unlabeled data set with the recognition model, and computing the similarity between each extracted face feature vector and each feature vector in the vector set to form a face data set to be labeled;
and performing diversity detection on the face data set so as to retrieve the data to be labeled from it.
2. The method of claim 1, wherein:
let the labeled data set be data set A; let the unlabeled data set be data set U; let C be the vector set formed by extracting the face feature vectors of data set A with the recognition model; the method comprises the following steps:
dividing the data set U into a number of equal parts, each part L containing n face identities, so that the face data set is U' = {U_i}, i ∈ L, where n is a natural number greater than 0;
and extracting a feature vector from each face image in the face data U_i with the face recognition model and computing its similarity with each feature vector in the vector set C, to obtain a face data set U'' from which data are extracted for labeling.
3. The method of claim 2, wherein the face data set U'' is computed in a manner comprising:
extracting a feature vector from each face image in the face data U_i with the face recognition model and performing a cosine calculation with each feature vector in the vector set C to obtain a set of cosine distances;
and taking the w1·n smallest and the w2·n largest distances from the set of cosine distances and recording the corresponding face data as the face data set U''.
4. The method of claim 2, wherein the diversity detection of the face data set comprises:
for each class of face data U''_i in the face data set U'', computing the cosine distance between every pair of its images;
calculating distribution variance for the calculated cosine distances to form a data set U2
From the data set U2Middle capture maximum front w3*[U2]Manually labeling the group images; wherein [ U ]2]Represents U2The size of (2).
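A minimal sketch of this diversity check, under stated assumptions: each group holds the feature vectors of one face identity, the group's score is the variance of its pairwise cosine distances, and the top w3*|U2| highest-variance groups go to manual labeling. The function names are ours, not the patent's.

```python
from itertools import combinations
import numpy as np

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def group_variance(vectors):
    # Variance of all pairwise cosine distances within one identity's image group.
    dists = [cosine_distance(u, v) for u, v in combinations(vectors, 2)]
    return float(np.var(dists))

def pick_diverse_groups(groups, w3):
    # groups: per-identity lists of feature vectors (the set U2).
    scores = [group_variance(g) for g in groups]
    k = int(w3 * len(groups))                   # w3 * |U2| groups
    return list(np.argsort(scores)[::-1][:k])   # highest variance first
```

A group of near-identical images has variance close to zero and is skipped; a group whose images scatter widely in feature space scores high and is routed to a human annotator.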
5. An active learning device suitable for face recognition model training, comprising:
the feature extraction module is used for extracting the face feature vectors of the labeled data set by using the recognition model to form a vector set;
the similarity calculation module is used for extracting a feature vector of each face image of the unlabeled data set by using the recognition model, and performing similarity calculation between the extracted face feature vector and each feature vector in the vector set to form a face data set to be labeled;
and the active learning module is used for performing diversity detection on the face data set so as to retrieve data to be labeled from the face data set.
6. The learning apparatus according to claim 5, characterized in that:
let the labeled data set be data set A; let the unlabeled data set be data set U; let C be the vector set formed by extracting the face feature vectors of data set A with the recognition model; the similarity calculation module is used for:
dividing the data set U into L equal parts, each part comprising n face identities, so that the face data set is U' = {U_i, i ∈ L}, wherein n and L are both natural numbers greater than 0;
and extracting a feature vector from each face image in the face data U_i by using the face recognition model, and performing similarity calculation with each feature vector in the vector set C, to obtain the face data set U″ from which data are extracted for labeling.
7. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the active learning method suitable for face recognition model training according to any one of claims 1 to 4.
8. An electronic terminal, comprising: a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory to cause the terminal to perform the active learning method adapted for face recognition model training according to any one of claims 1 to 4.
CN201910988214.7A 2019-10-17 2019-10-17 Active learning method, device, terminal and medium suitable for face recognition model training Pending CN110765917A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910988214.7A CN110765917A (en) 2019-10-17 2019-10-17 Active learning method, device, terminal and medium suitable for face recognition model training

Publications (1)

Publication Number Publication Date
CN110765917A true CN110765917A (en) 2020-02-07

Family

ID=69332271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910988214.7A Pending CN110765917A (en) 2019-10-17 2019-10-17 Active learning method, device, terminal and medium suitable for face recognition model training

Country Status (1)

Country Link
CN (1) CN110765917A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679451A (en) * 2017-08-25 2018-02-09 百度在线网络技术(北京)有限公司 Establish the method, apparatus, equipment and computer-readable storage medium of human face recognition model
CN109818929A (en) * 2018-12-26 2019-05-28 天翼电子商务有限公司 Based on the unknown threat cognitive method actively from step study, system, storage medium, terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHANGGUAN, FANGFANG: "Research on Facial Expression Recognition Based on Active Learning", China Doctoral and Master's Dissertations Full-text Database, Information Science and Technology Series *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766347A (en) * 2021-01-12 2021-05-07 合肥黎曼信息科技有限公司 Active learning method combining labeling quality control
CN113221747A (en) * 2021-05-13 2021-08-06 支付宝(杭州)信息技术有限公司 Privacy data processing method, device and equipment based on privacy protection
CN113221747B (en) * 2021-05-13 2022-04-29 支付宝(杭州)信息技术有限公司 Privacy data processing method, device and equipment based on privacy protection

Similar Documents

Publication Publication Date Title
US20200356818A1 (en) Logo detection
US9349076B1 (en) Template-based target object detection in an image
WO2020244075A1 (en) Sign language recognition method and apparatus, and computer device and storage medium
WO2022247005A1 (en) Method and apparatus for identifying target object in image, electronic device and storage medium
WO2019147413A1 (en) Face synthesis
CN110765882B (en) Video tag determination method, device, server and storage medium
CN111461164B (en) Sample data set capacity expansion method and model training method
CN113379786B (en) Image matting method, device, computer equipment and storage medium
WO2019119396A1 (en) Facial expression recognition method and device
CN112990318B (en) Continuous learning method, device, terminal and storage medium
CN113221918B (en) Target detection method, training method and device of target detection model
US20230021551A1 (en) Using training images and scaled training images to train an image segmentation model
CN111178196B (en) Cell classification method, device and equipment
CN110765917A (en) Active learning method, device, terminal and medium suitable for face recognition model training
CN111062440A (en) Sample selection method, device, equipment and storage medium
CN113516029A (en) Image crowd counting method, device, medium and terminal based on partial annotation
CN114723652A (en) Cell density determination method, cell density determination device, electronic apparatus, and storage medium
CN117197479A (en) Image analysis method, device, computer equipment and storage medium applying corn ear outer surface
WO2023109086A1 (en) Character recognition method, apparatus and device, and storage medium
CN108229498B (en) Zipper piece identification method, device and equipment
CN111767710B (en) Indonesia emotion classification method, device, equipment and medium
CN110287943B (en) Image object recognition method and device, electronic equipment and storage medium
CN112699908A (en) Method for labeling picture, electronic terminal, computer readable storage medium and equipment
CN114911963B (en) Template picture classification method, device, equipment, storage medium and product
CN116071625B (en) Training method of deep learning model, target detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200207
