CN113688764A - Training method and device for face optimization model and computer readable medium - Google Patents


Info

Publication number: CN113688764A
Application number: CN202111011611.2A
Authority: CN (China)
Prior art keywords: face, optimization model, training, matching, face image
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 刘淼, 林恒杰, 钟子恒
Current and original assignee: Lusheng Technology Co ltd (the listed assignee may be inaccurate)
Application filed by Lusheng Technology Co ltd; priority to CN202111011611.2A; publication of CN113688764A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval of still image data
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583: Retrieval using metadata automatically derived from the content
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods


Abstract

The invention provides a training method, an apparatus, and a computer readable medium for a face optimization model. The method comprises: obtaining a face image database and a face feature map database, and selecting face images from the face image database as input to a face recognition model; obtaining the result of matching the output of the face recognition model against the face feature map database; determining training data for the face optimization model according to the matching result; and training the face optimization model based on the training data. The method enables the face optimization model to adaptively screen face images, improving the recognition accuracy of the downstream face recognition network.

Description

Training method and device for face optimization model and computer readable medium
Technical Field
The invention relates mainly to the field of artificial intelligence, and in particular to a training method and apparatus for a face optimization model and a computer readable medium.
Background
With the development of artificial intelligence, its applications in computer vision have matured, especially in the security field, where face recognition has become one of the most important requirements.
Face recognition in security scenarios is more challenging than in other settings: faces in surveillance images are very small, and because subjects move freely when images are captured, face poses and expressions are complex. Security scenes may also suffer from complex illumination, occlusion, and similar problems, all of which can severely degrade recognition, causing faces to go unrecognized or to be recognized incorrectly. Face optimization is therefore introduced: in application scenarios such as security, a face optimization model screens the quality of the faces sent to the recognition network, preventing faces with extreme poses, heavy occlusion, low illumination, or poor image quality from being fed in, causing recognition errors and low recognition accuracy.
Disclosure of Invention
The technical problem addressed by the invention is to provide a training method, an apparatus, and a computer readable medium for a face optimization model, so that the face optimization model can adaptively screen face images and improve the recognition accuracy of the downstream face recognition network.
To solve this problem, the invention provides a training method for a face optimization model, comprising: obtaining a face image database and a face feature map database, and selecting face images from the face image database as input to a face recognition model; obtaining the result of matching the output of the face recognition model against the face feature map database; determining training data for the face optimization model according to the matching result; and training the face optimization model based on the training data.
In an embodiment of the invention, the result of matching the output of the face recognition model against the face feature map database includes whether the ID of the face image is matched correctly and whether the matching degree of the face image reaches or exceeds a set threshold.
In an embodiment of the invention, determining the training data of the face optimization model according to the matching result includes:
taking face images whose ID is matched correctly and whose matching degree reaches or exceeds the set threshold as positive example training data;
taking face images whose ID is matched correctly but whose matching degree does not reach the set threshold as negative example training data;
taking face images whose ID is matched incorrectly but whose matching degree reaches or exceeds the set threshold as negative example training data;
and taking face images whose ID is matched incorrectly and whose matching degree does not reach the set threshold as negative example training data.
In an embodiment of the invention, training the face optimization model based on the training data includes training it on both the positive example and the negative example training data.
In an embodiment of the invention, the ID-matching process for a face image includes: matching the input face image against the images in the face feature map database; obtaining the face image in the database with the highest matching degree to the input image; and checking whether the ID of that best-matching image is the same as the ID of the input face image.
In an embodiment of the invention, the value of the matching degree is computed by a matching algorithm.
In an embodiment of the invention, the method further includes determining the loss function of the face optimization model as a symmetric cross entropy and optimizing the face optimization model based on that loss function.
In an embodiment of the invention, the loss function is the symmetric cross entropy

ℓ_sce = α·ℓ_ce + β·ℓ_rce

where the cross entropy is

ℓ_ce = -(1/K) Σ_{k=1}^{K} q_k log p_k

and the reverse cross entropy is

ℓ_rce = -(1/K) Σ_{k=1}^{K} p_k log q_k

α and β are hyper-parameters, p is the predicted value, q is the label value, and K is the batch size of the training data.
In an embodiment of the present invention, the face optimization model includes a convolutional neural network.
The invention also provides a training apparatus for the face optimization model, comprising: a memory for storing instructions executable by a processor; and a processor for executing the instructions to implement any of the methods described above.
The invention also provides a computer readable medium having stored thereon computer program code which, when executed by a processor, implements any of the methods described above.
Compared with the prior art, the invention has the following advantages: the training method allows the face optimization model to generalize over face pose, illumination, occlusion, and similar factors, and to filter out pictures that have good face quality yet cannot be recognized by the recognition network. This directly improves face recognition accuracy and gives the method good adaptability to the recognition network.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and constitute a part of it, illustrate embodiments of the application and together with the description serve to explain its principles. In the drawings:
fig. 1 is a flowchart of a training method of a face optimization model according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a training process of a face optimization model according to an embodiment of the present application.
Fig. 3 is an overall flow diagram of face recognition.
Fig. 4 is a schematic diagram of a training apparatus for a face optimization model according to an embodiment of the present application.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the application, the drawings used in their description are briefly introduced below. The drawings described are only examples or embodiments of the application, from which a person skilled in the art can, without inventive effort, apply the application to other similar scenarios. Unless otherwise apparent from the context or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
Furthermore, it should be noted that the terms "first", "second", etc. are used only for convenience, to distinguish the corresponding components or assemblies; unless stated otherwise they carry no special meaning and should not be construed as limiting the scope of protection of the application. In addition, although the terms used in the application are selected from publicly known terms, some of them may have been chosen by the applicant at their discretion, and their detailed meanings are described in the relevant parts of the description. The application should therefore be understood not only through the literal terms used but also through the meaning each term carries.
Flow charts are used herein to illustrate the operations performed by systems according to embodiments of the application. It should be understood that these operations are not necessarily performed in the exact order shown; steps may instead be processed in reverse order or simultaneously, and other operations may be added to or removed from these processes.
The embodiments of the application describe a training method, an apparatus, and a computer readable medium for a face optimization model.
As mentioned above, face optimization screens the quality of the faces sent into the recognition network, preventing faces with extreme poses, heavy occlusion, low illumination, or poor image quality from being fed in and causing recognition errors and low recognition accuracy.
Beyond the complexity of the images themselves, such as those produced by security scenes, the algorithms impose limitations of their own. Specifically, because current neural networks have low interpretability, no explicit derivation process or analytic solution can be obtained theoretically, so some behaviors of a neural network (such as its output results) are difficult to explain. In face recognition this includes faces fed into the network whose pose, illumination, occlusion, and image quality are all standard (for example frontal, unoccluded face images) yet which the network still fails to recognize accurately. One reason is that the network design cannot generalize fully: constrained by the computing power of local devices, the neural network cannot be made too large.
Existing face optimization networks, however, can only screen out pictures whose face pose, illumination, and image quality are relatively good, and thus cannot solve these problems well, while a considerable part of the difficulty of face recognition tasks in security scenarios arises inside the algorithm itself. This is therefore an urgent problem for face recognition in security scenarios, and the technical scheme of the application solves it by training and optimizing the face optimization model (also called the face optimization network).
Fig. 1 is a flowchart of a training method of a face optimization model according to an embodiment of the present application.
As shown in fig. 1, the training method of the face optimization model includes: step 101, obtaining a face image database and a face feature map database, and selecting face images from the face image database as input to a face recognition model; step 102, obtaining the result of matching the output of the face recognition model against the face feature map database; step 103, determining training data for the face optimization model according to the matching result; and step 104, training the face optimization model based on the training data.
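The four steps above can be sketched as a data-collection loop. This is an illustrative sketch, not code from the patent: the function names and the threshold value are assumed placeholders.

```python
# Hypothetical sketch of steps 101-104; all names and the threshold value
# are illustrative placeholders, not identifiers from the patent.

THRESHOLD = 0.6  # assumed value; the patent leaves the threshold to the implementer

def build_training_data(face_images, recognize, match_against_db):
    """Run each face image through the recognition model (step 101),
    match its output against the feature-map database (step 102), and
    label it for the face optimization model (step 103)."""
    training_data = []
    for image in face_images:
        features = recognize(image)                     # step 101
        id_correct, score = match_against_db(features)  # step 102
        # step 103: only correct-ID, above-threshold matches are positives
        label = 1 if (id_correct and score >= THRESHOLD) else 0
        training_data.append((image, label))
    return training_data  # step 104 trains the optimization model on this
```

Step 104 would then feed the resulting labeled set into whatever optimizer trains the face optimization network.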
Fig. 2 is a schematic diagram of a training process of a face optimization model according to an embodiment of the present application.
Referring to fig. 2, in step 101 a face image database and a face feature map database are obtained, and face images from the face image database are selected and input into the face recognition model 203. The input face image data is denoted 201 in fig. 2. In some embodiments, inputting a face image into the face recognition model yields the corresponding face feature map, which comprises the feature parameters characterizing that image. The feature-map data is stored, for example, as a matrix, which can be converted into a visualization of the feature map.
In some embodiments, the face images in the face image database have corresponding IDs, as do the entries in the face feature map database. The face image database may be, for example, a training data set that can also be used for the face recognition model (or face recognition network).
In step 102, the result of matching the output of the face recognition model against the face feature map database is obtained.
In some embodiments, a matching calculation between the output of the face recognition model and the face feature map database yields a result that includes whether the ID of the face image is matched correctly and whether the matching degree of the face image reaches or exceeds a set threshold.
In some embodiments, the value of the matching degree is derived by a matching algorithm.
In step 103, the training data of the face optimization model is determined according to the matching result.
In some embodiments, determining the training data of the face optimization model according to the matching result comprises: taking face images whose ID is matched correctly and whose matching degree reaches or exceeds the set threshold as positive example training data 212; taking face images whose ID is matched correctly but whose matching degree does not reach the set threshold as negative example training data 214; taking face images whose ID is matched incorrectly but whose matching degree reaches or exceeds the set threshold as negative example training data 216; and taking face images whose ID is matched incorrectly and whose matching degree does not reach the set threshold as negative example training data 218.
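The four cases can be written as an explicit mapping. A minimal sketch with assumed names; the reference numerals from fig. 2 are noted in comments.

```python
def assign_label(id_correct: bool, above_threshold: bool) -> int:
    """Map the four matching outcomes to labels for the face
    optimization model: 1 = positive example, 0 = negative example."""
    if id_correct and above_threshold:
        return 1  # positive example training data (212)
    if id_correct and not above_threshold:
        return 0  # negative: correct ID, low matching degree (214)
    if not id_correct and above_threshold:
        return 0  # negative: wrong ID, high matching degree (216)
    return 0      # negative: wrong ID, low matching degree (218)
```

Only one of the four quadrants is positive, which is what gives the optimization model its sharper notion of "recognizable by the downstream network".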
In step 104, the face optimization model is trained based on the training data.
With continued reference to fig. 2, in some embodiments training the face optimization model based on the training data includes: forming the training data 221 of the face optimization model from the positive example training data 311 and the negative example training data 312, and inputting it into the face optimization model 208 for training.
In some embodiments, the ID-matching process for a face image includes: step 231, matching the input face image against images in the face feature map database; step 232, obtaining the face image in the database with the highest matching degree to the input image; and step 233, checking whether the ID of that best-matching image is the same as the ID of the input face image.
In some embodiments, the face feature map database may contain feature parameters of face images obtained, for example, by means of a face detection algorithm, a face tracking algorithm, a facial feature point detection algorithm, and the like. The feature parameters are represented, for example, as vectors.
As described above, the matching degree of a face image is computed, for example, by a matching algorithm such as the Euclidean distance or cosine distance between feature vectors. The set threshold can be a numerical parameter chosen according to actual needs.
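Steps 231-233 combined with a cosine matching degree can be sketched as follows. The `feature_db` structure and the function names are assumptions for illustration, not from the patent.

```python
import numpy as np

def cosine_similarity(a, b):
    """Matching degree as the cosine similarity of two feature vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_id(query_vec, query_id, feature_db):
    """feature_db maps face ID -> feature vector. Find the database entry
    with the highest matching degree (steps 231-232) and compare its ID
    with the query's own ID (step 233). Returns (id_correct, best score)."""
    best_id, best_score = None, -1.0
    for face_id, vec in feature_db.items():
        score = cosine_similarity(query_vec, vec)
        if score > best_score:
            best_id, best_score = face_id, score
    return best_id == query_id, best_score
```

The returned pair is exactly the matching result used to label training data: ID correctness plus a matching degree to compare against the set threshold.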
Fig. 3 is an overall flow diagram of face recognition. The face optimization module 303 screens face images before they proceed to the face recognition module 308.
Referring to fig. 3, the overall face recognition flow further includes, for example, a face image data input module 332, a face detection module 334, a face tracking module 336, and a face cropping module 338. The face detection module 334, the face tracking module 336, and the face cropping module 338 belong collectively to, for example, the face image data processing module 341.
The training method of the face optimization model (which may also be called a knowledge-distillation-based face optimization method) differs from existing face optimization schemes, which use a fixed prior strategy (for example, screening based on human visual judgment): the technical scheme of the application instead performs a posterior estimation for the face optimization model based on the prior of the face recognition model. Specifically, conventional face optimization algorithms select images that people judge by eye to have a frontal pose, clear image quality, no occlusion, and standard illumination; this human prior assumes that such standard faces are recognizable by the recognition model. In practice, owing to the black-box nature of neural networks, many standard faces still cannot be recognized by the face recognition network. The training method of the application solves these problems well and achieves the technical effect of improving the recognition rate of the face recognition network.
Specifically, the training method enables the face optimization model not only to filter out face images of low quality, but also to filter out pictures that the human prior considers to be of good quality yet which cannot be identified by the face recognition network. Screening is thereby tailored to the downstream face recognition network, greatly improving the adaptability and accuracy of the recognition task. The method also greatly reduces the economic and time cost of developing the face optimization model: no new face training data set needs to be built and no network structure redesigned, since the existing training data set (such as that of the face recognition model) suffices to train the face optimization model and thereby improve the recognition accuracy of the face recognition network.
In addition, the method greatly improves processing efficiency: image selection and processing are extremely fast, face images that fit the downstream face recognition network can be screened out with very little computation, and the number of futile recognition attempts in the face recognition network is greatly reduced.
Based on human prior knowledge (specifically, human visual judgment), a face would be labeled a positive example if its pose is frontal, it is well illuminated, and it has no blurred regions, and a negative example otherwise. The training data obtained by the present technical scheme is not the same: the positive training data contains some profile faces and some over-illuminated faces, while the negative training data contains some frontal, well-illuminated face images. The different classes of training data therefore overlap to a certain extent.
Therefore, in some embodiments of the application, to make the decision boundary of the face optimization model clearer, the training method further includes determining the loss function of the model as a Symmetric Cross Entropy (SCE) and optimizing the face optimization model based on that loss function.
In some embodiments, the loss function is the symmetric cross entropy

ℓ_sce = α·ℓ_ce + β·ℓ_rce

where the cross entropy is

ℓ_ce = -(1/K) Σ_{k=1}^{K} q_k log p_k

and the reverse cross entropy is

ℓ_rce = -(1/K) Σ_{k=1}^{K} p_k log(q_k + ε)

α and β are hyper-parameters, p is the predicted value, q is the label value, K is the batch size of the training data of the face optimization model, k is the summation index, and ε is a weight adjustment parameter.
p is a predicted value, specifically a probability (a value between 0 and 1); it may, for example, reflect the ratio of positive to negative examples in the training data of the face optimization model. q is a label value characterizing a positive or negative example: with one-hot encoding, for instance, "10" denotes a positive example and "01" a negative example, and the value of q is obtained by combining this encoding with the corresponding positive/negative ratio. ε is a weight adjustment parameter that keeps the weight of log q from becoming too large, e.g. 0.0001. A small value greater than zero may also be set as a lower bound for q, so that log q does not diverge to negative infinity.
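The symmetric cross entropy described above can be sketched in NumPy, averaged over a batch of K samples. The default values of α, β, and ε are illustrative, not prescribed by the patent.

```python
import numpy as np

def symmetric_cross_entropy(p, q, alpha=1.0, beta=1.0, eps=1e-4):
    """l_sce = alpha * l_ce + beta * l_rce over a batch of K samples.
    p: predicted probabilities, shape (K, classes); q: one-hot labels.
    eps lower-bounds the argument of log q so it stays finite for the
    zero entries of the one-hot labels."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0)
    q = np.asarray(q, dtype=float)
    ce = -np.mean(np.sum(q * np.log(p), axis=1))         # cross entropy
    rce = -np.mean(np.sum(p * np.log(q + eps), axis=1))  # reverse cross entropy
    return alpha * ce + beta * rce
```

Because the reverse term weights log q by the prediction p, confident disagreement with a label is penalized from both directions, which is what sharpens the decision boundary when the positive and negative classes overlap.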
In some embodiments, the face optimization model includes a convolutional neural network. The specific structure of the convolutional neural network can be designed and constructed as required.
The application also provides a training device of the face optimization model, which comprises: a memory for storing instructions executable by the processor; and a processor for executing the instructions to implement the method as previously described.
Fig. 4 is a schematic diagram of a training apparatus for the face optimization model according to an embodiment of the application. The training apparatus 400 may include an internal communication bus 401, a processor 402, a read only memory (ROM) 403, a random access memory (RAM) 404, and a communication port 405. The apparatus connects to the network through the communication port and can connect to other devices. The internal communication bus 401 enables data communication among the components of the apparatus 400. The processor 402 makes determinations and issues prompts; in some embodiments it may consist of one or more processors. The communication port 405 enables sending and receiving information and data over the network. The apparatus 400 may also include various forms of program storage and data storage units, such as the ROM 403 and RAM 404, capable of storing data files used for computer processing and/or communication, as well as program instructions executed by the processor 402. The processor executes these instructions to implement the main parts of the method; the results of processing may be communicated to the user device via the communication port for display on the user interface.
The training apparatus 400 may be implemented as a computer program stored in the memory and loaded into the processor 402 for execution, so as to implement the training method of the face optimization model of the application.
The present application also provides a computer readable medium having stored thereon computer program code which, when executed by a processor, implements a method of training a face optimization model as described above.
Aspects of the application may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of the two; the hardware or software may be referred to as a "data block", "module", "engine", "unit", "component", or "system". The processor may be one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, or a combination thereof. Furthermore, aspects of the application may take the form of a computer product, embodied in one or more computer readable media and containing computer readable program code. Computer-readable media may include, but are not limited to, magnetic storage devices (e.g., hard disks, floppy disks, magnetic tape), optical disks (e.g., compact disk (CD), digital versatile disk (DVD)), smart cards, and flash memory devices (e.g., cards, sticks, key drives).
The computer readable medium may comprise a propagated data signal with the computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, and the like, or any suitable combination. The computer readable medium can be any computer readable medium that can communicate, propagate, or transport the program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer readable medium may be propagated over any suitable medium, including radio, electrical cable, fiber optic cable, radio frequency signals, or the like, or any combination of the preceding.
Similarly, it should be noted that in the preceding description of embodiments, various features are sometimes grouped together in a single embodiment, figure, or description in order to streamline the disclosure and aid the understanding of one or more embodiments. This method of disclosure, however, does not imply that more features are required than are expressly recited in the claims; indeed, an embodiment may have fewer than all the features of a single embodiment disclosed above.
Although the application has been described with reference to specific embodiments, those skilled in the art will recognize that the foregoing embodiments are merely illustrative, and that various changes and equivalent substitutions may be made without departing from the spirit of the application. All changes and modifications to the above-described embodiments that come within the spirit of the application fall within the scope of its claims.

Claims (11)

1. A training method of a face optimization model comprises the following steps:
acquiring a face image database and a face feature database, and selecting a face image in the face image database to input into a face recognition model;
obtaining a result of matching the output quantity of the face recognition model with the face feature map database;
determining training data of the face optimization model according to the matching result;
and training the face optimization model based on the training data.
2. The training method of the face optimization model according to claim 1, wherein the results of the matching between the output quantity of the face recognition model and the face feature map database include whether the ID of the face image is correctly matched and whether the matching degree of the face image reaches or exceeds a set threshold.
3. The method for training a face optimization model according to claim 2, wherein determining the training data of the face optimization model according to the matching result comprises:
taking a face image whose ID is matched correctly and whose matching degree reaches or exceeds the set threshold as positive example training data;
taking a face image whose ID is matched correctly but whose matching degree does not reach the set threshold as negative example training data;
taking a face image whose ID is matched incorrectly but whose matching degree reaches or exceeds the set threshold as negative example training data;
and taking a face image whose ID is matched incorrectly and whose matching degree does not reach the set threshold as negative example training data.
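The labeling rule of claims 2-3 reduces to a single predicate: only a correct ID match combined with a matching degree at or above the set threshold yields a positive example; every other combination is a negative example. A minimal sketch (the function name, signature, and threshold value are illustrative, not part of the claims):

```python
def label_training_example(id_correct: bool, match_score: float,
                           threshold: float) -> str:
    """Assign a positive/negative label per the rule in claim 3.

    Only a sample whose ID matches correctly AND whose matching
    degree reaches or exceeds the threshold is a positive example;
    all three remaining combinations are negative examples.
    """
    if id_correct and match_score >= threshold:
        return "positive"
    return "negative"
```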
4. The training method of the face optimization model according to claim 1, wherein training the face optimization model based on the training data comprises:
and training the face optimization model based on the positive example training data and the negative example training data.
5. The training method of the face optimization model according to claim 2, wherein the process of ID matching of the face image comprises:
matching the input face image against the images in the face feature map database;
obtaining the image in the face feature map database with the highest matching degree to the input face image;
and detecting whether the ID of that highest-matching image is the same as the ID of the input face image.
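The ID-matching steps of claim 5 can be sketched as a nearest-neighbor search over the feature database. Cosine similarity is an assumed choice of matching degree here (claim 6 only requires "a matching algorithm"), and all names in the sketch are hypothetical:

```python
import numpy as np

def id_match(query_feature, query_id, gallery_features, gallery_ids):
    """Claim 5 sketch: find the gallery image with the highest matching
    degree and check whether its ID equals the input image's ID.

    query_feature: feature vector of the input face image, shape (D,)
    gallery_features: feature vectors from the database, shape (N, D)
    Returns (ids_equal, best_score).
    """
    q = query_feature / np.linalg.norm(query_feature)
    g = gallery_features / np.linalg.norm(gallery_features, axis=1,
                                          keepdims=True)
    scores = g @ q                  # matching degree against each gallery image
    best = int(np.argmax(scores))   # index of the highest-matching image
    return gallery_ids[best] == query_id, float(scores[best])
```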
6. The training method of the face optimization model according to claim 2, wherein the value of the matching degree is obtained by a matching algorithm.
7. The training method of the face optimization model according to claim 1, further comprising determining a loss function of the face optimization model using symmetric cross entropy, and optimizing the face optimization model based on the loss function.
8. The method of claim 7, wherein the loss function comprises the symmetric cross entropy

$\ell_{sce} = \alpha\,\ell_{ce} + \beta\,\ell_{rce}$

wherein the cross entropy is

$\ell_{ce} = -\frac{1}{K}\sum_{k=1}^{K} q_k \log p_k$

and the reverse cross entropy is

$\ell_{rce} = -\frac{1}{K}\sum_{k=1}^{K} p_k \log q_k$

where $\alpha$ and $\beta$ are hyper-parameters, $p$ is the predicted value, $q$ is the label value, and $K$ is the batch size of the training data.
9. The method of claim 1, wherein the face optimization model comprises a convolutional neural network.
10. A training device for a face optimization model comprises:
a memory for storing instructions executable by the processor; and
a processor for executing the instructions to implement the method of any one of claims 1-9.
11. A computer-readable medium having stored thereon computer program code which, when executed by a processor, implements the method of any of claims 1-9.
CN202111011611.2A 2021-08-31 2021-08-31 Training method and device for face optimization model and computer readable medium Pending CN113688764A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111011611.2A CN113688764A (en) 2021-08-31 2021-08-31 Training method and device for face optimization model and computer readable medium


Publications (1)

Publication Number Publication Date
CN113688764A true CN113688764A (en) 2021-11-23

Family

ID=78584405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111011611.2A Pending CN113688764A (en) 2021-08-31 2021-08-31 Training method and device for face optimization model and computer readable medium

Country Status (1)

Country Link
CN (1) CN113688764A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107423678A (en) * 2017-05-27 2017-12-01 电子科技大学 A kind of training method and face identification method of the convolutional neural networks for extracting feature
CN107992844A (en) * 2017-12-14 2018-05-04 合肥寰景信息技术有限公司 Face identification system and method based on deep learning
CN108829900A (en) * 2018-07-31 2018-11-16 成都视观天下科技有限公司 A kind of Research on face image retrieval based on deep learning, device and terminal
US20190065906A1 (en) * 2017-08-25 2019-02-28 Baidu Online Network Technology (Beijing) Co., Ltd . Method and apparatus for building human face recognition model, device and computer storage medium
CN110110650A (en) * 2019-05-02 2019-08-09 西安电子科技大学 Face identification method in pedestrian
CN110688901A (en) * 2019-08-26 2020-01-14 苏宁云计算有限公司 Face recognition method and device
CN111414858A (en) * 2020-03-19 2020-07-14 北京迈格威科技有限公司 Face recognition method, target image determination method, device and electronic system
CN112507833A (en) * 2020-11-30 2021-03-16 北京百度网讯科技有限公司 Face recognition and model training method, device, equipment and storage medium
CN112633051A (en) * 2020-09-11 2021-04-09 博云视觉(北京)科技有限公司 Online face clustering method based on image search
CN113177533A (en) * 2021-05-28 2021-07-27 济南博观智能科技有限公司 Face recognition method and device and electronic equipment


Similar Documents

Publication Publication Date Title
Wang et al. Adaptive fusion for RGB-D salient object detection
CN111444881A (en) Fake face video detection method and device
Sharma et al. Brain tumor segmentation using genetic algorithm and artificial neural network fuzzy inference system (ANFIS)
EP3136292A1 (en) Method and device for classifying an object of an image and corresponding computer program product and computer-readable medium
CN112507990A (en) Video time-space feature learning and extracting method, device, equipment and storage medium
Chen et al. Occlusion-aware face inpainting via generative adversarial networks
CN110827265B (en) Image anomaly detection method based on deep learning
CN113674288B (en) Automatic segmentation method for digital pathological image tissue of non-small cell lung cancer
Lu et al. Multi-object detection method based on YOLO and ResNet hybrid networks
CN111611849A (en) Face recognition system for access control equipment
CN111695462A (en) Face recognition method, face recognition device, storage medium and server
CN112069887A (en) Face recognition method, face recognition device, terminal equipment and storage medium
CN111488805A (en) Video behavior identification method based on saliency feature extraction
US20230229897A1 (en) Distances between distributions for the belonging-to-the-distribution measurement of the image
Wu et al. Cascaded fully convolutional DenseNet for automatic kidney segmentation in ultrasound images
CN114066987A (en) Camera pose estimation method, device, equipment and storage medium
Alsanad et al. Real-time fuel truck detection algorithm based on deep convolutional neural network
Niu et al. Boundary-aware RGBD salient object detection with cross-modal feature sampling
CN113688764A (en) Training method and device for face optimization model and computer readable medium
US20230297823A1 (en) Method and system for training a neural network for improving adversarial robustness
CN112632601B (en) Crowd counting method for subway carriage scene
Huang et al. Progressive context-aware dynamic network for salient object detection in optical remote sensing images
CN115240647A (en) Sound event detection method and device, electronic equipment and storage medium
CN112989869B (en) Optimization method, device, equipment and storage medium of face quality detection model
CN110555342B (en) Image identification method and device and image equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination