CN111291685B - Training method and device for face detection model - Google Patents


Info

Publication number
CN111291685B
CN111291685B (application CN202010084622.2A)
Authority
CN
China
Prior art keywords
skin tone
type
face
detection model
skin
Prior art date
Legal status
Active
Application number
CN202010084622.2A
Other languages
Chinese (zh)
Other versions
CN111291685A (en)
Inventor
徐崴
Current Assignee
Alipay Labs Singapore Pte Ltd
Original Assignee
Alipay Labs Singapore Pte Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Labs Singapore Pte Ltd filed Critical Alipay Labs Singapore Pte Ltd
Priority to CN202010084622.2A
Publication of application CN111291685A
Application granted
Publication of grant CN111291685B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

One or more embodiments of the present disclosure provide a training method and apparatus for a face detection model, addressing two problems in the prior art: sample face pictures of some skin color types are difficult to obtain, and face detection models trained on them are inaccurate. The method comprises the following steps. A plurality of first sample face pictures of a first skin color type are acquired. First position information corresponding to a designated part area in the first sample face pictures is determined. The designated part area is rendered based on the first position information to obtain second sample face pictures of a second skin color type. A target face detection model is trained based on the second sample face pictures. The target face detection model is used for face detection of users of the second skin color type.

Description

Training method and device for face detection model
Technical Field
The present document relates to the field of model training and data processing technologies, and in particular, to a training method and device for a face detection model.
Background
With the development of deep learning, face recognition algorithms have matured and been deployed at scale in many domestic scenarios (such as security, payment, and identity authentication), enabling services such as face-recognition login, face-recognition payment, and face-recognition real-name authentication. However, as internationalization strategies advance, international business scenarios increasingly involve multiple face types that differ substantially from one another; for example, the facial appearance of African users differs greatly from that of Chinese users. As a result, an existing face liveness detection algorithm trained on a single face type (for example, face data containing only medium-skin-tone faces) is inaccurate, and produces frequent false interceptions when applied to such business scenarios.
Disclosure of Invention
In one aspect, one or more embodiments of the present disclosure provide a training method for a face detection model, comprising: acquiring a plurality of first sample face pictures of a first skin color type; determining first position information corresponding to a designated part area in the first sample face pictures; rendering the designated part area based on the first position information to obtain second sample face pictures of a second skin color type; and training a target face detection model based on the second sample face pictures, wherein the target face detection model is used for face detection of users of the second skin color type.
In another aspect, one or more embodiments of the present disclosure provide a training apparatus for a face detection model, comprising: an acquisition module that acquires a plurality of first sample face pictures of a first skin color type; a determining module that determines first position information corresponding to a designated part area in the first sample face pictures; a rendering module that renders the designated part area based on the first position information to obtain second sample face pictures of a second skin color type; and a training module that trains a target face detection model based on the second sample face pictures, wherein the target face detection model is used for face detection of users of the second skin color type.
In yet another aspect, one or more embodiments of the present specification provide a training device for a face detection model, comprising a processor and a memory arranged to store computer-executable instructions that, when executed, cause the processor to: acquire a plurality of first sample face pictures of a first skin color type; determine first position information corresponding to a designated part area in the first sample face pictures; render the designated part area based on the first position information to obtain second sample face pictures of a second skin color type; and train a target face detection model based on the second sample face pictures, wherein the target face detection model is used for face detection of users of the second skin color type.
In yet another aspect, one or more embodiments of the present description provide a storage medium storing computer-executable instructions that, when executed, implement the following flow: acquiring a plurality of first sample face pictures of a first skin color type; determining first position information corresponding to a designated part area in the first sample face pictures; rendering the designated part area based on the first position information to obtain second sample face pictures of a second skin color type; and training a target face detection model based on the second sample face pictures, wherein the target face detection model is used for face detection of users of the second skin color type.
Drawings
To describe the technical solutions of one or more embodiments of the present specification or the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings described below are only some of the embodiments of the present specification; a person of ordinary skill in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a training method of a face detection model according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a training method of a face detection model according to another embodiment of the present description;
fig. 3 (a) to 3 (b) are schematic face analysis diagrams in a training method of a face detection model according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of skin color rendering results in a training method of a face detection model according to an embodiment of the present disclosure;
FIG. 5 is a schematic block diagram of a training apparatus for a face detection model according to an embodiment of the present disclosure;
fig. 6 is a schematic block diagram of a training apparatus of a face detection model according to an embodiment of the present specification.
Detailed Description
One or more embodiments of the present disclosure provide a training method and apparatus for a face detection model, so as to solve the problems in the prior art that sample face images are difficult to obtain and the face detection model is inaccurate.
To enable persons skilled in the art to better understand the technical solutions in one or more embodiments of the present specification, those solutions are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present specification. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without inventive effort shall fall within the protection scope of the present disclosure.
Fig. 1 is a schematic flowchart of a training method of a face detection model according to an embodiment of the present disclosure, as shown in fig. 1, the method includes:
s102, acquiring a plurality of first sample face pictures of a first skin color type.
The skin color type may be determined along various dimensions. If the dimension is skin tone darkness, skin color types may include a dark skin tone type, a light skin tone type, a medium skin tone type, and the like.
S104, determining first position information corresponding to the designated part area in the first sample face picture.
For example, the specified region may be a human eye region, a face region (excluding a five sense organs region), an ear region, a hair region, a neck region, or the like in the first sample face picture.
And S106, rendering the designated part area based on the first position information to obtain a second sample face picture of a second skin color type.
S108, training a target face detection model based on the second sample face picture, wherein the target face detection model is used for face detection of the user with the second skin color type.
In this embodiment, the second skin tone type is different from the first skin tone type. For example, if the first skin color type is a medium skin color type and the second skin color type is a dark skin color type, the sample face picture of the dark skin color type can be obtained by rendering the designated part area in the sample face picture of the medium skin color type.
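Steps S102 to S108 can be sketched as a minimal pipeline. This is a purely hypothetical illustration: the function names, the fixed "eye band" used as a stand-in for the located region, and the simple darkening rule are assumptions of this sketch, not the patent's actual implementation.

```python
import numpy as np

def locate_region(picture: np.ndarray) -> np.ndarray:
    """S104 stand-in: boolean mask of the designated part area.
    Here one fixed row band plays the role of the excluded eye area."""
    mask = np.ones(picture.shape[:2], dtype=bool)
    h = picture.shape[0]
    mask[h // 3 : h // 2, :] = False  # pretend this band is the human eye area
    return mask

def render_skin_tone(picture: np.ndarray, mask: np.ndarray, factor: float = 0.5) -> np.ndarray:
    """S106 stand-in: darken the masked pixels to imitate a second skin color type."""
    out = picture.astype(np.float32)
    out[mask] *= factor
    return out.clip(0, 255).astype(np.uint8)

# S102: dummy light-toned first sample pictures
first_samples = [np.full((6, 6, 3), 200, dtype=np.uint8)]
# S104 + S106: convert them into second sample pictures
second_samples = [render_skin_tone(p, locate_region(p)) for p in first_samples]
# S108 would then train the target face detection model on second_samples
```

A real system would replace `locate_region` with a face-parsing model and `render_skin_tone` with a proper color transform; the point of the sketch is only the data flow from first-type samples to second-type training data.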
According to the technical scheme of one or more embodiments of the present disclosure, a second sample face picture of a second skin color type can be obtained by rendering a designated part area in a plurality of first sample face pictures of the first skin color type, and further, training is performed on a target face detection model based on the second sample face picture, so that sample face pictures of different skin color types can be mutually converted through rendering operation, when the target face detection model for face detection of a user of the second skin color type needs to be trained, sample face pictures of the second skin color type which are not easy to obtain are not required to be obtained, and only sample face pictures which are easy to obtain are required to be obtained and rendering of the designated part area are required to be obtained, so that the acquisition cost of the sample face pictures of the second skin color type is greatly reduced, and because the method is simple to implement, the acquisition of a large number of sample face pictures of the second skin color type is easy to implement, and the training cost of the target face detection model is reduced; in addition, the sample according to which the target face detection model is trained is matched with the model detection object, so that the accuracy of the face detection result is improved.
In one embodiment, the first skin color type and the second skin color type are determined based on skin tone shade information. When the designated part area is rendered based on its first position information in the first sample face picture, target skin tone information corresponding to the second skin color type may first be determined, and the designated part area may then be rendered to match that target skin tone information.
The skin tone information may include skin tone shade levels, on the basis of which skin color types may be classified into dark, light, medium, and other skin tone types.
For example, if the first skin color type is a light skin tone type and the second skin color type is a dark skin tone type, the designated part area in the light-skin-tone sample face pictures is rendered to a dark skin color, yielding dark-skin-tone sample face pictures.
In one embodiment, rendering the designated part area to match the target skin tone information corresponding to the second skin color type may be accomplished by converting the skin color in the designated part area to a skin color that matches the second skin color type.
For example, based on differences in skin darkness, the first skin color type may be determined to be a light skin tone type and the second a dark skin tone type; dark-skin-tone sample face pictures are then obtained by converting the skin color of the designated part area in the light-skin-tone sample pictures from light to dark.
In this embodiment, sample face pictures of different skin color types can be obtained simply by converting the skin color of the designated part area in existing sample face pictures. This greatly reduces the cost of acquiring sample face pictures of various skin color types, and hence the cost of training the target face detection models corresponding to those types. Here, the target face detection model corresponding to a skin color type is a face detection model trained with sample face pictures of that skin color type as training samples and used to perform face detection for users of that skin color type.
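One illustrative way to realize such a conversion (the patent does not specify a particular color transform, and both the target RGB value and the `strength` parameter below are assumptions of this sketch) is to blend the masked pixels toward a target skin tone while preserving each pixel's relative brightness, so facial shading survives the recoloring:

```python
import numpy as np

def convert_to_target_tone(picture, mask, target_rgb, strength=0.8):
    """Blend masked pixels toward target_rgb, keeping some of the original
    shading so facial structure is preserved.
    strength: 0 leaves the picture unchanged, 1 gives a flat target color."""
    out = picture.astype(np.float32)
    target = np.array(target_rgb, dtype=np.float32)
    # per-pixel relative luminance modulates the target color
    lum = out[mask].mean(axis=-1, keepdims=True) / 255.0
    out[mask] = (1 - strength) * out[mask] + strength * target * lum
    return out.clip(0, 255).astype(np.uint8)

# light-skin dummy patch -> dark-skin target; RGB (90, 60, 45) is an assumed value
patch = np.full((4, 4, 3), 220, dtype=np.uint8)
mask = np.ones((4, 4), dtype=bool)
dark = convert_to_target_tone(patch, mask, (90, 60, 45))
```

A production renderer would work in a perceptual color space and handle lighting and shadows, but the blend above captures the basic idea of matching a region to target skin tone information.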
In one embodiment, the designated part area comprises the areas of the first sample face picture other than the human eye area. The first position information of the designated part area may be determined as follows:
step one, carrying out face analysis on the first sample face picture to obtain second position information corresponding to each partial region.
In this step, an existing face semantic analysis (Face Parsing) algorithm may be used to segment the sample face picture, generating a pixel-granularity mask for each partial region, that is, the position information corresponding to each partial region.
And step two, determining third position information corresponding to other areas except the human eye area in each partial area based on the second position information.
And thirdly, determining the third position information as first position information corresponding to the designated part area.
The first sample face picture may comprise any one or more of the following areas: the human eye area, the face area (excluding the five-sense-organ areas), the ear area, the hair area, the neck area, and the like. The designated part areas (that is, the areas other than the human eye area) are then, for example: the face area, the neck area, and the ear area.
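The three steps above amount to boolean-mask operations over the face-parsing output. In the sketch below, the region labels, the tiny 5x5 masks, and their layout are all assumptions used for illustration; a real parser would emit full-resolution masks.

```python
import numpy as np

# Step 1 (assumed output shape): the face parser yields one boolean mask per region.
parsed = {
    "eyes": np.zeros((5, 5), dtype=bool),
    "face": np.zeros((5, 5), dtype=bool),
    "neck": np.zeros((5, 5), dtype=bool),
    "ears": np.zeros((5, 5), dtype=bool),
}
parsed["eyes"][1, 1:4] = True
parsed["face"][0:3, :] = True
parsed["neck"][3:5, 1:4] = True
parsed["ears"][2, [0, 4]] = True

# Steps 2-3: union every region except the eyes, then subtract any eye pixels,
# giving the first position information of the designated part area.
designated = np.zeros((5, 5), dtype=bool)
for name, region_mask in parsed.items():
    if name != "eyes":
        designated |= region_mask
designated &= ~parsed["eyes"]
</n```

The final `&= ~eyes` step matters because parser masks can overlap (the face mask may cover the eye sockets); subtracting the eye mask guarantees the eye area is never recolored.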
In one embodiment, after the second sample face picture is obtained, face feature information in the second sample face picture can be extracted, wherein the face feature information comprises face key point information, face information, facial expression information and the like; and further performing deep learning based on the second sample face picture and the face feature information to obtain a target face detection model.
If the face feature information in the second sample face picture comprises face key point information, the trained target face detection model can detect the face key points of users of the second skin color type; if it comprises facial expression information, the model can detect the facial expressions of users of the second skin color type; and so on.
The face detection function of the target face detection model for the user of the second skin color type corresponds to the face detection function of the face detection model corresponding to the first skin color type for the user of the first skin color type. For example, if the face detection model corresponding to the first skin tone type is capable of detecting the facial expression of the user of the first skin tone type, the face detection model corresponding to the second skin tone type is also capable of detecting the facial expression of the user of the second skin tone type. For another example, if the face detection model corresponding to the first skin tone type can detect the location of the keypoint of the user of the first skin tone type, the face detection model corresponding to the second skin tone type can also detect the location of the keypoint of the user of the second skin tone type.
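A minimal sketch of how rendered samples might be paired with their extracted face feature information before the deep-learning step. The data layout, field names, and the stubbed key point positions are assumptions of this sketch; a real system would run a key point detector and expression classifier here.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

import numpy as np

@dataclass
class TrainingSample:
    picture: np.ndarray                                   # rendered second-type picture
    keypoints: List[Tuple[int, int]] = field(default_factory=list)
    expression: str = "neutral"                           # assumed label vocabulary

def build_training_set(pictures):
    """Pair each picture with (stubbed) face feature information."""
    samples = []
    for pic in pictures:
        h, w = pic.shape[:2]
        # hypothetical fixed key points standing in for detector output
        stub_kps = [(w // 2, h // 3), (w // 3, h // 2), (2 * w // 3, h // 2)]
        samples.append(TrainingSample(picture=pic, keypoints=stub_kps))
    return samples

train_set = build_training_set([np.zeros((96, 96, 3), dtype=np.uint8)])
```

Each `TrainingSample` then supplies both the input image and its supervision targets (key points, expression labels) to whichever learning framework trains the target face detection model.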
Fig. 2 is a schematic flow chart of a training method of a face detection model according to another embodiment of the present specification. In this embodiment, taking the first skin color type as the light skin color type and the second skin color type as the dark skin color type as examples. As shown in fig. 2, the method includes:
s201, a plurality of first sample face pictures of a light skin color type are obtained.
S202, carrying out face analysis on the first sample face picture by using a face semantic analysis algorithm to obtain a face analysis result.
The face analysis result comprises position information corresponding to each partial region on the face picture. Fig. 3 (a) to 3 (b) schematically show a face analysis result, fig. 3 (a) is a first sample face picture, and fig. 3 (b) is an analysis result obtained by performing semantic analysis on the first sample face picture.
S203, determining position information corresponding to the designated part area in the first sample face picture based on the face analysis result.
Wherein, the appointed part area is as follows: a face area, a neck area and an ear area.
And S204, rendering the skin color of the designated part area into dark skin color based on the position information corresponding to the designated part area in the first sample face image, and obtaining a second sample face image with a dark skin color type.
Fig. 4 schematically shows a rendering result of rendering a skin color of a specified region to a dark skin color.
S205, extracting face characteristic information in the second sample face picture.
The face feature information may include face key point information, face information, facial expression information, and the like.
S206, performing deep learning based on the second sample face picture and the face feature information to obtain a target face detection model for performing face detection on the deep skin type user.
In this embodiment, skin color rendering of the designated part area in a plurality of light-skin-tone first sample face pictures yields dark-skin-tone second sample face pictures, on which the target face detection model is then trained. Sample face pictures of different skin color types can thus be converted into one another through a rendering operation. When a target face detection model for detecting faces of dark-skin-tone users needs to be trained, there is no need to collect dark-skin-tone sample pictures, which are hard to obtain; it suffices to collect readily available light-skin-tone sample pictures and render the designated part area. This greatly reduces the acquisition cost of dark-skin-tone sample face pictures, and because the operation is simple, large batches of such samples are easy to produce, reducing the training cost of the target face detection model. In addition, because the samples on which the target face detection model is trained match the model's detection targets, the accuracy of the face detection results is improved.
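Steps S201 to S206 chain together as follows. This is again a hypothetical sketch: the stub parser, the fixed darkening factor of 0.45, and the one-number feature extractor stand in for the real face semantic analysis model, skin color renderer, and feature extraction described above.

```python
import numpy as np

def parse_face(pic):
    """S202 stub: return a mask for the designated part area
    (here everything except the top rows, which play the eye role)."""
    mask = np.ones(pic.shape[:2], dtype=bool)
    mask[0:2, :] = False
    return {"designated": mask}

def render_dark(pic, mask):
    """S204 stub: darken the designated pixels (assumed rule)."""
    out = pic.astype(np.float32)
    out[mask] *= 0.45
    return out.clip(0, 255).astype(np.uint8)

def extract_features(pic):
    """S205 stub: a real extractor would return key points, expressions, etc."""
    return {"mean_intensity": float(pic.mean())}

light_pics = [np.full((8, 8, 3), 210, dtype=np.uint8)]                          # S201
dark_pics = [render_dark(p, parse_face(p)["designated"]) for p in light_pics]    # S202-S204
features = [extract_features(p) for p in dark_pics]                              # S205
# S206: feed (dark_pics, features) into the deep-learning training step
```

The untouched top rows in the output confirm the eye-area exclusion from S203 survives the rendering step.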
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Based on the same idea as the training method described above, one or more embodiments of the present disclosure further provide a training apparatus for a face detection model.
Fig. 5 is a schematic block diagram of a training apparatus for a face detection model according to an embodiment of the present disclosure, and as shown in fig. 5, a training apparatus 500 for a face detection model includes:
an obtaining module 510 that obtains a plurality of first sample face pictures of a first skin color type;
a determining module 520, configured to determine first location information corresponding to a designated location area in the first sample face picture;
the rendering module 530 renders the specified part area based on the first position information to obtain a second sample face picture of a second skin color type;
the training module 540 is used for training the target face detection model based on the second sample face picture; the target face detection model is used for face detection of the user of the second skin color type.
In one embodiment, the first skin tone type and the second skin tone type are determined based on skin tone shade information;
the rendering module 530 includes:
a first determining unit for determining target skin tone information corresponding to the second skin tone type;
and the rendering unit is used for rendering the appointed part area in a manner of matching with the target skin tone information.
In one embodiment, the first skin tone type is a light skin tone type; the second skin tone type is a deep skin tone type.
In an embodiment, the rendering unit is further for:
converting the skin color in the designated part area to a skin color that matches the second skin color type.
In one embodiment, the specified portion area is another area except a human eye area in the first sample face picture;
the determining module 520 includes:
the analysis unit is used for carrying out face analysis on the first sample face picture to obtain second position information corresponding to each partial region;
a second determination unit that determines third position information corresponding to other regions than the human eye region among the partial regions, based on the second position information;
and a third determination unit configured to determine the third position information as the first position information corresponding to the specified region.
In one embodiment, the other regions of the partial regions than the human eye region include at least one of: a face area, a neck area and an ear area.
In one embodiment, the training module 540 includes:
an extraction unit for extracting face feature information in the second sample face picture; the face characteristic information comprises at least one of face key point information, face information and facial expression information;
and the training unit is used for performing deep learning based on the second sample face picture and the face characteristic information to obtain the target face detection model.
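The module decomposition above is essentially a four-stage composition. A plain-Python wiring sketch (the class name, constructor signature, and trivial stand-in callables are all assumptions, not the patent's implementation):

```python
class TrainingPipeline:
    """Wires acquisition, determination, rendering, and training modules together."""

    def __init__(self, acquire, determine, render, train):
        self.acquire, self.determine = acquire, determine
        self.render, self.train = render, train

    def run(self):
        pics = self.acquire()                                   # acquisition module
        masks = [self.determine(p) for p in pics]               # determining module
        rendered = [self.render(p, m) for p, m in zip(pics, masks)]  # rendering module
        return self.train(rendered)                             # training module

# usage with trivial stand-ins for the four modules
pipe = TrainingPipeline(
    acquire=lambda: ["pic1", "pic2"],
    determine=lambda p: f"mask({p})",
    render=lambda p, m: f"rendered({p})",
    train=lambda rs: f"model trained on {len(rs)} samples",
)
result = pipe.run()
```

Keeping the four modules as injected callables mirrors the apparatus claims: each unit (first/second/third determination, rendering, extraction, training) can be swapped independently.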
By adopting the apparatus of one or more embodiments of the present disclosure, a designated part area in a plurality of first sample face pictures of a first skin color type is rendered to obtain second sample face pictures of a second skin color type, and a target face detection model is then trained on the second sample face pictures. Sample face pictures of different skin color types can thus be converted into one another through a rendering operation. There is no need to collect second-skin-color-type sample pictures, which are hard to obtain; it suffices to collect readily available sample pictures and render the designated part area. This greatly reduces the acquisition cost of second-skin-color-type sample face pictures, makes large batches of such samples easy to produce, and reduces the training cost of the target face detection model. In addition, because the samples on which the target face detection model is trained match the model's detection targets, the accuracy of the face detection results is improved.
It should be understood by those skilled in the art that the training apparatus of the face detection model can be used to implement the training method described above; its detailed description is similar to that of the method above and is not repeated here for brevity.
Based on the same idea, one or more embodiments of the present disclosure further provide a training device for a face detection model, as shown in fig. 6. Training devices of the face detection model may differ considerably in configuration or performance, and may include one or more processors 601 and a memory 602, where the memory 602 may store one or more applications or data. The memory 602 may be transient or persistent storage. An application program stored in the memory 602 may include one or more modules (not shown in the figure), each of which may include a series of computer-executable instructions for the training device. Further, the processor 601 may be arranged to communicate with the memory 602 and to execute the series of computer-executable instructions in the memory 602 on the training device. The training device may also include one or more power supplies 603, one or more wired or wireless network interfaces 604, one or more input/output interfaces 605, and one or more keyboards 606.
Specifically, in this embodiment, the training device of the face detection model includes a memory and one or more programs stored in the memory. The one or more programs may include one or more modules, each of which may include a series of computer-executable instructions for the training device. The one or more processors are configured to execute the one or more programs, including computer-executable instructions for:
acquiring a plurality of first sample face pictures of a first skin color type;
determining first position information corresponding to a designated part area in the first sample face picture;
rendering the specified part area based on the first position information to obtain a second sample face picture of a second skin color type;
training a target face detection model based on the second sample face picture; the target face detection model is used for face detection of the user of the second skin color type.
Optionally, the first skin tone type and the second skin tone type are determined based on skin tone shade information;
the computer executable instructions, when executed, may further cause the processor to:
determining target skin tone information corresponding to the second skin tone type;
and rendering the designated part area in matching with the target skin tone information.
Optionally, the first skin tone type is a light skin tone type; the second skin tone type is a deep skin tone type.
Optionally, the computer executable instructions, when executed, may further cause the processor to:
converting the skin color in the designated part area to a skin color that matches the second skin color type.
Optionally, the specified part area is other areas except a human eye area in the first sample face picture;
the computer executable instructions, when executed, may further cause the processor to:
carrying out face analysis on the first sample face picture to obtain second position information corresponding to each partial region;
determining third position information corresponding to other areas except the human eye area in each part of areas based on the second position information;
and determining the third position information as the first position information corresponding to the designated part area.
Optionally, the part areas other than the human eye area include at least one of: a face area, a neck area, and an ear area.
Optionally, the computer executable instructions, when executed, may further cause the processor to:
extracting face feature information from the second sample face picture; the face feature information includes at least one of face key point information, face information, and facial expression information;
and performing deep learning based on the second sample face picture and the face feature information to obtain the target face detection model.
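As a stand-in for the unspecified deep-learning step, the sketch below trains a toy logistic-regression "detector" on synthetic feature vectors with plain gradient descent. The feature dimension, class means, and learning rate are all assumptions for illustration; the disclosure does not name a model architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for feature vectors extracted from second sample face
# pictures ("face") and from non-face regions ("background").
faces = rng.normal(0.8, 0.1, size=(32, 16))
background = rng.normal(0.2, 0.1, size=(32, 16))
X = np.vstack([faces, background])
y = np.array([1] * 32 + [0] * 32, dtype=float)

w = np.zeros(16)
b = 0.0
for _ in range(200):                     # plain batch gradient descent
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))         # sigmoid probability of "face"
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == (y == 1)).mean()
assert accuracy > 0.9
```

In practice the key point, face, and expression features named above would be channels or auxiliary supervision for a deep network; the toy classifier only shows the shape of the training loop.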
According to the technical solutions of one or more embodiments of the present specification, a second sample face picture of a second skin color type can be obtained by rendering a designated part area in each of a plurality of first sample face pictures of a first skin color type, and a target face detection model can then be trained based on the second sample face pictures. In this way, sample face pictures of different skin color types can be converted into one another through a rendering operation. When a target face detection model for face detection of users of the second skin color type needs to be trained, sample face pictures of the second skin color type, which are difficult to obtain, need not be collected; it is sufficient to acquire readily available sample face pictures and render the designated part area. This greatly reduces the acquisition cost of sample face pictures of the second skin color type, and because the method is simple to implement, a large number of such pictures can be obtained easily, reducing the training cost of the target face detection model. In addition, the samples on which the target face detection model is trained match the objects the model detects, which improves the accuracy of the face detection results.
One or more embodiments of the present specification also provide a computer-readable storage medium storing one or more programs. The one or more programs include instructions which, when executed by an electronic device that includes a plurality of application programs, enable the electronic device to perform the training method of the face detection model described above, and specifically to perform:
acquiring a plurality of first sample face pictures of a first skin color type;
determining first position information corresponding to a designated part area in the first sample face picture;
rendering the designated part area based on the first position information to obtain a second sample face picture of a second skin color type;
training a target face detection model based on the second sample face picture; the target face detection model is used for face detection of users of the second skin color type.
The systems, apparatuses, modules, or units set forth in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above device is described as being functionally divided into various units. Of course, when one or more embodiments of the present specification are implemented, the functionality of the units may be implemented in one or more pieces of software and/or hardware.
One skilled in the art will appreciate that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
One or more embodiments of the present specification are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
One or more embodiments of the present specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the corresponding parts of the description of the method embodiments.
The foregoing description is merely of one or more embodiments of the present specification and is not intended to limit them. Various modifications and alterations to one or more embodiments of this specification will be apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like made within the spirit and principles of one or more embodiments of the present specification are intended to be included within the scope of the claims.

Claims (13)

1. A training method of a face detection model comprises the following steps:
acquiring a plurality of first sample face pictures of a first skin color type;
determining first position information corresponding to a designated part area in the first sample face picture;
rendering the designated part area based on the first position information to obtain a second sample face picture of a second skin color type; wherein the first skin tone type and the second skin tone type are determined based on skin tone shade information, and the first skin tone type and the second skin tone type are different skin tone types;
training a target face detection model based on the second sample face picture; the target face detection model is used for face detection of users of the second skin color type.
2. The method of claim 1, wherein rendering the designated part area based on the first position information comprises:
determining target skin tone information corresponding to the second skin tone type;
and rendering the designated part area to match the target skin tone information.
3. The method of claim 2, wherein the first skin tone type is a light skin tone type and the second skin tone type is a dark skin tone type.
4. The method of claim 3, wherein rendering the designated part area to match the target skin tone information comprises:
converting the skin color in the designated part area to a skin color that matches the second skin tone type.
5. The method of claim 1, wherein the designated part area is the area of the first sample face picture other than the human eye area;
and wherein determining the first position information corresponding to the designated part area in the first sample face picture comprises:
performing face parsing on the first sample face picture to obtain second position information corresponding to each part area;
determining, based on the second position information, third position information corresponding to the part areas other than the human eye area;
and determining the third position information as the first position information corresponding to the designated part area.
6. The method of claim 5, wherein the part areas other than the human eye area comprise at least one of: a face area, a neck area, and an ear area.
7. The method of claim 1, wherein training a target face detection model based on the second sample face picture comprises:
extracting face feature information from the second sample face picture; the face feature information comprises at least one of face key point information, face information, and facial expression information;
and performing deep learning based on the second sample face picture and the face feature information to obtain the target face detection model.
8. A training device for a face detection model, comprising:
an acquisition module configured to acquire a plurality of first sample face pictures of a first skin color type;
a determining module configured to determine first position information corresponding to a designated part area in the first sample face picture;
a rendering module configured to render the designated part area based on the first position information to obtain a second sample face picture of a second skin color type; wherein the first skin tone type and the second skin tone type are determined based on skin tone shade information, and the first skin tone type and the second skin tone type are different skin tone types;
and a training module configured to train a target face detection model based on the second sample face picture; the target face detection model is used for face detection of users of the second skin color type.
9. The apparatus of claim 8, the rendering module comprising:
a first determining unit configured to determine target skin tone information corresponding to the second skin tone type;
and a rendering unit configured to render the designated part area to match the target skin tone information.
10. The apparatus of claim 9, wherein the first skin tone type is a light skin tone type and the second skin tone type is a dark skin tone type.
11. The apparatus of claim 8, wherein the designated part area is the area of the first sample face picture other than the human eye area;
the determining module includes:
a parsing unit configured to perform face parsing on the first sample face picture to obtain second position information corresponding to each part area;
a second determining unit configured to determine, based on the second position information, third position information corresponding to the part areas other than the human eye area;
and a third determining unit configured to determine the third position information as the first position information corresponding to the designated part area.
12. A training device for a face detection model, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
acquiring a plurality of first sample face pictures of a first skin color type;
determining first position information corresponding to a designated part area in the first sample face picture;
rendering the designated part area based on the first position information to obtain a second sample face picture of a second skin color type; wherein the first skin tone type and the second skin tone type are determined based on skin tone shade information, and the first skin tone type and the second skin tone type are different skin tone types;
training a target face detection model based on the second sample face picture; the target face detection model is used for face detection of users of the second skin color type.
13. A storage medium storing computer-executable instructions that when executed implement the following:
acquiring a plurality of first sample face pictures of a first skin color type;
determining first position information corresponding to a designated part area in the first sample face picture;
rendering the designated part area based on the first position information to obtain a second sample face picture of a second skin color type; wherein the first skin tone type and the second skin tone type are determined based on skin tone shade information, and the first skin tone type and the second skin tone type are different skin tone types;
training a target face detection model based on the second sample face picture; the target face detection model is used for face detection of users of the second skin color type.
CN202010084622.2A 2020-02-10 2020-02-10 Training method and device for face detection model Active CN111291685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010084622.2A CN111291685B (en) 2020-02-10 2020-02-10 Training method and device for face detection model


Publications (2)

Publication Number Publication Date
CN111291685A CN111291685A (en) 2020-06-16
CN111291685B true CN111291685B (en) 2023-06-02

Family

ID=71017529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010084622.2A Active CN111291685B (en) 2020-02-10 2020-02-10 Training method and device for face detection model

Country Status (1)

Country Link
CN (1) CN111291685B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113870102B (en) * 2021-12-06 2022-03-08 深圳市大头兄弟科技有限公司 Animation method, device, equipment and storage medium of image

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL131108A0 (en) * 1999-07-26 2001-01-28 Haim Shani Method and apparatus for diagnosis of actual or pre-shock state
CN101706874A (en) * 2009-12-25 2010-05-12 青岛朗讯科技通讯设备有限公司 Method for face detection based on features of skin colors
CN107358157A (en) * 2017-06-07 2017-11-17 阿里巴巴集团控股有限公司 A kind of human face in-vivo detection method, device and electronic equipment
WO2018033143A1 (en) * 2016-08-19 2018-02-22 北京市商汤科技开发有限公司 Video image processing method, apparatus and electronic device
CN107798314A (en) * 2017-11-22 2018-03-13 北京小米移动软件有限公司 Skin color detection method and device
CN108009465A (en) * 2016-10-31 2018-05-08 杭州海康威视数字技术股份有限公司 A kind of face identification method and device
CN108537881A (en) * 2018-04-18 2018-09-14 腾讯科技(深圳)有限公司 A kind of faceform's processing method and its equipment, storage medium
CN108573527A (en) * 2018-04-18 2018-09-25 腾讯科技(深圳)有限公司 A kind of expression picture generation method and its equipment, storage medium
CN110032915A (en) * 2018-01-12 2019-07-19 杭州海康威视数字技术股份有限公司 A kind of human face in-vivo detection method, device and electronic equipment


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Jose M. Chaves-González; Miguel A. Vega-Rodríguez; et al. "Detecting skin in face recognition systems: A colour spaces study". Digital Signal Processing. 2010, pages 806-823. *
P. Kakumanu; S. Makrogiannis; N. Bourbakis. "A survey of skin-color modeling and detection methods". Pattern Recognition. 2007, pages 1106-1122. *
Mi Yuangen; Chen Danchi; Ji Peng. "Face detection algorithm based on geometric features and new Haar features". Transducer and Microsystem Technologies. 2017, full text. *
Fan Yifeng; Yan Zhiying. "Research on face detection based on the Adaboost algorithm and skin color verification". Microcomputer Information. 2010, full text. *
Huang Zhichao; Zhang Peng; Zhao Huarong; Zhao Wenming. "Research on face detection algorithms based on skin color information in intelligent video surveillance systems". Modern Electronics Technique. 2018, full text. *

Also Published As

Publication number Publication date
CN111291685A (en) 2020-06-16

Similar Documents

Publication Publication Date Title
US11887344B2 (en) Encoding and decoding a stylized custom graphic
CN108121952B (en) Face key point positioning method, device, equipment and storage medium
US11463631B2 (en) Method and apparatus for generating face image
CN110363091B (en) Face recognition method, device and equipment under side face condition and storage medium
CN111275784B (en) Method and device for generating image
US10679039B2 (en) Detecting actions to discourage recognition
CN112800468B (en) Data processing method, device and equipment based on privacy protection
CN114238904A (en) Identity recognition method, and training method and device of two-channel hyper-resolution model
CN110717484B (en) Image processing method and system
CN111291685B (en) Training method and device for face detection model
CN112837202B (en) Watermark image generation and attack tracing method and device based on privacy protection
CN113642359B (en) Face image generation method and device, electronic equipment and storage medium
CN114943976B (en) Model generation method and device, electronic equipment and storage medium
CN105913024A (en) Android mobile terminal detecting method based on LAP operator for resisting replay attacks
CN115984977A (en) Living body detection method and system
CN115993973A (en) Compiling method of deep learning model, electronic equipment and storage medium
CN114445632A (en) Picture processing method and device
KR102502034B1 (en) Method and system for retrieving de-identified object in video
CN114973426B (en) Living body detection method, device and equipment
CN113822020B (en) Text processing method, text processing device and storage medium
US20200219235A1 (en) Method and device for sensitive data masking based on image recognition
CN111310630A (en) Living body detection method and device
CN118015669A (en) Face alignment method and device, electronic equipment, chip and medium
CN117972436A (en) Training method and training device for large language model, storage medium and electronic equipment
CN116737268A (en) Application processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant