CN111291685A - Training method and device of face detection model - Google Patents


Publication number
CN111291685A
Authority
CN
China
Prior art keywords
skin color
face
color type
position information
detection model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010084622.2A
Other languages
Chinese (zh)
Other versions
CN111291685B
Inventor
徐崴
Current Assignee
Alipay Labs Singapore Pte Ltd
Original Assignee
Alipay Labs Singapore Pte Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Labs Singapore Pte Ltd
Priority to CN202010084622.2A
Publication of CN111291685A
Application granted
Publication of CN111291685B
Legal status: Active (granted)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition

Abstract

One or more embodiments of the present specification disclose a training method and apparatus for a face detection model, so as to solve the prior-art problems that sample face pictures are difficult to obtain and face detection models are inaccurate. The method comprises the following steps: obtaining a plurality of first sample face pictures of a first skin color type; determining first position information corresponding to a designated part area in the first sample face pictures; rendering the designated part area based on the first position information to obtain second sample face pictures of a second skin color type; and training a target face detection model based on the second sample face pictures, wherein the target face detection model is used for performing face detection on users of the second skin color type.

Description

Training method and device of face detection model
Technical Field
The present document relates to the technical field of model training and data processing, and in particular, to a training method and apparatus for a face detection model.
Background
With the development of deep learning technology, face recognition algorithms have matured rapidly and have been deployed at scale in many everyday domestic applications (such as security, payment, and identity authentication), for example in business scenarios such as face-scan login, face-scan payment, and face-scan real-name authentication. However, as internationalization advances, international business scenarios often involve face types that differ substantially from one another; for example, African faces differ greatly in appearance from Chinese faces. As a result, an existing face liveness detection algorithm trained on a single type of face data (for example, face data containing only medium-skin-tone faces) is inaccurate, and produces a large number of false interceptions when applied in such business scenarios.
Disclosure of Invention
In one aspect, one or more embodiments of the present specification provide a method for training a face detection model, including: obtaining a plurality of first sample face pictures of a first skin color type; determining first position information corresponding to a designated part area in the first sample face pictures; rendering the designated part area based on the first position information to obtain second sample face pictures of a second skin color type; and training a target face detection model based on the second sample face pictures, wherein the target face detection model is used for performing face detection on users of the second skin color type.
In another aspect, one or more embodiments of the present specification provide a training apparatus for a face detection model, including: an obtaining module that obtains a plurality of first sample face pictures of a first skin color type; a determining module that determines first position information corresponding to a designated part area in the first sample face pictures; a rendering module that renders the designated part area based on the first position information to obtain second sample face pictures of a second skin color type; and a training module that trains a target face detection model based on the second sample face pictures, wherein the target face detection model is used for performing face detection on users of the second skin color type.
In another aspect, one or more embodiments of the present specification provide a training device for a face detection model, including: a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to: obtain a plurality of first sample face pictures of a first skin color type; determine first position information corresponding to a designated part area in the first sample face pictures; render the designated part area based on the first position information to obtain second sample face pictures of a second skin color type; and train a target face detection model based on the second sample face pictures, wherein the target face detection model is used for performing face detection on users of the second skin color type.
In yet another aspect, one or more embodiments of the present specification provide a storage medium storing computer-executable instructions that, when executed, implement the following procedure: obtaining a plurality of first sample face pictures of a first skin color type; determining first position information corresponding to a designated part area in the first sample face pictures; rendering the designated part area based on the first position information to obtain second sample face pictures of a second skin color type; and training a target face detection model based on the second sample face pictures, wherein the target face detection model is used for performing face detection on users of the second skin color type.
Drawings
In order to more clearly illustrate one or more embodiments of the present specification or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some of the embodiments described in one or more embodiments of the present specification, and that those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart diagram of a method for training a face detection model according to an embodiment of the present description;
FIG. 2 is a schematic flow chart diagram of a method for training a face detection model according to another embodiment of the present description;
fig. 3(a) -3 (b) are schematic diagrams illustrating face analysis in a training method of a face detection model according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating a skin color rendering result in a training method of a face detection model according to an embodiment of the present disclosure;
FIG. 5 is a schematic block diagram of an apparatus for training a face detection model according to an embodiment of the present disclosure;
FIG. 6 is a schematic block diagram of a training apparatus for a face detection model according to an embodiment of the present disclosure.
Detailed Description
One or more embodiments of the present disclosure provide a training method and apparatus for a face detection model, so as to solve the problems in the prior art that a sample face image is difficult to obtain and a face detection model is inaccurate.
In order to enable those skilled in the art to better understand the technical solutions in one or more embodiments of the present specification, the technical solutions will be described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only some, not all, of the embodiments of the present specification. All other embodiments that a person skilled in the art can derive from one or more of the embodiments described herein without inventive effort shall fall within the scope of protection of this document.
Fig. 1 is a schematic flow chart of a training method of a face detection model according to an embodiment of the present specification, as shown in fig. 1, the method includes:
s102, obtaining a plurality of first skin color type face pictures of a first skin color type.
The skin color type may be determined based on a plurality of dimensions. For example, if the dimension is skin tone shade, the skin color types may include a dark skin color type, a light skin color type, a medium skin color type, and the like.
S104, determine first position information corresponding to the designated part area in the first sample face picture.
For example, the designated part area may be a human eye region, a facial skin region (excluding the facial-feature regions), an ear region, a hair region, or a neck region in the first sample face picture.
S106, render the designated part area based on the first position information to obtain a second sample face picture of a second skin color type.
S108, train a target face detection model based on the second sample face picture, wherein the target face detection model is used for performing face detection on users of the second skin color type.
In this embodiment, the second skin tone type is different from the first skin tone type. For example, if the first skin color type is a medium skin color type and the second skin color type is a dark skin color type, the sample face picture of the dark skin color type can be obtained by rendering the specified region area in the sample face picture of the medium skin color type.
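The flow of steps S102 to S108 can be sketched end to end. The following is a minimal toy illustration in Python with NumPy; `parse_face_regions` merely stands in for a real face semantic parsing model (here it just marks a fixed box), and the blending weight and target color are invented for illustration, not taken from this document.

```python
import numpy as np

def parse_face_regions(image):
    # Hypothetical stand-in for a face semantic parsing model: returns a
    # boolean mask covering the regions to be re-rendered (face skin,
    # neck, ears), approximated here by a fixed central box.
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    mask[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = True
    return mask

def render_skin_tone(image, mask, target_rgb):
    # Blend the pixels inside the mask toward a target skin tone.
    out = image.astype(np.float32).copy()
    out[mask] = 0.5 * out[mask] + 0.5 * np.asarray(target_rgb, np.float32)
    return out.astype(np.uint8)

def make_second_sample(first_sample, target_rgb=(80, 50, 40)):
    mask = parse_face_regions(first_sample)                  # S104: position info
    return render_skin_tone(first_sample, mask, target_rgb)  # S106: rendering

light = np.full((8, 8, 3), 220, dtype=np.uint8)  # toy light-skin sample picture
dark = make_second_sample(light)                 # toy second sample picture
```

The resulting `dark` pictures would then feed the training step (S108) in place of hard-to-obtain real pictures of the second skin color type.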
By adopting the technical solution of one or more embodiments of the present specification, a second sample face picture of a second skin color type can be obtained by rendering the designated part area in a plurality of first sample face pictures of a first skin color type, and a target face detection model is then trained based on the second sample face picture. In this way, sample face pictures of different skin color types can be converted into one another through the rendering operation. Therefore, when a target face detection model for performing face detection on users of the second skin color type needs to be trained, it is not necessary to collect sample face pictures of the second skin color type, which are hard to obtain; it is only necessary to obtain readily available sample face pictures and render the designated part area. This greatly reduces the acquisition cost of sample face pictures of the second skin color type, and because the method is simple to implement, acquiring large batches of such pictures becomes easy, which in turn reduces the training cost of the target face detection model. In addition, the samples on which the target face detection model is trained match the objects the model is to detect, which improves the accuracy of the face detection result.
In one embodiment, the first skin tone type and the second skin tone type are determined based on skin tone shading information. When the designated part region is rendered based on the first position information corresponding to the designated part region in the first sample face picture, the target skin color depth information corresponding to the second skin color type can be determined first, and then the designated part region is rendered in a manner matched with the target skin color depth information.
The skin color shade information may include the depth of the skin color; on this basis, the skin color types may be divided into a dark skin color type, a light skin color type, a medium skin color type, and the like.
For example, the first skin tone type is a light skin tone type and the second skin tone type is a dark skin tone type. And rendering the specified part region in the sample face picture of the light skin color type into the dark skin color to obtain the sample face picture of the dark skin color type.
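As a toy illustration of shade-based typing, one could bucket a skin region by its mean brightness. The thresholds below are invented for illustration; the document does not define any.

```python
import numpy as np

def skin_color_type(skin_pixels, dark_max=85, medium_max=170):
    # Bucket the mean brightness of the skin pixels into three
    # shade-based types; the thresholds are arbitrary assumptions.
    brightness = float(np.mean(skin_pixels))
    if brightness <= dark_max:
        return "dark"
    if brightness <= medium_max:
        return "medium"
    return "light"

light_patch = np.full((4, 4, 3), 220, dtype=np.uint8)
dark_patch = np.full((4, 4, 3), 60, dtype=np.uint8)
```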
In one embodiment, rendering the designated part area to match the target skin color shade information corresponding to the second skin color type may be achieved by converting the skin color in the designated part area into a skin color matching the second skin color type.
For example, based on the difference in skin tone shade, the first skin color type may be a light skin color type and the second skin color type a dark skin color type, so that a sample face picture of the dark skin color type can be obtained by converting the skin color of the designated part area in a sample face picture of the light skin color type from a light skin color to a dark skin color.
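One simple way to perform such a light-to-dark conversion (an illustrative assumption; the document does not specify the color transform) is a per-channel gain that moves the region's mean color from a light reference to a dark reference while preserving the relative shading within the region:

```python
import numpy as np

def convert_skin_color(region_pixels, src_mean, dst_mean):
    # Scale each RGB channel so that the region's mean skin color moves
    # from src_mean to dst_mean; relative shading (texture) within the
    # region is preserved by the multiplicative gain.
    gain = np.asarray(dst_mean, np.float32) / np.asarray(src_mean, np.float32)
    converted = region_pixels.astype(np.float32) * gain
    return np.clip(converted, 0, 255).astype(np.uint8)

# A pixel exactly at the light reference maps exactly to the dark reference;
# both reference colors here are hypothetical values.
pixel = np.array([[200, 160, 140]], dtype=np.uint8)
out = convert_skin_color(pixel, src_mean=(200, 160, 140), dst_mean=(100, 80, 70))
```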
In this embodiment, sample face pictures of different skin color types can be obtained simply by converting the skin color of the designated part area in a sample face picture, which greatly reduces the acquisition cost of sample face pictures of different skin color types and thus the training cost of the target face detection models corresponding to the various skin color types. Here, the target face detection model corresponding to a skin color type refers to a face detection model that is trained with sample face pictures of that skin color type and detects the faces of users of that skin color type.
In one embodiment, the designated part area consists of the regions of the first sample face picture other than the human eye region. The following steps can be adopted to determine the first position information of the designated part area in the first sample face picture:
step one, carrying out face analysis on the first same face picture to obtain second position information respectively corresponding to each part of area.
In this step, the sample Face picture may be partially cut by using an existing Face semantic Parsing (Face Parsing) algorithm, so as to generate pixel granularity masks (also referred to as "masks") of each partial region, that is, position information corresponding to each partial region.
Step 2: based on the second position information, determine third position information corresponding to the regions other than the human eye region among the part regions.
Step 3: determine the third position information as the first position information corresponding to the designated part area.
The first sample face picture may include any one or more of the following regions: a human eye region, a facial skin region (excluding the facial-feature regions), an ear region, a hair region, a neck region, and the like. The designated part area (that is, the regions other than the human eye region) includes the facial skin region, the neck region, and the ear region.
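The three steps above amount to selecting labels from the parsing map. A minimal sketch, assuming a hypothetical face-parsing output in which each pixel holds an integer part label (the label ids are invented for illustration):

```python
import numpy as np

# Hypothetical part labels in a face-parsing output map.
BACKGROUND, FACE, EYES, EARS, NECK, HAIR = 0, 1, 2, 3, 4, 5

def designated_part_mask(parsing_map):
    # Steps 2 and 3: keep face-skin, ear and neck pixels; drop eyes,
    # hair and background. The True pixels form the first position
    # information for the designated part area.
    return np.isin(parsing_map, [FACE, EARS, NECK])

parsing_map = np.array([[FACE, EYES],
                        [NECK, HAIR]])
mask = designated_part_mask(parsing_map)
```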
In one embodiment, after the second sample face picture is obtained, face feature information in the second sample face picture can be extracted, wherein the face feature information comprises face key point information, face information, facial expression information and the like; and then performing deep learning based on the second sample face picture and the face feature information to obtain a target face detection model.
If the face feature information in the second sample face picture comprises face key point information, the trained target face detection model can be used for detecting face key points of users with a second skin color type; if the facial feature information in the second sample facial picture comprises facial expression information, the trained target facial detection model can be used for detecting the facial expression information of the user with the second skin color type; and so on.
The target face detection model has a face detection function for the user with the second skin color type, and the face detection model corresponding to the first skin color type has a face detection function for the user with the first skin color type. For example, the face detection model corresponding to the first skin color type can detect the facial expression of the user of the first skin color type, and the face detection model corresponding to the second skin color type can also detect the facial expression of the user of the second skin color type. For another example, the face detection model corresponding to the first skin color type can detect the key point position of the user of the first skin color type, and the face detection model corresponding to the second skin color type can also detect the key point position of the user of the second skin color type.
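As a toy illustration of the "deep learning based on the second sample face picture and the face feature information" step, the sketch below trains a linear regressor from rendered pictures to key-point coordinates with plain gradient descent. This is a stand-in, not the model described here: a real implementation would use a deep network, and all shapes and hyperparameters are assumptions.

```python
import numpy as np

def train_keypoint_model(pictures, keypoints, lr=1e-3, steps=300):
    # pictures: (n, h, w, 3) uint8 rendered second-sample pictures.
    # keypoints: (n, k) target key-point coordinates, i.e. the extracted
    # face feature information used as supervision.
    X = pictures.reshape(len(pictures), -1).astype(np.float32) / 255.0
    W = np.zeros((X.shape[1], keypoints.shape[1]), dtype=np.float32)
    for _ in range(steps):
        residual = X @ W - keypoints          # prediction error
        W -= lr * (X.T @ residual) / len(X)   # gradient step on MSE
    return W

rng = np.random.default_rng(0)
pictures = rng.integers(0, 256, size=(16, 8, 8, 3), dtype=np.uint8)
keypoints = rng.normal(size=(16, 4)).astype(np.float32)

W = train_keypoint_model(pictures, keypoints)
initial_loss = float(np.mean(keypoints ** 2))  # loss at W = 0 baseline
final_loss = float(np.mean((pictures.reshape(16, -1) / 255.0 @ W - keypoints) ** 2))
```

The training loop drives the mean-squared key-point error below its zero-weight baseline, mirroring how the target face detection model would be fitted to the rendered samples.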
Fig. 2 is a schematic flow chart of a training method of a face detection model according to another embodiment of the present specification. In this embodiment, the first skin color type is a light skin color type, and the second skin color type is a dark skin color type. As shown in fig. 2, the method includes:
s201, obtaining a plurality of first skin color type face pictures of a first skin color type.
S202, carrying out face analysis on the first same face picture by using a face semantic analysis algorithm to obtain a face analysis result.
The face analysis result comprises position information corresponding to each part area on the face picture. Fig. 3(a) to 3(b) schematically show a face analysis result, where fig. 3(a) is a first sample face picture, and fig. 3(b) is an analysis result obtained by performing semantic analysis on the first sample face picture.
S203, determine position information corresponding to the designated part area in the first sample face picture based on the face parsing result.
The designated part area consists of the facial skin region, the neck region, and the ear region.
S204, render the skin color of the designated part area into a dark skin color based on the position information corresponding to the designated part area in the first sample face picture, to obtain a second sample face picture of the dark skin color type.
Fig. 4 schematically shows a rendering result of rendering the skin color of the designated region area as a dark skin color.
S205, extract the face feature information in the second sample face picture.
The face feature information may include face key point information, face information, facial expression information, and the like.
S206, perform deep learning based on the second sample face picture and the face feature information to obtain a target face detection model for performing face detection on users of the dark skin color type.
In this embodiment, a second sample face picture of the dark skin color type can be obtained by rendering the skin color in the designated part areas of a plurality of first sample face pictures of the light skin color type, and a target face detection model is then trained based on the second sample face picture. In this way, sample face pictures of different skin color types can be converted into one another through the rendering operation. Therefore, when a target face detection model for detecting the faces of dark-skin-color users needs to be trained, it is not necessary to collect dark-skin-color sample face pictures, which are hard to obtain; it is only necessary to obtain readily available light-skin-color sample face pictures and render the designated part area. This greatly reduces the acquisition cost of dark-skin-color sample face pictures, and because the method is simple to implement, acquiring large batches of such pictures becomes easy, which in turn reduces the training cost of the target face detection model. In addition, the samples on which the target face detection model is trained match the objects the model is to detect, which improves the accuracy of the face detection result.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Based on the same idea, and corresponding to the training method for a face detection model described above, one or more embodiments of the present specification further provide a training apparatus for a face detection model.
Fig. 5 is a schematic block diagram of an apparatus for training a face detection model according to an embodiment of the present disclosure, and as shown in fig. 5, an apparatus 500 for training a face detection model includes:
an obtaining module 510, configured to obtain a plurality of first sample face pictures of a first skin color type;
a determining module 520, configured to determine first location information corresponding to a designated location area in the first sample face picture;
a rendering module 530, configured to render the designated region based on the first location information, so as to obtain a second sample face picture of a second skin color type;
a training module 540, configured to train a target face detection model based on the second sample face picture; and the target face detection model is used for carrying out face detection on the user with the second skin color type.
In one embodiment, the first skin tone type and the second skin tone type are determined based on skin tone shading information;
the rendering module 530 includes:
the first determining unit is used for determining target skin color depth information corresponding to the second skin color type;
and a rendering unit, configured to render the designated part area in a manner matching the target skin color shade information.
In one embodiment, the first skin tone type is a light skin tone type; the second skin color type is a dark skin color type.
In one embodiment, the rendering unit is further to:
converting the skin color in the specified location area to a skin color matching the second skin color type.
In one embodiment, the designated part area consists of the regions of the first sample face picture other than the human eye region;
the determining module 520 includes:
a parsing unit, configured to perform face parsing on the first sample face picture to obtain second position information corresponding to each part region;
a second determining unit that determines third position information corresponding to the other regions of the respective regions excluding the human eye region, based on the second position information;
a third specifying unit that specifies the third position information as the first position information corresponding to the specified portion region.
In one embodiment, the regions other than the human eye region comprise at least one of: a facial skin region, a neck region, and an ear region.
In one embodiment, the training module 540 comprises:
the extraction unit is used for extracting the face characteristic information in the second sample face picture; the face feature information comprises at least one item of face key point information, face information and facial expression information;
and the training unit is used for carrying out deep learning based on the second sample face picture and the face characteristic information to obtain the target face detection model.
By adopting the apparatus of one or more embodiments of the present specification, a second sample face picture of a second skin color type can be obtained by rendering the designated part area in a plurality of first sample face pictures of a first skin color type, and a target face detection model is then trained based on the second sample face picture. In this way, sample face pictures of different skin color types can be converted into one another through the rendering operation. Therefore, when a target face detection model for performing face detection on users of the second skin color type needs to be trained, it is not necessary to collect sample face pictures of the second skin color type, which are hard to obtain; it is only necessary to obtain readily available sample face pictures and render the designated part area. This greatly reduces the acquisition cost of sample face pictures of the second skin color type, and because the method is simple to implement, acquiring large batches of such pictures becomes easy, which in turn reduces the training cost of the target face detection model. In addition, the samples on which the target face detection model is trained match the objects the model is to detect, which improves the accuracy of the face detection result.
It should be understood by those skilled in the art that the above-mentioned training apparatus for a face detection model can be used to implement the above-mentioned training method for a face detection model, wherein the detailed description should be similar to the above-mentioned method, and is not repeated herein in order to avoid complexity.
Based on the same idea, one or more embodiments of the present specification further provide a training device for a face detection model, as shown in fig. 6. The training device for the face detection model may have a relatively large difference due to different configurations or performances, and may include one or more processors 601 and a memory 602, where one or more stored applications or data may be stored in the memory 602. Wherein the memory 602 may be transient or persistent storage. The application program stored in memory 602 may include one or more modules (not shown), each of which may include a series of computer-executable instructions in a training device for a face detection model. Still further, the processor 601 may be arranged in communication with the memory 602 to execute a series of computer executable instructions in the memory 602 on a training device for a face detection model. The training apparatus for the face detection model may also include one or more power supplies 603, one or more wired or wireless network interfaces 604, one or more input-output interfaces 605, and one or more keyboards 606.
Specifically, in this embodiment, the training device for the face detection model includes a memory and one or more programs. The one or more programs are stored in the memory and may include one or more modules, each of which may include a series of computer-executable instructions for the training device. The one or more programs are configured to be executed by the one or more processors and include computer-executable instructions for:
obtaining a plurality of first sample face pictures of a first skin color type;
determining first position information corresponding to a designated part area in the first sample face picture;
rendering the specified part area based on the first position information to obtain a second sample face picture of a second skin color type;
training a target face detection model based on the second sample face picture; and the target face detection model is used for carrying out face detection on the user with the second skin color type.
Optionally, the first skin tone type and the second skin tone type are determined based on skin tone shading information;
the computer executable instructions, when executed, may further cause the processor to:
determining target skin color depth information corresponding to the second skin color type;
and rendering the designated part area in a manner matching the target skin color shade information.
Optionally, the first skin tone type is a light skin tone type; the second skin color type is a dark skin color type.
Optionally, the computer executable instructions, when executed, may further cause the processor to:
converting the skin color in the specified location area to a skin color matching the second skin color type.
Optionally, the designated part area consists of the regions of the first sample face picture other than the human eye region;
the computer executable instructions, when executed, may further cause the processor to:
performing face parsing on the first sample face picture to obtain second position information corresponding to each part region;
determining, based on the second position information, third position information corresponding to the regions other than the human eye region among the part regions;
and determining the third position information as the first position information corresponding to the designated part area.
Optionally, the regions other than the human eye region include at least one of: a facial skin region, a neck region, and an ear region.
Optionally, the computer executable instructions, when executed, may further cause the processor to:
extracting face feature information in the second sample face picture; the face feature information comprises at least one item of face key point information, face information and facial expression information;
and performing deep learning based on the second sample face picture and the face feature information to obtain the target face detection model.
By adopting the technical solution of one or more embodiments of this specification, second sample face pictures of a second skin color type can be obtained by rendering the designated part regions in a plurality of first sample face pictures of a first skin color type, and a target face detection model can then be trained based on the second sample face pictures. Because sample face pictures of different skin color types can be converted into one another through the rendering operation, training a target face detection model for performing face detection on users of the second skin color type no longer requires collecting sample face pictures of the second skin color type, which are hard to obtain; it suffices to obtain readily available sample face pictures and render their designated part regions. This greatly reduces the acquisition cost of sample face pictures of the second skin color type and is simple to implement, so acquiring a large batch of such pictures becomes feasible and the training cost of the target face detection model is reduced. In addition, because the samples used to train the target face detection model match the objects the model detects, the accuracy of the face detection result is improved.
One or more embodiments of the present specification also propose a computer-readable storage medium storing one or more programs, the one or more programs including instructions, which, when executed by an electronic device including a plurality of application programs, enable the electronic device to perform the above-mentioned training method of a face detection model, and in particular to perform:
obtaining a plurality of first sample face pictures of a first skin color type;
determining first position information corresponding to a designated part region in the first sample face picture;
rendering the designated part region based on the first position information to obtain a second sample face picture of a second skin color type;
training a target face detection model based on the second sample face picture, where the target face detection model is used for performing face detection on users of the second skin color type.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as divided into various units by function, each described separately. Of course, when implementing one or more embodiments of the present specification, the functionality of the various units may be implemented in one or more pieces of software and/or hardware.
One skilled in the art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
One or more embodiments of the present specification are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only one or more embodiments of the present disclosure, and is not intended to limit the present disclosure. Various modifications and alterations to one or more embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of one or more embodiments of the present specification should be included in the scope of claims of one or more embodiments of the present specification.

Claims (13)

1. A training method of a face detection model comprises the following steps:
obtaining a plurality of first sample face pictures of a first skin color type;
determining first position information corresponding to a designated part region in the first sample face picture;
rendering the designated part region based on the first position information to obtain a second sample face picture of a second skin color type; and
training a target face detection model based on the second sample face picture, wherein the target face detection model is used for performing face detection on users of the second skin color type.
2. The method of claim 1, wherein the first skin color type and the second skin color type are determined based on skin color depth information;
the rendering the designated part region based on the first position information comprises:
determining target skin color depth information corresponding to the second skin color type; and
rendering the designated part region to match the target skin color depth information.
3. The method of claim 2, wherein the first skin color type is a light skin color type and the second skin color type is a dark skin color type.
4. The method of claim 3, wherein the rendering the designated part region to match the target skin color depth information comprises:
converting the skin color in the designated part region to a skin color matching the second skin color type.
5. The method of claim 1, wherein the designated part region is a region other than the human eye region in the first sample face picture;
the determining the first position information corresponding to the designated part region in the first sample face picture comprises:
performing face parsing on the first sample face picture to obtain second position information corresponding to each part region;
determining, based on the second position information, third position information corresponding to the regions other than the human eye region among the part regions; and
determining the third position information as the first position information corresponding to the designated part region.
6. The method of claim 5, wherein the regions other than the human eye region among the part regions comprise at least one of: a face region, a neck region, and an ear region.
7. The method of claim 1, wherein training a target face detection model based on the second sample face picture comprises:
extracting face feature information from the second sample face picture, the face feature information comprising at least one of: face key point information, face shape information, and facial expression information; and
and performing deep learning based on the second sample face picture and the face feature information to obtain the target face detection model.
8. An apparatus for training a face detection model, comprising:
an acquisition module, configured to acquire a plurality of first sample face pictures of a first skin color type;
a determining module, configured to determine first position information corresponding to a designated part region in the first sample face picture;
a rendering module, configured to render the designated part region based on the first position information to obtain a second sample face picture of a second skin color type; and
a training module, configured to train a target face detection model based on the second sample face picture, wherein the target face detection model is used for performing face detection on users of the second skin color type.
9. The device of claim 8, wherein the first skin color type and the second skin color type are determined based on skin color depth information;
the rendering module includes:
a first determining unit, configured to determine target skin color depth information corresponding to the second skin color type; and
a rendering unit, configured to render the designated part region to match the target skin color depth information.
10. The device of claim 9, wherein the first skin color type is a light skin color type and the second skin color type is a dark skin color type.
11. The device of claim 8, wherein the designated part region is a region other than the human eye region in the first sample face picture;
the determining module comprises:
a parsing unit, configured to perform face parsing on the first sample face picture to obtain second position information corresponding to each part region;
a second determining unit, configured to determine, based on the second position information, third position information corresponding to the regions other than the human eye region among the part regions; and
a third determining unit, configured to determine the third position information as the first position information corresponding to the designated part region.
12. A training apparatus for a face detection model, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
obtaining a plurality of first sample face pictures of a first skin color type;
determining first position information corresponding to a designated part region in the first sample face picture;
rendering the designated part region based on the first position information to obtain a second sample face picture of a second skin color type; and
training a target face detection model based on the second sample face picture, wherein the target face detection model is used for performing face detection on users of the second skin color type.
13. A storage medium storing computer-executable instructions that, when executed, implement the following:
obtaining a plurality of first sample face pictures of a first skin color type;
determining first position information corresponding to a designated part region in the first sample face picture;
rendering the designated part region based on the first position information to obtain a second sample face picture of a second skin color type; and
training a target face detection model based on the second sample face picture, wherein the target face detection model is used for performing face detection on users of the second skin color type.
CN202010084622.2A 2020-02-10 2020-02-10 Training method and device for face detection model Active CN111291685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010084622.2A CN111291685B (en) 2020-02-10 2020-02-10 Training method and device for face detection model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010084622.2A CN111291685B (en) 2020-02-10 2020-02-10 Training method and device for face detection model

Publications (2)

Publication Number Publication Date
CN111291685A true CN111291685A (en) 2020-06-16
CN111291685B CN111291685B (en) 2023-06-02

Family

ID=71017529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010084622.2A Active CN111291685B (en) 2020-02-10 2020-02-10 Training method and device for face detection model

Country Status (1)

Country Link
CN (1) CN111291685B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113870102A (en) * 2021-12-06 2021-12-31 深圳市大头兄弟科技有限公司 Animation method, device, equipment and storage medium of image

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL131108A0 (en) * 1999-07-26 2001-01-28 Haim Shani Method and apparatus for diagnosis of actual or pre-shock state
CN101706874A (en) * 2009-12-25 2010-05-12 青岛朗讯科技通讯设备有限公司 Method for face detection based on features of skin colors
CN107358157A (en) * 2017-06-07 2017-11-17 阿里巴巴集团控股有限公司 A kind of human face in-vivo detection method, device and electronic equipment
WO2018033143A1 (en) * 2016-08-19 2018-02-22 北京市商汤科技开发有限公司 Video image processing method, apparatus and electronic device
CN107798314A (en) * 2017-11-22 2018-03-13 北京小米移动软件有限公司 Skin color detection method and device
CN108009465A (en) * 2016-10-31 2018-05-08 杭州海康威视数字技术股份有限公司 A kind of face identification method and device
CN108537881A (en) * 2018-04-18 2018-09-14 腾讯科技(深圳)有限公司 A kind of faceform's processing method and its equipment, storage medium
CN108573527A (en) * 2018-04-18 2018-09-25 腾讯科技(深圳)有限公司 A kind of expression picture generation method and its equipment, storage medium
CN110032915A (en) * 2018-01-12 2019-07-19 杭州海康威视数字技术股份有限公司 A kind of human face in-vivo detection method, device and electronic equipment

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL131108A0 (en) * 1999-07-26 2001-01-28 Haim Shani Method and apparatus for diagnosis of actual or pre-shock state
CN101706874A (en) * 2009-12-25 2010-05-12 青岛朗讯科技通讯设备有限公司 Method for face detection based on features of skin colors
WO2018033143A1 (en) * 2016-08-19 2018-02-22 北京市商汤科技开发有限公司 Video image processing method, apparatus and electronic device
CN108009465A (en) * 2016-10-31 2018-05-08 杭州海康威视数字技术股份有限公司 A kind of face identification method and device
CN107358157A (en) * 2017-06-07 2017-11-17 阿里巴巴集团控股有限公司 A kind of human face in-vivo detection method, device and electronic equipment
CN107798314A (en) * 2017-11-22 2018-03-13 北京小米移动软件有限公司 Skin color detection method and device
CN110032915A (en) * 2018-01-12 2019-07-19 杭州海康威视数字技术股份有限公司 A kind of human face in-vivo detection method, device and electronic equipment
CN108537881A (en) * 2018-04-18 2018-09-14 腾讯科技(深圳)有限公司 A kind of faceform's processing method and its equipment, storage medium
CN108573527A (en) * 2018-04-18 2018-09-25 腾讯科技(深圳)有限公司 A kind of expression picture generation method and its equipment, storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JOSE M. CHAVES-GONZÁLEZ; MIGUEL A. VEGA-RODRÍGUEZ et al.: "Detecting skin in face recognition systems: A colour spaces study" *
P. KAKUMANU; S. MAKROGIANNIS; N. BOURBAKIS: "A survey of skin-color modeling and detection methods" *
MI Yuangen; CHEN Danchi; JI Peng: "Face detection algorithm based on geometric features and new Haar features" *
FAN Yifeng; YAN Zhiying: "Research on face detection based on the Adaboost algorithm and skin color verification" *
HUANG Zhichao; ZHANG Peng; ZHAO Huarong; ZHAO Wenming: "Research on a face detection algorithm based on skin color information in an intelligent video surveillance system" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113870102A (en) * 2021-12-06 2021-12-31 深圳市大头兄弟科技有限公司 Animation method, device, equipment and storage medium of image
CN113870102B (en) * 2021-12-06 2022-03-08 深圳市大头兄弟科技有限公司 Animation method, device, equipment and storage medium of image

Also Published As

Publication number Publication date
CN111291685B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
KR102445193B1 (en) Image processing method and apparatus, electronic device, and storage medium
CN108121952B (en) Face key point positioning method, device, equipment and storage medium
CN111260545B (en) Method and device for generating image
CN111275784B (en) Method and device for generating image
CN104765857B (en) The transform method and device of a kind of background picture
CN110809090A (en) Call control method and related product
CN110968808B (en) Method and device for realizing webpage theme update
CN107220614B (en) Image recognition method, image recognition device and computer-readable storage medium
CN112839223B (en) Image compression method, image compression device, storage medium and electronic equipment
CN112836801A (en) Deep learning network determination method and device, electronic equipment and storage medium
CN110889379A (en) Expression package generation method and device and terminal equipment
CN116547717A (en) Facial animation synthesis
CN104766082B (en) Image-recognizing method and device based on android system
CN112532882B (en) Image display method and device
CN111401331B (en) Face recognition method and device
CN107578375B (en) Image processing method and device
CN110717484B (en) Image processing method and system
CN111291685B (en) Training method and device for face detection model
CN112837202B (en) Watermark image generation and attack tracing method and device based on privacy protection
CN113642359B (en) Face image generation method and device, electronic equipment and storage medium
CN110533020A (en) A kind of recognition methods of text information, device and storage medium
CN112182648A (en) Privacy image and face privacy processing method, device and equipment
CN113139527B (en) Video privacy protection method, device, equipment and storage medium
CN114186535A (en) Structure diagram reduction method, device, electronic equipment, medium and program product
CN109741243B (en) Color sketch image generation method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant