CN118115397A - Portrait tooth whitening special effect generation method, device and equipment

Publication number: CN118115397A
Application number: CN202410433035.8A
Authority: CN (China)
Legal status: Pending
Prior art keywords: image, mouth region, sample, preset, tooth whitening
Inventors: 胡耀武, 谭娟, 李阳
Current assignee: Hangzhou Xiaoying Innovation Technology Co., Ltd.
Original assignee: Hangzhou Xiaoying Innovation Technology Co., Ltd.
Other languages: Chinese (zh)
Classification: Processing Or Creating Images (AREA)

Abstract

The application provides a method, a device and equipment for generating a portrait tooth whitening special effect, and relates to the technical field of image processing. The method comprises: acquiring an image to be processed; performing face key point detection on the image to be processed to obtain the point positions of the original mouth region in the image to be processed; mapping the original mouth region into a preset mouth region template map according to the point positions of the original mouth region to generate a mouth region image; performing tooth whitening processing on the mouth region image by adopting a preset tooth whitening model to obtain a tooth whitening effect map; and restoring the tooth whitening effect map into the image to be processed to generate a target effect image corresponding to the image to be processed. In this way, only the mouth region is whitened, by means of the preset mouth region template map and the preset tooth whitening model, so the tooth whitening is more accurate, more realistic and applicable to various scenes.

Description

Portrait tooth whitening special effect generation method, device and equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a method, a device and equipment for generating a special effect of whitening human teeth.
Background
At present, the tooth whitening special effect is a very commonly used function in the field of portrait photo beautification, for example in studio photo shooting and retouching, and in photographing and retouching applications on smart phones.
However, the current technical solutions are mainly color adjustment schemes based on the whole face: they rely on whitening the whole face to whiten the teeth, and finally achieve the tooth whitening effect by adjusting the saturation and hue of the tooth area.
This type of approach has several problems: 1. the tooth whitening accuracy is low, and occlusion cases cannot be handled; 2. being based on traditional saturation and hue adjustment algorithms, it cannot intelligently handle scenes with various tooth colors (black teeth, yellow teeth, red teeth, etc.), so the effect has poor universality.
Disclosure of Invention
The application aims to overcome the defects in the prior art, and provides a method, a device and equipment for generating a special effect of whitening human teeth, so as to solve the problems of low tooth whitening accuracy and the like in the prior art.
In order to achieve the above purpose, the technical scheme adopted by the embodiment of the application is as follows:
in a first aspect, an embodiment of the present application provides a method for generating a special effect of whitening a portrait tooth, where the method includes:
acquiring an image to be processed;
detecting the key points of the human face of the image to be processed to obtain the point positions of the original mouth area in the image to be processed;
according to the point positions of the original mouth region, mapping the original mouth region into a preset mouth region template diagram, and generating a mouth region image;
Performing tooth whitening treatment on the mouth area image by adopting a preset tooth whitening model to obtain a tooth whitening effect diagram;
and restoring the tooth whitening effect image to the image to be processed, and generating a target effect image corresponding to the image to be processed.
Optionally, the mapping the original mouth region to a preset mouth region template map according to the point position of the original mouth region, and generating a mouth region image includes:
constructing a first affine transformation matrix from the point position of the original mouth region to the point position of the template mouth region in the template diagram of the preset mouth region;
And mapping the original mouth region into the preset mouth region template diagram by adopting the first affine transformation matrix according to the point positions of the original mouth region, and generating the mouth region image.
Optionally, the restoring the tooth whitening effect map to the image to be processed, generating a target effect image corresponding to the image to be processed includes:
constructing a second affine transformation matrix from the point location of the template mouth region to the point location of the original mouth region;
Generating point positions of a mouth area in the tooth whitening effect map, and mapping the generated mouth area into the original mouth area by adopting the second affine transformation matrix to obtain a mapped tooth whitening effect map;
And restoring the mapped tooth whitening effect image to the image to be processed to generate the target effect image.
Optionally, the restoring the mapped tooth whitening effect map to the image to be processed, to generate the target effect image includes:
and restoring the tooth whitening effect graph after mapping to the image to be processed by adopting a preset image restoration formula according to preset tooth whitening intensity parameters, so as to generate the target effect image.
Optionally, the preset tooth whitening model is obtained by training in the following manner:
Processing each sample face image in a preset face data set by adopting a preset image generation model to generate a face tooth whitening generation diagram corresponding to the sample face image;
detecting key points of the human face of the sample human face image to obtain point positions of a sample original mouth area in the sample human face image and point positions of a sample generated mouth area in the human face tooth whitening generation diagram;
According to the point positions of the sample original mouth region, mapping the sample original mouth region into a template diagram of a preset mouth region, and generating a sample original mouth region image;
According to the point positions of the sample generation mouth region, mapping the sample generation mouth region into the preset mouth region template map, and generating a sample generation mouth region image;
Fusing the original mouth region image of the sample and the mouth region image generated by the sample to generate a sample mouth region effect diagram;
Constructing a tooth whitening data pair according to the sample original mouth area image and the sample mouth area effect image;
And training the model by adopting the tooth whitening data pair to obtain the preset tooth whitening model.
Optionally, the mapping the sample original mouth region to a preset mouth region template map according to the point position of the sample original mouth region, and generating a sample original mouth region image includes:
Constructing a first sample affine transformation matrix from the point location of the sample original mouth region to the point location of the template mouth region of the template diagram of the preset mouth region;
according to the point positions of the sample original mouth region, mapping the sample original mouth region into the preset mouth region template diagram by adopting the first sample affine transformation matrix, and generating the sample original mouth region image;
the generating the sample generating mouth region image by mapping the sample generating mouth region to the preset mouth region template map according to the point position of the sample generating mouth region comprises the following steps:
constructing a second sample affine transformation matrix from the point positions of the sample generation mouth region to the point positions of the template mouth region;
And according to the point positions of the sample generation mouth region, mapping the sample generation mouth region into the preset mouth region template diagram by adopting the second sample affine transformation matrix, and generating the sample generation mouth region image.
Optionally, the fusing the sample original mouth region image and the sample generated mouth region image to generate a sample mouth region effect map includes:
And fusing the original mouth region image of the sample and the generated mouth region image of the sample by adopting the preset mouth region template image to generate the sample mouth region effect image.
Optionally, the processing each sample face image in the preset face data set by using the preset image generation model to generate a face tooth whitening generation map corresponding to the sample face image includes:
And processing the sample face image by adopting the preset image generation model according to a preset generation prompt word corresponding to the sample face image to generate the face tooth whitening generation graph.
In a second aspect, an embodiment of the present application provides a portrait teeth whitening special effect generating device, including:
the acquisition module is used for acquiring the image to be processed;
the detection module is used for carrying out face key point detection on the image to be processed to obtain point positions of an original mouth area in the image to be processed;
The mapping module is used for mapping the original mouth region into a preset mouth region template diagram according to the point positions of the original mouth region, and generating a mouth region image;
The processing module is used for carrying out tooth whitening processing on the mouth area image by adopting a preset tooth whitening model to obtain a tooth whitening effect diagram;
The generating module is used for restoring the tooth whitening effect graph to the image to be processed and generating a target effect image corresponding to the image to be processed.
In a third aspect, an embodiment of the present application provides an electronic device, including: the system comprises a processor and a storage medium, wherein the processor is in communication connection with the storage medium through a bus, the storage medium stores program instructions executable by the processor, and the processor calls a program stored in the storage medium to execute the steps of the portrait teeth whitening special effect generating method according to any one of the first aspect.
Compared with the prior art, the application has the following beneficial effects:
The application provides a method, a device and equipment for generating a portrait tooth whitening special effect. The method comprises: acquiring an image to be processed; performing face key point detection on the image to be processed to obtain the point positions of the original mouth region in the image to be processed; mapping the original mouth region into a preset mouth region template map according to the point positions of the original mouth region to generate a mouth region image; performing tooth whitening processing on the mouth region image by adopting a preset tooth whitening model to obtain a tooth whitening effect map; and restoring the tooth whitening effect map into the image to be processed to generate a target effect image corresponding to the image to be processed. In this way, only the mouth region is whitened, by means of the preset mouth region template map and the preset tooth whitening model, so the tooth whitening is more accurate, more realistic and applicable to various scenes.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a training method of a tooth whitening model according to an embodiment of the present application;
Fig. 1A is a schematic diagram of the 101 face key points provided in an embodiment of the present application;
fig. 2 is a flowchart of a method for generating an image of a sample original mouth area according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a method for generating special effects of whitening portrait teeth according to an embodiment of the present application;
Fig. 4 is a flowchart of a method for generating a mouth area image according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for generating a target effect image according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a portrait tooth whitening special effect generating device according to an embodiment of the present application;
fig. 7 is a schematic diagram of an electronic device according to an embodiment of the present application.
Icon: 601-acquisition module, 602-detection module, 603-mapping module, 604-processing module, 605-generation module, 701-processor, 702-storage medium.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
Furthermore, the terms "first," "second," and the like, if any, are used merely for distinguishing between descriptions and not for indicating or implying a relative importance.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
In order to better understand the method for generating the special effect of whitening the portrait teeth, the training method of the tooth whitening model provided by the application is explained. Fig. 1 is a flowchart of a training method of a tooth whitening model according to an embodiment of the application. As shown in fig. 1, the preset tooth whitening model is trained as follows:
S101, processing each sample face image in a preset face data set by adopting a preset image generation model to generate a face tooth whitening generation diagram corresponding to the sample face image.
For example, the preset image generation model is an image whitening model, such as the Stable Diffusion WebUI, an open-source tool built on the AIGC diffusion large model Stable Diffusion; it is widely used in the field of image retouching and repair and has the advantage of good generation quality.
The preset face dataset is the open-source FFHQ (Flickr-Faces-HQ) face dataset. For example, 10,000 face images are screened from the FFHQ face dataset as sample face images, covering scenes such as mouth occlusion, black teeth, yellow teeth, red teeth, open mouth and closed mouth.
Inputting the sample face image into a preset image generation model, and performing whitening treatment on the sample face image by the preset image generation model to generate a face tooth whitening generation diagram corresponding to the sample face image.
S102, performing face key point detection on the sample face image to obtain point positions of a sample original mouth region in the sample face image and point positions of a sample generated mouth region in the face tooth whitening generation diagram.
101-point face key point detection is performed on the sample face image to obtain 101 face key points, from which the point positions of the sample original mouth region are obtained. Fig. 1A is a schematic diagram of the 101 face key points provided in an embodiment of the present application. As shown in fig. 1A, the 101 face key points jointly represent the whole facial features, and the key points of the mouth area among the 101 face key points are taken as the point positions of the sample original mouth region.
Because the mouth area contains many points, the leftmost point (the 75th key point), the rightmost point (the 81st key point) and the uppermost point (the 100th key point) of the mouth area can be adopted as the contour points of the mouth area, and these contour points are used to represent the point positions of the mouth area.
Because the sample face image and the face tooth whitening generation map differ only in whitening degree, the point positions of their key points are identical. Therefore, the contour points of the sample original mouth region are also used to represent the point positions of the sample generated mouth region.
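As a minimal illustration (not part of the patent text), selecting these three contour points from the 101 detected key points could look like the Python sketch below; the landmark detector itself and the 0-based indexing of its output are assumptions.

```python
import numpy as np

def mouth_contour_points(keypoints_101: np.ndarray) -> np.ndarray:
    """Pick the leftmost (75th), rightmost (81st) and uppermost (100th) mouth
    key points as the three contour points of the mouth region.

    keypoints_101 is assumed to be a (101, 2) array from a 101-point face
    landmark detector; the patent's "75th/81st/100th key point" is treated
    as 1-based here, hence the -1 offset.
    """
    contour_idx = [75 - 1, 81 - 1, 100 - 1]
    return keypoints_101[contour_idx].astype(np.float32)  # shape (3, 2)
```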
And S103, mapping the sample original mouth region into a template diagram of a preset mouth region according to the point positions of the sample original mouth region, and generating a sample original mouth region image.
For example, the preset mouth region template map may be a 512×512 template map, a 256×512 template map, or a 256×256 template map; the specific pixel size can be set by the user and is not limited here.
The sample original mouth region occupies relatively few pixels in the sample face image and is therefore not clear enough. Mapping the sample original mouth region into the preset mouth region template map to generate the sample original mouth region image makes the image clearer and expresses more details of the mouth region.
S104, mapping the sample generation mouth region into a preset mouth region template diagram according to the point position of the sample generation mouth region, and generating a sample generation mouth region image.
The preset mouth region template map adopted in this step is the same as that in step S103, and the achievable beneficial effects are the same, which are not described here again.
S105, fusing the original mouth region image of the sample and the generated mouth region image of the sample to generate a sample mouth region effect diagram.
The sample original mouth region image characterizes the mouth features before the whitening treatment and belongs to the original image, while the sample generated mouth region image characterizes the mouth features after the whitening treatment and may be over-whitened. Therefore, the sample original mouth region image and the sample generated mouth region image are fused to neutralize the whitening effect, so that the obtained sample mouth region effect map is more realistic: a whitening effect is achieved without causing over-whitening distortion.
S106, constructing a tooth whitening data pair according to the original mouth area image of the sample and the effect diagram of the mouth area of the sample.
In the tooth whitening data pair, the original mouth area image of the sample is an image before whitening, and the mouth area effect image of the sample is a target image after whitening.
By this method, a large number of high-precision tooth whitening data pairs can be built in batches. Compared with a traditional dataset construction method, the tooth whitening effect generated by AIGC is finer, comes with highlight adjustment of the teeth, and has higher definition; in terms of user scene coverage, the tooth whitening data pairs contain scenes such as open mouth, closed mouth, black teeth, yellow teeth, red teeth and occlusion, and therefore have higher universality.
And S107, performing model training by adopting the tooth whitening data pair to obtain a preset tooth whitening model.
Since the trained model is used for tooth whitening, performing model training with tooth whitening data pairs that mainly contain the mouth area allows the features of the teeth to be captured better.
Illustratively, the tooth whitening model is a tooth whitening network constructed based on the Pix2Pix algorithm.
The sample original mouth region image is input into an initial tooth whitening model for whitening treatment to obtain an output image, and the output image is compared with the sample mouth region effect map. If the similarity between the output image and the sample mouth region effect map is greater than or equal to a preset threshold (specifically, this is characterized by the loss function of the tooth whitening model, with the loss function value reaching a preset function value), which indicates that the model can output whitening images meeting the user requirements, the trained tooth whitening model is taken as the preset tooth whitening model. Otherwise, training is continued until the similarity between the output image and the sample mouth region effect map is greater than or equal to the preset threshold. If the similarity between the output image and the sample mouth region effect map remains smaller than the preset threshold, the number of images in the preset face dataset is increased in step S101 until the similarity between the output image and the sample mouth region effect map is greater than or equal to the preset threshold.
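As a hedged PyTorch sketch only (not the patent's actual training code): the loop below fits a generator on the (sample original mouth region image, sample mouth region effect map) pairs and stops once the outputs are close enough to the targets. The `whitening_pairs` data loader, the L1-only loss and the 0.9 similarity threshold are illustrative assumptions; a full Pix2Pix setup would additionally train a PatchGAN discriminator with an adversarial loss.

```python
import torch
import torch.nn as nn

def train_tooth_whitening_model(generator: nn.Module, whitening_pairs,
                                epochs: int = 50, sim_threshold: float = 0.9,
                                lr: float = 2e-4, device: str = "cuda"):
    """Hedged sketch of S107: fit the generator on tooth whitening data pairs
    until its output is similar enough to the sample mouth region effect map."""
    generator = generator.to(device)
    opt = torch.optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))
    l1 = nn.L1Loss()
    for _ in range(epochs):
        total_sim, n = 0.0, 0
        for src, target in whitening_pairs:          # tensors in [0, 1], shape (B, 3, H, W)
            src, target = src.to(device), target.to(device)
            out = generator(src)
            loss = l1(out, target)                   # reconstruction loss only (assumption)
            opt.zero_grad(); loss.backward(); opt.step()
            total_sim += 1.0 - loss.item()           # crude similarity proxy
            n += 1
        if total_sim / max(n, 1) >= sim_threshold:   # similarity >= preset threshold
            break
    return generator
```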
Because the tooth whitening data pair comprises scenes such as mouth opening/mouth closing/black teeth/yellow teeth/red teeth/shielding, the preset tooth whitening model has strong robustness, and can well solve the tooth whitening problem under multiple scenes.
To sum up, in this embodiment, a preset image generation model is adopted to process each sample face image in a preset face data set, so as to generate a face tooth whitening generation diagram corresponding to the sample face image; detecting key points of the human face of the sample human face image to obtain point positions of an original mouth region of the sample in the sample human face image and point positions of a mouth region of the sample in the human face tooth whitening generation diagram; according to the point positions of the sample original mouth region, mapping the sample original mouth region into a template diagram of a preset mouth region, and generating a sample original mouth region image; according to the point positions of the sample generation mouth region, mapping the sample generation mouth region into a preset mouth region template map, and generating a sample generation mouth region image; fusing the original mouth region image of the sample and the generated mouth region image of the sample to generate a sample mouth region effect diagram; constructing a tooth whitening data pair according to the sample original mouth area image and the sample mouth area effect image; and training the model by adopting the tooth whitening data pair to obtain a preset tooth whitening model. Thus, the accurate training is performed to obtain a preset tooth whitening model.
On the basis of the embodiment corresponding to fig. 1, the embodiment of the application also provides a method for generating the original mouth area image of the sample. Fig. 2 is a flowchart of a method for generating an image of a sample original mouth area according to an embodiment of the present application. As shown in fig. 2, in S103, mapping the sample original mouth region to a preset mouth region template map according to the point location of the sample original mouth region, to generate a sample original mouth region image, including:
S201, constructing a first sample affine transformation matrix from the point location of the original mouth region of the sample to the point location of the template mouth region of the template diagram of the preset mouth region.
And determining the contour point positions of the template mouth regions in the template diagram of the preset mouth regions according to the contour point positions of the mouth regions. And constructing a first sample affine transformation matrix according to the contour points of the mouth region and the contour points of the template mouth region. That is, the point location of the original mouth region of the sample can be transformed to the point location of the template mouth region of the template map of the preset mouth region by the transformation rule of the affine transformation matrix of the first sample.
Illustratively, the first sample affine transformation matrix is calculated as shown in the following formulas (1), (2) and (3):
(x', y', 1)^T = M_0 * (x, y, 1)^T (1)
M_0 = [[a, b, c], [d, e, f], [0, 0, 1]] (2)
M_0 = B * A^(-1), where A stacks the homogeneous coordinates of the three mouth region contour points as columns and B stacks the homogeneous coordinates of the corresponding template mouth region contour points (3)
wherein M_0 is the first sample affine transformation matrix, (x, y) are the coordinates of a contour point of the mouth region, and (x', y') are the coordinates of the corresponding contour point of the template mouth region. Formula (1) characterizes the transformation rule of the first sample affine transformation matrix, which transforms the point positions of the sample original mouth region to the point positions of the template mouth region in the preset mouth region template map. Formula (2) characterizes the first sample affine transformation matrix. Formula (3) characterizes the way the first sample affine transformation matrix is calculated.
S202, mapping the sample original mouth region into a preset mouth region template diagram by adopting a first sample affine transformation matrix according to the point positions of the sample original mouth region, and generating a sample original mouth region image.
In step S201, only the contour points of the mouth region and the contour points of the template mouth region are used to determine the first sample affine transformation matrix. The accuracy of the other points in the mouth area therefore does not need to be relied on, a good effect can still be achieved when there is a certain deviation in the point positions, and the robustness is high, which effectively avoids the shortcoming of traditional point-based tooth whitening algorithms.
Using formula (1), the coordinates of the points of the sample original mouth region are input and the coordinates of the corresponding points of the template mouth region are calculated. According to the coordinates of the points of the template mouth region, the image features of each point of the sample original mouth region are mapped into the preset mouth region template map to obtain the sample original mouth region image.
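For illustration only (under the assumption of an OpenCV-based implementation), constructing the first sample affine transformation matrix from the three contour point pairs and warping the mouth region into the preset mouth region template map could look like the sketch below; the 512×512 template size is one of the options mentioned above, and `mouth_contour_points` is the hypothetical selector sketched earlier.

```python
import cv2
import numpy as np

TEMPLATE_SIZE = (512, 512)  # (width, height) of the preset mouth region template map

def warp_mouth_to_template(face_img: np.ndarray, mouth_pts: np.ndarray,
                           template_pts: np.ndarray):
    """S201/S202 sketch: map the (sample) original mouth region into the
    preset mouth region template map.

    mouth_pts / template_pts: three corresponding contour points, float32,
    shape (3, 2). Returns the mouth region image and the 2x3 affine matrix.
    """
    m0 = cv2.getAffineTransform(mouth_pts, template_pts)   # first (sample) affine transformation matrix
    mouth_region_img = cv2.warpAffine(face_img, m0, TEMPLATE_SIZE)
    return mouth_region_img, m0
```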
In S104, mapping the sample generation mouth region to a preset mouth region template map according to the point position of the sample generation mouth region, and generating a sample generation mouth region image, including:
S203, constructing a second sample affine transformation matrix from the point positions of the sample generation mouth region to the point positions of the template mouth region.
The construction method of the second sample affine transformation matrix is similar to that of the first sample affine transformation matrix, and is not repeated here.
S204, according to the point positions of the sample generation mouth region, a second sample affine transformation matrix is adopted, the sample generation mouth region is mapped into a preset mouth region template diagram, and a sample generation mouth region image is generated.
The generation manner of this step is similar to that of the original mouth region image of the sample, and will not be described here again.
To sum up, in the present embodiment, a first sample affine transformation matrix from the point positions of the sample original mouth region to the point positions of the template mouth region of the preset mouth region template map is constructed; according to the point positions of the sample original mouth region, the sample original mouth region is mapped into the preset mouth region template map by adopting the first sample affine transformation matrix to generate the sample original mouth region image; a second sample affine transformation matrix from the point positions of the sample generated mouth region to the point positions of the template mouth region is constructed; and according to the point positions of the sample generated mouth region, the sample generated mouth region is mapped into the preset mouth region template map by adopting the second sample affine transformation matrix to generate the sample generated mouth region image. In this way, the sample original mouth region image and the sample generated mouth region image are accurately generated.
Based on the embodiment corresponding to fig. 1, in another embodiment of the present application, in S105, the fusing of the sample original mouth region image and the sample generated mouth region image to generate a sample mouth region effect map includes:
And fusing the original mouth region image of the sample and the generated mouth region image of the sample by adopting a preset mouth region template image to generate a sample mouth region effect image.
And fusing the original mouth region image of the sample with the generated mouth region image of the sample on a template image of the preset mouth region, wherein the generated effect image of the mouth region of the sample is also an image with the same pixel size as the template image of the preset mouth region.
The specific fusion formula is shown in the following formula (4):
D_mouth=S_mouth*(1-alpha)+Aigc_mouth*alpha (4)
wherein D_mouth is the sample mouth region effect map, S_mouth is the sample original mouth region image, Aigc_mouth is the sample generated mouth region image, and alpha is the fusion parameter.
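Formula (4) is a per-pixel alpha blend; a minimal NumPy sketch follows, where the fusion parameter value 0.7 is only an illustrative assumption and not a value given in the text.

```python
import numpy as np

def fuse_mouth_regions(s_mouth: np.ndarray, aigc_mouth: np.ndarray,
                       alpha: float = 0.7) -> np.ndarray:
    """Formula (4): D_mouth = S_mouth * (1 - alpha) + Aigc_mouth * alpha."""
    d_mouth = (s_mouth.astype(np.float32) * (1.0 - alpha)
               + aigc_mouth.astype(np.float32) * alpha)
    return np.clip(d_mouth, 0, 255).astype(np.uint8)  # uint8 images assumed
```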
To sum up, in this embodiment, a preset mouth region template diagram is adopted to fuse a sample original mouth region image and a sample generated mouth region image, so as to generate a sample mouth region effect diagram. Thus, the sample mouth region effect map is accurately generated.
On the basis of the embodiment corresponding to fig. 1, in another embodiment of the present application, processing each sample face image in the preset face data set by using the preset image generation model in S101 to generate a face tooth whitening generation map corresponding to the sample face image includes:
According to the preset generation prompt word corresponding to the sample face image, the sample face image is processed by adopting the preset image generation model to generate the face tooth whitening generation map.
When the preset image generation model is adopted to process the sample face image, preset generation prompt words are set according to the user requirements, so that personalized adjustment is carried out on the processing process of the preset image generation model.
Illustratively, the preset generation prompt words include: positive prompt words, negative prompt words and denoising strength.
The positive prompt words may be: best quality, high quality, facial, beautiful teeth, white teeth, perfect teeth, bright spots on teeth, straight teeth, precise details, 8K, artistic photo.
The negative prompt words may be: worst quality, blurry.
The sampling method may be: Euler a sampling. The denoising strength may be: 0.25.
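As an illustrative sketch only, feeding these settings to a locally running Stable Diffusion WebUI through its img2img API might look like the request below; the `/sdapi/v1/img2img` endpoint and field names follow the AUTOMATIC1111 WebUI API as commonly documented, and the local URL and timeout are assumptions.

```python
import base64
import requests

def generate_whitened_face(image_path: str,
                           url: str = "http://127.0.0.1:7860") -> bytes:
    """Hedged sketch of S101: one img2img call producing a face tooth
    whitening generation map for a sample face image."""
    with open(image_path, "rb") as f:
        init_image = base64.b64encode(f.read()).decode()
    payload = {
        "init_images": [init_image],
        "prompt": ("best quality, high quality, facial, beautiful teeth, "
                   "white teeth, perfect teeth, bright spots on teeth, "
                   "straight teeth, precise details, 8K, artistic photo"),
        "negative_prompt": "worst quality, blurry",
        "sampler_name": "Euler a",
        "denoising_strength": 0.25,
    }
    resp = requests.post(f"{url}/sdapi/v1/img2img", json=payload, timeout=120)
    resp.raise_for_status()
    return base64.b64decode(resp.json()["images"][0])  # bytes of the generated image
```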
To sum up, in this embodiment, according to a preset generation hint word corresponding to a sample face image, a preset image generation model is adopted to process the sample face image, so as to generate a face tooth whitening generation diagram. Thus, the sample face image is accurately subjected to the whitening process.
The method for generating the special effect of whitening the portrait teeth provided by the application is explained by a specific example. Fig. 3 is a schematic flow chart of a method for generating a special effect of whitening a portrait tooth according to an embodiment of the present application, where an execution subject of the method is an electronic device, and the electronic device may be a device with a computing function, for example, a desktop computer, a tablet computer, etc. As shown in fig. 3, the method includes:
S301, acquiring an image to be processed.
The image to be processed is a face image, for example.
S302, detecting key points of the face of the image to be processed to obtain the point positions of the original mouth area in the image to be processed.
101-point face key point detection is performed on the image to be processed to obtain 101 face key points, from which the point positions of the original mouth region in the image to be processed are obtained. Since the mouth region contains many points, the leftmost point (the 75th key point), the rightmost point (the 81st key point) and the uppermost point (the 100th key point) of the mouth region can be adopted as the contour points of the mouth region, and these contour points are used to represent the point positions of the mouth region.
S303, mapping the original mouth region into a preset mouth region template diagram according to the point positions of the original mouth region, and generating a mouth region image.
The original mouth region occupies relatively few pixels in the image to be processed and is therefore not clear enough. Mapping the original mouth region into the preset mouth region template map to generate the mouth region image makes the image clearer, expresses more details of the mouth region, and also matches the preset tooth whitening model, which was trained using the preset mouth region template map.
S304, performing tooth whitening treatment on the mouth area image by adopting a preset tooth whitening model to obtain a tooth whitening effect diagram.
Inputting the mouth area image into a preset tooth whitening model, and outputting to obtain a tooth whitening effect image.
The preset tooth whitening model is a model trained according to the mouth region image, and can accurately identify and whiten teeth.
S305, restoring the tooth whitening effect image into the image to be processed, and generating a target effect image corresponding to the image to be processed.
Because the tooth whitening effect map is only an image of the mouth region, the original mouth region in the image to be processed is replaced by the tooth whitening effect map to obtain the target effect image corresponding to the image to be processed. The target effect image thus contains both the mouth region after tooth whitening and the original content of the other regions outside the mouth region.
By adopting the preset mouth region template map, only the mouth region is whitened, so the whitening is more accurate, more realistic and applicable to various scenes.
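Putting S301–S305 together, a hypothetical end-to-end inference sketch (reusing the helpers assumed in the snippets above and below; the `detect_101_keypoints` landmark detector and the trained `whitening_model` are placeholders, not functions named in the patent) could read:

```python
import numpy as np

def whiten_teeth(image_bgr: np.ndarray, whitening_model, detect_101_keypoints,
                 template_pts: np.ndarray) -> np.ndarray:
    """Hedged sketch of the inference flow S301-S305."""
    keypoints = detect_101_keypoints(image_bgr)                   # S302: 101 face key points
    mouth_pts = mouth_contour_points(keypoints)                   # 75th/81st/100th contour points
    mouth_img, m0 = warp_mouth_to_template(image_bgr, mouth_pts, template_pts)  # S303
    whitened_mouth = whitening_model(mouth_img)                   # S304: preset tooth whitening model
    return restore_mouth_to_image(image_bgr, whitened_mouth, m0)  # S305, sketched after S503 below
```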
To sum up, in the present embodiment, an image to be processed is acquired; face key point detection is performed on the image to be processed to obtain the point positions of the original mouth region in the image to be processed; the original mouth region is mapped into a preset mouth region template map according to the point positions of the original mouth region to generate a mouth region image; tooth whitening processing is performed on the mouth region image by adopting a preset tooth whitening model to obtain a tooth whitening effect map; and the tooth whitening effect map is restored into the image to be processed to generate a target effect image corresponding to the image to be processed. In this way, only the mouth region is whitened, by means of the preset mouth region template map and the preset tooth whitening model, so the tooth whitening is more accurate, more realistic and applicable to various scenes.
On the basis of the embodiment corresponding to fig. 3, the embodiment of the application further provides a method for generating the mouth area image. Fig. 4 is a flowchart of a method for generating a mouth area image according to an embodiment of the present application. As shown in fig. 4, in S303, mapping the original mouth region to a preset mouth region template map according to the point position of the original mouth region, to generate a mouth region image, including:
s401, constructing a first affine transformation matrix from the point position of the original mouth region to the point position of the template mouth region in the template diagram of the preset mouth region.
The construction method of the first affine transformation matrix is similar to that of the first sample affine transformation matrix, and is not repeated here.
S402, mapping the original mouth region into a preset mouth region template diagram by adopting a first affine transformation matrix according to the point positions of the original mouth region, and generating a mouth region image.
The generation manner of this step is similar to that of the original mouth region image of the sample, and will not be described here again.
To sum up, in the present embodiment, a first affine transformation matrix from the point location of the original mouth region to the point location of the template mouth region in the template diagram of the preset mouth region is constructed; according to the point positions of the original mouth region, a first affine transformation matrix is adopted to map the original mouth region into a preset mouth region template diagram, and a mouth region image is generated. Thus, the mouth area image is accurately generated.
On the basis of the embodiment corresponding to fig. 4, the embodiment of the application further provides a method for generating the target effect image. Fig. 5 is a flowchart of a method for generating a target effect image according to an embodiment of the application. As shown in fig. 5, in S305, restoring the tooth whitening effect map to the image to be processed, and generating a target effect image corresponding to the image to be processed includes:
S501, constructing a second affine transformation matrix from the point location of the template mouth region to the point location of the original mouth region.
The construction method of the second affine transformation matrix is similar to that of the first sample affine transformation matrix, and is not repeated here. The two are only different in that the construction direction of the first sample affine transformation matrix is from the point of the original mouth region of the sample to the point of the template mouth region, and the construction direction of the second affine transformation matrix is from the point of the template mouth region to the point of the original mouth region.
S502, generating point positions of a mouth area in the tooth whitening effect diagram, mapping the generated mouth area into an original mouth area by adopting a second affine transformation matrix, and obtaining the mapped tooth whitening effect diagram.
The generation manner of this step is similar to that of the original mouth region image of the sample, and will not be described here again.
S503, restoring the mapped tooth whitening effect image to the image to be processed, and generating a target effect image.
The original mouth region in the image to be processed is replaced by the mapped tooth whitening effect map to obtain the target effect image corresponding to the image to be processed. The target effect image contains both the mouth region after tooth whitening and the original content of the other regions outside the mouth region.
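A hedged OpenCV sketch of S501–S503 follows. Here the second affine transformation matrix is obtained by inverting the first one from S401, which is mathematically equivalent to constructing it from the template and original contour points; the binary mask used to limit the paste-back to the mouth area is an implementation assumption.

```python
import cv2
import numpy as np

def restore_mouth_to_image(image_bgr: np.ndarray, whitened_mouth: np.ndarray,
                           m0: np.ndarray) -> np.ndarray:
    """S501-S503 sketch: map the tooth whitening effect map back into the
    image to be processed and replace the original mouth area."""
    h, w = image_bgr.shape[:2]
    m1 = cv2.invertAffineTransform(m0)                       # second affine transformation matrix
    mapped = cv2.warpAffine(whitened_mouth, m1, (w, h))      # mapped tooth whitening effect map
    mask = cv2.warpAffine(np.full(whitened_mouth.shape[:2], 255, np.uint8), m1, (w, h))
    result = image_bgr.copy()
    result[mask > 0] = mapped[mask > 0]                      # replace only the mouth region
    return result
```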
To sum up, in the present embodiment, a second affine transformation matrix from the point location of the template mouth region to the point location of the original mouth region is constructed; generating point positions of a mouth area in the tooth whitening effect map, mapping the generated mouth area into an original mouth area by adopting a second affine transformation matrix, and obtaining a mapped tooth whitening effect map; and restoring the mapped tooth whitening effect image to the image to be processed to generate a target effect image. Thus, the target effect image after the tooth whitening treatment is accurately obtained.
On the basis of the embodiment corresponding to fig. 5, in another embodiment of the present application, in S503, restoring the mapped tooth whitening effect map to the image to be processed, and generating the target effect image includes:
and restoring the mapped tooth whitening effect graph to the image to be processed by adopting a preset image restoration formula according to the preset tooth whitening intensity parameters, and generating a target effect image.
The original mouth region in the image to be processed is replaced by the mapped tooth whitening effect map to obtain an initial effect image corresponding to the image to be processed; the initial effect image contains both the mouth region after tooth whitening and the original content of the other regions outside the mouth region. The target effect image is then generated from the image to be processed and the initial effect image by adopting the preset image restoration formula.
Illustratively, the user presets the preset tooth whitening intensity parameters prior to processing the portrait teeth. The preset image restoration formula is shown in the following formula (5):
D = D1 * k + S * (1-k) (5)
wherein k is the tooth whitening intensity parameter (greater than 0 and less than 1), S is the image to be processed, D1 is the initial effect image, and D is the target effect image.
By fusing the initial effect image, which contains the tooth whitening effect map, with the image to be processed, the whitening effect is neutralized, so the obtained target effect image is more realistic: a whitening effect is achieved without causing over-whitening distortion, and the method is suitable for whitening in multiple scenes.
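Formula (5) is again a simple linear blend between the image to be processed and the initial effect image; a minimal NumPy sketch follows, with the default intensity 0.8 being only an example value.

```python
import numpy as np

def apply_whitening_intensity(to_process: np.ndarray, initial_effect: np.ndarray,
                              k: float = 0.8) -> np.ndarray:
    """Formula (5): D = D1 * k + S * (1 - k), with 0 < k < 1."""
    d = (initial_effect.astype(np.float32) * k
         + to_process.astype(np.float32) * (1.0 - k))
    return np.clip(d, 0, 255).astype(np.uint8)  # uint8 images assumed
```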
To sum up, in this embodiment, according to the preset tooth whitening intensity parameter, a preset image restoration formula is adopted to restore the mapped tooth whitening effect map to the image to be processed, so as to generate the target effect image. Therefore, the whitening effect is achieved, excessive whitening distortion is avoided, and the method is suitable for multi-scene whitening.
The following describes a portrait teeth whitening special effect generating device, equipment, storage medium and the like, and specific implementation processes and technical effects thereof are referred to above, and are not described in detail.
Fig. 6 is a schematic diagram of a device for generating special effects of whitening human teeth according to an embodiment of the present application, as shown in fig. 6, the device includes:
an acquiring module 601 is configured to acquire an image to be processed.
The detection module 602 is configured to detect a face key point of an image to be processed, and obtain a point location of an original mouth region in the image to be processed.
The mapping module 603 is configured to map the original mouth region to a preset mouth region template map according to the point location of the original mouth region, and generate a mouth region image.
The processing module 604 is configured to perform tooth whitening processing on the mouth area image by using a preset tooth whitening model, so as to obtain a tooth whitening effect map.
The generating module 605 is configured to restore the tooth whitening effect map to the image to be processed, and generate a target effect image corresponding to the image to be processed.
Further, the mapping module 603 is specifically configured to construct a first affine transformation matrix from the point location of the original mouth region to the point location of the template mouth region in the template diagram of the preset mouth region; according to the point positions of the original mouth region, a first affine transformation matrix is adopted to map the original mouth region into a preset mouth region template diagram, and a mouth region image is generated.
Further, the generating module 605 is specifically further configured to construct a second affine transformation matrix from the point location of the template mouth region to the point location of the original mouth region; generating point positions of a mouth area in the tooth whitening effect map, mapping the generated mouth area into an original mouth area by adopting a second affine transformation matrix, and obtaining a mapped tooth whitening effect map; and restoring the mapped tooth whitening effect image to the image to be processed to generate a target effect image.
Further, the generating module 605 is specifically further configured to restore the mapped tooth whitening effect map to the image to be processed by using a preset image restoration formula according to the preset tooth whitening intensity parameter, so as to generate the target effect image.
Further, the generating module 605 is further configured to process each sample face image in the preset face data set by using a preset image generating model, and generate a face tooth whitening generating map corresponding to the sample face image.
Further, the detection module 602 is further configured to perform face key point detection on the sample face image to obtain a point location of a sample original mouth area in the sample face image and a point location of a sample generated mouth area in the face tooth whitening generation map.
Further, the mapping module 603 is further configured to map the sample original mouth region to a preset mouth region template map according to the point location of the sample original mouth region, so as to generate a sample original mouth region image.
Further, the mapping module 603 is further configured to map the sample generation mouth region to a preset mouth region template map according to the point location of the sample generation mouth region, and generate a sample generation mouth region image.
Further, the generating module 605 is further configured to fuse the sample original mouth region image and the sample generated mouth region image, and generate a sample mouth region effect map.
Further, the generating module 605 is further configured to construct a tooth whitening data pair according to the sample original mouth region image and the sample mouth region effect map.
Further, the processing module 604 is further configured to perform model training by using the tooth whitening data pair to obtain a preset tooth whitening model.
Further, the mapping module 603 is specifically further configured to construct a first sample affine transformation matrix from a point location of a sample original mouth region to a point location of a template mouth region of a template map of a preset mouth region; according to the point positions of the sample original mouth region, mapping the sample original mouth region into a template diagram of a preset mouth region by adopting a first sample affine transformation matrix, and generating a sample original mouth region image; constructing a second sample affine transformation matrix from the point position of the sample generation mouth region to the point position of the template mouth region; and according to the point positions of the sample generation mouth region, mapping the sample generation mouth region into a preset mouth region template diagram by adopting a second sample affine transformation matrix, and generating a sample generation mouth region image.
Further, the generating module 605 is specifically further configured to use a preset mouth region template map to fuse the sample original mouth region image and the sample generated mouth region image, so as to generate a sample mouth region effect map.
Further, the generating module 605 is specifically further configured to process the sample face image by adopting the preset image generation model according to the preset generation prompt word corresponding to the sample face image, so as to generate the face tooth whitening generation map.
Fig. 7 is a schematic diagram of an electronic device according to an embodiment of the present application, where the electronic device may be a device with a computing function. As shown in fig. 7:
the electronic device includes: a processor 701, and a storage medium 702. The processor 701 and the storage medium 702 are connected by a bus.
The storage medium 702 is used to store a program, and the processor 701 calls the program stored in the storage medium 702 to execute the above-described method embodiment. The specific implementation manner and the technical effect are similar, and are not repeated here.
Optionally, the present invention also provides a program product, such as a computer readable storage medium, comprising a program for performing the above-described method embodiments when being executed by a processor.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform some of the steps of the methods according to the embodiments of the invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, etc.

Claims (10)

1. A method for generating a special effect of whitening human teeth, which is characterized by comprising the following steps:
acquiring an image to be processed;
detecting the key points of the human face of the image to be processed to obtain the point positions of the original mouth area in the image to be processed;
according to the point positions of the original mouth region, mapping the original mouth region into a preset mouth region template diagram, and generating a mouth region image;
Performing tooth whitening treatment on the mouth area image by adopting a preset tooth whitening model to obtain a tooth whitening effect diagram;
and restoring the tooth whitening effect image to the image to be processed, and generating a target effect image corresponding to the image to be processed.
2. The method according to claim 1, wherein the mapping the original mouth region into a preset mouth region template map according to the point location of the original mouth region, and generating a mouth region image, includes:
constructing a first affine transformation matrix from the point position of the original mouth region to the point position of the template mouth region in the template diagram of the preset mouth region;
And mapping the original mouth region into the preset mouth region template diagram by adopting the first affine transformation matrix according to the point positions of the original mouth region, and generating the mouth region image.
3. The method according to claim 2, wherein the restoring the tooth whitening effect map to the image to be processed, and generating the target effect image corresponding to the image to be processed, includes:
constructing a second affine transformation matrix from the point location of the template mouth region to the point location of the original mouth region;
Generating point positions of a mouth area in the tooth whitening effect map, and mapping the generated mouth area into the original mouth area by adopting the second affine transformation matrix to obtain a mapped tooth whitening effect map;
And restoring the mapped tooth whitening effect image to the image to be processed to generate the target effect image.
4. A method according to claim 3, wherein the restoring the mapped tooth whitening effect map into the image to be processed to generate the target effect image comprises:
and restoring the tooth whitening effect graph after mapping to the image to be processed by adopting a preset image restoration formula according to preset tooth whitening intensity parameters, so as to generate the target effect image.
5. The method according to claim 1, wherein the preset tooth whitening model is trained by:
processing each sample face image in a preset face data set by using a preset image generation model to generate a face tooth whitening generation map corresponding to the sample face image;
performing face key point detection on the sample face image to obtain point positions of a sample original mouth region in the sample face image and point positions of a sample generated mouth region in the face tooth whitening generation map;
mapping the sample original mouth region into a preset mouth region template map according to the point positions of the sample original mouth region, to generate a sample original mouth region image;
mapping the sample generated mouth region into the preset mouth region template map according to the point positions of the sample generated mouth region, to generate a sample generated mouth region image;
fusing the sample original mouth region image and the sample generated mouth region image to generate a sample mouth region effect map;
constructing a tooth whitening data pair from the sample original mouth region image and the sample mouth region effect map;
and training a model with the tooth whitening data pairs to obtain the preset tooth whitening model.
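Claim 5 is essentially a data-preparation recipe: a generative model supplies a whitened reference for each sample face, both mouths are normalized into the same template canvas, and the pair (original crop, fused effect crop) supervises a dedicated whitening network. A compressed sketch of that loop follows; the generator, landmark detector, warping and fusion helpers, and the eventual training step are all illustrative placeholders.

```python
def build_whitening_pairs(face_dataset, image_generator, detect_mouth_points,
                          warp_to_template, fuse_with_template):
    """Construct (input, target) tooth whitening data pairs for model training."""
    pairs = []
    for sample_face in face_dataset:
        # Whitened reference produced by the preset image generation model.
        generated_face = image_generator(sample_face)

        # Mouth point positions in the sample image and in the generated image.
        sample_pts = detect_mouth_points(sample_face)
        generated_pts = detect_mouth_points(generated_face)

        # Normalize both mouth regions into the preset template canvas.
        sample_mouth = warp_to_template(sample_face, sample_pts)
        generated_mouth = warp_to_template(generated_face, generated_pts)

        # Fuse to obtain the sample mouth region effect map (training target).
        effect_map = fuse_with_template(sample_mouth, generated_mouth)
        pairs.append((sample_mouth, effect_map))
    return pairs  # fed to the tooth whitening model's training loop
```

One plausible reason for pairing the original crop with the fused crop, rather than with the raw generated crop, is that fusion keeps non-tooth pixels identical on both sides of the pair, so the trained model learns to alter only the teeth.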
6. The method according to claim 5, wherein the mapping the sample original mouth region into a preset mouth region template map according to the point positions of the sample original mouth region to generate a sample original mouth region image comprises:
constructing a first sample affine transformation matrix from the point positions of the sample original mouth region to the point positions of the template mouth region in the preset mouth region template map;
and mapping the sample original mouth region into the preset mouth region template map by using the first sample affine transformation matrix according to the point positions of the sample original mouth region, to generate the sample original mouth region image;
and wherein the mapping the sample generated mouth region into the preset mouth region template map according to the point positions of the sample generated mouth region to generate the sample generated mouth region image comprises:
constructing a second sample affine transformation matrix from the point positions of the sample generated mouth region to the point positions of the template mouth region;
and mapping the sample generated mouth region into the preset mouth region template map by using the second sample affine transformation matrix according to the point positions of the sample generated mouth region, to generate the sample generated mouth region image.
7. The method according to claim 5, wherein the fusing the sample original mouth region image and the sample generated mouth region image to generate a sample mouth region effect map comprises:
fusing the sample original mouth region image and the sample generated mouth region image by using the preset mouth region template map, to generate the sample mouth region effect map.
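A natural reading of this claim is that the preset mouth region template map doubles as the fusion weight: where the template marks the teeth/mouth area, take pixels from the generated crop; elsewhere keep the original crop. A sketch under that assumption:

```python
import numpy as np

def fuse_mouth_regions(sample_mouth, generated_mouth, template_map):
    """Soft-blend the two aligned crops using the template map as the blend weight."""
    # template_map assumed to be a single-channel soft mask with values in [0, 255].
    w = (template_map.astype(np.float32) / 255.0)[..., None]
    fused = (w * generated_mouth.astype(np.float32)
             + (1.0 - w) * sample_mouth.astype(np.float32))
    return fused.astype(np.uint8)
```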
8. The method according to claim 5, wherein the processing each sample face image in the preset face data set by using the preset image generation model to generate the face tooth whitening generation map corresponding to the sample face image comprises:
processing the sample face image with the preset image generation model according to a preset generation prompt corresponding to the sample face image, to generate the face tooth whitening generation map.
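Neither the preset image generation model nor the prompt wording is named in the claim; an image-to-image diffusion pipeline driven by a whitening prompt is one plausible setup. The sketch below uses the Hugging Face diffusers img2img API purely as an example; the model identifier, prompt text, and strength value are assumptions.

```python
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Example model; any text-conditioned image-to-image generator could fill the same role.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")

def generate_whitened_face(sample_face_path,
                           prompt="a portrait photo, bright clean white teeth"):
    """Rewrite a sample face image under a whitening prompt (illustrative settings)."""
    face = Image.open(sample_face_path).convert("RGB")
    result = pipe(prompt=prompt, image=face, strength=0.35, guidance_scale=7.5)
    return result.images[0]
```

A low strength keeps the generated face close to the sample so that, after the mouth regions are aligned and fused, the training pair differs mainly in tooth color.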
9. A portrait tooth whitening special effect generating device, the device comprising:
an acquisition module, configured to acquire an image to be processed;
a detection module, configured to perform face key point detection on the image to be processed to obtain point positions of an original mouth region in the image to be processed;
a mapping module, configured to map the original mouth region into a preset mouth region template map according to the point positions of the original mouth region, to generate a mouth region image;
a processing module, configured to perform tooth whitening processing on the mouth region image by using a preset tooth whitening model to obtain a tooth whitening effect map;
and a generating module, configured to restore the tooth whitening effect map into the image to be processed, to generate a target effect image corresponding to the image to be processed.
10. An electronic device, comprising a processor and a storage medium, wherein the processor is communicatively connected to the storage medium through a bus, the storage medium stores program instructions executable by the processor, and the processor calls the program stored in the storage medium to execute the steps of the portrait tooth whitening special effect generating method according to any one of claims 1 to 8.
CN202410433035.8A 2024-04-11 2024-04-11 Portrait tooth whitening special effect generation method, device and equipment Pending CN118115397A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410433035.8A CN118115397A (en) 2024-04-11 2024-04-11 Portrait tooth whitening special effect generation method, device and equipment

Publications (1)

Publication Number Publication Date
CN118115397A true CN118115397A (en) 2024-05-31

Family

ID=91219554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410433035.8A Pending CN118115397A (en) 2024-04-11 2024-04-11 Portrait tooth whitening special effect generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN118115397A (en)

Similar Documents

Publication Publication Date Title
WO2021057848A1 (en) Network training method, image processing method, network, terminal device and medium
CN112989904B (en) Method for generating style image, method, device, equipment and medium for training model
US8509499B2 (en) Automatic face detection and identity masking in images, and applications thereof
CN111028213A (en) Image defect detection method and device, electronic equipment and storage medium
CN111383232B (en) Matting method, matting device, terminal equipment and computer readable storage medium
CN110827371B (en) Certificate generation method and device, electronic equipment and storage medium
CN112419170A (en) Method for training occlusion detection model and method for beautifying face image
EP3644599A1 (en) Video processing method and apparatus, electronic device, and storage medium
CN111611934A (en) Face detection model generation and face detection method, device and equipment
CN111445384A (en) Universal portrait photo cartoon stylization method
CN113393540B (en) Method and device for determining color edge pixel points in image and computer equipment
WO2023005743A1 (en) Image processing method and apparatus, computer device, storage medium, and computer program product
KR20160044203A (en) Matting method for extracting object of foreground and apparatus for performing the matting method
CN112651953A (en) Image similarity calculation method and device, computer equipment and storage medium
Sharma et al. Image recognition system using geometric matching and contour detection
CN113052783A (en) Face image fusion method based on face key points
KR102391087B1 (en) Image processing methods, devices and electronic devices
Arsic et al. Improved lip detection algorithm based on region segmentation and edge detection
CN116309494B (en) Method, device, equipment and medium for determining interest point information in electronic map
CN116485944A (en) Image processing method and device, computer readable storage medium and electronic equipment
KR20210108283A (en) A method of improving the quality of 3D images acquired from RGB-depth camera
CN118115397A (en) Portrait tooth whitening special effect generation method, device and equipment
CN110310341A (en) The generation method of default parameters, device, equipment and storage medium in color algorithm
CN114255193A (en) Board card image enhancement method, device, equipment and readable storage medium
CN113781330A (en) Image processing method, device and electronic system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination