CN111507907A - System, method and storage medium for execution on computing device - Google Patents
System, method and storage medium for execution on computing device
- Publication number
- CN111507907A (application number CN202010068688.2A)
- Authority
- CN
- China
- Prior art keywords
- digital image
- individual
- cosmetic effect
- effect
- interest
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T5/77—
- G06T5/94—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The invention provides a system, a method, and a storage medium for execution on a computing device. The method, executed on the computing device, comprises the following steps. The computing device obtains a digital image containing an individual and determines the illumination conditions of the content in the digital image. The computing device obtains a selection from a user that includes a makeup effect and determines the surface characteristics of the selected makeup effect. The computing device defines a region of interest corresponding to the makeup effect on the individual's face by using a face alignment technique. The computing device retrieves the lighting conditions in the region of interest and adjusts the visual characteristics of the makeup effect according to the surface characteristics of the makeup effect and the lighting conditions of the region of interest. The computing device then virtually applies the adjusted makeup effect to the region of interest in the digital image.
Description
Technical Field
The present invention relates to a system, method and storage medium for execution on a computing device, and more particularly, to a system and method for virtually applying makeup effects to digital images according to lighting conditions and surface characteristics of the makeup effects.
Background
Individuals in digital images are subject to different illumination conditions, so a vivid and natural result often cannot be obtained when a makeup effect is applied virtually. The prior art therefore still lacks an improved system and method for virtually applying various makeup effects under different illumination conditions.
Disclosure of Invention
The present invention provides a method executed on a computing device, which includes the following steps. The computing device obtains a digital image including an individual and determines an illumination condition of the content in the digital image. The computing device obtains a selection from a user that includes a makeup effect and determines surface characteristics of the selected makeup effect. The computing device defines a region of interest on a face of the individual corresponding to the makeup effect by using a face alignment technique. The computing device retrieves the lighting conditions in the area of interest and adjusts the visual characteristics of the makeup effect according to the surface characteristics of the makeup effect and the lighting conditions of the area of interest. The computing device virtually applies the adjusted makeup effect to the area of interest in the digital image.
Further, the step of determining the illumination condition in the digital image further comprises: at least one of an incident light angle on the individual in the digital image, a light intensity, and an incident light color on the individual in the digital image is evaluated.
Further, the step of evaluating at least one of an angle of incident light on the individual in the digital image, a light intensity, and a color of incident light on the individual in the digital image further comprises: comparing a shadow effect on the individual in the digital image with shadow effects in a plurality of predefined three-dimensional models, each predefined three-dimensional model having corresponding information related to the lighting conditions; identifying a best-matching three-dimensional model based on the comparison of the shadow effect on the individual in the digital image with the shadow effects of the plurality of predefined three-dimensional models; and obtaining the corresponding information related to the illumination condition from the best-matching three-dimensional model, wherein the corresponding information comprises at least one of the following: an incident light angle on the individual in the digital image, a light intensity, and an incident light color on the individual in the digital image.
Further, the step of adjusting the visual characteristic of the cosmetic effect according to the surface characteristic of the cosmetic effect and the lighting condition of the area of interest further comprises: adjusting a color of the makeup effect according to at least one of an incident light angle on the individual in the digital image, a light intensity, and an incident light color on the individual in the digital image.
Further, the color of the makeup effect is adjusted in units of pixels in the region of interest.
Still further, the surface characteristics of the selected cosmetic effect include at least one of: a diffuse reflectance characteristic of the selected cosmetic effect; a specular reflection characteristic of the selected cosmetic effect; and a transparency of the selected cosmetic effect.
Further, the surface characteristics of the selected cosmetic effects are predefined and stored in a database.
The present invention is also directed to a system executed on a computing device. The system includes a memory storing a plurality of instructions, and a processor coupled to the memory and configured by the plurality of instructions. The processor obtains a digital image including an individual and determines an illumination condition of the content in the digital image. The processor obtains a selection from a user containing a makeup effect and determines surface characteristics of the selected makeup effect. The processor defines a region of interest on a face of the individual corresponding to the makeup effect by using a face alignment technique. The processor retrieves the illumination conditions in the area of interest and adjusts the visual characteristics of the makeup effect according to the surface characteristics of the makeup effect and the illumination conditions in the area of interest. The processor virtually applies the adjusted makeup effect to the region of interest in the digital image.
Further, the processor determines the lighting condition of the content in the digital image by evaluating at least one of an angle of incident light on the individual in the digital image, a light intensity, and a color of incident light on the individual in the digital image.
Further, the instructions for the processor to evaluate at least one of an angle of incident light on the individual in the digital image, a light intensity, and a color of incident light on the individual in the digital image further comprise: comparing a shadow effect on the individual in the digital image with shadow effects in a plurality of predefined three-dimensional models, each predefined three-dimensional model having corresponding information related to the lighting conditions; identifying a best-matching three-dimensional model based on the comparison of the shadow effect on the individual in the digital image with the shadow effects of the plurality of predefined three-dimensional models; and obtaining the corresponding information related to the illumination condition from the best-matching three-dimensional model, wherein the corresponding information comprises at least one of the following: an incident light angle on the individual in the digital image, a light intensity, and an incident light color on the individual in the digital image.
Further, the instructions for the processor to adjust the visual characteristic of the cosmetic effect based on the surface characteristic of the cosmetic effect and the lighting condition of the area of interest include: adjusting a color of the makeup effect according to at least one of an incident light angle on the individual in the digital image, a light intensity, and an incident light color on the individual in the digital image.
Further, the processor adjusts the color of the makeup effect in units of pixels in the region of interest.
Still further, the surface characteristics of the selected cosmetic effect include at least one of: a diffuse reflectance characteristic of the selected cosmetic effect; a specular reflection characteristic of the selected cosmetic effect; and a transparency of the selected cosmetic effect.
Further, the surface characteristics of the selected cosmetic effects are predefined and stored in a database.
The present invention is further directed to a non-transitory computer readable storage medium. The non-transitory computer readable storage medium stores instructions for execution on a computing device having a processor. When the processor executes the instructions, the computing device obtains a digital image including an individual and determines a lighting condition of the content in the digital image. The computing device obtains a selection from a user that includes a makeup effect and determines surface characteristics of the selected makeup effect. The computing device defines a region of interest on a face of the individual corresponding to the makeup effect by using a face alignment technique. The computing device retrieves the lighting conditions in the area of interest and adjusts the visual characteristics of the makeup effect according to the surface characteristics of the makeup effect and the lighting conditions of the area of interest. The computing device virtually applies the adjusted makeup effect to the area of interest in the digital image.
Further, the processor determines the lighting condition in the digital image by evaluating at least one of an angle of incident light on the individual in the digital image, a light intensity, and a color of incident light on the individual in the digital image.
Further, the step of the processor evaluating at least one of an angle of incident light on the individual in the digital image, a light intensity, and a color of incident light on the individual in the digital image further comprises: comparing a shadow effect on the individual in the digital image with shadow effects in a plurality of predefined three-dimensional models, each predefined three-dimensional model having corresponding information related to the lighting conditions; identifying a best-matching three-dimensional model based on the comparison of the shadow effect on the individual in the digital image with the shadow effects of the plurality of predefined three-dimensional models; and obtaining the corresponding information related to the illumination condition from the best-matching three-dimensional model, wherein the corresponding information comprises at least one of the following: an incident light angle on the individual in the digital image, a light intensity, and an incident light color on the individual in the digital image.
Further, the instructions for the processor to adjust the visual characteristic of the cosmetic effect based on the surface characteristic of the cosmetic effect and the lighting condition of the area of interest include: adjusting a color of the makeup effect according to at least one of an incident light angle on the individual in the digital image, a light intensity, and an incident light color on the individual in the digital image.
Further, the processor adjusts the color of the makeup effect in units of pixels in the region of interest.
Still further, the surface characteristics of the selected cosmetic effect include at least one of: a diffuse reflectance characteristic of the selected cosmetic effect; a specular reflection characteristic of the selected cosmetic effect; and a transparency of the selected cosmetic effect.
For a better understanding of the features and technical content of the present invention, reference should be made to the following detailed description of the invention and accompanying drawings, which are provided for purposes of illustration and description only and are not intended to limit the invention.
Drawings
For a more complete understanding of the present invention, reference is now made to the following figures. The components in the figures are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present invention. Moreover, in the various figures, like reference numerals designate corresponding parts throughout the several views.
FIG. 1 is a block diagram illustrating a computing device virtually applying makeup effects based on lighting conditions and surface characteristics of the makeup effects in various embodiments of the invention.
FIG. 2 illustrates a schematic diagram of the computing device of FIG. 1 in various embodiments of the invention.
FIG. 3 illustrates a top-level flow diagram of functionality performed by the computing device of FIG. 1 to virtually apply a cosmetic effect based on lighting conditions and surface characteristics of the cosmetic effect, in various embodiments of the present invention.
FIG. 4 illustrates an exemplary digital image obtained by the computing device of FIG. 1 according to the present invention, wherein a shadow effect is displayed on part of an individual's face in the digital image.
FIG. 5 illustrates an exemplary user interface displayed by the computing device of FIG. 1 that includes an effects toolbar for a user to select a favorite cosmetic effect in accordance with various embodiments of the present invention.
FIG. 6 illustrates an example of a predefined three-dimensional model that includes various shading effects and that may be utilized by the computing device of FIG. 1 in various embodiments of the invention.
FIG. 7 illustrates an example of the computing device of FIG. 1 identifying a three-dimensional model that best matches the digital image of FIG. 4 in various embodiments of the invention.
FIG. 8 illustrates an exemplary application of a facial alignment technique to the digital image of FIG. 4 by the computing device of FIG. 1 to define a region of interest corresponding to a cosmetic effect in various embodiments of the present invention.
FIG. 9 illustrates an exemplary surface characteristic of a selected cosmetic effect in various embodiments of the present invention.
FIG. 10 illustrates an exemplary embodiment of the computing device of FIG. 1 adjusting the visual characteristics of a selected cosmetic effect based on the surface characteristics of the cosmetic effect and the lighting conditions of the area of interest to produce a modified cosmetic effect, in accordance with various embodiments of the present invention.
Fig. 11 illustrates an example of adjusting a makeup effect in units of pixels in a region of interest in various embodiments of the present invention.
FIG. 12 shows an example of how the computing device of FIG. 1 may identify the best-matching three-dimensional model in various embodiments of the invention.
Detailed Description
The following is a description of how to virtually apply a makeup effect according to the lighting conditions and the surface characteristics of the makeup effect, by way of specific examples. In various embodiments, the lighting conditions surrounding the individual in the digital image and the surface characteristics of the cosmetic effect to be applied to the face of the individual are analyzed, the visual characteristics of the cosmetic effect are adjusted based on the analysis results, and the adjusted cosmetic effect is applied to the face of the individual to provide a more realistic and natural presentation.
A system for virtually applying a makeup effect according to the lighting conditions and the surface characteristics of the makeup effect will first be described in detail, followed by the operation of the components within the system. FIG. 1 is a block diagram of an embodiment of the computing device 102. The computing device 102 may be implemented as, for example, but not limited to, a smart phone, a tablet computing device, a notebook computer, or the like.
A virtual makeup application device 104 executes on a processor of the computing device 102, the virtual makeup application device 104 including a content analyzer 106, a makeup effect component 108, a target area analyzer 110, and a virtual effect applicator 112. The content analyzer 106 is configured to obtain a digital image including an individual and to determine lighting conditions of content in the digital image.
As known to those skilled in the art, the digital image may be encoded in file formats such as, but not limited to, JPEG (Joint Photographic Experts Group), TIFF (Tagged Image File Format), PNG (Portable Network Graphics), GIF (Graphics Interchange Format), BMP (bitmap), or other types of digital file formats. The digital image may also be derived from a still frame of a video encoded in formats such as, but not limited to, MPEG-1 (Motion Picture Experts Group-1), MPEG-2, MPEG-4, H.264, 3GPP (Third Generation Partnership Project), 3GPP-2, SD-Video (Standard-Definition Video), HD-Video (High-Definition Video), DVD (Digital Versatile Disc) multimedia, or other digital video formats.
In some embodiments, the content analyzer 106 determines the lighting conditions in the digital image by evaluating one or more parameters, such as the incident light angle on the individual in the digital image, the light intensity, and the incident light color on the individual. These parameters (e.g., color, intensity) may then be used to adjust the makeup effect before the adjusted makeup effect is applied to the face of the individual. In some embodiments, the lighting conditions of the digital image may be evaluated by comparing shadow effects on the individual in the digital image with shadow effects in a plurality of predefined three-dimensional models 118, the three-dimensional models 118 being stored in a database 116.
Each three-dimensional model 118 carries corresponding information relating to lighting conditions, such as the angle of incident light on the individual, the light intensity, and the light color. The content analyzer 106 may identify the best-matching three-dimensional model 118 and evaluate the lighting conditions using that model's corresponding information. How the content analyzer 106 identifies the best-matching three-dimensional model 118 is described later in conjunction with FIG. 12.
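The model-matching step described above can be sketched as follows. This is a minimal illustration, assuming the shadow effects are available as grayscale shading maps and that each predefined model stores its lighting parameters (angle, intensity, color) in a dictionary; the function and variable names are hypothetical, not from the patent:

```python
import numpy as np

def estimate_lighting(face_shading, model_shadings, model_lighting_info):
    """Identify the predefined 3D model whose shadow map best matches the
    shading observed on the face, then return that model's stored lighting
    parameters (incident light angle, intensity, color)."""
    best_idx, best_score = 0, float("inf")
    for idx, model_shading in enumerate(model_shadings):
        # Sum of squared differences between the two shadow maps.
        score = float(np.sum((face_shading - model_shading) ** 2))
        if score < best_score:
            best_idx, best_score = idx, score
    return model_lighting_info[best_idx]
```

A production system would first align the face to each model's pose before comparing, but the core idea is a nearest-neighbor search over precomputed shadow renderings.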
The target area analyzer 110 is configured to define a region of interest on the face of the individual corresponding to the makeup effect by using a face alignment technique. For example, if the selected makeup effect includes a lipstick effect, the region of interest will be located on the individual's lips. The target area analyzer 110 is further configured to retrieve the lighting conditions in the region of interest, and the makeup effect component 108 may then adjust the visual characteristics of the makeup effect according to the surface characteristics of the makeup effect and the lighting conditions of the region of interest. Since the lighting conditions may differ from pixel to pixel within the region of interest, the makeup effect component 108 uses the pixel as the basic unit for adjusting the makeup effect.
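A minimal sketch of mapping a selected effect to a region of interest, assuming a face-alignment library has already produced a list of (x, y) landmark points. The landmark indices per effect below are illustrative placeholders (loosely modeled on a 68-point landmark scheme), not values from the patent:

```python
import numpy as np

# Hypothetical landmark indices per makeup effect; a real face-alignment
# library would supply the landmark positions for a given face.
EFFECT_LANDMARKS = {
    "lipstick": range(48, 68),   # lip contour points (illustrative)
    "blush": [1, 2, 3, 31, 4],   # cheek region points (illustrative)
}

def region_of_interest(landmarks, effect):
    """Return a bounding box (x0, y0, x1, y1) enclosing the landmarks
    associated with the selected makeup effect."""
    pts = np.array([landmarks[i] for i in EFFECT_LANDMARKS[effect]])
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    return int(x0), int(y0), int(x1), int(y1)
```

In practice the region would be a soft mask following the landmark contour rather than a rectangle, but a bounding box suffices to show the mapping from effect type to facial region.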
FIG. 2 shows an example of the computing device 102 of FIG. 1. The computing device 102 may be implemented as any of a variety of wired or wireless computing devices, such as a desktop computer, a portable computer, a dedicated server computer, a multiprocessor computing device, a smart phone, or a tablet. Referring to FIG. 2, the computing device 102 includes a memory 214, a processing device 202, a plurality of input/output interfaces (I/O interfaces) 204, a network interface 206, a display 208, a peripheral interface 211, and a mass storage 226, each connected via a local data bus 210.
The processing device 202 may include any custom-made or commercially available processor, a central processing unit (CPU) or a coprocessor among several processors of the computing device 102, a semiconductor-based microprocessor (in the form of a microchip), a macroprocessor, one or more application-specific integrated circuits (ASICs), a plurality of suitably configured digital logic gates, or other common electronic configurations comprising a plurality of discrete components, which coordinate the overall operation of the computing system either independently or in various combinations.
The memory 214 may include any one of volatile memory elements or nonvolatile memory elements. For example, the volatile memory elements include random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). The nonvolatile memory elements can be read-only memory (ROM), a hard disk, magnetic tape, or compact disc read-only memory (CD-ROM). The memory 214 generally includes a native operating system 216 and one or more native applications, emulation systems, or emulated applications for any kind of operating system and/or emulated hardware platform or emulated operating system. For example, the aforementioned applications (i.e., native applications or emulated applications) may include specific software comprising some or all of the components of the computing device 102 in FIG. 1. In such embodiments, the components are stored in the memory 214 and executed by the processing device 202, such that the processing device 202 may perform the operations/functions disclosed herein. The components in the memory 214 are well known to those skilled in the art, and therefore some of them are not described in detail for the sake of brevity. In some embodiments, the components of the computing device 102 may be implemented in hardware and/or software.
For example, when the computing device 102 comprises a personal computer, the aforementioned components may be connected to one or more of the input/output interfaces 204, such as a keyboard and a mouse, as shown in FIG. 2. The display 208 may comprise a computer monitor, a plasma screen of a personal computer, a liquid crystal display (LCD) of a handheld device, a touch screen, or other display device.
In the present disclosure, a non-transitory computer readable medium stores a program for use by or in connection with an instruction execution system, apparatus, or device. More specifically, specific examples of the computer-readable medium can include, but are not limited to, a portable computer diskette, a random access memory, a read-only memory, an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory), and a portable compact disc read-only memory (CD-ROM).
Referring to FIG. 3, FIG. 3 illustrates a flow chart 300 for the computing device 102 of FIG. 1 to virtually apply a cosmetic effect based on lighting conditions and surface characteristics of the cosmetic effect, in various embodiments of the invention. The flow diagram 300 of FIG. 3 is merely an example of different types of functional layouts that may be implemented by various components of the computing device 102 of FIG. 1. In other words, the flowchart 300 of fig. 3 may be considered to describe one or more embodiments of steps in performing a method of the computing device 102.
Although a particular order of execution is disclosed in the flowchart 300 of FIG. 3, this order is merely to aid in understanding the present invention, and the actual order of operation may vary from that described. For example, the order of execution of two or more blocks may be adjusted, reversed, or otherwise combined. Also, several blocks shown in FIG. 3 in sequential order may be performed simultaneously or partially simultaneously. Such modifications and alterations remain within the scope of the present disclosure.
At block 310, the computing device 102 obtains a digital image including the individual. At block 320, the computing device 102 determines the lighting conditions of the content in the digital image. In some embodiments, the computing device 102 determines the lighting condition of the digital image by evaluating the angle of incident light on the individual in the digital image, the intensity of the light, and/or the color of incident light on the individual in the digital image. In some embodiments, the computing device 102 may also evaluate the lighting conditions by comparing the shadow effects displayed on the digital image with various shadow effects in the predefined three-dimensional model. Each three-dimensional model has corresponding information related to lighting conditions.
The computing device 102 may then identify the most conforming three-dimensional model by comparing the shadow effects displayed on the digital image with various shadow effects in the predefined three-dimensional model. The computing device 102 then obtains corresponding information related to the lighting conditions from the most conforming three-dimensional model. In some embodiments, the correspondence information may include: the angle of incident light on the individual in the digital image, the intensity of the light, and/or the color of incident light on the individual in the digital image.
In block 330, the computing device 102 obtains a selection from the user that includes a cosmetic effect. At block 340, the computing device 102 determines the surface characteristics of the selected cosmetic effect. In some embodiments, the surface characteristics of the selected cosmetic effect include: a diffuse reflective characteristic of the selected cosmetic effect, a specular reflective characteristic of the selected cosmetic effect, and/or a transparency of the selected cosmetic effect. In some embodiments, the surface characteristics of the selected cosmetic effects may be predefined and stored in database 116 of FIG. 1. In block 350, the computing device 102 defines a region of interest on a face of the individual corresponding to the cosmetic effect by using a face alignment technique. At block 360, the computing device 102 retrieves lighting conditions for the area of interest.
In block 370, the computing device 102 adjusts the visual characteristics of the makeup effect based on the surface characteristics of the makeup effect and the lighting conditions of the area of interest. In some embodiments, the computing device 102 adjusts the color of the makeup effect based on the angle of incident light on the individual in the digital image, the intensity of light, and/or the color of incident light on the individual in the digital image. In some embodiments, the computing device 102 adjusts the color of the makeup effect in units of pixels in the area of interest. In block 380, the computing device 102 virtually applies the adjusted makeup effect to the area of interest in the digital image. The process of FIG. 3 then ends.
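The per-pixel adjustment of blocks 370 and 380 might look like the following sketch, assuming the lighting condition has been reduced to an incident light color and a per-pixel intensity map over the region of interest. The names and the exact modulation rule are assumptions for illustration, not the patent's disclosed formula:

```python
import numpy as np

def adjust_effect_color(effect_rgb, light_color, light_intensity_map):
    """Modulate the makeup effect's base color pixel by pixel: each pixel
    of the region of interest is scaled by the local light intensity and
    tinted by the incident light color (all values in [0, 1])."""
    h, w = light_intensity_map.shape
    base = np.broadcast_to(np.asarray(effect_rgb, float), (h, w, 3))
    tint = np.asarray(light_color, float)
    # Per-pixel modulation: base color x light color x local intensity.
    adjusted = base * tint * light_intensity_map[..., None]
    return np.clip(adjusted, 0.0, 1.0)
```

The resulting per-pixel color field would then be alpha-blended onto the region of interest to complete block 380.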
For a further description of the various aspects of the invention, reference is made to the following drawings. FIG. 4 shows the computing device 102 of FIG. 1 obtaining digital image 402, in which a shadow effect is shown on part of the individual's face. As will be described further below, the selected makeup effect is adjusted according to its surface characteristics and the lighting conditions of digital image 402 before being applied to the individual's face.
FIG. 5 illustrates an example of the user interface 502 displayed by the computing device 102 of FIG. 1. The user interface 502 displays a makeup effects toolbar 504, and the makeup effects toolbar 504 may provide the user with the option of selecting a favorite makeup effect. In fig. 5, a user has selected a blush effect to apply to digital image 402.
As previously described, the content analyzer 106 of FIG. 1 determines the illumination condition of the digital image 402 by evaluating the angle, intensity, and/or color of the light incident on the individual in the digital image. In some embodiments, this information may be evaluated by comparing shadow effects on the individual in the digital image with the various shadow effects in the predefined three-dimensional models 118 shown in FIG. 6. For convenience of explanation, six three-dimensional models 118 with different lighting conditions are shown in FIG. 6; however, the types of lighting conditions are not limited to these, and three-dimensional models 118 with other lighting conditions may be stored in the database 116. Each three-dimensional model 118 has corresponding information related to lighting conditions, and different corresponding information may result in different shadow effects. Referring to FIG. 7, the content analyzer 106 of FIG. 1 can identify, among the predefined three-dimensional models, the three-dimensional model 118 whose shadow effect best matches the shadow effect displayed on the individual in the digital image. The content analyzer 106 then retrieves the corresponding information related to the lighting conditions from the best-matching three-dimensional model 118.
Referring to FIG. 8, the target region analyzer 110 of FIG. 1 uses a face alignment technique to define regions of interest 802, 804 on the face of the individual in the digital image 402 corresponding to the makeup effect. In the example of FIG. 8, the user has selected a blush effect, so the target region analyzer 110 defines the individual's cheeks as the regions of interest 802, 804.
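Purely as an illustrative sketch (not part of the claimed embodiments), the region-of-interest step above can be approximated as follows. The sketch assumes a face-alignment detector has already produced landmark coordinates; the landmark indices and positions shown are hypothetical stand-ins for detector output.

```python
# Illustrative sketch only: deriving cheek regions of interest from
# face-alignment landmarks. The landmark indices and coordinates are
# hypothetical stand-ins for the output of a real landmark detector.

def region_of_interest(landmarks, indices, margin=5):
    """Axis-aligned bounding box (x0, y0, x1, y1) around selected landmarks."""
    xs = [landmarks[i][0] for i in indices]
    ys = [landmarks[i][1] for i in indices]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

# Hypothetical landmark positions bordering the two cheeks.
landmarks = {1: (100, 200), 2: (105, 230), 31: (160, 210),
             15: (300, 200), 14: (295, 230), 35: (240, 210)}

left_cheek = region_of_interest(landmarks, [1, 2, 31])     # e.g. blush region 802
right_cheek = region_of_interest(landmarks, [15, 14, 35])  # e.g. blush region 804
```

A real system would obtain the landmarks from a face-alignment model and could use a contour or mask rather than a bounding box; the box form merely shows how a selected effect maps to a facial region.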
As described above, the visual characteristics of the selected makeup effect are adjusted according to the lighting conditions of the region of interest and the surface characteristics of the selected makeup effect, and the adjusted makeup effect is then applied to the region of interest. Referring to FIG. 9, the surface characteristics of the selected makeup effect may include, but are not limited to, a diffuse reflectance characteristic 902 of the selected makeup effect, a specular reflection characteristic 904 of the selected makeup effect, and/or a transparency of the selected makeup effect.
The diffuse reflectance characteristic 902 of a selected makeup effect generally describes how an incident light ray is scattered at many angles after striking a surface. The specular reflection characteristic 904 of a selected makeup effect generally describes how an incident light ray is reflected mirror-like in a single outgoing direction. Transparency generally describes the degree to which a material allows light to pass through without scattering it. Another possible surface characteristic is subsurface scattering, in which light penetrates a translucent object, interacts with the material, and exits the surface at various angles.
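The interplay of these surface characteristics can be sketched with a simple Phong-style shading computation. This is an illustrative approximation only, not the patented method; the function name, coefficients, and vector conventions are assumptions for demonstration.

```python
# Illustrative Phong-style sketch of how diffuse reflectance, specular
# reflection, and transparency jointly shape a makeup layer's appearance.
# All names and coefficients are assumptions for demonstration.

def shade(base_color, light_dir, normal, view_dir,
          diffuse_k, specular_k, shininess, transparency):
    """Per-channel intensity of a makeup layer under a single light:
    a Lambertian diffuse term plus a Phong specular term, attenuated
    by the layer's transparency. Direction vectors are unit length."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n_dot_l = max(dot(normal, light_dir), 0.0)
    # Mirror-reflection vector R = 2(N.L)N - L for the specular term.
    r = [2.0 * n_dot_l * n - l for n, l in zip(normal, light_dir)]
    spec = max(dot(r, view_dir), 0.0) ** shininess
    return [min(1.0, (1.0 - transparency) *
                (diffuse_k * c * n_dot_l + specular_k * spec))
            for c in base_color]
```

With light, normal, and view all aligned, a red layer gains a white-ish specular highlight on its green and blue channels, while full transparency removes the layer's contribution entirely.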
Referring to FIG. 10, the makeup effect component 108 adjusts the visual characteristics of the selected makeup effect according to the surface characteristics 120 of the selected makeup effect and the lighting conditions 1004 of the region of interest to generate an adjusted makeup effect 1006. The virtual effect applicator 112 then virtually applies the adjusted makeup effect 1006 to the regions of interest 1012, 1014 of the digital image 402. Notably, the makeup effect component 108 adjusts the visual characteristics of the selected makeup effect on a per-pixel basis within each region of interest. In the digital image 402 shown in FIG. 4, one side of the face is largely covered by a shadow effect; the visual characteristics of the selected makeup effect (e.g., the blush effect) therefore differ between the two regions of interest 1012 and 1014.
Further, FIG. 11 illustrates how, in various embodiments of the invention, the makeup effect is adjusted on a per-pixel basis within a region of interest 1104. In the example shown in FIG. 11, the selected makeup effect is a lipstick effect, so the target region analyzer 110 of FIG. 1 defines a region of interest 1104 that includes the lips of the individual. As shown in FIG. 11, part of the individual's lips lies under a shadow 1102; because the visual characteristics of the lipstick effect are adjusted pixel by pixel, they vary across the region of interest 1104. After the visual characteristics of the lipstick effect are adjusted, the adjusted lipstick effect is applied to the region of interest 1104.
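A minimal sketch of such per-pixel adjustment follows, assuming each pixel's own luminance stands in for the captured lighting condition; the function name, opacity value, and luminance stand-in are illustrative assumptions, not the claimed implementation.

```python
# Minimal per-pixel sketch: the effect color is scaled by each pixel's
# luminance, so the applied effect darkens inside shadowed areas. The
# function name, opacity, and luminance stand-in are illustrative.

def apply_effect_per_pixel(region, effect_rgb, opacity=0.6):
    """Blend effect_rgb into every pixel of region (rows of RGB tuples),
    modulating the effect by the pixel's BT.601 luminance."""
    out = []
    for row in region:
        out_row = []
        for (r, g, b) in row:
            lum = (0.299 * r + 0.587 * g + 0.114 * b) / 255.0
            out_row.append(tuple(
                int((1 - opacity) * c + opacity * e * lum)
                for c, e in zip((r, g, b), effect_rgb)))
        out.append(out_row)
    return out
```

Under this scheme a fully shadowed pixel receives essentially none of the effect color while a brightly lit pixel receives it at full strength, mirroring how the lipstick effect varies across the partly shadowed region of interest 1104.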
FIG. 12 shows an example of how the computing device 102 of FIG. 1 identifies the best-fitting three-dimensional model in various embodiments of the invention. In some embodiments, the content analyzer 106 executing on the computing device 102 (FIG. 1) analyzes the individual in the digital image 402 and identifies the three-dimensional model 118 that best fits the digital image 402 according to the lighting conditions or shadow effects of the three-dimensional models 118. As shown in FIG. 6, each predefined three-dimensional model 118 has different lighting conditions.
As shown in FIG. 12, the content analyzer 106 first converts the digital image 402 of the individual into a luminance-only image 1202 and then constructs a three-dimensional mesh model 1204 from the luminance-only image 1202. For each predefined three-dimensional model 118, the content analyzer 106 processes the three-dimensional mesh model 1204 associated with the individual and derives a plurality of vertices (1 through n) from corresponding points on the three-dimensional mesh model of the currently processed three-dimensional model 118. Specifically, the content analyzer 106 compares the luminance values l1 through ln at the image points corresponding to vertices 1 through n to determine the luminance correlation between image A and image B, where image A corresponds to the three-dimensional mesh model 1204 associated with the individual and image B corresponds to the three-dimensional mesh model of the currently processed three-dimensional model 118. The three-dimensional model 118 whose luminance values l1 through ln exhibit the highest similarity to those of the individual's three-dimensional mesh model 1204 is determined to be the best-fitting three-dimensional model 118.
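The luminance-correlation comparison described above can be sketched as follows, using Pearson correlation over per-vertex luminance values l1 through ln; the helper names and the choice of Pearson correlation as the similarity measure are assumptions for illustration.

```python
# Illustrative sketch of the best-fit search: per-vertex luminance values
# from the individual's mesh are compared against each predefined model
# using Pearson correlation (the similarity measure here is an assumption).
import math

def pearson(a, b):
    """Pearson correlation between two equal-length luminance vectors."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (sd_a * sd_b)

def best_fit_model(individual_lum, model_lums):
    """Index of the predefined model whose vertex luminances l1..ln
    correlate most strongly with the individual's mesh luminances."""
    scores = [pearson(individual_lum, m) for m in model_lums]
    return max(range(len(scores)), key=lambda i: scores[i])
```

In practice n would be large and the luminance vectors would be sampled from the rendered mesh models; short vectors suffice here to show the selection logic.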
The foregoing disclosure describes only preferred embodiments of the invention and is not intended to limit the scope of the claims; accordingly, all equivalent technical variations and modifications made using the contents of the specification and drawings fall within the scope of the claims.
Claims (20)
1. A method executed on a computing device, the method comprising:
obtaining a digital image containing an individual;
determining a lighting condition of content in the digital image;
obtaining, from a user, a selection comprising a cosmetic effect;
determining a surface characteristic of the selected cosmetic effect;
defining a region of interest on a face of the individual corresponding to the cosmetic effect by using a face alignment technique;
capturing the lighting condition in the region of interest;
adjusting a visual characteristic of the cosmetic effect as a function of the surface characteristic of the cosmetic effect and the lighting condition of the region of interest; and
virtually applying the adjusted cosmetic effect to the region of interest in the digital image.
2. The method of claim 1, wherein determining the lighting condition in the digital image further comprises: evaluating at least one of an angle, an intensity, and a color of light incident on the individual in the digital image.
3. The method of claim 2, wherein evaluating at least one of an angle, an intensity, and a color of light incident on the individual in the digital image further comprises:
comparing a shadow effect on the individual in the digital image with shadow effects in a plurality of predefined three-dimensional models, each of the predefined three-dimensional models having corresponding information related to lighting conditions;
identifying a best-fitting three-dimensional model by comparing the shadow effect on the individual in the digital image with the shadow effects of the plurality of predefined three-dimensional models; and
obtaining the corresponding information related to the lighting conditions of the best-fitting three-dimensional model, wherein the corresponding information comprises at least one of an angle, an intensity, and a color of light incident on the individual in the digital image.
4. The method of claim 2, wherein adjusting the visual characteristic of the cosmetic effect based on the surface characteristic of the cosmetic effect and the lighting condition of the region of interest further comprises:
adjusting a color of the cosmetic effect according to at least one of an angle, an intensity, and a color of light incident on the individual in the digital image.
5. The method of claim 4, wherein the color of the cosmetic effect is adjusted on a per-pixel basis within the region of interest.
6. The method of claim 1, wherein the surface characteristics of the selected cosmetic effect comprise at least one of:
a diffuse reflectance characteristic of the selected cosmetic effect;
a specular reflection characteristic of the selected cosmetic effect; and
a transparency of the selected cosmetic effect.
7. The method of claim 1, wherein the surface characteristics of the selected cosmetic effect are predefined and stored in a database.
8. A system executing on a computing device, the system comprising:
a memory storing a plurality of instructions; and
a processor coupled to the memory and configured to execute the plurality of instructions, the plurality of instructions comprising:
obtaining a digital image containing an individual;
determining a lighting condition of content in the digital image;
obtaining, from a user, a selection comprising a cosmetic effect;
determining a surface characteristic of the selected cosmetic effect;
defining a region of interest on a face of the individual corresponding to the cosmetic effect by using a face alignment technique;
capturing the lighting condition in the region of interest;
adjusting a visual characteristic of the cosmetic effect as a function of the surface characteristic of the cosmetic effect and the lighting condition of the region of interest; and
virtually applying the adjusted cosmetic effect to the region of interest in the digital image.
9. The system of claim 8, wherein the processor determines the lighting condition of the content in the digital image by evaluating at least one of an angle, an intensity, and a color of light incident on the individual in the digital image.
10. The system of claim 9, wherein the instructions for the processor to evaluate at least one of an angle, an intensity, and a color of light incident on the individual in the digital image further comprise:
comparing a shadow effect on the individual in the digital image with shadow effects in a plurality of predefined three-dimensional models, each of the predefined three-dimensional models having corresponding information related to lighting conditions;
identifying a best-fitting three-dimensional model by comparing the shadow effect on the individual in the digital image with the shadow effects of the plurality of predefined three-dimensional models; and
obtaining the corresponding information related to the lighting conditions of the best-fitting three-dimensional model, wherein the corresponding information comprises at least one of an angle, an intensity, and a color of light incident on the individual in the digital image.
11. The system of claim 9, wherein the instructions for the processor to adjust the visual characteristic of the cosmetic effect based on the surface characteristic of the cosmetic effect and the lighting condition of the region of interest comprise:
adjusting a color of the cosmetic effect according to at least one of an angle, an intensity, and a color of light incident on the individual in the digital image.
12. The system of claim 11, wherein the processor adjusts the color of the cosmetic effect on a per-pixel basis within the region of interest.
13. The system of claim 8, wherein the surface characteristics of the selected cosmetic effect include at least one of:
a diffuse reflectance characteristic of the selected cosmetic effect;
a specular reflection characteristic of the selected cosmetic effect; and
a transparency of the selected cosmetic effect.
14. The system of claim 8, wherein the surface characteristics of the selected cosmetic effect are predefined and stored in a database.
15. A non-transitory computer-readable storage medium storing instructions executable by a computing device having a processor, wherein the instructions, when executed by the processor, cause the computing device at least to perform:
obtaining a digital image containing an individual;
determining a lighting condition of content in the digital image;
obtaining, from a user, a selection comprising a cosmetic effect;
determining a surface characteristic of the selected cosmetic effect;
defining a region of interest on a face of the individual corresponding to the cosmetic effect by using a face alignment technique;
capturing the lighting condition in the region of interest;
adjusting a visual characteristic of the cosmetic effect as a function of the surface characteristic of the cosmetic effect and the lighting condition of the region of interest; and
virtually applying the adjusted cosmetic effect to the region of interest in the digital image.
16. The non-transitory computer-readable storage medium of claim 15, wherein the processor determines the lighting condition in the digital image by evaluating at least one of an angle, an intensity, and a color of light incident on the individual in the digital image.
17. The non-transitory computer-readable storage medium of claim 16, wherein evaluating, by the processor, at least one of an angle, an intensity, and a color of light incident on the individual in the digital image further comprises:
comparing a shadow effect on the individual in the digital image with shadow effects in a plurality of predefined three-dimensional models, each of the predefined three-dimensional models having corresponding information related to lighting conditions;
identifying a best-fitting three-dimensional model by comparing the shadow effect on the individual in the digital image with the shadow effects of the plurality of predefined three-dimensional models; and
retrieving the corresponding information related to the lighting conditions of the best-fitting three-dimensional model, the corresponding information comprising at least one of an angle, an intensity, and a color of light incident on the individual in the digital image.
18. The non-transitory computer-readable storage medium of claim 16, wherein the instructions for the processor to adjust the visual characteristic of the cosmetic effect based on the surface characteristic of the cosmetic effect and the lighting condition of the region of interest comprise:
adjusting a color of the cosmetic effect according to at least one of an angle, an intensity, and a color of light incident on the individual in the digital image.
19. The non-transitory computer-readable storage medium of claim 18, wherein the processor adjusts the color of the cosmetic effect on a per-pixel basis within the region of interest.
20. The non-transitory computer readable storage medium of claim 15, wherein the surface characteristics of the selected cosmetic effect include at least one of:
a diffuse reflectance characteristic of the selected cosmetic effect;
a specular reflection characteristic of the selected cosmetic effect; and
a transparency of the selected cosmetic effect.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
US201962798784P | 2019-01-30 | 2019-01-30 |
US62/798,784 | 2019-01-30 | |
Publications (2)
Publication Number | Publication Date
---|---
CN111507907A | 2020-08-07
CN111507907B | 2023-05-30
Family
ID=71863898
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202010068688.2A | System, method and storage medium for execution on computing device | 2019-01-30 | 2020-01-21
Country Status (1)
Country | Link
---|---
CN | CN111507907B
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US20060233426A1 | 2002-04-12 | 2006-10-19 | Agency For Science, Technology | Robust face registration via multiple face prototypes synthesis
CN101371272A | 2006-01-17 | 2009-02-18 | 株式会社资生堂 | Makeup simulation system, makeup simulation device, makeup simulation method and makeup simulation program
US9449412B1 | 2012-05-22 | 2016-09-20 | Image Metrics Limited | Adaptive, calibrated simulation of cosmetic products on consumer devices
CN107273837A | 2017-06-07 | 2017-10-20 | 广州视源电子科技股份有限公司 | Method and system for virtual makeup
CN109191569A | 2018-09-29 | 2019-01-11 | 深圳阜时科技有限公司 | Simulated makeup apparatus, simulated makeup method and device
Also Published As
Publication number | Publication date
---|---
CN111507907B | 2023-05-30
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |