WO2018223561A1 - Method and system for virtual makeup - Google Patents

Method and system for virtual makeup

Info

Publication number
WO2018223561A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
feature
point set
target
real
Prior art date
Application number
PCT/CN2017/103586
Other languages
English (en)
French (fr)
Inventor
赖振奇
Original Assignee
广州视源电子科技股份有限公司
广州睿鑫电子科技有限公司
Priority date
Filing date
Publication date
Application filed by 广州视源电子科技股份有限公司, 广州睿鑫电子科技有限公司
Publication of WO2018223561A1 publication Critical patent/WO2018223561A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to a method and system for virtual makeup.
  • Makeup is an important part of daily life. For consumers, effectively choosing suitable cosmetics among many products has become a key concern. Traditionally, consumers go to a physical store and repeatedly try on makeup until they find a satisfactory product. To make applying makeup more convenient for users, the technique of virtual makeup has emerged.
  • Current virtual makeup technology mainly analyzes acquired static images, extracts facial features from the static images, and combines makeup features with the facial features to synthesize a makeup look, providing users with a virtual preview of the makeup.
  • a method of virtual makeup comprising the following steps: acquiring a real-time image of the makeup object; recognizing the facial features in the real-time image and obtaining the facial feature region; acquiring color data of the other regions and obtaining their color difference range; receiving a coloring instruction, obtaining an initial coloring value, and calculating a target coloring value; and rendering the facial feature region with the target coloring value to obtain a virtual makeup image;
  • a virtual makeup system comprising:
  • An image acquisition unit configured to acquire a real-time image of the makeup object
  • a face recognition unit configured to identify a facial feature in a real-time image, and obtain a facial feature region
  • a color difference obtaining unit configured to acquire color data of other regions except the face feature region in the real-time image, and obtain color difference ranges of other regions according to the color data
  • a coloring processing unit configured to receive a coloring instruction, obtain an initial coloring value according to the coloring instruction, and calculate a target coloring value according to the initial coloring value and the color difference range;
  • the image rendering unit is configured to render the facial feature region in the real-time image by using the target coloring value to obtain a virtual makeup image of the makeup object.
  • In the method, the real-time image of the makeup object is acquired first, the facial features in the real-time image are recognized, and the facial feature region in the real-time image is determined; then the color data of the regions of the real-time image other than the facial feature region is used to obtain a color difference range, the coloring value of the makeup is adjusted accordingly, and the facial feature region is rendered with the adjusted coloring value to obtain a virtual makeup image of the makeup object.
  • Because the color difference range of the surrounding regions is taken into account, the final rendering result can adapt to factors such as the ambient lighting around the makeup object, so the generated virtual makeup image is more realistic and natural. Moreover, since rendering is performed on a real-time image of the makeup object, the facial makeup can be displayed from multiple angles, improving the display effect of the virtual makeup.
  • FIG. 1 is a schematic flow chart of a method of virtual makeup in one embodiment
  • FIG. 2 is a schematic structural view of a system for virtual makeup in one embodiment
  • Figure 3 is a schematic structural view of a system for virtual makeup in one embodiment
  • FIG. 4 is a schematic structural view of a system for virtual makeup in one embodiment
  • Figure 5 is a schematic structural view of a system for virtual makeup in one embodiment
  • FIG. 6 is a schematic diagram showing the distribution of facial feature points in one embodiment
  • Figure 7 is a schematic illustration of the scanning of a closed curve in one of the embodiments.
  • FIG. 1 is a schematic flow chart of a method for virtual makeup according to an embodiment of the present invention.
  • the method of virtual makeup in this embodiment includes the following steps:
  • Step S101: acquiring a real-time image of the makeup object;
  • Step S102 Identify a facial feature in the real-time image, and obtain a facial feature region in the real-time image;
  • Step S103 acquiring color data of other areas except the face feature area in the real-time image, and acquiring color difference ranges of other areas according to the color data;
  • Step S104 receiving a coloring instruction, acquiring an initial coloring value according to the coloring instruction, and calculating a target coloring value according to the initial coloring value and the color difference range;
  • Step S105 Rendering the facial feature region in the real-time image by using the target coloring value to obtain a virtual makeup image of the makeup object.
  • In this embodiment, the real-time image of the makeup object is obtained first, the facial features in the real-time image are recognized, and the facial feature region in the real-time image is determined; then the color difference range is obtained from the color data of the regions other than the facial feature region, the coloring value of the makeup is adjusted, and the facial feature region is rendered with the adjusted coloring value to obtain a virtual makeup image of the makeup object.
  • Not only is the facial feature region in the real-time image recognized and rendered, but the color difference range corresponding to the color data of the other regions of the real-time image is also considered, so the final rendering result can adapt to factors such as the ambient lighting around the makeup object. The generated virtual makeup image is therefore more realistic and natural, and because rendering is performed on the real-time image of the makeup object, the facial makeup can be displayed from multiple angles, improving the display effect of the virtual makeup.
  • To recognize the facial features in the real-time image, the dlib face detection library can be used.
  • the dlib face detection library has good detection efficiency and can effectively obtain facial feature points.
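For orientation, a minimal Python sketch of obtaining the 68 facial feature points with dlib follows; the model file name and the OpenCV loading step are assumptions, as the patent only names the dlib library.

```python
# Minimal sketch, assuming dlib's standard 68-landmark predictor; the model
# path and the use of OpenCV for loading are assumptions, not from the patent.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

image = cv2.imread("face.jpg")                       # hypothetical input frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faces = detector(gray, 1)                            # upsample once to find small faces
if faces:
    shape = predictor(gray, faces[0])
    # dlib numbers the landmarks 0-67; the patent's FIG. 6 numbers them 1-68.
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```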
  • In one embodiment, the step of obtaining the facial feature region in the real-time image includes the following step:
  • There are a number of facial features, each with a specific shape and position; a facial feature can be represented by multiple feature points, and these feature points form the point set of that facial feature part.
  • The number of feature points in the point set of a feature part is limited and the points are discrete, while a facial feature part is generally a closed region, so the point set needs to be fitted and expanded to obtain a fitted curve, which makes it convenient to determine the closed facial feature region.
  • The point sets of the facial feature parts include a left eyebrow feature point set, a right eyebrow feature point set, a left eye feature point set, a right eye feature point set, a nose bridge feature point set, a nose feature point set, an upper lip feature point set, a lower lip feature point set, and a facial contour feature point set;
  • the step of fitting the point sets of the facial feature parts to obtain the fitted curves includes the following step:
  • fitting the above feature point sets respectively into corresponding closed curves.
  • The facial feature parts include the left eyebrow, right eyebrow, left eye, right eye, nose bridge, nose, upper lip, lower lip, and facial contour. These parts differ from one another; dividing the facial features into multiple distinct parts, obtaining the corresponding feature point sets, and fitting them into separate closed curves yields distinct facial feature regions that can be rendered separately, making the rendering operation more targeted and improving the rendering effect.
  • Optionally, the facial feature region corresponding to the facial contour feature point set does not include the facial feature regions corresponding to the left eyebrow, right eyebrow, left eye, right eye, nose bridge, nose, upper lip, and lower lip feature point sets. This avoids rendering those feature regions while rendering the facial contour region, which would otherwise degrade the overall rendering effect.
  • Optionally, before the step of rendering the facial feature region in the real-time image with the target coloring value, the method further includes the step of determining the facial feature region to be rendered according to the coloring instruction. Since there are multiple feature regions, one or more facial feature regions can be selected for rendering.
  • The step of fitting the left eyebrow, right eyebrow, left eye, right eye, nose bridge, nose, upper lip, lower lip, and facial contour feature point sets into corresponding closed curves includes the following steps:
  • sorting the feature points in any one feature point set; selecting any feature point among the sorted feature points as the target feature point; determining the first midpoint of the line connecting the target feature point and the previous feature point, and the second midpoint of the line connecting the target feature point and the next feature point; and translating the line connecting the first midpoint and the second midpoint onto the target feature point, so that the midpoint of the translated line lies at the position of the target feature point;
  • using the translated first midpoint as the control point between the target feature point and the previous feature point, and drawing a quadratic Bezier curve from the target feature point, the previous feature point, and the control point; here, the previous feature point of the first-sorted feature point is the last-sorted feature point;
  • the closed curve corresponding to the current feature point set consists of the quadratic Bezier curves between all pairs of sorted-adjacent feature points.
  • In this embodiment, a quadratic Bezier curve is defined by three nodes: the two endpoints of the curve and a middle control point. Two sorted-adjacent feature points in the point set serve as the endpoints, and the middle control point is determined from the midpoints of the lines connecting the feature points; a quadratic Bezier curve can then be drawn from the target feature point, the previous feature point, and the control point. The quadratic Bezier curves between all sorted-adjacent feature points form a closed curve that encloses a facial feature region.
  • A quadratic Bezier curve is a smooth arc; using it to form the closed curve makes the edge of the enclosed facial feature region appear natural and smooth, further enhancing the rendering effect.
  • Optionally, the quadratic Bezier curve can be drawn according to the following formula: B(t) = (1 - t)² · P0 + 2t(1 - t) · P1 + t² · P2, t ∈ [0, 1]
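As a rough illustration of drawing a curve from this formula, the following sketch samples the quadratic Bezier defined by two feature points P0, P2 and a control point P1; the sample count is an arbitrary assumption.

```python
# Minimal sketch: sample B(t) = (1-t)^2*P0 + 2t(1-t)*P1 + t^2*P2 for t in [0, 1].
def quadratic_bezier(p0, p1, p2, samples=32):
    """p0, p2: endpoint feature points; p1: control point; returns curve points."""
    curve = []
    for i in range(samples + 1):
        t = i / samples
        x = (1 - t) ** 2 * p0[0] + 2 * t * (1 - t) * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * t * (1 - t) * p1[1] + t ** 2 * p2[1]
        curve.append((x, y))
    return curve
```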
  • In another embodiment, the step of fitting the left eyebrow, right eyebrow, left eye, right eye, nose bridge, nose, upper lip, lower lip, and facial contour feature point sets into corresponding closed curves includes the following steps:
  • sorting the feature points in any one feature point set; selecting any feature point among the sorted feature points as the target feature point; determining the first midpoint of the line connecting the target feature point and the previous feature point, and the second midpoint of the line connecting the target feature point and the next feature point; and translating the line connecting the first midpoint and the second midpoint onto the target feature point, so that the midpoint of the translated line lies at the position of the target feature point;
  • using the translated second midpoint as the control point between the target feature point and the next feature point, and drawing a quadratic Bezier curve from the target feature point, the next feature point, and the control point; here, the next feature point of the last-sorted feature point is the first-sorted feature point;
  • the closed curve corresponding to the current feature point set consists of the quadratic Bezier curves between all pairs of sorted-adjacent feature points.
  • In this embodiment, the closed curve is again composed mainly of quadratic Bezier curves. A quadratic Bezier curve consists of a segment with three nodes: the two endpoints and a middle control point. Two sorted-adjacent feature points in the point set serve as the endpoints, the middle control point is determined from the midpoints of the feature-point connecting lines, and a quadratic Bezier curve can be drawn from the target feature point, the next feature point, and the control point. The quadratic Bezier curves between all sorted-adjacent feature points form a closed curve enclosing a facial feature region.
  • A quadratic Bezier curve is a smooth arc; using it to form the closed curve makes the edge of the enclosed facial feature region appear natural and smooth, further enhancing the rendering effect.
  • In one embodiment, the step of determining the facial feature region from the fitted curve includes the following steps:
  • scanning the area enclosed by the closed curve fitted from the current feature point set to obtain scan lines;
  • obtaining the connecting lines between the sorted-adjacent feature points in the current feature point set, and generating an active edge table from the intersection state of the scan lines and the connecting lines, where the active edge table is the set of connecting lines intersecting the current scan line;
  • determining, from the connecting lines in the active edge table, the quadratic Bezier curves intersecting the current scan line, and selecting scan segments on the current scan line according to the intersections of the scan line with those curves;
  • the facial feature region determined by the current feature point set consists of all the scan segments.
  • A scan segment is composed of multiple scan pixels. In this way, every pixel in the facial feature region can be obtained accurately, which makes it convenient to render the facial feature region uniformly.
  • The active edge table records in real time the set of connecting lines intersecting the current scan line. Compared with a quadratic Bezier curve, the intersection of a straight connecting line with a scan line is easier to compute, so the active edge table can be used to track the intersection state of the current scan line with each connecting line dynamically. Because the scan lines advance in steps, once a connecting line has appeared in the active edge table and then ceases to appear at some scan line, it will not appear again in subsequent scans. For example, if scan line 1 intersects connecting lines 1, 2, and 3, and scan line 2 intersects connecting lines 1 and 3, then no scan line after scan line 2 intersects connecting line 2, so line 2 is removed from the active edge table. When updating the table there is thus no need to compute intersections with removed connecting lines, which simplifies the dynamic update of the active edge table and improves processing efficiency.
  • Each connecting line is the segment between a pair of sorted-adjacent feature points, and the endpoints of each quadratic Bezier curve are sorted-adjacent feature points, so the connecting lines correspond one-to-one with the quadratic Bezier curves; the connecting lines in the active edge table therefore determine the quadratic Bezier curves that intersect the current scan line.
  • Scan segments are selected on the current scan line according to its intersections with the quadratic Bezier curve; since a scan line passing through a closed area generally has two intersections, a scan segment is the segment whose endpoints are the two intersections.
  • Optionally, when the original connecting lines in the active edge table have not been removed and two new connecting lines sharing a common endpoint are added, the segment between the intersections of the current scan line and the quadratic Bezier curves corresponding to the two new lines is treated as an excess segment, and the excess segment is removed from the scan segment between the intersections of the current scan line and the quadratic Bezier curve of the original lines.
  • The closed curve fitted from the point set of a facial feature part may enclose a concave polygonal area; the scan segments formed inside the notch do not belong to the facial feature region, so they can be eliminated, making the facial feature region more accurate.
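To make the scanning idea concrete, here is a hedged sketch of scanline-filling a closed region. Unlike the patent, which keeps an active edge table of feature-point connecting lines and intersects each scan line with the corresponding Bezier curves, this sketch approximates the closed curve by a dense polygon of sampled curve points and uses even-odd pairing of intersections, which also discards the spans inside a concave notch.

```python
# Minimal sketch: even-odd scanline fill over a polygon approximating the
# closed fitted curve; returns (y, x_left, x_right) scan segments.
def scanline_fill(polygon):
    ys = [p[1] for p in polygon]
    segments = []
    for y in range(int(min(ys)), int(max(ys)) + 1):
        xs = []
        for (x0, y0), (x1, y1) in zip(polygon, polygon[1:] + polygon[:1]):
            if (y0 <= y < y1) or (y1 <= y < y0):          # edge crosses this scan line
                xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        for left, right in zip(xs[::2], xs[1::2]):        # interior spans only
            segments.append((y, left, right))
    return segments
```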
  • In one embodiment, the step of obtaining the color difference range of the other regions from the color data includes the following step: performing a Fourier transform on the color data and then filtering the transform result to obtain the color difference range of the other regions.
  • The Fourier transform and filtering operations move the color data from the spatial domain to the frequency domain, so the color difference range can be obtained very conveniently.
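As a rough illustration, the sketch below moves one color channel into the frequency domain with numpy and filters it; treating the filtering step as a low-pass mask and the cutoff value are assumptions, since the text does not specify the filter.

```python
# Minimal sketch, assuming a square low-pass mask in the frequency domain.
import numpy as np

def color_difference_range(channel, cutoff=30):
    """channel: 2D array of color data from the regions outside the face."""
    spectrum = np.fft.fftshift(np.fft.fft2(channel))
    rows, cols = channel.shape
    cy, cx = rows // 2, cols // 2
    mask = np.zeros((rows, cols))
    mask[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 1.0
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    return filtered.min(), filtered.max()      # interval B used for the coloring value
```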
  • In one embodiment, after the step of obtaining the color difference range of the other regions from the color data, the method further includes the following steps: performing color statistics on the facial feature regions corresponding to the feature point sets, and determining, from the statistics, target adjustment regions whose brightness is higher than a first preset value and whose contrast is lower than a second preset value; and, after the step of rendering the facial feature region in the real-time image with the target coloring value, performing a color balancing operation on the target adjustment regions.
  • In this embodiment, the color balancing operation may be performed on the target adjustment regions after rendering; a target adjustment region is a facial feature region whose brightness is higher than the first preset value and whose contrast is lower than the second preset value. Color balancing in these regions can improve the facial color of the person in the virtual makeup image and enhance the display effect of the virtual makeup. The first preset value and the second preset value may be modified as needed.
  • Optionally, the target adjustment regions may be the facial feature regions corresponding to the facial contour, nose bridge, and nose feature point sets, excluding the facial feature regions corresponding to the left eyebrow, right eyebrow, left eye, right eye, upper lip, and lower lip feature point sets.
  • In one embodiment, the method of virtual makeup further includes the following steps: receiving a first modification instruction, selecting a target feature point set according to the first modification instruction, extracting correction parameters from the first modification instruction, correcting the feature points in the target feature point set according to the correction parameters, and returning to the step of fitting the target feature point set into a closed curve.
  • The feature point set can thus be corrected through the first modification instruction to suit various scenarios, such as manual adjustment by the user or a face recognition error, which strengthens the applicability of the scheme in practice.
  • In one embodiment, the method of virtual makeup further includes the following steps: receiving a second modification instruction, selecting a target facial feature region according to the second modification instruction, extracting adjustment parameters from the second modification instruction, adjusting the target coloring value according to the adjustment parameters, and rendering the target facial feature region in the real-time image with the adjusted target coloring value.
  • After rendering with the target coloring value, the value can be adjusted through the second modification instruction and rendering performed again, which makes it easy for the makeup object to try different makeup colors; since only the target coloring value needs adjusting, the process of changing makeup is accelerated.
  • In one embodiment, the step of obtaining the real-time image of the makeup object includes the following step: photographing the makeup object, acquiring the captured preview image, and performing denoising preprocessing on the preview image to obtain the real-time image.
  • Because the preview image at the time of shooting is used and the preview changes in real time, the effect of the virtual makeup can be displayed in real time, and the denoising preprocessing improves the accuracy of the subsequent face recognition in the image.
  • the embodiment of the present invention further provides a system for virtual makeup, and an embodiment of the virtual makeup system of the present invention will be described in detail below.
  • Referring to FIG. 2, a schematic structural diagram of a virtual makeup system according to an embodiment of the present invention is shown.
  • the virtual makeup system in this embodiment includes:
  • an image acquisition unit 210 configured to acquire a real-time image of the makeup object;
  • the face recognition unit 220 is configured to identify a facial feature in the real-time image, and acquire a facial feature region;
  • the color difference obtaining unit 230 is configured to acquire color data of other regions except the face feature region in the real-time image, and obtain color difference ranges of other regions according to the color data;
  • the coloring processing unit 240 is configured to receive a coloring instruction, obtain an initial coloring value according to the coloring instruction, and calculate a target coloring value according to the initial coloring value and the color difference range;
  • the image rendering unit 250 is configured to render the facial feature region in the real-time image by using the target coloring value to obtain a virtual makeup image of the makeup object.
  • In one embodiment, the face recognition unit 220 acquires the point sets of the facial feature parts in the real-time image, fits the point sets to obtain fitted curves, and determines the facial feature regions according to the fitted curves.
  • In one embodiment, the point sets of the facial feature parts include a left eyebrow feature point set, a right eyebrow feature point set, a left eye feature point set, a right eye feature point set, a nose bridge feature point set, a nose feature point set, an upper lip feature point set, a lower lip feature point set, and a facial contour feature point set;
  • the face recognition unit 220 fits the left eyebrow, right eyebrow, left eye, right eye, nose bridge, nose, upper lip, lower lip, and facial contour feature point sets respectively into corresponding closed curves.
  • In one embodiment, the face recognition unit 220 sorts the feature points in any one feature point set, selects any feature point among the sorted feature points as the target feature point, determines the first midpoint of the line connecting the target feature point and the previous feature point and the second midpoint of the line connecting the target feature point and the next feature point, and translates the line connecting the first midpoint and the second midpoint onto the target feature point, so that the midpoint of the translated line lies at the position of the target feature point;
  • the translated first midpoint is used as the control point between the target feature point and the previous feature point, and a quadratic Bezier curve is drawn from the target feature point, the previous feature point, and the control point; here, the previous feature point of the first-sorted feature point is the last-sorted feature point;
  • the closed curve corresponding to the current feature point set consists of the quadratic Bezier curves between all sorted-adjacent feature points.
  • In another embodiment, the face recognition unit 220 sorts the feature points in any one feature point set, selects any feature point among the sorted feature points as the target feature point, determines the first midpoint of the line connecting the target feature point and the previous feature point and the second midpoint of the line connecting the target feature point and the next feature point, and translates the line connecting the two midpoints onto the target feature point;
  • the translated second midpoint is used as the control point between the target feature point and the next feature point, and a quadratic Bezier curve is drawn from the target feature point, the next feature point, and the control point; here, the next feature point of the last-sorted feature point is the first-sorted feature point;
  • the closed curve corresponding to the current feature point set consists of the quadratic Bezier curves between all sorted-adjacent feature points.
  • In one embodiment, the face recognition unit 220 scans the area enclosed by the closed curve of the current feature point set to obtain scan lines; obtains the connecting lines between the sorted-adjacent feature points in the current feature point set and generates an active edge table from the intersection state of the scan lines and the connecting lines; determines, from the connecting lines in the active edge table, the quadratic Bezier curves intersecting the current scan line; and selects scan segments on the current scan line according to the intersections of the scan line with those curves. The facial feature region determined by the current feature point set consists of all the scan segments.
  • the color difference acquisition unit 230 performs Fourier transform on the color data, and then filters the Fourier transform result to obtain a color difference range of other regions.
  • the virtual makeup system further includes a color equalization unit 260;
  • the color difference obtaining unit 230 performs color statistics on the face feature regions corresponding to the feature point sets, and determines, according to the statistical result, the target adjustment regions whose brightness is higher than the first preset value and whose contrast is lower than the second preset value;
  • the color equalization unit 260 performs a color equalization operation on the target adjustment area after the image rendering unit 250 performs the rendering operation.
  • In one embodiment, the virtual makeup system further includes a first modification unit 270 configured to receive a first modification instruction, select a target feature point set according to the first modification instruction, extract correction parameters from the first modification instruction, and correct the feature points in the target feature point set according to the correction parameters; the face recognition unit 220 then refits the target feature point set into a closed curve.
  • In one embodiment, the virtual makeup system further includes a second modification unit 280 configured to receive a second modification instruction, select a target facial feature region according to the second modification instruction, extract adjustment parameters from the second modification instruction, and adjust the target coloring value according to the adjustment parameters;
  • the image rendering unit 250 renders the target facial feature region in the real-time image using the adjusted target coloring value.
  • the image acquisition unit 210 captures the makeup object, acquires the captured preview image, performs denoising preprocessing on the captured preview image, and obtains a real-time image.
  • the virtual makeup system of the present invention corresponds one-to-one with the virtual makeup method of the present invention, and the technical features and advantageous effects thereof described in the above embodiments of the virtual makeup method are all applicable to the embodiment of the virtual makeup system.
  • In a specific embodiment, the method of virtual makeup of the present invention can be applied in virtual makeup software.
  • For example, an individual shopping online at home who wants to preview a makeup look may first install software applying the virtual makeup method of the present invention, try the makeup virtually, and then decide whether a purchase is necessary. Alternatively, a cosmetics manufacturer may install a smart tablet device at a store with the software pre-installed and preset makeup parameters (coloring, material, and region templates for the user to select) entered into the software system, so that customers can try makeup virtually through the software without applying real makeup.
  • In practical use, the makeup object can be photographed with the camera of the smart terminal device to obtain a preview image. The preview format of mainstream cameras is YUV420, at varying resolutions, and the preview image can be preprocessed after it is obtained.
  • The preview image is mainly used for face recognition and does not need a very high resolution; setting the resolution to 640*480 is sufficient, so during preprocessing the preview image can be scaled to the 640*480 level, converted to grayscale, and then denoised to complete the preprocessing. Before preprocessing, a copy of the original preview image can also be kept for the subsequent rendering operation, or rendering can be performed on the original preview image.
  • the face features in the pre-processed image are identified.
  • the dlib face detection library can be used for recognition.
  • the dlib face detection library has good detection efficiency and can effectively acquire facial feature points.
  • As shown in FIG. 6, after the point sets of the facial feature parts are acquired, they can be divided into the left eyebrow feature point set LEB (18-22), the right eyebrow feature point set REB (23-27), the left eye feature point set LE (37-42), the right eye feature point set RE (43-48), the nose bridge feature point set NB (28-31), the nose feature point set N (32-36), the upper lip feature point set UM (49-55, 61-65), the lower lip feature point set DM (49, 55-61, 65-68), and the facial contour feature point set F (1-17), nine sets in total.
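For reference, the nine point sets can be written down directly as index lists (1-based, matching the numbering in FIG. 6):

```python
# The nine feature point sets of FIG. 6 as 1-based landmark indices.
FEATURE_POINT_SETS = {
    "LEB": list(range(18, 23)),                                # left eyebrow
    "REB": list(range(23, 28)),                                # right eyebrow
    "LE":  list(range(37, 43)),                                # left eye
    "RE":  list(range(43, 49)),                                # right eye
    "NB":  list(range(28, 32)),                                # nose bridge
    "N":   list(range(32, 37)),                                # nose
    "UM":  list(range(49, 56)) + list(range(61, 66)),          # upper lip
    "DM":  [49] + list(range(55, 62)) + list(range(65, 69)),   # lower lip
    "F":   list(range(1, 18)),                                 # facial contour
}
```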
  • Take the rendering of the upper lip feature point set UM as an example. The conventional method simply connects the feature points, but the lip shape rendered this way is not naturally smooth. The present invention therefore expands the point set UM into a closed set, denoted UM-EXT, using an approximating curve; a quadratic Bezier curve is used here, given by the formula: B(t) = (1 - t)² · P0 + 2t(1 - t) · P1 + t² · P2, t ∈ [0, 1]
  • After the set of quadratic Bezier control points is generated from the discrete point set UM, integration over t is performed, followed by deduplication and completion, to obtain the set UM-EXT.
  • The specific procedure is to sort the ordered set UM and traverse it, generating a point set UM′ with the same number of elements, in which each point is the midpoint of the line connecting two adjacent points in UM. For a point P0 in UM, the midpoint of its line to the previous point P1 is P1′ and the midpoint of its line to the next point P2 is P2′; P1′ and P2′ are connected, and the segment P1′P2′ is translated to the point P0 so that its midpoint lies at P0. The endpoints of the translated segment P1′P2′ are C1 and C2, which can serve as the control points between P0 and P1 and between P0 and P2.
  • The quadratic Bezier curves corresponding to the feature points in the discrete point set UM can enclose a closed concave polygon. After the deduplication and completion operations, the condition given by the formula (not reproduced here) must be satisfied, where UM is recorded as the index set of UM-EXT.
  • Scan lines are established over the area enclosed by the closed concave polygon in order along the y axis, and an active edge table AET is generated from the intersection state of the scan lines with the segments formed by the ordered points in the set UM; each active edge in the AET is a segment formed by ordered points in UM.
  • The scan lines have a stepping relationship: if scan line scan-1 intersects segments 1, 2, and 3, and scan-2 intersects segments 1 and 3, then no scan line after scan-2 intersects segment 2, so segment 2 is removed from the active edge table AET.
  • Several active edges may intersect any one scan line. Generally, when there are two active edges, the segment between the two intersections of the scan line with the quadratic Bezier curves corresponding to the two active edges is selected. When the original active edges in the table have not been removed and two new active edges sharing a common endpoint are added, the segment between the intersections of the current scan line and the quadratic Bezier curves corresponding to the two new active edges is treated as an excess segment, and is removed from the scan segment between the intersections of the current scan line and the curve corresponding to the original active edges, as in the notch portion in FIG. 7; the scan segments in the notch do not belong to the facial feature region, so they can be removed.
  • the scanning line segment is composed of a plurality of scanning pixels. In this way, each pixel point in the face feature area can be accurately obtained, thereby determining the face feature area.
  • the face feature area is rendered using the target shading value.
  • The color balancing operation is mainly used to improve facial color and make it fairer; the facial skin is a region of high brightness and low contrast, whereas the eyes, eyebrows, and lips are regions of low brightness and high contrast, so no color balancing is applied to the latter, preserving the rendering result of the previous step.
  • The user can also manually select places where the rendering is unsatisfactory after imaging; the device collects the correction parameters, stores the corresponding region, and refits that region from its point set, correcting the facial feature region. The rendered pixel values can also be adjusted during subsequent rendering.
  • Taking the upper lip again as an example: for the feature point set UM, the user can correct the feature points in UM and recompute the rendering area; for areas where the rendering result is unsatisfactory, the target coloring value is adjusted, and the color and gradient values of the 8×8 neighborhood are recorded for subsequent correction.
  • Through the above steps, the whole process of real-time virtual makeup rendering is realized; the process can preview the effect of the virtual makeup in real time and render the image while adapting to elements such as ambient lighting, making the generated makeup image more realistic and natural, and the user can also manually correct inaccurately colored parts of the rendered image to achieve the desired result.
  • All or part of the steps of the above methods can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, includes the steps described in the above methods.
  • The storage medium includes ROM/RAM, a magnetic disk, an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

A method and system for virtual makeup: a real-time image of the makeup object is acquired (S101); the facial features in the real-time image are recognized and the facial feature region in the real-time image is obtained (S102); a color difference range is obtained from the color data of the regions of the real-time image other than the facial feature region, the coloring value of the makeup is adjusted, and the facial feature region is rendered with the adjusted coloring value to obtain a virtual makeup image of the makeup object. In this method, not only is the facial feature region in the real-time image recognized and rendered, but the color difference range corresponding to the color data of the other regions of the real-time image is also considered, so the final rendering result can adapt to factors such as the ambient lighting around the makeup object, making the generated virtual makeup image more realistic and natural. Rendering is performed on the real-time image of the makeup object, so the facial makeup can be displayed from multiple angles, improving the display effect of the virtual makeup.

Description

Method and system for virtual makeup
Technical Field
The present invention relates to the field of image processing technologies, and in particular to a method and system for virtual makeup.
Background
Makeup is an important part of daily life. For consumers, effectively choosing suitable cosmetics among many products has become a key concern. Traditionally, consumers go to a physical store and repeatedly try on makeup until they find a satisfactory product. To make applying makeup more convenient for users, the technique of virtual makeup has emerged.
At present, virtual makeup technology mainly analyzes acquired static images, extracts facial features from them, and combines makeup features with the facial features to synthesize a makeup look, providing users with a virtual preview of the makeup.
However, traditional virtual makeup can only process facial features in static images, which makes the overall virtual makeup look stiff, with a poor effect.
Summary of the Invention
On this basis, it is necessary to provide a method and system for virtual makeup to address the problem that traditional virtual makeup looks stiff overall and has a poor effect.
A method of virtual makeup includes the following steps:
acquiring a real-time image of the makeup object;
recognizing the facial features in the real-time image, and obtaining the facial feature region in the real-time image;
acquiring color data of the regions of the real-time image other than the facial feature region, and obtaining the color difference range of those other regions from the color data;
receiving a coloring instruction, obtaining an initial coloring value according to the coloring instruction, and calculating a target coloring value from the initial coloring value and the color difference range;
rendering the facial feature region in the real-time image with the target coloring value to obtain a virtual makeup image of the makeup object.
A system of virtual makeup includes:
an image acquisition unit configured to acquire a real-time image of the makeup object;
a face recognition unit configured to recognize the facial features in the real-time image and obtain the facial feature region;
a color difference acquisition unit configured to acquire color data of the regions of the real-time image other than the facial feature region and obtain the color difference range of those other regions from the color data;
a coloring processing unit configured to receive a coloring instruction, obtain an initial coloring value according to the coloring instruction, and calculate a target coloring value from the initial coloring value and the color difference range;
an image rendering unit configured to render the facial feature region in the real-time image with the target coloring value to obtain a virtual makeup image of the makeup object.
According to the above method and system for virtual makeup of the present invention, the real-time image of the makeup object is acquired first, the facial features in the real-time image are recognized, and the facial feature region in the real-time image is determined; then the color difference range is obtained from the color data of the regions of the real-time image other than the facial feature region, the coloring value of the makeup is adjusted, and the facial feature region is rendered with the adjusted coloring value to obtain a virtual makeup image of the makeup object. In the present invention, not only is the facial feature region in the real-time image recognized and rendered, but the color difference range corresponding to the color data of the other regions is also considered, so the final rendering result can adapt to factors such as the ambient lighting around the makeup object, making the generated virtual makeup image more realistic and natural; moreover, rendering is performed on the real-time image of the makeup object, so the facial makeup can be displayed from multiple angles, improving the display effect of the virtual makeup.
Brief Description of the Drawings
FIG. 1 is a schematic flow chart of the method of virtual makeup in one embodiment;
FIG. 2 is a schematic structural diagram of the system of virtual makeup in one embodiment;
FIG. 3 is a schematic structural diagram of the system of virtual makeup in one embodiment;
FIG. 4 is a schematic structural diagram of the system of virtual makeup in one embodiment;
FIG. 5 is a schematic structural diagram of the system of virtual makeup in one embodiment;
FIG. 6 is a schematic diagram of the distribution of facial feature points in one embodiment;
FIG. 7 is a schematic diagram of the scanning of a closed curve in one embodiment.
Detailed Description
To make the purpose, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and do not limit its scope of protection.
Referring to FIG. 1, which is a schematic flow chart of the method of virtual makeup according to one embodiment of the present invention, the method of virtual makeup in this embodiment includes the following steps:
Step S101: acquiring a real-time image of the makeup object;
Step S102: recognizing the facial features in the real-time image, and obtaining the facial feature region in the real-time image;
Step S103: acquiring color data of the regions of the real-time image other than the facial feature region, and obtaining the color difference range of those other regions from the color data;
in this step, the regions of the real-time image other than the facial feature region belong to the environment of the makeup object, and their color difference range reflects the influence of environmental factors such as lighting on the facial feature region;
Step S104: receiving a coloring instruction, obtaining an initial coloring value according to the coloring instruction, and calculating a target coloring value from the initial coloring value and the color difference range;
Step S105: rendering the facial feature region in the real-time image with the target coloring value to obtain a virtual makeup image of the makeup object.
In this embodiment, the real-time image of the makeup object is acquired first, the facial features in the real-time image are recognized, and the facial feature region in the real-time image is determined; then the color difference range is obtained from the color data of the other regions, the coloring value of the makeup is adjusted, and the facial feature region is rendered with the adjusted coloring value to obtain a virtual makeup image of the makeup object. Not only is the facial feature region recognized and rendered, but the color difference range corresponding to the color data of the other regions is also considered, so the final rendering result can adapt to factors such as the ambient lighting around the makeup object, making the generated virtual makeup image more realistic and natural; rendering is performed on the real-time image of the makeup object, so the facial makeup can be displayed from multiple angles, improving the display effect of the virtual makeup.
Optionally, the dlib face detection library can be used to recognize the facial features in the real-time image; dlib detects efficiently and can obtain facial feature points effectively.
Optionally, after the initial coloring value C0 is obtained from the coloring instruction, it can be adjusted according to a preset adjustment interval A and the color difference range interval B of the other regions to obtain the target coloring value C. The target coloring value C, the initial coloring value C0, the adjustment interval A, and the color difference range interval B satisfy the following relation: C = (1 - t) * C0 + t * k, t ∈ A, k ∈ B
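A minimal sketch of this relation follows; how t and k are picked inside their intervals A and B is not specified in the text, so taking the interval midpoints here is an assumption.

```python
# Minimal sketch of C = (1 - t) * C0 + t * k with t in A and k in B.
def target_color(c0, interval_a, interval_b):
    t = sum(interval_a) / 2.0      # assumed choice: midpoint of the preset interval A
    k = sum(interval_b) / 2.0      # assumed choice: midpoint of the color-difference range B
    return (1 - t) * c0 + t * k
```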
In one embodiment, the step of obtaining the facial feature region in the real-time image includes the following step:
acquiring the point sets of the facial feature parts in the real-time image, fitting the point sets to obtain fitted curves, and determining the facial feature region from the fitted curves.
In this embodiment, there are several kinds of facial features, each with a specific shape and position; a facial feature can be represented by multiple feature points, which form the point set of that facial feature part. The number of feature points in a point set is limited and the points are discrete, while a facial feature part is generally a closed region, so the point set needs to be fitted and expanded into a fitted curve, which makes it convenient to determine the closed facial feature region.
In one embodiment, the point sets of the facial feature parts include a left eyebrow feature point set, a right eyebrow feature point set, a left eye feature point set, a right eye feature point set, a nose bridge feature point set, a nose feature point set, an upper lip feature point set, a lower lip feature point set, and a facial contour feature point set;
the step of fitting the point sets of the facial feature parts to obtain the fitted curves includes the following step:
fitting the left eyebrow, right eyebrow, left eye, right eye, nose bridge, nose, upper lip, lower lip, and facial contour feature point sets respectively into corresponding closed curves.
In this embodiment, the facial feature parts include the left eyebrow, right eyebrow, left eye, right eye, nose bridge, nose, upper lip, lower lip, and facial contour. These parts differ from one another; dividing the facial features into multiple distinct parts, obtaining the corresponding feature point sets, and fitting them into different closed curves yields distinct facial feature regions that can be rendered separately, making the rendering more targeted and improving the rendering effect.
Optionally, the facial feature region corresponding to the facial contour feature point set does not include the facial feature regions corresponding to the left eyebrow, right eyebrow, left eye, right eye, nose bridge, nose, upper lip, and lower lip feature point sets, which avoids rendering those feature regions while the facial contour region is rendered and thereby affecting the overall rendering effect.
Optionally, before the step of rendering the facial feature region in the real-time image with the target coloring value, the method further includes the step of determining the facial feature region to be rendered according to the coloring instruction. Since there are multiple feature regions, one or more of them can be selected for rendering.
In one embodiment, the step of fitting the left eyebrow, right eyebrow, left eye, right eye, nose bridge, nose, upper lip, lower lip, and facial contour feature point sets into corresponding closed curves includes the following steps:
sorting the feature points in any one feature point set; selecting any feature point among the sorted feature points as the target feature point; determining the first midpoint of the line connecting the target feature point and the previous feature point, and the second midpoint of the line connecting the target feature point and the next feature point; and translating the line connecting the first midpoint and the second midpoint onto the target feature point, so that the midpoint of the translated line lies at the position of the target feature point;
using the translated first midpoint as the control point between the target feature point and the previous feature point, and drawing a quadratic Bezier curve from the target feature point, the previous feature point, and the control point; here, the previous feature point of the first-sorted feature point is the last-sorted feature point;
the closed curve corresponding to the current feature point set consists of the quadratic Bezier curves between all sorted-adjacent feature points.
In this embodiment, a quadratic Bezier curve has three nodes: the endpoints at its two ends and a middle control point. Two sorted-adjacent feature points serve as the endpoints, the middle control point is determined from the midpoints of the feature-point connecting lines, and the quadratic Bezier curve can be drawn from the target feature point, the previous feature point, and the control point. The quadratic Bezier curves between all sorted-adjacent feature points form a closed curve that encloses a facial feature region. A quadratic Bezier curve is a smooth arc; using it to form the closed curve makes the edge of the enclosed facial feature region appear natural and smooth, further improving the rendering effect.
Optionally, once the target feature point, the previous feature point, and the control point are determined, the quadratic Bezier curve can be drawn according to the following formula:
B(t) = (1 - t)² · P0 + 2t(1 - t) · P1 + t² · P2, t ∈ [0, 1]
In one embodiment, the step of fitting the left eyebrow, right eyebrow, left eye, right eye, nose bridge, nose, upper lip, lower lip, and facial contour feature point sets into corresponding closed curves includes the following steps:
sorting the feature points in any one feature point set; selecting any feature point among the sorted feature points as the target feature point; determining the first midpoint of the line connecting the target feature point and the previous feature point, and the second midpoint of the line connecting the target feature point and the next feature point; and translating the line connecting the first midpoint and the second midpoint onto the target feature point, so that the midpoint of the translated line lies at the position of the target feature point;
using the translated second midpoint as the control point between the target feature point and the next feature point, and drawing a quadratic Bezier curve from the target feature point, the next feature point, and the control point; here, the next feature point of the last-sorted feature point is the first-sorted feature point;
the closed curve corresponding to the current feature point set consists of the quadratic Bezier curves between all sorted-adjacent feature points.
In this embodiment, the closed curve is again composed mainly of quadratic Bezier curves. A quadratic Bezier curve consists of a segment with three nodes: the endpoints at its two ends and a middle control point. Two sorted-adjacent feature points serve as the endpoints, the middle control point is determined from the midpoints of the feature-point connecting lines, and the quadratic Bezier curve can be drawn from the target feature point, the next feature point, and the control point. The quadratic Bezier curves between all sorted-adjacent feature points form a closed curve enclosing a facial feature region. A quadratic Bezier curve is a smooth arc; using it to form the closed curve makes the edge of the enclosed facial feature region appear natural and smooth, further improving the rendering effect.
In one embodiment, the step of determining the facial feature region from the fitted curve includes the following steps:
scanning the area enclosed by the closed curve fitted from the current feature point set to obtain scan lines;
obtaining the connecting lines between the sorted-adjacent feature points in the current feature point set, and generating an active edge table from the intersection state of the scan lines and the connecting lines, where the active edge table is the set of connecting lines intersecting the current scan line;
determining, from the connecting lines in the active edge table, the quadratic Bezier curve intersecting the current scan line, and selecting a scan segment on the current scan line according to the intersections of the current scan line with that curve;
the facial feature region determined by the current feature point set consists of all the scan segments.
In this embodiment, the area enclosed by the fitted closed curve is scanned, the active edge table identifies the quadratic Bezier curves intersecting the scan line, and the intersections of the scan line with those curves select the scan segments; the facial feature region determined by the feature point set consists of all the scan segments. A scan segment is composed of multiple scan pixels; in this way, every pixel in the facial feature region can be obtained accurately, making it convenient to render the facial feature region uniformly.
The active edge table records in real time the set of connecting lines intersecting the current scan line. Compared with a quadratic Bezier curve, the intersection state of a straight connecting line with a scan line is easier to compute, and the active edge table can be used to track the intersection state of the current scan line with each connecting line dynamically. Because the scan lines advance in steps, once a connecting line has appeared in the active edge table and ceases to appear at some scan line, it will not appear again in subsequent scans. For example, if scan line 1 intersects connecting lines 1, 2, and 3, and scan line 2 intersects connecting lines 1 and 3, then no scan line after scan line 2 intersects connecting line 2, so segment 2 is removed from the active edge table. When generating the table there is then no need to compute intersections with removed connecting lines, which simplifies the dynamic update of the active edge table and improves processing efficiency.
Each connecting line is the segment between sorted-adjacent feature points, and the two ends of each quadratic Bezier curve are sorted-adjacent feature points, so the connecting lines correspond one-to-one with the quadratic Bezier curves; the connecting lines in the active edge table therefore determine the quadratic Bezier curves intersecting the current scan line. A scan segment is selected on the current scan line according to its intersections with the quadratic Bezier curve; since a scan line passing through a closed area generally has two intersections, the scan segment is the segment whose endpoints are those two intersections.
Optionally, when the original connecting lines in the active edge table have not been removed and two new connecting lines sharing a common endpoint are added, the segment between the intersections of the current scan line and the quadratic Bezier curves corresponding to the two new lines is treated as an excess segment, and the excess segment is removed from the scan segment between the intersections of the current scan line and the quadratic Bezier curve of the original lines. The closed curve fitted from the point set of a facial feature part may enclose a concave polygonal area; the scan segments formed in the notch do not belong to the facial feature region, so they can be eliminated, making the facial feature region more accurate.
In one embodiment, the step of obtaining the color difference range of the other regions from the color data includes the following step:
performing a Fourier transform on the color data and then filtering the transform result to obtain the color difference range of the other regions.
In this embodiment, the Fourier transform and filtering operations move the color data from the spatial domain to the frequency domain, so the color difference range can be obtained very conveniently.
In one embodiment, after the step of obtaining the color difference range of the other regions from the color data, the method further includes the following steps:
performing color statistics on the facial feature regions corresponding to the feature point sets, and determining, from the statistics, the target adjustment regions whose brightness is higher than a first preset value and whose contrast is lower than a second preset value;
after the step of rendering the facial feature region in the real-time image with the target coloring value, the method further includes the step of performing a color balancing operation on the target adjustment regions.
In this embodiment, the color balancing operation can be performed on the target adjustment regions after rendering; a target adjustment region is a facial feature region whose brightness is higher than the first preset value and whose contrast is lower than the second preset value. Color balancing in these regions can improve the facial color of the person in the virtual makeup image and strengthen the display effect of the virtual makeup. The first and second preset values can be modified as needed. A sketch of the region test follows.
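A minimal sketch of the brightness/contrast test described above; using the mean as brightness and the standard deviation as contrast is an assumption, since the text does not define the statistics.

```python
# Minimal sketch: does a region qualify as a target adjustment region?
import numpy as np

def is_target_region(pixels, first_preset, second_preset):
    """pixels: 1D grayscale values inside one facial feature region."""
    return pixels.mean() > first_preset and pixels.std() < second_preset
```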
Optionally, the target adjustment regions may be the facial feature regions corresponding to the facial contour, nose bridge, and nose feature point sets, excluding the facial feature regions corresponding to the left eyebrow, right eyebrow, left eye, right eye, upper lip, and lower lip feature point sets.
In one embodiment, the method of virtual makeup further includes the following steps:
receiving a first modification instruction, selecting a target feature point set according to the first modification instruction, extracting correction parameters from the first modification instruction, correcting the feature points in the target feature point set according to the correction parameters, and returning to the step of fitting the target feature point set into a closed curve.
In this embodiment, the feature point set can be corrected through the first modification instruction to suit various scenarios, such as manual adjustment by the user or a face recognition error, strengthening the applicability of the scheme in practice.
In one embodiment, the method of virtual makeup further includes the following steps:
receiving a second modification instruction, selecting a target facial feature region according to the second modification instruction, extracting adjustment parameters from the second modification instruction, adjusting the target coloring value according to the adjustment parameters, and rendering the target facial feature region in the real-time image with the adjusted target coloring value.
In this embodiment, after rendering with the target coloring value, the value can be adjusted through the second modification instruction and rendering performed again, making it convenient for the makeup object to choose different makeup looks; since only the target coloring value needs adjusting, the process of changing makeup is accelerated.
In one embodiment, the step of acquiring the real-time image of the makeup object includes the following step:
photographing the makeup object, acquiring the captured preview image, and performing denoising preprocessing on the preview image to obtain the real-time image.
In this embodiment, the preview image at the time of shooting is used, and the preview changes in real time; performing virtual makeup on the preview image allows the effect of the virtual makeup to be displayed in real time, and the denoising preprocessing improves the accuracy of the subsequent face recognition in the image.
According to the above method of virtual makeup, an embodiment of the present invention further provides a system of virtual makeup, described in detail below.
Referring to FIG. 2, which is a schematic structural diagram of the system of virtual makeup according to one embodiment of the present invention, the system of virtual makeup in this embodiment includes:
an image acquisition unit 210 configured to acquire a real-time image of the makeup object;
a face recognition unit 220 configured to recognize the facial features in the real-time image and obtain the facial feature region;
a color difference acquisition unit 230 configured to acquire color data of the regions of the real-time image other than the facial feature region and obtain the color difference range of those other regions from the color data;
a coloring processing unit 240 configured to receive a coloring instruction, obtain an initial coloring value according to the coloring instruction, and calculate a target coloring value from the initial coloring value and the color difference range;
an image rendering unit 250 configured to render the facial feature region in the real-time image with the target coloring value to obtain a virtual makeup image of the makeup object.
In one embodiment, the face recognition unit 220 acquires the point sets of the facial feature parts in the real-time image, fits the point sets to obtain fitted curves, and determines the facial feature region from the fitted curves.
In one embodiment, the point sets of the facial feature parts include a left eyebrow feature point set, a right eyebrow feature point set, a left eye feature point set, a right eye feature point set, a nose bridge feature point set, a nose feature point set, an upper lip feature point set, a lower lip feature point set, and a facial contour feature point set;
the face recognition unit 220 fits the left eyebrow, right eyebrow, left eye, right eye, nose bridge, nose, upper lip, lower lip, and facial contour feature point sets respectively into corresponding closed curves.
In one embodiment, the face recognition unit 220 sorts the feature points in any one feature point set, selects any feature point among the sorted feature points as the target feature point, determines the first midpoint of the line connecting the target feature point and the previous feature point and the second midpoint of the line connecting the target feature point and the next feature point, and translates the line connecting the first midpoint and the second midpoint onto the target feature point, so that the midpoint of the translated line lies at the position of the target feature point;
the translated first midpoint is used as the control point between the target feature point and the previous feature point, and a quadratic Bezier curve is drawn from the target feature point, the previous feature point, and the control point; here, the previous feature point of the first-sorted feature point is the last-sorted feature point;
the closed curve corresponding to the current feature point set consists of the quadratic Bezier curves between all sorted-adjacent feature points.
In one embodiment, the face recognition unit 220 sorts the feature points in any one feature point set, selects any feature point among the sorted feature points as the target feature point, determines the first midpoint of the line connecting the target feature point and the previous feature point and the second midpoint of the line connecting the target feature point and the next feature point, and translates the line connecting the two midpoints onto the target feature point, so that the midpoint of the translated line lies at the position of the target feature point;
the translated second midpoint is used as the control point between the target feature point and the next feature point, and a quadratic Bezier curve is drawn from the target feature point, the next feature point, and the control point; here, the next feature point of the last-sorted feature point is the first-sorted feature point;
the closed curve corresponding to the current feature point set consists of the quadratic Bezier curves between all sorted-adjacent feature points.
In one embodiment, the face recognition unit 220 scans the area enclosed by the closed curve of the current feature point set to obtain scan lines; obtains the connecting lines between the sorted-adjacent feature points in the current feature point set and generates an active edge table from the intersection state of the scan lines and the connecting lines, where the active edge table is the set of connecting lines intersecting the current scan line; determines, from the connecting lines in the active edge table, the quadratic Bezier curve intersecting the current scan line; and selects a scan segment on the current scan line according to the intersections of the scan line with that curve. The facial feature region determined by the current feature point set consists of all the scan segments.
In one embodiment, the color difference acquisition unit 230 performs a Fourier transform on the color data and then filters the transform result to obtain the color difference range of the other regions.
In one embodiment, as shown in FIG. 3, the system of virtual makeup further includes a color equalization unit 260;
the color difference acquisition unit 230 performs color statistics on the facial feature regions corresponding to the feature point sets and determines, from the statistics, the target adjustment regions whose brightness is higher than a first preset value and whose contrast is lower than a second preset value;
the color equalization unit 260 performs a color balancing operation on the target adjustment regions after the image rendering unit 250 performs the rendering operation.
In one embodiment, as shown in FIG. 4, the system of virtual makeup further includes a first modification unit 270 configured to receive a first modification instruction, select a target feature point set according to the first modification instruction, extract correction parameters from the first modification instruction, and correct the feature points in the target feature point set according to the correction parameters; the face recognition unit 220 then refits the target feature point set into a closed curve.
In one embodiment, as shown in FIG. 5, the system of virtual makeup further includes a second modification unit 280 configured to receive a second modification instruction, select a target facial feature region according to the second modification instruction, extract adjustment parameters from the second modification instruction, and adjust the target coloring value according to the adjustment parameters;
the image rendering unit 250 renders the target facial feature region in the real-time image with the adjusted target coloring value.
In one embodiment, the image acquisition unit 210 photographs the makeup object, acquires the captured preview image, and performs denoising preprocessing on the preview image to obtain the real-time image.
The system of virtual makeup of the present invention corresponds one-to-one with the method of virtual makeup of the present invention, and the technical features and beneficial effects described in the embodiments of the method all apply to the embodiments of the system.
In a specific embodiment, the method of virtual makeup of the present invention can be applied in virtual makeup software. For example, an individual shopping online at home who wants to preview a makeup look may first install software applying the virtual makeup method of the present invention, try the makeup virtually, and then decide whether a purchase is necessary; alternatively, a cosmetics manufacturer may install a smart tablet device at a store with the software pre-installed and preset makeup parameters (coloring, material, and region templates for the user to select) entered into the software system, so that customers can try makeup virtually through the software without applying real makeup.
In practical use, the makeup object can be photographed with the camera of a smart terminal device to obtain a preview image. The preview format of current mainstream cameras is YUV420, at varying resolutions, and the preview image can be preprocessed after it is obtained. The preview image is mainly used for face recognition and does not need a very high resolution; setting it to 640*480 is sufficient, so during preprocessing the preview image can be scaled to the 640*480 level, converted to grayscale, and then denoised to complete the preprocessing. Before preprocessing, a backup of the original preview image can also be kept for the subsequent rendering operation, or the rendering can be performed on the original preview image.
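A hedged OpenCV sketch of this preprocessing follows; it assumes the YUV420 preview has already been converted to a BGR frame upstream, and the Gaussian blur is an assumed choice of denoising filter.

```python
# Minimal sketch: scale the preview to 640*480, grayscale it, denoise it,
# and keep a copy of the original frame for the later rendering step.
import cv2

def preprocess(preview_bgr):
    original = preview_bgr.copy()
    small = cv2.resize(preview_bgr, (640, 480))
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)   # assumed denoising filter
    return denoised, original
```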
The facial features in the preprocessed image are then recognized; the dlib face detection library can be used, which detects efficiently and can obtain facial feature points effectively.
As shown in FIG. 6, after the point sets of the facial feature parts are acquired, they can be divided into the left eyebrow feature point set LEB (18-22), the right eyebrow feature point set REB (23-27), the left eye feature point set LE (37-42), the right eye feature point set RE (43-48), the nose bridge feature point set NB (28-31), the nose feature point set N (32-36), the upper lip feature point set UM (49-55, 61-65), the lower lip feature point set DM (49, 55-61, 65-68), and the facial contour feature point set F (1-17), nine sets in total.
Take the rendering of the upper lip feature point set UM as an example. The conventional method simply connects the feature points, but the lip shape rendered this way is not naturally smooth, so the present invention expands the point set UM with an approximating curve into a closed set denoted UM-EXT; a quadratic Bezier curve is used here, with the formula:
B(t) = (1 - t)² · P0 + 2t(1 - t) · P1 + t² · P2, t ∈ [0, 1]
After the set UM-C of quadratic Bezier control points is generated from the discrete point set UM, integration over t is performed, followed by deduplication and completion, to obtain the set UM-EXT. The specific procedure is to sort the ordered set UM and traverse it, generating a point set UM′ with the same number of elements, in which each point is the midpoint of the line connecting two adjacent points in UM. For a point P0 in UM, the midpoint of its line to the previous point P1 is P1′, and the midpoint of its line to the next point P2 is P2′; P1′ and P2′ are connected, and the segment P1′P2′ is translated to the point P0 so that its midpoint lies at P0. The endpoints of the translated segment P1′P2′ are C1 and C2, which can serve as the control points between P0 and P1 and between P0 and P2. After all feature points in the discrete point set UM are processed this way, there are two control points between each pair of adjacent points, and either one can be chosen; these are the two cases in the above embodiments, where the translated first midpoint serves as the control point between the target feature point and the previous feature point, or the translated second midpoint serves as the control point between the target feature point and the next feature point. A quadratic Bezier curve is generated from the two feature points and the control point by the quadratic Bezier formula, and the quadratic Bezier curves corresponding to the feature points in the discrete point set UM can enclose a closed concave polygon. After the deduplication and completion operations, the condition given by the following formula must be satisfied:
Figure PCTCN2017103586-appb-000001
where UM is recorded as the index set of UM-EXT.
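A minimal sketch of this control-point construction for one target point P0 with neighbours P1 and P2 follows; it returns the translated endpoints C1 and C2 described above.

```python
# Minimal sketch: midpoints to the neighbours, then translate segment P1'P2'
# so its midpoint lands on P0; the endpoints C1, C2 become control points.
def control_points(p1, p0, p2):
    m1 = ((p0[0] + p1[0]) / 2.0, (p0[1] + p1[1]) / 2.0)   # P1'
    m2 = ((p0[0] + p2[0]) / 2.0, (p0[1] + p2[1]) / 2.0)   # P2'
    mid = ((m1[0] + m2[0]) / 2.0, (m1[1] + m2[1]) / 2.0)  # midpoint of P1'P2'
    dx, dy = p0[0] - mid[0], p0[1] - mid[1]               # translation onto P0
    c1 = (m1[0] + dx, m1[1] + dy)                         # C1, toward P1
    c2 = (m2[0] + dx, m2[1] + dy)                         # C2, toward P2
    return c1, c2
```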
Morphological processing is performed on the closed concave polygon, with the following steps:
scan lines are established over the area enclosed by the closed concave polygon in order along the y axis, and an active edge table AET is generated from the intersection state of the scan lines with the segments formed by the ordered points in the set UM; each active edge in the AET is a segment formed by ordered points in UM;
the quadratic Bezier curve intersecting the current scan line is determined from the connecting lines in the active edge table, and a scan segment is selected on the current scan line according to the intersections of the current scan line with that curve;
the scan lines have a stepping relationship: if scan line scan-1 intersects segments 1, 2, and 3, and scan-2 intersects segments 1 and 3, then no scan line after scan-2 intersects segment 2, so segment 2 is removed from the active edge table AET;
as shown in FIG. 7, several active edges may intersect any one scan line. Generally, when there are two active edges, the segment between the two intersections of the scan line with the quadratic Bezier curves corresponding to the two active edges is selected. When the original active edges in the table have not been removed and two new active edges sharing a common endpoint are added, the segment between the intersections of the current scan line and the quadratic Bezier curves corresponding to the two new active edges is treated as an excess segment, and is removed from the scan segment between the intersections of the current scan line and the curve corresponding to the original active edges; as in the notch portion in FIG. 7, the scan segments formed in the notch do not belong to the facial feature region, so they can be removed.
A scan segment is composed of multiple scan pixels; in this way, every pixel in the facial feature region can be obtained accurately, and the facial feature region can thus be determined.
A Fourier transform is applied to the image domain outside the facial feature region and a filtering operation is performed, which moves the color data of the image domain from the spatial domain to the frequency domain to obtain the current color difference range; color histogram statistics are computed for each segmented region to find the regions with high brightness and low contrast.
For the user-given input coloring value C0, combined with the color difference range interval B and the preset adjustable interval A, the target coloring value C satisfies the formula: C = (1 - t) * C0 + t * k, t ∈ A, k ∈ B. The facial feature region is rendered with the target coloring value.
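A minimal sketch of this rendering step over the scan segments follows; the fixed blending weight alpha is an assumption, since the text does not specify how the target color is composited onto the pixels.

```python
# Minimal sketch: blend the target coloring value into every scan pixel.
def render_region(image, scan_segments, target_color, alpha=0.5):
    """image: HxWx3 array; scan_segments: iterable of (y, x_left, x_right)."""
    for y, x0, x1 in scan_segments:
        for x in range(int(x0), int(x1) + 1):
            for c in range(3):
                image[y, x, c] = int((1 - alpha) * image[y, x, c] + alpha * target_color[c])
    return image
```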
A color balancing operation is performed on the regions of high brightness and low contrast obtained above; it is mainly used to improve the facial color and make it fairer. The facial skin is a region of high brightness and low contrast, whereas the eyes, eyebrows, and lips are regions of low brightness and high contrast; no color balancing is applied to the latter, preserving the rendering result of the previous step.
The user can also manually select places where the rendering after imaging is unsatisfactory; the device collects the correction parameters, stores the corresponding region, and refits that region from its point set to correct the facial feature region; the rendered pixel values can also be adjusted during subsequent rendering. Taking the upper lip again as an example: for the feature point set UM, the user can correct the feature points in UM and recompute the rendering area; for areas where the rendering result is unsatisfactory, the target coloring value is adjusted, and the color and gradient values of the 8×8 neighborhood are recorded for subsequent correction.
Through the above steps, the whole process of real-time virtual makeup rendering is realized. The process can preview the effect of the virtual makeup in real time and render the image while adapting to elements such as ambient lighting, so that the generated makeup image is more realistic and natural; the user can also manually correct inaccurately colored parts of the rendered image to achieve the desired result.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
Those of ordinary skill in the art will understand that all or part of the steps in the above method embodiments can be completed by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, includes the steps described in the above methods. The storage medium includes ROM/RAM, a magnetic disk, an optical disk, and the like.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not be understood as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present invention. Therefore, the protection scope of the present invention patent shall be subject to the appended claims.

Claims (12)

  1. A method of virtual makeup, characterized by comprising the following steps:
    acquiring a real-time image of the makeup object;
    recognizing the facial features in the real-time image, and obtaining the facial feature region in the real-time image;
    acquiring color data of the regions of the real-time image other than the facial feature region, and obtaining the color difference range of the other regions from the color data;
    receiving a coloring instruction, obtaining an initial coloring value according to the coloring instruction, and calculating a target coloring value from the initial coloring value and the color difference range;
    rendering the facial feature region in the real-time image with the target coloring value to obtain a virtual makeup image of the makeup object.
  2. The method of virtual makeup according to claim 1, characterized in that the step of obtaining the facial feature region in the real-time image comprises the following step:
    acquiring the point sets of the facial feature parts in the real-time image, fitting the point sets of the facial feature parts to obtain fitted curves, and determining the facial feature region from the fitted curves.
  3. The method of virtual makeup according to claim 2, characterized in that the point sets of the facial feature parts comprise a left eyebrow feature point set, a right eyebrow feature point set, a left eye feature point set, a right eye feature point set, a nose bridge feature point set, a nose feature point set, an upper lip feature point set, a lower lip feature point set, and a facial contour feature point set;
    the step of fitting the point sets of the facial feature parts to obtain the fitted curves comprises the following step:
    fitting the left eyebrow feature point set, the right eyebrow feature point set, the left eye feature point set, the right eye feature point set, the nose bridge feature point set, the nose feature point set, the upper lip feature point set, the lower lip feature point set, and the facial contour feature point set respectively into corresponding closed curves.
  4. The method of virtual makeup according to claim 3, characterized in that the step of fitting the left eyebrow feature point set, the right eyebrow feature point set, the left eye feature point set, the right eye feature point set, the nose bridge feature point set, the nose feature point set, the upper lip feature point set, the lower lip feature point set, and the facial contour feature point set respectively into corresponding closed curves comprises the following steps:
    sorting the feature points in any one feature point set, selecting any feature point among the sorted feature points as the target feature point, determining the first midpoint of the line connecting the target feature point and the previous feature point and the second midpoint of the line connecting the target feature point and the next feature point, and translating the line connecting the first midpoint and the second midpoint onto the target feature point, wherein the midpoint of the translated line lies at the position of the target feature point;
    using the translated first midpoint as the control point between the target feature point and the previous feature point, and drawing a quadratic Bezier curve from the target feature point, the previous feature point, and the control point; wherein the previous feature point of the first-sorted feature point is the last-sorted feature point;
    the closed curve corresponding to the current feature point set comprises the quadratic Bezier curves between all sorted-adjacent feature points.
  5. The method of virtual makeup according to claim 3, characterized in that the step of fitting the left eyebrow feature point set, the right eyebrow feature point set, the left eye feature point set, the right eye feature point set, the nose bridge feature point set, the nose feature point set, the upper lip feature point set, the lower lip feature point set, and the facial contour feature point set respectively into corresponding closed curves comprises the following steps:
    sorting the feature points in any one feature point set, selecting any feature point among the sorted feature points as the target feature point, determining the first midpoint of the line connecting the target feature point and the previous feature point and the second midpoint of the line connecting the target feature point and the next feature point, and translating the line connecting the first midpoint and the second midpoint onto the target feature point, wherein the midpoint of the translated line lies at the position of the target feature point;
    using the translated second midpoint as the control point between the target feature point and the next feature point, and drawing a quadratic Bezier curve from the target feature point, the next feature point, and the control point; wherein the next feature point of the last-sorted feature point is the first-sorted feature point;
    the closed curve corresponding to the current feature point set comprises the quadratic Bezier curves between all sorted-adjacent feature points.
  6. The method of virtual makeup according to claim 4 or 5, characterized in that the step of determining the facial feature region from the fitted curve comprises the following steps:
    scanning the area enclosed by the closed curve fitted from the current feature point set to obtain scan lines;
    obtaining the connecting lines between the sorted-adjacent feature points in the current feature point set, and generating an active edge table according to the intersection state of the scan lines and the connecting lines; wherein the active edge table is the set of connecting lines intersecting the current scan line;
    determining, from the connecting lines in the active edge table, the quadratic Bezier curve intersecting the current scan line, and selecting a scan segment on the current scan line according to the intersections of the current scan line with that quadratic Bezier curve;
    the facial feature region determined by the current feature point set comprises all scan segments.
  7. The method of virtual makeup according to claim 6, characterized in that the step of obtaining the color difference range of the other regions from the color data comprises the following step:
    performing a Fourier transform on the color data, and then filtering the Fourier transform result to obtain the color difference range of the other regions.
  8. The method of virtual makeup according to claim 6, characterized in that, after the step of obtaining the color difference range of the other regions from the color data, the method further comprises the following steps:
    performing color statistics on the facial feature regions corresponding to the feature point sets, and determining, from the statistical results, the target adjustment regions whose brightness is higher than a first preset value and whose contrast is lower than a second preset value;
    after the step of rendering the facial feature region in the real-time image with the target coloring value, the method further comprises the following step: performing a color balancing operation on the target adjustment regions.
  9. The method of virtual makeup according to claim 6, characterized by further comprising the following steps:
    receiving a first modification instruction, selecting a target feature point set according to the first modification instruction, extracting correction parameters from the first modification instruction, correcting the feature points in the target feature point set according to the correction parameters, and executing the step of fitting the target feature point set into a closed curve.
  10. The method of virtual makeup according to claim 6, characterized by further comprising the following steps:
    receiving a second modification instruction, selecting a target facial feature region according to the second modification instruction, extracting adjustment parameters from the second modification instruction, adjusting the target coloring value according to the adjustment parameters, and rendering the target facial feature region in the real-time image with the adjusted target coloring value.
  11. The method of virtual makeup according to claim 6, characterized in that the step of acquiring a real-time image of the makeup object comprises the following step:
    photographing the makeup object, acquiring the captured preview image, and performing denoising preprocessing on the captured preview image to obtain the real-time image.
  12. A system of virtual makeup, characterized by comprising:
    an image acquisition unit configured to acquire a real-time image of the makeup object;
    a face recognition unit configured to recognize the facial features in the real-time image and obtain the facial feature region;
    a color difference acquisition unit configured to acquire color data of the regions of the real-time image other than the facial feature region, and obtain the color difference range of the other regions from the color data;
    a coloring processing unit configured to receive a coloring instruction, obtain an initial coloring value according to the coloring instruction, and calculate a target coloring value from the initial coloring value and the color difference range;
    an image rendering unit configured to render the facial feature region in the real-time image with the target coloring value to obtain a virtual makeup image of the makeup object.
PCT/CN2017/103586 2017-06-07 2017-09-27 Method and system for virtual makeup WO2018223561A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710423890.0 2017-06-07
CN201710423890.0A CN107273837B (zh) 2017-06-07 2017-06-07 Method and system for virtual makeup

Publications (1)

Publication Number Publication Date
WO2018223561A1 true WO2018223561A1 (zh) 2018-12-13

Family

ID=60067504

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/103586 WO2018223561A1 (zh) 2017-06-07 2017-09-27 Method and system for virtual makeup

Country Status (2)

Country Link
CN (1) CN107273837B (zh)
WO (1) WO2018223561A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705346A (zh) * 2019-08-22 2020-01-17 杭州趣维科技有限公司 Large-scale face deformation method
CN112767285A (zh) * 2021-02-23 2021-05-07 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN116452413A (zh) * 2023-04-24 2023-07-18 广州番禺职业技术学院 System and method for automatically matching Cantonese opera makeup based on video faces

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102081947B1 * 2018-04-24 2020-02-26 주식회사 엘지생활건강 Mobile terminal and automatic cosmetics recognition system
CN110728618B (zh) * 2018-07-17 2023-06-27 淘宝(中国)软件有限公司 Virtual makeup try-on method, apparatus, and device, and image processing method
CN109409262A (zh) * 2018-10-11 2019-03-01 北京迈格威科技有限公司 Image processing method, image processing apparatus, and computer-readable storage medium
CN111507907B (zh) * 2019-01-30 2023-05-30 玩美移动股份有限公司 System, method, and storage medium executed on a computing device
CN110084154B (zh) * 2019-04-12 2021-09-17 北京字节跳动网络技术有限公司 Method and apparatus for rendering an image, electronic device, and computer-readable storage medium
CN110221822A (zh) * 2019-05-29 2019-09-10 北京字节跳动网络技术有限公司 Method and apparatus for merging special effects, electronic device, and computer-readable storage medium
CN110460773B (zh) * 2019-08-16 2021-05-11 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN113453027B (zh) * 2020-03-27 2023-06-27 阿里巴巴集团控股有限公司 Image processing method, apparatus, and electronic device for live video and virtual makeup
CN111583163B (zh) * 2020-05-07 2023-06-13 厦门美图之家科技有限公司 AR-based face image processing method, apparatus, device, and storage medium
CN112419444B (zh) * 2020-12-09 2024-03-29 北京维盛视通科技有限公司 Garment panel drawing method, apparatus, electronic device, and storage medium
CN113870400A (zh) * 2021-09-27 2021-12-31 北京市商汤科技开发有限公司 Virtual object generation method and apparatus, electronic device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103870821A (zh) * 2014-04-10 2014-06-18 上海影火智能科技有限公司 Virtual makeup try-on method and system
CN104952036A (zh) * 2015-06-18 2015-09-30 福州瑞芯微电子有限公司 Face beautification method in live video and electronic device
CN105976309A (zh) * 2016-05-03 2016-09-28 成都索贝数码科技股份有限公司 Beautification mobile terminal that is efficient and easy to parallelize
CN106097261A (zh) * 2016-06-01 2016-11-09 广东欧珀移动通信有限公司 Image processing method and apparatus


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705346A (zh) * 2019-08-22 2020-01-17 杭州趣维科技有限公司 Large-scale face deformation method
CN112767285A (zh) * 2021-02-23 2021-05-07 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112767285B (zh) * 2021-02-23 2023-03-10 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN116452413A (zh) * 2023-04-24 2023-07-18 广州番禺职业技术学院 System and method for automatically matching Cantonese opera makeup based on video faces
CN116452413B (zh) * 2023-04-24 2024-03-29 广州番禺职业技术学院 System and method for automatically matching Cantonese opera makeup based on video faces

Also Published As

Publication number Publication date
CN107273837B (zh) 2019-05-07
CN107273837A (zh) 2017-10-20

Similar Documents

Publication Publication Date Title
WO2018223561A1 Method and system for virtual makeup
CN106909875B Face shape classification method and system
RU2680765C1 Automated determination and cropping of an ambiguous document contour in an image
US11989859B2 Image generation device, image generation method, and storage medium storing program
CN1475969B Method and system for enhancing portrait images
Gerstner et al. Pixelated image abstraction
CN105979122B Image processing apparatus and image processing method
CN108810406B Portrait lighting effect processing method, apparatus, terminal, and computer-readable storage medium
JP2004265406A Method and system for enhancing portrait images processed in batch mode
US10169891B2 Producing three-dimensional representation based on images of a person
CN116583878A Method and system for personalized 3D head model deformation
KR101853269B1 Depth map stitching apparatus for stereo images
US11321960B2 Deep learning-based three-dimensional facial reconstruction system
JP7462120B2 Method, system, and computer program for extracting color from a two-dimensional (2D) face image
CN116997933A Method and system for constructing a facial position map
US20240029345A1 Methods and system for generating 3D virtual objects
KR20230110787A Methods and systems for forming personalized 3D head and face models
CN109064431A Image brightness adjustment method, device, and storage medium
CN113344837A Face image processing method and apparatus, computer-readable storage medium, and terminal
KR100602739B1 Semi-automatic field-based image morphing method using recursive control-line matching
CN114155569B Makeup progress detection method, apparatus, device, and storage medium
CN114596213A Image processing method and apparatus
JP7406348B2 Image processing apparatus, image processing method, and program
EP4306077A1 A method and a system of determining shape and appearance information of an ocular prosthesis for a patient, a computer program product and a conformer
CN110751078B Method and device for determining non-skin-color regions of a three-dimensional face

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17913067

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17/06/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17913067

Country of ref document: EP

Kind code of ref document: A1