US9754153B2 - Method and apparatus for facial image processing - Google Patents
Method and apparatus for facial image processing
- Publication number
- US9754153B2 (application US14/086,235)
- Authority
- US
- United States
- Prior art keywords
- face
- pixels
- segmentation region
- contour edge
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/162—Detection; Localisation; Normalisation using pixel segmentation or colour matching
-
- G06K9/00234—
-
- G06K9/4652—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G06K9/00268—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/149—Segmentation; Edge detection involving deformable models, e.g. active contour models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Definitions
- Exemplary and non-limiting embodiments of the present invention generally relate to image processing, and more specifically, to a method and apparatus for facial image processing.
- One challenge that current image processing techniques face is how to automatically and precisely segment a face from an image comprising the face and surrounding areas, as a basis for further facial image processing. Segmenting a fine face region benefits various subsequent image processing tasks, such as facial image editing, applying effects and the like. If some background areas are introduced or some face regions are missed during face segmentation, only a coarse face segmentation region, as shown in FIG. 1, will be obtained.
- In FIG. 1, the detected face segmentation region is surrounded by a dotted line. It can be seen from this region that, due to background illumination or the proximity of surrounding colors when the photo was taken, the left side of the segmentation region comprises a small non-face region, while the right side excludes a partial face region near the left ear. Obviously, such a face segmentation result is coarse, and subsequent processing based on such a coarse face segmentation region usually leads to severe distortion in the resulting face image, or an otherwise unacceptable effect.
- The difficulty of fine face region segmentation lies in the variety of objects in a picture, the variety of photo-taking devices, and the variety of environmental illumination conditions when a photo is taken.
- Most current solutions are insufficient to process pictures with varied facial characteristics, such as pictures of white or black persons, front or side views, indoor or outdoor scenes, young or old persons, and pictures with different degrees of sharpness or blur.
- Complicated and varying shooting conditions may produce an unbalanced color distribution on a face, which may blur the image. This is why face segmentation based only on a luminance cue and a skin color cue does not generate a satisfactory result.
- Color similarity between a face region and background objects also makes it difficult to use color information to differentiate the whole face region during segmentation.
- Thus, how to obtain a reliable and fine face segmentation region becomes a primary issue in facial image processing.
- embodiments of the present invention provide an efficient solution for facial image segmentation that enables further refinement of a coarse face segmentation region so as to obtain a fine face segmentation region with high quality and precision. Based on the fine face segmentation region, embodiments of the present invention also propose to perform further image processing on the fine face segmentation region so as to meet requirements of different users on image effects.
- An embodiment of the present invention provides a method.
- The method comprises performing face detection of an image.
- The method further comprises obtaining a coarse face segmentation region of at least one face and a contour edge of the at least one face based on the face detection.
- The method also comprises adjusting the coarse face segmentation region based on the contour edge to obtain a fine face segmentation region.
- Another embodiment of the present invention provides an apparatus. The apparatus comprises at least one processor and at least one memory containing computer program code.
- The memory and the computer program code are configured to, with the processor, cause the apparatus to at least perform face detection of an image.
- The memory and the computer program code are configured to, with the processor, cause the apparatus to at least obtain a coarse face segmentation region of at least one face and a contour edge of the at least one face based on the face detection.
- The memory and the computer program code are configured to, with the processor, cause the apparatus to at least adjust the coarse face segmentation region based on the contour edge to obtain a fine face segmentation region.
- A further embodiment of the present invention provides an apparatus comprising a detecting device for performing face detection of an image.
- The apparatus further comprises an obtaining device for obtaining a coarse face segmentation region of at least one face and a contour edge of the at least one face based on the face detection.
- The apparatus also comprises an adjusting device for adjusting the coarse face segmentation region based on the contour edge to obtain a fine face segmentation region.
- a further embodiment of the present invention provides a computer program product.
- the computer program product comprises at least one computer readable storage medium having a computer readable program code portion stored therein, wherein the computer readable program code portion is used for implementing the method for image processing according to embodiments of the present invention.
- Embodiments of the present invention can significantly improve the precision of face region segmentation and thereby provide a good basis for subsequent facial image processing.
- FIG. 1 exemplarily illustrates a picture subject to coarse face region segmentation obtained by using the prior art
- FIG. 2 is a simplified flow chart exemplarily illustrating a method for face image processing according to an embodiment of the present invention
- FIG. 3 is a detailed flow chart exemplarily illustrating a method for face image processing according to an embodiment of the present invention
- FIG. 4 is a schematic diagram exemplarily illustrating operations for obtaining a coarse face segmentation region according to an embodiment of the present invention
- FIGS. 5 a -5 c are schematic diagrams respectively illustrating refinement processing performed on a picture so as to obtain a fine face segmentation region according to different embodiments of the present invention
- FIG. 6 is a schematic diagram exemplarily illustrating performing whitening processing on a face region using the fine face segmentation region obtained according to an embodiment of the present invention
- FIG. 7 is a schematic view exemplarily illustrating performing smoothing processing on a face using the fine face segmentation region obtained according to an embodiment of the present invention
- FIG. 8 is a flow chart exemplarily illustrating a whole process for facial image processing according to an embodiment of the present invention in connection with a specific image processing process;
- FIG. 9 is a block diagram exemplarily illustrating an apparatus capable of implementing embodiments of the present invention.
- FIG. 10 is a block diagram exemplarily illustrating another apparatus for implementing embodiments of the present invention.
- Exemplary embodiments of the present invention provide a method and apparatus for efficient face region segmentation, and a method and apparatus for “beautifying” (including whitening and smoothing) a fine face segmentation region obtained by using the method and apparatus.
- The exemplary embodiments of the present invention propose that coarse segmentation is first performed on a face in an image using face detection, so that a coarse face segmentation region is obtained. Next, the image is processed to obtain the contour edge of the face region. Subsequently, the two results are effectively combined to obtain a fine face segmentation region.
- the present invention proposes to adjust the coarse face segmentation region in a two-dimensional space (along a lateral or longitudinal direction) with the contour edge as a reference, so as to fill the area within the contour edge with the coarse face segmentation region, so that the fine face segmentation region is obtained.
- the present invention also proposes to perform an interpolation operation, when the contour edge is disconnected, between the disconnected areas so that the fine face segmentation region is obtained.
- FIG. 2 is a simplified flow chart exemplarily illustrating a method 200 for facial image processing according to an embodiment of the present invention.
- In step S 202 , method 200 performs face detection of an image.
- In step S 204 , method 200 obtains a coarse face segmentation region of at least one face and a contour edge of the at least one face based on the face detection.
- method 200 builds a skin color model by using a partial region of the at least one face, and subsequently applies the built skin color model (such as the model illustrated by a picture P 406 in FIG. 4 ) to the at least one face, so that the coarse face segmentation region of the at least one face is obtained.
- method 200 determines the face contour edge by using a wavelet transform (such as the Haar wavelet convolution algorithm).
- Upon obtaining the coarse segmentation region and the contour edge of the at least one face, method 200 proceeds to step S 206 , wherein it adjusts the coarse face segmentation region based on the contour edge to obtain a fine face segmentation region. In one embodiment, method 200 adjusts the coarse face segmentation region along at least one of the lateral and longitudinal directions with the contour edge as a reference, so as to fill the whole contour with the coarse face segmentation region, whereby the fine face segmentation region is obtained.
- method 200 detects whether edge pixels of the coarse face segmentation region deviate from the contour edge within a preset position adjusting range (for example, within a certain threshold range), and when detecting that the deviation is within the position adjusting range, the method 200 adjusts the deviated edge pixels to points on the contour edge aligning with these pixels in the lateral or longitudinal direction. When detecting that the deviation exceeds the position adjusting range, method 200 regards the deviation as a protrusion out-stretching from the contour edge, and removes the protrusion so as to obtain the fine face segmentation region.
- method 200 further detects whether the contour edge is disconnected, and if so, performs an interpolation operation between two disconnected end points. For example, a linear or non-linear interpolation operation along the trend of the contour edge may be adopted to fill the disconnected edge, so as to obtain the fine face segmentation region.
- Fine segmentation of the face region in a picture can be achieved by using various implementations of the above method 200 and its variations or extensions. Such fine segmentation not only effectively eliminates interfering information or image noise from the background, but also preserves as many facial details as possible. In addition, such a fine face segmentation region provides a good basis for the subsequent image processing (including whitening and smoothing) of the present invention.
- method 200 further determines grey values of all pixels within the fine face segmentation region, divides all pixels into at least two classes based on the grey values, and adjusts the grey values of each class of pixels according to different levels, to enable the whitening processing on the fine face segmentation region.
- Method 200 will perform the whitening processing to different degrees on pixels with different grey values, so that the initially grey pixels will not become too white while the initially white pixels become whiter, thereby enhancing the contrast of the face in the image.
- In one embodiment, method 200 takes statistics of the grey values of all pixels within the fine face segmentation region, determines at least one threshold value for the grey values based on the statistics, and thereby divides all pixels into the aforesaid at least two classes based on the at least one threshold value.
- method 200 ranks the grey values of all pixels within the fine face segmentation region, selects a predetermined number of pixels in sequence, and averages the grey values of the selected pixels so as to determine the average value as the at least one threshold value.
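The ranked-average threshold selection described above can be sketched as follows; the descending ranking and the 5% fraction are drawn from the later discussion of the "thres" parameter, and the function name is an illustrative assumption:

```python
import numpy as np

def adaptive_threshold(grey_values, fraction=0.05):
    """Rank grey values in descending order and average the darkest
    `fraction` of pixels (an assumed 5%) to obtain a threshold,
    corresponding to the "thres" parameter in the text."""
    ranked = np.sort(np.asarray(grey_values, dtype=np.float64))[::-1]
    n = max(1, int(len(ranked) * fraction))
    return float(ranked[-n:].mean())  # mean of the last (darkest) pixels
```

Such a threshold is learned per image, so darker structures like eyes and eyebrows define the class boundary adaptively rather than via a fixed constant.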
- In one embodiment, method 200 further selects a different whitening level for each class of pixels (for example, pictures P 606 , P 608 and P 610 in FIG. 6 show different levels of whitening effects), and the levels can be adjusted by parameters.
- Such a setting helps provide the user with a selection of different whitening effects; the user may preview the pictures through a preview function and finally select a desired whitening effect.
- In one embodiment, method 200 further determines grey value differences between each pixel within the fine face segmentation region and its respective neighboring pixels in a neighboring region; compares each grey value difference with a predetermined threshold value to determine the smoothing weight of the pixel with regard to each neighboring pixel, wherein the smoothing weight is inversely proportional to the grey value difference; and adjusts the grey value of each pixel based on the grey values of the respective neighboring pixels, the smoothing weights, and the spatial distances between the pixel and the respective neighboring pixels, so as to realize the smoothing processing on the fine face segmentation region.
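The weighted smoothing described above might be sketched as below. Since the exact weight function p(Δ) is only introduced at the end of this text, a Gaussian-shaped weight (inversely related to the grey difference and scaled by the detail-strength parameter "cont") is assumed here; the window size L and the spatial falloff are likewise assumptions:

```python
import numpy as np

def smooth_pixel(img, x, y, L=5, cont=30.0, sigma_s=2.0):
    """Adaptively smooth pixel (x, y) of a grey image using an L*L box.
    The weight p(delta) below is an assumed form (larger grey differences
    yield smaller weights); the patent's exact formula is not reproduced."""
    h = L // 2
    acc, wsum = 0.0, 0.0
    for j in range(max(0, y - h), min(img.shape[0], y + h + 1)):
        for i in range(max(0, x - h), min(img.shape[1], x + h + 1)):
            delta = abs(float(img[j, i]) - float(img[y, x]))
            p = np.exp(-(delta / cont) ** 2)       # grey-difference weight
            d2 = (i - x) ** 2 + (j - y) ** 2
            s = np.exp(-d2 / (2 * sigma_s ** 2))   # spatial-distance weight
            acc += p * s * float(img[j, i])
            wsum += p * s
    return acc / wsum
```

With this design, strong edges (large Δ, e.g. at the mouth or eyes) receive little smoothing, while low-contrast blemishes are averaged away.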
- the smoothing processing of the embodiments of the present invention takes the grey value of each pixel and grey values of pixels in its neighboring region into consideration.
- When the grey value difference between a neighboring pixel and the pixel to be smoothed is large, that neighboring pixel contributes little to the smoothing operation, and a smaller weight is given to it; when the difference is small, a larger weight is given.
- method 200 calculates gradient values of all pixels within the fine face segmentation region, ranks the gradient values of all pixels and selects a predetermined number of pixels in sequence, and averages the gradient values of the selected pixels so as to set the average value as the predetermined threshold value.
- Such a threshold setting is more specific and pertinent to each picture, so that a better smoothing effect can be achieved.
- The smoothing processing on the fine face segmentation region is discussed above. It should be noted that the smoothing processing can be performed immediately after step S 206 , or before or after the whitening processing, depending on user preferences or settings.
- FIG. 3 is a detailed flow chart exemplarily illustrating a method 300 for facial image processing according to an embodiment of the present invention.
- In step S 302 , method 300 performs face detection of an image to be processed.
- An edge box may be detected by using a face detection model, and then extended to obtain an extended box.
- The extended box includes not only the face region but also a neck region connected to the face (as illustrated by picture P 402 in FIG. 4 ; a description will be provided with reference to FIG. 4 later), which may facilitate the subsequent facial image processing.
- In step S 304 , method 300 builds a skin color model for the detected face.
- In step S 306 , method 300 performs coarse face segmentation on the image to obtain a coarse face segmentation region. The processes of skin color modeling and coarse face segmentation will be detailed with reference to FIG. 4 later.
- In step S 308 , method 300 performs the processing of obtaining a contour edge of the face region, in parallel with the coarse face segmentation processing of steps S 304 and S 306 .
- The contour edge may be obtained by using a wavelet transform, such as the Haar wavelet transform.
- the contour edge might be disconnected due to the quality of the image or other potential reasons.
- Although the coarse face segmentation region can be obtained by using the skin color model, the result is not sufficient to assist in facial image processing tasks.
- The coarseness of the face segmentation region may be attributed to the facts that (1) some background areas are mistakenly categorized as skin; and (2) some face regions are missed during the segmentation.
- In addition, complex and changing illumination conditions may lead to an unbalanced color distribution on a human face, which is difficult to overcome by processing based merely on the skin color model.
- embodiments of the present invention propose to refine the obtained coarse face segmentation region so as to obtain a satisfactory fine face segmentation region.
- In step S 310 , method 300 selects at least one of the lateral (or x) and longitudinal (or y) directions in which to perform fine segmentation of the face region (also referred to as coarse face segmentation region refinement).
- the refining process adjusts the coarse face segmentation region obtained from step S 306 based on strong edge responses on the face contour edge obtained from step S 308 .
- In step S 312 , method 300 adjusts the relatively deviated pixels based on the face contour edge.
- The relative deviation is confined within a certain threshold range; pixels deviating beyond the threshold range are handled by the processing of step S 314 .
- Specific operations of step S 312 will be exemplarily described by referring to FIG. 5 a in the following.
- In step S 314 , method 300 detects whether there are pixels obviously deviating from the face contour edge in the picture obtained from step S 312 .
- Here, an obvious deviation means that the deviation has exceeded the previously set searching range, namely, a condition where the adjustment cannot be achieved by using step S 312 .
- For example, an obvious deviation can be a protrusion out-stretching from the face region, caused by mistaking background of a color similar to the skin for the face region.
- In step S 316 , method 300 detects whether the contour edge is disconnected. As those skilled in the art may understand, during the actual calculation of the contour edge, it might happen that the contour edge is broken into several sections. Thus, when such a disconnection is detected, an interpolation operation is performed on the disconnected areas of the contour edge, so as to fill the disconnected parts. Specific operations of step S 316 will be exemplarily described by referring to FIG. 5 c in the following.
- After step S 316 is performed, method 300 obtains a fine face segmentation region in step S 318 , which is ready for use in image processing in the subsequent steps S 320 and S 322 . It should be noted that the above descriptions of obtaining the coarse face segmentation region and calculating the face contour edge are merely exemplary, and those skilled in the art can adopt any other suitable methods for implementation.
- After obtaining the fine face segmentation region, method 300 additionally “beautifies” the picture, including the whitening processing in step S 320 and the smoothing processing in step S 322 . Firstly, in step S 320 , method 300 performs a whitening operation based on the fine face segmentation region. Specific operations of step S 320 will be exemplarily described by referring to FIG. 6 in the following.
- In step S 322 , method 300 further performs the smoothing processing based on the fine face segmentation region.
- To this end, embodiments of the invention propose to adaptively smooth each pixel mainly based on local region information, so as to remove wrinkles or freckles on the forehead, cheeks or neck, without weakening facial details such as the mouth, teeth and eyes.
- Specific operations of step S 322 will be exemplarily described by referring to FIG. 7 in the following.
- Method 300 is described above by referring to each step shown in FIG. 3 .
- the order of steps as shown in the flow chart is merely exemplary, while embodiments of the present invention are not limited thereto.
- relevant steps may be omitted regarding different face images.
- Although steps S 312 , S 314 and S 316 are described in sequence, when the coarse face segmentation region does not contain the conditions that need the processing of steps S 312 , S 314 or S 316 , the corresponding steps may be omitted.
- steps S 320 and S 322 can be swapped.
- the smoothing processing can be performed before the whitening processing. This can be selected and set according to, for example, user's preferences.
- FIG. 4 is a schematic diagram exemplarily illustrating an operation for obtaining coarse face region segmentation according to an embodiment of the present invention, comprising the skin color modeling and the coarse face segmentation.
- an inner box surrounding the main face region of a man in picture P 402 is the edge box (or referred to as a face detection box), and an outer box extended to include the neck of the man is the extended box.
- the extended box not only covers the whole face region, but also covers extra background regions, such as the background behind the man's face and collar in the extended box, and these background regions can be filtered out later by using the skin color model.
- the present invention proposes to build a skin color model specific to each face, so as to obtain a more effective skin color model. As shown in FIG. 4 , the present invention proposes that a fragment of the face region (as indicated in picture P 404 in the upper part of FIG. 4 ) is picked up from the edge box obtained from the face image detection as a learning region for implementing a skin color model learning process in a UV color space of a color difference signal.
- Then, the corresponding skin distribution can be modeled by a 2D Gaussian parametric model, to obtain a 2D Gaussian skin color difference model as shown in picture P 406 in the upper right part of FIG. 4 (i.e., the skin color model of the present invention).
- the skin color model may be subsequently applied to filtering the facial skin region in the extended box in the left picture P 402 , so as to filter out the background image information, so that picture P 408 as shown in the lower part of FIG. 4 is obtained.
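The skin color modeling and filtering steps could be sketched as follows; the Gaussian fit in UV space follows the description above, while the likelihood helper and all parameter names are assumptions for illustration:

```python
import numpy as np

def fit_uv_gaussian(uv_samples):
    """Fit a 2D Gaussian to the U,V chrominance samples of the learning
    region (an (N, 2) array). Returns the mean and inverse covariance."""
    mean = uv_samples.mean(axis=0)
    cov = np.cov(uv_samples, rowvar=False)
    return mean, np.linalg.inv(cov)

def skin_likelihood(uv, mean, inv_cov):
    """Unnormalised Gaussian likelihood in (0, 1]; pixels of the extended
    box whose likelihood exceeds a chosen threshold are kept as skin."""
    d = uv - mean
    return np.exp(-0.5 * d @ inv_cov @ d)
```

Because the model is learned from a patch of the face in the very image being processed, it adapts to each person's skin tone and the scene's illumination.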
- The modeling and filtering processes are exemplarily described as follows: each pixel in the extended box is evaluated under the skin color model, and pixels whose likelihood exceeds a certain specific threshold value (such as a certain decimal between 0 and 1) are retained as skin, while the remaining pixels are filtered out as background.
- FIG. 5 a illustrates a process for refining the coarse face segmentation region of a picture according to an embodiment of the present invention, i.e. the processing in step S 312 of method 300 .
- In FIG. 5 a , region P 502 illustrates the currently obtained coarse face segmentation region, and curve 52 indicates the contour edge of the face. It is easily seen from region P 502 that the upper right area of the coarse face segmentation region out-stretches beyond the contour edge 52 , while the lower right area retracts within the contour edge 52 . Obviously, such face region segmentation is not precise.
- To refine the coarse face segmentation region, a searching range may be set for pixels (as indicated by reference numeral 54 ) on the edge of the coarse face segmentation region. Supposing that pixel 54 has a two-dimensional coordinate (x, y), its lateral searching range may be denoted as (x−δ, x+δ), which is shown as double-headed arrows 56 and 58 in FIG. 5 a . Similarly, its longitudinal searching range may be denoted as (y−δ, y+δ) (not shown).
- The value selected for the parameter δ determines the weight of the edge information during the refining process.
- A suitable value may be selected via extensive learning, so that both unnecessarily expanding the searching range and over-reducing it can be avoided.
- Then, it may be detected, within the searching range, whether there is a point (also referred to as a strong edge response point, obtained in an RGB color space) on the face contour edge 52 aligning with pixel 54 in the lateral or longitudinal direction. If there is such a point, the pixel is adjusted to the position of that point on the face contour edge.
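The lateral searching-range adjustment might be sketched as follows; representing the contour edge as a set of (x, y) points and the function name are assumptions for illustration:

```python
def adjust_edge_pixel(x, y, edge_points, delta):
    """For a coarse-segmentation edge pixel (x, y), search laterally
    within (x - delta, x + delta) for a contour-edge point on the same
    row; if one is found, snap the pixel to the nearest such point,
    otherwise return None so the pixel can be handled as an obvious
    deviation (protrusion removal of step S 314)."""
    best = None
    for dx in range(-delta, delta + 1):
        if (x + dx, y) in edge_points:
            # keep the nearest strong edge response
            if best is None or abs(dx) < abs(best - x):
                best = x + dx
    if best is not None:
        return (best, y)   # adjusted onto the contour edge
    return None            # deviation exceeds the searching range
```

The same routine applies in the longitudinal direction by searching over y instead of x.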
- the refining process is illustrated with a real picture P 506 in the lower part of FIG. 5 a as an example.
- Pictures P 508 and P 510 are obtained respectively. It can be seen from the regions surrounded by circles of different sizes in picture P 508 that a darker region of the face (such as the eyebrows) is considered a non-face region in the coarse face segmentation region.
- a contour edge corresponding to it can be found (shown by a circle).
- Thus, the coarsely segmented picture P 508 may be processed in the above-described adjusting manner. For example, the pixels retracting within the facial contour edge (such as the eyebrows) may be adjusted, within the searching range, to the points on the contour edge aligning with them in the lateral or longitudinal direction, so that the fine face segmentation region shown in picture P 512 is obtained.
- FIG. 5 b illustrates a process for adjusting the coarse face segmentation region of a picture according to an embodiment of the invention, which is the processing in step S 314 of method 300 .
- Region P 514 shows the current coarse face segmentation region, having three points a, b and c on its edge, while there are three other points a, d and e on the face contour edge, wherein point a is a common point shared by the coarse face segmentation region and the face contour edge.
- Ideally, the face segmentation region should extend along the face contour edge points a, d and e, rather than the initial pixels a, b and c, so that the fine face segmentation region, i.e. P 516 , can be obtained.
- the refining process is illustrated by using real picture P 518 in the lower part of FIG. 5 b as an example.
- Pictures P 520 and P 522 are obtained respectively. It can be seen from the regions surrounded by circles of different sizes in picture P 520 that the background region behind the face (such as a seat having a color similar to the facial skin) is regarded as a face region in the coarse segmentation.
- the corresponding face contour edge in picture P 522 shown by a circle
- Thus, the coarsely segmented picture P 520 may be processed in the above-described adjusting manner. For example, the pixels obviously protruding from the face contour edge can be adjusted onto the face contour edge, or cut out, so as to remove the redundant background region and obtain the fine face segmentation region illustrated in picture P 524 .
- FIG. 5 c illustrates a process for refining the coarse face segmentation region of a picture according to an embodiment of the present invention, which is the processing in step S 316 of method 300 .
- contour edge curve 59 is broken into two sections, ab and cd.
- In this case, a linear interpolation technique may be used to calculate the coordinates of a plurality of points between end points b and c, so as to connect points b and c.
- Alternatively, a non-linear interpolation technique, such as a B-spline interpolation algorithm, may also be considered. After such an interpolation operation, a refined fine face segmentation region can be obtained, as illustrated by region P 528 .
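The linear interpolation between disconnected end points b and c can be sketched as a simple bridge; the rounding to the pixel grid and the default point count are illustrative choices:

```python
def bridge_gap(b, c, num_points=None):
    """Linearly interpolate contour points between the two disconnected
    end points b and c (each an (x, y) tuple), filling the broken edge.
    Integer-rounded coordinates keep the result on the pixel grid."""
    bx, by = b
    cx, cy = c
    if num_points is None:
        # roughly one point per pixel of the larger coordinate gap
        num_points = max(abs(cx - bx), abs(cy - by)) - 1
    pts = []
    for k in range(1, num_points + 1):
        t = k / (num_points + 1)
        pts.append((round(bx + t * (cx - bx)), round(by + t * (cy - by))))
    return pts
```

A B-spline variant would replace the straight segment with a curve following the trend of the neighboring contour sections, as the text suggests.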
- the above descriptions of the coarse face segmentation region adjustment with reference to FIGS. 5 a -5 c are mainly made for the lateral direction.
- the above adjusting method may also be applicable to the adjustment in the longitudinal direction.
- Through such a refinement process, performed respectively in the lateral and longitudinal directions on an image, embodiments of the present invention obtain the fine face segmentation region.
- FIG. 6 is a schematic diagram exemplarily illustrating whitening (i.e. the processing in step S 320 of method 300 ) a face by using the fine face segmentation region obtained according to an embodiment of the present invention.
- the whitening processing of the face should have a number of adjustable levels, so that users may obtain a plurality of different whitening effects.
- To this end, embodiments of the present invention propose to take statistics of grey values within the fine face segmentation region, divide the face region into several different levels based on the statistics, and perform different levels of whitening operations on them, so as to obtain different whitening effects.
- an embodiment of the present invention proposes a facial skin whitening function as follows:
- f(x) = (1 − exp(−x / thres)) · thres, for 0 ≤ x < thres;
- f(x) = ((255 − f(thres)) / (level − thres)) · (x − thres) + f(thres), for thres ≤ x ≤ 255.  (1)
- x represents the grey value of a certain pixel
- “thres” represents a parameter for dividing the range of grey values x into multiple sections. For example, as illustrated by the left curve of FIG. 6 , grey levels of an image are divided into section A and section B.
- the selection of “thres” may be adaptively learned from the face image to be processed. For example, grey values of all pixels within the face region are ranked in descending order, the grey values of the last 5% of the pixels are averaged, and the obtained average value is set as the value of “thres”. These 5% of the data generally come from the eyes, eyebrows or beard of the face.
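The adaptive choice of “thres” described above can be sketched as follows; `adaptive_thres` is a hypothetical helper name, not code from the patent.

```python
import numpy as np

def adaptive_thres(face_grey, fraction=0.05):
    """Learn 'thres' from the face region itself: rank grey values in
    descending order and average the last `fraction` of them (the
    darkest pixels, typically eyes, eyebrows or beard)."""
    ranked = np.sort(np.asarray(face_grey, dtype=float).ravel())[::-1]
    n_tail = max(1, int(len(ranked) * fraction))
    return float(ranked[-n_tail:].mean())
```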
- the value of another parameter “level” is adjustable from “thres+1” to 255 to obtain whitening curves of different levels (such as the multiple curves shown in section B of the curve plot in FIG. 6 ) so as to realize different whitening effects. As shown in pictures P 606 , P 608 and P 610 in FIG. 6 , as the level increases, the contrast of the whitened picture gradually increases and the definition is also significantly improved compared with the original picture P 604 .
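A minimal NumPy sketch of the whitening function of equation (1); `whiten` is a hypothetical name, and the final clip to [0, 255] is an added assumption since the patent does not say how values beyond x = level are handled.

```python
import numpy as np

def whiten(grey, thres, level):
    """Apply the piecewise whitening curve of equation (1): a saturating
    exponential below 'thres', then a straight line from
    (thres, f(thres)) that reaches 255 at x = level."""
    grey = np.asarray(grey, dtype=float)
    f_thres = (1.0 - np.exp(-1.0)) * thres              # f(x) at x = thres
    low = (1.0 - np.exp(-grey / thres)) * thres
    high = (255.0 - f_thres) / (level - thres) * (grey - thres) + f_thres
    # Clipping to the valid grey range is an assumption, not in the patent
    return np.clip(np.where(grey <= thres, low, high), 0.0, 255.0)
```

Sweeping `level` between thres+1 and 255 reproduces the family of curves shown in section B of FIG. 6.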
- FIG. 7 is a schematic diagram exemplarily illustrating performing the smoothing processing (i.e. the processing in step S 322 of method 300 ) on a face by using the fine face segmentation region obtained according to an embodiment of the present invention.
- the embodiment of the present invention takes statistics of gradient magnitudes of all pixels within a face region.
- the gradient magnitudes are ranked in a descending order, and the first 10% of data is averaged to obtain a constant parameter “cont”, which may be regarded as a coarse measurement of the strength of useful face details.
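The estimate of “cont” can be sketched as below; `estimate_cont` is a hypothetical helper, and `np.gradient` (central differences) is just one possible gradient operator, since the patent does not name one.

```python
import numpy as np

def estimate_cont(face_grey, fraction=0.10):
    """Average the largest `fraction` of gradient magnitudes within the
    face region to obtain 'cont', a coarse measure of the strength of
    useful face details."""
    gy, gx = np.gradient(np.asarray(face_grey, dtype=float))
    mag = np.sort(np.hypot(gx, gy).ravel())[::-1]   # descending order
    n_head = max(1, int(len(mag) * fraction))
    return float(mag[:n_head].mean())
```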
- a region box with a size of L*L and centered at the pixel is selected, where L may be a suitable integer.
- the grey difference ⁇ is mainly used for determining weight p( ⁇ ) for smoothing the pixel (x, y), and the weight p( ⁇ ) can be calculated by, for example, the following equation:
- parameters a, b and c in equation (2) determine the steepness of the curve. It can be seen from the curve relationship as shown in the middle of FIG. 7 that when the grey difference ⁇ is greater than cont, the curve will converge to the cont value, which means that an edge might exist between the pixel (x, y) and the pixel (p, q). The pixel (p, q) will not contribute any weight to the smoothing of the pixel (x, y). Based on this rule, the grey value of the pixel (x, y) may be updated by using the following equation and based on a local weighting mechanism:
- g_new(x, y) = [ Σ_(p,q) f(Δ) · p(Δ) · g(p, q) ] / [ Σ_(p,q) f(Δ) · p(Δ) ] (3)
- g(p, q) represents the current grey value of pixel (p, q)
- g_new(x, y) represents the updated grey value of pixel (x, y) (i.e., the value after the smoothing)
- f( ⁇ ) is a Gaussian function to give a larger weight to a closer pixel, and its function form is
- f(Δ) = exp(−Δ^2 / σ^2), wherein the value of σ is proportional to L.
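Equation (3) can be sketched for a single pixel as below. Since equation (2) for the weight p(Δ) is not reproduced here, a generic sigmoid cut-off near “cont” stands in for it (its parameters a, b, c are unknown), and reading f as a Gaussian over spatial distance within the L*L box is an interpretation of the text; `smooth_pixel` is a hypothetical name and no image-boundary handling is included.

```python
import numpy as np

def smooth_pixel(grey, x, y, L=5, cont=20.0):
    """Update g(x, y) with the locally weighted average of equation (3)
    over an L*L box centred at (x, y)."""
    half = L // 2
    sigma = float(L)                                  # sigma proportional to L
    patch = np.asarray(grey, dtype=float)[y - half:y + half + 1,
                                          x - half:x + half + 1]
    qq, pp = np.mgrid[-half:half + 1, -half:half + 1]
    f = np.exp(-(pp ** 2 + qq ** 2) / sigma ** 2)     # spatial Gaussian weight
    diff = np.abs(patch - patch[half, half])          # grey difference
    # Assumed p(diff): ~1 for small differences, ~0 beyond 'cont', so a
    # pixel across a likely edge contributes almost no weight
    p = 1.0 / (1.0 + np.exp(np.clip(diff - cont, -60.0, 60.0)))
    return float((f * p * patch).sum() / (f * p).sum())
```

On a flat region every neighbour gets nearly full weight, so the pixel value is preserved, while pixels separated by an edge stronger than “cont” are effectively excluded.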
- FIG. 8 is a flow chart exemplarily illustrating a whole process 800 for face image processing according to an embodiment of the present invention in connection to a specific image processing process.
- step S 802 pictures or photos to be processed are obtained, where the pictures or photos may be captured by a handheld device with a shooting function.
- step S 804 a face in the picture is detected or identified.
- step S 806 a face skin is modeled, for example, by performing the operation illustrated in step S 304 of FIG. 3 .
- step S 808 coarse face region segmentation is performed on a face region, for example, by performing the operation illustrated in step S 306 of FIG. 3 .
- step S 810 fine face segmentation is performed, for example, by performing operations illustrated in steps S 308 , S 310 , S 312 , S 314 and S 316 of FIG. 3 .
- flow 800 proceeds to the face processing phase, which includes whitening and smoothing the fine face segmentation region in step S 812 , for example, by performing operations in steps S 320 and S 322 of FIG. 3 .
- a feathering operation (S 814 ) can be selectively performed on the processed picture, so as to give the edge of the picture a soft, hazy effect.
- the result may be output in step S 816 at the end.
- the final processed pictures or photos may be displayed to a user on a handheld device, and thereby the user may share them with friends or upload and release them via a network.
- FIG. 9 is a block diagram exemplarily illustrating an apparatus 900 capable of implementing the embodiments of the present invention.
- the apparatus 900 comprises a detecting device 901 , an obtaining device 902 , and an adjusting device 903 .
- the detecting device 901 is used for performing face detection of an image
- the obtaining device 902 is used for obtaining a coarse face segmentation region of at least one face and a contour edge of the at least one face based on the face detection
- the adjusting device 903 is used for adjusting the coarse face segmentation region based on the contour edge to obtain a fine face segmentation region.
- the apparatus 900 can perform corresponding steps in methods 200 and 300 , so as to obtain the fine face segmentation region.
- the apparatus 900 can additionally comprise a device for whitening and smoothing a face image based on the fine face segmentation region, such as performing the operations previously described by referring to FIGS. 6 and 7 .
- FIG. 10 is a block diagram exemplarily illustrating another apparatus 1000 for implementing the embodiments of the present invention.
- the apparatus 1000 may comprise at least one processor 1001 and at least one memory 1002 containing computer program code 1003 .
- the memory 1002 and the computer program code 1003 are configured to, with the processor 1001 , cause the apparatus 1000 to perform corresponding steps in methods 200 and 300 , so as to obtain the fine face segmentation region.
- the memory 1002 and the computer program code 1003 are further configured to, with the processor 1001 , cause the apparatus 1000 to perform the operations previously described with reference to FIGS. 6 and 7 , so as to whiten and smooth the fine face segmentation region.
- FIGS. 9 and 10 exemplarily illustrate in a form of block diagram the apparatuses according to the embodiments of the present invention
- the apparatuses of the present invention can be implemented as or integrated into any electronic device with an image capturing function, including but not limited to various kinds of smart phones with a camera function, desktop computers, laptop and tablet computers with a camera, and electronic devices for collecting and processing face images.
- the apparatuses of the present invention can also be implemented as an electronic device for merely performing face processing on images captured by any other arbitrary electronic devices and providing the processed face image to the same.
- Exemplary embodiments of the present invention are described by referring to the flow charts and block diagrams as shown in the above Figures. It should be explained that the methods as disclosed in the embodiments of the present invention can be implemented in software, hardware or the combination of software and hardware.
- the hardware part may be implemented using specific logic; while the software part may be stored in memory and executed by a suitable instruction executing system, such as a microprocessor or personal computer (PC).
- in some embodiments, the present invention is implemented as software, including but not limited to firmware, resident software, microcode and the like.
- the embodiments of the present invention can also take the form of a computer program product accessible from a computer usable or readable medium, where the medium provides program code for use by a computer or any instruction executing system, or in combination with the computer or any instruction executing system.
- the computer usable or readable medium can be any tangible device that may comprise, store, communicate, broadcast or transmit programs so as to be used by an instruction executing system, apparatus or device, or in combination with the same.
- the medium may be an electric, magnetic, optical, electromagnetic, infrared, or semi-conductive system (or apparatus or device), or transmission medium.
- Examples of the computer readable medium include semi-conductor or solid memory, cassette, removable disk, random access memory (RAM), read-only memory (ROM), hard disk and optical disk.
- Examples of optical disk comprise compact disk-read-only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
Claims (27)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210496169 | 2012-11-23 | ||
CN201210496169.1A CN103839250B (en) | 2012-11-23 | 2012-11-23 | The method and apparatus processing for face-image |
CN201210496169.1 | 2012-11-23 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20140147003A1 US20140147003A1 (en) | 2014-05-29 |
US9754153B2 true US9754153B2 (en) | 2017-09-05 |
Family
ID=50773341
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/086,235 Active 2034-03-02 US9754153B2 (en) | 2012-11-23 | 2013-11-21 | Method and apparatus for facial image processing |
Country Status (4)
Country | Link |
---|---|
US (1) | US9754153B2 (en) |
EP (1) | EP2923306B1 (en) |
CN (1) | CN103839250B (en) |
WO (1) | WO2014080075A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10991101B2 (en) * | 2019-03-12 | 2021-04-27 | General Electric Company | Multi-stage segmentation using synthetic images |
US11282224B2 (en) * | 2016-12-08 | 2022-03-22 | Sony Interactive Entertainment Inc. | Information processing apparatus and information processing method |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9549147B2 (en) * | 2014-02-13 | 2017-01-17 | Nvidia Corporation | System and method for creating a video frame from a single video field |
CN104184942A (en) * | 2014-07-28 | 2014-12-03 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN106033593A (en) * | 2015-03-09 | 2016-10-19 | 夏普株式会社 | Image processing equipment and image processing method |
CN106156692B (en) * | 2015-03-25 | 2019-12-13 | 阿里巴巴集团控股有限公司 | method and device for positioning human face edge feature points |
US10152778B2 (en) | 2015-09-11 | 2018-12-11 | Intel Corporation | Real-time face beautification features for video images |
CN105472238A (en) * | 2015-11-16 | 2016-04-06 | 联想(北京)有限公司 | Image processing method and electronic device |
CN105787878B (en) * | 2016-02-25 | 2018-12-28 | 杭州格像科技有限公司 | A kind of U.S. face processing method and processing device |
KR102488563B1 (en) * | 2016-07-29 | 2023-01-17 | 삼성전자주식회사 | Apparatus and Method for Processing Differential Beauty Effect |
CN106372602A (en) * | 2016-08-31 | 2017-02-01 | 华平智慧信息技术(深圳)有限公司 | Method and device for processing video file |
WO2018040022A1 (en) * | 2016-08-31 | 2018-03-08 | 华平智慧信息技术(深圳)有限公司 | Method and device for processing video file |
TWI616843B (en) * | 2016-09-12 | 2018-03-01 | 粉迷科技股份有限公司 | Method, system for removing background of a video, and a computer-readable storage device |
CN108053366A (en) * | 2018-01-02 | 2018-05-18 | 联想(北京)有限公司 | A kind of image processing method and electronic equipment |
RU2684436C1 (en) * | 2018-04-03 | 2019-04-09 | Общество С Ограниченной Ответственностью "Фиттин" | Method of measuring shape and size of parts of human body, method of searching flat object of known shape and size on image, method of separating part of human body from background on image |
CN109345480B (en) * | 2018-09-28 | 2020-11-27 | 广州云从人工智能技术有限公司 | Face automatic acne removing method based on image restoration model |
CN109741272A (en) * | 2018-12-25 | 2019-05-10 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN109840912B (en) * | 2019-01-02 | 2021-05-04 | 厦门美图之家科技有限公司 | Method for correcting abnormal pixels in image and computing equipment |
CN109784304B (en) * | 2019-01-29 | 2021-07-06 | 北京字节跳动网络技术有限公司 | Method and apparatus for labeling dental images |
CN110135428B (en) * | 2019-04-11 | 2021-06-04 | 北京航空航天大学 | Image segmentation processing method and device |
CN110288521A (en) * | 2019-06-29 | 2019-09-27 | 北京字节跳动网络技术有限公司 | Image beautification method, device and electronic equipment |
CN110765861A (en) * | 2019-09-17 | 2020-02-07 | 中控智慧科技股份有限公司 | Unlicensed vehicle type identification method and device and terminal equipment |
CN111492200B (en) * | 2020-03-17 | 2021-05-14 | 长江存储科技有限责任公司 | Method and system for semiconductor structure thickness measurement |
CN112819841B (en) * | 2021-03-19 | 2021-09-28 | 广东众聚人工智能科技有限公司 | Face region segmentation method and device, computer equipment and storage medium |
CN113837067B (en) * | 2021-09-18 | 2023-06-02 | 成都数字天空科技有限公司 | Organ contour detection method, organ contour detection device, electronic device, and readable storage medium |
CN115205317B (en) * | 2022-09-15 | 2022-12-09 | 山东高速集团有限公司创新研究院 | Bridge monitoring photoelectric target image light spot center point extraction method |
CN117523235B (en) * | 2024-01-02 | 2024-04-16 | 大连壹致科技有限公司 | A patient wound intelligent identification system for surgical nursing |
- 2012
- 2012-11-23 CN CN201210496169.1A patent/CN103839250B/en active Active
- 2013
- 2013-11-19 EP EP13857311.8A patent/EP2923306B1/en active Active
- 2013-11-19 WO PCT/FI2013/051080 patent/WO2014080075A1/en active Application Filing
- 2013-11-21 US US14/086,235 patent/US9754153B2/en active Active
Patent Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6650778B1 (en) * | 1999-01-22 | 2003-11-18 | Canon Kabushiki Kaisha | Image processing method and apparatus, and storage medium |
EP1447773A2 (en) | 2003-02-13 | 2004-08-18 | Kabushiki Kaisha Toshiba | Image processing apparatus for reducing noise from image |
CN1521695A (en) | 2003-02-13 | 2004-08-18 | Kabushiki Kaisha Toshiba | Image processing apparatus for reducing noise from image |
CN1763765A (en) | 2004-10-21 | 2006-04-26 | 佳能株式会社 | Method, device and storage medium for detecting face complexion area in image |
US8139854B2 (en) | 2005-08-05 | 2012-03-20 | Samsung Electronics Co., Ltd. | Method and apparatus for performing conversion of skin color into preference color by applying face detection and skin area detection |
US7840066B1 (en) * | 2005-11-15 | 2010-11-23 | University Of Tennessee Research Foundation | Method of enhancing a digital image by gray-level grouping |
US20080112622A1 (en) | 2006-11-13 | 2008-05-15 | Samsung Electro-Mechanics Co., Ltd | Skin detection system and method |
US20080240571A1 (en) * | 2007-03-26 | 2008-10-02 | Dihong Tian | Real-time face detection using temporal differences |
US7856150B2 (en) | 2007-04-10 | 2010-12-21 | Arcsoft, Inc. | Denoise method on image pyramid |
US20090226044A1 (en) * | 2008-03-07 | 2009-09-10 | The Chinese University Of Hong Kong | Real-time body segmentation system |
CN101971190A (en) | 2008-03-07 | 2011-02-09 | 香港中文大学 | Real-time body segmentation system |
US20090245679A1 (en) * | 2008-03-27 | 2009-10-01 | Kazuyasu Ohwaki | Image processing apparatus |
US20100021056A1 (en) | 2008-07-28 | 2010-01-28 | Fujifilm Corporation | Skin color model generation device and method, and skin color detection device and method |
US20100097485A1 (en) | 2008-10-17 | 2010-04-22 | Samsung Digital Imaging Co., Ltd. | Method and apparatus for improving face image in digital image processor |
US20100177981A1 (en) | 2009-01-12 | 2010-07-15 | Arcsoft Hangzhou Co., Ltd. | Face image processing method |
US20100215238A1 (en) * | 2009-02-23 | 2010-08-26 | Yingli Lu | Method for Automatic Segmentation of Images |
US20110222728A1 (en) | 2010-03-10 | 2011-09-15 | Huawei Device Co., Ltd | Method and Apparatus for Scaling an Image in Segments |
US20120301020A1 (en) * | 2011-05-23 | 2012-11-29 | Infosys Limited | Method for pre-processing an image in facial recognition system |
US20130094768A1 (en) * | 2011-10-14 | 2013-04-18 | Cywee Group Limited | Face-Tracking Method with High Accuracy |
CN102567998A (en) | 2012-01-06 | 2012-07-11 | 西安理工大学 | Head-shoulder sequence image segmentation method based on double-pattern matching and edge thinning |
US20140270540A1 (en) * | 2013-03-13 | 2014-09-18 | Mecommerce, Inc. | Determining dimension of target object in an image using reference object |
Non-Patent Citations (23)
Title |
---|
Achuthan et al., "Wavelet energy-guided level set-based active contour: A segmentation method to segment highly similar regions," 2010, Computers in Biology and Medicine 40, pp. 608-620. *
Adipranata, Rudy et al., "Fast Method for Multiple Human Face Segmentation in Color Image", International Journal of Advanced Science and Technology, vol. 3, Feb. 2009. |
Chai, Douglas and King, N.N., "Face Segmentation Using Skin-Color Map in Videophone Applications", IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, No. 4, Jun. 1999. |
Face Skin Analyzer System Magic Mirror [online] [retrieved Sep. 6, 2012]. Retrieved from the Internet: <URL: http://cgem168.en.alibaba.com/product/327208619-210219248/face—Skin—analyzer—System—magic—mirror.html>. 6 pages. |
Gong et al., "A Robust Framework for Face Contour Detection from Clutter Background," International Journal of Machine Learning and Cybernetics, vol. 3, 2011, pp. 111-118. *
International Search Report and Written Opinion received for corresponding Patent Cooperation Treaty Application No. PCT/FI2013/051080, dated Mar. 27, 2014, 13 pages. |
Li et al., "Face Contour Shape Extraction with Active Shape Models Embedded Knowledge", IEEE Int. Conf. on Signal Processing, Aug. 2000, vol. III, pp. 1347-1350. |
Ma et al., "Face Segmentation Algorithm Based on ASM", IEEE Int. Conf. on Intelligent Computing and Intelligent Systems, Nov. 2009, vol. 4, pp. 495-499. |
Office Action and Search Report from corresponding Chinese Application No. 201210496169.1 dated Jan. 29, 2016. |
Office Action for Chinese Application No. 2012104961691 dated Sep. 18, 2016. |
Office Action for European Application No. EP 13 85 7311 dated Sep. 26, 2016. |
Partial Supplementary European Search Report from corresponding European Patent Application No. 13857311.8 dated Jun. 20, 2016. |
Perlibakas, V., "Automatical Detection of Face Features and Exact Face Contour," Pattern Recognition Letters, Elsevier, Amsterdam, NL, vol. 24, No. 16, Dec. 1, 2003, pp. 2977-2985, ISSN: 0167-8655, DOI: 10.1016/S0167-8655(03)00158-2.
Prakash, J. and Rajesh, K., "Human Face Detection and Segmentation using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms", World Academy of Science, Engineering and Technology, 2008. |
Professional Skin Analyzer Magic Mirror Skin Diagnosis System with Trolley [online] [retrieved Sep. 6, 2012]. Retrieved from the Internet: <URL: http://tingmei.en.ecplaza.net/professional-skin-analyzer-magic-mirror--313060-2421934.html>. 3 pages. |
RGB and UV Lighting Big Magic Mirror for Skin Analyser Machine CE Approval XM-T3 [online] [retrieved Sep. 6, 2012]. Retrieved from the Internet: <URL: http://www.tjskl.org.cn/products-search/cz50933e1/rgb—and—uv—lighting—big—magic—mirror—for—skin—analyser—machine—ce—approval—xm—t3-pz535e6a7.html>. 4 pages. |
Sobottka, Karin and Pitas, Ioannis, "A novel method for automatic face segmentation, facial feature extraction and tracking", Signal Processing: Image Communication, vol. 12, Issue 3, Jun. 1998, pp. 263-281.
WT-03-B RGB/UV/PL 3 Light Magic Face Analyzer [online] [retrieved Sep. 6, 2012]. Retrieved from the Internet: <URL: http://www.alibaba.com/product-gs/561270595/WT—03—B—RGB—UV—PL.html?s=p>. 4 pages. |
Wu, Li-Fang et al.; "Face Segmentation Based on Curve Fitting"; Chinese Journal of Computers; vol. 26, No. 7; Jul. 2003; pp. 893-897.
Also Published As
Publication number | Publication date |
---|---|
WO2014080075A1 (en) | 2014-05-30 |
US20140147003A1 (en) | 2014-05-29 |
CN103839250A (en) | 2014-06-04 |
EP2923306B1 (en) | 2023-03-08 |
EP2923306A4 (en) | 2016-10-26 |
EP2923306A1 (en) | 2015-09-30 |
CN103839250B (en) | 2017-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9754153B2 (en) | Method and apparatus for facial image processing | |
CN108765273B (en) | Virtual face-lifting method and device for face photographing | |
KR102045695B1 (en) | Facial image processing method and apparatus, and storage medium | |
EP3338217B1 (en) | Feature detection and masking in images based on color distributions | |
US9547908B1 (en) | Feature mask determination for images | |
US9483835B2 (en) | Depth value restoration method and system | |
US8983202B2 (en) | Smile detection systems and methods | |
US8983152B2 (en) | Image masks for face-related selection and processing in images | |
WO2022161009A1 (en) | Image processing method and apparatus, and storage medium and terminal | |
US10509948B2 (en) | Method and device for gesture recognition | |
US10217275B2 (en) | Methods and systems of performing eye reconstruction using a parametric model | |
US10217265B2 (en) | Methods and systems of generating a parametric eye model | |
CN105243371A (en) | Human face beauty degree detection method and system and shooting terminal | |
US9613403B2 (en) | Image processing apparatus and method | |
WO2012000800A1 (en) | Eye beautification | |
KR101624801B1 (en) | Matting method for extracting object of foreground and apparatus for performing the matting method | |
KR20210145781A (en) | Facial softening system and method to improve the texture of fine-condensed skin | |
US9171357B2 (en) | Method, apparatus and computer-readable recording medium for refocusing photographed image | |
CN111047619B (en) | Face image processing method and device and readable storage medium | |
CN114187166A (en) | Image processing method, intelligent terminal and storage medium | |
KR101592087B1 (en) | Method for generating saliency map based background location and medium for recording the same | |
Karaali et al. | Image retargeting based on spatially varying defocus blur map | |
JP6467817B2 (en) | Image processing apparatus, image processing method, and program | |
Zhang et al. | Eye corner detection with texture image fusion | |
JP2022073604A (en) | Image processing device, image processing method and image processing program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NOKIA CORPORATION, FINLAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LI, JIANGWEI;WANG, KONGQIAO;YAN, HE;REEL/FRAME:031650/0381 Effective date: 20130426 |
|
AS | Assignment |
Owner name: NOKIA CORPORATION, FINLAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LI, JIANGWEI;WANG, KONGQIAO;YAN, HE;REEL/FRAME:032017/0125 Effective date: 20130426 |
|
AS | Assignment |
Owner name: NOKIA TECHNOLOGIES OY, FINLAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NOKIA CORPORATION;REEL/FRAME:034781/0200 Effective date: 20150116 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction | ||
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |