WO2011015928A2 - Image processing method for correcting a target image with respect to a reference image, and corresponding image processing device
- Publication number: WO2011015928A2 (PCT/IB2010/001914)
- Authority: WIPO (PCT)
- Prior art keywords: image, target image, mask, area, points
Classifications
- G06T5/00—Image enhancement or restoration
- G06T11/001—Texturing; Colouring; Generation of texture or colour
- G06T7/00—Image analysis
- G06T7/0014—Biomedical image inspection using an image reference approach
- G06T7/60—Analysis of geometric attributes
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
- G06T2207/10024—Color image
- G06T2207/30201—Face
Definitions
- the invention relates to an image processing method for producing a mask capable of correcting or mitigating certain imperfections or irregularities found on a target image.
- the invention also relates to the corresponding image processing system.
- the invention provides different technical means.
- a first object of the invention is to provide an image processing method for defining a mask that can be applied automatically to a target image, in particular to certain well-defined areas of the image, such as the mouth, the eyes, the cheeks, etc.
- Another object of the invention is to provide a method for generating a mask that can help correct imperfect areas, particularly for an image representing a face.
- the invention provides an automatic image processing method for applying a mask to a target image, comprising the following steps:
- the method also comprises, before the step of applying the correction mask, a step consisting in:
- the comparison between the target image and the reference image involves comparing the relative disposition of one or more key points of the target area of the target image with the corresponding points of the reference image.
- the target image represents a face seen substantially from the front and the areas concerned are selected from a list comprising the mouth, the eyes, the eyebrows, the contour of the face, the nose, the cheeks and the chin.
- the components of the face relief of the image represent a face for which a plurality of spatial reference points are listed.
- the area of the target image comprises the mouth and the landmarks comprise at least the commissures of the mouth. They also preferably comprise the substantially central point of the lower lip furthest from the center of the nose, preferably one of the two highest points of the upper lip, and finally the lowest point of the upper lip located between those two highest points.
- the area of the target image comprises the eyes.
- the area of the target image comprises the eyebrows.
- the landmarks comprise a plurality of points located substantially on the contour of the face.
- the reference image substantially corresponds to the face of the canon of beauty, whose physical proportions are established in a standard manner.
- the invention also comprises an image processing system for implementing the previously described method.
- the invention finally comprises an image processing system comprising:
- a comparison module adapted to perform a comparison between certain characteristics of at least one zone of a target image and characteristics of a similar nature of a reference image, on the basis of test criteria applied to detect any imperfections of the zone in relation to the shape characteristics of the target image;
- a selection module adapted to select at least one correction mask for application on the area of the target image, said mask being selected according to the type of imperfection detected by the comparison module;
- an application module for applying the selected mask to the target image to obtain a modified image.
- the comparison, selection and application modules are integrated into a work module implemented by coded instructions, said work module being adapted to obtain target image data, reference image data and test criteria.
- FIG. 1 is an example of a target image obtained for processing according to the method of the invention, with the contour of the face detected and identified;
- FIG. 2 corresponds to the original target image, before processing;
- FIGS. 3 and 4 illustrate an exemplary reference image, in this case the Canon of Beauty, with the main points making it possible to make comparisons with a target image;
- FIGS. 5 and 6 illustrate an example of a target image with the points corresponding to those presented in FIGS. 3 and 4 for the reference image;
- FIG. 7 shows the points and dimensions for allowing detection of the orientation of the eyes of a target image in comparison with the reference image;
- FIG. 8 shows the points making it possible to detect the type of spacing between the eyes in comparison with the reference image;
- FIG. 9 illustrates the points and distances making it possible to detect the shape of the eyes of the target image in comparison with the reference image
- FIG. 10 illustrates the points and distances making it possible to detect the proportion of the mouth of the target image in comparison with the reference image
- FIG. 11 illustrates the points and distances making it possible to detect the size of the lips of the target image in comparison with the reference image
- FIGS. 12 and 13 show block diagrams illustrating the main steps of the image processing method according to the invention.
- FIGS. 14a, 14b and 14c show an HSL (hue, saturation, lightness) diagram for determining the available colors closest to the colors detected on the target image;
- FIG. 15 schematically shows the main modules and elements provided for implementing the method according to the invention.
- FIGS. 16a to 16d show the lips of a target image with different examples of retouching intended to correct various types of defects detected on the lips after comparison with a reference image;
- FIGS. 17a to 17c present different examples of masks for the eyes according to the type of eye detected
- FIGS. 18a to 18c show examples of corrections as a function of the type of face detected for the target image in comparison with the reference image
- Figures 19 and 20 show some key points and distances for detecting the shape of the face of the target image
- FIG. 21 illustrates the points and distances useful for the detection of the type of chin with respect to that of the reference image
- FIG. 22a illustrates the points and dimensions that are useful for detecting the nose type of the target image in relation to the reference image
- FIGS. 22b, 22c and 22e show examples of corrections to be applied to the nose as a function of the characteristics detected;
- FIG. 22d illustrates the points and dimensions that are useful for detecting the shape of the nose of the target image in relation to the reference image;
- FIG. 22f illustrates the points and dimensions that are useful for detecting the width of the nose of the target image in relation to the reference image
- FIG. 23 shows the points and distances making it possible to determine, according to another approach, the shape of the face of the target image in relation to the reference image
- Figures 24 and 25 show the points and dimensions useful for establishing the criteria for the detection of eye size
- FIG. 26 shows the points useful for determining the space between the eye and the eyebrow of a target image.
- the oval shape is considered ideal.
- the distances between the eyes 4 and 5, from the nose 3 to the mouth 2, and from the eyes to the bottom of the chin, as well as the ratios between these distances, must satisfy certain standard values.
- the oval face has the following dimensions expressed in absolute units, as shown in Figures 3 and 4.
- the height of the head is 3.5 units.
- the hairline 11 and the top of the head occupy a space of 0.5 unit.
- the width of the head is 2.5 units.
- the width of the face represents 13/15 of the width of the head.
- the ears are in the second unit of height. The nose 3 is on the midline of the face and in the second unit of height. Its width is half of the central unit.
- the height of the nostrils occupies 0.25 unit height.
- the inner corners of the eyes 43 and 53 are on each side of the half central unit. Along the vertical or longitudinal axis, the inner corners of the eyes are at 1.75 units from the reference 0. The width of the eyes 4 and 5 occupies 0.5 unit.
- the inner corners of the eyebrows 63 and 73 are on the same vertical as the inner corner of the eye of the same side.
- the outer corners of the eyebrows 61 and 71 are on the same line passing through the outer corner of the eye 42 or 52 and the outer corner of the nostril 31 or 32 on the same side.
- the height of the eyebrow 6 or 7 is one third of its length starting outwards and its top 62 or 72 has a height of a quarter of its length.
- the mouth 2 is placed on the horizontal line located halfway up the unit and occupies half a unit in height.
- the height of the mouth 2 is expressed with the respective heights of the lower and upper lips: the lower lip occupies one third of the 1/2 unit.
- the upper lip occupies one-third of the remaining half of the unit.
- the width of the mouth 2 is defined by means of the two lateral end points 22 and 23 of the mouth. These two lateral end points of the mouth are each located on a line passing through the center of separation of the eyes on the one hand and through the lower outer points of the nostrils 31 and 32 on the other.
- the mouth is also delimited by means of the lower point 21 and the upper points 24, 25 and 26.
- Figure 12 illustrates the key steps of the method of correcting a target image based on a reference image in the form of a functional flowchart.
- a target image is obtained. At least one zone of this image is selected in step 310 for processing. The key points of this zone are identified in step 320. The preferred methods of identifying points are described in detail in WO 2008/050062. Other detection modes can also be used.
- the test criteria are applied in order to detect any imperfections of the zone in question. The tests applied involve a comparison 335 between the characteristics of the target image and similar characteristics of the reference image. According to the imperfections detected in relation to the shape characteristics of the target image, one or more correction masks are identified in step 340.
- the retained masks are applied to the target image, for obtaining a modified or corrected image.
- Figure 15 relates the key steps of the method to the different functional modules called at different stages of the process for its implementation.
- the data 210 of the reference image and the data 220 of the target image are made available, for example by means of memory locations.
- a work module 200 comprises a comparison module 201, a selection module 202 and a module 203 for applying the selected mask to the target image.
- the test criteria 230 are made available for example from memory means.
- the modified image 240, namely the target image on which the correction mask is applied, is obtained.
- Figure 13 shows an alternative embodiment in which one or more tests in relation to the coloring of the reference image are performed.
- the coloring characteristics of a defined area are detected from the target image. These may be skin coloring characteristics for one or more areas of the face, eye coloring characteristics and/or hair coloring characteristics.
- any corrections to be applied to the target image according to the color characteristics detected at step 325 are defined.
- the correction mask defined in step 340 is modified to take the color corrections into account, before being applied to the target image in step 350.
- the following description shows examples of comparisons made between a target image and a reference image to detect the characteristic features of the face represented by the target image.
- We present in turn the detection of the shape of the face; the orientation, spacing and size of the eyes; the shape of the eyes and of the mouth; the size of the lips and their relative proportions; and the dimensions of the chin, the nose and the gap between eyebrows and eyes.
- the color selection is presented.
- CHARACTERISTIC FEATURES OF THE FACE: SHAPE OF THE FACE (FIGS. 20 and 21)
- the shape of the face is one of its essential features. However, it is technically very difficult to accurately detect the exact contour of a face.
- the junction zone with the scalp also poses significant detection problems, especially when the transition is gradual.
- the delineation of the lateral contours and the chin, often with shadows, also implies many chronic difficulties and inaccuracies.
- various technical tools or criteria are presented and illustrated in order to detect the shape and / or category to which the contour of the face or an element of the latter belongs. These detections are made in relation to the contour or the corresponding elements of a reference image. In an advantageous embodiment, the reference image corresponds to that of the canon of beauty.
- distance ratios are used. These ratios make it possible to sort or classify the target face 101 according to categories of standard shapes, preferably as follows: round, oval, elongated, square, undetermined. Other classes or subclasses can also be used, such as heart or pear, inverted triangle, etc. Different criteria make it possible to determine the class to which a given face belongs.
- Lv1 corresponds to the zone of greatest width of the target face 101 and Lv3 corresponds to the width at the lowest point 121 of the lips 102.
- the width Lv2 is measured at the level of the nose using the points 132 and 133 defining the nostrils.
- Hv1 is the height between the low point of the chin 112 and the point 115 at the height of the pupils 140 and 150 of the eyes 104 and 105.
- a face is classified as round, oval, elongated, square or undetermined according to the ratios between the widths Lv1, Lv2 and Lv3 and the height Hv1.
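The face-shape sorting described above can be sketched as follows; this is a minimal illustration in which the function name, the use of the ratios Lv1/Hv1 and Lv3/Lv1, and all numeric thresholds are assumptions, not values taken from the patent:

```python
def classify_face_shape(lv1, lv3, hv1, tol=0.1):
    """Sort a face into the standard shape categories using Lv1 (the
    greatest face width), Lv3 (the width at the lowest lip point 121)
    and Hv1 (the pupil-line-to-chin height). All thresholds are
    illustrative assumptions."""
    ratio = lv1 / hv1   # overall width against lower-face height
    taper = lv3 / lv1   # narrowing of the face toward the chin
    if ratio > 1.0 + tol:
        return "round"
    if ratio < 1.0 - tol:
        return "elongated"
    if taper > 0.9:     # little narrowing toward the chin: square jaw
        return "square"
    if taper >= 0.6:
        return "oval"
    return "undetermined"
```

In practice such thresholds would be tuned against faces whose category is already known.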
- Figures 18a, 18b and 18c show examples of correction or compensation masks.
- the face shape of the target image is detected, preferably with the criteria presented above.
- one or more correction masks are provided to ensure that the target image can have a shape approximating that of the reference image.
- a square face is corrected or compensated with a mask that aims to remove or reduce the visibility of portions or
- Figures 18b and 18c illustrate types of masks for correction of a face whose detected shape is either too round (Figure 18b) or too elongated (Figure 18c).
- Figure 23 illustrates another approach for determining the shape of the face.
- a circle centered on a central point of the face is used to establish a spatial basis of comparison. The OVCA contour, that is, the oval contour of the canon face forming the outline of the reference image, is superimposed first.
- the superposition is performed by positioning the point 15, in the middle of the distance separating the centers of the pupils of the OVCA contour of the reference image, on the point 115 of the target image, and the lowest point 12 of the face on the corresponding point 112.
- Point 15/115 is used to form the center of the circle.
- the radius is dimensioned based on the distance between the point 15 and the point 12.
- a comparison of the OVCA form with the contour of the target image can then be performed. The comparison is preferably made point by point, from predefined key points.
- the circle is advantageously used as a new reference to measure the distances between the latter and different points on the contour of the target image.
- the distance Lvc7 makes it possible to evaluate the distance between the point 119c at the top of the forehead and the point 119c2 of the circle.
- the distance Lvc ⁇ is similar on the other side of the face.
- the distances Lvc3 and Lvc ⁇ make it possible to evaluate the distances between, on the one hand, the points 119a of the outline and 119a2 of the circle and, on the other hand, 119b of the outline and 119b2 of the circle. All distances are measured along lines passing through the points to be evaluated and the center 115 of the circle.
- This approach can also be used to compare other facial components between the two images.
- this approach is used to compare the positions of the points of the contour of the target image with respect to the reference contour (OVCA) without passing through the reference circle.
- it is then useful to provide an indication to specify whether the point of the target image is inside or outside the reference contour.
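The circle-based measurement can be sketched as follows; the function computes the signed gap used for distances such as Lvc3 and Lvc7, and its name and point convention are assumptions for illustration:

```python
import math

def radial_gap(center, point, radius):
    """Signed distance between a contour point of the target image and
    the reference circle centered on point 115, measured along the line
    joining the point to the center. A positive value means the point
    lies outside the circle, a negative value inside, which directly
    gives the inside/outside indication mentioned in the text."""
    d = math.hypot(point[0] - center[0], point[1] - center[1])
    return d - radius
```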
- FIG. 7 shows the points and dimensions that are useful for establishing the criteria relating to the detection of the inclination of the eyes of the target image in relation to the reference image.
- the eyes are advantageously classified or sorted according to three categories, namely falling, normal (right), or oblique.
- Normal (straight) if the angle alpha is greater than 358 degrees or less than 5 degrees (or within a range of +/- 7 degrees around the horizontal axis).
- Figure 17a shows a typical mask for decorating an eye with no particular imperfection. This mask has a neutral impact on the shape, but allows a coloring effect that enhances the look and makeup of the person. In the oblique case, the mask to be applied aims not to accentuate, or only slightly to accentuate, the oblique effect, because this effect is often sought.
- Figure 17c illustrates an example of a mask providing such an effect. A darker zone f5c more developed towards the upper outer corner of the eye creates such an effect.
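The inclination test can be sketched as follows; the corner-point convention is an assumption, and the +/- 7 degree band around the horizontal mentioned above is used as the default:

```python
import math

def classify_eye_tilt(inner, outer, tol_deg=7.0):
    """Classify the inclination of an eye from its inner and outer
    corner points, given as (x, y) in image coordinates with y
    increasing downwards (an assumed convention)."""
    dx = abs(outer[0] - inner[0])
    dy = inner[1] - outer[1]          # > 0 when the outer corner is higher
    alpha = math.degrees(math.atan2(dy, dx))
    if alpha > tol_deg:
        return "oblique"
    if alpha < -tol_deg:
        return "falling"
    return "normal"
```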
- FIG. 8 shows the points and dimensions that are useful for establishing the criteria relating to the detection of the spacing between the two eyes of the target image in relation to the reference image.
- This spacing can be classified into 3 categories according to which the eyes are considered either close, with normal spacing, or distant.
- the points used for these criteria correspond to the internal ends 143 and 153 and external 142 and 152 of the eyes 104 and 105.
- the eyes have a normal spacing, equivalent to the reference image, if: (Ly1 + Ly2)/2 is substantially equal to Ly3.
- the eyes are close if: (Ly1 + Ly2)/2 is substantially larger than Ly3, i.e. the gap between the inner corners is smaller than the average eye width.
- the eyes are distant if: (Ly1 + Ly2)/2 is substantially smaller than Ly3.
- the mask to be applied will have no compensation or correction goal.
- the mask to be applied will aim to compensate for the small gap by means of a light effect producing the impression of a larger gap.
- the mask to be applied is intended to compensate for the large gap by means of a shadow effect producing the impression of closer-set eyes.
- An example of this type of mask is shown in Figure 17b.
- Such a mask creates an effect of bringing the eyes closer together, with a dark area above the eye covering at least the outer side of the latter, whereas for the normal eye, illustrated in Figure 17a, the dark zone of the mask above the eye just touches the upper outer corner of the eye. The widening of the dark zone f5b of FIG. 17b creates this effect of bringing the eyes closer together.
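The spacing test can be sketched as follows, using the convention that eyes are close-set when the inner-corner gap Ly3 is smaller than the average eye width (on the canon all three distances equal 0.5 unit); the 10% tolerance standing in for "substantially" is an assumption:

```python
def classify_eye_spacing(ly1, ly2, ly3, tol=0.1):
    """Ly1 and Ly2 are the widths of the two eyes (points 142-143 and
    152-153) and Ly3 the gap between the inner corners 143 and 153."""
    mean_width = (ly1 + ly2) / 2
    if ly3 < mean_width * (1 - tol):
        return "close"
    if ly3 > mean_width * (1 + tol):
        return "distant"
    return "normal"
```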
- Figures 24 and 25 show the points and dimensions useful for establishing the criteria for the detection of eye size. These criteria are intended to establish the proportions of the eyes relative to the rest of the face and its components.
- the eyes are advantageously classified into three categories, namely small, normal (proportionate), or large. This makes it possible to know the proportion of the two eyes with respect to the rest of the face and its components.
- a first approach consists in superimposing the reference image and the target image. This superposition makes it possible to adjust the scale of the reference image.
- Points 13a and 13b of the reference image are preferably used (see FIG. 3) to manage the change of scale in the direction of the width.
- the reference grid is centered by superimposing its point 15 located in the middle of the distance between the centers of the pupils on the corresponding point 115 of the target image.
- the points of the contour of the face 113a and 113b located on the same height as the point 115 are then used to adapt the scale in width.
- the choice of the point to be retained is advantageously based on the greater of the distances separating the point 115 from point 113a and from point 113b. The point farthest from the center is retained.
- the reference scale R is adapted (increased or decreased, as the case may be) so that the corresponding points 13a or 13b of the reference image are aligned in width according to the distance chosen.
- the reference scale is adapted in height by superimposing the point 12 on the corresponding point 112 of the target image.
- FIG. 25 shows that the scale R of the reference image does not correspond to the scale C of the target image.
- the differences between the two scales can then be used to detect the positioning differences of the points of the target image that one wishes to evaluate or compare. It then becomes possible to compare the full set of differences in sizes, distances, etc. between the components of the face of the target image and the reference image.
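The width-scale fitting described above can be sketched as follows; the function name and the default of 1.25 canon units between the center 15 and a lateral contour point (half the 2.5-unit head width given earlier) are assumptions for illustration:

```python
def fit_reference_unit(p115, p113a, p113b, half_width_units=1.25):
    """Estimate the size in pixels of one reference unit R: the face
    contour point at pupil height farthest from the center 115 is
    retained, and the assumed canon half-width in units is mapped onto
    that pixel distance. Points are (x, y) tuples."""
    d = max(abs(p113a[0] - p115[0]), abs(p113b[0] - p115[0]))
    return d / half_width_units
```

The height scale would be fitted analogously by superimposing point 12 on point 112.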
- the units of the reference grid are designated R.
- the distances between the two corners of the eyes 152 and 153 or 142 and 143 are compared on the two scales, corresponding for the eye 105 to the span from 0.5C to 1C and from 0.5R to 1R.
- the eyes are: Normal if: the length from 0.5C to 1C is substantially equal to the length from 0.5R to 1R.
- the mask to be applied will have no compensation or correction purpose.
- the length from 0.5C to 1C is substantially greater than the length from 0.5R to 1R.
- the mask to apply will aim to enlarge the eye, for example by diffusing the color or using a lighter color.
- the mask preferably uses a higher ratio than with the normal application (canon case).
- the mask to be applied will aim to shrink the eye, for example by reducing the area of application of the color.
- the mask preferably uses a lower ratio than with the normal application (canon case).
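The size test can be sketched as follows; on the canon an eye spans 0.5 unit, so the measured corner-to-corner distance is compared with half of the fitted reference unit R (in pixels). The function name and the 8% tolerance are assumptions:

```python
def classify_eye_size(eye_width, unit_r, tol=0.08):
    """eye_width: pixel distance between the eye corners (e.g. points
    142 and 143); unit_r: size in pixels of one reference unit after
    fitting the canon grid to the face."""
    canon_width = 0.5 * unit_r
    if eye_width > canon_width * (1 + tol):
        return "large"
    if eye_width < canon_width * (1 - tol):
        return "small"
    return "normal"
```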
- the detection of the size of the eyes can also be performed by calculating the area of the eyes relative to the surface of the face. This latter surface is easily obtained from the points known and/or detected along the contour. According to this approach, the eyes are again classified as small, normal or large.
- Figure 9 shows the points and dimensions useful for establishing the criteria for the detection of the shape of the eyes. These criteria are intended to establish the proportions of the eyes relative to the rest of the face and its components.
- the shape criteria of the eyes correspond to the shape of the opening of the eye.
- a classification according to three categories is provided, namely fine, normal (proportionate), or round. Other categories may be provided to refine precision or to take particular cases into account.
- the eyes of the canon are proportioned with a height corresponding to one third of the width. To check for any corrections to be applied to the target images submitted for comparison, the following criteria are applied.
- the points used for these criteria correspond to the ends 142 and 143 of the eyes for the segment Ly4, while the segment hy3 is defined by the lowest point 141 and the highest point 146 of the eye. Thus, an eye is:
- round if: hy3 is substantially greater than 1/3 Ly4; fine if: hy3 is substantially smaller than 1/3 Ly4; normal otherwise.
- different types of correction masks are suggested in order to correct the shapes that depart from those of the canon.
- the masks are intended to refine the profile of a round eye or round an eye too fine.
- the corrections identified following the application of the various criteria can be of various kinds. Some corrections are contour-type masks of greater or lesser thickness, of different shapes and colors. Such masks define areas of color gradients of different shapes, more or less bright. It is also possible to deform, or to intensify in places, the eyelashes on the eye contour.
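The eye-shape test above can be sketched as follows; the 1/3 proportion is that of the canon eye, while the function name and tolerance are assumptions:

```python
def classify_eye_shape(hy3, ly4, tol=0.1):
    """hy3: height of the eye opening (points 141 to 146);
    ly4: corner-to-corner width of the eye (points 142 to 143)."""
    canon_h = ly4 / 3
    if hy3 > canon_h * (1 + tol):
        return "round"
    if hy3 < canon_h * (1 - tol):
        return "fine"
    return "normal"
```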
- Figure 10 shows the points and dimensions useful for establishing the criteria relating to the detection of the shape of the mouth. These criteria aim to establish the proportions of the mouth of the target image in relation to the rest of the face and its components, in relation to the reference image.
- the points used for these criteria correspond to the high and low points of each lip, i.e. for hb3 the distance between the imaginary line passing through the commissures 122 and 123 and the high point 125 on one side; for hb4 the distance between that line and the high point 124 on the other side; and for hb5 the distance between the low point of the lower lip 121 and the line crossing the commissures at points 122 and 123.
- the application is similar to that performed with the reference image.
- the mouth is narrow if: Lb1 is substantially less than 3/4 of a unit R.
- the application aims to widen the mouth by drawing the contour of the lips while projecting slightly towards the commissures.
- the mouth is wide if: Lb1 is substantially greater than 3/4 of a unit R.
- the application aims to reduce the width of the mouth by drawing the outline short of the commissures, and possibly attenuating the commissures.
- FIG. 11 shows the points and dimensions that are useful for establishing the criteria relating to the detection of the size of the lips in relation to the reference image. These criteria are intended to establish the proportions of the lips relative to the mouth. The aim is to detect the size of the lips by forming a ratio between the width and the height of the mouth, or the heights of the lips. The lips can be classified into three categories, namely fine, normal (proportionate) or large. The points used for these criteria correspond to the high and low points on each side of the mouth of the target image: for hb1 the distance between points 125 and 121, and for hb2 the distance between points 124 and 121.
- the lips are normal if: (hb1 + hb2) / 2 is substantially equal to Lb1 / 2.7, ie the proportions corresponding to the lips of the reference image.
- the lips are fine if: (hb1 + hb2)/2 is substantially smaller than Lb1/2.7.
- the lips are large if: (hb1 + hb2)/2 is substantially larger than Lb1/2.7.
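The lip-size test can be sketched as follows; the Lb1/2.7 proportion is that of the reference lips stated above, while the function name and tolerance are assumptions:

```python
def classify_lip_size(hb1, hb2, lb1, tol=0.1):
    """hb1, hb2: lip heights (points 125-121 and 124-121);
    lb1: width of the mouth."""
    mean_h = (hb1 + hb2) / 2
    canon = lb1 / 2.7   # proportion of the reference lips
    if mean_h < canon * (1 - tol):
        return "fine"
    if mean_h > canon * (1 + tol):
        return "large"
    return "normal"
```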
- FIG. 10 also shows the points and dimensions that are useful for establishing the criteria relating to the detection of the comparative size or proportions of the lips. These criteria are intended to establish the proportions of the lips relative to each other. The aim is to detect the size of the lips by forming a ratio between the heights of the two lips. For the upper lip, an average height dimension is preferably used.
- the lips can be classified according to three categories, namely lower lip greater, balanced lips, upper lip greater.
- the points used for these criteria correspond to the high and low points of each lip, i.e. for hb3 the distance between the imaginary line passing through the commissures 122 and 123 and the high point 125 on one side; for hb4 the distance between that line and the high point 124 on the other side; and for hb5 the distance between the low point of the lower lip 121 and the line crossing the commissures at points 122 and 123.
- Figures 16a to 16d illustrate examples of corrections to be applied to the lips according to the rankings made.
- Figure 16a shows balanced lips.
- Figures 16b, 16c and 16d show examples of corrections for common situations.
- the corrections are intended for application along the outer contour of the lips or on a portion of this contour. It is thus possible to correct various disproportions and so rebalance the lips relative to the rest of the face.
- the contour is traced on the outside or inside of the outer limit of the lips.
- the contour is traced along the line f1, with narrower borders.
- a lower lip which is thinner than the upper lip is compensated with a lower lip contour traced along f2 so as to move the lower edge of the lower lip downwards.
- the example of FIG. 16d relates to an asymmetric upper lip, corrected by a contour retraced along f3, so as to increase the smaller detected area. The goal is to rebalance point 125 with point 124 by putting them at the same level.
- FIG. 21 shows the points and dimensions that are useful for establishing the criteria relating to the detection of the dimensions of the chin of the target image. These criteria are intended to establish the relative proportions of the chin with respect to the rest of the face and its components.
- the chin can thus be classified into three categories, namely short, normal, or long.
- the axes of Figure 21 are used to establish these proportions.
- Hv1 corresponds to the height between the point 115 at the level of the pupils and the low point 112 of the chin.
- Hv2 corresponds to the height of the chin between the base of the lips 121 and the base of the chin 112.
- the chin is normal or substantially equivalent to that of the reference image if: hv1/hv2 is between 3.2 and 3.8.
- the chin is short if: hv1/hv2 < 3.2.
- the chin is long if: hv1/hv2 > 3.8.
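The chin test can be sketched as follows; the "long" criterion is read as hv1/hv2 > 3.8 on the assumption that the source's hv2/hv1 is a transcription slip (hv2 is part of hv1, so hv2/hv1 cannot exceed 1, let alone 3.8):

```python
def classify_chin(hv1, hv2):
    """hv1: pupil-line to chin-base height (points 115 to 112);
    hv2: lip-base to chin-base height (points 121 to 112)."""
    ratio = hv1 / hv2
    if ratio < 3.2:
        return "short"
    if ratio > 3.8:
        return "long"
    return "normal"
```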
- the method provides for the use of different types of masks applying corrections to the lower part of the face, making this area more or less visible as appropriate.
- an application darker than the skin tone is provided.
- an application lighter than the skin tone is then recommended.
- FIG. 22a shows the points and dimensions that are useful for establishing the criteria relating to the detection of the dimensions of the nose. These criteria are intended to establish the relative proportions of the nose relative to the rest of the face.
- the nose can thus be classified into three categories, namely short, normal, or long.
- the axes of Figure 22a are used to establish these proportions.
- the height of the nose with respect to the chin is preferably established on the basis of an average between the two sides of the nose.
- Hv3 corresponds to the height between the point 112 at the base of the chin and the point 133 at the base of one of the sides of the nose.
- Hv4 is the height between point 112 at the base of the chin and point 132 at the base of the other side of the nose.
- Hv5 is the distance between the points of the base of the nose 132 on one side and the inner corner 153 of the eye on the same side.
- Hv6 corresponds to the distance between the points of the base of the nose 133 on the other side and the inner corner 143 of the eye also on this side.
- the nose is normal if:
- the nose is short if:
- the nose is long if:
- Figure 22a also shows the points and dimensions useful for establishing the criteria relating to the detection of the width of the nose. These criteria are intended to establish the relative proportions of the nose relative to the rest of the face.
- the nose can thus be classified into three categories, namely narrow, normal, or wide.
- the axes of Figure 22a are used to establish these proportions.
- the height of the nose with respect to the chin is preferably established on the basis of an average between the two sides of the nose. Hv5 and Hv6 have been previously described.
- Lv4 corresponds to the width between the points 132 and 133 of the base of the nose, on each side of the nostrils.
- the nose is normal or equivalent to the reference image if:
- Lv4 is substantially equal to 2/3 x (hv5 + hv6) / 2.
- the nose is narrow if:
- Lv4 is substantially smaller than 2/3 x (hv5 + hv6) / 2.
- the nose is wide if:
- Lv4 is substantially larger than 2/3 x (hv5 + hv6) / 2.
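The 2/3 rule above can be sketched as follows; the tolerance band used to decide "substantially equal" is an assumption, since the text does not quantify it:

```python
def classify_nose_width(lv4: float, hv5: float, hv6: float,
                        tol: float = 0.05) -> str:
    """Compare the nostril width Lv4 with 2/3 of the mean distance from
    the base of the nose to the inner eye corners (Hv5, Hv6).
    tol is an assumed tolerance for 'substantially equal'."""
    target = (2.0 / 3.0) * (hv5 + hv6) / 2.0
    if lv4 < target * (1 - tol):
        return "narrow"
    if lv4 > target * (1 + tol):
        return "wide"
    return "normal"  # equivalent to the reference image
```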
- FIG. 22f shows the points and dimensions that are useful for establishing the criteria relating to the detection of the width of the nose.
- the nose is also classified into three categories, namely narrow, normal or wide.
- the points 117a, 117b and 132, 133, through which the axis M3 of FIG. 22f passes, are used to establish these proportions.
- a comparison between the width of the face and the width of the nose makes it possible to determine which category the nose falls into.
- Lv4 corresponds to the width between the points 132 and 133 at the base of the nose, on each side of the nostrils.
- Lv7 corresponds to the width between the points 117a and 117b of the face.
- the nose is normal or equivalent to the reference image if:
- Lv4 is substantially equal to 1/4 x Lv7.
- the nose is narrow if:
- Lv4 is substantially smaller than 1/4 x Lv7.
- the nose is wide if:
- Lv4 is substantially larger than 1/4 x Lv7.
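The quarter-of-face-width criterion can likewise be sketched; as before, the tolerance for "substantially equal" is an assumption:

```python
def classify_nose_width_vs_face(lv4: float, lv7: float,
                                tol: float = 0.05) -> str:
    """Compare the nostril width Lv4 with one quarter of the face
    width Lv7 (points 117a-117b); tol is an assumed tolerance for
    'substantially equal'."""
    target = lv7 / 4.0
    if lv4 < target * (1 - tol):
        return "narrow"
    if lv4 > target * (1 + tol):
        return "wide"
    return "normal"  # equivalent to the reference image
```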
- Figures 22b and 22c illustrate examples of patches to be applied to the nose according to the classifications made.
- Figure 22b shows a nose that is too wide and Figure 22c a nose that is too narrow.
- the zones f11bd and f11bg each represent an area where texture can be applied in the hollow of the wings of the nose.
- the shapes f10bd and f10bg provide an application darker than the detected skin tone, to darken this portion of the nose.
- the areas f12cd and f12cg provide an application lighter than the detected skin tone, to brighten this portion of the nose.
- in the case where the nose is too short, the aim is to thin certain portions of the nose, preferably using a mask of the top type, as illustrated. In the opposite case, where the nose is too long, an application darker than the skin tone is performed on the lower part of the nose.
- Figure 22d shows the points and dimensions useful for establishing the criteria relating to the detection of the shape of the nose. These criteria are intended to determine the straightness of the nose in relation to the face.
- the nose can thus be classified into three categories, namely straight, deviated to the left (zone G), or deviated to the right (zone D).
- the axes of Figure 22d are used to establish these proportions.
- M1 and M2 have been previously described.
- Lv5 and Lv6 correspond to the width between the axis M1 and the points 132 and 133 of the base of the nose, on each side of the nostrils.
- the nose is normal or equivalent to the reference image if:
- Lv5 is substantially equal to Lv6.
- the nose is deviated to the right if:
- Lv5 is substantially larger than Lv6.
- the nose is deviated to the left if:
- Lv5 is substantially smaller than Lv6.
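The deviation criterion compares the two half-widths about the median axis M1; a sketch follows, with an assumed tolerance for "substantially equal":

```python
def classify_nose_deviation(lv5: float, lv6: float,
                            tol: float = 0.05) -> str:
    """Compare Lv5 and Lv6, the widths from the axis M1 to points 132
    and 133 at the base of the nose; tol is an assumed tolerance."""
    if lv5 > lv6 * (1 + tol):
        return "deviated right"
    if lv5 < lv6 * (1 - tol):
        return "deviated left"
    return "straight"  # equivalent to the reference image
```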
- FIG. 22e illustrates an example of a patch to be applied to the nose as a function of the classification made for the shape of the nose.
- Figure 22e shows a nose deviated to the left. In this case, to perform the compensation, a mask as illustrated is used.
- the zones f13ed and f13eg each represent an area with an application lighter than the detected skin tone, to lighten this portion of the nose.
- the area f14e provides an application darker than the detected skin tone, to darken this portion of the nose.
- EYEBROWS
- Figure 26 shows the points useful for determining the space between the eye and the eyebrow.
- Ls1 represents the distance between the inner corner of the eye 143 and the inner end of the eyebrow 163.
- Ls2 represents the distance between the upper portion of the eye 144 and the top of the eyebrow 162. These distances make it possible to detect the type of gap between the eye and the eyebrow.
- the type of deviation can be determined from Ls1, from Ls2, or from both distances, using a compound or cumulative criterion. Depending on the category detected, one or more suitable types of mask can be suggested automatically. The user can then apply a corresponding makeup, based on the example of the mask.
- the deviation types are as follows:
- a typical makeup involves predetermined colors. These colors are applied in a neutral way, without taking into account the features and the shape of the face of the person to be made up.
- most faces are not fully suited to a color application made without adaptation.
- an image of the person to be made up is used in order to extract certain characteristics related to the features, the shape, and possibly the colors.
- the clothing colors can also be taken into account for the adjustment or adaptation of the colors of the mask.
- the colors of the mask can help to suggest subsidiary colors to guide the selection of a dress.
- the source of the colors comes for example from the references of the various products provided by the user. These colors are in a database provided for this purpose. They can be preclassified into categories.
- the colors are taken from specific areas of the face. These color values are usually expressed in hexadecimal and then converted to HSL values (Hue, Saturation, Lightness).
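The hexadecimal-to-HSL conversion described above can be sketched with Python's standard `colorsys` module (note that `colorsys` uses the HLS ordering internally):

```python
import colorsys

def hex_to_hsl(hex_color: str):
    """Convert a hexadecimal RGB color (e.g. '#c08050') to
    (hue, saturation, lightness), each component in [0, 1]."""
    s = hex_color.lstrip('#')
    r, g, b = (int(s[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
    h, l, sat = colorsys.rgb_to_hls(r, g, b)  # colorsys returns H, L, S
    return h, sat, l
```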
- the TSL (HSL) diagram depicts a three-dimensional representation of color in the form of two inverted cones whose common base shows, at its periphery, the maximum color saturation. The center of the circle is gray, while lightness increases upward and decreases downward.
- One or more rules can be applied to the values obtained in order to classify them in a list of colors.
- the color characteristics of three zones are used to compose the coloring mask: the eyes 104 and 105, more particularly the iris (preferably without reference to the reference image for the color); the skin, especially at the level of the cheeks; and the hair.
- a double comparison is advantageously used, namely a comparison of the location of the landmarks on the one hand, and a comparison of the colors of the areas near the landmarks on the other.
- the following table lists some typical colors for each zone.
- an appropriate mask can be selected. If a mask has already been selected using the shape and line criteria of the target image, it can be adapted or nuanced according to the color classification performed at this step of the process.
- the search for the color adapted to a target image is advantageously performed according to its location in the TSL color space.
- this search consists of finding the closest colors available in the database, applying any applicable adaptation rules.
- the color is determined by the shortest distance between the detected color and the available colors in a TSL or equivalent space.
- the TSL values of a color reference are pre-loaded into the database. It is also possible to add other constraints to the color selection. For example: choice by product, manufacturer, budget, etc.
- the adaptation of a mask to simulate the addition of skin coloration is determined according to the color of the skin detected.
- COLO is the position of the detected color.
- the values CO represent the colors of the database.
- the product whose tone is most appropriate for the color of the skin can be obtained.
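The shortest-distance search described above can be sketched as a nearest-neighbour lookup over the database colors (the CO values), given the detected color (COLO). A plain Euclidean distance with wrap-around handling of hue, equal weighting of the three axes, and the example product shades are all assumptions for illustration:

```python
import math

def nearest_color(detected, database):
    """Return the database entry (name, (h, s, l)) closest to the
    detected (h, s, l) color in HSL/TSL space.  Hue is circular, so
    its difference is taken on the circle; equal axis weighting is
    an assumption."""
    def dist(c1, c2):
        dh = abs(c1[0] - c2[0])
        dh = min(dh, 1.0 - dh)  # hue wraps around in [0, 1)
        return math.sqrt(dh ** 2 + (c1[1] - c2[1]) ** 2
                         + (c1[2] - c2[2]) ** 2)
    return min(database.items(), key=lambda kv: dist(detected, kv[1]))

# Hypothetical product shades, for illustration only
shades = {"ivory": (0.08, 0.30, 0.85),
          "beige": (0.09, 0.35, 0.70),
          "bronze": (0.07, 0.50, 0.45)}
```

Additional constraints (product, manufacturer, budget) can then be applied by filtering the database before the lookup.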
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP10747492A EP2462535A2 (fr) | 2009-08-04 | 2010-07-28 | Procédé de traitement d'image pour corriger une image cible en fonction d'une image de reference et dispositif de traitement d'image correspondant |
JP2012523398A JP2013501292A (ja) | 2009-08-04 | 2010-07-28 | 基準画像に対して対象画像を補正する画像処理方法及びその画像処理装置 |
CA2769583A CA2769583A1 (fr) | 2009-08-04 | 2010-07-28 | Procede de traitement d'image pour corriger une image cible en fonction d'une image de reference et dispositif de traitement d'image correspondant |
US13/388,511 US20120177288A1 (en) | 2009-08-04 | 2010-07-28 | Image-processing method for correcting a target image with respect to a reference image, and corresponding image-processing device |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR09/03856 | 2009-08-04 | ||
FR0903856 | 2009-08-04 | ||
FR1001916A FR2959846B1 (fr) | 2010-05-04 | 2010-05-04 | Procede de traitement d'image pour corriger une cible en fonction d'une image de reference et dispositif de traitement d'image correspondant |
FR10/01916 | 2010-05-04 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2011015928A2 true WO2011015928A2 (fr) | 2011-02-10 |
WO2011015928A3 WO2011015928A3 (fr) | 2011-04-21 |
Family
ID=43425905
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2010/001914 WO2011015928A2 (fr) | 2009-08-04 | 2010-07-28 | Procede de traitement d'image pour corriger une image cible en fonction d'une image de reference et dispositif de traitement d'image correspondant |
Country Status (6)
Country | Link |
---|---|
US (1) | US20120177288A1 (fr) |
EP (1) | EP2462535A2 (fr) |
JP (1) | JP2013501292A (fr) |
KR (1) | KR20120055598A (fr) |
CA (1) | CA2769583A1 (fr) |
WO (1) | WO2011015928A2 (fr) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102254147A (zh) * | 2011-04-18 | 2011-11-23 | 哈尔滨工业大学 | 一种基于星图匹配的远距离空间运动目标识别方法 |
US20130141458A1 (en) * | 2011-12-02 | 2013-06-06 | Hon Hai Precision Industry Co., Ltd. | Image processing device and method |
CN108710853A (zh) * | 2018-05-21 | 2018-10-26 | 深圳市梦网科技发展有限公司 | 人脸识别方法及装置 |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103136543B (zh) * | 2011-12-02 | 2016-08-10 | 湖南欧姆电子有限公司 | 图像处理装置及图像处理方法 |
US8433107B1 (en) * | 2011-12-28 | 2013-04-30 | Arcsoft (Hangzhou) Multimedia Technology Co., Ltd. | Method of enhancing a nose area of an image and related computing device |
US8538089B2 (en) * | 2011-12-28 | 2013-09-17 | Arcsoft (Hangzhou) Multimedia Technology Co., Ltd. | Method of performing eyebrow shaping on an image and related computing device |
JP5895703B2 (ja) * | 2012-05-22 | 2016-03-30 | ソニー株式会社 | 画像処理装置及び画像処理方法、並びにコンピューター・プログラム |
CN103632165B (zh) | 2013-11-28 | 2017-07-04 | 小米科技有限责任公司 | 一种图像处理的方法、装置及终端设备 |
JP6413271B2 (ja) * | 2014-03-20 | 2018-10-31 | フリュー株式会社 | 情報提供装置、画像分析システム、情報提供装置の制御方法、画像分析方法、制御プログラム、および記録媒体 |
JP6369246B2 (ja) * | 2014-09-08 | 2018-08-08 | オムロン株式会社 | 似顔絵生成装置、似顔絵生成方法 |
US10163247B2 (en) * | 2015-07-14 | 2018-12-25 | Microsoft Technology Licensing, Llc | Context-adaptive allocation of render model resources |
US9916497B2 (en) | 2015-07-31 | 2018-03-13 | Sony Corporation | Automated embedding and blending head images |
CN108292418B (zh) * | 2015-12-15 | 2022-04-26 | 日本时尚造型师协会 | 信息提供装置及信息提供方法 |
GB2550344B (en) * | 2016-05-13 | 2020-06-03 | Holition Ltd | Locating and augmenting object features in images |
WO2017149315A1 (fr) * | 2016-03-02 | 2017-09-08 | Holition Limited | Localisation et augmentation de caractéristiques d'objet dans des images |
TWI573093B (zh) * | 2016-06-14 | 2017-03-01 | Asustek Comp Inc | 建立虛擬彩妝資料的方法、具備建立虛擬彩妝資料之方法的電子裝置以及其非暫態電腦可讀取記錄媒體 |
CN108804972A (zh) * | 2017-04-27 | 2018-11-13 | 丽宝大数据股份有限公司 | 唇彩指引装置及方法 |
CN109419140A (zh) | 2017-08-31 | 2019-03-05 | 丽宝大数据股份有限公司 | 推荐眉毛形状显示方法与电子装置 |
JP2019070870A (ja) * | 2017-10-05 | 2019-05-09 | カシオ計算機株式会社 | 画像処理装置、画像処理方法及びプログラム |
JP7087331B2 (ja) | 2017-10-05 | 2022-06-21 | カシオ計算機株式会社 | 画像処理装置、画像処理方法及びプログラム |
JP6803046B2 (ja) * | 2017-10-05 | 2020-12-23 | 株式会社顔分析パーソナルメイクアップ研究所 | フィルム及び顔分析装置 |
CN108230315B (zh) * | 2018-01-04 | 2021-05-25 | 西安理工大学 | 一种基于机器视觉的口罩带缺失检测方法 |
US10574890B2 (en) | 2018-01-12 | 2020-02-25 | Movidius Ltd. | Methods and apparatus to operate a mobile camera for low-power usage |
US10915995B2 (en) | 2018-09-24 | 2021-02-09 | Movidius Ltd. | Methods and apparatus to generate masked images based on selective privacy and/or location tracking |
KR102607789B1 (ko) * | 2018-12-17 | 2023-11-30 | 삼성전자주식회사 | 이미지 처리 방법 및 그 전자 장치 |
CN112561850A (zh) * | 2019-09-26 | 2021-03-26 | 上海汽车集团股份有限公司 | 一种汽车涂胶检测方法、设备及存储介质 |
US11501472B2 (en) * | 2019-09-27 | 2022-11-15 | Clemson University Research Foundation | Color adjustment system for disparate displays |
JP7455545B2 (ja) * | 2019-09-30 | 2024-03-26 | キヤノン株式会社 | 情報処理装置、情報処理方法、およびプログラム |
CN112950529A (zh) * | 2019-12-09 | 2021-06-11 | 丽宝大数据股份有限公司 | 脸部肌肉特征点自动标记方法 |
TW202122040A (zh) * | 2019-12-09 | 2021-06-16 | 麗寶大數據股份有限公司 | 臉部肌肉狀態分析與評價方法 |
CN111563855B (zh) * | 2020-04-29 | 2023-08-01 | 百度在线网络技术(北京)有限公司 | 图像处理的方法及装置 |
JP2023531264A (ja) * | 2020-06-29 | 2023-07-21 | ロレアル | 改善された顔属性分類およびその使用のためのシステム及び方法 |
CN111832512A (zh) * | 2020-07-21 | 2020-10-27 | 虎博网络技术(上海)有限公司 | 表情检测方法和装置 |
CN115797198B (zh) * | 2022-10-24 | 2023-05-23 | 北京华益精点生物技术有限公司 | 图像标记矫正方法及相关设备 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008050062A1 (fr) | 2006-10-24 | 2008-05-02 | Jean-Marc Robin | Procédé et dispositif de simulation virtuelle d'une séquence d'images vidéo. |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3529954B2 (ja) * | 1996-09-05 | 2004-05-24 | 株式会社資生堂 | 顔だち分類法及び顔だちマップ |
JPH10289303A (ja) * | 1997-04-16 | 1998-10-27 | Pola Chem Ind Inc | 好印象を形成するメークアップの選択法 |
US5990901A (en) * | 1997-06-27 | 1999-11-23 | Microsoft Corporation | Model based image editing and correction |
AU3662600A (en) * | 2000-03-30 | 2001-10-15 | Lucette Robin | Digital remote data processing system for transforming an image, in particular an image of the human face |
JP4789408B2 (ja) * | 2003-06-30 | 2011-10-12 | 株式会社 資生堂 | 目の形態分類方法及び形態分類マップ並びに目の化粧方法 |
KR101363097B1 (ko) * | 2004-10-22 | 2014-02-13 | 가부시키가이샤 시세이도 | 입술의 분류 방법, 화장 방법, 분류 맵 및 화장용 기구 |
WO2007063878A1 (fr) * | 2005-12-01 | 2007-06-07 | Shiseido Company, Ltd. | Méthode de classement de visage, dispositif de classement de visage, carte de classement, programme de classement de visage, support d’enregistrement sur lequel ce programme est enregistré |
WO2008102440A1 (fr) * | 2007-02-21 | 2008-08-28 | Tadashi Goino | Dispositif et procédé de création d'image de visage maquillé |
US7916971B2 (en) * | 2007-05-24 | 2011-03-29 | Tessera Technologies Ireland Limited | Image processing method and apparatus |
JP2009039523A (ja) * | 2007-07-18 | 2009-02-26 | Shiseido Co Ltd | メイクアップシミュレーションに利用する端末装置 |
KR101590868B1 (ko) * | 2009-07-17 | 2016-02-02 | 삼성전자주식회사 | 피부색을 보정하는 영상 처리 방법, 장치, 디지털 촬영 장치, 및 컴퓨터 판독가능 저장매체 |
2010
- 2010-07-28 CA CA2769583A patent/CA2769583A1/fr not_active Abandoned
- 2010-07-28 US US13/388,511 patent/US20120177288A1/en not_active Abandoned
- 2010-07-28 JP JP2012523398A patent/JP2013501292A/ja active Pending
- 2010-07-28 WO PCT/IB2010/001914 patent/WO2011015928A2/fr active Application Filing
- 2010-07-28 EP EP10747492A patent/EP2462535A2/fr not_active Withdrawn
- 2010-07-28 KR KR1020127005746A patent/KR20120055598A/ko not_active Application Discontinuation
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008050062A1 (fr) | 2006-10-24 | 2008-05-02 | Jean-Marc Robin | Procédé et dispositif de simulation virtuelle d'une séquence d'images vidéo. |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102254147A (zh) * | 2011-04-18 | 2011-11-23 | 哈尔滨工业大学 | 一种基于星图匹配的远距离空间运动目标识别方法 |
US20130141458A1 (en) * | 2011-12-02 | 2013-06-06 | Hon Hai Precision Industry Co., Ltd. | Image processing device and method |
CN108710853A (zh) * | 2018-05-21 | 2018-10-26 | 深圳市梦网科技发展有限公司 | 人脸识别方法及装置 |
CN108710853B (zh) * | 2018-05-21 | 2021-01-01 | 深圳市梦网科技发展有限公司 | 人脸识别方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
CA2769583A1 (fr) | 2011-02-10 |
JP2013501292A (ja) | 2013-01-10 |
KR20120055598A (ko) | 2012-05-31 |
WO2011015928A3 (fr) | 2011-04-21 |
US20120177288A1 (en) | 2012-07-12 |
EP2462535A2 (fr) | 2012-06-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2462535A2 (fr) | Procédé de traitement d'image pour corriger une image cible en fonction d'une image de reference et dispositif de traitement d'image correspondant | |
EP2678735B1 (fr) | Procédé de détermination d'au moins un paramètre géométrico-physionomique associé au montage d'une lentille ophthalmique dans une monture de lunettes portée par un porteur | |
EP1431907B1 (fr) | Evaluation de la netteté d'une image d'iris d'oeil | |
EP2076886A1 (fr) | Procédé et dispositif de simulation virtuelle d'une séquence d'images vidéo | |
WO2003007243A2 (fr) | Procede et systeme pour modifier une image numerique en prenant en compte son bruit | |
FR2981254A1 (fr) | Methode de simulation d'une chevelure a colorimetrie variable et dispositif pour la mise en oeuvre de la methode | |
WO2013098512A1 (fr) | Procédé et dispositif de détection et de quantification de signes cutanés sur une zone de peau | |
FR3058818A1 (fr) | Procede d'augmentation de la saturation d'une image, et dispositif correspondant. | |
EP3921798A1 (fr) | Procédé de segmentation automatique de dents | |
CN104917935A (zh) | 图像处理装置以及图像处理方法 | |
EP3614306A1 (fr) | Procédé de localisation et d'identification de visage et de détermination de pose, à partir d'une vue tridimensionnelle | |
EP3866064A1 (fr) | Procede d'authentification ou d'identification d'un individu | |
FR3111268A1 (fr) | Procédé de segmentation automatique d’une arcade dentaire | |
CN106548114B (zh) | 图像处理方法、装置及计算机可读介质 | |
WO2021245273A1 (fr) | Procédé et dispositif de reconstruction tridimensionnelle d'un visage avec partie dentée à partir d'une seule image | |
Eppenhof et al. | Retinal artery/vein classification via graph cut optimization | |
EP2333693B1 (fr) | Procédé de correction de la position des yeux dans une image | |
JP4775599B2 (ja) | 目の位置の検出方法 | |
WO2011138649A2 (fr) | Procédé de traitement d'images pour application d'une couleur | |
EP4229544A1 (fr) | Procédé de traitement d'images | |
EP3929809A1 (fr) | Procédé de détection d'au moins un trait biométrique visible sur une image d entrée au moyen d'un réseau de neurones à convolution | |
WO2011141769A1 (fr) | Procédé d'évaluation d'une caractéristique de la typologie corporelle | |
WO2011055224A1 (fr) | Dispositif et procede de detection et suivi des contours interieur et exterieur des levres | |
JP4683236B2 (ja) | 目の位置の検出方法 | |
CA3161536A1 (fr) | Procede, dispositif et produit programme d'ordinateur de decodage d'un bulletin de jeu |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10747492 Country of ref document: EP Kind code of ref document: A2 |
WWE | Wipo information: entry into national phase |
Ref document number: 2769583 Country of ref document: CA |
WWE | Wipo information: entry into national phase |
Ref document number: 2012523398 Country of ref document: JP |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 20127005746 Country of ref document: KR Kind code of ref document: A |
WWE | Wipo information: entry into national phase |
Ref document number: 2010747492 Country of ref document: EP |
WWE | Wipo information: entry into national phase |
Ref document number: 13388511 Country of ref document: US |