WO2004023395A1 - 頭部装着物画像合成方法、化粧画像合成方法、頭部装着物画像合成装置、化粧画像合成装置及びプログラム - Google Patents
- Publication number
- WO2004023395A1 (PCT/JP2003/011210)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- image
- estimating
- orientation
- shape
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A47—FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
- A47F—SPECIAL FURNITURE, FITTINGS, OR ACCESSORIES FOR SHOPS, STOREHOUSES, BARS, RESTAURANTS OR THE LIKE; PAYING COUNTERS
- A47F10/00—Furniture or installations specially adapted to particular types of service systems, not otherwise provided for
-
- G—PHYSICS
- G02—OPTICS
- G02C—SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
- G02C5/00—Constructions of non-optical parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/647—Three-dimensional objects by matching two-dimensional images to three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- A—HUMAN NECESSITIES
- A45—HAND OR TRAVELLING ARTICLES
- A45D—HAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
- A45D44/00—Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
- A45D44/005—Other cosmetic or toiletry articles, e.g. for hairdressers' rooms for selecting or displaying personal cosmetic colours or hairstyle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- the present invention relates to a method and apparatus for taking images of articles worn on the head, such as glasses, earrings, hats, and hairpieces, or of makeup applied to the face, and combining them with an image of a user so that, even though the article or makeup is not actually worn, the user appears as if actually wearing it and looking at themselves in a mirror.
- an image synthesizing system of this kind allows the user to judge whether various glasses or cosmetics suit them without trying on the actual articles or applying makeup to the face.
- as one conventional system of this kind for glasses, a system is known in which, after the user's face image is obtained by a camera, an operator manually positions a pre-stored glasses image and combines it with the face image (see, for example, Japanese Patent Application Laid-Open No. 63-03929; hereinafter Reference 1).
- image synthesizing systems are also known that can follow changes in the orientation of the face or in the expression, or perform a detailed simulation of the lenses (see, for example, Japanese Patent Application Laid-Open No. H06-139318 (hereinafter Reference 8) and Japanese Patent Application Laid-Open No. H06-118349 (Reference 9)).
- in these systems, after a two-dimensional image of the subject is obtained, a standard three-dimensional model is fitted onto the image and deformed.
- an image synthesizing system is also known in which the subject's face orientation is detected so as to produce a composite image that naturally follows the movement of the head (see, for example, Japanese Patent Application Laid-Open No. —283455; hereinafter Reference 10).
- for makeup, a system is known in which a user's image is obtained with a camera, displayed on a screen, and processed with a tablet so that a makeup simulation is performed (see, for example, Japanese Patent Application Laid-Open No. Sho 62-144280; hereinafter Reference 11).
- an image synthesizing system is known in which the light pen used for this operation is shaped like a cosmetic tool so that the operation feels natural (see, for example, Japanese Patent Application Laid-Open No. 63-316037; hereinafter Reference 12), and another is known that simulates lipsticks, powders, and eyebrows (see, for example, Japanese Patent Application Laid-Open No. H06-31696; hereinafter Reference 13).
- however, these conventional simulators for glasses and makeup mainly synthesize images onto a two-dimensional still image, so that, unlike actually putting on the article or makeup and checking the result in a mirror, the user could not confirm how it really fits.
- in other words, the image synthesizing systems described above cannot present a result as free of unnaturalness as the familiar view in a mirror.
- an object of the present invention is therefore to make it possible to synthesize a natural image, as if the user were actually wearing glasses, a hat, a hairpiece, or the like on the head, or actually wearing makeup, and checking the result in a mirror.
- the first invention comprises a step of measuring the three-dimensional shape of a person's face, a step of fitting glasses to the measured three-dimensional shape, a step of estimating the position and orientation of the face in an image of the person's face, and a step of generating and displaying the glasses in the image using the estimated position and orientation of the face.
- the second invention comprises a step of measuring the three-dimensional shape of a person's face and its reflectance characteristic, a step of fitting glasses to the measured three-dimensional shape, a step of estimating the position and orientation of the face in an image of the person's face using the measured three-dimensional shape and reflectance characteristic, and a step of generating and displaying the glasses in the image using the estimated position and orientation.
- the third and fourth inventions replace the estimating step: a basis for estimating the face position and orientation under various lighting conditions is generated in advance from the measured three-dimensional shape and reflectance characteristic, and the position and orientation of the face in the image are estimated using that basis before the glasses are generated and displayed.
- the fifth invention is, in the first, second, third, or fourth invention, characterized in that an earring, a hat, a hairpiece, or another article worn on the head is combined with the image instead of the glasses.
- the sixth invention is characterized in that a texture image with little shading is used instead of the reflectance characteristic.
- the seventh invention comprises a step of measuring the three-dimensional shape of a person's face, a step of applying makeup to the measured three-dimensional shape, a step of fitting a standard three-dimensional face model provided in advance with deformation rules to the measured three-dimensional shape, a step of estimating the position and orientation of the face in an image of the person's face, a step of estimating the change in facial expression in the image, and a step of generating and displaying the makeup on the face in the image using the estimated position, orientation, and expression change.
- the eighth invention comprises a step of measuring the three-dimensional shape of a person's face and its reflectance characteristic, a step of applying makeup to the measured three-dimensional shape, a step of fitting a standard three-dimensional face model to the measured three-dimensional shape, a step of estimating the position and orientation of the face in an image of the person's face using the measured three-dimensional shape and reflectance characteristic, a step of estimating the change in facial expression in the image, and a step of generating and displaying the makeup on the face in the image using the estimated position, orientation, and expression change.
- the ninth invention includes a step of generating, from the measured three-dimensional shape and reflectance characteristic, a basis for estimating the face position and orientation under various lighting conditions, and a step of estimating the position and orientation of the face in the image using that basis.
- the tenth invention is, in the eighth invention, characterized in that, instead of the step of estimating the change in facial expression in the image of the person's face, a basis for estimating the face position and orientation under various lighting conditions is generated in advance from the measured three-dimensional shape and reflectance characteristic, and the expression change in the image is estimated using that basis together with the standard three-dimensional face model with deformation rules.
- the eleventh invention is, in the eighth, ninth, or tenth invention, characterized in that a texture image with little shading is used instead of the reflectance characteristic.
- the twelfth invention comprises three-dimensional shape measuring means for measuring the three-dimensional shape of a person's face, glasses selecting and fitting means for fitting selected glasses to the measured three-dimensional shape, face position and orientation estimating means for estimating the position and orientation of the face in an image of the person's face, glasses image generating means for generating, using the estimated position and orientation, a glasses image in which the glasses are arranged to fit the face in the image, and face and glasses image display means for combining and displaying the face image and the glasses image.
- the thirteenth invention comprises three-dimensional shape and reflectance measuring means for measuring the three-dimensional shape of a person's face and its reflectance characteristic, glasses selecting and fitting means for fitting the selected glasses to the measured three-dimensional shape, face position and orientation estimating means for estimating the position and orientation of the face in an image of the person's face using the measured three-dimensional shape and reflectance characteristic, glasses image generating means for generating a glasses image in which the glasses are arranged to fit the face using the estimated position and orientation, and face and glasses image display means for combining and displaying the face image and the glasses image.
- the fourteenth invention includes geodesic illumination basis calculation means for generating in advance, from the measured three-dimensional shape and reflectance characteristic, a geodesic illumination basis for estimating the face position and orientation under various lighting conditions, and face position and orientation estimating means for estimating the position and orientation of the face in the image using that basis.
- the fifteenth invention is based on the twelfth, thirteenth, or fourteenth invention and further specifies the face position and orientation estimating means, the glasses image generating means, and the display means described above.
- the sixteenth invention is, in the twelfth, thirteenth, fourteenth, or fifteenth invention, characterized in that an earring, a hat, a hairpiece, or another article worn on the head is combined with the image instead of the glasses.
- the seventeenth invention is, in the thirteenth, fourteenth, fifteenth, or sixteenth invention, characterized in that a texture image with little shading is used instead of the reflectance characteristic.
- an eighteenth invention comprises three-dimensional shape measuring means for measuring the three-dimensional shape of a person's face, makeup means for applying makeup to the measured three-dimensional shape, storage means for a standard three-dimensional face model provided in advance with deformation rules, three-dimensional model fitting means for fitting the standard model to the measured three-dimensional shape, face position and orientation estimating means for estimating the position and orientation of the face in an image of the person's face, face part changing means for estimating the change in facial expression in the image, makeup image generating means for generating, using the estimated position, orientation, and expression change, a makeup image in which the makeup applied by the makeup means is deformed to fit the face in the image, and display means for combining and displaying the face image and the makeup image.
- the nineteenth invention comprises three-dimensional shape and reflectance measuring means for measuring the three-dimensional shape of a person's face and its reflectance characteristic, makeup means for applying makeup to the measured three-dimensional shape, storage means for recording the standard three-dimensional face model with deformation rules, three-dimensional model fitting means for fitting the standard model to the measured three-dimensional shape, face position and orientation estimating means using the measured three-dimensional shape and reflectance characteristic, face part changing means for estimating the change in facial expression in the image, makeup image generating means, and display means for combining and displaying the face image and the makeup image.
- the twentieth invention includes geodesic illumination basis calculation means for generating in advance, from the measured three-dimensional shape and reflectance characteristic, a geodesic illumination basis for estimating the face position and orientation under various lighting conditions, and face position and orientation estimating means for estimating the position and orientation of the face in the image using that basis.
- the twenty-first invention is, in the nineteenth invention, characterized in that, instead of the face part changing means described above, face part changing means is used that estimates the expression change using a basis generated in advance from the three-dimensional shapes and reflectance characteristics measured under various conditions and lighting conditions, together with the standard three-dimensional face model with deformation rules.
- the twenty-second invention is, in the nineteenth, twentieth, or twenty-first invention, characterized in that a texture image with little shading is used instead of the reflectance characteristic.
- the program of the twenty-third invention causes a computer to execute a step of measuring the three-dimensional shape of a person's face, a step of fitting glasses to the measured three-dimensional shape, a step of estimating the position and orientation of the face in an image of the person's face, and a step of generating and displaying the glasses in the image using the estimated position and orientation.
- the program of the twenty-fourth invention causes a computer to execute a step of measuring the three-dimensional shape of a person's face and its reflectance characteristic, a step of fitting glasses to the measured three-dimensional shape, a step of estimating the position and orientation of the face in an image of the face using the measured three-dimensional shape and reflectance characteristic, and a step of generating and displaying the glasses in the image using the estimated position and orientation.
- the program of the twenty-fifth invention is the program of the twenty-fourth invention in which the step of estimating the position and orientation of the face in the image using the three-dimensional shape and the reflectance characteristic is replaced by a step of estimating them using a basis generated in advance.
- a further program causes a computer to execute a step of measuring the three-dimensional shape of a person's face and its reflectance characteristic, a step of applying makeup to the measured three-dimensional shape, and a step of capturing an image of the person's face.
- in operation, the three-dimensional shape of the person's face is first measured by the three-dimensional shape measuring means; the glasses selecting and fitting means then fits the glasses selected by the user to the three-dimensional shape; the face position and orientation estimating means estimates the position and orientation of the face in the image; and, using the estimated position and orientation, the glasses image generating means generates a glasses image in which the glasses are arranged to fit the face in the image.
- finally, the face and glasses image display means combines and displays the camera image and the glasses image.
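The per-frame operation just described can be sketched as a small loop. All function names here are illustrative stand-ins for the patent's means 3, 6, 7, and 8, not an actual API, and the bodies are trivial placeholders:

```python
# Minimal sketch of the glasses-simulation flow. Every function is an
# illustrative stand-in for one of the "means" in the description.

def fit_glasses(face_shape, glasses):
    # Glasses selecting and fitting means: attach glasses data to the face shape.
    return {"face": face_shape, "glasses": glasses}

def estimate_pose(face_shape, frame):
    # Face position and orientation estimating means (stubbed as identity pose).
    return {"T": (0.0, 0.0, 0.0), "R": (0.0, 0.0, 0.0)}

def render_glasses(fitted, pose):
    # Glasses image generating means: would render CG at the given pose.
    return ("glasses_layer", pose)

def composite(frame, layer):
    # Face and glasses image display means: overlay the rendered layer.
    return (frame, layer)

def simulate_glasses(face_shape, glasses, frames):
    """Per-frame loop of the first embodiment: pose estimation, rendering,
    and compositing, mirroring means 6, 7 and 8."""
    fitted = fit_glasses(face_shape, glasses)
    out = []
    for frame in frames:
        pose = estimate_pose(face_shape, frame)
        out.append(composite(frame, render_glasses(fitted, pose)))
    return out
```

The loop structure (fit once, then estimate/render/composite per frame) is the point of the sketch; the real means operate on signals 102 to 109.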
- FIG. 1 is a functional block diagram illustrating the overall configuration of a glasses simulation apparatus according to a first embodiment of the present invention.
- FIG. 2 is an explanatory diagram of a face coordinate system.
- FIGS. 3A to 3C are explanatory diagrams of the three-dimensional data of the glasses.
- FIGS. 4(a) to 4(d) are diagrams explaining how the glasses are applied to the captured image.
- FIG. 5 is a functional block diagram illustrating a detailed configuration of a face position / direction estimation unit.
- FIG. 6 is a functional block diagram illustrating the overall configuration of a makeup simulation apparatus according to a second embodiment of the present invention.
- FIG. 7 is an explanatory diagram of the makeup means.
- FIG. 8 is an explanatory diagram of fitting between a standard face 3D shape model and measured 3D data.
- FIGS. 9(a) to 9(d) are schematic diagrams explaining the deformation of the face using the standard 3D model and its deformation rules.
- FIGS. 10(a) to 10(c) are diagrams explaining how a makeup image is applied to the captured image.
- FIG. 11 is a schematic block diagram showing the overall configuration of a computer system to which the glasses simulation device and the makeup simulation device are applied. BEST MODE FOR CARRYING OUT THE INVENTION
- the glasses simulation device 201 shown in FIG. 1 measures both the three-dimensional shape of the face and its reflectance characteristic, captures face images with an ordinary camera, and performs the processing of adding glasses to each of those images.
- the three-dimensional (3D) shape and reflectance measuring means 1 measures the three-dimensional shape of the face of the person who is the subject and its reflectance characteristic, or shading and texture images corresponding to them, and outputs these as the three-dimensional shape signal 102 and the reflectance characteristic signal 101.
- as a method for measuring the above three-dimensional data, for example, the method disclosed in Japanese Patent Application Laid-Open No. 2000-010125, "Three-dimensional shape measuring method and apparatus" (hereinafter Reference 14), can be used.
- besides this, various existing methods of three-dimensional shape measurement described in the book "Three-dimensional image measurement" (hereinafter Reference 15) can be used.
- with the method of Reference 14, a texture image is obtained simultaneously with the three-dimensional shape. A texture image that can be regarded as the reflectance characteristic can therefore easily be measured by arranging lamps around the face so that light falls uniformly on the entire face.
- the present invention can be configured with any method capable of measuring the three-dimensional shape and reflectance characteristic of the face.
- as shown in FIG. 2, consider a sphere that covers the face, centred on the centre of gravity of the face being measured, and express the three-dimensional coordinate value and texture value at each point P on the face surface. The point P can be uniquely expressed by the coordinates (s, t) of the point Q obtained by projecting P from the centre of gravity onto the surface of the sphere.
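A minimal sketch of this (s, t) parameterisation follows. The patent only specifies projection from the centre of gravity onto the sphere, so the particular angle convention (longitude/latitude in radians) is an assumption:

```python
import math

def sphere_coords(p, centroid):
    """Project a face-surface point P onto a sphere centred at the face's
    centre of gravity and return (s, t) as (longitude, latitude) angles,
    a sketch of the parameterisation of FIG. 2."""
    x, y, z = (pi - ci for pi, ci in zip(p, centroid))
    r = math.sqrt(x * x + y * y + z * z)   # distance from the centroid
    s = math.atan2(y, x)                   # longitude around the vertical axis
    t = math.asin(z / r)                   # latitude from the equatorial plane
    return s, t
```

Because the mapping depends only on direction from the centroid, every point of the face surface visible from the centroid gets a unique (s, t), which is what lets the shape, texture, and (later) makeup share one coordinate system.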
- 3D iK / CT features The three-dimensional shape and texture image obtained by I
- the geodesic illumination source means 2 receives the three-dimensional ⁇ $ -like signal 102 and the special signal 101 as inputs, finds a basis for approximating the appearance of the face under arbitrary illumination, and Output as bottom signal 103.
- for the geodesic illumination basis method, refer to Japanese Patent Application Laid-Open No. 2002-1575705, "Object collation method, object collation apparatus, and recording medium recording the program" (hereinafter Reference 16).
- the operation of the geodesic illumination basis calculation means 2 will be described in accordance with Reference 16.
- first, the change in luminance due to shading on the face surface is computed, and a group of textures under various lighting conditions is generated. The change in luminance and colour of each texture pixel can be calculated from the reflectance characteristic of the corresponding point on the object surface, the surface normal direction computable from the three-dimensional shape, and the direction of illumination. Cast shadows can be determined by ray tracing using the three-dimensional shape, so that for each pixel it can be decided whether light reaches it under the given lighting conditions. In this way the luminance of each pixel (s, t) of the face texture is obtained from the reflectance characteristic B(s, t) of the corresponding point on the face surface, the surface normal, and the illumination direction. Principal component analysis is then applied to the texture group, and the smallest number of eigenvectors whose eigenvalues reach a cumulative contribution ratio of 99% or more is selected and named the "geodesic illumination basis". In this way an appropriate number, for example about ten, of basis vectors is obtained, and this geodesic illumination basis approximates the whole texture group.
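The texture-group-plus-PCA construction can be sketched as follows. The Lambertian shading model, the random sampling of light directions, and the toy per-pixel data are all assumptions for illustration, and the cast-shadow (ray tracing) step is omitted; Reference 16's actual construction differs in detail:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy face: P pixels with per-pixel albedo and unit normals, stand-ins
# for the measured reflectance B(s, t) and the normals from the 3D shape.
P = 200
albedo = rng.uniform(0.2, 1.0, P)
normals = rng.normal(size=(P, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Texture group: Lambertian shading under many sampled light directions
# (cast shadows from ray tracing are omitted in this sketch).
L = rng.normal(size=(500, 3))
L /= np.linalg.norm(L, axis=1, keepdims=True)
textures = albedo * np.maximum(normals @ L.T, 0.0).T   # shape (500, P)

# PCA: keep the smallest number of eigenvectors whose eigenvalues reach
# a 99% cumulative contribution ratio -- the "geodesic illumination basis".
mean = textures.mean(axis=0)
U, svals, Vt = np.linalg.svd(textures - mean, full_matrices=False)
eigvals = svals ** 2
k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.99) + 1)
basis = Vt[:k]          # k basis textures, each of length P
```

For Lambertian surfaces only a handful of components dominate, which is why a small basis (around ten vectors in the description) approximates the whole texture group.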
- the glasses selecting and fitting means 3 receives the three-dimensional shape signal 102, the selection operation signal 104, and the glasses three-dimensional data signal 105 as inputs; it reads the three-dimensional data of the glasses specified by the user from the glasses storage means 4 and fits them to the measured three-dimensional shape of the face.
- FIGS. 3(a) to 3(c) show an example of the three-dimensional data of the glasses.
- the camera 5 captures images of the face of the person who is the subject and outputs an image signal 106 consisting of a plurality of frames.
- An example (for one frame) of the image signal 106 is shown in FIG. 4 (a).
- the face position and orientation estimating means 6 receives the image signal 106, the three-dimensional shape signal 102, and the geodesic illumination basis signal 103 as inputs and, as described in Reference 16, outputs for each frame of the image signal 106 the position and orientation of the face obtained by iteratively searching for a better match, as the face position and orientation signal 107.
- the face position and orientation estimating means 6 will be described in detail with reference to FIG. 5. In the following, the processing of one frame of the image signal is described, but the same processing is performed on each frame.
- the camera 5 is assumed to be a pinhole camera with focal length f. With the camera's XYZ coordinate system as reference, let the position of the face of the person in front of the camera 5 be (Tx, Ty, Tz)^T and its orientation be (Rx, Ry, Rz)^T. Each coordinate point (xi, yi, zi)^T of the measured three-dimensional shape of the face is then located at a corresponding position (Xi, Yi, Zi)^T in front of the camera 5.
- the face position and orientation search means 15 takes the initial estimates of the face position (Tx0, Ty0, Tz0)^T and orientation (Rx0, Ry0, Rz0)^T and generates candidate values (Tx0 + ΔTx, Ty0 + ΔTy, Tz0 + ΔTz)^T and (Rx0 + ΔRx, Ry0 + ΔRy, Rz0 + ΔRz)^T, which are output as the face position and orientation candidate signal 121.
- the illumination correction means 16 receives the face position and orientation candidate signal 121, the three-dimensional shape signal 102, and the geodesic illumination basis signal 103 as inputs, renders the eigenvectors forming the geodesic illumination basis onto the three-dimensional shape at the candidate position and orientation, and outputs the resulting image as the illumination-corrected image signal 122.
- the comparison means 17 takes the image signal 106 and the illumination-corrected image signal 122 as inputs and outputs their difference as the difference signal 123.
- the final judgment means 18 receives the difference signal 123 and the face position and orientation candidate signal 121 as inputs. If the difference has become sufficiently small, the estimation of the face position and orientation is finished, and the values at that time are output as the face position and orientation signal 107. If it is judged that the estimation is not yet finished, the position and orientation are corrected and the computation is repeated; the correction is output as the face position and orientation correction signal 124.
- the face position and orientation storage means 19 receives the face position and orientation signal 107 as input, stores its value, and outputs it as the previous-frame face position and orientation signal 125 as needed.
- FIG. 4(b) shows the result of drawing the three-dimensional shape using the estimated position and orientation of the subject's face.
- the glasses image generating means 7 receives the three-dimensional shape signal 102, the face position and orientation signal 107, the glasses selection and fitting signal 108, and the image signal 106 as inputs. The three-dimensional data of the glasses are arranged in the same position and orientation as the photographed face, using the relative positional relationship obtained when fitting the glasses to the three-dimensional data of the face, and a glasses image is generated by computer graphics and output as the glasses image signal 109.
- the face and glasses image display means 8 receives the image signal 106 and the glasses image signal 109 as inputs and displays the image with the glasses applied.
- Fig. 4 (d) shows an example of the final composite image.
- the composite image can be displayed as it is, or flipped left and right so that it looks as if reflected in a mirror.
- the recording medium K1 is a disk, a semiconductor memory, or another recording medium, and records a program for causing a computer to function as the glasses simulation device 201. This program is read by the glasses simulation device 201 and, by controlling its operation, realizes on the device the three-dimensional shape and reflectance measuring means 1, the geodesic illumination basis calculation means 2, the glasses selecting and fitting means 3, the face position and orientation estimating means 6, the glasses image generating means 7, and the face and glasses image display means 8.
- although the geodesic illumination basis method of Reference 16 has been described as the means for detecting the position and orientation of the face, any other method for detecting them may be used. Also, in the above description, the glasses image and the face image were simply superimposed in the display means 8.
- the present invention can be configured similarly using other articles worn on the head, such as earrings, hats, and hairpieces.
- the present invention can also be configured to realize a more natural display by taking the movement of the face into account, for example by making the swinging of earrings or of the hairpiece itself follow that movement naturally.
- the present invention can be configured similarly using not only a single article but also a combination of articles.
- the makeup simulation device 202 shown in FIG. 6 measures the three-dimensional shape of the face and its reflectance characteristic, captures face images with an ordinary camera, and synthesizes makeup onto each of those images.
- the makeup means 9 receives the three-dimensional shape signal 102, the reflectance characteristic signal 101, and the makeup operation signal 110 as inputs, applies makeup on the measured face, and outputs the result as the makeup three-dimensional data signal 111. As the method of applying makeup, a technique for painting on a two-dimensional image as described in References 11, 12, and 13 can be applied to a projected image of the face, after which the result is mapped back to the corresponding points on the three-dimensional shape.
- FIG. 7 shows an example. The reflectance characteristic is texture-mapped onto the three-dimensional shape, and an image viewed from an appropriate direction is generated by computer graphics as shown in FIG. 7. The user selects an appropriate colour and pen nib from the palette 126 and applies colour with the pen 127 (makeup operation signal 110). The applied makeup can be recorded at each point of the face surface: as shown in FIG. 2, the makeup at point P is expressed by projecting from the centre of gravity onto the sphere, so that the coordinates (s, t) of the point Q express it uniquely in the same coordinate system as the three-dimensional shape and reflectance characteristic. The created makeup is output as the makeup three-dimensional data signal 111.
- the three-dimensional model fitting means 10 receives the three-dimensional shape signal 102, the reflectance characteristic signal 101, the fitting operation signal 112, and the face standard 3D model signal 116 as inputs, associates the measured three-dimensional shape with the standard three-dimensional face model provided with deformation rules, and outputs the correspondence as the standard three-dimensional model fitting signal 113.
- this fitting is described in "Construction of a facial expression editing tool using a range finder" (hereinafter Reference 18).
- the present invention can also be configured with a method that first renders the measured data into a two-dimensional image and performs the association on that image, as described in References 8 and 9; any method that performs such an association can be used.
- for example, when calculating the amount of movement of a point P 134 in the area defined by the right eye corner points 131 and 132 and the right mouth end point 133, the point P is located at the position expressed by the ratios q and r, each taking a value from 0 to 1, and its movement amount is calculated by an interpolation equation using these ratios.
- the parameters a to p obtained by this operation, and the movement amounts (Δx1, Δy1, Δz1)^T to (Δx2, Δy2, Δz2)^T of each vertex coordinate on the face standard 3D model, are output as the standard three-dimensional model fitting signal 113. In this way all points on the face standard 3D model are mapped onto the measurement data. The face standard 3D model used in this embodiment is also provided with deformation rules: deformations that a person can perform, such as opening and closing the eyes and opening and closing the mouth, are defined so that they can be controlled as movement amounts of the coordinate points on the three-dimensional model.
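Since the patent's actual interpolation equation and its parameters a to p are not reproduced in this text, the movement-amount computation can only be sketched under an assumption; here a simple barycentric blend of the three feature-point displacements is used as a stand-in:

```python
def interpolate_displacement(q, r, d131, d132, d133):
    """Displacement of an interior point P located by ratios (q, r),
    q + r <= 1, inside the triangle of feature points 131, 132, 133,
    as a barycentric blend of the vertex displacements. This is an
    illustrative guess at the role of the patent's a..p equation,
    not the equation itself."""
    w0 = 1.0 - q - r   # weight of point 131; q and r weight 132 and 133
    return tuple(w0 * a + q * b + r * c
                 for a, b, c in zip(d131, d132, d133))
```

Any scheme of this shape gives a displacement field that agrees with the feature points at the corners and varies smoothly in between, which is what lets every model vertex be mapped onto the measurement data.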
- FIG. 9(a) is a schematic diagram showing, among the deformations, the deformation of the eyes. The leftmost drawing is the initial state; by giving the amount of closing of the eyes, the coordinate points are given movement amounts, as if the eyes were gradually narrowing.
- in FIG. 9(c), the leftmost drawing is assumed to be the measured face three-dimensional shape; FIG. 9(b) is a schematic diagram of the face standard 3D model fitted to this shape; and FIG. 9(d) is a schematic diagram of an eye shadow texture-mapped onto the measured three-dimensional shape. In this case the eye shadow can be deformed along with the deformation of the face standard 3D model.
- the face part changing means 11 receives the face position and orientation signal 107, the three-dimensional shape signal 102, the geodesic illumination basis signal 103, the face image signal 106, the standard three-dimensional model fitting signal 113, and the face standard 3D model signal 116 as inputs. Using the standard model fitted to the measured three-dimensional shape, it deforms the measured shape and the geodesic illumination basis on its surface in accordance with the estimated expression change.
- a plurality of three-dimensional shapes corresponding to individual expressions, for example an expression with the eyelids closed, one with the eyelids open, and one with the eyelids half closed, are prepared as templates. FIG. 9(c) shows such a template for closing the eyelids, obtained by deforming the standard 3D model shown in FIG. 9(a).
- the makeup image generating means 12 receives the reflectance characteristic signal 101, the face position and orientation signal 107, the makeup three-dimensional data signal 111, and the face part deformation signal 114 as inputs, creates an image in which the applied makeup is fitted to the face in the image, and outputs it as the makeup image signal 115.
- the face and makeup image display means 13 receives the image signal 106 and the makeup image signal 115 as inputs and overlays the makeup image on the face image on the screen for display.
- Fig. 10 (a) shows an example of an input image
- Fig. 10 (b) shows an example of a makeup image
- Fig. 10 (c) shows an example of a composite image.
- The composite image can be displayed as it is, or shown as if in a mirror by inverting it left and right. Further, in the face/makeup display means 13 described above, a more natural composite image can be obtained by adding transparency to the makeup image when it is overlaid on the face image.
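The transparency-blended overlay described above can be sketched per pixel row as follows. This is a hypothetical illustration: the RGB tuples, the single alpha value, and the use of None for makeup-free pixels are assumptions, not the patent's data format.

```python
def composite_row(face_row, makeup_row, alpha=0.7, mirror=False):
    """Overlay one row of the makeup image on the face image with
    transparency alpha; None marks makeup-free (fully transparent)
    pixels. mirror=True flips the row left-right so the composite
    appears as if seen in a mirror."""
    out = [
        face_px if makeup_px is None else tuple(
            round(alpha * m + (1 - alpha) * f)
            for m, f in zip(makeup_px, face_px)
        )
        for face_px, makeup_px in zip(face_row, makeup_row)
    ]
    return out[::-1] if mirror else out

face = [(100, 100, 100), (200, 200, 200)]
makeup = [None, (0, 0, 0)]
blended = composite_row(face, makeup, alpha=0.5)
mirrored = composite_row(face, [None, None], mirror=True)
```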
- The face standard 3D model/deformation rule storage means 14 stores the face standard 3D model together with the deformation rules for face patterns generated by expressions and the like, and outputs them as needed as the face standard 3D model/deformation rule signal 116.
- The recording medium 2 is a disk, semiconductor memory, or other recording medium, on which a program for causing a computer to function as the makeup simulation device 202 is recorded.
- This program is read by the makeup simulation device 202 and, by controlling its operation, realizes on the makeup simulation device 202 the three-dimensional shape measurement means 1, the geodesic illumination base detection means 9, the face position and orientation estimation means 6, the standard 3D model fitting means 10, the face part deformation means 11, and the face/makeup display means 13.
- The geodesic illumination base method described in the above-mentioned Reference 15 has been described as the means for estimating the position and orientation of the face, but the present invention can be configured with any other means as long as it estimates the face position and orientation.
- Likewise, although the face part deformation step has been described as an extension of the geodesic illumination base method of Reference 15, the present invention can be configured with any means as long as the face parts are estimated.
- The above description relates to the face image signal 106 for one frame output from the camera 5; the same operation as described above is performed sequentially for each frame constituting the face image signal 106.
- Although the wearing of a head-mounted object such as eyeglasses and the application of makeup have been described as separate examples, a combination of the two can be constructed in the same way.
- The computer system 300 shown in Fig. 11 includes a bus 310 and, as components connected to the bus 310, a processor 320 such as a CPU that handles information processing in the system, a main memory 330, a ROM (read-only memory) 340, a disk controller 350, a display controller 360, an input device 370, and a camera interface 380.
- The main memory 330 is composed of, for example, a RAM (random access memory) or other dynamic storage device (for example, DRAM, SRAM, etc.). It can store the program instructions executed by the processor 320 as well as variables and other intermediate information generated temporarily during their execution.
- RAM random access memory
- DRAM dynamic random access memory
- SRAM static random access memory
- The ROM 340 is constituted by, for example, a static storage device (for example, PROM, EPROM, EEPROM, or the like), and stores fixed information and commands used by the processor 320.
- An HDD (hard disk drive) 351 and/or a removable media drive (for magnetic disks, optical disks, magneto-optical disks, etc.) 352 are connected to the disk controller 350.
- The HDD 351 and/or the removable media drive 352 store various information and commands used by the processor 320, under the control of the disk controller 350.
- the display controller 360 includes a display 361 such as a CRT (Cathode Ray Tube) or a liquid crystal display.
- The display 361 is controlled by the display controller 360 so as to display each screen generated by the processor 320.
- the input device 370 is composed of, for example, a keyboard 371 and a pointing device (such as a mouse) 372.
- The input device 370 sends the various information and commands input, indicated, and selected by user operations to the processor 320.
- The camera interface 380 is equipped with, for example, a communication interface of a predetermined type (either wired or wireless can be adopted). A camera 381 composed of, for example, a CMOS-type solid-state image sensor is connected through this communication interface, and the image signal captured by the camera 381 is taken in through the camera interface 380.
- The programs 390 executed by the computer system 300 (the OS 391, a graphics processing program 392, etc.) are stored in a recording medium such as the ROM 340 or the hard disk 351, and executed by the processor 320.
- The present invention is not limited to the representative embodiments described above; based on the gist of the invention as defined by the scope of the claims, various modifications and changes can be made, and they also belong to the scope of the present invention.
- the present invention is suitable for applications such as a spectacle simulation device and a makeup simulation device using a computer system.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Ophthalmology & Optometry (AREA)
- Optics & Photonics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/513,354 US20050175234A1 (en) | 2002-09-03 | 2003-09-02 | Head-mounted object image combining method, makeup image combining method, headmounted object image combining device, makeup image composition device, and program |
EP03794188A EP1501046A4 (en) | 2002-09-03 | 2003-09-02 | PLACE OBJECT IMAGE ASSOCIATING METHOD ON THE HEAD, MOUNTING IMAGE ASSOCIATING METHOD, OBJECT IMAGE ASSOCIATING DEVICE ON THE HEAD, MOUNTING IMAGE COMPOSING DEVICE, AND PROGRAM |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002-257544 | 2002-09-03 | ||
JP2002257544A JP2004094773A (ja) | 2002-09-03 | 2002-09-03 | 頭部装着物画像合成方法、化粧画像合成方法、頭部装着物画像合成装置、化粧画像合成装置及びプログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004023395A1 true WO2004023395A1 (ja) | 2004-03-18 |
Family
ID=31972994
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/011210 WO2004023395A1 (ja) | 2002-09-03 | 2003-09-02 | 頭部装着物画像合成方法、化粧画像合成方法、頭部装着物画像合成装置、化粧画像合成装置及びプログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US20050175234A1 (ja) |
EP (1) | EP1501046A4 (ja) |
JP (1) | JP2004094773A (ja) |
WO (1) | WO2004023395A1 (ja) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005346206A (ja) * | 2004-05-31 | 2005-12-15 | Hoya Corp | 三次元画像作成システム及び三次元画像データ |
WO2006049147A1 (ja) * | 2004-11-04 | 2006-05-11 | Nec Corporation | 三次元形状推定システム及び画像生成システム |
JP3920889B2 (ja) * | 2004-12-28 | 2007-05-30 | 沖電気工業株式会社 | 画像合成装置 |
JP2006227838A (ja) * | 2005-02-16 | 2006-08-31 | Nec Corp | 画像処理装置及び画像処理プログラム |
JP4417292B2 (ja) * | 2005-05-25 | 2010-02-17 | ソフトバンクモバイル株式会社 | オブジェクト出力方法及び情報処理装置 |
US20100110073A1 (en) * | 2006-11-15 | 2010-05-06 | Tahg Llc | Method for creating, storing, and providing access to three-dimensionally scanned images |
US20100157021A1 (en) * | 2006-11-15 | 2010-06-24 | Abraham Thomas G | Method for creating, storing, and providing access to three-dimensionally scanned images |
EP2206093B1 (en) * | 2007-11-02 | 2013-06-26 | Koninklijke Philips Electronics N.V. | Automatic movie fly-path calculation |
JP2009211513A (ja) * | 2008-03-05 | 2009-09-17 | Toshiba Corp | 画像処理装置及びその方法 |
EP2304647B1 (en) * | 2008-05-08 | 2018-04-11 | Nuance Communication, Inc. | Localizing the position of a source of a voice signal |
JP5327866B2 (ja) * | 2009-09-11 | 2013-10-30 | 国立大学法人東京農工大学 | 眼鏡のフィッティングシミュレーションシステム、眼鏡のフィッティングシミュレーション方法及びプログラム |
JP5648299B2 (ja) * | 2010-03-16 | 2015-01-07 | 株式会社ニコン | 眼鏡販売システム、レンズ企業端末、フレーム企業端末、眼鏡販売方法、および眼鏡販売プログラム |
JP5652097B2 (ja) | 2010-10-01 | 2015-01-14 | ソニー株式会社 | 画像処理装置、プログラム及び画像処理方法 |
JP2012181688A (ja) * | 2011-03-01 | 2012-09-20 | Sony Corp | 情報処理装置、情報処理方法、情報処理システムおよびプログラム |
US9224248B2 (en) * | 2012-07-12 | 2015-12-29 | Ulsee Inc. | Method of virtual makeup achieved by facial tracking |
JP2014199536A (ja) * | 2013-03-29 | 2014-10-23 | 株式会社コナミデジタルエンタテインメント | 顔モデル生成装置、顔モデル生成装置の制御方法、及びプログラム |
CN103646245B (zh) * | 2013-12-18 | 2017-02-15 | 清华大学 | 儿童面部形状的模拟方法 |
DE102014217961A1 (de) * | 2014-09-09 | 2016-03-10 | Bayerische Motoren Werke Aktiengesellschaft | Bestimmung der Pose eines HMD |
JP6583660B2 (ja) * | 2015-03-26 | 2019-10-02 | パナソニックIpマネジメント株式会社 | 画像合成装置及び画像合成方法 |
US10685457B2 (en) | 2018-11-15 | 2020-06-16 | Vision Service Plan | Systems and methods for visualizing eyewear on a user |
US11030798B2 (en) | 2019-01-30 | 2021-06-08 | Perfect Mobile Corp. | Systems and methods for virtual application of makeup effects based on lighting conditions and surface properties of makeup effects |
JP6813839B1 (ja) * | 2019-09-09 | 2021-01-13 | 株式会社カザック | 眼鏡試着支援装置およびプログラム |
CN111009031B (zh) * | 2019-11-29 | 2020-11-24 | 腾讯科技(深圳)有限公司 | 一种人脸模型生成的方法、模型生成的方法及装置 |
DE102021108142A1 (de) * | 2021-03-31 | 2022-10-06 | Bundesdruckerei Gmbh | Verfahren und Vorrichtung zur Erzeugung eines brillenlosen dreidimensionalen Modells des Kopfes einer bebrillten Person |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1999023609A1 (en) * | 1997-10-30 | 1999-05-14 | Headscanning Patent B.V. | A method and a device for displaying at least part of the human body with a modified appearance thereof |
US6421629B1 (en) * | 1999-04-30 | 2002-07-16 | Nec Corporation | Three-dimensional shape measurement method and apparatus and computer program product |
US20020097906A1 (en) * | 2000-11-20 | 2002-07-25 | Nec Corporation | Method and apparatus for collating object |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4539585A (en) * | 1981-07-10 | 1985-09-03 | Spackova Daniela S | Previewer |
JPH06118349A (ja) * | 1992-10-02 | 1994-04-28 | Seiko Epson Corp | 眼鏡装用シミュレーション装置 |
JP3537831B2 (ja) * | 1997-05-16 | 2004-06-14 | Hoya株式会社 | 眼鏡のオーダーメイドシステム |
US6792401B1 (en) * | 2000-10-31 | 2004-09-14 | Diamond Visionics Company | Internet-based modeling kiosk and method for fitting and selling prescription eyeglasses |
US7016824B2 (en) * | 2001-02-06 | 2006-03-21 | Geometrix, Inc. | Interactive try-on platform for eyeglasses |
-
2002
- 2002-09-03 JP JP2002257544A patent/JP2004094773A/ja active Pending
-
2003
- 2003-09-02 US US10/513,354 patent/US20050175234A1/en not_active Abandoned
- 2003-09-02 WO PCT/JP2003/011210 patent/WO2004023395A1/ja active Application Filing
- 2003-09-02 EP EP03794188A patent/EP1501046A4/en not_active Withdrawn
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1999023609A1 (en) * | 1997-10-30 | 1999-05-14 | Headscanning Patent B.V. | A method and a device for displaying at least part of the human body with a modified appearance thereof |
US6421629B1 (en) * | 1999-04-30 | 2002-07-16 | Nec Corporation | Three-dimensional shape measurement method and apparatus and computer program product |
US20020097906A1 (en) * | 2000-11-20 | 2002-07-25 | Nec Corporation | Method and apparatus for collating object |
Non-Patent Citations (2)
Title |
---|
See also references of EP1501046A4 * |
SHUNSUKE OHASHI ET AL.: "Range finder o mochiita hyojo henkei rule to hyojo henshu tool no kochiku", THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS GIJUTSU KENKYU HOKOKU, vol. 101, no. 610, 18 January 2002 (2002-01-18), pages 1 - 6, XP002975640 * |
Also Published As
Publication number | Publication date |
---|---|
EP1501046A4 (en) | 2009-03-11 |
US20050175234A1 (en) | 2005-08-11 |
EP1501046A1 (en) | 2005-01-26 |
JP2004094773A (ja) | 2004-03-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004023395A1 (ja) | 頭部装着物画像合成方法、化粧画像合成方法、頭部装着物画像合成装置、化粧画像合成装置及びプログラム | |
US11694392B2 (en) | Environment synthesis for lighting an object | |
JP7342008B2 (ja) | 仮想アバタのためのメッシュの合致 | |
JP6542744B2 (ja) | カスタム製品を創作するための方法及びシステム | |
KR100523742B1 (ko) | 3차원 안경 시뮬레이션 시스템 및 방법 | |
CN101779218B (zh) | 化妆模拟系统及其化妆模拟方法 | |
US11640672B2 (en) | Method and system for wireless ultra-low footprint body scanning | |
JP3250184B2 (ja) | 眼鏡装用シミュレーションシステム | |
US20190266778A1 (en) | 3d mobile renderer for user-generated avatar, apparel, and accessories | |
US8139068B2 (en) | Three-dimensional animation of soft tissue of characters using controls associated with a surface mesh | |
US8218903B2 (en) | 3D object scanning using video camera and TV monitor | |
Foster et al. | Integrating 3D modeling, photogrammetry and design | |
CN110431599A (zh) | 具有虚拟内容扭曲的混合现实系统及使用该系统生成虚拟内容的方法 | |
CN103871106A (zh) | 利用人体模型的虚拟物拟合方法及虚拟物拟合服务系统 | |
JP2000107129A (ja) | 眼光学系のシミュレーション方法及び装置 | |
US10685457B2 (en) | Systems and methods for visualizing eyewear on a user | |
JP2018195996A (ja) | 画像投影装置、画像投影方法、及び、画像投影プログラム | |
WO2018182938A1 (en) | Method and system for wireless ultra-low footprint body scanning | |
JP3825654B2 (ja) | 眼光学系のシミュレーション方法及び装置 | |
JP6062589B1 (ja) | プログラム、情報処理装置、影響度導出方法、画像生成方法及び記録媒体 | |
JP2009003812A (ja) | デフォーカス画像の生成方法及び生成装置 | |
CN115187703A (zh) | 一种眉妆指导方法、装置、设备及介质 | |
JP2022117813A (ja) | 仮想試着システム及びそれに用いられるプログラム | |
JPH11272728A (ja) | 眼鏡レンズ画像生成方法及び装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CN KR US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR |
|
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 10513354 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2003794188 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 2003794188 Country of ref document: EP |