WO2005024728A1 - Shape deformation device, object motion encoding device and object motion decoding device - Google Patents
Shape deformation device, object motion encoding device and object motion decoding device
- Publication number
- WO2005024728A1 (PCT/JP2004/012181; application JP2004012181W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- change
- data
- face
- deformation
- external force
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/001—Model-based coding, e.g. wire frame
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Definitions
- Shape deformation device, object motion encoding device and object motion decoding device
- The present invention relates to a shape deformation device that deforms the form of an object, an object motion (shape deformation) encoding device, and an object motion (shape deformation) decoding device.
- In particular, the present invention relates to a shape deformation device, an object motion (shape deformation) encoding device, and an object motion (shape deformation) decoding device that deform the shapes of parts of a face to give the face an expression.
- Conventional methods of facial expression synthesis are roughly divided into two approaches.
- One treats the skin surface, which moves in conjunction with the expression muscles under the facial skin, as a physical model, and moves the wireframe model corresponding to the skin surface by solving equations of motion based on the movement of muscle and bone.
- The other creates a facial expression by applying rule-based geometric deformation directly to a wireframe model.
- An example of this prior art is Japanese Patent Application Laid-Open No. 8-96162.
- FACS (Facial Action Coding System)
- AU (Action Unit)
- FACS is generally used in combination with the method of creating a facial expression by applying rule-based geometric deformation directly to a wireframe model.
- The physical-model approach can express the process of forming an expression smoothly and can cope with various facial features by changing the arrangement of the expression muscles, but it has the problem of requiring an enormous amount of calculation.
- A face image synthesizing device is disclosed in Japanese Patent Application Laid-Open No. 3-74777.
- In this device, motion parameters and facial expression parameters based on instinctive or unconscious human motion are added to facial image information that includes facial expression and motion parameters, and facial images are synthesized on that basis.
- Japanese Patent Application Laid-Open No. 6-76044 discloses an expression code reproduction and display device.
- the device of this conventional example has an expression generation condition input device for setting expression generation conditions.
- the arithmetic processing unit calculates the movement amount, movement direction, and shape change of movable parts of the face such as the forehead, eyebrows, and mouth according to the instruction signal from the expression generation condition input device, and reproduces expression element codes based on medical analysis of expressions.
- the facial expression display device reproduces and displays the form of the face according to the facial expression element codes reproduced by the arithmetic processing unit, moving and deforming the forehead, eyebrows, eyes, mouth, and so on.
- Japanese Patent Application Laid-Open No. 6-76058 discloses an expression coding device.
- the image input device takes in a face image as an electric signal and encodes it to generate predetermined face image data.
- the feature part extraction processing unit receives the predetermined facial image data from the image input device and extracts feature part images based on feature part extraction conditions.
- the facial expression element extraction processing unit extracts facial expression elements from the feature part images based on predetermined facial expression elements and relation rules between the feature parts, and generates facial expression element information.
- the expression element quantification processing unit calculates expression element codes by quantifying the expression elements in the expression element information based on predetermined expression element quantification rules.
- the storage unit stores the expression element code.
- the facial expression element code output device outputs the stored facial expression element codes.
- Japanese Patent Application Laid-Open No. 8-305878 discloses a face image forming apparatus.
- the face image generation unit generates a face image by selecting a part image that forms each part of the face.
- the expression applying unit performs image deformation, display position movement, and show/hide processing on the part images forming the face image according to a specified expression, thereby applying an expression change to the face image.
- the expression difference data extraction unit extracts feature points from expression face samples of the target person, divides the expression face samples into plural parts based on the feature points, calculates the differences in shape and texture accompanying the transition between two specified expressions, and obtains difference shape data and difference textures.
- the operation pattern setting unit specifies an operation pattern function for specifying time change of each part.
- the time-series expression image synthesis unit generates intermediate difference shape data at an arbitrary time calculated based on the motion pattern function and the difference shape data, and an intermediate difference texture at an arbitrary time calculated based on the motion pattern function and the difference texture.
- the facial expression animation reproducing unit reproduces a facial expression animation by continuously displaying, at predetermined time intervals, the time-series facial expression images generated by mapping the time-series facial expression texture onto the shape given by the time-series facial expression shape data.
- A facial expression synthesizing apparatus is disclosed in Japanese Patent Application Laid-Open No. 2002-329214 (P2002-329214A).
- This conventional facial expression synthesis method uses computer graphics.
- the synthesis method includes the steps of: inputting an expression model as data of facial feature points; extracting from the expression model a substantially elliptical sphincter region surrounded by a sphincter muscle; defining the region above the major axis of the sphincter region as a semi-elliptical region bounded by the major axis and a semi-ellipse; searching for the feature points included in that region; calculating the destinations of the feature points contained in the semi-elliptical region when the facial expression changes; and moving the feature points to their destinations to synthesize the expression model after the expression change.
- An object of the present invention is to reduce the amount of calculation when realizing a change in form such as facial expression synthesis.
- A shape deformation encoding device according to the present invention includes a calculation unit that calculates difference data between pre-change form data representing the form of an object before a form change and post-change form data representing the form of the object after the form change, and a determination unit that determines, based on the pre-change form data and the difference data, an operation area in which the form change of the object has occurred and an external force applied to the operation area to produce the form change.
- the calculation unit may include a model data generation unit that converts the pre-change form data and the post-change form data into pre-change form model data and post-change form model data based on a three-dimensional model, and a difference calculation unit that calculates the difference data from the difference between the pre-change form model data and the post-change form model data.
- the model is a three-dimensional polygon mesh model, and the pre-change form data and the post-change form data may be three-dimensional data.
- the operation area includes a plurality of small areas, each of the plurality of small areas includes a plurality of control points, and the external force is determined based on the physical model structure of each of the plurality of small areas.
- the object includes a human face
- the form of the object before the form change indicates an expressionless face state
- the form of the object after the form change may indicate a face state in which an expression is expressed.
- A shape deformation decoding apparatus according to the present invention includes a decoding unit that determines the movement positions of control points in an operation area based on pre-change form data representing the form of an object before a form change and information on an external force applied to the operation area corresponding to the form change, and a form generation unit that generates post-change form data representing the form of the object after the form change from the movement position of each of the plurality of control points.
- the operation area includes a plurality of small areas, each of the plurality of small areas includes a plurality of control points, and the movement positions of the control points of each of the plurality of small areas are determined in units of the small areas using a physical model.
- the pre-change form data may be three-dimensional data, and the decoding unit may include a model data generation unit that generates pre-change form model data based on a three-dimensional model of the form of the object before the form change and the pre-change form data, and a movement amount calculation unit that determines the movement position of each control point in the operation area from the pre-change form model data based on the external force information.
- alternatively, the pre-change form data may be three-dimensional data, and the decoding unit may include a mesh model data generation unit that generates pre-change form mesh model data based on a three-dimensional polygon mesh model and the pre-change form data, and a movement amount calculation unit that determines the movement position of each control point in the operation area from the pre-change form mesh model data based on the external force information.
- the form generation unit generates the form of the object after the form change in units of the plurality of small areas from the pre-change form data and the movement position of each of the plurality of control points. In this case, it may further include an adjacent small area adjusting unit for smoothing discontinuous portions occurring at the boundaries between adjacent small areas after the form change, and may further include a non-operation area adjusting unit for smoothing discontinuous portions occurring at the boundary between the operation area after the form change and the surrounding non-operation area.
- the control points may be connected to one another by springs and dampers.
- the object may include a human face, the form before the form change may indicate an expressionless face, and the form after the form change may indicate a face on which an expression appears.
- the device may further comprise a three-dimensional actuator group for driving the movement area of the flexible object based on the post-change form data.
- A shape deformation encoding/decoding system according to the present invention includes any of the above-described shape deformation encoding apparatuses, the shape deformation decoding apparatus described in any one of claims 8 to 17, and a transmission device for transmitting the operation area information and the external force information generated by the shape deformation encoding apparatus to the shape deformation decoding apparatus through a communication channel.
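To make the data flow of this encoding/decoding system concrete, the following is a minimal illustrative sketch in Python. The function names (encode_shape_change, decode_shape_change) and the dictionary-based message format are hypothetical and do not come from the patent; the sketch only mirrors the flow described above: the encoder derives the operation area and external force information from the pre-change and post-change form data, only that compact information crosses the communication channel, and the decoder reconstructs the post-change form from the pre-change form data it already holds. The placeholder displacement model stands in for the physical-model calculations described later in the text.

```python
import numpy as np

def encode_shape_change(pre_form: np.ndarray, post_form: np.ndarray) -> dict:
    """Encoder side (illustrative): derive the operation area and external
    force information from pre-change and post-change form data."""
    diff = post_form - pre_form                      # difference data
    moved = np.linalg.norm(diff, axis=-1) > 1e-6     # vertices that actually moved
    operation_area = np.flatnonzero(moved)           # indices forming the operation area
    # Stand-in for the per-control-point external force estimation described in the text.
    external_force = diff[operation_area]
    return {"operation_area": operation_area, "external_force": external_force}

def decode_shape_change(pre_form: np.ndarray, message: dict) -> np.ndarray:
    """Decoder side (illustrative): rebuild the post-change form from the
    pre-change form plus the transmitted operation area / external force."""
    post_form = pre_form.copy()
    # Stand-in for the physical-model displacement calculation described in the text.
    post_form[message["operation_area"]] += message["external_force"]
    return post_form

if __name__ == "__main__":
    pre = np.zeros((5, 3))                 # five 3-D points, all at the origin
    post = pre.copy()
    post[2] = [0.0, 1.0, 0.0]              # one point moves: the "form change"
    msg = encode_shape_change(pre, post)   # this message is what crosses the channel
    print(np.allclose(decode_shape_change(pre, msg), post))  # True
```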
- A shape deformation encoding method according to the present invention is achieved by calculating difference data between pre-change form data representing the form of an object before a form change and post-change form data representing the form of the object after the form change, and by determining, based on the pre-change form data and the difference data, the operation area in which the form change of the object has occurred and the external force applied to the operation area to produce the form change.
- the calculating step may be achieved by converting the pre-change form data and the post-change form data into pre-change form model data and post-change form model data based on a three-dimensional model, and calculating the difference data from the difference between the pre-change form model data and the post-change form model data.
- the model may be a three-dimensional polygon mesh model, and the pre-change form data and the post-change form data may be three-dimensional data.
- the operation area includes a plurality of small areas, each of the plurality of small areas includes a plurality of control points, and the external force producing the form change is determined based on the physical model structure of each of the plurality of small areas.
- the object includes a human face, the form of the object before the form change indicates an expressionless face state, and the form of the object after the form change indicates a face state in which an expression is expressed.
- A shape deformation decoding method according to the present invention is achieved by determining the movement positions of control points in an operation area based on pre-change form data representing the form of an object before a form change and information on an external force applied to the operation area corresponding to the form change, and by generating post-change form data representing the form of the object after the form change from the pre-change form data and the movement positions of the plurality of control points.
- the operation area includes a plurality of small areas, each of the plurality of small areas includes a plurality of control points, and the movement positions of the control points of each of the plurality of small areas are determined in units of the small areas using a physical model.
- the pre-change form data may be three-dimensional data, and the step of determining the movement positions may be achieved by generating pre-change form model data based on a three-dimensional model of the form of the object before the form change and the pre-change form data, and by determining the movement position of each control point in the operation area from the pre-change form model data based on the external force information.
- alternatively, the pre-change form data may be three-dimensional data, and the step of determining the movement positions may be achieved by generating pre-change form mesh model data based on a three-dimensional polygon mesh model and the pre-change form data, and by determining the movement position of each control point in the operation area from the pre-change form mesh model data based on the external force information.
- the method may further include the step of calculating the movement positions of the non-control points other than the control points included in each small area from the movement positions of the control points near those non-control points.
- the post-change form of the object may be generated in units of the plurality of small areas from the pre-change form data and the movement positions of the plurality of control points.
- the method may further include the step of smoothing discontinuous portions occurring at the boundaries between adjacent small areas after the form change, and may further include the step of smoothing discontinuous portions occurring at the boundary between the operation area after the form change and the surrounding non-operation area.
- the control points are preferably connected by springs and dampers; the object may include a human face, the form before the form change may indicate an expressionless face, and the form after the form change may indicate a face on which an expression appears.
- the object may be a robot, and the method may further include the step of driving the movement area of the object having flexibility based on the post-change form data.
- A shape change encoding/decoding method according to the present invention is achieved by generating operation area information and external force information by any of the above-described shape deformation encoding methods, transmitting the operation area information and the external force information through a communication channel, and generating, by any of the above-described shape deformation decoding methods, post-change form data representing the form of the object after the form change from the transmitted operation area information and external force information.
- The present invention also provides computer-readable software for realizing any of the above-described shape deformation encoding methods or any of the above-described shape deformation decoding methods.
- According to the present invention, it is possible to reduce the amount of calculation when realizing facial expression synthesis and the like using a physical model.
- In the conventional method, the entire region where the expression muscles are arranged is regarded as one region, and the movement position of each point included in the region is calculated based on the external force.
- In the present invention, the movement position of each control point is calculated independently in units of smaller small areas.
- Furthermore, the points whose movement positions according to the external force are determined are limited to the control points, which are only some of the points belonging to each small area, and the movement positions of the remaining non-control points are calculated by interpolation from the movement positions of the control points near them.
- Although the operation area is divided into a plurality of small areas and the movement positions are calculated independently in units of small areas, it is possible to make steps and the like at the boundaries between small areas inconspicuous, and also to make steps and the like generated at the boundary between the operation area and the surrounding non-operation area inconspicuous.
- Since a physical model structure in which control points are connected by springs and dampers is used as the physical model structure of each small area, it is possible to reproduce motion close to the movement of the expression muscles.
- If the number of control points and the physical model structure are the same in all small areas, the number of parameters used in calculating the control point movement positions can be reduced, and the amount of calculation can be reduced further.
- If the object model includes a human face model and the temporal change in form includes a change in human facial expression, it becomes possible to synthesize human facial expressions.
- According to the present invention, it is also possible to obtain, with a small amount of calculation, the operation area, which is the area that differs before and after the change in the form of the object, and the external force applied to each control point included in the operation area.
- In the conventional method, the entire region in which the expression muscles are arranged is treated as one region, and the external force is calculated from the movement positions of all the points included in that region, so the amount of calculation is enormous.
- In the present invention, the external force is calculated from the movement amount of each control point independently in units of smaller small areas.
- Since a physical model structure in which the control points are connected by springs and dampers is used as the physical model structure of each small area, it is possible to calculate an external force that reproduces motion close to the movement of the expression muscles.
- If the number of control points and the physical model structure are the same in all small areas, the amount of calculation required for the external force calculation can be further reduced, and the number of required parameters can also be reduced.
- If the form before the change is an expressionless state and the form after the change is a state in which an expression appears, it is possible to calculate the external force needed to change a person's face from an expressionless face to a face showing a certain expression.
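As one concrete (hypothetical) illustration of calculating the external force from the control point displacements of a single small area, the sketch below inverts a static spring model: if the control points of the small area are connected by springs, the force needed to hold them at their observed displacements is f = K·x, where K is the stiffness matrix assembled from the spring constants. The chain-shaped connection, the weak anchoring term, and the parameter values are assumptions made for illustration; the patent's own formulation (its equation (8)) is not reproduced here.

```python
import numpy as np

def stiffness_matrix(n_points: int, k: float) -> np.ndarray:
    """Stiffness matrix of n control points connected in a chain by springs
    of constant k (illustrative structure; any connection graph could be used)."""
    K = np.zeros((n_points, n_points))
    for i in range(n_points - 1):
        K[i, i] += k; K[i + 1, i + 1] += k
        K[i, i + 1] -= k; K[i + 1, i] -= k
    K += np.eye(n_points) * k * 0.1   # weak anchor term so the matrix is invertible
    return K

def estimate_external_force(displacement: np.ndarray, k: float = 1.0) -> np.ndarray:
    """Estimate the external force on each control point of one small area
    from its observed displacement (static spring model: f = K @ x)."""
    K = stiffness_matrix(len(displacement), k)
    return K @ displacement

if __name__ == "__main__":
    # Displacements of 4 control points along one axis between the
    # expressionless face and the face showing an expression.
    x = np.array([0.0, 0.2, 0.5, 0.2])
    print(estimate_external_force(x))
```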
- According to the present invention, post-change three-dimensional data of an object can also be generated (deformed) from external force information with a small amount of calculation.
- In the conventional method, the entire region in which the expression muscles are arranged is regarded as one region, and the movement position of each point included in the region is calculated based on the external force; in the present invention, the movement position of each control point is calculated independently in units of smaller small areas.
- Similarly, in the conventional method the entire region in which the expression muscles are arranged is regarded as one region and the movement position of each point included in that region is calculated based on the external force, whereas in the present invention the movement position of each control point is calculated independently in units of smaller small areas. For example, based on a plurality of pieces of external force information, such as external force information for "opening the mouth wide" and external force information for "raising the upper eyelid", the movement amount of each control point after the form change can be calculated for the three-dimensional data or image before the form change.
- Even though the operation area is divided into a plurality of small areas and the movement positions are calculated independently in units of small areas, it is possible to make the steps and the like that tend to occur at the boundaries between small areas inconspicuous, and also to make the steps and the like at the boundary between the operation area and the surrounding non-operation area inconspicuous.
- Since a physical model structure in which control points are connected by springs and dampers is used as the physical model structure of each small area, motion close to the movement of the expression muscles can be reproduced; and since the object includes a human face, the state before the change is an expressionless state, and the state after the change is a state in which a certain expression appears, it is possible to give a certain expression to a person's expressionless face.
- It is also possible to detect a change in the facial expression of a certain person and apply the same change to a face image of the same person or of another person, thereby synchronizing the expression with that specific person.
- In the conventional method, the external force is calculated from the movement positions of all the points included in the region, with the entire region in which the expression muscles are arranged treated as one region; in the present invention, the external force is calculated from the movement position of each control point independently in units of smaller small areas.
- In the conventional method, the entire region in which the expression muscles are arranged is regarded as one region and the movement position of each point included in that region is calculated based on the external force, so the amount of calculation is enormous; in the present invention, the movement position of each control point is calculated independently in units of smaller small areas. This makes it possible to reduce the amount of calculation needed to make facial expressions appear on an object such as a mask attached to the head of a robot.
- In the conventional method, the entire region in which the expression muscles are arranged is regarded as one region and the movement position of each point included in the region is calculated based on the external force; in the present invention, the movement position of each control point is calculated independently in units of smaller small areas.
- FIG. 1 is an explanatory view of the principle of the present invention.
- FIG. 2 is a block diagram showing the configuration of a form deformation apparatus according to a first embodiment of the present invention.
- FIG. 3 is a flowchart showing a processing example of the operation area designation unit in the form deformation apparatus according to the first embodiment of the present invention.
- FIG. 4 is a flow chart showing an example of processing of an operation area dividing unit in the form transformation device according to the first example of the present invention.
- FIG. 5 is a view showing a configuration example of control point data in the form transformation device according to the first example of the present invention.
- FIG. 6 is a flow chart showing an example of processing of a control point operation unit in the form transformation device according to the first example of the present invention.
- FIG. 7A is a diagram explaining the operation of the motion interpolation unit in the form deformation apparatus according to the first embodiment of the present invention.
- FIG. 7B is a diagram explaining the operation of the motion interpolation unit in the form deformation apparatus according to the first embodiment of the present invention.
- FIG. 8 is a flow chart showing an example of the processing of the motion interpolation unit in the form deformation apparatus according to the first embodiment of the present invention.
- FIG. 9 is a block diagram of a form deformation apparatus according to a second embodiment of the present invention.
- FIG. 10A is a diagram showing how a discontinuous portion such as a step is generated at the boundary of a small area.
- FIG. 10B is a view showing how a discontinuous portion such as a step is generated at the boundary portion of a small area.
- FIG. 10C is a diagram showing how a discontinuous portion such as a step is generated at the boundary of a small area.
- FIG. 11 is a flow chart showing an example of processing of an adjacent small area adjusting unit in the form deformation device of the second embodiment of the present invention.
- FIG. 12 is a flowchart of smoothing processing performed by the adjacent small area adjustment unit in the form deformation apparatus according to the second embodiment of the present invention.
- FIG. 13 is a diagram for explaining the operation of the adjacent small area adjusting unit in the form deformation apparatus according to the second embodiment of the present invention.
- FIG. 14 is a block diagram of a form deformation apparatus according to a third embodiment of the present invention.
- FIG. 15A is a diagram showing how a discontinuous portion such as a step is generated at the boundary between the operation area and the non-operation area.
- FIG. 15B is a diagram showing how a discontinuous portion such as a step is generated at the boundary between the active area and the inactive area.
- FIG. 16 is a flowchart of the smoothing processing performed by the non-operation area adjustment unit in the form deformation apparatus according to the third embodiment of the present invention.
- FIG. 17 is a diagram for explaining the operation of the non-operating area adjusting unit in the form deformation apparatus according to the third embodiment of the present invention.
- FIG. 18 is a block diagram of a face motion (morphological deformation) encoding device according to a fourth embodiment of the present invention.
- FIG. 19 is a view showing an example of a face triangular polygon mesh model used in the face motion (morphological deformation) encoding apparatus according to the fourth embodiment of the present invention.
- FIG. 20 is a diagram explaining the operation of the three-dimensional difference calculation unit in the face motion (form deformation) encoding apparatus according to the fourth embodiment of the present invention.
- FIG. 21 is a view showing an example of contents of three-dimensional difference data generated by a three-dimensional difference calculation unit in the face motion (form deformation) encoding apparatus according to the fourth embodiment of the present invention .
- FIG. 22A is an operation explanatory diagram of the external force calculation unit in the face operation (morphological deformation) encoding apparatus according to the fourth embodiment of the present invention.
- FIG. 22B is an operation explanatory diagram of the external force calculation unit in the face operation (morphological deformation) coding apparatus according to the fourth embodiment of the present invention.
- FIG. 23 is a block diagram of a face motion (morphological modification) decoding apparatus according to a fifth embodiment of the present invention.
- FIG. 24 is a block diagram of a face motion (morphological deformation) decoding apparatus according to a sixth embodiment of the present invention.
- FIG. 25 is a block diagram of a face motion (form deformation) coding apparatus according to a seventh embodiment of the present invention.
- FIG. 26 is a block diagram of a face movement (form deformation) decoding apparatus according to an eighth embodiment of the present invention.
- FIG. 27 is a block diagram of a face motion (form deformation) decoding apparatus according to a ninth embodiment of the present invention.
- FIG. 28 is a block diagram of a face motion (morphological modification) code decoding apparatus according to a tenth embodiment of the present invention.
- FIG. 29 is a block diagram of a face motion (morphological modification) code decoding apparatus according to an eleventh embodiment of the present invention.
- FIG. 30 is a block diagram of a robot control apparatus using a form deformation decoding apparatus according to a twelfth embodiment of the present invention.
- At least one operation area 1001 is set on the area 1000 of the entire face.
- the motion area 1001 is a target area of deformation, and is defined corresponding to the expression to be exposed.
- For example, for each action unit (AU) of FACS, the portion having the expression muscles that cause that action unit is defined as one operation area.
- Alternatively, an area that differs between the expressionless face and the face performing a certain expression motion may be defined as the operation area.
- such an operation area 1001 is divided into a plurality of small areas 1002, and the movement amount according to the external force is calculated in units of individual small areas 1002.
- the points (pixels) belonging to one small area 1002 are divided into a plurality of (three or more) control points 1003 indicated by black circles and non-control points 1004 indicated by white circles other than the control points.
- these points are connected by springs 1005 and dampers 1006.
- external force is applied only to the control points 1003, as shown by the arrow 1007 in FIG. 1.
- the movement amount of each control point 1003 belonging to a small area 1002 is calculated based on the external force applied to the control point 1003 and the constants of the springs 1005 and dampers 1006, and the movement amount of each non-control point 1004 other than the control points 1003 is calculated by, for example, interpolation of the movement amounts of a plurality of control points 1003 near that non-control point 1004.
- each individual small area 1002 moves as a whole as a result of the external force acting on its internal control points 1003, and as all the small areas 1002 move in this way, the operation area 1001 is deformed and a desired expression can be expressed.
- the motion region 1001 moves as one while interacting as a spring model.
- in the present invention, the motion of an action unit is reproduced using a model in which an external force 1007 acts only on the plurality of control points 1003 contained in each small area 1002, the control points 1003 within each small area 1002 are connected to one another by springs 1005 and dampers 1006, and the remaining points are dragged along by the control points.
- since the motion of the action unit in this model is expressed by spring model calculation in units of the small areas 1002 rather than of the whole operation area 1001, parameter setting is simplified and the amount of calculation is reduced. In addition, since the external force is calculated instead of handling the movement amount directly, there is the advantage that the individuality in how an expression appears and the universality of the expression motion can be separated.
- if the displacements and velocities of the n control points in a small area are stacked into a 2n-dimensional state vector and the external forces into an n-dimensional vector, the spring-damper model leads to a linear state equation whose coefficient matrices A and B are of size 2n×2n and 2n×n, respectively, and are determined by the spring and damper constants.
- the displacement and velocity at time t are obtained by solving this ordinary differential equation under the boundary conditions of the initial position and the initial velocity.
- in the present invention, the operation area 1001 is divided into a plurality of small areas 1002, and the movement amount of each control point 1003 according to the external force 1007 is calculated in units of individual small areas 1002; therefore, when the external force 1007 is conversely calculated from the movement amount of each control point 1003, it is also calculated in units of individual small areas 1002.
- the equation (8) can also be used to calculate this external force.
- a physical model of a structure in which the control points 1003 of the small area 1002 are connected by the spring 1005 and the damper 1006 is used.
- instead of the spring-damper model, it is also possible to use other types of physical models, such as a model in which the movement is simply proportional to the external force, or a gravity-like or electromagnetic-force-like model in which the force is inversely proportional to the square of the distance from a specific point such as a control point or the center of gravity of the small area; in such cases, the movement amounts of the control points 1003 in the small area 1002 can be calculated by solving the corresponding equations.
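The following is a minimal sketch of how the movement of the control points of one small area could be computed numerically from the external force under the spring–damper model. Instead of the closed-form solution referred to as equation (7), it integrates the equation of motion with a simple explicit Euler scheme; the chain-shaped connection between neighbouring control points, the anchoring of the end points, the unit masses, and the parameter values are illustrative assumptions only.

```python
import numpy as np

def simulate_small_area(x0, f, k=4.0, d=1.0, t_end=2.0, dt=0.01):
    """Integrate the spring-damper model of one small area (one axis).
    x0 : (n,) initial displacements of the control points
    f  : (n,) constant external force applied to the control points
    Returns the displacements at time t_end."""
    n = len(x0)
    # Stiffness matrix of a chain of springs between neighbouring control points;
    # the two end points are also anchored to their rest positions so that a
    # steady state exists (illustrative connection structure).
    K = np.zeros((n, n))
    for i in range(n - 1):
        K[i, i] += k; K[i + 1, i + 1] += k
        K[i, i + 1] -= k; K[i + 1, i] -= k
    K[0, 0] += k
    K[-1, -1] += k
    x = np.asarray(x0, dtype=float)
    v = np.zeros(n)                      # initial-velocity boundary condition
    for _ in range(int(t_end / dt)):
        a = f - K @ x - d * v            # unit masses: acceleration = net force
        x = x + dt * v                   # explicit Euler integration step
        v = v + dt * a
    return x

if __name__ == "__main__":
    # Three control points; an external force pulls the middle one.
    print(simulate_small_area(x0=[0.0, 0.0, 0.0], f=np.array([0.0, 1.0, 0.0])))
```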
- FIG. 2 is a block diagram showing the configuration of the form deformation apparatus according to the first embodiment of the present invention.
- the deformation device deforms a human's expressionless face image into a facial expression action face image.
- the deformation apparatus of this embodiment includes a processing device CNT1, a storage device Ml-M6, a display device DPY, and an input device KEY.
- the plurality of storage devices M1 to M6 are formed of, for example, magnetic disks; among them, the storage device M1 stores input data S1 concerning the expressionless face of a person.
- the storage device M6 stores output data S6 concerning the expression-motion face of the person.
- the remaining storage devices M2 to M5 store intermediate results and control data required in the process.
- the display device DPY is, for example, a liquid crystal display, and is used to display the facial expression action face image obtained as the output data S6 and to display the data of the processing process.
- the input device KEY comprises, for example, a keyboard and a mouse, and is used to receive various data and instructions from the user.
- the control device CNT1 is formed of, for example, a central processing device of a computer, and executes the main processing of the form modification device of this embodiment.
- the control device CNT1 includes an operation area designation unit 1, an operation area division unit 2, a control point operation unit 3, and an operation interpolation unit 4.
- the respective functional units 1 to 4 of the control device CNT1 can be realized by the computer constituting the control device CNT1 and a program for the form deformation apparatus.
- the program for the form deformation apparatus is recorded on a computer-readable recording medium PM1 such as a magnetic disk, is read by the computer when the computer starts up, and controls the operation of the computer so as to realize each of the functional units 1 to 4 on the computer.
- the control device CNT1 reads and processes the input data S1 stored in the storage device M1, and finally stores the output data S6 in the storage device M6 and displays it on the display device DPY.
- in the course of this processing, the control device CNT1 loads and uses the control point data S3 stored in the storage device M3, and stores the operation-area-designated data S2, the small-area-divided data S4, the initial-3D-coordinate-adjusted control point data S3', and the terminal-3D-coordinate-set control point data S3'' in the storage devices M2, M4, and M5 as appropriate. The details are described below.
- the operation area designation unit 1 reads the input data S1 from the storage device M1, and the user designates a desired area of the input data S1 as the operation area.
- the desired area to be designated as the movement area is predetermined corresponding to the expression to be displayed.
- the part having the expression muscle causing the operation unit is designated as the operation area.
- for example, action unit No. 27 of FACS has the content "open the mouth wide"; when the motion of this action unit is to be performed, the region including the mouth is designated as the operation area.
- the number (No.) of the action unit is input by the user from the input device KEY and stored in the control device CNT1.
- Information on the operation area specified by the operation area specification unit 1 is added to the image or three-dimensional data of the input data S1, and is stored in the storage device M2 as operation area specified data S2.
- in the case of an image, the operation-area-designated data S2 is stored as RGBα data in which a 1-bit α value is added to each pixel, with α set to 1 inside the operation area and to 0 elsewhere.
- in the case of three-dimensional data, it is stored in the form (x, y, z, α).
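As an illustration of this storage format, the sketch below adds a 1-bit α channel to an RGB image, setting α to 1 for the pixels inside a rectangular operation area and to 0 everywhere else. The rectangular area, the array layout, and the function name are assumptions made for the example; in the actual device the area would come from the GUI designation described below.

```python
import numpy as np

def add_operation_area_alpha(rgb: np.ndarray, top: int, bottom: int,
                             left: int, right: int) -> np.ndarray:
    """Return an RGBα image whose α channel is 1 inside the designated
    operation area (here a simple rectangle) and 0 everywhere else."""
    h, w, _ = rgb.shape
    alpha = np.zeros((h, w, 1), dtype=rgb.dtype)
    alpha[top:bottom, left:right, 0] = 1          # operation area -> α = 1
    return np.concatenate([rgb, alpha], axis=2)   # (h, w, 4) = RGBα

if __name__ == "__main__":
    face = np.zeros((120, 100, 3), dtype=np.uint8)      # dummy expressionless image
    s2 = add_operation_area_alpha(face, 70, 110, 20, 80) # area around the mouth
    print(s2.shape, int(s2[..., 3].sum()))               # (120, 100, 4) and pixel count
```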
- as the method of designation, for example, a GUI-based tool that displays the image or three-dimensional data and lets the user specify the operation area with a mouse or tablet can be considered.
- the motion area designation unit 1 loads an expressionless image of the face of a person as input data S1 from the storage device Ml and displays it on the screen of the display device DPY (step F101).
- the motion area designation unit 1 receives a FACS operation unit number from the user (step F102).
- the operation area designation unit 1 holds, for each FACS action unit number, a list of typical operation areas indicating the face region having the expression muscles that cause that action unit.
- the operation area corresponding to the operation unit number is acquired from the list and displayed on the display device DPY (step F103). In FIG. 3, the operation area is indicated by a broken line.
- the motion area designation unit 1 moves the display position of the motion area in accordance with the drag operation of the mouse by the user, and adjusts the size and shape of the motion area (step F104).
- the operation unit of FACS No. 27 is assumed, and the operation area is adjusted to an appropriate range including the mouth.
- finally, the operation area designation unit 1 adds the information on the designated operation area to the input data S1 by the method described above to generate the operation-area-designated data S2, and stores it in the storage device M2 (step F105).
- the operation area dividing unit 2 reads the operation-area-designated data S2 from the storage device M2, generates small-area-divided data S4 in which the operation area in the operation-area-designated data S2 is divided into a plurality of small areas based on the control point data S3 read from the storage device M3, and stores it in the storage device M4 together with the control point data information (initial-3D-coordinate-adjusted control point data S3').
- the small-area-divided data S4 and the initial-3D-coordinate-adjusted control point data S3' correspond to the face model described with reference to FIG. 1.
- details of the processing of the motion area dividing unit 2 will be described with reference to FIG.
- first, the operation area dividing unit 2 loads the operation-area-designated data S2 from the storage device M2 and displays the whole operation area on the display device DPY, enlarging it as necessary (step F111).
- next, the operation area dividing unit 2 loads from the storage device M3 the control point data S3 corresponding to the operation area designated in the operation-area-designated data S2 and displays it on the display device DPY (step F112).
- the control point data S3 corresponding to a certain action unit contains information on a large number of control points arranged evenly at the ends of, and inside, the expression muscles that generate that action unit.
- FIG. 5 shows a configuration example of one control point data.
- one control point data entry includes: a control point number (m) 2001, which is an identifier assigned to the control point data; initial 3D coordinates (X0, Y0, Z0) 2002, which are the initial three-dimensional coordinates of the control point; a tag number 2003 assigned to the control point data; an external force 2004 applied to the control point; terminal 3D coordinates (Xt, Yt, Zt) 2005, which are the three-dimensional coordinates of the control point after movement; and connection information 2006 concerning the other control point data connected to this control point data.
- the same tag number 2003 is set to control point data belonging to the same small area.
- in the initial 3D coordinates 2002, the standard position of the control point on a standard face is set initially, and the initial display of the control points is performed based on these initial 3D coordinates.
- the terminal 3D coordinates 2005 are null values at the time of loading, and values are set when calculating the movement amount later.
- in the connection information 2006, the control point number of the control point data at the connection destination and the constants of the spring and damper interposed between the two control points are set. If all control points are connected by springs and dampers with the same constants, it suffices to set the connection information 2006 common to all control point data in one place.
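A minimal sketch of how one control point data entry of FIG. 5 could be represented in code is given below. The Python dataclass form and the field names are hypothetical; only the fields themselves (control point number 2001, initial 3D coordinates 2002, tag number 2003, external force 2004, terminal 3D coordinates 2005, and connection information 2006 with spring and damper constants) follow the description above.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Connection:
    """Link to another control point via a spring and a damper."""
    target_control_point: int      # control point number of the connection destination
    spring_constant: float         # constant of the spring between the two points
    damper_constant: float         # constant of the damper between the two points

@dataclass
class ControlPoint:
    """One control point data entry (cf. fields 2001-2006 of FIG. 5)."""
    number: int                                        # 2001: control point number m
    initial_xyz: Tuple[float, float, float]            # 2002: initial 3D coordinates
    tag: int                                           # 2003: tag number (= small area)
    external_force: Tuple[float, float, float] = (0.0, 0.0, 0.0)   # 2004
    terminal_xyz: Optional[Tuple[float, float, float]] = None      # 2005: set later
    connections: List[Connection] = field(default_factory=list)    # 2006

# Example: two mutually connected control points in the same small area.
p0 = ControlPoint(0, (10.0, 20.0, 0.0), tag=3,
                  connections=[Connection(1, spring_constant=4.0, damper_constant=1.0)])
p1 = ControlPoint(1, (12.0, 20.0, 0.0), tag=3,
                  connections=[Connection(0, spring_constant=4.0, damper_constant=1.0)])
```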
- the motion area dividing unit 2 moves the position of each control point to a desired position and finely adjusts it in accordance with the dragging operation of the mouse by the user (step F113).
- the initial 3D coordinates 2002 of the control point data are updated to the three-dimensional coordinates of the pixel closest to the control point data among the pixels in the expressionless face image.
- next, the operation area dividing unit 2 performs a Voronoi division of the operation area by the control points, and assigns to each Voronoi region the tag number 2003 of the control point that determines it.
- the operation area is then divided into a plurality of small areas by grouping adjacent Voronoi regions to which the same tag number 2003 is assigned into one small area (step F114).
- each small area is assigned the same tag number as the Voronoi regions that compose it, so as many small areas are generated as there are tag numbers.
- finally, an image in which the tag number of the small area containing each pixel is added to every pixel of the divided operation area in the operation-area-designated data S2 is generated as the small-area-divided data S4, and is stored in the storage device M4 together with the initial-3D-coordinate-adjusted control point data S3' (step F115).
- the pixels on the boundary between the small area and the small area are treated as belonging to the plurality of small areas.
- alternatively, the small-area-divided data S4 may be data in which, for each tag number, the information on the pixels included in the small area with that tag number can be looked up.
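The Voronoi division and the grouping into small areas described above can be sketched as follows: every pixel of the operation area is assigned to its nearest control point, and pixels whose nearest control points carry the same tag number form one small area. The brute-force nearest-point search and the flat data layout are assumptions made for the example.

```python
import numpy as np

def divide_operation_area(pixel_xyz: np.ndarray, cp_xyz: np.ndarray,
                          cp_tags: np.ndarray) -> np.ndarray:
    """Voronoi-style division of the operation area.
    pixel_xyz : (P, 3) coordinates of the pixels in the operation area
    cp_xyz    : (C, 3) initial 3D coordinates of the control points
    cp_tags   : (C,)   tag number of each control point
    Returns the tag number (small area label) assigned to every pixel."""
    # Distance of every pixel to every control point, then nearest control point.
    d = np.linalg.norm(pixel_xyz[:, None, :] - cp_xyz[None, :, :], axis=2)
    nearest_cp = np.argmin(d, axis=1)
    # Pixels inherit the tag of their nearest control point; adjacent Voronoi
    # cells with the same tag therefore merge into one small area.
    return cp_tags[nearest_cp]

if __name__ == "__main__":
    pixels = np.array([[x, y, 0.0] for x in range(4) for y in range(4)])
    cps = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 3.0, 0.0]])
    tags = np.array([1, 1, 2])        # two control points share tag 1
    print(divide_operation_area(pixels, cps, tags).reshape(4, 4))
```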
- the control point operation unit 3 receives the initial-3D-coordinate-adjusted control point data S3' from the storage device M4 and calculates, for each small area, the movement positions of all control points included in that small area.
- the calculation method is determined by the physical model with which the small area containing the control points is represented.
- as already described in detail, given the boundary conditions of the initial position and the initial velocity, the movement positions can be calculated by solving the second-order ordinary differential equation.
- first, the control point operation unit 3 reads the initial-3D-coordinate-adjusted control point data S3' from the storage device M4 (step F121). Next, it focuses on one small area, that is, one tag number (step F122). Next, the movement positions of all control points belonging to the small area of interest are calculated and set in the terminal 3D coordinates 2005 of those control points (step F123). All control points belonging to the small area of interest are the control points having the same tag number 2003 as the tag number of that small area. The control points in the small area are connected by springs and dampers as described with reference to FIG. 1.
- the final 3D coordinates can be calculated from the initial 3D coordinates 2002 and the external force 2004 of each control point data by substituting a predetermined parameter into equation (7).
- the predetermined parameters correspond to the aforementioned t, k, d in the case of Equation (7). These may be added to the control point data S3 and loaded from the storage device M3 or may be embedded in the control point operation unit 3 in advance.
- when the control point operation unit 3 has calculated the movement positions of all control points belonging to one small area and set them in the terminal 3D coordinates 2005 of those control point data, it shifts its attention to the next small area (step F124) and returns to step F123 to repeat the same processing. When this processing has been repeated until all the small areas have been processed (YES in step F125), the control point data S3'' in which the terminal 3D coordinates 2005 are set is stored in the storage device M5 (step F126).
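The per-small-area processing of steps F121 to F126 can be sketched as the following loop: for each tag number, the control points carrying that tag are gathered and their terminal 3D coordinates are computed from their initial coordinates and external forces. The helper solve_displacement merely stands in for equation (7); the quasi-static approximation used here (displacement proportional to force) and the dictionary-based control point records are illustrative assumptions only.

```python
import numpy as np
from collections import defaultdict

def solve_displacement(initial_xyz, force, k=4.0):
    """Stand-in for equation (7): quasi-static displacement = force / k.
    A full implementation would solve the spring-damper ODE of the small area."""
    return np.asarray(initial_xyz) + np.asarray(force) / k

def compute_terminal_coordinates(control_points):
    """control_points: list of dicts with 'tag', 'initial', 'force' keys.
    Sets a 'terminal' key per control point, processing one small area at a time."""
    by_tag = defaultdict(list)
    for cp in control_points:                 # group control points by tag number
        by_tag[cp["tag"]].append(cp)
    for tag, cps in by_tag.items():           # steps F122-F125: one small area at a time
        for cp in cps:
            cp["terminal"] = solve_displacement(cp["initial"], cp["force"])
    return control_points

if __name__ == "__main__":
    cps = [{"tag": 1, "initial": [0.0, 0.0, 0.0], "force": [0.0, 2.0, 0.0]},
           {"tag": 1, "initial": [1.0, 0.0, 0.0], "force": [0.0, 1.0, 0.0]},
           {"tag": 2, "initial": [5.0, 0.0, 0.0], "force": [0.0, 0.0, 0.0]}]
    for cp in compute_terminal_coordinates(cps):
        print(cp["tag"], cp["terminal"])
```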
- the motion interpolation unit 4 reads the small-area-divided data S4 from the storage device M4 and the terminal-3D-coordinate-set control point data S3'' from the storage device M5, calculates the movement positions of the pixels other than the control points in each small area from the movement positions of the control points in that small area by linear interpolation or extrapolation as shown in FIGS. 7A and 7B, generates an image or three-dimensional data of the expression-motion face, and stores it in the storage device M6 as output data S6.
- the calculation may also be performed using higher-order interpolation and extrapolation such as splines.
- the output data S6 stored in the storage device M6 is displayed on the display device DPY automatically or in accordance with an instruction from the input device KEY.
- first, the motion interpolation unit 4 reads the small-area-divided data S4 and the terminal-3D-coordinate-set control point data S3'' from the storage devices M4 and M5 (step F131).
- it then focuses on one small area in the operation area of the data S4 (step F132) and on one pixel in that small area (step F133).
- the three control point data entries whose initial 3D coordinates are closest to the three-dimensional coordinates of that pixel are retrieved from the terminal-3D-coordinate-set control point data S3'' (step F134).
- it is then determined whether the initial 3D coordinates of any of the three retrieved control point data entries match the three-dimensional coordinates of the pixel of interest (step F135).
- if they match, the terminal 3D coordinates of the matching control point data are stored as the terminal 3D coordinates of that pixel in the small area of interest (step F136).
- if they do not match, the pixel of interest is a non-control point, and its movement position is calculated from the terminal 3D coordinates of the three control point data entries by the interpolation or extrapolation shown in FIGS. 7A and 7B (step F137).
- the calculated movement position is stored as the terminal 3D coordinates of the pixel of interest in the small area of interest (step F138).
- when the processing for one pixel in the small area of interest is finished, the motion interpolation unit 4 shifts its attention to the next pixel in that small area (step F139) and repeats the same processing.
- when the processing for all the pixels in the small area of interest is completed (YES in step F140), the next small area in the operation area is focused on (step F141) and the same processing is repeated.
- when the pixels in all the small areas in the operation area have been processed (YES in step F142), output data S6 containing the terminal 3D coordinates of each pixel in the operation area is stored in the storage device M6 (step F143).
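One possible realization of the linear interpolation and extrapolation of steps F134 to F138 is sketched below: the terminal position of a non-control pixel is obtained by expressing the pixel in the affine frame spanned by the initial positions of its three nearest control points (in the least-squares sense) and applying the same weights to their terminal positions, which handles points outside the triangle (extrapolation) in the same way. This particular construction is an assumption for illustration, not the patent's exact formula.

```python
import numpy as np

def interpolate_terminal(pixel_xyz, cp_initial, cp_terminal):
    """Terminal position of a non-control pixel from its 3 nearest control points.
    cp_initial, cp_terminal : (3, 3) initial and terminal coordinates of those points.
    The pixel is expressed in the affine frame spanned by the three initial
    positions (least squares), and the same weights are applied to the terminal
    positions; points outside the triangle are extrapolated the same way."""
    p0, p1, p2 = cp_initial
    basis = np.stack([p1 - p0, p2 - p0], axis=1)            # (3, 2)
    ab, *_ = np.linalg.lstsq(basis, np.asarray(pixel_xyz) - p0, rcond=None)
    q0, q1, q2 = cp_terminal
    return q0 + ab[0] * (q1 - q0) + ab[1] * (q2 - q0)

if __name__ == "__main__":
    initial = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    terminal = initial + np.array([0.0, 0.0, 1.0])           # the area lifts by 1 in z
    print(interpolate_terminal([0.25, 0.25, 0.0], initial, terminal))  # [0.25 0.25 1.]
```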
- FIG. 9 shows a form deformation apparatus according to a second embodiment of the present invention.
- the morphological deformation device according to the second embodiment deforms an expressionless face image of a person into an expression motion face image.
- the form deformation apparatus of the second embodiment differs from the form deformation apparatus of the first embodiment shown in FIG. 2 in that a storage device M7 is provided and the processing device CNT1 further includes an adjacent small area adjustment unit 5.
- the other points are the same as those of the first embodiment.
- since the operation area is divided into a plurality of small areas and the movement amount is calculated independently for each small area, if there is a clear difference between the movement amounts of adjacent small areas, discontinuities such as steps occur at the boundary between these small areas.
- for example, two small areas 1101 and 1102 that were in contact with each other before movement, as shown in FIG. 10A, may separate from each other as shown in FIG. 10B or partially overlap each other as shown in FIG. 10C if their movement amounts differ.
- the adjacent small area adjusting unit 5 of this embodiment has a function of adjusting the discontinuous state generated at the boundary of such small areas.
- the adjacent small area adjustment unit 5, like the other functional units 1 to 4, can be realized by the program stored in the recording medium PM1 and the computer constituting the processing device CNT1.
- first, the adjacent small area adjustment unit 5 receives the small-area-divided data S4 and the output data S6 from the storage device M4 and the storage device M6 (step F201).
- all boundary points of all the small areas are searched based on the small area divided data S4 (step F202).
- a boundary point of a small area is a pixel located on the boundary between a plurality of adjacent small areas. Such pixels belong to a plurality of small areas, so the search can be performed by extracting from the small-area-divided data S4 all the pixels that belong to a plurality of small areas.
- next, the adjacent small area adjustment unit 5 focuses on one of the boundary points found in step F202 (step F203), determines from the small-area-divided data S4 the small areas to which that boundary point belongs, takes out from the output data S6 the terminal 3D coordinates of the boundary point in all of the small areas to which it belongs, and calculates their average (or, alternatively, their median) (step F204).
- for example, one boundary point 1100 shown in FIG. 10A becomes, after movement, a point 1103-1 with terminal 3D coordinates (Xa, Ya, Za) in the small area 1101 and a corresponding point with terminal 3D coordinates (Xb, Yb, Zb) in the small area 1102, as shown in FIG. 10B or FIG. 10C.
- in step F205, the terminal 3D coordinates (Xa, Ya, Za) of the point 1103-1 are updated to {(Xa + Xb)/2, (Ya + Yb)/2, (Za + Zb)/2}, and the corresponding point in the small area 1102 is updated in the same way.
- when the adjacent small area adjustment unit 5 finishes processing one boundary point, it shifts its attention to the next boundary point (step F206), returns to step F204, and repeats the same processing.
- when all the boundary points have been processed (YES in step F207), the small area boundaries are smoothed (step F208).
- in this smoothing, the adjacent small area adjustment unit 5 smooths each boundary by propagating, for each small area, the change amount of every boundary point whose terminal 3D coordinates were changed in step F205 to the pixels near that boundary point.
- the detail of this smoothing process F208 is shown in FIG. 12.
- first, the adjacent small area adjustment unit 5 focuses on one small area (step F211) and initializes the correction coefficient to a value smaller than 1 and close to 1 (step F212).
- next, the 3D correction amounts of all the boundary points with adjacent small areas in the small area of interest are calculated (step F213).
- the 3D correction amount of a boundary point is the difference between the terminal 3D coordinates of that boundary point in the small area of interest in the output data S6 and the average, calculated in step F204, of its terminal 3D coordinates over all the small areas to which it belongs.
- for example, if the terminal 3D coordinates of a certain boundary point in a certain small area are (Xa, Ya, Za) and the average of its terminal 3D coordinates over all the small areas to which it belongs, calculated in step F204, is {(Xa + Xb)/2, (Ya + Yb)/2, (Za + Zb)/2}, then its 3D correction amount is obtained as {(Xa + Xb)/2 − Xa, (Ya + Yb)/2 − Ya, (Za + Zb)/2 − Za}.
- next, the adjacent small area adjustment unit 5 searches for all the pixels in the small area of interest that are in contact with a boundary point, as inner contact points (step F214). For example, as shown in FIG. 13, of the two small areas 1101 and 1102 in contact with each other, when attention is on the small area 1101, the pixels a to f on the boundary with the small area 1102 are the initial boundary points, so the pixels g to l in contact with the pixels a to f are determined to be the inner contact points.
- next, the adjacent small area adjustment unit 5 focuses on one inner contact point (step F215) and calculates the average (or median) of the 3D correction amounts of all the boundary points in contact with that inner contact point (step F216). For example, when the pixel g in FIG. 13 is focused on as one inner contact point, the boundary points in contact with it are the pixels a and b, so the adjacent small area adjustment unit 5 calculates the average of the 3D correction amounts of the pixels a and b. Next, the adjacent small area adjustment unit 5 adds the 3D correction amount obtained by multiplying the calculated average by the correction coefficient to the terminal 3D coordinates of the inner contact point (step F217).
- the inner contact is treated as a new boundary point (step F218).
- the adjacent small area adjustment unit 5 then shifts its attention to the next inner contact point (step F219); for example, when the processing of the pixel g in FIG. 13 is finished, attention shifts to the pixel h, and the process returns to step F216 to repeat the same processing.
- when all the inner contact points have been processed, the adjacent small area adjustment unit 5 reduces the value of the correction coefficient by a predetermined amount (for example, 0.1 or 0.2) (step F221).
- if the value of the correction coefficient is greater than 0 (YES in step F222), the process returns to step F214.
- in the case of FIG. 13, when the processing of the pixels g to l is finished, the pixels g to l are treated as new boundary points, so the set of new inner contact points found when the search of step F214 is repeated becomes the pixels m to r. In this way, the 3D correction amounts of the original boundary points a to f are propagated to neighboring pixels and smoothed.
- when the value of the correction coefficient is no longer greater than 0, the adjacent small area adjustment unit 5 finishes the smoothing processing for the small area of interest, shifts its attention to the next small area (step F223), returns to step F212, and repeats the same processing.
- when the adjacent small area adjustment unit 5 has finished the processing for all the small areas (YES in step F224), it ends the smoothing processing F208 of the small area boundaries.
- in this way, adjacent-small-area-adjusted data S7, in which the discontinuities generated at the boundary portions of the small areas in the output data S6 have been adjusted, is obtained and stored in the storage device M7.
- the output data S7 stored in the storage device M7 is displayed on the display device DPY automatically or in accordance with an instruction from the input device KEY.
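- As an illustration only, the boundary-smoothing loop of steps F211 to F224 described above might be sketched as follows; the data structures, function names, and the parameter values shown (initial coefficient 0.9, decrement 0.2) are assumptions for illustration, not prescribed by the present description.

```python
import numpy as np

def smooth_small_area(coords, boundary_correction, neighbours,
                      initial_coeff=0.9, step=0.2):
    """coords: dict pixel -> np.array([x, y, z]) terminal 3D coordinates.
    boundary_correction: dict pixel -> 3D correction amount of each
        boundary point (average over all small areas minus own value).
    neighbours: dict pixel -> adjacent pixels in the same small area."""
    boundary = set(boundary_correction)            # initial boundary points
    correction = dict(boundary_correction)
    coeff = initial_coeff                          # correction coefficient (F212)
    while coeff > 0:                               # F222
        # inner contact points: pixels touching a current boundary point (F214)
        inner = {p for b in boundary for p in neighbours[b]} - boundary
        new_corr = {}
        for p in inner:                            # F215 to F219
            touching = [correction[b] for b in neighbours[p] if b in boundary]
            avg = np.mean(touching, axis=0)        # F216 (median also allowed)
            coords[p] = coords[p] + coeff * avg    # F217
            new_corr[p] = avg                      # p becomes a new boundary point (F218)
        boundary |= inner
        correction.update(new_corr)
        coeff -= step                              # F221
    return coords
```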
- FIG. 14 shows a form deformation apparatus according to a third embodiment of the present invention.
- the shape transformation device of the third embodiment transforms an expressionless face image of a person into an expression motion face image.
- The form deformation apparatus of the third embodiment differs from the form deformation apparatus of the second embodiment in that a storage device M8 is additionally provided and the processing device CNT1 further includes a non-operating area adjusting unit 6.
- During facial expression motion, a discontinuity such as a step may occur at the boundary between the motion area and the non-motion area.
- For example, of the motion area 1201 and the non-motion area 1202 that were in contact with each other in the expressionless image, only the motion area 1201 moves during expression motion; as a result, the motion area 1201 and the non-motion area 1202 may overlap each other, or a gap 1204 may be generated between the motion area 1201 and the non-motion area 1202.
- the non-operating area adjusting unit 6 of the present embodiment has a function of adjusting the discontinuous state generated at the boundary between the operating area and the non-operating area.
- Like the functional units 1 to 5, the non-operating area adjusting unit 6 can be realized by the program stored in the recording medium PM1 and the computer constituting the processing device CNT1.
- First, the non-operating area adjusting unit 6 receives the motion area specified data S2 and the adjacent small area adjusted data S7 from the storage device M2 and the storage device M7 (step F301).
- Next, the non-operating area adjusting unit 6 initializes the correction coefficient to a value smaller than 1 and close to 1 (step F302), and calculates the 3D movement amounts of all the inner boundary points based on the motion area specified data S2 (step F303).
- An inner boundary point is a pixel that belongs to the motion area and is located on the boundary with the non-motion area; for example, the pixels a, b, and c in FIG. 17 are inner boundary points.
- The 3D movement amount of an inner boundary point is the difference between its 3D coordinates in the motion area specified data S2 (that is, its position when expressionless) and its 3D coordinates in the adjacent small area adjusted output data S7 (that is, its position during expression motion).
- the non-operating area adjusting unit 6 searches all the external boundary points based on the operating area specified data S2 (step F304).
- the outer boundary point is a pixel located on the boundary of the operation area among the pixels of the non-operation area. For example, in the example of FIG. 17, the pixels d, e, f, etc. are outer boundary points.
- the non-operating area adjustment unit 6 focuses on one of the outer boundary points found in step F304 (step F305), and determines the 3D movement amount of all the inner boundary points that the outer boundary points touch. Find the average (or the median) (step F306). For example, when the pixel e shown in FIG. 17 is focused as the outer boundary point, all the inner boundary points in contact with the pixel e are the pixels a, b and c, so the average of their 3D movement amounts is calculated. Be done. Next, the 3D movement amount obtained by multiplying the calculated average value by the correction coefficient is added to the terminal 3D coordinates of the outer boundary point (step F307). Then, the outer boundary point is treated as a point of the operation area (step F 308).
- When the processing for one outer boundary point is finished, attention is shifted to the next outer boundary point (step F309). For example, when the processing of the pixel e in FIG. 17 is finished, attention is next paid to the pixel f, and the process returns to step F306 to repeat the same processing as described above.
- When the processing for all outer boundary points is finished, the non-operating area adjusting unit 6 reduces the value of the correction coefficient by a predetermined constant value (for example, 0.1 or 0.2) (step F311). If the value of the correction coefficient is still larger than 0 (YES in step F312), the process returns to step F304.
- Otherwise, the non-operating area adjusting unit 6 finishes the non-operating area adjustment processing and outputs to the storage device M8 the non-operating area adjusted data S8, that is, data obtained by adjusting the discontinuous state generated at the boundary between the motion area and the non-motion area in the adjacent small area adjusted output data S7.
- the output data S8 stored in the storage device M8 is displayed on the display device DPY automatically or in accordance with an instruction from the input device KEY.
- In the present embodiment, the non-operating area adjusting unit 6 is executed after the adjacent small area adjusting unit 5. However, the non-operating area adjusting unit 6 may be executed first, generating from the output data S6 data in which the discontinuity at the boundary with the non-motion area has been eliminated, and the adjacent small area adjusting unit 5 may then be executed to further eliminate the discontinuities at the boundaries between the small areas.
- the adjacent small area adjusting unit 5 may be omitted, and only the adjustment by the non-operating area adjusting unit 6 may be performed.
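- Under the same illustrative assumptions as the previous sketch, the non-operating area adjustment of steps F301 to F312 might be sketched as follows; all names and parameter values are assumptions.

```python
import numpy as np

def adjust_non_motion_area(moved, rest, motion_pixels, neighbours,
                           initial_coeff=0.9, step=0.2):
    """moved: dict pixel -> 3D coords during expression motion (data S7).
    rest: dict pixel -> 3D coords when expressionless (data S2).
    motion_pixels: set of pixels belonging to the motion area."""
    coeff = initial_coeff                                   # F302
    # 3D movement amounts of the inner boundary points (F303)
    movement = {p: moved[p] - rest[p] for p in motion_pixels
                if any(q not in motion_pixels for q in neighbours[p])}
    while coeff > 0:                                        # F312
        outer = {q for p in movement for q in neighbours[p]
                 if q not in motion_pixels}                 # outer boundary points (F304)
        new_mv = {}
        for q in outer:                                     # F305 to F309
            touching = [movement[p] for p in neighbours[q] if p in movement]
            avg = np.mean(touching, axis=0)                 # F306 (median also allowed)
            moved[q] = moved.get(q, rest[q]) + coeff * avg  # F307
            new_mv[q] = avg                                 # q treated as motion area (F308)
        motion_pixels |= outer
        movement.update(new_mv)
        coeff -= step                                       # F311
    return moved
```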
- FIG. 18 is a block diagram of a facial motion encoding apparatus according to a fourth embodiment of the present invention, which calculates and outputs the motion area and the external force at the time of expression motion from three-dimensional face data of a certain person when expressionless and during expression motion.
- The face motion encoding apparatus of this embodiment includes a processing device CNT11, storage devices M11 to M15, a display device DPY, and an input device KEY.
- The plurality of storage devices M11 to M15 are configured by, for example, magnetic disks; among them, the storage device M11 stores the expressionless face 3D data S100 and the expression motion face 3D data S101 as input data, the storage device M15 stores the external force data S12 of the expression motion and the motion area S13 of the expression motion as output data, and the remaining storage devices M12 to M14 store the intermediate data and control data required in the process.
- the display device DPY is formed of, for example, a liquid crystal display, and is used to display process data and the like.
- The input device KEY consists of, for example, a keyboard and a mouse, and is used to receive various data and instructions from the user.
- the control device CNT11 is formed of, for example, a central processing unit of a computer, and executes the main processing of the face motion encoding device of this embodiment.
- the control device CNT11 includes a 3D face model application unit 101, a 3D difference calculation unit 102, and an external force calculation unit 103.
- The functional units 101 to 103 of the control device CNT11 can be realized by a computer constituting the control device CNT11 and a program for the face motion encoding apparatus.
- The program for the face motion encoding apparatus is recorded on a computer-readable recording medium PM11 such as a magnetic disk, is read by the computer when the computer is started, and, by controlling the operation of the computer, implements the functional units 101 to 103 on the computer.
- The control device CNT11 first uses the 3D face model assignment unit 101 to input the expressionless face three-dimensional data S100 and the expression motion face three-dimensional data S101 of the person used for encoding, stored in the storage device M11, fits to each of them the face triangular polygon mesh model S105 prepared in advance in the storage device M12, and thereby generates the face triangular polygon mesh models S102 and S103 for the expressionless face and the expression motion face, which are stored in the storage device M13.
- Next, using the 3D difference calculation unit 102, the face triangular polygon mesh models S102 and S103 for the expressionless face and the expression motion face are input from the storage device M13, the two are compared, and 3D difference data S104 is generated and stored in the storage device M14.
- Then, using the external force calculation unit 103, the 3D difference data S104 is input from the storage device M14, and the external force to be given to each vertex of the face triangular polygons of the expressionless face in order to deform the expressionless face into the expression motion face is calculated by the above-mentioned equation (7).
- The motion area S13 of the expression motion, containing information on the polygons that moved when the face changed from the expressionless face to the expression motion face, and the external force data S12 of the expression motion, containing the external force applied to each polygon vertex, are output to the storage device M15.
- The three-dimensional face data S100 and S101 when expressionless and during expression motion, which are input to the 3D face model assignment unit 101, are acquired by a unit that acquires three-dimensional face data and are stored in the storage device M11.
- The three-dimensional face data may be data measured by any means, such as a light-projection range finder, stereo image measurement with a stereo or multi-lens camera, MRI measurement, or infrared measurement, or it may be data created artificially with three-dimensional computer graphics.
- A texture attached to the three-dimensional data as supplementary information can be used to establish the correspondence between the two sets of face data, but it is not essential when the correspondence between the two sets of face data is found by some other means.
- For example, the correspondence may also be established by attaching, at the time of acquisition, markers whose temperature differs from body temperature.
- Even when a texture is acquired, if markers are drawn on part of the face, a texture of the entire face is not essential.
- In the storage device M12, a face triangular polygon mesh model S105 as shown in FIG. 19 is stored in advance.
- the number of polygons and the size of each polygon are arbitrary.
- However, some of the vertices of the triangular mesh are generated so as to correspond to the feature points of the face.
- the three-dimensional face model addition unit 101 generates a three-dimensional expressionless face triangular polygon mesh model S102 by matching the face triangular polygon mesh model S105 with the expressionless face three-dimensional data S100.
- Specifically, the expressionless face three-dimensional data S100 and the face triangular polygon mesh model S105 are displayed on the screen of the display device DPY, and the user operates the input device KEY to map each vertex of the face triangular polygon mesh model S105 to the feature point of the face to which that vertex should correspond.
- It is also possible to associate some of the vertices of the face triangular polygon mesh model S105 automatically by referring to the feature points of the expressionless face three-dimensional data S100. For example, if a certain vertex of a certain polygon of the face triangular polygon mesh model S105 is a vertex to be made to correspond to the tip of the nose, it can be associated automatically by detecting the position of a marker attached to the tip of the nose.
- The 3D face model assignment unit 101 assigns, to each vertex of each polygon in the face triangular polygon mesh model S105, the three-dimensional coordinates of the point of the expressionless face three-dimensional data S100 associated with that vertex, and the model obtained after assigning three-dimensional coordinates to the vertices of all the polygons is stored in the storage device M13 as the three-dimensional expressionless face triangular polygon mesh model S102.
- Similarly, the 3D face model assignment unit 101 matches the face triangular polygon mesh model S105 with the expression motion face three-dimensional data S101 to generate the three-dimensional expression motion face triangular polygon mesh model S103.
- Next, the 3D difference calculation unit 102 calculates, for each vertex of each polygon, the difference in three-dimensional coordinates between the same vertex of the same polygon in the three-dimensional expressionless face triangular polygon mesh model S102, to whose vertices the three-dimensional coordinates of the expressionless face are assigned, and in the three-dimensional expression motion face triangular polygon mesh model S103, to whose vertices the three-dimensional coordinates of the expression motion face are assigned (corresponding polygons of the models S102 and S103 are denoted, for example, S102-1 and S103-1), and determines the motion area from these differences.
- The 3D difference calculation unit 102 stores in the storage device M14, as the three-dimensional difference data S104, data containing, for each vertex of the face triangular polygon mesh model, the assigned three-dimensional coordinates of the expressionless face and of the expression motion face, together with the motion area determined in this way.
- FIG. 21 shows an example of the contents of the three-dimensional difference data S104.
- In the three-dimensional difference data S104, a polygon number m is set for each polygon of the face triangular polygon mesh model S105 and a vertex number i is set for each vertex of each polygon, and for each vertex there are recorded a flag indicating whether the polygon containing the vertex belongs to the motion area and the three-dimensional coordinates of the vertex when expressionless and during expression motion.
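- Purely as an illustration (the present description does not prescribe a storage layout), one record of the three-dimensional difference data S104 of FIG. 21 might be represented as follows; all field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class VertexDifference:
    """One record of the 3D difference data S104 (illustrative layout)."""
    polygon_no: int                   # polygon number m
    vertex_no: int                    # vertex number i
    in_motion_area: bool              # flag: polygon belongs to the motion area
    neutral_xyz: tuple                # 3D coordinates when expressionless
    expression_xyz: tuple             # 3D coordinates during expression motion

records = [
    VertexDifference(0, 0, True, (10.0, 52.3, 31.0), (10.4, 53.1, 31.2)),
    VertexDifference(0, 1, True, (12.1, 50.7, 30.5), (12.3, 51.6, 30.8)),
]
# the external force calculation unit 103 consumes only the motion-area records
motion_records = [r for r in records if r.in_motion_area]
```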
- The external force calculation unit 103 inputs from the three-dimensional difference data S104 the data of the polygons belonging to the motion area and, using the expressionless three-dimensional coordinates of each polygon vertex, its three-dimensional coordinates during expression motion, and the necessary parameters, calculates the external force required to move each vertex from its position when expressionless to its position during expression motion.
- Specifically, the external force calculation unit 103 calculates the external force Fi from the above-mentioned equation (7) by giving the parameters t, k, and d, the three-dimensional coordinates xi(0) of each control point when expressionless, the three-dimensional coordinates xi(t) of the control point during expression motion, and their initial conditions.
- The control points may be freely selected within each polygon; in this embodiment, in the simplest case, the three vertices of the polygon are taken as the control points, as shown in FIGS. 22A and 22B.
- FIG. 23 is a block diagram of a face motion decoding apparatus according to a fifth embodiment of the present invention, which deforms the three-dimensional face data of an expressionless person on the basis of external force information at the time of expression motion, and generates and outputs three-dimensional face data of the person during expression motion.
- The face motion decoding apparatus of this embodiment includes a processing device CNT21, storage devices M21 to M26, a display device DPY, and an input device KEY.
- the plurality of storage devices M21 to M26 are configured by, for example, magnetic disks.
- The storage devices M21 and M22 store the expressionless face 3D data S300 as input data and the external force information S30 at the time of expression motion, the storage device M26 stores the expression face 3D data S32 as output data, and the remaining storage devices M23 to M25 store intermediate results and control data required in the process.
- The display device DPY is formed of, for example, a liquid crystal display, and is used to display the generated expression face 3D data S32 and data of the processing process.
- the input device KEY comprises, for example, a keyboard and a mouse, and is used to receive various data and instructions from the user.
- the control device CNT21 is formed of, for example, a central processing unit of a computer, and executes the main processing of the face motion decoding device of this embodiment.
- the control device CNT21 has a 3D face model corresponding unit 301, a movement amount calculation unit 302, and an expression creation unit 31.
- the respective functional units 301, 302, and 31 of the control device CNT21 can be realized by a computer constituting the control device CNT21 and a program for the face motion decoding device.
- The program for the face motion decoding apparatus is recorded on a computer-readable recording medium PM21 such as a magnetic disk, is read by the computer when the computer is started, and, by controlling the operation of the computer, implements the functional units 301, 302, and 31 on the computer.
- The control device CNT21 first uses the 3D face model corresponding unit 301 to input the expressionless face three-dimensional data S300 of the person used for decoding, stored in the storage device M21, and, by fitting to it the face triangular polygon mesh model S105 prepared in advance in the storage device M23, generates a three-dimensional expressionless face triangular polygon mesh model S302 and stores it in the storage device M24.
- Next, using the movement amount calculation unit 302, the movement position of each control point in the motion area of the three-dimensional expressionless face triangular polygon mesh model S302 stored in the storage device M24 is calculated according to the external force information S30 of the expression motion stored in the storage device M22, and is stored in the storage device M25 as the movement position data S303 of the control points.
- Finally, using the expression creation unit 31, the expressionless face three-dimensional data S300 stored in the storage device M21 is deformed on the basis of the movement position data S303, three-dimensional data of the expression motion face is generated, stored in the storage device M26 as the expression face 3D data S32, and displayed on the display device DPY. Details will be described below.
- As the external force information S30 of the expression motion in the storage device M22, information corresponding to the external force data S12 of the expression motion and the motion area S13 of the expression motion generated by the face motion encoding apparatus of the fourth embodiment shown in FIG. 18 is used. Also, as the face triangular polygon mesh model S105 prepared in the storage device M23, the same model as the face triangular polygon mesh model S105 used in the face motion encoding apparatus of the fourth embodiment shown in FIG. 18 is used.
- The 3D face model corresponding unit 301 generates the three-dimensional expressionless face triangular polygon mesh model S302 from the expressionless face three-dimensional data S300 by the same method as the 3D face model assignment unit 101 of the face motion encoding apparatus of the fourth embodiment shown in FIG. 18. That is, the three-dimensional coordinates of the point of the expressionless face three-dimensional data S300 corresponding to each vertex of each polygon in the face triangular polygon mesh model S105 are assigned to that vertex, and once the three-dimensional coordinates of the expressionless face have been assigned to the vertices of all the polygons, the result is stored in the storage device M24 as the three-dimensional expressionless face triangular polygon mesh model S302.
- The movement amount calculation unit 302 inputs from the storage device M22 the external force information S30 at the time of expression motion corresponding to the face triangular polygon mesh model S105, and calculates the movement positions S303 of the control points at the time of expression motion by the same method as the control point operation unit 3 of the first embodiment. For example, when a face triangular polygon mesh model similar to that of the fourth embodiment shown in FIG. 18 is used, the movement amount calculation unit 302 calculates the movement positions from the three-dimensional coordinates of the three vertices of each polygon and the external forces described in the external force information S30.
- Among the points constituting the expressionless face 3D data S300 stored in the storage device M21, the movement positions of the points corresponding to the polygon vertices in the motion area of the face triangular polygon mesh model S105 have already been obtained in the movement position data S303, so the expression creation unit 31 calculates the movement positions of the remaining points (points inside the polygons) in the same manner as the operation interpolation unit 4 of the first embodiment. For example, when the face triangular polygon mesh model is used, the expression creation unit 31 calculates the movement position of each point inside each polygon from the movement positions of the vertices by interpolation. The three-dimensional coordinates of each point of the expressionless face 3D data S300 are then corrected by the corresponding movement amount, and the expression face three-dimensional data S32 is generated.
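- The description above refers only to "interpolation" without fixing a formula; for a triangular polygon, barycentric interpolation over the three vertices is one natural choice, sketched below purely as an illustration (function names are assumptions).

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """2D barycentric coordinates of point p with respect to triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def move_inner_point(p2d, tri2d, moved_vertices_3d):
    """Interpolate the moved 3D position of an in-polygon point from the
    moved 3D positions (S303) of the three polygon vertices."""
    u, v, w = barycentric_weights(np.asarray(p2d, float),
                                  *[np.asarray(t, float) for t in tri2d])
    a3, b3, c3 = [np.asarray(t, float) for t in moved_vertices_3d]
    return u * a3 + v * b3 + w * c3
```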
- In the above description, the three-dimensional face data S300 of an expressionless person is deformed on the basis of the external force information S30 of a single expression motion, but it is also possible to deform the expressionless face three-dimensional data S300 on the basis of the external force information S30 of a plurality of different expression motions.
- In this case, the movement amount calculation unit 302 combines the external force information S30 of the plurality of expression motions into a single piece of external force information by adding the external forces acting on the same control point, and calculates the movement position data S303 of the control points on the basis of the combined external force.
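- A minimal sketch of this combination step; the control-point identifiers and force values are purely illustrative.

```python
from collections import defaultdict
import numpy as np

def combine_external_forces(force_sets):
    """force_sets: list of dicts mapping control-point id -> 3D force vector.
    Forces acting on the same control point are added together, yielding a
    single combined external-force dictionary."""
    combined = defaultdict(lambda: np.zeros(3))
    for forces in force_sets:
        for point_id, f in forces.items():
            combined[point_id] = combined[point_id] + np.asarray(f, float)
    return dict(combined)

# e.g. merging a "smile" force field with a "raised eyebrows" force field
merged = combine_external_forces([{7: (0.1, 0.3, 0.0)},
                                  {7: (0.0, 0.1, 0.2), 9: (0.2, 0.0, 0.0)}])
```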
- FIG. 24 is a block diagram of a face motion decoding apparatus according to a sixth embodiment of the present invention, which deforms a face image (two-dimensional face image) of an expressionless person on the basis of external force information at the time of expression motion, and generates and outputs a face image (two-dimensional face image) of the person during expression motion. That is, in the face motion decoding apparatus of this embodiment, the expressionless face three-dimensional data S300 of the face motion decoding apparatus of the fifth embodiment shown in FIG. 23 is replaced by the expressionless face image S301, and the output expression face three-dimensional data S32 is replaced by the expression face image S33.
- the face motion decoding apparatus of the present embodiment includes a processing device CNT31, storage devices M31 to M36, a display device DPY, and an input device KEY.
- The plurality of storage devices M31 to M36 are, for example, magnetic disks; the storage devices M31 and M32 store the expressionless face image S301 as input data and the external force information S30 at the time of expression motion, the storage device M36 stores the expression face image S33 as output data, and the remaining storage devices M33 to M35 store intermediate results and control data required in the process.
- the display device DPY is formed of, for example, a liquid crystal display, and displays the generated expression facial image S33 and data of the processing process.
- The input device KEY comprises, for example, a keyboard and a mouse, and receives various data and instructions from the user.
- the control device CNT31 is constituted by, for example, a central processing unit of a computer, and executes the main processing of the face motion decoding device of this embodiment.
- the control device CNT 31 includes a face image three-dimensionalization unit 300, an operation amount calculation unit 302, and an expression image creation unit 33.
- the respective functional units 300, 302, and 33 of the control device CNT31 can be realized by a computer constituting the control device CNT31 and a program for the face motion decoding device.
- The program for the face motion decoding apparatus is recorded on a computer-readable recording medium PM31 such as a magnetic disk, is read by the computer when the computer is started, and, by controlling the operation of the computer, implements each of the functional units 300, 302, and 33 on the computer.
- The control device CNT31 first uses the face image three-dimensionalization unit 300 to input the expressionless face image S301 of the person used for decoding, stored in the storage device M31, and, by attaching it to the three-dimensional model of the expressionless face prepared in advance in the storage device M33, generates the expressionless face three-dimensional data S300 for decoding and stores it in the storage device M34.
- The stored expressionless face three-dimensional data S300 corresponds to the three-dimensional expressionless face triangular polygon mesh model S302 of the fifth embodiment shown in FIG. 23.
- Next, using the movement amount calculation unit 302, the movement position of each control point in the motion area of the expressionless face three-dimensional data S300 stored in the storage device M34 is calculated according to the external force information S30 of the expression motion stored in the storage device M32, and is stored in the storage device M35 as the movement position data S303 of the control points.
- Finally, using the expression image creation unit 33, the expressionless face three-dimensional data S300 stored in the storage device M34 is deformed to generate three-dimensional data of the expression motion face at the time of expression motion, an expression face image S33 is generated from this three-dimensional data, stored in the storage device M36, and displayed on the display device DPY. Details will be described below.
- As the external force information S30 of the expression motion in the storage device M32, information corresponding to the external force data S12 of the expression motion and the motion area S13 of the expression motion generated by the face motion encoding apparatus of the fourth embodiment shown in FIG. 18 is used.
- As the three-dimensional model S302 of the expressionless face prepared in the storage device M33, a three-dimensional face model of a person associated with the same mesh model as the face triangular polygon mesh model S105 used in the face motion encoding apparatus of the fourth embodiment shown in FIG. 18 is used.
- The face image three-dimensionalization unit 300 receives the expressionless face image S301 for decoding and the three-dimensional model S302 of the expressionless face, and pastes the expressionless face image S301 for decoding onto the three-dimensional model S302 of the expressionless face to create the expressionless face three-dimensional data S300 for decoding.
- Specifically, the expressionless face three-dimensional data of a specific person is registered as the face three-dimensional model, the feature points of the face model are made to correspond to the feature points of the expressionless face image S301 for decoding, and points other than the feature points are given correspondences by interpolation and extrapolation, as shown in FIG. 7, from their relative positional relationship with the feature points.
- The face three-dimensional model S302, with the expressionless face image S301 for decoding associated with it as texture information, can then be treated as the expressionless face three-dimensional data S300 for decoding.
- The expressionless face three-dimensional data S302 of the specific person described above need not be that of a real person; it may also be an average face obtained by averaging the expressionless face three-dimensional data of real people.
- There is also a method of using, as the three-dimensional model S302 of the expressionless face, a face mesh model in which some of the vertices of the triangular mesh correspond to the feature points of the face.
- Such a face mesh model can be regarded as artificially created, very rough face 3D data. The expressionless face image S301 for decoding is associated with the face mesh model in units of feature points, points other than the feature points are given correspondences by interpolation and extrapolation as shown in FIG. 7, and the expressionless face three-dimensional data S300 for decoding is created in the same way as when the expressionless face 3D data of a specific person is used.
- The expression image creation unit 33 creates the three-dimensional data of the expression motion face to be output in the same way as the expression creation unit 31 of the fifth embodiment, then projects this three-dimensional face data onto a designated view, generates the expression motion face image S33 for decoding, and outputs it to the storage device M36.
- In the above description, the expressionless face image S301 of a person is deformed on the basis of the external force information S30 of a single expression motion, but, as in the fifth embodiment of FIG. 23, the expressionless face image S301 may be deformed on the basis of the external force information S30 of a plurality of different expression motions.
- In the fifth embodiment, the expressionless face three-dimensional data S300 is input and the three-dimensional data S32 of the expression motion face is output, and in the sixth embodiment, the expressionless face image S301 is input and the expression motion face image S33 is output.
- However, a face motion decoding apparatus to which the expressionless face three-dimensional data S300 is input and from which the expression motion face image S33 is output, or to which the expressionless face image S301 is input and from which the three-dimensional data S32 of the expression motion face is output, is also conceivable.
- FIG. 25 is a block diagram of a face motion encoding apparatus according to a seventh embodiment of the present invention, which, as in the fourth embodiment, calculates and outputs the motion area and the external force at the time of expression motion from the three-dimensional face data of a certain person when expressionless and during expression motion.
- the face motion encoding apparatus of the present embodiment includes a processing unit CNT41, storage units M41 to M43, a display unit DPY, and an input unit KEY.
- The plurality of storage devices M41 to M43 are, for example, magnetic disks; the storage device M41 stores the expressionless face 3D data S100 and the expression motion face 3D data S101 as input data, the storage device M43 stores the external force data S12 of the expression motion and the motion area S13 of the expression motion as output data, and the remaining storage device M42 stores intermediate results.
- The display device DPY comprises, for example, a liquid crystal display and is used to display process data and the like.
- the input device KEY comprises, for example, a keyboard and a mouse, and is used to receive various data and instructions from the user.
- the control device CNT41 is formed of, for example, a central processing unit of a computer, and executes the main processing of the face motion coding device of this embodiment.
- the control device CNT 41 has a difference calculation unit 10 and an external force calculation unit 11.
- the respective functional units 10 and 11 of the control device CNT41 can be realized by a computer constituting the control device CNT41 and a program for face motion coding device.
- The program for the face motion encoding apparatus is recorded on a computer-readable recording medium PM41 such as a magnetic disk, is read by the computer when the computer is started, and, by controlling the operation of the computer, implements the functional units 10 and 11 on the computer.
- The control device CNT41 first uses the difference calculation unit 10 to input the expressionless face three-dimensional data S100 and the expression motion face three-dimensional data S101 of the person used for encoding, stored in the storage device M41, generates the three-dimensional coordinate difference S11 from these input data, and stores it in the storage device M42.
- the difference calculation unit 10 performs the following process.
- the difference calculation unit 10 associates the same parts of the face three-dimensional data S100 and S101 during expressionless operation and expression movement (for example, expressionless face right-eye corner and expression movement face right-eye corner).
- the method of correspondence may be automated by using the above-mentioned marker etc. as a clue, or both images may be displayed on the display device DPY and manually matched by the user using a GUI tool.
- a difference in three-dimensional coordinates is calculated between corresponding points of the expressionless face three-dimensional data S100 and the three-dimensional data S101 of the expression movement face.
- An area where the difference in three-dimensional coordinates is large, that is, where the distance is equal to or greater than a predetermined threshold, or a restricted area such as one where expression muscles are attached, is regarded as the motion area, and the remaining area is regarded as the non-motion area.
- For the points in the motion area, their positions in the expressionless face three-dimensional data (as three-dimensional coordinates, or as two-dimensional coordinates on the plane obtained when the face is projected onto a two-dimensional surface such as a cylinder) and their differences in three-dimensional coordinates are output as the three-dimensional coordinate difference S11.
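- A minimal sketch of this thresholding step; the threshold value used here is an arbitrary example, since the description leaves its value open.

```python
import numpy as np

def motion_area(neutral, expression, threshold=1.0):
    """neutral, expression: (N, 3) arrays of corresponding 3D points.
    Returns the indices regarded as the motion area and the per-point
    coordinate differences (the content of S11 for those points)."""
    diff = expression - neutral
    dist = np.linalg.norm(diff, axis=1)
    idx = np.where(dist >= threshold)[0]   # displacement at or above the threshold
    return idx, diff[idx]
```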
- The external force calculation unit 11 uses the three-dimensional coordinate difference S11, which describes, for the motion points only, their positions in the expressionless face three-dimensional data and their three-dimensional coordinate differences, to calculate the external force required to move each motion point from its position in the expressionless face three-dimensional data by its three-dimensional coordinate difference, and outputs it as the external force data S12 of the expression motion.
- In addition, the motion area S13, in which the external force acts and the expression motion occurs, is output.
- The motion area S13 is divided into small areas in each of which, as shown in the figure, three or more control points connected to one another by springs and dampers receive the external force and the other points are dragged along by the control points; the motions of the individual small areas together constitute the motion of the entire motion area.
- The dynamics of each small area can be formulated as described above by assuming that all the springs and dampers have the same spring constant and damper constant.
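- Equation (7) itself is not reproduced in this section, so the following sketch only illustrates the kind of spring-damper dynamics implied, assuming unit point masses, spring rest lengths equal to the initial offsets, and semi-implicit Euler integration; all names and constants are assumptions, not the formulation of equation (7).

```python
import numpy as np

def simulate_small_area(x0, edges, F, k=1.0, d=0.5, t_end=1.0, dt=0.01):
    """x0: (n, 3) initial control-point coordinates of one small area.
    edges: pairs (i, j) of control points joined by a spring and a damper,
    all sharing the same constants k and d.  F: (n, 3) constant external force."""
    x = x0.copy()
    v = np.zeros_like(x)
    rest = {(i, j): x0[j] - x0[i] for i, j in edges}   # rest offsets of the springs
    for _ in range(int(t_end / dt)):
        f = F.copy()
        for i, j in edges:
            stretch = (x[j] - x[i]) - rest[(i, j)]
            rel_v = v[j] - v[i]
            f[i] += k * stretch + d * rel_v            # spring + damper acting on i
            f[j] -= k * stretch + d * rel_v            # reaction acting on j
        v += dt * f          # unit masses assumed
        x += dt * v
    return x
```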
- FIG. 26 is a block diagram of an eighth embodiment of the present invention, showing an embodiment in which the present invention is applied to a face motion decoding apparatus which, as in the fifth embodiment of FIG. 23, deforms the three-dimensional face data of an expressionless person on the basis of external force information at the time of expression motion and generates and outputs three-dimensional face data of the person during expression motion.
- the face motion decoding apparatus according to the present embodiment is configured to include a processing unit CNT 51, storage units M51 to M54, a display unit DPY, and an input unit KEY.
- The plurality of storage devices M51 to M54 are formed of, for example, magnetic disks; among them, the storage devices M51 and M52 store the expressionless face 3D data S300 as input data and the external force information S30 at the time of expression motion, the storage device M54 stores the expression face 3D data S32 as output data, and the remaining storage device M53 stores intermediate results.
- the display device DPY is configured of, for example, a liquid crystal display, and is used to display the generated expression face 3D data S32, data of the processing process, and the like.
- the input device KEY comprises, for example, a keyboard and a mouse, and is used to receive various data and instructions from the user.
- the control unit CNT 51 is formed of, for example, a central processing unit of a computer, and executes the main processing of the face motion decoding apparatus of this embodiment.
- the control device CNT 51 has an external force decoding unit 30 and a facial expression creation unit 31.
- the respective functional units 30, 31 of the control device CNT51 can be realized by a computer constituting the control device CNT51 and a program for the face motion decoding device.
- The program is recorded on a computer-readable recording medium PM51 such as a magnetic disk, is read by the computer when the computer is started, and, by controlling the operation of the computer, implements the functional units 30 and 31 on the computer.
- The control device CNT51 first uses the external force decoding unit 30 to input the expressionless face three-dimensional data S300 stored in the storage device M51 and the external force information S30 of the expression motion stored in the storage device M52.
- The external force information S30 of the expression motion is obtained by integrating the external force data S12 of the expression motion and the motion area S13 of the expression motion of the seventh embodiment of FIG. 25.
- The motion area S13 is stored as an area indicating a part of the face, described by relative positions based on the positions of and distances between feature points such as both eyes and the inner corners of the eyes (tear points).
- As a storage format, a method of projecting the face onto a flat surface, a cylinder, a spherical surface, or the like and storing the area in two-dimensional coordinates is conceivable.
- Since the motion area is divided into small areas, the motion area is also divided into small areas on the two-dimensional coordinates.
- As an analogy, regard the face as the earth, the projection plane of the motion area as a world map, a part of a continent on the earth as the motion area, and the countries within that continent as the small areas.
- The small areas are numbered, and a set of control points is created for each small area.
- In the external force data S12, the three-dimensional external force vector applied to each control point of each small area, the number of the small area, and the position of the control point in the same two-dimensional coordinates as the motion area are stored.
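- Purely as an illustration of the format just described (all field names are assumptions), the integrated external force information S30 might be represented as follows.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class ControlPointForce:
    """One entry of the external force data S12 (illustrative layout)."""
    small_area_no: int                    # number of the small area
    uv: Tuple[float, float]               # control-point position in the 2D projection
    force: Tuple[float, float, float]     # 3D external force vector

@dataclass
class ExternalForceInfo:
    """External force information S30 = motion area S13 + external force data S12."""
    small_area_outlines: Dict[int, List[Tuple[float, float]]]  # area no -> 2D outline
    forces: List[ControlPointForce]
```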
- The external force decoding unit 30 generates the expression motion movement position data S31 of the control points from the expressionless face three-dimensional data S300 and the external force information S30 of the expression motion, for example, as follows, and stores it in the storage device M53. First, the external force decoding unit 30 expands the expressionless face three-dimensional data S300 into a format described by the same relative positions, based on the positions of and distances between the facial feature points, as the motion area of the external force information S30. Next, the part corresponding to the motion area of the external force information S30 is searched for on the expanded format, and that part is set as the motion area of the expressionless face three-dimensional data S300.
- Likewise, for each small area and each control point described in the external force information S30, the corresponding area and the corresponding point of the expressionless face three-dimensional data S300 are searched for.
- This association may be performed manually, by displaying the expressionless face three-dimensional data S300 on the display device DPY and having the user associate the control points and small areas described in the external force information S30 with a tool such as a GUI, or it may be performed automatically based on the feature points. When the corresponding control point of the expressionless face three-dimensional data S300 is found, the external force decoding unit 30 assigns the three-dimensional coordinates of that point to xi(0) of equation (7).
- the external force at (the corresponding point of) the control point is read from the external force information S30, and is set as the external force Fi of the above-mentioned equation (7).
- Furthermore, the parameters d and t, which represent the intensity of the expression motion, are given, and the moved positions of the control points having the same small area number in the control point set of the external force information S30 are calculated according to equation (7).
- The three-dimensional position information of the control points after this movement, together with the position information, in the format described above, of the motion area, the small areas, and the control points in the expressionless face three-dimensional data S300 for decoding, constitutes the expression motion movement position S31 of the control points.
- The expression creation unit 31 moves each control point of the expressionless face three-dimensional data S300 to the position indicated by the expression motion movement position S31; for the other points, the three-dimensional coordinates of their movement positions are calculated by interpolation and extrapolation from their positions relative to the control points belonging to the same small area, and the corresponding points of the face three-dimensional data S300 are moved to those movement positions.
- In this way, the three-dimensional data S32 of the expression motion face is completed. If a texture is attached to the expressionless face three-dimensional data S300 for decoding, the three-dimensional data S32 of the expression motion face becomes face three-dimensional data with color information, as with the expression creation unit 31 of the fifth embodiment shown in FIG. 23.
- In the above description, the three-dimensional face data S300 of an expressionless person is deformed on the basis of the external force information S30 of a single expression motion, but it is also possible to deform it on the basis of the external force information S30 of a plurality of different expression motions.
- In this case, the external force decoding unit 30 either combines the external force information S30 of the plurality of expression motions into one piece of external force information by adding the external forces acting on the same control point and calculates the expression motion movement position data S31 based on the combined external force, thereby obtaining the final movement position data of the control points corresponding to the plural pieces of external force information S30, or calculates the movement positions of the control points separately on the basis of each piece of external force information S30.
- FIG. 27 is a block diagram of a ninth embodiment of the present invention, showing an embodiment in which the present invention is applied to a face motion decoding apparatus which deforms a face image (two-dimensional face image) of an expressionless person on the basis of external force information at the time of expression motion and generates and outputs a face image (two-dimensional face image) of the person during expression motion. That is, in the face motion decoding apparatus of this embodiment, the expressionless face three-dimensional data S300, which is the input of the face motion decoding apparatus of the eighth embodiment shown in FIG. 26, is replaced by the expressionless face image S301, and the output three-dimensional data S32 of the expression motion face is replaced by the expression face image S33.
- the face-motion decoding apparatus of the present embodiment is configured to include a processing device CNT 61, storage devices M61 to M66, a display device DPY, and an input device KEY.
- The plurality of storage devices M61 to M66 are formed of, for example, magnetic disks; among them, the storage devices M61 and M62 store the expressionless face image S301 as input data and the external force information S30 at the time of expression motion, the storage device M66 stores the expression face image S33 as output data, and the remaining storage devices M63 to M65 store intermediate results and control data required in the process.
- the display device DPY is configured of, for example, a liquid crystal display, and displays the generated expression facial image S33 and data of the processing process.
- the input device KEY comprises, for example, a keyboard and a mouse, and receives various data and instructions from the user.
- the control device CNT 61 is constituted by, for example, a central processing unit of a computer, and executes the main processing of the face motion decoding device of the present embodiment.
- the control device CNT 61 has a face image three-dimensionalization unit 32, an external force decoding unit 30 and an expression image generation unit 33.
- Each functional unit 32, 30, 33 of the control device CNT61 can be realized by a computer constituting the control device CNT61 and a program for the face motion decoding device.
- The program for the face motion decoding apparatus is recorded on a computer-readable recording medium PM61 such as a magnetic disk, is read by the computer when the computer is started, and, by controlling the operation of the computer, implements the functional units 32, 30, and 33 on the computer.
- The control device CNT61 first uses the face image three-dimensionalization unit 32 to input the expressionless face image S301 of the person used for decoding, stored in the storage device M61, and, by attaching it to the three-dimensional model of the expressionless face prepared in advance in the storage device M63, generates the expressionless face three-dimensional data S300 for decoding and stores it in the storage device M64.
- Next, using the external force decoding unit 30, the movement position of each control point in the motion area of the expressionless face three-dimensional data S300 for decoding, stored in the storage device M64, is calculated in accordance with the external force information S30 of the expression motion stored in the storage device M62, and is stored in the storage device M65 as the expression motion movement position data S31 of the control points.
- Finally, the expression image creation unit 33, in the same way as the expression creation unit 31 of the eighth embodiment of FIG. 26, deforms the expressionless face three-dimensional data S300 for decoding, stored in the storage device M64, on the basis of the expression motion movement position data S31 of the control points stored in the storage device M65 to generate three-dimensional data of the expression motion face, then projects this three-dimensional data onto a specified view, generates the expression motion face image S33 for decoding, and outputs it to the storage device M66.
- In the eighth embodiment, the expressionless face three-dimensional data S300 is input and the three-dimensional data S32 of the expression motion face is output, and in the ninth embodiment, the expressionless face image S301 is input and the expression motion face image S33 is output.
- However, a face motion decoding apparatus to which the expressionless face three-dimensional data S300 is input and from which the expression motion face image S33 is output, or to which the expressionless face image S301 is input and from which the three-dimensional data S32 of the expression motion face is output, is also conceivable.
- In the above description, the expressionless face image S301 of a certain person is deformed on the basis of the external force information S30 of a single expression motion, but, as in the eighth embodiment of FIG. 26, the expressionless face image S301 may be deformed on the basis of the external force information S30 of a plurality of different expression motions.
- FIG. 28 is a block diagram of a facial motion encoding/decoding apparatus according to a tenth embodiment of the present invention, in which an expression motion obtained from a set of three-dimensional data of a certain person A when expressionless and during expression motion is added to the three-dimensional data of the expressionless face of another person.
- The facial motion encoding/decoding apparatus includes an encoding apparatus 71, a transmission apparatus 72, and a decoding apparatus 73; the transmission apparatus 72 and the decoding apparatus 73 are connected so that they can communicate with each other through a communication path 74.
- Also provided are: a storage device M71 storing the expressionless face three-dimensional data S100 of person A and the expression motion face three-dimensional data S101 of person A as input data of the encoding apparatus 71; a storage device M72 for storing the external force data S12 of the expression motion and the motion area S13 of the expression motion generated by the encoding apparatus 71; a storage device M73 for temporarily storing the external force information S30 of the expression motion, which the transmission apparatus 72 creates by integrating the external force data S12 of the expression motion and the motion area S13; a storage device M74 for storing the expressionless face three-dimensional data S300 of person B; and a storage device M75 for storing the expression face three-dimensional data S32 of person B generated by the decoding apparatus 73 on the basis of the external force information S30 of the expression motion sent from the transmission apparatus 72 through the communication path 74 and the expressionless face three-dimensional data S300 of person B stored in the storage device M74.
- The encoding apparatus 71 can be realized by the face motion encoding apparatus of the seventh embodiment shown in FIG. 25. That is, the encoding apparatus 71 is configured of the processing device CNT41 including the difference calculation unit 10 and the external force calculation unit 11 of FIG. 25, the storage device M42, the display device DPY, and the input device KEY. Therefore, the operation of the encoding apparatus 71 is the same as that of the face motion encoding apparatus of the seventh embodiment.
- The transmission apparatus 72 transfers the external force data S12 of the expression motion and its motion area S13 to the decoding apparatus 73 and controls that transfer.
- The motion area S13 is stored as an area indicating a part of the face, described by positions relative to the positions of and distances between feature points such as both eyes and the inner corners of the eyes (tear points).
- As a storage format, a method of projecting the face onto a flat surface, a cylinder, a spherical surface, or the like and storing the area in two-dimensional coordinates can be considered.
- The motion area is also divided into small areas on the two-dimensional coordinates; the small areas are numbered, and a set of control points is created for each small area.
- In the external force data S12, the three-dimensional external force vector applied to each control point of each small area, the number of the small area, and the position of the control point in the same two-dimensional coordinates as the motion area are stored. It is not necessary to send the whole of the motion area or of each small area; sending only their outlines is sufficient. As a transmission method, the data may simply be transmitted as it is, or it may be losslessly encoded, transmitted, and decoded on the receiving side.
- The external force data S12 and its motion area S13 are integrated and sent to the decoding apparatus 73 as the external force information S30.
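- No specific lossless code is prescribed above; the following sketch simply uses JSON serialization with zlib compression as one possible realization of the "encode before transmission, decode on the receiving side" option, with an illustrative payload.

```python
import json, zlib

def pack_external_force_info(info: dict) -> bytes:
    """Serialize the integrated external force information S30 and apply
    lossless compression before sending it over the communication path."""
    return zlib.compress(json.dumps(info).encode("utf-8"))

def unpack_external_force_info(payload: bytes) -> dict:
    """Inverse operation on the receiving (decoding) side."""
    return json.loads(zlib.decompress(payload).decode("utf-8"))

payload = pack_external_force_info(
    {"areas": {"1": [[0.10, 0.20], [0.12, 0.25]]},
     "forces": [{"area": 1, "uv": [0.11, 0.22], "f": [0.0, 0.3, 0.1]}]})
assert unpack_external_force_info(payload)["forces"][0]["area"] == 1
```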
- A further function of the transmission apparatus 72 is processing at the time of creating an asymmetric facial expression motion.
- In that case, the encoding side creates a plurality of expression motions, the transmission apparatus 72 sends them to the decoding apparatus 73 simultaneously, and the decoding apparatus 73 mixes the expression motions to create the resulting expression motion. This requires accumulating the external force information S30 for the plurality of expressions.
- The decoding apparatus 73 can be realized by the face motion decoding apparatus of the eighth embodiment shown in FIG. 26. That is, the decoding apparatus 73 can be configured by the processing device CNT51 including the external force decoding unit 30 and the expression creation unit 31 of FIG. 26, the storage device M53, the display device DPY, and the input device KEY. Therefore, the operation of the decoding apparatus 73 is the same as that of the face motion decoding apparatus of the eighth embodiment.
- In the present embodiment, the face motion encoding apparatus of the seventh embodiment shown in FIG. 25 is used as the encoding apparatus 71, and the face motion decoding apparatus of the eighth embodiment shown in FIG. 26 is used as the decoding apparatus 73.
- However, an embodiment of a facial motion encoding/decoding apparatus that uses the facial motion encoding apparatus of the fourth embodiment shown in FIG. 18 as the encoding apparatus 71 and the facial motion decoding apparatus of the fifth embodiment shown in FIG. 23 as the decoding apparatus 73 is also conceivable.
- FIG. 29 is a block diagram of a facial motion encoding/decoding apparatus according to an eleventh embodiment of the present invention, in which an expression motion obtained from a set of three-dimensional data of a certain person A when expressionless and during expression motion is added to the expressionless face image of another person.
- That is, in the facial motion encoding/decoding apparatus of this embodiment, the expressionless face three-dimensional data S300, which is the input of the decoding apparatus in the facial motion encoding/decoding apparatus of the tenth embodiment shown in FIG. 28, is replaced by the expressionless face image S301, and the output expression face three-dimensional data S32 is replaced by the expression face image S33.
- The facial motion encoding/decoding apparatus includes an encoding apparatus 81, a transmission apparatus 82, and a decoding apparatus 83; the transmission apparatus 82 and the decoding apparatus 83 are connected so that they can communicate with each other through a communication path 84.
- Also provided are: a storage device M81 for storing the expressionless face three-dimensional data S100 of person A and the expression motion face three-dimensional data S101 of person A as input data of the encoding apparatus 81; a storage device M82 for storing the external force data S12 of the expression motion and the motion area S13 of the expression motion generated by the encoding apparatus 81; and a storage device for temporarily storing the external force information S30 of the expression motion, which the transmission apparatus 82 creates by integrating the external force data S12 and the motion area S13 of the expression motion.
- The encoding apparatus 81 and the transmission apparatus 82 are the same as the encoding apparatus 71 and the transmission apparatus 72 of the tenth embodiment shown in FIG. 28. Therefore, the encoding apparatus 81 can be realized by the face motion encoding apparatus of the seventh embodiment shown in FIG. 25.
- The decoding apparatus 83 can be realized by the face motion decoding apparatus of the ninth embodiment shown in FIG. 27. That is, the decoding apparatus 83 can be configured by the processing device CNT61 including the face image three-dimensionalization unit 32, the external force decoding unit 30, and the expression image creation unit 33 of FIG. 27, the storage devices M63 to M65, the display device DPY, and the input device KEY. Therefore, the operation of the decoding apparatus 83 is the same as that of the face motion decoding apparatus of the ninth embodiment.
- In the present embodiment, the facial motion encoding apparatus of the seventh embodiment shown in FIG. 25 is used as the encoding apparatus 81, and the facial motion decoding apparatus of the ninth embodiment shown in FIG. 27 is used as the decoding apparatus 83.
- However, a facial motion encoding/decoding apparatus in which the facial motion encoding apparatus of the fourth embodiment shown in FIG. 18 is used as the encoding apparatus 81 and the facial motion decoding apparatus of the sixth embodiment shown in FIG. 24 is used as the decoding apparatus 83 is also conceivable.
- The tenth embodiment is provided with the decoding apparatus 73, to which the expressionless face three-dimensional data S300 is input and from which the three-dimensional data S32 of the expression-motion face is output, and the eleventh embodiment is provided with the decoding apparatus 83, to which the expressionless face image S301 is input and from which the expression-motion face image S33 is output. However, a face motion encoding/decoding apparatus provided with a decoding apparatus to which the expressionless face three-dimensional data S300 is input and from which the expression-motion face image S33 is output, or a face motion encoding/decoding apparatus provided with a decoding apparatus to which the expressionless face image S301 is input and from which the three-dimensional data S32 of the expression-motion face is output, is also conceivable.
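- As an illustration of these input/output variations only, the hypothetical sketch below parameterizes a single decoding routine by the kind of expressionless input (three-dimensional data S300 or face image S301) and the kind of output (three-dimensional data S32 or face image S33); all helper functions are assumed placeholders and the deformation itself is omitted.

```python
def decode_variant(neutral, payload, *, input_is_image: bool, output_is_image: bool):
    """Hypothetical sketch of the four decoder variants discussed above.

    A face-image input (S301) is first lifted to three-dimensional data
    (cf. face image three-dimensionalizing unit 32); the decoded external
    forces (payload, cf. external force decoding unit 30) are then applied;
    finally the result is returned either as 3D data (S32) or rendered back
    into a face image (S33, cf. facial expression image forming unit 33).
    Every helper below is a placeholder.
    """
    face_3d = lift_to_3d(neutral) if input_is_image else neutral
    deformed_3d = apply_external_forces(face_3d, payload)
    return render_image(deformed_3d) if output_is_image else deformed_3d

# Placeholder helpers so the sketch runs; the real computations are elided.
def lift_to_3d(image):
    return {"vertices": [], "source": image}

def apply_external_forces(face_3d, payload):
    return face_3d

def render_image(face_3d):
    return face_3d
```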
- In the embodiments described above, a facial expression is given to a face by deforming a face image or face three-dimensional data using computer graphics technology.
- In the present embodiment, a flexible mask is attached to the head of a robot, and a facial expression is given to the face by deforming the shape of the mask.
- FIG. 30 shows an example of the configuration of the control device built into the head of the robot.
- In FIG. 30, reference numeral 1300 denotes a cross section of a mask attached to the head of the robot.
- The end portion 1303 of the movable piece 1302 of a three-dimensional actuator 1301 is fixed to the back surface of the mask 1300 at each control point in each motion region, and in the initial state the end portion 1303 is set at the position corresponding to the expressionless state.
- Each three-dimensional actuator 1301 is connected to the control device 1304 by a signal line 1305, and the XYZ coordinate position of the end portion 1303 of the movable piece 1302 is changed according to a movement position signal given from the control device 1304 through the signal line 1305. Since the end portion 1303 of the movable piece 1302 is fixed to a control point on the back surface of the mask, when the position of the end portion 1303 moves, each control point of the mask 1300 moves together with its surrounding portion, and the mask is deformed to express a desired expression. In this case, since the area around each control point is dragged along with the control point, smoothing of the boundaries between adjacent small regions and smoothing of the boundary between the motion region and the non-motion region are not particularly necessary.
- The XYZ coordinate position after movement of each control point is stored in the storage unit 1306.
- The XYZ coordinate position of each control point stored in the storage unit 1306 is calculated in the same manner as in the first, fifth, sixth, eighth and ninth embodiments described above. That is, the storage unit 1306 corresponds to the storage device M5 of the first embodiment shown in FIG. 2, the storage device M25 of the fifth embodiment shown in FIG. 23, the storage device M35 of the sixth embodiment shown in FIG. 24, the storage device M53 of the eighth embodiment shown in FIG. 26, and the storage device M65 of the ninth embodiment shown in FIG. 27.
- The control device 1304 can be realized by a computer and a control program.
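- For illustration only, the following sketch shows how such a control program might issue movement position signals: it looks up the stored post-movement XYZ coordinate of each control point (storage unit 1306) and commands the corresponding three-dimensional actuator 1301 over its signal line 1305. The Actuator class, the drive_expression function and the example coordinates are assumptions, not part of the specification.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

Coordinate = Tuple[float, float, float]  # target XYZ position of an actuator end portion

@dataclass
class Actuator:
    """Stand-in for a three-dimensional actuator 1301 reachable over a signal line 1305."""
    actuator_id: int

    def move_to(self, xyz: Coordinate) -> None:
        # In the real device this would drive the movable piece 1302 so that its
        # end portion 1303, fixed to a control point on the back of the mask,
        # reaches the commanded XYZ position; here the command is only printed.
        print(f"actuator {self.actuator_id} -> {xyz}")

def drive_expression(actuators: Dict[int, Actuator],
                     stored_positions: Dict[int, Coordinate]) -> None:
    """Sketch of control device 1304: look up the post-movement XYZ coordinate of
    every control point (storage unit 1306) and issue a movement position signal
    to the corresponding actuator."""
    for point_id, xyz in stored_positions.items():
        actuators[point_id].move_to(xyz)

# Hypothetical usage: two control points commanded to new positions.
if __name__ == "__main__":
    acts = {0: Actuator(0), 1: Actuator(1)}
    targets = {0: (1.0, 2.0, 0.5), 1: (0.0, -1.5, 0.8)}
    drive_expression(acts, targets)
```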
- The present invention is not limited to the above embodiments, and various additions and modifications can be made.
- The present invention can be applied to any object whose form changes with time, such as a human face or a robot face.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
Abstract
Description
Claims
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/570,560 US7668401B2 (en) | 2003-09-03 | 2004-08-25 | Form changing device, object action encoding device, and object action decoding device |
EP04772142A EP1669931A1 (en) | 2003-09-03 | 2004-08-25 | Form changing device, object action encoding device, and object action decoding device |
JP2005513622A JPWO2005024728A1 (ja) | 2003-09-03 | 2004-08-25 | 形態変形装置、物体動作符号化装置および物体動作復号化装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003311072 | 2003-09-03 | ||
JP2003-311072 | 2003-09-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005024728A1 true WO2005024728A1 (ja) | 2005-03-17 |
Family
ID=34269688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/012181 WO2005024728A1 (ja) | 2003-09-03 | 2004-08-25 | 形態変形装置、物体動作符号化装置および物体動作復号化装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US7668401B2 (ja) |
EP (1) | EP1669931A1 (ja) |
JP (1) | JPWO2005024728A1 (ja) |
KR (1) | KR100891885B1 (ja) |
CN (1) | CN1846234A (ja) |
WO (1) | WO2005024728A1 (ja) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009545083A (ja) * | 2006-07-28 | 2009-12-17 | ソニー株式会社 | モーションキャプチャにおけるfacs(顔の表情符号化システム)クリーニング |
JP2009545084A (ja) * | 2006-07-28 | 2009-12-17 | ソニー株式会社 | モーションキャプチャにおけるfacs(顔の表情符号化システム)解決法 |
CN101296431B (zh) * | 2007-04-28 | 2012-10-03 | 北京三星通信技术研究有限公司 | 外形控制装置和方法及应用该装置和方法的便携式终端 |
JP2014063276A (ja) * | 2012-09-20 | 2014-04-10 | Casio Comput Co Ltd | 情報処理装置、情報処理方法、及びプログラム |
JP2016091129A (ja) * | 2014-10-30 | 2016-05-23 | 株式会社ソニー・コンピュータエンタテインメント | 画像処理装置、画像処理方法、画像処理プログラム |
JP2017037553A (ja) * | 2015-08-12 | 2017-02-16 | 国立研究開発法人情報通信研究機構 | 運動解析装置および運動解析方法 |
WO2017217191A1 (ja) * | 2016-06-14 | 2017-12-21 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置及び三次元データ復号装置 |
WO2018016168A1 (ja) * | 2016-07-19 | 2018-01-25 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 三次元データ作成方法、三次元データ送信方法、三次元データ作成装置及び三次元データ送信装置 |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1653392A1 (en) * | 2004-12-10 | 2006-05-03 | Agilent Technologies Inc | Peak pattern evaluation using spring model |
JP2008017042A (ja) * | 2006-07-04 | 2008-01-24 | Sony Corp | 情報処理装置および方法、並びにプログラム |
US7821510B2 (en) * | 2007-04-13 | 2010-10-26 | International Business Machines Corporation | Dynamic conference table display system |
CN101815562A (zh) * | 2007-09-19 | 2010-08-25 | 戴维·阿龙·贝内特 | 能够变形的机器人表面 |
US20090189916A1 (en) * | 2008-01-25 | 2009-07-30 | Chou-Liang Tsai | Image warping method |
TWI416360B (zh) * | 2008-09-19 | 2013-11-21 | Hon Hai Prec Ind Co Ltd | 特徵元素擬合方法及其電腦系統 |
CN101876536B (zh) * | 2009-04-29 | 2012-09-19 | 鸿富锦精密工业(深圳)有限公司 | 三维色阶比对动态分析方法 |
TWI426406B (zh) * | 2009-05-15 | 2014-02-11 | Hon Hai Prec Ind Co Ltd | 三維色階比對動態分析方法 |
WO2011027534A1 (ja) * | 2009-09-04 | 2011-03-10 | パナソニック株式会社 | 画像生成システム、画像生成方法、コンピュータプログラムおよびコンピュータプログラムを記録した記録媒体 |
WO2012009719A2 (en) * | 2010-07-16 | 2012-01-19 | University Of South Florida | Shape-shifting surfaces |
US8402711B2 (en) | 2010-07-16 | 2013-03-26 | University Of South Florida | Multistable shape-shifting surfaces |
JP5821610B2 (ja) * | 2011-12-20 | 2015-11-24 | 富士通株式会社 | 情報処理装置、情報処理方法及びプログラム |
US9442638B2 (en) * | 2013-08-22 | 2016-09-13 | Sap Se | Display of data on a device |
JP6382050B2 (ja) * | 2014-09-29 | 2018-08-29 | キヤノンメディカルシステムズ株式会社 | 医用画像診断装置、画像処理装置、画像処理方法及び画像処理プログラム |
JP5927270B2 (ja) * | 2014-11-06 | 2016-06-01 | ファナック株式会社 | ロボットシミュレーション装置 |
US9747573B2 (en) | 2015-03-23 | 2017-08-29 | Avatar Merger Sub II, LLC | Emotion recognition for workforce analytics |
US9693695B1 (en) * | 2016-09-23 | 2017-07-04 | International Business Machines Corporation | Detecting oral temperature using thermal camera |
DE102018215596B3 (de) | 2018-09-13 | 2019-10-17 | Audi Ag | Anzeigevorrichtung für ein Kraftfahrzeug, Verfahren zum Betreiben einer Anzeigevorrichtung sowie Kraftfahrzeug |
RU2756780C1 (ru) * | 2020-04-21 | 2021-10-05 | ООО "Ай Ти Ви групп" | Система и способ формирования отчетов на основании анализа местоположения и взаимодействия сотрудников и посетителей |
WO2022197685A1 (en) * | 2021-03-15 | 2022-09-22 | Massachusetts Institute Of Technology | Discrete macroscopic metamaterial systems |
CN114913278A (zh) * | 2021-06-30 | 2022-08-16 | 完美世界(北京)软件科技发展有限公司 | 表情模型的生成方法及装置、存储介质、计算机设备 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04270470A (ja) * | 1990-12-25 | 1992-09-25 | Kongouzen Souhonzan Shiyourinji | アニメーション作成方法 |
JPH08297751A (ja) * | 1995-04-27 | 1996-11-12 | Hitachi Ltd | 三次元モデルの作成方法および装置 |
JP2002216161A (ja) * | 2001-01-22 | 2002-08-02 | Minolta Co Ltd | 画像生成装置 |
JP2002304638A (ja) * | 2001-04-03 | 2002-10-18 | Atr Ningen Joho Tsushin Kenkyusho:Kk | 表情アニメーション生成装置および表情アニメーション生成方法 |
JP2003016475A (ja) * | 2001-07-04 | 2003-01-17 | Oki Electric Ind Co Ltd | 画像コミュニケーション機能付き情報端末装置および画像配信システム |
JP2003196678A (ja) * | 2001-12-21 | 2003-07-11 | Minolta Co Ltd | 3次元モデルシステムおよびコンピュータプログラム |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0374777A (ja) | 1989-08-17 | 1991-03-29 | Graphic Commun Technol:Kk | 顔画像合成装置 |
US7006881B1 (en) * | 1991-12-23 | 2006-02-28 | Steven Hoffberg | Media recording device with remote graphic user interface |
JP2573126B2 (ja) | 1992-06-22 | 1997-01-22 | 正重 古川 | 表情のコード化及び情緒の判別装置 |
JP2648682B2 (ja) | 1992-07-17 | 1997-09-03 | 正重 古川 | 表情要素コードの再生表示及び情緒・表情の加工・発生装置 |
JP2802725B2 (ja) | 1994-09-21 | 1998-09-24 | 株式会社エイ・ティ・アール通信システム研究所 | 表情再現装置および表情再現に用いられるマトリックスの算出方法 |
JPH08305878A (ja) | 1995-05-09 | 1996-11-22 | Casio Comput Co Ltd | 顔画像作成装置 |
US7239908B1 (en) * | 1998-09-14 | 2007-07-03 | The Board Of Trustees Of The Leland Stanford Junior University | Assessing the condition of a joint and devising treatment |
KR100317137B1 (ko) | 1999-01-19 | 2001-12-22 | 윤덕용 | 얼굴 표정 애니메이션 방법 |
US6859565B2 (en) * | 2001-04-11 | 2005-02-22 | Hewlett-Packard Development Company, L.P. | Method and apparatus for the removal of flash artifacts |
JP2002329214A (ja) | 2001-04-27 | 2002-11-15 | Mitsubishi Electric Corp | 表情合成方法及び表情合成装置 |
US7180074B1 (en) * | 2001-06-27 | 2007-02-20 | Crosetto Dario B | Method and apparatus for whole-body, three-dimensional, dynamic PET/CT examination |
JP4270470B1 (ja) | 2007-12-07 | 2009-06-03 | 電気化学工業株式会社 | 耐火被覆材、該耐火被覆材の製造方法、及び前記耐火被覆材を用いた耐火被覆方法 |
- 2004
- 2004-08-25 EP EP04772142A patent/EP1669931A1/en not_active Withdrawn
- 2004-08-25 WO PCT/JP2004/012181 patent/WO2005024728A1/ja active Application Filing
- 2004-08-25 JP JP2005513622A patent/JPWO2005024728A1/ja not_active Withdrawn
- 2004-08-25 CN CNA2004800253175A patent/CN1846234A/zh active Pending
- 2004-08-25 KR KR1020067004562A patent/KR100891885B1/ko not_active IP Right Cessation
- 2004-08-25 US US10/570,560 patent/US7668401B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04270470A (ja) * | 1990-12-25 | 1992-09-25 | Kongouzen Souhonzan Shiyourinji | アニメーション作成方法 |
JPH08297751A (ja) * | 1995-04-27 | 1996-11-12 | Hitachi Ltd | 三次元モデルの作成方法および装置 |
JP2002216161A (ja) * | 2001-01-22 | 2002-08-02 | Minolta Co Ltd | 画像生成装置 |
JP2002304638A (ja) * | 2001-04-03 | 2002-10-18 | Atr Ningen Joho Tsushin Kenkyusho:Kk | 表情アニメーション生成装置および表情アニメーション生成方法 |
JP2003016475A (ja) * | 2001-07-04 | 2003-01-17 | Oki Electric Ind Co Ltd | 画像コミュニケーション機能付き情報端末装置および画像配信システム |
JP2003196678A (ja) * | 2001-12-21 | 2003-07-11 | Minolta Co Ltd | 3次元モデルシステムおよびコンピュータプログラム |
Non-Patent Citations (1)
Title |
---|
CHOI C.S. ET AL: "Chiteki Gazo Fugoka ni Okeru Tobu no Ugoki to Ganmen no Ugoki Joho no Koseido Suitei", THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS GIJUTSU KENKYU HOKOKU, (PRU90-68 TO 80 PATTERN NINSHIKI RIKAI) , THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, vol. 90, no. 253, 19 October 1990 (1990-10-19), pages 1 - 8, XP002986215 * |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009545083A (ja) * | 2006-07-28 | 2009-12-17 | ソニー株式会社 | モーションキャプチャにおけるfacs(顔の表情符号化システム)クリーニング |
JP2009545084A (ja) * | 2006-07-28 | 2009-12-17 | ソニー株式会社 | モーションキャプチャにおけるfacs(顔の表情符号化システム)解決法 |
CN101296431B (zh) * | 2007-04-28 | 2012-10-03 | 北京三星通信技术研究有限公司 | 外形控制装置和方法及应用该装置和方法的便携式终端 |
JP2014063276A (ja) * | 2012-09-20 | 2014-04-10 | Casio Comput Co Ltd | 情報処理装置、情報処理方法、及びプログラム |
JP2016091129A (ja) * | 2014-10-30 | 2016-05-23 | 株式会社ソニー・コンピュータエンタテインメント | 画像処理装置、画像処理方法、画像処理プログラム |
US11138783B2 (en) | 2014-10-30 | 2021-10-05 | Sony Interactive Entertainment Inc. | Image processing apparatus, image processing method, and image prodessing program for aligning a polygon model with a texture model |
JP2017037553A (ja) * | 2015-08-12 | 2017-02-16 | 国立研究開発法人情報通信研究機構 | 運動解析装置および運動解析方法 |
CN109313820A (zh) * | 2016-06-14 | 2019-02-05 | 松下电器(美国)知识产权公司 | 三维数据编码方法、解码方法、编码装置、解码装置 |
JPWO2017217191A1 (ja) * | 2016-06-14 | 2019-04-04 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置及び三次元データ復号装置 |
WO2017217191A1 (ja) * | 2016-06-14 | 2017-12-21 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置及び三次元データ復号装置 |
CN109313820B (zh) * | 2016-06-14 | 2023-07-04 | 松下电器(美国)知识产权公司 | 三维数据编码方法、解码方法、编码装置、解码装置 |
WO2018016168A1 (ja) * | 2016-07-19 | 2018-01-25 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 三次元データ作成方法、三次元データ送信方法、三次元データ作成装置及び三次元データ送信装置 |
CN109478338A (zh) * | 2016-07-19 | 2019-03-15 | 松下电器(美国)知识产权公司 | 三维数据制作方法、发送方法、制作装置、发送装置 |
JPWO2018016168A1 (ja) * | 2016-07-19 | 2019-05-09 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | 三次元データ作成方法、三次元データ送信方法、三次元データ作成装置及び三次元データ送信装置 |
US10810786B2 (en) | 2016-07-19 | 2020-10-20 | Panasonic Intellectual Property Corporation Of America | Three-dimensional data creation method, three-dimensional data transmission method, three-dimensional data creation device, and three-dimensional data transmission device |
JP2022171985A (ja) * | 2016-07-19 | 2022-11-11 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 三次元データ作成方法及び三次元データ作成装置 |
US11710271B2 (en) | 2016-07-19 | 2023-07-25 | Panasonic Intellectual Property Corporation Of America | Three-dimensional data creation method, three-dimensional data transmission method, three-dimensional data creation device, and three-dimensional data transmission device |
JP7538192B2 (ja) | 2016-07-19 | 2024-08-21 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 三次元データ作成方法及び三次元データ作成装置 |
Also Published As
Publication number | Publication date |
---|---|
CN1846234A (zh) | 2006-10-11 |
US20060285758A1 (en) | 2006-12-21 |
JPWO2005024728A1 (ja) | 2007-11-08 |
US7668401B2 (en) | 2010-02-23 |
EP1669931A1 (en) | 2006-06-14 |
KR100891885B1 (ko) | 2009-04-03 |
KR20060054453A (ko) | 2006-05-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005024728A1 (ja) | 形態変形装置、物体動作符号化装置および物体動作復号化装置 | |
US10540817B2 (en) | System and method for creating a full head 3D morphable model | |
US7020347B2 (en) | System and method for image-based surface detail transfer | |
US20030184544A1 (en) | Modeling human beings by symbol manipulation | |
US8180613B1 (en) | Wrinkles on fabric software | |
EP3980975B1 (en) | Method of inferring microdetail on skin animation | |
Orvalho et al. | Transferring the rig and animations from a character to different face models | |
KR100942026B1 (ko) | 다중 감각 인터페이스에 기반한 가상의 3차원 얼굴메이크업 시스템 및 방법 | |
JPH08297751A (ja) | 三次元モデルの作成方法および装置 | |
JP2023505615A (ja) | 細かいしわを有する顔メッシュ変形 | |
Thalmann et al. | The Making of the Xian terra-cotta Soldiers | |
JP2723070B2 (ja) | 人物像表示によるユーザインタフェース装置 | |
KR20020079268A (ko) | 3차원 인체모델에 가상현실공간상의 3차원 콘텐츠를실시간으로 합성하는 시스템 및 방법. | |
CN117853320B (zh) | 一种基于多媒体操控的图像映射方法、系统及存储介质 | |
US20230196702A1 (en) | Object Deformation with Bindings and Deformers Interpolated from Key Poses | |
WO2022176132A1 (ja) | 推論モデル構築方法、推論モデル構築装置、プログラム、記録媒体、構成装置及び構成方法 | |
KR20060067242A (ko) | 해부 데이터를 이용한 얼굴 애니메이션 생성 시스템 및 그방법 | |
Nedel | Simulating virtual humans | |
Liu et al. | Geometry-optimized virtual human head and its applications | |
KR20240119644A (ko) | 디지털 휴먼의 표정 변화에 따라 피부 텍스처를 변화시키기 위한 방법 | |
Karunaratne et al. | A new efficient expression generation and automatic cloning method for multimedia actors | |
KR20240119645A (ko) | 디지털 휴먼의 표정 변화에 따라 피부 텍스처를 변화시키기 위한 3차원 그래픽 인터페이스 장치 | |
KR20240022179A (ko) | 3d 가상현실 공간에서 실제인물과 가상인물을 실시간 합성하는 시스템 및 그 방법 | |
Zhang | Modeling of human faces with parameterized local shape morphing | |
KR20030070579A (ko) | 얼굴 애니메이션을 위한 디에스엠 복원 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200480025317.5 Country of ref document: CN |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2005513622 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020067004562 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2004772142 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1020067004562 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2006285758 Country of ref document: US Ref document number: 10570560 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 2004772142 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 10570560 Country of ref document: US |